
Archive for October, 2011

Week 5- Comments

http://statsisboring.wordpress.com/2011/10/14/is-it-dishonest-to-remove-outliers-from-our-data/#comment-26

http://alhoward.wordpress.com/2011/10/18/is-it-dishonest-to-remove-outliers-andor-transform-data/#comment-10

http://psud87.wordpress.com/2011/10/18/is-it-dishonest-to-remove-ouliner-andor-transform-data/#comment-12

http://jessica0703.wordpress.com/2011/10/12/why-are-ethics-such-an-essential-component-for-research/#comment-19


Week 5- Reliability in Science


Reliability matters in psychological experiments as a marker of quality of measurement, which is defined as the extent to which you can measure a participant’s true score. A true score is the score a participant would obtain if the measurement contained no error, and it is something that every scientist must be aware of, especially in the social sciences and in biological experiments. Reliability is also about how repeatable the experiment is… but is this really important when it comes to scientific tests, and why?

Reliability is important when it comes to scientific tests. It is essential for experiments to have a measure of quality so that other psychologists can carry out the tests again and investigate whether their results correspond to those that have been published. To maintain consistency, a researcher should use as many repeat sample groups as possible, reducing the chance of one abnormal sample group skewing the results. For example, if you run the same procedure on several samples and one generates results that are completely different from the others, you know there may be faults within the experiment. This is also where pilot studies can be useful: a smaller-scale experiment allows problems to be spotted and corrected before they can affect the main experiment.

In relation to the real world, reliability is essential when products compete for sale. If a company claims something about its product that makes it stand out from the competition, and that claim is then proven wrong, the company loses the reputation for reliability it has worked so hard to gain. Dissatisfied customers compound the damage, as they will not give positive feedback to others, and buyers will move to companies whose products have never failed them.

In conclusion, I believe that reliability is essential in scientific experiments and is something that all scientists should take into consideration in their work. Through research I have seen that reliability also matters in the consumer world: if a company or product lacks reliability, this can discredit the product and make it less sellable to the public.

Comments for Week 3

https://raw2392.wordpress.com/2011/10/03/do-you-need-statistics-to-understand-your-data/#comments

http://leylaosman.wordpress.com/2011/10/07/do-you-need-statistics-to-understand-your-data/

http://ryan1392.wordpress.com/2011/10/07/do-we-need-statistics-to-understand-our-results/

http://ruthtanti.wordpress.com/2011/10/05/wk-2-do-you-need-statistics-to-understand-your-data/

http://statsisboring.wordpress.com/2011/10/07/do-you-need-statistics-to-understand-your-data/

Week 3- Is it dishonest to remove outliers and/or transform data?

“Is it dishonest to remove outliers and/or transform data?”

An outlier is a data point that is extremely different from the others produced in the sample. By extremely, it is meant that the data point lies several standard deviations away from the sample mean and follows a completely different pattern from the other data points. Outliers can often be spotted as a high degree of inconsistency across participants. Outliers distort the results produced and complicate the write-up of the test. For example, if ten people each read 20 pages of a textbook per hour and one reads 200 pages per hour, the average pages per hour will be thrown off by the one person with far above-average skill. An outlier can even make the difference between the hypothesis being supported or not. But is it dishonest to completely remove them from the data?
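The reading-speed example above can be sketched in a few lines of Python. The numbers are illustrative, not from a real study, and the comparison with the median simply shows why a single extreme value pulls the mean so far:

```python
# Ten readers at 20 pages/hour plus one outlier at 200 pages/hour
# (hypothetical figures matching the example in the text).
speeds = [20] * 10 + [200]

mean_speed = sum(speeds) / len(speeds)

# The median is far less sensitive to one extreme value.
sorted_speeds = sorted(speeds)
median_speed = sorted_speeds[len(sorted_speeds) // 2]

print(mean_speed)    # ~36.4 pages/hour, pulled up by the one fast reader
print(median_speed)  # 20 pages/hour, unaffected by the outlier
```

The gap between the two summaries (roughly 36 versus 20 pages per hour) is exactly the distortion the paragraph describes.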

Many scientists do not believe outliers should be removed from the data. Removal affects the reliability of the experiment: should other scientists wish to carry out their own versions, their results would simply not match those of the initial test. It also makes it easier to tamper with data, as scientists who see that the data will not support the hypothesis could easily remove certain data points so that it then is supported.

However, other scientists do believe that outliers should be allowed to be removed, and that this can even be done before the data is analysed: a participant could be excluded from the experimental condition if it can be seen that they did not follow the instructions carefully or failed to engage in the experiment at all. Not removing such participants affects the validity of the results. Cleaning the data before the final write-up helps ensure the data represents the construct properly.

In conclusion, I believe it is acceptable to remove outliers before the data is analysed, based on the observation of an unengaged and despondent participant. However, I do not believe removal should be allowed once analysis and write-up have begun, because it affects the reliability of the experiment too much: when other scientists wish to carry out the test again, the same results will not be found.

Do you need statistics to understand your data?

Statistics is important when it comes to understanding data… but is it essential? Is it all that is needed in order to fully understand the data being given, or are there other things that outweigh the importance of the statistical values? When data is collected, it is automatically assumed that an output will be generated in order to see whether the p-value supports the hypothesis or the null hypothesis. But should the background reading into the data be more important? Should understanding why the data has been generated matter more than the actual p-value?

Statistics makes it simple to understand data at face value and to see whether or not an experiment has been successful. Statistics tells a story in simple terms through tables and graphs. Visualised data makes it easier to determine what the numbers actually mean: by looking at a graph, patterns such as correlation can be easily spotted. This can suggest whether one variable is related to another or whether the two are simply unconnected, although correlation on its own cannot prove that one variable caused the other.

Even though statistics makes it easy to summarise data and to visually compare elements, I do not see it as absolutely essential to completely understanding data. I believe other elements are also important, such as reading into why the data may have been produced, whether there have been extraneous variables affecting the output that were not previously considered, and whether changing the way the experiment was carried out could change the data so that it supports the hypothesis previously set.

To conclude, I believe statistics to be partially important when it comes to understanding results, but I do not see it as the sole component of being data literate.
