Judging the strength of the evidence
Scientists can have confidence in their conclusions if the evidence gathered in an experiment is strong. Before confirming a link between variables, they evaluate their experimental methods and measuring techniques, and review the data collected.
Here are some questions that you may need to consider when you evaluate your experiment.
Method
Evaluating the method and experimental technique to improve accuracy.
- Were there any random errors in measuring, eg reaction time, parallax error? Did you try to reduce them?
- How precisely did you measure, eg did you use instruments with a high enough resolution and read to the smallest division?
- Were there any systematic errors, eg zero errors in the instruments?
- Was your method of recording results reliable, or could mistakes have been made when reading or writing down values?
- Are there any sources of inaccuracy in your method, eg did you keep the control variables completely constant?
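A systematic error, such as a zero error, shifts every reading by the same amount, so it can be corrected once it is spotted. A minimal sketch in Python, using assumed values for the zero error and the readings:

```python
# Hypothetical example: a balance reads 0.3 g with nothing on it (a zero error),
# so every reading is shifted by the same systematic amount.
zero_error = 0.3  # g, read from the instrument with no load (assumed value)
raw_readings = [50.3, 75.8, 100.1]  # g, as displayed (assumed values)

# Correct for the systematic error by subtracting the offset from each reading.
corrected = [round(r - zero_error, 1) for r in raw_readings]
print(corrected)  # [50.0, 75.5, 99.8]
```

Note that this correction only works for systematic errors: random errors vary from reading to reading and must be reduced by repeating and averaging instead.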
Suggest improvements to method/measuring technique to improve confidence in the data and the conclusion.
- What did you do well and what could you improve on?
- What would you do differently next time?
- Could you suggest improvements to the method that would increase confidence in the outcome, eg a larger sample size, more repeats, a slow-motion camera to show bounce heights, or computer sensors to remove reaction-time errors?
Evaluate the strength of evidence/data
Repeatability. How repeatable are your results? Did your results vary by large amounts? Were there any outlying points on the graph, or anomalous readings in the table? How could you check these? Were enough repeat readings taken and a mean calculated? Was the sample size large enough?
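Checking repeat readings for anomalies before calculating a mean can be sketched in Python. The readings and the 1.0 cm tolerance used to flag an anomaly are assumptions for illustration, not a standard rule:

```python
# Hypothetical repeat readings (cm) for one value of the independent variable.
readings = [12.1, 11.9, 12.3, 18.0, 12.0]  # 18.0 looks anomalous

mean_all = sum(readings) / len(readings)

# Flag readings far from the middle value before calculating the mean.
# A fixed 1.0 cm tolerance around the median is an assumed rule of thumb.
median = sorted(readings)[len(readings) // 2]
kept = [r for r in readings if abs(r - median) <= 1.0]
mean_kept = sum(kept) / len(kept)

print(f"mean of all readings:     {mean_all:.2f} cm")   # 13.26 cm
print(f"mean excluding anomalies: {mean_kept:.2f} cm")  # 12.08 cm
```

Including the anomalous 18.0 cm reading drags the mean well away from the other four readings, which is why anomalies should be checked, and either repeated or excluded, before a mean is quoted.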
Reproducibility. Did other groups following a similar method get the same graphs or patterns in their results? For example, have you compared your graphs and data with other groups who followed the same method, to confirm that the same conclusion has been reached? Was the range of the independent variable sufficient to show a pattern?