Today was experiment day. The students carried out their design from last class to test chocolate melting times. As one student said after leaving a chocolate chip on their tongue to melt, "This sounded a lot better than it actually is."
No issues arose while collecting the data. I discussed how this experiment exhibited the properties of a good experimental design: control/compare, randomize, and replicate.
We then discussed analysis. Most students focused on treating each column of data (milk, semi-sweet, and dark) separately and analyzing the results. After no one could offer an alternative way of analyzing the data, I suggested that the differences in melting time could be compared. Since semi-sweet was the default standard, the analysis could look at milk minus semi-sweet melting times and dark minus semi-sweet melting times for each student.
As for the analysis itself, I emphasized that a comparison of the distributions was needed. Focusing solely on the differences in means is not enough, although that is typically how students view, or are taught, how to compare results. As I pointed out, the spread and outliers also need to be considered to make a reasoned assessment.
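For anyone who wants to see what this looks like in practice, here is a minimal sketch of the difference-based analysis described above. The melting times are hypothetical placeholder values, not the class's actual data, and the summary statistics (mean, median, range) are just one simple way to look at centre and spread.

```python
# Hypothetical melting times in seconds, one value per student.
milk       = [55, 62, 48, 70, 58]   # milk chocolate
semi_sweet = [60, 65, 50, 75, 63]   # semi-sweet (the baseline)
dark       = [72, 80, 66, 90, 77]   # dark chocolate

# Paired differences against the semi-sweet baseline, one per student.
milk_diff = [m - s for m, s in zip(milk, semi_sweet)]
dark_diff = [d - s for d, s in zip(dark, semi_sweet)]

def summarize(label, diffs):
    """Report centre and spread, not just the mean, so the whole
    distribution of differences can be compared."""
    diffs = sorted(diffs)
    n = len(diffs)
    mean = sum(diffs) / n
    median = diffs[n // 2] if n % 2 else (diffs[n // 2 - 1] + diffs[n // 2]) / 2
    spread = diffs[-1] - diffs[0]   # range as a simple measure of spread
    print(f"{label}: mean = {mean:.1f}, median = {median}, range = {spread}, values = {diffs}")

summarize("milk - semi-sweet", milk_diff)
summarize("dark - semi-sweet", dark_diff)
```

Looking at the full set of differences this way makes it easier to ask whether an apparent difference in means holds up once the spread and any outliers are taken into account.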
Each group is to analyze and report their results and conclusions. I reminded students to include the purpose of the analysis and that they need to draw a conclusion. If there is no purpose for the analysis or if you aren't going to draw a conclusion from your analysis, why conduct the analysis?
I will be at a math conference and will miss the next class. The groups are to prepare draft posters of their results and compare them against the work of other groups.
Visit the class summary for a student's perspective and to view the lesson slides.