By now, anyone reading this blog has probably figured out that the only thing I like more than teaching applied statistics is understanding what helps students learn the material better. My goal is not for them to retain the material just long enough for the exam, but for them to be able to do what the class is intended to teach … apply statistics to find the answers to important questions.
My focus on the cognitive science underlying student success is no surprise to people on my campus. As such, I wasn’t the least bit surprised when a science faculty member contacted me to find out the answer to this question: Which is better, direct teaching, or having students try to figure something out on their own before being taught? The specific topic at hand was helping students understand the application of mathematics in this particular science discipline. The question came from a teaching listserv.
I came across a research article addressing this very topic; the abstract is available here: http://onlinelibrary.wiley.com/doi/10.1111/cogs.12107/abstract
Kapur, M. (2014). Productive failure in learning math. Cognitive Science, 38, 1008–1022.
What Kapur found is that although students learn a great deal from direct teaching (that is, providing students with background information, showing them how to calculate a math problem, then having them practice, preferably first in class and then out of class as homework), direct teaching may not always be the most effective way to help students learn how to solve problems in mathematics.
Instead of direct instruction, Kapur found that providing students with the problem and having them try to figure out how to solve it before being instructed yields better long-term learning, and also increases students’ ability to apply that knowledge to other problems. Prior to instruction, almost every student fails. Yet there seems to be benefit in the attempt despite the failure.
I have used this very technique for years, as has the science colleague I mentioned earlier.
I actually begin when I teach the (arithmetic) mean. By the time students are in college, they have calculated many means; it is a concept taught to 8-year-olds. What students haven’t been taught is the formula for the mean, at least not that my students seem to remember. Equally true, they haven’t thought about how the mean works. They just plug numbers into their calculators, and out comes a number.
What I have them do is write out in words the steps involved. Then I let them ask me questions about symbols. If they can’t figure out that they need the symbols for the sum of the observations and the total number of observations, I will eventually give those to them. Either way, they have to create the formula for the mean.
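For reference, the definitional formula the students are working toward is simply the sum of the observations divided by the number of observations, written here in the notation an introductory course typically uses:

```latex
\bar{X} = \frac{\sum X}{N}
```

Once students have written the steps in words, the symbols are just shorthand for those same steps.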
After I teach students the conceptual meaning of the Sum of Squares, they determine the formula and the process for finding it. Then I define variance for the students, and again they generate the formula. Obviously, they are asked to generate the definitional formula. In each case (which, by the way, covers several days), only about 3 or 4 students in a class of 40 are actually successful. However, most students, who initially won’t even try and respond to my requests with an “I don’t know,” eventually start giving it a shot, and most of the time they can get a piece of it correct. More importantly, students start thinking about statistics as a process, and about the definitional formulas as a set of directions and an explanation. Statistics begins to make sense to students, but it starts with failure.
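For readers who want the formulas in front of them, the standard definitional forms the students are asked to generate are shown below (I’m showing the sample variance with N − 1 in the denominator; the population version divides by N instead):

```latex
SS = \sum \left( X - \bar{X} \right)^2, \qquad s^2 = \frac{SS}{N - 1}
```

Read as directions, the Sum of Squares says: subtract the mean from each observation, square each deviation, and add the squared deviations up; the variance then averages those squared deviations.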
The same process works with the z-test and all three t-tests (one-sample, independent, and related).
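As a sketch of the step-by-step reasoning the students are doing, here is a minimal Python version of the definitional calculations, from the mean through a one-sample t statistic. The function names and example data are my own illustration, not anything from the course:

```python
import math

def mean(xs):
    # The mean: sum the observations, then divide by how many there are.
    return sum(xs) / len(xs)

def sum_of_squares(xs):
    # Sum of Squares: subtract the mean from each observation, square
    # each deviation, then add the squared deviations up.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

def variance(xs):
    # Sample variance: divide the Sum of Squares by N - 1.
    return sum_of_squares(xs) / (len(xs) - 1)

def one_sample_t(xs, mu):
    # One-sample t: the distance between the sample mean and the
    # hypothesized population mean, scaled by the estimated standard error.
    n = len(xs)
    standard_error = math.sqrt(variance(xs) / n)
    return (mean(xs) - mu) / standard_error

scores = [4, 8, 6, 5, 7]
print(mean(scores))            # 6.0
print(sum_of_squares(scores))  # 10.0
print(variance(scores))        # 2.5
print(one_sample_t(scores, 5))
```

Each function is a direct translation of the “steps in words” into code, which is exactly the exercise the students do with symbols instead.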
There is no doubt that direct teaching is faster, but forcing students to think about the underlying concepts of applied statistics, even if it results in failure, seems to yield deeper and longer-lasting understanding. And after all, isn’t that what we are after?
I hope everyone has a great break and wonderful 2015!