# Category Archives: Core Concepts

## Critical Concept: Making Sense of Variability

Last week I introduced the five most critical topics I felt a professor of applied statistics could impart to her students (https://statisticalsage.wordpress.com/2012/06/10/the-most-critical-concepts-in-applied-statistics-treating-students-like-family/):

1. Making Sense of Variability
2. Capturing Variability
3. Normal Distribution
4. Sampling Distribution of the Means and Standard Error
5. Understanding Hypothesis Testing

This week, I would like to go into detail with the first of those five, “Making Sense of Variability.”

As this is the first critical topic, I start laying its foundation on the very first day of class by conducting activities that help students think about variability. It is typically the second day of class when I reach the foundational concept of variability itself, but before I enter into this, I provide a framework for students about knowledge: How do we know what we know? This is basic epistemology.

What is statistics? It is a way for us to gather knowledge. We are making sense out of variability; that is all that statistics is. Yet, for a student to understand that, he or she must first understand how we gather knowledge. What are the “Ways of Knowing,” that is, foundational epistemology? (http://www.acperesearch.net/waysofknowing.pdf)

• Empiricism – we learn through our observations/perceptions (e.g., what are the data revealing?)
• Rationalism – we learn through the application of logic (e.g., if no one is in the woods when a tree falls, does it still make a sound?)
• Authority – we learn much from what others tell us (e.g., what is your name? How do you know? Someone must have told you.)
• Intuition – gut instinct often reveals knowledge of a different type (e.g., my dogs love me; science doesn’t tell me this, I just know).

There are strengths and weaknesses to each way of knowing (follow the link above for additional details). By discussing this with students, we can start to understand why science has become a dance between empiricism and rationalism, as by combining the two … we first start with a logically deduced (or induced) hypothesis (rationalism) then we seek out data to test it (empiricism).  This goes so far beyond the steps of the scientific method discussed in middle schools throughout the US, as this progression is far from linear. I refer to it as a dance because there is give and take between these two approaches. Understanding how we know helps frame the context for statistics … it’s making sense of the observations. It’s just one small component of how we know anything.

In class assignment: This is fairly basic and should take no more than 10 – 15 minutes of class.

1. Start with asking students to make a list of what they know to be true.  (4 – 8 statements will work)
2. Have them form into groups of 4, and start to classify each of the statements into different ways of knowing. Any form of classification is fine.
3. Now, have them try to characterize each classification.
4. Bring the groups back together and discuss what they found in common and where they differed.
5. Now, introduce to them the four ways of knowing, providing an example and definition of each.
6. Have them put their examples into the four categories, and discuss.

Homework: Give students a list of around 12 statements people “know,” and have them place each into one of the four categories.

From the four ways of knowing, we move into the foundational explanation of the Four Uses of Statistics. This topic is covered in detail in chapter 1 of Kiess and Green’s (2010) Statistical Concepts for the Behavioral Sciences, 4/e (http://www.pearsonhighered.com/educator/product/Statistical-Concepts-for-the-Behavioral-Sciences/9780205626243.page).

It helps to lay out the purposes for which most statistics are used, while providing students with concrete examples.

1. Describe samples: How many of the people in the classroom live in dormitories?
2. Draw inferences from samples to populations: If I wanted to get a sense of everyone in the class, but didn’t have the time to ask, I could ask a sample of five students, “How many hours per week do you expect to study for this class?” and infer that the mean of the sample will be similar to the mean of the class (the population). Of course, issues of sampling error can start to be discussed here.
3. Test hypotheses about the relationship between two or more variables: I typically have students form a hypothesis, and then we get a sample of data to see if it’s correct. If you aren’t comfortable going with a class-created option (which often requires you to tweak the hypothesis on the spot), a good hypothesis is: when it rains, people feel down.
4. Find associations among variables: Does where you sit in class affect how well you are going to do in the class?

Incredibly, within a few examples, many concepts that underlie the use of statistics can be introduced to students, and then, as you progress through the class, they can be reinforced. Now, you may say … how do the uses of statistics relate to making sense of variability?

1. Descriptive statistics capture the variability in a sample.
2. Inferential statistics capture the variability in the sample and use it to infer what is going on in the population.
3. The critical variability in hypothesis testing is variability due to individual differences of the subjects who just happen to be in your sample, that is, variability due to sampling error. The statistic estimates the sampling error, removes it, and enables us to test the hypothesis.
4. Of course, statistics used to find associations are looking for how variables are covarying (varying together).

Thus, all four uses of statistics are making sense of variability in different ways. They also have different statistics that they use to make sense of the variability.

This is a very term-rich lesson, and I often encourage students to make use of flash cards. There are also flash card apps and electronic flash cards, but my students have told me that writing out their own flash cards on index cards works the best.

I will admit, this foundation is a bit awkward. I liken it to visiting a lot where a huge, never-before-seen building is going to be built. You are starting from scratch. Students don’t know where they are going, any more than you know what that new building is going to look like. But by helping students think about what they know and how they know it, how statistics are used, and by getting them used to the terms of research and statistics, they begin to get comfortable with the thought of learning statistics … and it starts with the Critical Concept of Making Sense of Variability.



## THE most critical concepts in applied statistics: Treating students like family

There is nothing like having a child preparing to learn statistics to get a mother to focus on what THE most critical concepts in applied statistics are. I’ll be honest; I’m not basing this posting on research, as sadly, no such research exists. It is, instead, based on my experience in teaching and research, coupled with the reality that I only have a few hours to cover the most important material with my son and the sons and daughters of a few of my dearest friends. You see, they are all preparing to take a math statistics class either this summer or this fall. We all want our children to understand math stats in the larger context of applied statistics.

In this posting, I will cover the outline of what I have deemed most critical; then, over the course of the next few weeks, I will detail the lessons, activities, and homework assignments. Each session is equivalent to one week’s worth of work during a typical semester for the type of students I teach. As with everything, there may be some variability in how much time it takes to cover this material, depending on your class size and student type.

Session #1: Making Sense of Variability

• Introduction to Epistemology — the four ways of knowing, with a focus on the dance between rationalism (forming hypotheses) and empiricism (gathering observations in the form of data).
• 4 Uses of Statistics: Describe, Infer, Test Hypotheses, Find Associations
• Introduction to research methods (just the experiment, and appropriate terms).
• Brief review of mean, median, and mode

Session #2: Capturing Variability

• Conceptually understanding variability (deviation) and the sum of squares
• Finding the Sum of Squares
• Obtaining the average Sum of Squares — the variance
• Understanding why we need the standard deviation (as it makes conceptual sense, where the variance doesn’t)
• Population Variance and Standard Deviation and Sample Variances and Standard Deviations used to infer the population
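The computational thread of this session can be sketched in a few lines of Python. The scores below are hypothetical; the point is only the progression from deviations to sum of squares to variance to standard deviation, and the n versus n – 1 divisors:

```python
# A minimal sketch of the Session #2 sequence: sum of squares, variance,
# and standard deviation, for a population and for a sample used to
# estimate a population. The scores are made up for illustration.
from math import sqrt

scores = [4, 7, 5, 9, 5]          # hypothetical quiz scores
n = len(scores)
mean = sum(scores) / n

# Sum of squares: total squared deviation from the mean
ss = sum((x - mean) ** 2 for x in scores)

# The population variance averages SS over n; the sample (inferential)
# variance divides by n - 1, correcting the bias that comes from
# estimating the population variance from a sample.
pop_var = ss / n
sample_var = ss / (n - 1)

# The standard deviation returns to the original units of the scores,
# which is why it makes conceptual sense where the variance does not.
pop_sd = sqrt(pop_var)
sample_sd = sqrt(sample_var)
```
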

Session #3: Normal Distribution

• Review population vs. sample/ parameter vs. statistic
• Normal Distribution as a type of a population
• Properties of the Normal Distribution
• Area under the curve of a normal distribution
• Z-scores as a means of identifying location of an observation on the normal distribution

Session #4: Sampling Distribution of the Means and Standard Error

• Conceptually understanding a sampling distribution
• Exploring the variability in sample means and understanding why it occurs
• Sampling Distribution and the Central Limit Theorem
• Standard Error of the Mean (actual and estimated)
• Introduction to the z-test as a means of finding the location of a sample mean on the sampling distribution of the means
• Comparing and Contrasting the Normal Distribution with the Sampling Distribution of the Means

Session #5: Understanding Hypothesis Testing

• Statistical Hypotheses
• Decisions/ Assumptions/ and Consequences (outside of statistics: common examples, selecting a college & deciding to go on a date).
• Steps of Hypothesis Testing: Research Hypothesis; Statistical Hypothesis; Creation of Sampling Distribution of the Means, and identification of rejection region; Gather Data/Calculate Statistic; Make a decision from data; Draw a Conclusion from data
• Errors in Statistical Decision Making
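The steps in the Session #5 outline can be sketched as a one-sample z test. The null parameters, sample values, and two-tailed alpha of .05 here are all hypothetical choices for illustration:

```python
# The hypothesis-testing steps as a one-sample z test (two-tailed,
# alpha = .05). All numbers are made up for illustration.
from math import sqrt

mu0, sigma = 100, 15          # null-hypothesis population parameters
sample = [104, 109, 99, 112, 106, 103, 108, 101, 110, 105]
n = len(sample)
xbar = sum(sample) / n

# Locate the sample mean on the sampling distribution of the means:
# the denominator is the standard error, sigma / sqrt(n)
z = (xbar - mu0) / (sigma / sqrt(n))

# Decision: compare to the rejection region (|z| >= 1.96 for alpha = .05)
reject_null = abs(z) >= 1.96
```
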

Now, by understanding all of these concepts, I believe my son and my friends’ children will be prepared to learn any calculation in statistics, better understand what is happening, and interpret the results.

My hope for their classes is that the professor teaching the mathematical statistics class informs the students where in the formula the sampling error is calculated or estimated, when the statistic can and cannot be used, and what assumptions underlie the statistic and what happens to the results when they are violated. I would like my son and my friends’ children to learn about basic parametric and nonparametric statistics, and a little about statistical computing.

Over the next few weeks, I will lay out detailed activities and homework assignments that align with these critical concepts.

Please let me know if you feel I missed a critical component or overstated a concept that you feel isn’t as critical.

## Difficult Concept: Teaching Sampling Error and Sampling Distribution of the Means

I am currently teaching the sampling distribution of the means and sampling error to my students. They are difficult concepts to convey, and unlike much of my teaching, where lecture comprises a fair portion of class time, I find myself “slowing down” the progress at this point by putting more of the activities in the hands of the students, having them participate in activities during class time, and requiring them to generate ideas in and out of class.

There are three activities that I use to help students learn the concept of the sampling distribution of the means and sampling error.

(1)    Generating hypotheses, then identifying “individual differences in extraneous variables”

• First, I model for them, using the Socratic method (asking them questions as a means of leading them to the answer), how to identify individual differences. I first do this when introducing extraneous variables, during the first week or two of class, and periodically throughout the first half of the semester: anytime I speak of independent, dependent, or subject variables, I have students generate the extraneous variables as well. This task, repeated early on, and especially as we approach sampling error, not only helps students to understand sampling error, but makes the teaching of confounds easier as well. (Sampling error is random variation in extraneous variables, while confounds are systematic variations in extraneous variables.)
• For homework, I assign students to generate a hypothesis (by this point, they have been doing this throughout the semester) and then a list of 10 individual differences in extraneous variables.
• During class time, they form groups to discuss and critique each other’s lists, then generate another list, as a group, that gets graded as a quiz. Truthfully, I have too many students (and no TA) to grade all 80 of these assignments; by having them work in groups of five, I have little trouble grading the lists.

Notice how much time I spend on the concept of individual differences and extraneous variables. But, as a critical concept, it is time well spent. Truthfully, it comprises about 50 minutes, but it typically takes place over the course of weeks, helping build students’ thinking.

(2)    M&M creation of a pseudo-empirical distribution of the means.

• I formally model sampling distribution in class with the M&M demonstration.  Though I’ve described this activity before, I’ll describe it again here.
• I get plain M&M’s whose proportions by color are: 24% blue, 14% brown, 16% green, 20% orange, 13% red, and 14% yellow.
• Each color receives a value (e.g., 1 – 6).
• I calculate what the mu would be given the stated proportions.
• I have students randomly sample N = X M&M’s (the value depends on how many M&M’s I have to share with the students; N = 10 should be the smallest value).
• Students then calculate the mean for their sample.
• Then I have them report their sample means; I enter them into Excel, build a very quick (and sloppy) empirical sampling distribution, and tell them what mu is.
• We compare our mean of the means to mu, and talk about the variability in the rest of the sample means.
• We talk about how their individual sample means differ from mu and why.
• It seems so obvious to the students that I can then switch over to other examples, like dog weight or performance on recall of a list of words.
• Students generate the extraneous variables that serve as sampling error, just as the colors of the M&M’s can serve as sampling error.
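The M&M demonstration can also be run as a quick simulation. A sketch, using the color proportions stated above (they total 101%, so the code normalizes them); the 1–6 value assignment, the sample size N, and the number of students are arbitrary stand-ins, just as in the classroom version:

```python
# A simulated version of the M&M sampling-distribution demonstration.
# Proportions are those stated in the post (normalized); values 1-6
# are assigned to colors arbitrarily, as in class.
import random

random.seed(1)
values = [1, 2, 3, 4, 5, 6]      # blue, brown, green, orange, red, yellow
weights = [24, 14, 16, 20, 13, 14]

# mu for the stated proportions: the probability-weighted mean value
mu = sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Each "student" draws a sample of N candies and reports a sample mean
N, n_students = 10, 80
sample_means = [
    sum(random.choices(values, weights=weights, k=N)) / N
    for _ in range(n_students)
]

mean_of_means = sum(sample_means) / n_students
# Individual sample means scatter around mu (sampling error), while
# the mean of the sample means lands close to mu.
```
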

(3)    I end with having students participate in a Mathematica Demonstration, both in and outside of class. If you haven’t used Mathematica Demonstrations, start by reviewing this prior blog post https://statisticalsage.wordpress.com/2011/01/08/before-the-semester-starts-im-playing-with-pictures/ or this one https://statisticalsage.wordpress.com/2011/05/24/using-mathematica-deomnstrations-to-visualize-statistical-concepts/.

If you have used Mathematica, this demonstration works well in helping students to understand the sampling distribution of the means.

This year, I am requiring that students answer a series of questions about each Mathematica Demonstration to see if focusing them on the activity will increase what they are gaining from it.

For this demonstration the questions are as follows:

1. Try three different sample sizes. Which ones did you select? Draw the sampling distribution of the means by each N. What happens to the shape of the sampling distribution of the mean as N gets larger? Explain why this happens.

2. Using N = 15, change mu. What happens to the shape of the sampling distribution of the means as mu changes? Explain why this happens.

3. Write the symbol for standard error. Change the standard deviation. What happens to the standard error as sigma gets larger? Explain why this happens.

4. Define Sampling Distribution of the Means. Define sampling error. What value do we calculate to find sampling error? Write down that formula. Why is this such an important part of statistics?
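The relationships questions 1 and 3 probe follow directly from the standard error formula, sigma-sub-x-bar = sigma / sqrt(N). A sketch, with arbitrary values of sigma and N:

```python
# Standard error of the mean: sigma_xbar = sigma / sqrt(n).
# The sigma and n values below are arbitrary illustrations.
from math import sqrt

def standard_error(sigma, n):
    """Standard error of the mean for population sd sigma and sample size n."""
    return sigma / sqrt(n)

# As N grows, the sampling distribution of the means narrows ...
se_by_n = {n: standard_error(15, n) for n in (5, 15, 45)}

# ... and as sigma grows (N fixed), the standard error grows with it.
se_by_sigma = {s: standard_error(s, 15) for s in (5, 15, 30)}
```
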

As with all of our difficult concepts, if you have any recommendations, I encourage you to first work on getting them published in http://www.teachpsychscience.org/ and then let us know about it!



## So, you don’t want to teach stats …

Here at Statistical Sage, though we have well over 100 followers from all over the world, most of our viewers seem to arrive through Internet searches. I always enjoy looking at the different terms people are using, and, in fact, I plan to analyze those terms to gain insight into the challenges people may be having in teaching applied statistics.

However, one search term caught my attention recently … “I don’t want to teach stats.”

I certainly understand about not wanting to teach certain classes we end up getting assigned to teach. I am sure I’m not alone in sighing, at least on occasion, when seeing what classes I will be teaching (or more importantly, what classes I won’t be teaching) for future semesters, but I have to admit, I have never thought “I don’t want to teach stats.”

If I were to talk to an individual who was “stuck” teaching statistics, here are the tips I would provide to them to help them through in teaching this class.

(1)    Never let your students know about your lack of desire to teach this class.

Students will be coming to your class not wanting to take it. You can’t give them additional reasons as to why they are right, particularly when applied statistics is so critical for their future professional and graduate-school success.

(2)    Don’t reinvent the wheel. Get a syllabus from someone who has been successful in teaching the course. You can obtain a copy of a syllabus and tips on syllabus formation from a prior posting.

By the end of 2012, APA’s Division 2 Task Force on Statistical Literacy will have recommendations for the teaching of applied statistics in psychology. This group will be providing everyone with a list of student learning outcomes, a bibliography of resources, a list of best practices in teaching for each student learning outcome, and a detailed outline of assessment practices. As this information becomes available, I will post it here.

(3)    Seek out from others who have taught this class the potential pitfalls, and be prepared to address problems before they become problems. Understanding issues like the most critical concepts (https://statisticalsage.wordpress.com/2010/11/23/core-statistical-concepts/) and activities to help students master them can help you help your students before real challenges erupt. Though this blog is filled with such information, I recommend you start with Hal and Bonnie’s Five Tips to Teaching Applied Statistics, https://statisticalsage.wordpress.com/2011/08/14/new-to-the-blog-follow-bonnie-and-hals-five-tips-to-teaching-applied-statistics/. Or you can learn from others who are successful in your discipline and apply the process of their success to your own success as a teacher of statistics: https://statisticalsage.wordpress.com/2011/11/13/learning-from-steve-jobs/.

(4)    Get a book that students find easy to read and understand, and that comes with a set of homework problems (both in and out of the textbook). Of course, my favorite applied statistics book is Kiess and Green (2010) http://www.pearsonhighered.com/product?ISBN=0205626246. In addition to coming with a detailed instructor’s manual, with specific classroom activities, chapter outlines, and student outcomes, it also has about 5 homework assignments per chapter, and several problems in the textbook for students to use, http://www.pearsonhighered.com/product?ISBN=0205626246#tabbed. A great book will make teaching applied statistics easier.

(5)    You are going to need to give examples in class of studies that use statistics. Have fun with it: use studies YOU find interesting. If you find an example interesting, it is bound to show to the students, so talk about your own research or areas where statistics have been applied in your life. Given that examples will take up about 15 minutes of each and every (50-minute) class, you can be guaranteed that at least part of every class will be interesting to you. If you are interested in the topics you are talking about, your students will be excited about coming to class to listen to what you have to say next, and that enthusiasm will rub back off on you, the professor, in a nice, circular, and uplifting manner.

(6)    Chances are you are going to try to get out of having to teach statistics in the future. And let’s face it, you are probably just one new hire away from having your wishes fulfilled. However, I still encourage you to read up on pedagogy, because, after all, the economy is bad, and you may be at the bottom of that seniority pile for longer than you expected, as the senior faculty who should have retired years ago can no longer do so thanks to decreases in their retirement funds. If you don’t want to invest a great deal of time in the study of pedagogy, that’s why Statistical Sage is here for you. After all, you may be “stuck” teaching applied statistics, but you still cared enough to google “I don’t want to teach stats,” and you cared enough to read a few entries here. That means you do care.

There are lots of things I don’t want to do: clean out my refrigerator, go for my annual checkup with the doctor, go to the dentist for a teeth cleaning. And yet, I do them. You can teach stats well, too, and who knows … maybe you’ll even like it, a little.

Please let me know how your semester turns out!

## Difficult Concepts: Research Hypotheses vs. Statistical Hypotheses

I always cringe when I see a statement in a text or website such as “the research hypothesis, symbolized as H1, states a relationship between variables.” No! No! No! How can students not be confused about the difference between research and statistical hypotheses when instructors are? H1 is not the research hypothesis; it is the alternative to the null hypothesis in a statistical test.

Let’s be very clear: in most research settings, there are two very distinct types of hypotheses, the Research (or Experimental) Hypothesis and the Statistical Hypotheses. A research hypothesis is a statement of an expected or predicted relationship between two or more variables. It’s what the experimenter believes will happen in her research study. For example, a researcher may hypothesize that prolonged exposure to loud noise will increase systolic blood pressure. In this instance the researcher predicts that exposure to prolonged noise (the independent variable) will increase systolic blood pressure (the dependent variable). This hypothesis sets the stage to design a study to collect empirical data to test its truth or falsity. From this research hypothesis we can imagine the scientist will, in some fashion, manipulate the amount of noise a person is exposed to and then take a measure of blood pressure. The choice of statistical test will depend upon the research design used: a very simple design may require only a t test, a more complex factorial design may require an analysis of variance, and if the design is correlational, a correlation coefficient may be used. Each of these statistical tests will possess different null and alternative hypotheses.

Regardless of the statistical test used, however, the test itself will not have a clue (if I am allowed to be anthropomorphic here) of where the measurement of the dependent variable came from or what it means. More years ago than I care to remember, C. Alan Boneau made this point very succinctly in an article in the American Psychologist (1961, 16, p.261): “The statistical test cares not whether a Social Desirability scale measures social desirability, or number of trials to extinction is an indicator of habit strength….Given unending piles of numbers from which to draw small samples, the t test and the F test will methodically decide for us whether the means of the piles are different.”
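Boneau’s point can be made concrete in code: the same t computation applies to any two piles of numbers, whatever they are supposed to measure. A minimal equal-n, pooled-variance independent-samples t statistic, with made-up blood pressure data echoing the noise example above:

```python
# A minimal independent-samples t statistic (equal group sizes, pooled
# variance). The "piles of numbers" are hypothetical systolic BP values.
from math import sqrt

def t_statistic(a, b):
    """Independent-samples t for two equal-sized groups of scores."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    pooled_se = sqrt((var_a + var_b) / n)   # estimated standard error
    return (mean_a - mean_b) / pooled_se

noise_bp = [128, 135, 131, 140, 133]   # hypothetical: loud-noise group
quiet_bp = [121, 126, 124, 130, 119]   # hypothetical: quiet group
t = t_statistic(noise_bp, quiet_bp)
# The same function, fed any other two piles of numbers, would
# "methodically decide" in exactly the same way.
```
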

Rejecting a null hypothesis and accepting an alternative does not necessarily provide support for the research hypothesis that was tested. For example, a psychologist may predict an interaction of her variables and find that she rejects the null hypothesis for the interaction in an analysis of variance. But the alternative hypothesis for an interaction in an ANOVA simply indicates that some interaction occurred, and there are many ways for an interaction to occur. The observed interaction may not be the interaction that was predicted in the research hypothesis.

So please, make life simpler and more understandable for your students. Don’t call a statistical alternative hypothesis a research hypothesis. It is not. Your students will appreciate you making the distinction.

## Difficult Concepts—Degrees of Freedom

Several posts ago, Bonnie said we would address some difficult concepts for student understanding of statistics. I thought I would take a shot at one of the concepts she listed, degrees of freedom (df).

To help understand this concept, let us first think of df in a non-statistical way and say that df refers to the ability to make independent choices, or take independent actions, in a situation. Suppose you have three tasks you wish to accomplish: you want to go shopping, plan a vacation, and work out at the gym. Assume that each task will take about an hour and that you may do all of them in one day, or only one each day over the course of several days. I have created a situation with three degrees of freedom: you have three independent decisions to make. Suppose you decide you will go shopping today. Does this decision put any limitation on when you may do the other tasks? No, for you may still do the other tasks either today or in the course of the next few days. Suppose next you decide to plan a vacation, and you will do that tomorrow. Does this decision place any limitation on when you may go to the gym? Again, no, because you still might go to the gym today, tomorrow, or on another day. Notice here that each choice of when to do an activity is independent of each of the other choices. Thus, you have 3 degrees of freedom of choice in the order of doing the tasks.

Now, set a different scenario where I plan some limitation on the order in which you may do the tasks. You still have the same three tasks to do, except now you decide you will do only one a day and you want to have them all completed over a span of three days. This scenario has only 2 df, for there are only two independent decisions for you to make. After you have made a choice on two of the activities, the day for doing the third activity is “fixed” or decided by your other two choices. For example, suppose you decide to plan your vacation today. For this choice you have total freedom to make a decision for any of the three days. You next decide to plan when to go to the gym. Notice for this decision, however, you have only two choices left, either tomorrow or the following day. A statistician would say you have two degrees of freedom when making this decision. You decide to go the day after tomorrow. Finally, you have to plan shopping, but now you have essentially no choices open to you, it must be tomorrow. For this decision, you have no degrees of freedom. Thus, in a sense, you have 2 df in this scenario. You are free to make two choices, but making any two choices automatically determines your third choice.

Of course, the obvious question a student may ask is “What does all this have to do with statistics?” Let’s see. Statistically, the df are the number of scores that are free to vary when calculating a statistic, or in other words, the number of pieces of independent information available when calculating a statistic. Suppose you are told that a student took three quizzes, each worth a total of 10 points. You are asked to guess what her scores were. In this scenario, you may guess any three numbers as long as they are in the range from 0 to 10. In this example, you have 3 df, for each score is free to vary. Each score is an independent piece of information. Choosing the score for one quiz has no effect on either of the other two scores that you may choose.

But now I give you some information about the student’s performance by telling you that the total of her scores was 27. I have now created a scenario with 2 df. Suppose you guess 10 for the first score. Does choosing this score place any limitation on what you might guess for the second, given that the total of the scores must be 27? No, for your choice of a second score is still free to vary from 0 to 10. You guess 9 for a second score. What about your choice of a third score? What must it be? If the total of the three scores is 27, and the first score you chose was 10, and the second 9, then your third choice must be 8 for a total of 27 to be obtained. In this instance, the third score is not free to vary if you know the total of the scores and any two of the three scores. For this example, then, there are 2 df in the choice of scores. If you know the total of the three scores, then only two provide independent information; the third score becomes dependent on the previous two. By giving you knowledge of the total of the scores, I have reduced the df in the number of choices you have.

Can we now relate these two examples to the calculation of statistics? Consider that you have a sample of 10 scores and you want to calculate the mean for these scores. In order to do so, you must know all 10 scores; if you know only 9, you cannot calculate the mean. Thus, if there are n scores in a sample, then for calculating the mean from this sample there are n df. Each score is free to vary and is an independent piece of information. You cannot calculate the mean unless you know all n scores. But suppose you know the mean for the scores and you want to calculate the standard deviation (s) for the scores. In this instance, there are 9 df for these scores, for if you know the mean, you need to know only 9 of the scores; the 10th score is in a sense “determined” for you by the value of the other 9 scores. So, for a set of n scores, there are n – 1 df when calculating the standard deviation.
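The quiz-score reasoning can be sketched in a few lines, using the scores from the example (three quizzes totaling 27):

```python
# Once the total (or mean) of n scores is known, only n - 1 of them
# are free to vary; the last is determined.
def last_score(total, known_scores):
    """The one value the remaining score must take, given the total."""
    return total - sum(known_scores)

third = last_score(27, [10, 9])   # must be 8, as in the example

# The same constraint underlies the n - 1 divisor of the sample
# standard deviation: deviations from the sample mean must sum to
# zero, so only n - 1 of them carry independent information.
scores = [10, 9, 8]
mean = sum(scores) / len(scores)
deviations = [x - mean for x in scores]
```
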

A question frequently arises when the idea of a fixed or determined score is discussed. Students may ask how someone’s score on a test, for example, can be “determined” or “fixed in value” by her other scores on tests. Students should be made to realize that during the actual data collection process all scores are free to vary, and the concept of degrees of freedom does not apply. Degrees of freedom only come into play after the data have been collected and we are calculating statistics on those data.

These ideas can be expanded to the computation of other statistics. Consider analyzing data with a 2 × 2 chi-square test of independence. When we are collecting data for the contingency table, the concept of degrees of freedom is not applicable. After we have collected the scores, however, and each cell of the contingency table is filled, then we can use the cell totals to find the row and column marginal totals. Notice at this stage that if I were to tell you the row and column marginal totals, then I would need to give you only one cell total, and you would be able to determine the other three cell totals. In this instance, when knowing the row and column marginal totals, there is only 1 df for the cell totals. In a more general sense, if there are r rows and c columns in a contingency table, then once the row and column totals are known, the table possesses (r – 1)(c – 1) df.
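The contingency-table argument can also be shown directly: given the marginals and a single cell, the other three cells follow. The marginal totals here are made up for illustration:

```python
# In a 2 x 2 table with fixed row and column totals, one cell
# determines the rest -- hence (2 - 1)(2 - 1) = 1 df.
def fill_2x2(row_totals, col_totals, a):
    """Recover a full 2 x 2 table from its marginals and one cell a."""
    b = row_totals[0] - a          # rest of row 1
    c = col_totals[0] - a          # rest of column 1
    d = row_totals[1] - c          # the remaining cell
    return [[a, b], [c, d]]

table = fill_2x2(row_totals=[30, 20], col_totals=[25, 25], a=15)
# table == [[15, 15], [10, 10]]; every other cell followed from a.
```
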

I think giving students this intuitive overview of df helps them to understand where such numbers come from when they are learning about various statistical tests. Perhaps it may help to make statistics a little less mysterious.

## Tackling the tough concepts

Hello All.

I apologize for our absence over the last several weeks. Though it was my intention to work on “tackling the tough concepts in statistics,” I have instead been tackling the tough concepts of life and death, as my father died following a battle with cancer. However, though I miss my Dad tremendously, it is time to continue with this blog.

Last year, I posed a question to the sages: What are the critical concepts in applied statistics? Their response was an overwhelming … it’s not a matter of individual concepts but the overall application of statistics that is necessary for true understanding.

Of course, I wholeheartedly support this assertion, and yet, as I look at how people come across this blog, they almost always do so by searching for specific statistical concepts. I also have to argue that we have to make sure students understand certain concepts before they can grasp the larger application of statistics. Taken together, I think it is a worthwhile endeavor for a few blog posts to focus on how to teach the critical concepts in statistics.

A search of the literature failed to yield a complete list of concepts. However, the website CAUSEweb.org, the Consortium for the Advancement of Undergraduate Statistics Education, has resources on statistical concepts divided into eight sections: Data, Central Tendency, Correlation and Covariation, Distribution and Graphs, Variability, Sampling, Sampling Distribution, and Inferences. To this list I would add a ninth, Error, for though it overlaps with several of the prior concepts, it is a critical concept that requires direct attention.

Over the next several weeks, I will address each of these concepts and provide information on how best to teach them. Of course, if you feel I’m missing a concept, I encourage you to let us know.

