Category Archives: Curriculum

Great Resource for the Teaching of Applied Statistics

Hello All,

The Society for the Teaching of Psychology has an office dedicated to great, peer-reviewed resources for teaching called the Office of Teaching Resources in Psychology.

One such free resource for those of us teaching applied statistics is the online book Teaching Statistics and Research Methods: Tips from TOPS.

Another is Statistical Literacy in Psychology: Resources, Activities, and Assessment Methods.

The web site housing these two resources is filled with great ideas, all of which have been peer-reviewed. You can find teaching resources including example syllabi as well as articles on how to maximize your students’ learning. Even if you are teaching applied statistics in an area outside of psychology, I encourage you to make use of this valuable set of tools.

Happy Teaching!





Filed under Applied Statistics, Curriculum, Engaging students, Pedagogy, Preparing to Teach, Professional Development

Cognitive Biases and Decision Making

The goal of this blog is to talk about how best to teach students to use applied statistics successfully. During the last few weeks, and for the next few weeks, I plan to talk about a specific group of students: administrators interested in making data-driven decisions. In a prior post, I reviewed the components that should be part of such a training session.

From experience, I have found that administrators are best able to learn how to make data-driven decisions only when they first learn about epistemology (how we know what we know), which was covered in a prior post, and cognitive biases, the innate tendencies that keep us from accurately assessing what is going on around us. Business Insider has summarized 56 organizational cognitive biases, and Psychology Today reviews how cognitive biases negatively impact businesses.

Though there are many cognitive biases, in an organizational setting I like to speak about these six.

  1. Confirmation Bias – we seek out information that matches what we already believe to be true, ignoring all information that contradicts our beliefs.
  2. Ingroup Bias – though quite complex, in short, we tend to prefer people we deem to be part of our group. We view them as more varied than people outside of our group. When they make mistakes, we tend to be more forgiving or understanding. We tend to exert more energy to help them and protect them from harm.
  3. Projection Bias – we assume that what we think and feel is what others are thinking and feeling.
  4. Gambler’s Fallacy – the belief that the risk we are about to take is going to pay off, especially after a series of bad events, as our luck is bound to change.
  5. Status-Quo Bias – most people are simply more comfortable when things stay the same, even if they are less than ideal. Organizational change is not comfortable for most people.
  6. Bandwagon Effect – I have heard people use the word “sheeple” for people who follow the herd regardless of what the information might be saying otherwise.

What each of these cognitive biases has in common is that we are placed into a cognitive state where we ignore the data right in front of our nose, particularly if it contradicts a firmly held belief. I was once in an administrative meeting after a particularly challenging decision, one the faculty were strongly against. And yet, an administrator remarked that 80% of the faculty were on board with this decision. I didn’t know of one person, let alone 80%, who was supportive of this decision outside of the people in the room, but a couple of cognitive biases were taking hold. The top administrators all felt this was a good idea, so with the bandwagon effect (among other pressures), so did the middle-level administrators. Then, since they believed it to be a great decision on which they all agreed, they projected their thinking onto the vast majority of the faculty. A quick survey (formal or informal) would have helped them to see what the faculty were actually thinking. That information could have been used to change the decision, weaken its intensity, or provide communication and justification as to why such a widely opposed decision had to be implemented.

Properly designed measures and appropriate sampling techniques can yield great data that can be used to help provide insight to administrators to aid them in moving an organization forward.
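To make the survey idea concrete, here is a minimal sketch (in Python, with entirely invented responses) of how even a quick poll yields an estimated level of support along with a margin of error:

```python
import math

# Hypothetical responses from a quick faculty survey (1 = supports, 0 = opposes)
responses = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0]

n = len(responses)
p_hat = sum(responses) / n               # sample proportion in support
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of a proportion
margin = 1.96 * se                       # 95% margin of error

print(f"Support: {p_hat:.0%} +/- {margin:.0%} (n={n})")
```

Even a rough interval like this would have made it clear whether “80% of the faculty” was plausible.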

Certainly, if we stick with our cognitive biases, we’ll feel better about ourselves, but that won’t help an organization become the best it can be, as in the end, an administrator is only as good as the decisions he or she is making.

In training administrators on how best to use applied statistics, start by explaining how data can help them achieve higher-quality decisions by bypassing epistemological and cognitive-bias limitations.


Filed under Applied Statistics, Curriculum, Data Driven Decisions

Epistemology and Decision Making

In a recent post, I outlined what information should be used in a series of training sessions for administrators to make data-driven decisions. I have found that many people are resistant to the benefits of using data to make decisions even though they throw around phrases like “data-driven decisions,” “business analytics,” or “big data.” As such, we have to first help them understand how we know what we know, and how data fit into that set of knowledge.

Epistemology is the study of how we know what we know. Though there are many ways of classifying and characterizing types of epistemology, I find that there are four different ways that we know:

  1. Authority – We know what we know because someone tells us, and we believe it to be true. For example, everything I know about celebrating an event I learned because my mother and grandmothers told me so.
  2. Intuition – Our gut tells us what is true. For example, my gut tells me my dog loves me.
  3. Empiricism – We know through observations. This is why we collect data, to learn from it through empiricism.
  4. Rationalism – The use of logic, both inductive and deductive, will help us know the truth. This classic example characterizes rationalism well. If a tree falls in the woods and no one is there to hear it, does it make a sound? We use rationalism to know that it does make a sound.

High-quality data-driven decisions are, primarily, a dance between empiricism and rationalism. We make observations (enrollment is down 20% in Sports Management over the last 5 years), create a prediction or explanation as to what we think is going on (maybe students in this major are not getting jobs), collect data to test the prediction (students are struggling to find jobs), and then act based on the results (make improvements to Sports Management that will provide students with the skills needed for future success). And though as a scientist this may seem like a natural and obvious way to go about seeking information, it is not the standard protocol for seeking knowledge in many administrative ranks.

Decisions from intuition reign supreme in many administrative circles. And it is true that during my past two years in an administrative role I made many gut-level decisions. However, given all of our cognitive biases, which will be discussed in a future posting, intuition often leads us to less than optimal decisions. In a Harvard Business Review article on how good leaders make bad decisions, you can find examples of leaders who ignored data and relied upon their intuition to discern the optimal decision.

In administration, you can also see many “truths” proclaimed simply because the person in authority said they were true. I was in a meeting and asked a question about the justification for an expense. I was expecting some data to support the expense. Instead I was told, “The President says so.” That is authority, in its purest form. And yet, organizations are chock full of people who permitted authority to push a group decision in the wrong direction. Follow this link for information on the Space Shuttle Challenger disaster, where the organizational culture and its reliance upon authority-driven “truths” cost the lives of 7 astronauts in January of 1986.

I have found that before I can help administrators learn techniques in data analysis to answer important organizational questions, I have to first get them to think about what they know and how they know it. They must recognize that it is through creating a prediction or explanation, collecting data to evaluate that prediction or explanation, and, most importantly, letting the data speak as to what is going on, that we can unveil our eyes from our cognitive biases and get to the bits of information we truly seek that will lead us to great decisions.

Epistemology and Cognitive biases go hand in hand in helping keep us from truly using data accurately for organizational decisions. As a result, after sharing with administrators the ways of knowing, we must also outline standard cognitive biases that keep us from seeing the truth. Common cognitive biases facing an organization will be discussed in a future post.

If you are interested in learning more about epistemology, these sites have detailed information.



Filed under Applied Statistics, Curriculum, Data Driven Decisions

THE most critical concepts in applied statistics: Treating students like family

There is nothing like having a child preparing to learn statistics to really get a mother to focus on what THE most critical concepts in applied statistics are. I’ll be honest; I’m not basing this posting on research, as sadly, no such research exists. It is, instead, based on my experience in teaching and research, coupled with the reality that I have only a few hours to cover the most important material with my son and the sons and daughters of a few of my dearest friends. You see, they are all preparing to take a mathematical statistics class either this summer or this fall. We all want our children to understand math stats in the larger context of applied statistics.

In this posting, I will cover the outline of what I have deemed most critical; then, over the course of the next few weeks, I will detail the lessons, activities, and homework assignments. Each session is equivalent to one week’s worth of work during a typical semester for the type of students I teach. As with everything … there may be some variability in how much time it takes to cover this material depending on your class size and student type.

Session #1: Making Sense of Variability

  • Introduction to Epistemology — the four ways of knowing, with a focus on the dance between rationalism (forming hypotheses) and empiricism (gathering observations in the form of data).
  • 4 Uses of Statistics: Describe, Infer, Test Hypotheses, Find Associations
  • Introduction to research methods (just the experiment, and appropriate terms).
  • Brief review of mean, median, and mode

Session #2: Capturing Variability

  • Conceptually understanding variability (deviation) and the sum of squares
  • Finding the Sum of Squares
  • Obtaining the average Sum of Squares — the variance
  • Understanding why we need the standard deviation (as it makes conceptual sense, where the variance doesn’t)
  • Population Variance and Standard Deviation and Sample Variances and Standard Deviations used to infer the population
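The progression above (deviations, Sum of Squares, variance, standard deviation) can be sketched in a few lines of Python; the scores are made up for illustration:

```python
import math

scores = [4, 6, 3, 7, 5]  # a small, invented set of scores

mean = sum(scores) / len(scores)
# Sum of Squares: total squared deviation from the mean
ss = sum((x - mean) ** 2 for x in scores)

# Population variance: average squared deviation (divide by N)
pop_var = ss / len(scores)
# Sample variance: divide by N - 1 when inferring the population
sample_var = ss / (len(scores) - 1)

# Standard deviations bring us back to the original units of measurement
pop_sd = math.sqrt(pop_var)
sample_sd = math.sqrt(sample_var)
```

Seeing the N versus N - 1 divisors side by side helps motivate why the sample statistics are used to infer the population.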

Session #3: Normal Distribution

  • Review population vs. sample/ parameter vs. statistic
  • Normal Distribution as a type of a population
  • Properties of the Normal Distribution
  • Area under the curve of a normal distribution
  • Z-scores as a means of identifying location of an observation on the normal distribution
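A brief sketch of the z-score and area-under-the-curve ideas, using only the Python standard library (the exam-score numbers are invented):

```python
import math

def z_score(x, mu, sigma):
    """Location of an observation on the normal distribution, in SD units."""
    return (x - mu) / sigma

def normal_cdf(z):
    """Area under the standard normal curve to the left of z."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Example: an exam score of 600 on a scale with mu = 500, sigma = 100
z = z_score(600, 500, 100)   # z = 1.0: one standard deviation above the mean
area_below = normal_cdf(z)   # about 0.84: roughly 84% of scores fall below 600
```

The `math.erf` identity stands in for the z table students would otherwise look up.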

Session #4: Sampling Distribution of the Means and Standard Error

  • Conceptually understanding a sampling distribution
  • Exploring the variability in sample means and understanding why it occurs
  • Sampling Distribution and the Central Limit Theorem
  • Standard Error of the Mean (actual and estimated)
  • Introduction to the z-test as a means of finding the location of a sample mean on the sampling distribution of the means
  • Comparing and Contrasting the Normal Distribution with the Sampling Distribution of the Means
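The heart of this session can be demonstrated with a small simulation: repeatedly sample from a skewed population and watch the standard deviation of the sample means approach sigma divided by the square root of n. This is one possible sketch; the population and the sample sizes are arbitrary choices:

```python
import random
import statistics

random.seed(42)

# A skewed, invented "population" (e.g., weekly study hours)
population = [random.expovariate(1 / 10) for _ in range(10_000)]
sigma = statistics.pstdev(population)

n = 25  # sample size
# Build an approximate sampling distribution of the mean
sample_means = [
    statistics.mean(random.sample(population, n)) for _ in range(2_000)
]

# The SD of the sample means should approach the standard error: sigma / sqrt(n)
empirical_se = statistics.stdev(sample_means)
theoretical_se = sigma / n ** 0.5
```

Plotting `sample_means` as a histogram also shows the Central Limit Theorem at work: the distribution of means looks normal even though the population is skewed.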

Session #5: Understanding Hypothesis Testing

  • Statistical Hypotheses
  • Decisions/ Assumptions/ and Consequences (outside of statistics: common examples, selecting a college & deciding to go on a date).
  • Steps of Hypothesis Testing: Research Hypothesis; Statistical Hypothesis; Creation of Sampling Distribution of the Means, and identification of rejection region; Gather Data/Calculate Statistic; Make a decision from data; Draw a Conclusion from data
  • Errors in Statistical Decision Making
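The hypothesis-testing steps above can be walked through with a one-sample z-test; all the numbers below (population parameters, sample results) are invented for illustration:

```python
import math

# Hypothetical example: population mu = 100, sigma = 15 (an IQ-like scale).
# Research hypothesis: our training program raises scores.
mu, sigma = 100, 15
n, sample_mean = 36, 106   # made-up sample results
critical_z = 1.645         # one-tailed rejection region, alpha = .05

# Locate the sample mean on the sampling distribution of the means
se = sigma / math.sqrt(n)          # 15 / 6 = 2.5
z_obs = (sample_mean - mu) / se    # (106 - 100) / 2.5 = 2.4

# Make a decision from the data
if z_obs > critical_z:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"
```

Note that the rejection region is identified before the observed statistic is computed, mirroring the order of the steps listed above.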

Now, by understanding all of these concepts, I believe my son and my friends’ children will be prepared to learn any calculation in statistics, better understand what is happening, and interpret the results.

My hope for their classes is that the professor teaching the mathematical statistics class informs the students where in the formula sampling error is calculated or estimated; when the statistic can and cannot be used; and the assumptions underlying the statistic and what happens to the results when they are violated. I would like my son and my friends’ children to learn about basic parametric and nonparametric statistics, and a little about statistical computing.

Over the next few weeks, I will lay out detailed activities and homework assignments that align with these critical concepts.

Please let me know if you feel I missed a critical component or overstated a concept that you feel isn’t as critical.


Filed under Core Concepts, Curriculum, Hypothesis Testing, Normal Distribution, Sampling Distribution, Standard Error, Variability, variance / standard deviation, z score

So, you don’t want to teach stats …

Here at Statistical Sage, though we have well over 100 followers from all over the world, most of our viewers seem to arrive to us through Internet searches. I always enjoy looking at the different terms people are using and, in fact, plan on analyzing those terms to gain insight into the challenges people may be having in teaching applied statistics.

However, one search term caught my attention recently … “I don’t want to teach stats.”

I certainly understand about not wanting to teach certain classes we end up getting assigned to teach. I am sure I’m not alone in sighing, at least on occasion, when seeing what classes I will be teaching (or more importantly, what classes I won’t be teaching) for future semesters, but I have to admit, I have never thought “I don’t want to teach stats.”

If I were to talk to an individual who was “stuck” teaching statistics, here are the tips I would provide to them to help them through in teaching this class.

(1)    Never let your students know your lack of desire in teaching this class.

Students will be coming to your class not wanting to take it. You can’t give them additional reason as to why they are right, particularly when applied statistics is so critical for their future professional and graduate student success.

(2)    Don’t reinvent the wheel. Get a syllabus from someone who has been successful in teaching the course. You can obtain a copy of a syllabus and tips on syllabus formation from a prior posting.

By the end of 2012, APA’s Division 2 Task Force on Statistical Literacy will have recommendations for the teaching of applied statistics in psychology. This group will be providing a list of student learning outcomes, a bibliography of resources, a list of best practices in teaching for each student learning outcome, and a detailed outline of assessment practices. As this information becomes available, I will post it here.

(3)    Seek out from others who have taught this class the potential pitfalls, and be prepared to address problems before they become problems. Understanding issues like the most critical concepts and the activities that help students master them can help you help your students before real challenges erupt. Though this blog is filled with such information, I recommend you start with Hal and Bonnie’s Five Tips to Teaching Applied Statistics. Or you can learn from others who are successful in your discipline and apply the process of their success to your own success as a teacher of statistics.

(4)    Get a book that students find easy to read and understand, and that comes with a set of homework problems (both in and out of the textbook). Of course, my favorite applied statistics book is Kiess and Green (2010). In addition to coming with a detailed instructor’s manual, with specific classroom activities, chapter outlines, and student outcomes, it has about 5 homework assignments per chapter and several problems in the textbook for students to use. A great book will make teaching applied statistics easier.

(5)    You are going to need to give examples in class of studies that use statistics. Have fun with it: use studies YOU find interesting, and talk about your own research or areas where statistics have been applied in your life. If you find it interesting, that is bound to show to the students. Given that the examples will take up about 15 minutes of each and every (50-minute) class, you can be guaranteed that at least part of every class will be interesting to you. If you are interested in the topics you are talking about, your students will be excited about coming to class to listen to what you have to say next, and that enthusiasm will rub off on you, the professor, in a nice, circular, uplifting manner.

(6)    Chances are you are going to try to get out of having to teach statistics in the future. And let’s face it, you are probably just one new hire away from having your wishes fulfilled. However, I still encourage you to read up on pedagogy, because, after all … the economy is bad, and you may be at the bottom of that seniority pile for longer than you expected, as the senior faculty who should have retired years ago can no longer do so thanks to decreases in their retirement funds. If you don’t want to invest a great deal of time in the study of pedagogy, that’s why StatisticalSage is here … for you. After all, you may be “stuck” teaching applied statistics, but you still cared enough to google “I don’t want to teach stats,” and you cared enough to read a few entries here. That means you do care.

There are lots of things I don’t want to do … clean out my refrigerator, go for my annual check up with the doctor, go to the dentist for a teeth cleaning, and yet … I do it. And you can teach stats well, too, and who knows … maybe  you’ll even like it, a little.

Please let me know how your semester turns out!


Filed under Core Concepts, Curriculum, Engaging students, Homework/ Assignments, Pedagogy, Statistics Syllabus, Text books

A Statistics Professor’s New Year’s Resolution – 2012

Happy 2012! It is time for us to set goals for the new year.

There is good reason for us all to make New Year’s Resolutions as applied statistics professors (and students), as doing so increases the likelihood of making a change. The first step in making a change is to focus on the negative: what’s going on in your classroom that you would like to change or that needs improvement?

Though I can’t attest to the quality of the data, it is reported that 40-45% of all people make New Year’s Resolutions, with weight loss and exercise topping that list, followed by quitting a bad habit like smoking, and managing money better. Setting a New Year’s Resolution actually does increase the likelihood of a person achieving that goal. But that shouldn’t be surprising … Yogi Berra is reported to have said, “If you don’t know where you are going, you’ll end up some place else.” A New Year’s Resolution is, quite simply, a goal for a person to achieve.

My professional goal for 2012 is twofold. (1) I reverted back to a cumulative final exam this past semester and noticed that there were a few areas where most students had challenges. My first New Year’s Resolution is to help students master these more challenging areas of applied statistics. (2) I want more students to behave in a manner that will assure their success … you know, the basic things like coming to class, completing homework, and so forth.

However, simply hoping that my goals come true will not maximize the likelihood of their being reached. Borrowing from research on Deliberate Practice, it helps in achieving one’s goals if we:

  1. Clearly state what we are interested in achieving, and make a plan for how to achieve it.
  2. Make sure the goal is attainable, and that it takes us to a higher level of achievement.
  3. Establish a way of assessing our progress toward the goals.
  4. Practice, practice, practice, and revise, revise, revise along the way, recognizing that there will be times when we won’t be successful, but that even in failure, we can learn, and try again.

I want to focus on increasing student learning, by looking at student weaknesses on the final exam. That is fairly specific. To do this I will:

  •  Identify the student learning outcomes (SLOs) students struggled with by examining an item analysis of the cumulative final exam.
  • Add additional homework assignments in these areas.
  • Add additional quizzes for students.
  • Notify my student tutor of the areas of weakness, have her come up with special study sessions for these difficult areas, and make announcements to students … using the carrot of high grades on the final exam.
  • See if the in class activities/lectures are helping students master the material.
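For the item analysis step, a minimal sketch is below (Python, with an invented answer matrix): item difficulty is the proportion of students answering correctly, and a crude discrimination index compares the top half of scorers with the bottom half.

```python
# Each row is one student's answers across items (1 = correct, 0 = wrong); invented data
answers = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

n_students = len(answers)
n_items = len(answers[0])

# Item difficulty: proportion of students answering the item correctly
difficulty = [
    sum(student[i] for student in answers) / n_students for i in range(n_items)
]

# Discrimination: correct rate among the top half of scorers minus the bottom half
ranked = sorted(answers, key=sum, reverse=True)
half = n_students // 2
top, bottom = ranked[:half], ranked[half:]
discrimination = [
    sum(s[i] for s in top) / half - sum(s[i] for s in bottom) / half
    for i in range(n_items)
]
```

Items with very low difficulty values or weak discrimination flag the areas where extra homework and quizzes are worth targeting.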

It will be easy to assess … homework & quiz performance, feedback from the student tutor, and ultimately student performance on the final exam will all provide evidence of whether my approach improves students’ performance. Throughout the spring semester, I will chronicle what those areas are and share with you additional homework and class activities. And my student tutor, Amy Lebkeucher, has agreed to talk about her experiences in helping students master this material, as well.

As for helping students adopt the kind of behavior we all want to see in our students … I haven’t found the right words to tell students to make them behave. I explicitly tell students what they need to do to be successful in class. It is printed in the syllabus; I have other students tell them. I remind them on a regular basis, and yet, every semester I have students who fail my class because they simply didn’t buy the book … “Aren’t you one of those great teachers, where I don’t need to buy the book to be successful?” Students come to me at the end of the semester asking what they can do since they missed so many classes and homework … sigh. I know I’m not alone, as the most recent report from the National Survey of Student Engagement (NSSE) reports that the typical college student is studying less than 15 hours a week … that’s one hour of studying per credit hour, which simply isn’t enough time. About a third of all students do NOT even review notes after taking them, and close to 1 out of 3 students who need help do not seek it from the professor! In short, the NSSE reveals what many of us are seeing … our students aren’t behaving in a manner that will maximize their success.

So, I’m adopting a “Marketing Campaign” to help students understand that they should (1) attend class, (2) study at least 2 hrs./week/credit hour, (3) read all assigned reading at least thrice, (4) establish a study plan, and (5) implement self-testing into their study plan. I will assess this marketing campaign with surveys of student-reported behavior, class attendance, and homework checks. However, I would be lying if I said I know what I need to do to help maximize students’ behavior. I’m thinking of trying something new, but … if this were an easy task, I would have had it fixed by now. This may not reach Deliberate Practice’s criterion of being attainable … but it’s worth trying.

Of course, during 2012, I will let  you know what works and what doesn’t … and if anyone has any ideas, please let us know.

May 2012 be a year of great professional growth, health, and peace for us all!


Filed under Curriculum, Engaging students, Homework/ Assignments, Maximizing Cognitive Development, Pedagogy

How wonderful and I wish, I wish …

As I type this, I have one fifty-minute class left to teach, and my time with my statistics class will be over. As with anything, each semester varies; some semesters I cover more information than others. I liken this semester to driving through the city and hitting all green lights! As such, I believe my students were able to master additional information, thanks mostly to good fortune.

So, here is my list of things I’m so thrilled I covered:

(1) Effect size statistics, like eta squared: Sure, effect size statistics are not used that much, and let’s face it, they are super easy to calculate, but my biggest reason for wanting to teach effect size statistics is that they help students understand what a t-test or F-test can tell us (is there a difference?) and what it can’t tell us (how big is the effect?). In fact, by spending about 20 minutes on the teaching of effect size statistics, students were better able to understand why the p-value for an observed t or F score provides us with no information about the size of the effect. All we need to know is: did we pass the threshold?
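Eta squared really is that easy to calculate: the between-groups Sum of Squares divided by the total Sum of Squares. A quick sketch with made-up data for three groups:

```python
groups = [[2, 3, 4], [5, 6, 7], [8, 9, 10]]  # invented scores, one-factor design

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups SS: how far each group mean sits from the grand mean
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
)
# Total SS: every score's squared deviation from the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# Eta squared: proportion of total variability explained by group membership
eta_squared = ss_between / ss_total
```

Unlike the observed F, `eta_squared` answers the second question directly: how big is the effect?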

(2) We find the critical value BEFORE calculating the observed value: This discussion helps focus students on the logic of statistical hypothesis testing. Specifically, statistical hypothesis testing works because we assume that the null hypothesis is true, that there is no effect of the independent variable on the dependent variable. With this assumption, we are able to generate the sampling distribution that provides us with information on the standard error. Now, if our sample mean is too extreme, we reject our initial hypothesis, the null, and accept the alternative hypothesis, that is, that the means are different. Finding the critical value prior to calculating the statistic focuses students on that “line in the sand” that says: my observations are too extreme for me to stay with my current hypothesis. Students are far less likely to fall victim to equating the p-value with the strength of the effect of the independent variable, or to conclude that “the data are trending” because of a p-value of .07, or some other funky thing far too many people do with null hypothesis testing. By spending a bit more time on the steps involved in hypothesis testing, I think students are less likely to fall victim to the common misconceptions surrounding Statistical Null Hypothesis Testing.
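The critical-value-first workflow can be sketched for an independent-samples t-test; the data are invented, and the critical value is taken from a standard t table rather than computed:

```python
import math

# Two-tailed independent-samples t-test, alpha = .05, invented data
group_a = [10, 12, 9, 11, 13, 10]
group_b = [14, 15, 13, 16, 12, 14]

df = len(group_a) + len(group_b) - 2
t_critical = 2.228  # from a t table: df = 10, two-tailed alpha = .05

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

# Pooled variance and estimated standard error of the difference
pooled_var = (ss(group_a) + ss(group_b)) / df
se_diff = math.sqrt(pooled_var / len(group_a) + pooled_var / len(group_b))

# Only now do we compute the observed value and compare it to the line in the sand
t_obs = (mean(group_a) - mean(group_b)) / se_diff
reject_null = abs(t_obs) > t_critical
```

The decision rests entirely on whether `abs(t_obs)` crosses `t_critical`, not on how far past it the statistic lands.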

(3) Though not a specific concept, I am pleased that for almost every concept I taught this semester I used new examples. Sure, I’m still a sage in training, no grey hair and all, but I was beginning to find myself using the same examples. As this is the third semester my supplemental instructor, Amy, is taking notes in class, I felt I owed it to her, at least, to “keep it fresh.” I also found thinking about this blog helped spur my mind toward different examples. In doing so, I found some worked even better than my “old stand-by” examples, but the great thing was, when a new example flopped, I just quickly switched to the example I knew helped students.

Now for my Wish List of things I always wished I could have covered, but didn’t.

(1) Though I do get to cover the concepts of the F-test, I teach a three-credit class and only have time to cover the one-factor between-subjects ANOVA. If only I could cover a two-factor between-subjects ANOVA and a one-factor within-subjects ANOVA, I would feel my students would really understand the F-test (and, as such, be less inclined to misuse or overuse it).

(2) I also feel that if I could cover nonparametric statistics, students would better understand the role of the assumptions in parametric tests, and issues like power and random error could be even better understood. Plus, they would get the benefit of learning about a really important class of statistics. Sadly, another semester has passed without me being able to cover this topic with the depth I think it deserves.

(3) I fear I don’t emphasize the weaknesses of statistics: that they are only as good as the quality of the theories being tested in the design, and only as good as the quality of the sample and the quality of the measure. At least the latter two concepts get covered in classes that follow the statistics class. But so few people speak of the topic of equifinality, the idea that the same outcome can have multiple explanations. Again, though I touch on this, the idea of developing the alternative rival hypotheses that could explain the same empirical evidence is one I simply don’t have time to cover to the extent I would like. If you have a weak theory or haven’t taken into account the alternative rival hypotheses when designing your study, cool statistics will not improve the quality of your findings.

(4) Though I tell students the hypotheses drive everything, from the selection of the measure and research design to the specific statistic one would select, and though there are example problems in the textbook (Integrating Your Knowledge) that students have to complete, I really wish we could spend more time on this.

Maybe next semester, I can find a way to reach my wish list … maybe!


Filed under ANOVA Analysis of Variance, Core Concepts, Curriculum, effect size, Hypothesis Testing, Hypothesis Testing, non-parametric, Sampling Distribution, Statistical Hypothesis Testing, Statistical Tests, t test

Core Statistical Concepts

I have been spending the week thinking about what I consider to be the “core concepts” that need to be covered in an applied statistics class, be it in psychology, health, business, or education. However, before I post my personal thoughts, I felt it necessary to see what other applied statisticians had to say. In my search, I found a paper by John McKenzie (2004), Conveying the Core Concepts, from the Proceedings of the ASA Section on Statistical Education, pages 2755-2757.

In reading what McKenzie and several other professors of applied statistics identified as the core concepts in statistics, I must say … I concur. Listed below are the core concepts in applied statistics … the information that, in my opinion, simply has to be covered regardless of illness, snow days, or anything else that could interrupt a professor’s teaching schedule.

Variability: Students cannot understand the purpose of statistics unless they get the concept of variability. Within this, we can further talk about variability due to chance and variability due to effect. Included in the discussion of variability should be the difference between systematic and random variability. I would have to say that not a class period goes by without me spending at least a little time on helping students to focus on issues of variability (especially variability due to the individual differences of the subjects who just happen to be in our sample).

Randomness: Though I would see randomness and variability as being part of the same large concept, McKenzie’s work identified the concept of randomness as not only separate from variability but also critical for students to master.

Sampling Distribution: Along with Hypothesis Testing, the teaching of sampling distribution is considered to be one of the most complicated to teach.  I would concur, which is why I spend an entire class period just on a single activity with M&M’s to demonstrate the concept of sampling distribution. (Please see a prior blog entry for details on this tactile activity).

Hypothesis Testing: The sages and I spent the month of October and much of November discussing whether hypothesis testing is critical and, if so, how best to tackle the teaching of this complex topic. Not surprisingly, McKenzie identified hypothesis testing as one of the two most difficult concepts to teach in applied statistics (the other being the sampling distribution). Though there may be several published articles arguing that hypothesis testing is no longer a critical concept to teach, the individuals surveyed for McKenzie’s work certainly consider it to be one.

Data Collection Methods: Though I have said to my students more times than I can count, “the quality of our statistics is limited by the quality of our sample,” I must admit to being a bit surprised that this was considered critical by others, especially since, when I look at many undergraduate statistics textbooks, data collection methods are barely mentioned. Kiess and Green’s (2010) Statistical Concepts for the Behavioral Sciences, 4/e, certainly tackles the issue of data collection methods.

Association vs. Causality: This core concept makes me smile, as often when I meet someone for the first time and they ask me what I do … my response is met with one of two comments … “Oh, I hated statistics” or “Correlation does not mean causation.” It’s kind of like me recalling how to greet a person in German, a class that I had for three years and yet recall so little of. We, as applied statisticians, certainly engrave this concept into the minds of our students, but I’m sure most of you are like me, hoping students get more than a “pat phrase” out of our classes.

Significance (Statistical vs. Practical): This is a critical concept in applied statistics and one that is probably not mentioned in theoretical statistics classes. Sure, we delineate a mark at which we have to say these results are too extreme for us to attribute them to “chance” … but just because we found a statistically significant difference doesn’t mean it’s a difference that truly matters. In applied statistics, it’s not enough to understand how statistical significance works; students must also be able to interpret the results and judge whether the difference is practically meaningful. I must admit to not covering this core concept to the same extent I cover the others.
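The gap between statistical and practical significance is easy to demonstrate with large samples. The sketch below uses made-up numbers (two hypothetical groups of 50,000 with a true difference of less than a point on a scale with a standard deviation of 15): the z statistic is enormous, yet Cohen's d is tiny.

```python
import math
import random
import statistics

random.seed(2)

# Two hypothetical groups with a tiny true difference but a huge n.
n = 50_000
group_a = [random.gauss(100.00, 15) for _ in range(n)]
group_b = [random.gauss(100.75, 15) for _ in range(n)]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
sd_a, sd_b = statistics.stdev(group_a), statistics.stdev(group_b)

# z statistic for the difference in means: statistical significance.
z = (mean_b - mean_a) / math.sqrt(sd_a**2 / n + sd_b**2 / n)

# Cohen's d (difference in pooled SD units): practical significance.
sd_pooled = math.sqrt((sd_a**2 + sd_b**2) / 2)
d = (mean_b - mean_a) / sd_pooled

print(round(z, 1))  # well past any conventional cutoff
print(round(d, 2))  # a negligible effect size
```

With enough data, almost any difference becomes “statistically significant”; it is the effect size that tells students whether the difference is worth acting on.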

As I think of other “critical concepts,” they tend to be a bit more specific and fall under the larger concepts listed above (e.g., understanding what a standard deviation can tell us clearly falls under the concept of variability). I invite all of you to comment on what concepts, if any, are missing from this list.


Filed under Association vs. Causality, Core Concepts, Curriculum, Hypothesis Testing, Methods of Data Collection, Randomness, Sampling Distribution, Significance (Statistical/ Practical), Variability

Oh no … the semester is about to end … will we make it?

Hello All,

Firstly, yes, I know that I am about to make two posts on the same day. I actually wrote the earlier post weeks ago with the intention of posting it last week. However, this past week I was busy advising students for the spring semester … and before I knew it, the week had slipped away. Of course, that’s often how I feel about this time of the semester … the semester has slipped away. Will I have time to make sure the students will master all of the material designated in the curriculum as important? If I find myself running out of time, what should I skip or speed up on, knowing the students may leave my class with a conceptual gap?

Each semester is constrained by our having only a limited amount of time to ensure students reach the learning outcomes for the class. Yet, we have all had those semesters where we simply had to cut material … Spring of 2010 comes to mind, when my students and I lost a full week of classes (one day at a time) due to lots and lots of snow days.

So, the question I would like to pose to the sages and anyone else interested in commenting: for a first-semester undergraduate applied statistics class, what are the most critical student learning outcomes that have to be mastered?

I look forward to hearing from you, and next week, I’ll make my comments as to what I feel is critical for students to learn.

Leave a comment

Filed under Curriculum