Being a professor gives us the opportunity for a hard end every semester, with a built-in break to rejuvenate, lick our wounds, work on our scholarship, and reflect on the prior academic year.
I remain concerned about the quality of course evaluations. Many universities do exactly what my institution did. An assistant provost created a committee of faculty members and administrators, none of whom had a background in the creation of psychometric tools. She provided them with some constraints … ten items or fewer, it had to include an item on starting and stopping class on time, and it had to be free to administer.
A good measure, regardless of its purpose, begins with exactly that question … what is the purpose of this measure?
Yet, course evaluations are supposed to have multiple purposes.
- Provide administrators with information they can use to make retention/tenure/promotion decisions, often so that they have support for denying retention, tenure, or promotion. (Note: I spent a stint in administration … not all schools are like this, but enough are).
- Evaluate the quality of the professor.
- Provide feedback to the professor so he/she can make data-driven improvements.
You can’t really influence how the measure you are assigned to use has been crafted, what its purpose is, or what kind of information it will provide to you. However, even if you are at an institution with no truly useful measure, the act of seeking anonymous feedback from your students can help you be more reflective about your teaching and make improvements for the future. (e.g., http://www.chronicle.com/article/As-Summer-Sets-In-a-Chance-to/240203?cid=wcontentgrid_hp_2)
Here are some tips to do just that:
- Before you even look at the results, revisit what your goals for the semester were. Did you hope to improve student homework completion? Attendance? Make a list of those goals, and see how your efforts translated into student responses.
- If your school’s course evaluation is not providing you with useful information, ask the students open-ended questions yourself. Just make sure to administer the survey after grades have been posted, and give students a way to respond anonymously. Survey Monkey is an easy option. If you are working with a course delivery system like D2L, there are survey options that you can make anonymous. But in the end, ask the questions you want and need answered.
- Look over the students’ responses on both the course evaluations and the survey you created. Read them … and then think about what they said. Then read them again. If you get feedback that makes you sad or mad … put the course evaluations down for a couple of days, then come back to them.
- As you look through the responses … often after a couple of days of waiting to re-read them … evaluate them in the context of your course goals. Remember, this isn’t going to be like a t-test … a binary decision: reject the null/fail to reject the null. People aren’t simply divided into good teachers and sucky teachers. Instead, look for signs of when you were a good teacher, and when you were not.
- Growing up, there was a prayer in my kitchen … you may know it …
The serenity to accept the things I cannot change,
The power to change the things that need changing,
The wisdom to know the difference.
As you focus on the students’ responses … keep that prayer in mind. Some variables are out of your control, but others are within it. Your job, when you read over your course evaluations and reflect on your teaching, is to have the wisdom to know the difference … and to create some targeted goals to better reach the students you are serving.
I hope your summer is one of great reflection and rejuvenation, that you don’t have too many wounds to lick, and that you find some time to have fun with family and friends.