ds106: The Spring 2012 Course Evaluations

Last Fall, in preparation for teaching ds106 this Spring, I blogged about a number of trends in my Spring 2011 course evaluations and how I wanted to work on my relatively poor scores on at least two elements of the course: “Clear Criteria for Grading” and “timely return of graded materials.” (We got course evaluations back much, much quicker this year thanks to the course evaluation system moving online.) Turns out, despite the best laid plans of mice and menaces, my numbers in both categories went down slightly. For example, “clear criteria for grading” was 3.95 out of 5 last year and 3.93 this year, which kinda pisses me off because I spent a lot more time framing expectations this time around. That said, grading in ds106 is very much part and parcel of one’s engagement in the work happening around the community, and despite how much I push this, grading always comes as a surprise to students at the end of the semester.

The other area, “timely return of graded material,” was an average of about 4.25 out of 5 last year across both sections; this year it was 4.13 for my larger, combined class. I am not as surprised about this because I pushed off our weekly meetings and didn’t send letter grades out after we met. I think part of my problem with both these categories is that I am torn between totally abandoning the traditional notion of the letter grade and still making sure students have a clear sense of where they are throughout the semester—which is what I do three times a semester in individual conferences. This approach takes a ton of time, but I figure the more intimate, focused feedback on their progress and work is far more useful than a hard and fast grade. But I may be wrong on this count. I ride them regularly in their comments and through email from week 2 on so they don’t fail miserably out of the gate (which works very well, and I highly recommend it—particularly for online students); the feedback is regular and sometimes brutal 🙂

As for the other categories for instructor evaluation, there were some where my scores increased significantly and others where they fell marginally. For example, this semester the class “met on schedule” was up .18 points, from an average of about 4.62 to 4.8. Not sure what to say here; we met pretty regularly last year as well. Also, “presented material in an organized fashion” was up about one-tenth of a point, from an average of about 4.23 to 4.33—I think this has a lot to do with all of Alan Levine’s amazing framing of the assignments each week—I benefitted similarly from teaching alongside Martha the previous year as well.

I’m a little sad to report that my enthusiasm score went from an average of about 4.93 to 4.89—I wonder if this is a case of a bit of ds106 ennui. This is most distressing for me; enthusiasm is all I got—it’s my life! 🙂

You can see all of the Instructor Evaluation details below:

The second part of the evaluation focuses on the course experience, and I was pretty impressed with the results this time around because all six categories increased, and the separation of the course experience from the instructor is promising for ds106. My larger takeaway from these scores—which I fully understand are subjective, unreliable, and potentially earth threatening—is the idea of the course and its usefulness transcending any one instructor, which is nice.

“I acquired substantial knowledge and/or skills in this course.” 4.57 –> 4.67
“I found the instructor’s feedback useful.” 4.63 –> 4.85 (this is odd given how much I got tagged on feedback in my evaluation)
“I was encouraged to ask questions about the course material.” 4.79 –> 4.81
“I found the instructor to be helpful in clarifying difficult material.” 4.68 –> 4.85
“I was encouraged to reflect critically on course content.” 4.69 –> 4.88
“I found the instructor to be available outside of class for help (e.g., during office hours, special appointments, via e-mail, telephone).” 4.78 –> 4.84

Across the board the experience of the course ranked higher than last year, and it also had a 4.82 overall score, which is significantly higher than the UMW average of 4.38.


The only thing I did radically differently this semester was to provide every student with the option to take the class face-to-face (f2f) or entirely online. Of the 34 who finished the class, 19 took it f2f and the remaining 15 took it fully online (sometimes a few of these online students came to class for the fun of it). So roughly a 55/45 split when given the option to take it f2f or online. What was unfortunate about this go around, unlike last semester, is that I can’t distinguish the feedback from f2f and online students given they’re all in one section—last semester there was an f2f and a separate online section.

That’s another point to consider. I had about 10 to 12 more registered students in a single section of ds106 than I usually do (last year I was lucky enough to get paid for two sections, but that was “corrected”), so I decided to overload to see how that would go, and in retrospect it wasn’t that much more work because I did a good job of keeping the onus on them to be part of the community, comment, interact, and basically forge their own relationships within and beyond the class. Placing the responsibility back on the students and following up with them on it periodically is rather effective for managing a very mild overload of students in my experience (I don’t imagine it can or should scale much beyond a handful of students).

More and more I think the distinction between taking ds106 online and f2f is arbitrary in terms of the value of the experience, and for me that is a radical development. ds106 is a web-native class; it is born of and on the web, making the f2f experience potentially interesting and reassuring, but at the same time vestigial. I like the f2f classroom; I like getting to know students there and having the physical space to congregate in. That said, based on my own experience, I’m pretty certain it would be erroneous to privilege it in this regard.

Finally, here are some anonymous student quotes from the evaluations that were pretty fun:

It doesn’t get much more awesome than this one—attack the evaluators!:

Attempting to evaluate my teacher in this manner is flawed and unacceptable. To pretend that my teacher, Jim Groom, deserves to be evaluated according to a quantified list of predetermined measures would be an assault to his viability as an instructor, an advisor, and a mentor. Although you wrap up this ‘course evaluation’ with a pretty bow (telling me my input is important, that these evaluations matter, etc.), I absolutely refuse to reduce myself to participation in your quantitative course evaluations and ‘outcome assessments.’ Do you really believe I can summarize my teacher’s abilities with a series of “5s,” symbolizing his extraordinary capacity to inspire students to learn? I sincerely hope you do not.

And this one ain’t so bad either:

This has been the most work intensive and the most rewarding course I’ve taken at UMW. It is unlike anything else I’ve taken before, and I’ve gained so much valuable knowledge about Web2.0 and online presence, not to mention the increased confidence in my creative and technical abilities. I will use the skills I’ve learned in this class for many years to come, no matter what profession I end up working in.

The intensity of the work is a recurring theme in the evaluations (which makes me happy; get your money’s worth, folks):

Overall I really enjoyed this class. At times I thought the work load was a little bit much for a 100 level class, but Jim was very understanding of our busy schedules.

Where is it written that all 100 level classes are less work and by default easier? This is a fallacy of some kind, right?

The work was tedious most weeks, but the hard work was well worth it. I learned so many useful skills in this class. Allowing us an extra week for video was truly helpful–I recommend doing that again.

This is a great class that overloads right near the end, starting with the video weeks. Seriously, those burnt me out hardcore. The radio thing was already a TON of work, then 30 stars of video, THEN 15 remix stars ALL WHILE I’m supposed to be doing final projects, in addition to my OTHER CLASSES and work and a social life? C’mon.

This course has been very enlightening in terms of things that can be accomplished through the web as a medium. However, the class is so fast-paced, if you don’t have the time for it in your schedule, it is advised to NOT take it. It requires so much time and energy through some parts of the semester and can be difficult to juggle the coursework amidst work for other, more crucial classes! I love the idea of the course though and it has definitely been an interesting experience!

And the money quote which is at the heart of ds106:

I LOVED this class. It really opened my mind to my potential creativity


14 Responses to ds106: The Spring 2012 Course Evaluations

  1. Scott Leslie says:

    Well done, though I did notice that this year they removed the question “I was certain the instructor would suddenly pull out a flame thrower and flambe our entire class while whistling the theme from Bonanza” which is too bad, as that would have brought those numbers up for sure!

  2. Tim Owens says:

    Oh my god that first student quote is amazing!

  3. Lisa M Lane says:

    I have also begun publishing my student evaluations on my blog and I laud you for doing it – it is extraordinarily useful to share in this way, for you and for everyone. Bravo.

    But I won’t allow you to fret about the “enthusiasm” thing. You are talking about a different set of students each time, with different perceptions and interpretations of the word. These “downward” differences are so minor as to be statistically insignificant considering the difference in the group.

    Anything that concerns you should be analyzed within a framework of which grades the students received at the end. I’ve never understood why such evaluations do not ask the student the effort they put in or the grade they were getting at the time of evaluation. I add those questions to every evaluation, and consider results within that context.

  4. Alan Levine says:

    I just got mine, Jim, and got similar dings for the grading/expectations. Frankly I found my students forgot about the syllabus where we spelled it out (like the ones who just never did any tutorials or assignment submissions). If I had to evaluate back, I’d toss a few pebbles for their obsession with the grade and not the act/art.

    I’m with Lisa, blogging your evals is a power approach as an open educator, and I’ll be copying you shortly. There were a few critical comments, but they were completely outweighed by the comments of students who talked about how much value they got from the experience.

  5. Mike C. says:

    So first let me say that there’s pretty ample research that student evals are close to worthless at gauging learning outcomes. That said, they can be a predictor of persistence to a degree, and that part counts. More importantly, many tenure track professors would be skeptical of teaching in a way that significantly drove down evaluations, which are one of the only measures of teaching we collect (Hooray for the consumer culture of education….).

    I think at a class size of 27, your year to year variations that you are seeing are not likely to be meaningful, even in the informal way we demand of things like evals (they look to me small enough they could even be the effect of a single person). I’d also say that my guess is that the students are generally harder on the concept of grading expectations than some of the other stuff anyway — so DO NOT compare scores on them to the other qualities. It’s not a valid comparison. To get at what you want to get at, you want to look at the averages of these qualities across the college (or better yet, similar courses) and see if you’re above or below the base rate.

    I’ll give you an example — with cable TV customers are generally happy with reliability of service, pretty happy with the channel offerings, unhappy with the cost, and apoplectic at the thin set of offered packages (You mean there’s no package that has only AMC, Showtime, BBC America, C-Span, and FOX Soccer??!?! Fascists!).

    You take over the cable company, and put out a survey and find that customers HATE your packages. This is your company’s weak spot, right?

    No, not hardly. The only way to tell if it is your weak spot is to compare your ratings on this to ratings of other companies on this issue. Without some baseline data of how customers normally score this, the 1 to 5 scale is pretty meaningless.

    I mention this because I think this is great data to present on your ds106 road shows, but to make it really meaningful you’d want item-to-item comparisons of each result to either UMW as a whole, or to a meaningful subset of UMW courses (say Gen Ed 100 levels, or media-literacy-focused classes). I think that’d be a fascinating comparison, and it’d get to the heart of student perceptions about your course in a way that this might not.

    This is a great post though, and it looks at first glance like students are reacting really well.

  6. Mike C. says:

    Oh, and the student written comments are gold.

  7. Reverend says:

    @Scott,
    I had the flamethrower question removed because unlike the rest of these absolutely neutral questions, I thought that would be a bit biased. How many other faculty members have flamethrowers?

    @Tim,
    I wonder if you can guess who that was 🙂

    @Lisa,
    I saw you blog your evaluations last semester, and I loved the whole thing. I have a whole ‘nother set of data I collected from the students more informally, and I plan on doing something similar with that shortly. I agree the differences are marginal; what I was hoping for, though, was a bigger jump on the grading stuff—but I totally understand these scores are marginal, and as Mike alluded to, unrelated to outcomes.

    @Alan,
    Yeah, the grading thing pisses me off a bit because we spend a ton of time commenting and meeting with them so that the grades become less of an albatross and more of a part of being there. I want to figure this out better over time, and I wonder if the way is to start everyone with an A and then ding them as they fuck up royally. Though that would be just as much bookkeeping. I want to crowdsource the grading 🙂

  8. Reverend says:

    @Mike,
    I am really glad you commented on this post because I am flailing a bit here with the stats. First, I have no delusions that these numbers necessarily measure what students learned, but I am hoping they might begin to reflect an experience of sorts. I am interested in this for two reasons: 1) does an alternative model like ds106 potentially provide a better experience; 2) can we start examining evaluations for online/hybrid classes versus those of f2f. Given those two criteria, I think the first is accomplished along the lines you said: find all 100 level courses across the school and see how ds106 ranks. I am happy to see that across all courses ds106 is at 4.82 vs. 4.38—is that kind of what you are suggesting? And your point that drilling down into 100 level courses might be more telling is a good call, thanks for that.

    Also, I figure the variations are slight enough for most things not to matter, and the enthusiasm thing was a joke along those lines. I guess my ulterior reason for blogging this is to showcase the fact that the grading scores are still the lowest consistently over time, and I understand why culturally, given how we run ds106—but it still frustrates me given what I commented to Alan above.

    All this said, I am dreadfully aware this may not be at all useful in its present state, which is why I greatly appreciate you commenting here so that I can turn this into a compelling narrative of why a 100 level course like ds106 is #4life 🙂

  9. Mike C. says:

    I think the numbers are *hugely* important, because students will not sign up for classes they hate or feel are unfair (or unpredictable in bad ways), and faculty won’t teach in ways that earn them scathing reviews that end up either in the tenure discussion or, if particularly bad, on the Dean’s desk. So absolutely this is important stuff.

    What I’m suggesting, if possible, is that you see if you can get stuff like the initial six comparisons for courses more similar to yours — it may not be possible, but your course is similar in grading to a writing intensive course. And (in my experience) students have generally unreasonable demands for grading in writing intensive courses. Maybe you get that from the department comparison, but maybe you don’t. It might make more sense to compare yourself to the means for the English Comp program, etc.

  10. Mike C. says:

    (Incidentally, when I made my first comment I didn’t see that you actually had college wide data for both sets of numbers — I apparently scrolled past the third scan, so if part of it is a bit confusing, that’s why)

  11. Cris says:

    My favorite course eval of all time was “She never told us nothing.” So much for constructivist teaching.

    Beyond the stats, just the act of sharing your critical reflections on your evaluations is huge. I’m inspired to blog mine next semester.

    I definitely think this is a strategy we need to share in the Program for Online Teaching, Lisa.

    Also, do most universities offer the option to add your own questions to the formal evaluation? I’ve found that really helps so I can get more objective data than something more informal that I solicit.

    Again, congratulations, Jim and Lisa for not only your open scholarship but open teaching.

    • Reverend says:

      Cris,

      I am with you and Lisa, and I am going to blog the follow-up questions I asked students during the semester when I dig myself out here sometime soon.

  12. Pingback: Evaluating the ds106 Evaluations - CogDogBlog

  13. Pingback: Fall 2012 ds106 Course Evaluations - CogDogBlog
