Sunday, August 12, 2012

Peer Feedback: The Good, the Bad and the Ugly :-)

Okay, in the post about the virtual trash can, I promised to say something about the student feedback system, since that is indeed one of the more remarkable and interesting features of the class. For some students, it is clearly a strong (STRONG) motivator in their participation (and that means not just getting feedback, but also their motivation to give feedback), but for other students, it is a very negative experience, and they have posted at the discussion board about leaving the class because of the poor quality of the feedback, and even the rudeness of the feedback.

This is a big topic again; I'm not going to be able to address all angles of it, but I want to start with exactly this inevitable element: the variability of the feedback. In a class with 5000 active participants (although I think we are down to closer to 4000 active participants during the second week), there is going to be a whole range of feedback, from the very zealous people who give feedback longer than the essay itself, to the grammar police (yes, they are everywhere), to the ill-informed grammar police (the single most active discussion that I have seen on the discussion board was about US v. UK spelling - the Brits were not happy about being told that they needed to learn to use a spellchecker), and on down to the "good job!" people with their two-word comments, and finally the people who commented not in English or who offered incomprehensible comments that had been translated by Google Translate (or similar), and, at the bottom of the heap, the sadistic comments ("your essay is bullshit," "you are a complete idiot," "I cannot believe I had to read this crap," etc.). Oh, and don't forget the vigilante accusations of plagiarism based on misinterpretation of plagiarism-detection software (yes, someone was accused of plagiarizing... from their own blog).

So, what kind of data is Coursera collecting about the efficacy of this process? None. What kind of feedback are people getting on their feedback? None. What kind of guidelines and tips did we get on offering feedback? (Almost) none. Given that this is a skill, and a skill that many people have not had to use in the past, I think we would need a LOT of tips and guidelines to help with that, along with feedback so that people who are just now developing this skill can estimate how well they are doing. My simple proposition (received with surprising assent at the discussion boards when I have suggested it... surprising because an awful lot of people are really hostile to any criticism of Coursera there) is that there should be "feedback on the feedback" - a simple 1-2-3 system, like the system we use for marking the essays, to send feedback back to the people about their comments. The 3s would be for those people who are totally knocking themselves out on the feedback and doing a really super job (god love 'em)... most of the feedback responses would probably be 2s (people would have to decide for themselves how they feel about the very large group of "good job!" feedback providers)... and some of the feedback would be 1s, as a way to let people know that something went wrong. Now, not all those 1s would be reliable (it could reflect something wrong on the receiving end of the feedback of course) - but I think there would be enough 1s both for individuals to benefit from that needed feedback and also for Coursera to ponder some intervention. If someone gets all 1s for several weeks in a row, something would have to be wrong, right? I would also suggest a 0 for inappropriate feedback; to tell someone that what they have written is "bullshit" is really unacceptable, at least in my opinion.

Plus, I would really really really like for all those people knocking themselves out to provide truly good feedback to get some 3s right back at them. Since the feedback is anonymous, there is no way to even say "thank you" for much-appreciated comments.
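Just to make the proposal above concrete, here is a minimal sketch of how a "feedback on the feedback" tally might work. Everything in it is my own hypothetical illustration - the labels, the data shapes, and the flagging threshold are assumptions for the sake of the sketch, not anything Coursera actually implements: each piece of peer feedback gets a 0-3 rating back, and a reviewer whose ratings stay at 1 or below for several weeks running gets flagged for a closer look.

```python
# Hypothetical sketch of "feedback on the feedback" - all names and
# thresholds are illustrative assumptions, not Coursera's actual system.
from collections import defaultdict

RATING_LABELS = {
    0: "inappropriate (abusive feedback)",
    1: "something went wrong (unhelpful or off-base)",
    2: "fine (the 'good job!' range)",
    3: "outstanding, genuinely helpful feedback",
}

def flag_reviewers(ratings, weeks_in_a_row=3):
    """ratings maps reviewer -> list of (week, rating) pairs.

    Returns the reviewers whose average weekly rating stayed at 1 or
    below for `weeks_in_a_row` consecutive weeks - candidates for the
    kind of intervention suggested above.
    """
    flagged = []
    for reviewer, entries in ratings.items():
        # Group this reviewer's ratings by course week.
        by_week = defaultdict(list)
        for week, rating in entries:
            by_week[week].append(rating)
        # Count consecutive weeks with an average of 1 or below.
        streak = 0
        for week in sorted(by_week):
            avg = sum(by_week[week]) / len(by_week[week])
            streak = streak + 1 if avg <= 1 else 0
            if streak >= weeks_in_a_row:
                flagged.append(reviewer)
                break
    return flagged
```

A reviewer who collects nothing but 1s and 0s for three weeks straight would show up in the flagged list; the occasional unfair 1 from a disgruntled recipient would not, which is the whole point of looking at streaks rather than single ratings.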

Okay, time has run out - there is more I should say here, but dinner calls... and I'm not sure I have the oomph left to say anything more about Coursera today. I wish them well, I really do... but I'm also surprised at how little they seem to have learned from all the things that are already going on in the world of online learning. Sadly, I am starting to think that is because they really are not interested in the radical newness of what online learning can offer. Instead, they want to replicate those big lecture courses at the elite universities... along with all the inherent weaknesses of those courses. Yes, this peer feedback is better than the single and perhaps hasty reading some students get from their overworked and underpaid TAs in those cattle-call lecture courses, and it certainly beats robograding. But couldn't it be even better? Absolutely yes. This peer feedback is the most social and the most intriguing aspect of the Coursera course for me, and I really hope they will improve on it in future iterations of the classes!


  1. I am taking this same Coursera course. I have been enjoying reading your posts on the subject, and I would agree that the students have mostly been left to their own devices, without direction.

    Peer feedback is an interesting concept, but in this course it seems to have failed. Firstly, I would argue that the failure comes from us not truly being peers. We are not upper-division English majors at a single university who would know the expectations of the system. While I have a degree in the humanities, my background in literature is strongly related to interpretation through a cultural and historical lens, not symbolism. Many of our fellow students have no experience in humanities courses, leaving them totally at sea.

    Secondly, the directions for writing and evaluating the essays have been minimal at best. Last week I read several essays that felt more like a rehashing of "The Annotated Alice" or miniature research papers. This is not a technique I would employ for such a short essay, but where does it fit within the rubric of the course? I also read one that felt as if the Grimm lecture had been transposed to Alice. All seemed to suffer from the lack of direction.

    Thirdly, the scoring system is entirely inadequate. The score of one is meant for a failed essay, as in turning in spam. Giving such a score to an essay whose content was at least on topic is entirely demotivating. I had this experience personally: by random luck, one of my evaluators was hampered by limited English comprehension, and I fail to understand the other's interpretation entirely, as it consisted of a remark that I must not have liked the book and did not care about the essay. While the actual grade is entirely meaningless, I spent far less time on the "Dracula" essay, with a "why bother" attitude, since I would have received equally valuable feedback by turning in a copy of the course syllabus. (The positive feedback was vague, along the lines of "well argued" and "excellent essay," and provided no help as far as writing technique or content.) As a result of my own experience and from reading the essays, I have relegated the score of one in the content area to non-performed or incomprehensible essays. With the complete lack of guidance on content, and with a background focused on a different literary approach, I haven't the tools to aid people in the content area.

  2. Natasha, thanks so much for your comments - I can definitely see what you mean about the peers not being exactly peers. You are not alone in your background, and I've even met some people who are like me (older, teachers, etc.) - there are so many different backgrounds among the students I've met, and that is just a tiny percentage of the class overall. What surprises me is that Coursera has not gathered any actual data from us to find out just who we are, and to see what interesting correlations they could find in the patterns of people and what they are doing in the course. Given that 'data analytics' is a big deal in online education these days, and that some kind of baseline about WHO we are would be crucial in interpreting that data (just who is a "peer" to whom, for example), I cannot believe that Coursera is letting all of that slip by without any effort to gather the data from us.

    In my own classes, the feedback system I use is so utterly different from this class that it's really interesting for me to watch how this is working: students do not "grade" each other in my classes (I find the whole idea of grading in general an awful business and would never make students grade each other), they are not expected to give comments back on technical writing issues (as so many of them have problems, sometimes very serious problems, proofreading their own writing - they are not English majors or even humanities majors for the most part), and the comments are very people-oriented, with people working together throughout the semester, interacting with each other and getting to know each other person-to-person through their writing and other projects (the total opposite of anonymous!).

    The larger issue you raise about literary theory has been something of real interest to me in the course also, and not something I anticipated. I had not expected the lectures to take what I consider to be a kind of marginal approach (I'll confess that a lot of it does leave me cold, just speaking personally) - given the general background of people in the class, I had assumed it would be focused on literary and historical context, very broad, along with some overarching themes that we would look at from book to book. Last week, I got an essay comparing Alice to a sperm, because she goes down into that long, dark rabbit hole. I really wanted to ask the person if they would have offered up that way of looking at Alice before having taken this class! I also pushed back in my feedback, I will admit - since there was no larger argument about Alice and masculinity, fertility, or anything that would be entailed by the comparison of Alice to a sperm, I really didn't know what the point was. Although it was definitely memorable! Not something that would ever come up in my class - although people have chosen some pretty unusual inanimate objects to use as storytellers in their own stories, ha ha.

  3. I agree with both of you concerning the problems in assessment for the Coursera course. I keep recalling "Flatland" and the idea that a 2-dimensional figure cannot understand a 3-dimensional figure except as a slice in 2 dimensions. There is a real problem when untrained students, who lack experience with and appreciation of the elements of an essay, are asked to assess matters of which they are not even aware.

    Although I too am opposed to peer assessment with marks, I agree there is value in receiving verbal feedback. However, I would rather that a more detailed list of questions or considerations be provided to help students frame valid feedback. And there has to be some way to flag plagiarism and other inappropriate responses.

    I also continue to return to the concept of "peer" and I think that "peer" implies more commonality than just being together in a MOOC of many thousands. Many of the difficulties we have encountered in this course arise from the differences in education/language/training ...

    However, as you have well expressed on a number of occasions, Laura, the biggest problem is likely the unbelievable lack of responsiveness and communication by the administration for this course.

    I've just signed up for MOOC MOOC and will see what I learn in that course.

    1. Teach College, do I know you over at Google+...? There are a bunch of people I know doing the MOOCMOOC, and if this were not the first week of school for me, I would be doing that for sure - I am one of Instructure's biggest fans and am really excited that a lot of people will now get to know more about Instructure and what they are doing. From what someone was telling me last night, the MOOCMOOC is just night-and-day compared to the lack of really good social networking in the Coursera course.

      The Flatland comparison is a great one! (I read all of Brian Greene's physics books last year and he makes great use of Flatland, too!) Also, I really agree that people would benefit from tips and guides to giving good feedback - those who are already good at that could just ignore such stuff, but the people who are truly uncertain would really appreciate it, I think.

      I am going to be very depressed if there is another outbreak of plagiarism and plagiarism reports this week. I wonder if anybody has even let Prof. Rabkin know about all this...? His teaching asst. says she is watching the discussion boards but from what I can tell she rarely (if ever?) participates. Hmmmm....

    2. Laura, my name is Susan Lieberman and I am a professor (law and accounting) at Humber Institute of Technology & Advanced Learning (a community college) in Toronto, Ontario, Canada. I have checked out your blog at Google+. I'm just starting a full-year sabbatical and thought I'd like to explore the world of MOOCs. The Coursera experience has been uneven to date, for reasons discussed throughout your blog entries. As you pointed out, the discussions in the Coursera forums have been particularly disappointing and of limited value. I think of the entries as falling into a giant cesspool. There have been some insightful and helpful entries, but they are so hard and time-consuming to find. I've very much enjoyed your comments on your blogs and in Coursera.

    3. Nice to meet you, Susan! If you are looking for people to connect up with and share ideas about teaching and such, I cannot say enough good things about Google+ ... and now with the advent of MOOCs, it's really great being able to compare experiences, since a lot of folks I've met at Google+ are taking advantage of the openness of the MOOCs to participate in different ones and compare notes. That's how I've realized that the Coursera courses vary from instance to instance, and that there are some truly connectivist MOOCs out there that don't start with the premise of modeling what they are doing on a university lecture course. Anyway, if you have the time and inclination, I think you might enjoy some of the teacher talk going on over at Google+. For me, it's been a great way to learn new things as a teacher, esp. as a teacher working online and very interested in technology! :-)

  4. I'll bet my feedback is longer than some essays. :)

    I chose to address one part of the peer review in my next post. But I needed to write about it in pieces--first receiving the review and next will be giving the review. The latter is likely where I'll address the issue of plagiarism.

    1. I really like the way you are setting up your blog entries so that there are clear, specific points for people to take away! I'll confess that I am using my blog more as a kind of therapy, just to get things off my mind that are nagging at me. This morning, for example, I got that email with the pseudo-data in it, comparing my peer ratings to the numbers overall... argh! I don't really have so much time this week for writing about Coursera but that email irked me enough that I may have to write something about it - just for therapy! :-)

  5. My guess (from the perspective of having taken a connectivist MOOC on learning analytics) would be that Coursera is collecting everything, mining any and all data, even the seemingly trivial, for course-management analytics - maybe a pinch of learning analytics - but not using any of it to make changes that would benefit us here and now.

    1. The thing that gets me, Vanessa, is that it is USELESS data. For example, this morning I got an email full of (pretend) data about the number of 1s, 2s, and 3s I've given out, compared to the rest of the class... but if they have not found out who in the class is a high school student (or younger), someone with some college, someone with a B.A., someone with an advanced degree, a native speaker, a non-native speaker with X level of fluency, then how can they interpret those numbers? It's like they have some sacred faith in the "10-30% of ratings will be 1s" formula... without even being curious as to just who is giving those 1s. I'm giving out more 1s than other people, and that's really not surprising; I'm giving out 1s to essays that, if the student were in my class, I would have to ask them to revise from scratch, because they have not actually written something that can count as a college essay (totally leaving aside the whole "upper-division college" level which the course syllabus claims). If Coursera really wants to interpret my scores, they probably need to know something about who I am, instead of just sending me an email that, I guess, is indirectly suggesting to me I should not give so many 1s...? Because... of the numbers? I find it baffling. This is not about inter-rater reliability; it just looks like bad application of a formula to me, esp. given the sheer randomness factors when we have only read 8 essays out of the thousands being submitted.

    2. Coursera does indeed gather our data, but only once the class has ended. A week after I finished my CS101 and Introduction to Cryptography courses (I defaulted on this one), I got a survey form with questions ranging from the general ("who am I, how old am I, where do I come from," etc.) to my comments, suggestions, and critiques of the course, then checkboxes to rate the lecturer's ability and attentiveness, class quality, and forum and video usefulness.

      In short, they do try to be better next term. We're like beta testers by enrolling in the first term. Consider this your chance to make the class a better one in the future. Write everything you've written here in that form.

    3. Lisa, I've read that the dropout rates for these classes can be as high as 80% or 90% ... if they wait until the end of the class to gather feedback, they are going to miss out on the most important information, from all the people who left the class for some reason!

  6. When I signed up for this course I also registered for the HCI course that will be offered for the second time in September, and the difference in detail/focus for both the grading rubric and the way the assignments are presented is staggering! Granted, different subjects require somewhat different approaches, but I wonder how much Coursera can influence the focus and the way the individual universities/professors offer the courses in this online format.

    (interestingly, the HCI course is the same one that students of the University of Helsinki can get 'official' credit for - it would be interesting to know what kind of internal assessment they do on top of the Coursera peer assessment)

    1. Thanks so much for that information, Elisa! If Coursera wants to succeed, I think they really do need to influence the focus as you say. When I read the Coursera-Michigan contract, I was really disappointed at the lack of attention to this question: it seems like Coursera is just setting a very very very minimal standard, and while this Fantasy-SciFi course meets their standard, I don't think it is something that is going to merit serious attention as being comparable to a college-level course, which is how Coursera is marketing itself (the elite universities and their professors, etc.).

    2. Peer assessment is not always there. Every class in Coursera has a different grading system. So far I've encountered peer assessment, multiple-choice quizzes, programming exercises, and scripting exercises. That HCI course might not use peer assessment, but short answers or multiple choice instead.

    3. University of Helsinki has already given credit for a couple of earlier courses (AI and SaaS) where you had to present your output and Coursera certificate to the teaching staff (who also took part in the course to be able to evaluate its quality) to get the credit. The SaaS course was categorized as "optional studies" with pass/fail grading.
      These being programming courses makes the assessment easier.

    4. Yes, Anonymous - the whole reason I enrolled in this course was that it is a humanities course, quite different from a STEM course (and I think I read that Udacity simply is not going to offer humanities courses at all - unlike Coursera). I think with a portfolio-based approach to the writing, it would be possible to have a final product that could be assessed by external reviewers. The current format, though, which consists of ten unrevised, disjointed, very short essays, does not really bring out the best in the students as writers and would also not be very amenable to review.

  7. The problem is you don't know who is judging you, or what authority they have on the subject.
    It could be a 10-year-old, for all you know.
    To me, peer assessment is bad because of that.
    In a normal class, the teacher gives real grades, unlike peer assessments (I have never experienced a teacher giving a totally wrong mark), because the teacher is an authority on the subject.
    Peer assessment is a nice idea, but it doesn't work in practice.
    And there are no tools for complaining or flagging. It's annoying when you get a grade from someone who hasn't read the text or has little understanding of the subject - when it's clear that the things they claim are missing are in fact everywhere in the text. It just baffles me how ignorant people can be... it's a face-palm moment.
    I will still pass the course, so it's not a big deal, but it's very annoying when people can't even read...
    So I've come to the conclusion that peer assessment of writing in a MOOC is not to be taken seriously (not the way you would take a grade from a teacher or other authority).
