
RateMyProfessors Doesn’t Make the Grade

Image from RateMyProfessors.com


How much faith can one have in a teacher-rating system that takes “hotness” into account? I had never even heard of RateMyProfessors.com until just a few weeks ago, during registration, when scores from one to five were being thrown around in conversation as legitimate factors in deciding which classes to take. I checked it out just to see how my second-semester professors stacked up. The numbers ranged from encouraging to a little scary. Little emoticon faces lined my professors’ names—one yellow with glee, one blue with misery and a couple that were green with…some sort of ambivalence, I think. But beneath these surface scores and cartoon faces were comments that rarely seemed objective or helpful. After surfing through the ratings of random professors, I quickly realized that the site is flawed and that taking it seriously would be a mistake.

Some of my favorite things to ignore on the Internet are user ratings. Usually placed next to reviews written by professionals, these appraisals of movies, books, video games and countless other things are often wildly biased and full of the nonsense that goes along with Internet anonymity. And this is part of the problem with RateMyProfessors. There are no set criteria for evaluating a professor, so we’re left with the opinions of nameless, faceless people, opinions often extreme enough to prompt a post in the first place. Most students seem to find the comments more useful than the numerical ratings alone, since they give a sense of why a given professor is loved or hated, but the comments themselves are still hard to take seriously, especially the negative reviews. The website does not allow “libelous comments,” but many are still ridiculous and cruel.

“Absolutely worthless. The day [this professor] retires will be the best thing to happen to Connecticut College in years,” was a choice example I came across. Another was, “I only wear slip-on shoes to [this professor’s] class because I would probably hang myself if I had some laces.” Comments like these are more common than you’d think, but thankfully it seems that most students understand not to take every rating at face value.

“You have to take it all with a grain of salt,” said Hannah Jeffrey ’14. “Sometimes you’ll see a super enthusiastic rating, and all the rest are just average, and you know to take the average ones more seriously.” Finding strength in numbers seems to be a popular way of landing on the most accurate opinion.

Alison Carpenter ’13 said, “I know some people are biased, but if ten people say the same bad thing about a professor then there must be some truth to it.”

But I would still disagree. I’m sure there are cases where a professor deserves all of his or her bad ratings, but I’m also sure that there are many more cases where the only students who bothered to rate the professor were the ones who had very negative (or very positive) feelings about the class. Those feelings come from many factors, teaching ability being only one among them. I found that many negative reviews were the product of a class’s difficult subject matter, and many positive reviews seemed to be the result of teaching gimmicks like singing or other activities that have nothing to do with teaching or learning.

Another question I asked students was whether they believed that professors checked their own ratings regularly and actually took them into consideration. Many students believed that professors did, but that doesn’t seem to be the case. Professor Joseph Schroeder of the neuroscience department said that he’s visited the site maybe six times since he started teaching, and doesn’t pay very close attention to it.

“If the site were fair and more objective, then it would be a useful tool. But at this point you only get the two extremes in the ratings.” Schroeder also mentioned his problems with the ambiguity of the rating system. “The three or four categories [easiness, clarity, helpfulness] are too vague to make much sense,” he said. “I believe there’s a section for easiness. What does a high score in easiness mean? That you’re easy, or hard or fair?”

Philosophy professor Derek Turner said he hasn’t checked his score in years and doesn’t think the site is particularly useful as a tool for students or teachers. “When you think about how a class went in retrospect, it’s really important to look at the whole picture,” he said. That kind of complete picture, he thinks, is hard to quantify on a website that generally relies on a very small sample size.

An idea I had while working on this article was to provide a similar but more useful resource for Conn students by making the evaluations that students fill out at the end of each semester publicly available online. I posed this idea to Dr. Turner, and he informed me that many schools do just that, but that the process would not be so simple at Conn. “As it stands now, each department has its own individual rating system and if the evaluations were all made public, it would be hard to make sense of them.”

But TJ Wellman of the religious studies department seemed to agree with this idea, and took it even further. “What I would like to see is a more public forum,” he said, suggesting not only that the evaluations be made public, but also that professors be able to comment on them, and maybe even that there be peer reviews between professors. As far as RateMyProfessors goes, Wellman agrees that it has flaws.

“The danger is that some of the professors that are most effective aren’t the most popular.” Wellman also agreed that most professors at Conn pay little attention to the site. “It seems to be a culture at some schools, where it is widely used, but not here. At a school this small, I think word of mouth is the most effective tool.”

All three professors said they do pay serious attention to the course evaluations returned at the end of the semester. These write-ups are important enough to be taken into consideration when a professor is eligible for tenure, and Wellman said he has used the evaluations to change his course material. Perhaps a public forum system, as Wellman suggests, would lead to even more productive changes in teaching style and course material based on student suggestion.

Through my interviews with professors, I learned that the faculty is currently considering standardizing the student evaluations. Although I don’t entirely agree with the execution of RateMyProfessors, I do think the underlying idea is a good one. If standardized evaluations come to fruition, Conn would be in a position to adapt the RateMyProfessors idea into a smaller, more controlled system with ratings that would be much more accurate. Students would be able to see clearly which professors are truly passionate and dedicated to teaching, and professors would be able to defend themselves against unfounded claims, or even adapt their future classes to student suggestions. The quality of a teacher is a delicate thing to quantify, and to do so properly requires a more controlled and complete method than RateMyProfessors.com has to offer. All we need is a larger sample size, a better method of evaluation and no more smiley-face-based ratings. •
