Key Takeaways

  • Overall, male and female professors had similar numerical ratings and percentages of positive reviews.

  • The percentage of positive reviews was slightly higher during the COVID-19 pandemic than before it.

  • Common sentiments differed between male and female professor reviews. Reviews for male professors commonly included words related to their teaching, such as “hard,” “smart” and “understand,” while common words in reviews for female professors were “easy,” “sweet” and “interesting.”


Whether it be through word-of-mouth, Bruinwalk, or official course evaluations, professor reviews help some students decide which classes to take.

At UCLA, official course evaluations by students also factor into a faculty review process that occurs every few years, said Ashley Sanders, the vice chair of the Digital Humanities Program, who teaches multiple courses per quarter.

The large influence that student course evaluations have on faculty review processes prompts a question about the role of bias in the evaluations themselves.

A 2019 study by researchers at Cambridge University, part of a growing body of research, found that professor reviews are largely biased against women and faculty of color, who consistently receive lower scores and more negative sentiments in course evaluations than their white male counterparts.

“I think one thing to note is how especially in the United States, we have this patriarchal system that places men at the top and women at the bottom, and I think that’s partly why society can be pretty hard on women,” said Antonio Rodriguez, a fourth-year gender studies student.

Sanders said female professors whose personalities align with stereotypically male characteristics, such as gruffness, may have their ratings suffer as a result.

“I think there’s a higher expectation of a sense of care and well-being in the classrooms of women instructors,” Sanders said.

Below, The Stack explores the text and numerical ratings of Bruinwalk reviews in order to evaluate differences in professor reviews based on gender. We also compared reviews from before and during the COVID-19 pandemic to account for changes in learning and classes in general. We did not, however, control for differences in the departments or subject areas in which professors taught.

We predicted professor genders based on the frequency of pronouns identified in reviews and chose to focus on male and female genders only, as non-binary professors could not be as easily determined. Approximately 70% of professors were identified as male and 29% as female, while the remaining 1% were indeterminate.

Inaccurate gender predictions could have resulted from students misgendering professors in reviews or from analyzing gender pronouns referring to course teaching assistants instead of professors themselves.

Numerical reviews for male and female professors

The rating system on Bruinwalk allows students to rate professors from 0 to 5 on various elements of their class. The average overall ratings of classes were generally the same for female and male professors both before and during the COVID-19 pandemic. When comparing ratings between the genders, female professors tended to be rated slightly higher than male professors in all categories except workload.
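As an illustration, averages like these can be tabulated with a short pandas script. The DataFrame, column names and values below are placeholders, not The Stack’s actual dataset:

    import pandas as pd

    # Hypothetical data: each row is one Bruinwalk review with numerical ratings.
    # Column names ("gender", "period", "overall", "workload") are illustrative.
    reviews = pd.DataFrame({
        "gender":   ["male", "female", "male", "female"],
        "period":   ["pre-pandemic", "pre-pandemic", "pandemic", "pandemic"],
        "overall":  [4.1, 4.3, 4.0, 4.2],
        "workload": [3.5, 3.4, 3.2, 3.1],
    })

    # Average each rating category by gender and time period.
    avg_ratings = reviews.groupby(["gender", "period"])[["overall", "workload"]].mean()
    print(avg_ratings)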

Some students have found that ratings on Bruinwalk do not match their experience in the classroom. Professors with bad ratings on Bruinwalk can turn out to have excellent classes, Rodriguez said.

Written reviews

The word cloud below displays the most common words found in both male and female professor reviews, with the size of each word corresponding to how often it appears in reviews. (Note that word sizes are rescaled so that less frequent words remain large enough to read.)

The percent differences in words used between male and female professor reviews were fairly small. The most common words for both genders tended to be classroom related – “professor,” “homework,” etc.
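For reference, here is a minimal sketch of the word-frequency normalization described in the Methodology section, in which each word’s count is divided by the total word count for that gender’s reviews. The review text below is placeholder data:

    from collections import Counter

    def normalized_frequencies(reviews):
        """Return each word's share of all words across a list of review strings."""
        words = [w for review in reviews for w in review.lower().split()]
        counts = Counter(words)
        total = len(words)  # normalize by the total word count for this group
        return {word: count / total for word, count in counts.items()}

    # Illustrative usage with placeholder review text.
    male_freq = normalized_frequencies(["hard but smart professor", "hard exams"])
    female_freq = normalized_frequencies(["sweet and interesting professor"])
    print(male_freq.get("hard", 0.0), female_freq.get("sweet", 0.0))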

Below, we removed classroom-related words and filtered only for adjectives to examine how descriptive words were used differently in male and female professor reviews.
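The exact filtering pipeline is not published here, but one way to isolate adjectives with nltk, the package used in this analysis, is part-of-speech tagging. The classroom-related stopword list below is illustrative:

    import nltk

    # One-time model downloads (resource names may vary by nltk version).
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    STOPWORDS = {"professor", "homework", "class"}  # illustrative classroom terms

    def extract_adjectives(text):
        """Return adjectives (Penn Treebank tags JJ/JJR/JJS), minus classroom terms."""
        tokens = nltk.word_tokenize(text.lower())
        return [word for word, tag in nltk.pos_tag(tokens)
                if tag.startswith("JJ") and word not in STOPWORDS]

    print(extract_adjectives("The homework was hard but the professor was funny."))
    # ['hard', 'funny']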

A closer look at adjectives used in male and female professor reviews

The top adjectives for both genders relate to course difficulty: “easy,” “hard,” “fair,” etc. Looking past this subset, the personality-related words used to describe professors tend to diverge by gender. Ordering by the adjectives with the greatest difference in relative frequency between genders, male professors receive more reviews with words like “funny,” “generous” and “smart.” On the other hand, female professors receive more reviews with the words “sweet,” “helpful” and “social.”

“Female faculty will find themselves marked lower than male (faculty) on the same question, especially questions related to (the) presence of support and creating a positive classroom environment, simply because they’re being held to … a toxically-gendered different standard,” said Phil Chodrow, a visiting assistant adjunct professor in the Department of Mathematics.

These adjectives can be placed in the context of other words commonly used in reviews for female versus male professors. The chart below plots the relative frequency of words in reviews for female professors against their relative frequency in reviews for male professors.

Other words most commonly used for female versus male professors


Gendered words, such as “guy,” “man,” “female” and “lady,” have the greatest difference in relative frequency.

The line on the chart indicates where a word used equally often in reviews of male and female professors would lie. The heavy concentration of points along this line indicates that most words are used with roughly equal frequency for both genders.

Words that lie above this line, such as “interesting,” “loved” and “easy,” appeared more frequently in reviews for female professors, while words that lie below the line, such as “jokes,” “hard” and “best,” appeared more frequently in reviews for male professors.
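A chart of this kind can be reproduced in outline with matplotlib. The word frequencies below are placeholders rather than values from our data; real values would come from the normalized_frequencies() sketch above:

    import matplotlib.pyplot as plt

    # Placeholder (female, male) relative frequencies, e.g. per 10,000 words.
    words = {"interesting": (12, 9), "loved": (8, 6), "jokes": (4, 7), "hard": (10, 14)}

    female = [f for f, m in words.values()]
    male = [m for f, m in words.values()]
    plt.scatter(male, female)
    for word, (f, m) in words.items():
        plt.annotate(word, (m, f))

    # Words used equally often for both genders fall on the y = x diagonal.
    lim = max(female + male) * 1.1
    plt.plot([0, lim], [0, lim], linestyle="--")
    plt.xlabel("Relative frequency in reviews of male professors")
    plt.ylabel("Relative frequency in reviews of female professors")
    plt.show()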

“It’s important to recognize that this is not just about words used, but it’s actually about different axes of evaluation,” Chodrow said. Women are expected to be more caring, and thus words like “warm” may show up more often in reviews for female professors than for male professors.

STEM-related words such as “physics,” “chem” and “math” appear significantly more frequently for male professors, while words such as “art” and “English” appear more commonly for female professors. The differences may result from the higher percentage of male professors in STEM and female professors in the humanities; our comparison did not disaggregate reviews by department.

Effect of the pandemic

The Stack used sentiment analysis to categorize reviews as “positive” or “negative,” depending on the language used in each review.
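nltk, the package we used, offers several sentiment tools. Below is a minimal sketch with its VADER analyzer, assuming reviews are labeled by the sign of the compound score; our exact model and cutoff may differ:

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # lexicon used by the VADER analyzer

    analyzer = SentimentIntensityAnalyzer()

    def classify(review):
        """Label a review positive or negative by VADER's compound score."""
        score = analyzer.polarity_scores(review)["compound"]
        return "positive" if score >= 0 else "negative"

    print(classify("Lectures were interesting and she genuinely cared about students."))
    print(classify("Exams were unfair and the grading felt arbitrary."))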

For both male and female professors, about 80% of the reviews given for quarters prior to Spring 2020 were positive. This number went up to about 85% for reviews given during the COVID-19 pandemic.

While student evaluations can offer a reflection of how different methods of teaching work for individual students, they can also simply be a measure of whether a student was happy or a class was easy, Chodrow said.

To avoid the negative effect that reviews can have on hiring, Chodrow said, portfolios documenting how a professor has grown or adapted in response to reviews would be a fairer hiring practice.

Beyond career consequences, negative reviews can also take a personal toll on professors. Both Chodrow and Sanders said some professors choose not to read reviews because of the unkind and unfair content in some of them.

“A real person is reading this (review) and will be impacted by what you say,” Sanders said.

About the data

The data was collected from Bruinwalk, an online platform that students can use to browse and write instructor reviews. All reviews with numerical rating information were factored into the corresponding charts and analysis, while all reviews with textual opinions were considered in our sentiment analysis. The reviews date back to the fall quarter of 2000, although the majority were written within the past three years.

Official course evaluations from UCLA could not be obtained due to privacy reasons.

Methodology

Sentiment analysis was performed using the Python package nltk. All word frequencies were normalized by the total number of words used for the specified gender.

We used the frequency of male and female pronouns in all the reviews for a professor to predict the professor’s gender. Professors with no pronouns in their reviews or an equal number of male and female pronouns were labeled indeterminate in our dataset.
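A minimal sketch of this pronoun-counting heuristic follows; the pronoun lists are illustrative rather than the exact ones we used:

    from collections import Counter

    # Illustrative pronoun sets; the exact lists used are not published here.
    MALE_PRONOUNS = {"he", "him", "his"}
    FEMALE_PRONOUNS = {"she", "her", "hers"}

    def predict_gender(reviews):
        """Guess a professor's gender from pronoun counts across their reviews."""
        counts = Counter(w for review in reviews for w in review.lower().split())
        male = sum(counts[p] for p in MALE_PRONOUNS)
        female = sum(counts[p] for p in FEMALE_PRONOUNS)
        if male == female:  # also covers reviews with no pronouns at all
            return "indeterminate"
        return "male" if male > female else "female"

    print(predict_gender(["He explains concepts well.", "His exams are hard."]))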