PrometuNews

Study Reveals Similarities in Online Hate Speech: Misogyny and Misandry


An innovative study reveals that online communities dedicated to hating men and women exhibit strikingly similar linguistic and behavioral patterns, suggesting a common toxicity in extremist digital environments.

OMNI
#online hate #misogyny #misandry #Reddit #hate speech

The research, published in the journal 'Scientific Reports', focuses on a comparative analysis of Reddit communities dedicated to hating men and women. The study sought to identify common language and behavioral patterns within these groups. The main goal was to determine whether the toxicity of these digital environments is independent of the victim's gender. Specific subreddits promoting extreme views on gender were analyzed, including misandric and misogynistic communities.

The methodology included linguistic analysis, content toxicity assessment, and the detection of emotions expressed in the texts. A computational tool was used to clean the text, and a deep learning model was employed to assess toxicity. Furthermore, machine learning algorithms were used to detect emotions such as sadness, joy, fear, and anger. The study aimed to unravel the internal dynamics of these groups and how expressions of hate manifest.
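The preprocessing step described above can be illustrated with a minimal sketch. This is not the study's actual tool, only an assumed stand-in showing the kind of cleaning typically applied to Reddit text before it is fed to toxicity and emotion models:

```python
import re

def clean_text(raw):
    """Minimal cleaning sketch (an illustrative stand-in for the
    computational tool the study used): strip URLs and common
    Markdown symbols, collapse whitespace, lowercase."""
    text = re.sub(r"https?://\S+", " ", raw)    # drop links
    text = re.sub(r"[*_>`#\[\]()]", " ", text)  # drop markup symbols
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    return text.lower()
```

For example, `clean_text("Check **this** https://example.com NOW")` yields `"check this now"`, leaving only the word tokens the downstream models analyze.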

To conduct the study, four subreddits with extreme views on gender were selected. Two of these groups were identified as misandric: a mainstream feminist subreddit and a radical one, the latter banned in 2020 for violating hate speech policies. On the misogynistic side, a men's rights subreddit and an involuntary celibate community were chosen; the latter was also banned for promoting hate and violence.

The selection of these groups allowed for a direct comparison of hate expressions directed at different genders. The data included text posts and comments generated between 2016 and 2022. A rigorous filtering process was applied to ensure that the analysis focused on gender-based hate.

The linguistic analysis compared the vocabulary used in the different subreddits. The twenty most frequent words in each community were identified. The results showed that the most common terms were used with similar frequency in all the analyzed groups. No significant linguistic differences were found that clearly separated the groups that focused on men from those that focused on women.
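The vocabulary comparison described above can be sketched in a few lines. The stopword list and the overlap measure here are illustrative assumptions, not the study's exact procedure:

```python
from collections import Counter
import re

def top_terms(posts, k=20, stopwords=frozenset({"the", "a", "to", "and", "of"})):
    """Return the k most frequent word tokens across a community's posts."""
    counts = Counter()
    for post in posts:
        counts.update(w for w in re.findall(r"[a-z']+", post.lower())
                      if w not in stopwords)
    return [word for word, _ in counts.most_common(k)]

def vocabulary_overlap(terms_a, terms_b):
    """Jaccard overlap between two top-term lists (1.0 = identical sets)."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b)
```

Comparing the top-20 lists of two communities with `vocabulary_overlap` would quantify the similarity the researchers report: a high overlap means the groups rely on much the same vocabulary.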

This finding suggests that the language used to express hate, regardless of the victim's gender, shares common characteristics. The study demonstrated that the language of hate transcends gender differences, showing a uniformity in the way hostility is expressed.

The evaluation of content toxicity was performed with a deep learning model trained to recognize hate speech. The majority of content in all four groups was classified as non-toxic, and nearly every community showed a bimodal pattern: a large peak of harmless text and a smaller peak of highly toxic text.
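The bimodal shape described above can be summarized numerically. Assuming each post has already received a toxicity score between 0 and 1 from a classifier, a simple profile captures the two peaks (the 0.2 and 0.8 thresholds are illustrative choices, not the study's):

```python
def toxicity_profile(scores, low=0.2, high=0.8):
    """Summarize a community's toxicity-score distribution as three
    fractions: harmless (< low), highly toxic (> high), and in between.
    A bimodal community has large 'harmless' and nonzero 'toxic' mass."""
    n = len(scores)
    harmless = sum(s < low for s in scores) / n
    toxic = sum(s > high for s in scores) / n
    return {"harmless": harmless,
            "toxic": toxic,
            "intermediate": 1.0 - harmless - toxic}
```

Profiles computed this way for each of the four subreddits could then be compared directly, which is the kind of cross-community comparison the study reports.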

Although misogynistic communities showed a slightly higher peak in extreme toxicity, the overall patterns were remarkably similar. Erica Coppolillo, a researcher at the University of Calabria and the Italian National Research Council, led this study, which sought to fill the knowledge gap about misandry in the digital realm.

The analysis of emotions expressed in the texts revealed that hate was the most frequent emotion in all communities, followed by anger. Men's rights communities and mainstream feminists showed strikingly similar emotional patterns. Involuntary celibates tended to express more sadness, while radical feminists leaned more towards fear.
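The emotional ranking described above amounts to counting predicted labels per community. A minimal sketch, assuming each post has already been assigned one emotion label by a machine-learning classifier:

```python
from collections import Counter

def emotion_ranking(labels):
    """Rank predicted emotion labels from most to least frequent for
    one community — e.g. the study's finding that hate ranked first,
    followed by anger, corresponds to those labels leading this list."""
    return [emotion for emotion, _ in Counter(labels).most_common()]
```

Running this per subreddit and comparing the resulting orderings is one simple way to see whether two communities share an emotional profile.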

At the individual user level, the mainstream feminist community exhibited the highest levels of hate, followed by the radical feminist community and the men's rights community. The study, titled 'Women who hate men: a comparative analysis across extremist Reddit communities', highlights the need for content moderation strategies that address hate speech neutrally.
Editorial Note

This content has been synthesized and optimized to ensure clarity and neutrality. Based on: PsyPost