Today’s students are digital natives: they grew up with the internet and modern technology. In other words, the online world is a very real and integral part of their daily lives. This includes social media platforms like Instagram and TikTok. Over the course of the pandemic, teens increasingly turned to social media as a way to keep in touch with the friends they no longer saw in person. In one survey, 63% of parents reported that their children spent more time online during the pandemic. The internet offered students a much-needed social outlet and a way to interact with peers. But what happens when that interaction turns into cyberbullying, harassment, or hate speech? Here’s what schools need to know about hate speech on social media.
How Do Social Media Companies Define Hate Speech?
Meta, the parent company of Facebook and Instagram, defines hate speech as “a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.” Moreover, Meta defines “attacks” as “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.” Additionally, Meta does not allow stereotypes that cause harm, such as “dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence.”
TikTok also addresses hate speech in its community guidelines. TikTok does not allow “hateful behavior,” including “content that attacks, threatens, incites violence against, or otherwise dehumanizes an individual or a group” on the basis of protected attributes. These attributes include gender identity, race, sexual orientation, and immigration status.
Lastly, Twitter also has a policy against hate speech. Twitter does not allow users to “promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” If an account exists solely to harm any of these groups, it can be suspended from the platform.
How Big of a Problem Is Hate Speech on Social Media?
According to Statista, from the start of 2020 to the third quarter of 2021, Facebook removed 160.4 million posts containing hate speech. In the second quarter of 2021 alone, it removed 31.5 million instances of hate speech. In a 2021 survey, the Anti-Defamation League found that 41% of respondents had experienced hate and harassment online. Over a quarter of respondents had faced severe harassment, including “sexual harassment, stalking, physical threats, swatting, doxing, and sustained harassment.” Over the past few years, young people have reported increased exposure to hate online. According to an Axios report, the share of teenagers who frequently see racism on social media almost doubled from 2018 (12%) to 2020 (23%). Moreover, Black and LGBTQ+ youth are more likely to face hate online, and female youth are more likely to face hate than males.
What Are the Effects of Online Hate Speech?
Clearly, hate speech on social media is a major problem. However, the damaging effects of hate speech don’t stay online. According to the Council on Foreign Relations (CFR), there’s a direct and dangerous connection between online hate speech and real-world violence. The CFR connects online hate speech to “a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing.” Furthermore, the CFR notes that “social media can magnify discord” in the current political climate. In fact, according to the CFR, “rumors and invective disseminated online have contributed to violence” on “nearly every continent.”
Schools and Online Hate Speech
Even when it doesn’t lead to violence, hate speech can cause students serious harm. In the United States, three out of five teens report that they’ve experienced online bullying. Cyberbullying often involves, or quickly escalates into, hate speech. Students face harassment related to gender, race, ethnicity, and sexuality. Such harassment can damage a student’s self-esteem. It can also lead to depression, anxiety, and even self-harm and suicidal ideation.
Schools have a legal responsibility to maintain a safe learning environment for all students. Therefore, it is imperative to monitor school computers for hate speech and harassment. Content filters alone aren’t enough. However, with screen monitoring software like LearnSafe, schools can detect hate speech and act before more harm is done. Additionally, LearnSafe detects talk of depression and suicidal ideation as well as the intent to self-harm. LearnSafe helps schools maintain a safe learning environment free from hate speech. It also helps schools see the warning signs and give their most vulnerable students the help they need.