Last month's Christchurch shooting, and the viral spread of footage of the attack, revealed just how common white nationalism is online. Many people already knew that platforms like Facebook, Twitter, and YouTube harbored hateful rhetoric, but the platforms' poor responses to the shooting put them in the government's sights.

Yesterday, the House Judiciary Committee held a hearing questioning both Google and Facebook on the rise of white nationalism online. During YouTube's livestream of the hearing, the comments got so ugly that the company eventually disabled them.

Screenshots posted to Twitter by BuzzFeed News reporter Ryan Broderick show comments such as "White haters!" and "Jews make their own problems."

It isn't actually surprising, though, that YouTube comments filled up with racist and hateful remarks during a congressional hearing on white nationalism.

White nationalism has been on a steady rise in the United States. According to the FBI, 2017 marked the third consecutive year that hate crime reports in the U.S. hit new highs. Then a survey from the Anti-Defamation League found that 2018 was a record year for online hate speech, mirroring that rise in hate crimes.

In that survey, YouTube ranked third among the sites where users experienced the most hateful comments, behind Facebook and Twitter. All three platforms had struggled to curb hate speech long before events like Christchurch caught global attention.

Under that increased pressure, online platforms have begun taking steps to tackle white nationalism and hate online, but not all of the changes are up to par.

Recently, Facebook banned white nationalism and white separatism after facing pressure from civil rights groups. In a blog post, Facebook said, “It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.”

However, the company blurred the lines of its own ban when it initially refused to remove a video promoting white nationalism. Facebook eventually took the video down, but not under its white nationalism ban.

In November, Twitter expanded its policies to prohibit dehumanizing speech. It has also begun exploring options for labeling tweets from public figures that break its rules, an effort prompted largely by President Donald Trump.

Earlier this month, YouTube announced that it was working on ways to stop coordinated "dislike campaigns" waged against videos by large online groups, and it has also stopped recommending conspiracy theory videos. However, YouTube recently came under fire after reports showed it had ignored employee warnings about hateful content on the platform.

The Anti-Defamation League believes that online harassment is making hate crimes off the internet more common. The numbers help support that claim, but online platforms continue to be reactive, rather than proactive, in addressing their hate speech problems.