By Violet Wallerstein
I would like to start this article by saying that I am against hate speech. Violent speech should not be used to discriminate against and target others. Hate groups and hate speech are not welcome, and no one should be attacked by this kind of speech.
However, as Facebook and Twitter ramp up their algorithms to remove flagged hate speech more quickly, I want to address the consequences of doing so. The one I find most interesting is that by removing hate speech immediately, we are less likely to know who the racists, misogynists and homophobes are, and thus we are unable to find out who among our Facebook friends is actually a horrible human being.
In order to take action against people who are being discriminatory or hateful, we first have to be able to identify them. The internet is known for “doxxing”: publishing someone’s personal information and notifying their employer or community so that they face real-life consequences for their hateful internet actions. Without knowing who these individuals are, the public cannot take action against them.
The internet, like everywhere else, should be a safe space for everyone. There should not be cyber attacks based on race, sexuality or gender. Hate speech should not be tolerated, but it is one of the easiest ways to identify and out those who are discriminatory.
The internet also provides “hard” proof. Someone can always deny making a comment or performing a rude action in person, but what happens on the internet is forever: it can be found and documented by hundreds or thousands of people rapidly and posted in places where it cannot be deleted. It is harder to deny what has been put out for everyone to see.
As I said earlier, I do not think hate speech is good, and I am strongly against it. I just think these consequences should be considered. Most people like to know who they are dealing with, and letting people put their idiotic and harmful comments out in the open is one way to do that.
Violet is a sophomore Biology major.