ETHICOMP 2018 presentation in Sopot, Poland
I presented a paper titled ‘Hate speech recognition AI – a new method for censorship?’ (Heimo, Naskali & Kimppa 2018) at ETHICOMP 2018 this Monday. The paper examines the problems of modern governmental censorship carried out with AI. In it we argue that there is no major ethical theory under which such censorship is justified, and that using AI to censor hate speech is an immoral act in itself.
Hate speech is wrong, but censoring hate speech might be worse…
The main problem with hate speech censorship is that there is no generally agreed-upon definition of hate speech, and the definitions we do have are largely subjective. Not knowing where the lines for possible prosecution lie creates a chilling effect, where people start to censor themselves on difficult or heated topics for fear of retribution. This problem is exacerbated with AI, where the rules for tagging content as hate speech are completely opaque.
…especially with AI
An AI classifier is trained on example cases, but no explicit logic rules can be extracted from the ‘brain’ of an AI. So there is no way to tell why a particular text was censored, other than that to the trained AI it looked similar enough to previously censored content. Moreover, the party who controls the training material ultimately controls what gets censored and what does not. Any human oversight would focus on content the system has already flagged, so the system learns to lean towards the reviewers’ views, while content it never flags goes unreviewed, easily allowing false negatives.
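To make the opacity point concrete, here is a minimal, purely illustrative sketch (not taken from the paper) of the kind of statistical text classifier such filters rest on. The example phrases, labels and model choice are invented for illustration; real systems use far larger corpora and far less inspectable models, which only strengthens the argument.

```python
# Illustrative sketch only: a toy text classifier of the kind hate speech
# filters are built on. The phrases and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Whoever assembles this labelled corpus decides what the filter learns.
texts = [
    "group X should be driven out of the country",    # labelled: hate speech
    "people like them do not deserve any rights",     # labelled: hate speech
    "the new immigration bill was debated today",     # labelled: acceptable
    "I disagree strongly with this policy proposal",  # labelled: acceptable
]
labels = [1, 1, 0, 0]  # 1 = censor, 0 = allow

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The classifier outputs a score, not a justification: it cannot explain why a
# post was flagged beyond "it resembles previously censored content".
new_post = "that group does not deserve rights"
print(model.predict([new_post]), model.predict_proba([new_post]))
```

Note that changing nothing but the training corpus changes what the same code flags, which is exactly the sense in which whoever controls the training material controls the censorship.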
Implementing opaque governmental systems for AI-based censorship lays a solid foundation for more questionable uses of such censorship tools by future rulers. This is a slippery slope we want to avoid. As such, AI is a particularly bad tool for censoring hate speech.
Full article available on request via ResearchGate or email.