Sunday, October 14, 2018
ProPublica watches Big Tech and may be increasing the pressure to purge extremist content
Fast Company has a major article by Katharine on “How ProPublica Became Big Tech’s Scariest Watchdog”. The explanation of ProPublica’s bot-driven reporting (led by Julia Angwin) makes for some challenging reading.
It starts out by calling Facebook a “political battleground”.
Here’s a random story on ProPublica’s findings.
I’ll cut to the chase. There is no reliable way for algorithms to identify “hate speech” across all possible protected groups, because the definitions of the groups themselves are too controversial and too malleable. Outside of the N word, maybe. Even the F or Q words get used, for example, by gay writers in satire or to make ironic political points.
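The problem can be sketched with a toy keyword filter. This is a minimal illustration, not any platform’s actual method; “slurword” is a placeholder standing in for a real slur, and the whole word list is hypothetical:

```python
# Toy illustration of why keyword matching cannot decide "hate speech":
# the same word can be a genuine slur or a reclaimed, satirical in-group
# usage, and the list of flagged terms is itself contested.
# "slurword" is a placeholder; no real slur list is implied.
SLUR_LIST = {"slurword"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, regardless of context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not SLUR_LIST.isdisjoint(words)

hostile = "You are a slurword."                       # genuine attack
reclaimed = "We slurword writers have heard it all."  # ironic in-group usage

# The filter flags both identically; only context separates them.
print(naive_flag(hostile), naive_flag(reclaimed))  # True True
```

Any purely lexical approach collapses these two cases into one, which is the point: the disputed part is the context and the group definition, not the string match.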
It might be easier to scan posts in some heavily inflected foreign languages, where verb endings pin down meaning without reference to context. In French, for example, the subjunctive mood makes it much easier for an algorithm to identify writing that examines a supposition rather than asserts a fact – easier than in English, where context is all you have.
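The morphology point can be sketched as follows. A few French verb forms are spelled differently in the subjunctive than in the indicative, so a simple pattern match can flag “supposition” without parsing the sentence. The form list here is a tiny hand-picked sample for illustration, not a real lexicon, and it would misfire on constructions like “soit… soit…”:

```python
import re

# A few distinctively subjunctive French forms (spelled differently
# from their indicative counterparts). Hand-picked sample only.
SUBJUNCTIVE_FORMS = r"\b(soit|soient|fasse|fassent|puisse|puissent|aille|sache)\b"

def looks_like_supposition(sentence: str) -> bool:
    """Guess whether a French sentence examines a supposition,
    based purely on surface morphology."""
    return re.search(SUBJUNCTIVE_FORMS, sentence.lower()) is not None

print(looks_like_supposition("Je doute que ce soit vrai."))  # True  (subjunctive "soit")
print(looks_like_supposition("Je sais que c'est vrai."))     # False (indicative "est")
```

The English equivalents (“I doubt that it is true” / “I know that it is true”) use the same verb form “is” in both sentences, so no such surface cue exists.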
ProPublica, remember, amplified the public outrage over family separations at the border with its reporting. But it is still necessary to get all the hard facts on everything that is going on, and on what the other risks are, before screaming about the emotional impact of what is happening and making uncompromising demands.