Monday, December 30, 2019

More storm clouds on the horizon for individual speakers (and "whistlegate" is part of it, as are the "free radicals")


Talk of ugly channels in the future continues.

Today, on CNN, a rabbi, apparently from Monsey, NY, talking about the stabbing, suggested that social media companies (and maybe hosting platforms?) be expected to do background checks on people before letting them have accounts, and that they should be held responsible when hate speech leads to violence.  Part of the tone of his remarks reflects the volume of threats against segments of his community, especially in and around NYC.

But the idea had been floated in Europe after the Copyright Directive passed earlier this year, with its “upload filter” Article 17 now starting to take effect in many EU countries.  YouTube even admitted this possibility in the fall of 2018.

Some of the rabbi's remarks sounded incorrect:  large social media platforms do automatically screen for hate speech and catch a lot of it.  Some other sites do not (or do much less).

CNN reports that some people in Orthodox communities are concealing their identities in public (today's interview, around 3:05 PM) and that President Trump could do more about this, by executive order if necessary.
   
Then today there was more blowback on Trump’s “retweetgate” regarding the whistleblower’s name (see the BuzzFeed News story by Ryan Broderick). YouTube told Tim Pool that the name cannot be repeated on its platform under any circumstances. And Bloomberg News (however conveniently) offers the opinion that Trump broke the law (the Whistleblower Protection Act) in doing this, and that YouTube’s and Facebook’s policies are motivated by valid legal concern (how do you define “coordinated harm,” anyway?).  Of course, other journalists maintain they are free to report whatever is already public knowledge and true; the last part might be called into question as to the name.  But YouTube’s policy would mean that another person with the same name could not use that name on their own account (or, in the case of Facebook, have an account at all). It is also worth noting that Facebook has apparently banned linking to news stories (usually conservative ones) that purport to name the person (although the link that got Ford Fischer suspended did not contain the name).


William Feuer has a detailed article on CNBC examining how YouTube has changed its algorithm to direct viewers away from radical political content (especially on the right).  The article skims over the question of whether YouTube still bears moral responsibility for allowing the content to be there, or whether moral responsibility lies only with the perpetrators of violence themselves.

The other angle, of concern to me, is that independent speech tends to run counter to traditional organizing and solidarity. It becomes a moral problem if it is propped up by a business model that radicalizes disadvantaged people into instability and violence, but the CNBC story would seem to quash the idea that YouTube’s existence depends on stochastic radicalization.
  
In another new BuzzFeed News story, by Katie Notopoulos, “How we killed the old Internet,” the writer characterizes Blogger (this platform), as owned by Google, as being on “life support,” and mentions a 2018 tweet suggesting that not many people work there.  We’ll watch this one.
