Friday, February 27, 2015

There is a difference between imposing downstream liability and requiring some reasonable monitoring by service providers for criminal content; Blogger punts on porn ban

I have often written here about the need to remain watchful about downstream liability for service providers, ranging from basic telecom services (those reclassified yesterday by the FCC), to Internet hosting companies (which offer shared and dedicated web hosting), to “free” service platforms (generally supported indirectly by advertising) like Blogger (this platform), YouTube, Wordpress, Vimeo, Tumblr, and photo-sharing sites. That also includes social networking sites like Facebook, Twitter, Google+, Myspace, LinkedIn, and some smaller ones, which depend on whitelisting or invitations but are effectively quasi-publishing sites. And it includes email providers (such as AOL).
  
There has been a lot of public attention in recent weeks to two major issues: pornography or adult content (including issues like revenge porn), and the recruitment of impressionable people, often older teens and mostly young adults (including women), for illegal or criminal activity, including fighting for foreign forces overseas.  I won’t get into the religious or ideological issues here.  I’m merely concerned because the latter activity is bound to lead, at the very least, to calls for pre-screening content on the web.  Recall also that in the 2011-2012 period we went through a round of this kind of controversy with SOPA/PIPA, regarding the prevention of intellectual property piracy.
  
Remember, we actually saw this kind of debate in the 1990s with the (struck-down) censorship portion of the Communications Decency Act, even though, ironically, Section 230, which significantly protects providers, was part of that Act.  Later a similar conversation occurred with the extensive litigation over the Child Online Protection Act (COPA), to which I was a party, so I am quite familiar with the tone of the conversation on these matters.
  
I would presume that it is illegal to recruit someone to fight for an overseas interest, or for any criminal enterprise.  The illegality would normally trump free speech concerns on this issue (unless the law were challenged in court).  (That's not the case if the speech is simply offensive, or recruits someone to a disturbing but legally protected activity; there is nothing illegal about recruiting someone to join the Westboro Baptist Church.)  So in an individual case, when there is an arrest, the offending content is removed.  In various cases reported in the media, companies (especially Twitter) have closed accounts and removed such material when it was brought to their attention.  Still, a great deal of this content remains, and it is frankly easy to find with little effort.
  
There are some automated tools that providers use to detect videos or images (and possibly text) that infringe copyrights, or that involve other criminal content such as child pornography. Some of these tools detect digital watermarks or fingerprints, and could prevent some content from being posted in the first place.
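For the technically curious, here is a minimal sketch (in Python) of the general idea behind fingerprint matching, using a simple “average hash.”  This is an illustration only, not any provider's actual method: real tools such as Microsoft's PhotoDNA use proprietary algorithms, and the hash size, threshold, and known_bad_hashes blocklist below are all hypothetical.

# Illustrative sketch only: a simple "average hash" (aHash) comparison,
# loosely analogous to (but much cruder than) proprietary fingerprinting
# tools like PhotoDNA.  Assumes the Pillow imaging library is installed;
# `known_bad_hashes` stands in for a hypothetical blocklist of
# fingerprints of previously flagged images.

from PIL import Image

def average_hash(path, size=8):
    """Shrink the image to an 8x8 grayscale grid and encode each pixel
    as one bit: 1 if brighter than the grid's mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count the bits on which two hashes differ."""
    return bin(h1 ^ h2).count("1")

def looks_like_known_content(path, known_bad_hashes, threshold=5):
    """Flag an upload whose fingerprint is within `threshold` bits of any
    blocklisted hash; small distances survive re-encoding and resizing."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= threshold for bad in known_bad_hashes)

A match could then trigger human review rather than automatic removal, which fits the “human eyes” point below.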
  
I want to reiterate that it is not appropriate or realistic to hold providers liable for every piece of inciting content that may get posted.  But for providers with large volumes of users and profits, it is appropriate to expect them to devote some resources, including security employees making actual spot checks with “human eyes”, to removing the most flagrant and obvious content that is easily found by anyone.

I’ve written about these matters extensively this week, on this blog, on the International Issues blog (where I wrote in detail about the “recruiting” problem late last night), and on the COPA blog, where I have suggested that companies like Google and Microsoft take up the reins of the abandoned “voluntary content labeling” project, started some years back in the UK by the Internet Content Rating Association (ICRA) and the Family Online Safety Institute (FOSI).  I’ve also made such postings on Facebook and Google+.
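To make the labeling idea concrete: a self-label was typically embedded in a page's HTML head as a PICS-Label meta tag, which a filter or crawler could then check.  Here is a rough sketch using only the Python standard library; the sample page is invented, and what a filter does with an unlabeled page (block, allow, or flag) would be a policy choice, not anything the labeling scheme itself specified.

# Illustrative sketch: look for a voluntary self-label of the kind the
# old ICRA/PICS scheme used, served as <meta http-equiv="PICS-Label" ...>.
# The sample document below is hypothetical.

from html.parser import HTMLParser

class LabelFinder(HTMLParser):
    """Collect the content of any PICS-Label meta tags in a page."""
    def __init__(self):
        super().__init__()
        self.labels = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # html.parser lowercases attribute names
        if tag == "meta" and a.get("http-equiv", "").lower() == "pics-label":
            self.labels.append(a.get("content", ""))

def find_labels(html_text):
    parser = LabelFinder()
    parser.feed(html_text)
    return parser.labels

# A page with no label returns an empty list, which a child-safety
# filter could treat however its policy dictates.
sample = '<html><head><meta http-equiv="PICS-Label" content="(PICS-1.1 ...)"></head></html>'
print(find_labels(sample))  # ['(PICS-1.1 ...)']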
  
This last week of February, in an extended cold winter, has been critical for the future of the Internet in more ways than one.  The seriousness of the recruiting and censorship problems may be obscured in the eyes of the general public by the bluster over the supposed “victory” in network neutrality.  That may become irrelevant if we lose the right to post user-generated content without gatekeepers. 

Update (later today):  Blogger appears to have deferred the "no porn" policy and will instead focus on enforcing its existing rules against monetizing porn (introduced in June 2013).  The posting is by Jessica of the Blogger Team; check the link from the Feb. 24 posting.  PC Mag has a whimsical account here, which Webroot tweeted today, though the account doesn't take the underlying problems seriously.

Update (March 4):  Blogger's updated policy statement is here.  Note that adult interstitials may be turned on by Blogger, and blogs that "defy" the policy could be removed.  Again, the term "porn content" doesn't cover the scope of the problem.
