Wednesday, April 17, 2019

EU countries ratify directive (19 out of 28 nations); France may implement it this summer; effects uncertain

19 of the 28 European Union member countries approved the “Copyright Directive” yesterday; there were six no votes (including Poland and Finland) and three abstentions. The UK voted for it, despite the fact that it might leave the EU under Brexit by March 2020.

While the countries would have until the spring of 2021 to implement the policies, France says it wants a policy in place sometime this summer. 
There are many news stories, but one of the more explicit comes from 9to5Google. It links to a story (from November) suggesting that Google News would be shut down in affected countries, and possibly YouTube as well. Another possibility is that only pre-approved channels would be allowed to continue operating, or at least to have their content visible in affected countries. Google's own blog entry is constructively critical. 
Techdirt has some details by country and predicts litigation may hold up the process.
Recode discusses the effects on large companies like Facebook and Google.  Large platforms will have to pay European publishers license fees to display excerpts of their content when users link to them, but this by itself is not likely to present a major problem. Remember, though, that Google News pulled out of Spain when that country imposed a compulsory link fee (with no publisher opt-out). 
It's unclear how free blogging services like Tumblr, Blogger (this one), and WordPress will react. It would seem logical for them to disable country-specific extensions (like for France), since content can't be properly screened first, but they haven't said that.
Hosting companies like GoDaddy have also been silent so far on how they would react, but the EU laws would seem to apply to them (even though they don't curate content). They could split themselves into country-specific pieces and not allow their customers' sites to be displayed in the affected countries. There is no other way to avoid the downstream liability risk. 

Taken literally, the EU directive applies to text as well as to images and video. No one has thought to mention that on a hosted WordPress site, you can easily upload an image or a reasonably sized video into the wp-includes directory, or just copy it over with FTP.  So even for video this isn't limited to YouTube, Vimeo, etc.  PewDiePie's entry into blockchain video makes this even murkier. 

Tuesday, April 16, 2019

Harvard professor explains Leftist attack on free speech; Twitter ponders conceptual changes

Twitter is announcing some changes, including the ability of users to hide replies, which some users find necessary because of the hostile tone of many responses.

But Jack Dorsey is reported to want to introduce the idea of following a “topic” as well as (or in place of) an account.  Axios reports on this.  Critics already say that could be abused by right-wing extremists.

Dorsey’s own blog post is not so specific as to topic, but he indicates he wants to be less dependent on user reporting of abusive content.

Twitter, as reported yesterday, has been accused of being gullible on corporate copyright complaints.

I wanted to share an interesting perspective by Harvey C. Mansfield, a conservative government professor at Harvard, “The Theory Behind My Disinvitation,” about a canceled convocation speech at Concordia University in Montreal.  He critiques feminism, moving from the earlier theory of expected female modesty (simply a belief system promoted because men needed it) to a more provocative analysis of the far Left’s attack on free speech as a cop-out from the need to claim real popular or group power for the “people.”  What he attacks sounds like an element of communism.

Monday, April 15, 2019

Twitter reported to delete news about piracy to "protect" companies; demanding that others "join you" compromises their own speech

Tim Pool now reports that Twitter took down a tweet that didn’t break its rules but reported on a pirated leak from Starz, with a hyperlink (from TorrentFreak).

The tweet contained no copyright infringement and neither did the linked story, but it was unfavorable information about the company. The linked story might have contained incidental images, but that is normal fair use in reporting facts.

It is possible to be guilty of copyright infringement if you intentionally link to material you know is infringing, especially for embeds.  In the US, litigation over these incidents is rare.  You wonder what will happen in the EU with the recent Directive.

You could wonder too about blog posts (or even tweets linking to them) that give away endings to books or movies – that doesn’t matter to me but it does to some people.

I also want to note a disturbing trend among some people on Twitter, many of whom I like, to say things like, “If you don’t stand overtly against white supremacy you stand with it.”  Two problems. First, there are other things to stand against (like radical Islam), although, agreed, the particular problem mentioned in the tweet is unusually clandestine. Second, the implication is that “my” individualized speech could be taken down if I won’t join somebody else’s movement first.  That would undermine the integrity of all critical speech. That’s why I don’t like to see Facebook prodding people to “add buttons” to raise money for non-profits under their own names.  That erases the integrity of individualized or independent reporting and implies somebody subsidized it. This can really pull persons (like me) down into rabbit holes. 
And, of course, the Left’s answer is “you don’t experience our oppression.”

Update:  later today

Now Katherine Trendacosta reports that an Electronic Frontier Foundation tweet about the article was taken down with a DMCA takedown notice under the safe harbor process. EFF has sent a counter-notice.

Sunday, April 14, 2019

Independent media personality clashes with game company over a wordmark, but this sounds like a situation where mediation should help

Yesterday, I watched Tim Pool’s video on the possible trademark dispute with a game company Studio FOW, and I embedded it in a short article on my Trademark Dilution blog.
The Quartering has a twenty-minute video now in which he suggests that both parties mediate and settle by making slight changes to their respective names.

He suggests that “Subverse” be called “Subverse Media” and that “The Subverse” in the game have a slight change (“The Subverse of ….”).

The game does not use Subverse in the domain name, but it did announce a Kickstarter for the game.
The Quartering likes the game (doesn’t see it as pornographic) and also follows Pool’s Timcasts.

In general, it is acceptable for different businesses to use the same wordmark if they are in different industries.  In my mind, a media outlet is different from a game company's product and wouldn’t normally confuse visitors. 
But making money means taking into account average visitor literacy, which is not very good, it seems. 
I had a situation back in 2005, after I gave up my old domain and moved everything to "doaskdotell": the old hppub domain became a casino gambling site that people could confuse with me. 

Thursday, April 11, 2019

Senate Judiciary Committee (Ted Cruz) hints at changes to Section 230 if tech platforms won't remain "neutral"

The Senate Judiciary Committee held a three hour hearing on Internet platform content moderation and the widespread belief that the Big Tech companies are biased against conservatives. This follows a hearing in the House on harmful content on April 9 (yesterday's post, including a story about the sudden suspension and then restoration of Hunter Avallone). 

Ted Cruz opened with a statement and mentioned Section 230 (at about 3:00).  He suggested that Big Tech enjoys a downstream liability protection that normal media does not, after it bargained with Congress to remain neutral.  He also suggested anti-trust action (an idea discussed by YouTuberLaw) and possibly “principles of fraud”.

Ms. Hirono, however, talked about the prevalence of alt-right speech on social media, as amplified by social media algorithms.  She is concerned that most Americans get their news from social media and not traditional papers.

Ted Cruz later asked both Facebook and Twitter if they considered themselves “neutral public platforms”. 

Barbara Ortutay and Rachel Lerman of the AP report in the Washington Post that Facebook is down-ranking content that is near the edge but not over it, and the same with Instagram, which won’t show edgy content to non-followers (story). 

Update: April 12

Elliot Harmon and India McKinney discuss a similar hearing in the House on April 9 and talk about Ted Cruz's views of Section 230 here. 

Wednesday, April 10, 2019

House holds hearings on Internet hate speech and right-wing extremism online; YouTube livestream intercepts vitriolic comments; the problem with meta-speech

The House Judiciary Committee held hearings on hate crimes and the rise of white nationalism on Tuesday, April 9, 2019. 

CNN was so preoccupied with Trump and Barr that it didn’t get around to showing this.
Candace Owens (who got things wrong on the Covington Kids in the past) had a particularly interesting exchange.

Tony Romm has an article on p. A15 of the Wednesday, April 10, 2019 Washington Post, “A flood online of hate speech greets lawmakers probing Facebook and Google about white nationalism”. The point of the article, of course, was the comments. They’re still out there.  I wouldn’t say they are all that awful. 

I will have to play the entire hearing later. The speakers claim that YouTube and Facebook are piping vitriolic hate into homes. The trouble is, you can find what you want to find, because the algorithms send you content you were looking at. I rarely see much of this, except in intellectual introspections from a distance (like from Sargon of Akkad, etc).  You find offensive content because you want to be offended.

There is a particular problem with “meta-speech”.  A lot of viewers don’t have the literacy or even the intellect to differentiate between speech “about” something and speech that actually incites or promotes something.  Recently there have been videos by Internet personalities like David Pakman and the techie and normally upbeat ThioJoe suggesting that many users are simply “stupid” and are babies (like Trump and the orange balloons).  There is a serious cognition gap across society, in the streets, in the military (although that’s gotten better), and in the online world.

Here’s an example of Twitter censorship of meta-speech, of journalist Sandi Bachom (injured in the Charlottesville incident before the car attack), for a post that happened to include an image of an obscure neo-Nazi symbol in reporting the march. Some of the others mentioned in her post are now in prison. 

There is even the idea, as in Joshua Greene’s 2013 book “Moral Tribes”, which I will review soon, that you can rationalize anything, based on your own idea of metamorality.
Picture: a conservative forum at the National Harbor Gaylord Hotel in late February. There was a recent potential (radical Islam) terror threat at National Harbor, stopped by the FBI (various news reports) when police intercepted a stolen vehicle.

Update:  April 11 

Hunter Avallone reports having his YouTube channel suddenly suspended with no previous community guidelines strikes on Monday, the day before these hearings;  it was restored seven hours later.  I sampled his videos and found nothing that is normally understood to be hate speech (although some people on the Left wing fringe would see his being critical of some individuals with personal issues as "hateful").  It's ironic because YouTube had announced it would always give creators a courtesy warning (Verge story). 

Sunday, April 07, 2019

Twin hatred enemies of ordinary Americans go at each other and make content moderation difficult, catching ordinary people in the middle

Journalist Sulome Anderson continues the discussion of the difficulties Internet platforms have in monitoring especially white supremacist content compared to their problems a few years ago with ISIS in an Outlook Section article, “The Twin Hatreds” in the Washington Post today.

She argues that the two ideologies reinforce each other, with ordinary Americans (but especially various minorities, like Jews, PoC and LGBTQ people, as with Pulse) caught in between. The two have often mentioned one another recently (rather than other groups) but wind up plotting attacks that would affect ordinary civilians.
The latest online version is here. The print version, Sunday April 7, is slightly updated.

Anderson then goes on to argue that ISIS content is much easier to exclude, partially because of federal foreign terrorism laws, which do not apply to white supremacy.  And the vocabulary of white nationalism keeps changing to evade obvious detection and is based on ordinary English idioms, metaphors and memes.

Friday, April 05, 2019

Carlos Maza, Tim Pool separately take on whether tribalism is necessary for Internet video business to succeed, and this really matters now

Carlos Maza’s video “Why Every Social Media Site Is a Dumpster Fire” starts today's discussion. 

The main takeaway from his video (Sept 2018) is that you have to behave tribally to sell things and be successful (make your Internet operation pay its own way, which I have talked about before).

Yet Tim Pool (non-tribal indeed) seems to refute the idea.  He just passed “The Young Turks” and has a tremendous posting today discussing his business plans – too intricate to describe right here.

Julia Alexander has an important article on The Verge on how YouTube is moving away from user generated content as a way to make a living, to established companies.  Situations like the EU Copyright Directive (to the extent that the effects spill out over the world) and now concerns over the subtle problems of avoiding terror promotion (yesterday)  are part of the change.  But there were problems all the way back to about 2014 according to her.    

In the broader context, I’ve discussed before that politically or issue-oriented independent speech needs to be able to support itself.  That’s partly because of these other issues (I won’t rewalk the entire argument here, but I will again, soon, I’m afraid I will have to).

There is an idea that to show people matter to you, then you need to be able to sell them things they want or need. Maza’s video says, to sell you have to make people feel they belong to their tribes. Pool says, not so fast.  We can get beyond this.

John Fish coincidentally took this on this week (TV blog).

Connect the dots, everyone.

Thursday, April 04, 2019

CNN notes that white supremacy is much harder for Tech companies to stop online than was ISIS

There is more material on the existential problems that the tech world faces in stopping white supremacy without shutting down free user speech on the Internet, even when compared with similar concerns about ISIS 3-4 years ago. Eliza Mackintosh explains for CNN here.

This discussion fits in with a longer discussion I gave of “stochastic threats” on Monday.
The problem is partly that the US ethno-right has set up code words for extremist concepts based on common English phrases and memes.  Similar material in radical Islam is outside normal English language usage.

Furthermore, it is apparent that Donald Trump himself used some of these metaphors in his campaign. Possible results could include incidents like Comet Ping Pong.
But generally the problems are more serious in other western countries outside the US.

There would be a good argument for letting people read the “manifesto” so they learn to recognize the code words better.

When I posted the link to this story on Twitter, the link to it did not expand.  It did on Facebook.

Wednesday, April 03, 2019

More on how to get started with ether tokens for Blockchain sites

If you want to trade tokens on Minds or Steemit as you get going with blogging on the blockchain, it looks like, at a minimum, you need to set up a wallet.  But you also need a way of putting funds into the wallet to get going.

The most straightforward way to do this appears to be to set up a regular Coinbase account and deposit some funds (with processing fees taken out) into one of the currencies supported, which includes Ethereum.

I see that I covered this on Feb. 25, but want to rewrite it with more links and expand.
ThioJoe explains setting up the Coinbase account.

However, a post on Medium says you need to provide a photo ID before you can set up a payment method. You can buy ether this way.

If you want to keep the accounting for your own place in the blockchain and have your own cold storage (generally recommended), you then set up a wallet.  This means you keep your coins off the exchange, like cash.  The safest type of wallet is paper, printed with a QR code of sufficient quality (it should print the actual hexadecimal key out too), and you keep one in a safe deposit box and probably a duplicate at home (out of sight, depending on your security).  You can keep a wallet on a thumb drive or laptop and back it up in the cloud, but that might be less secure (and a thumb drive or laptop could be damaged).
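What a paper wallet actually prints can be sketched with a few lines of Python. This is a toy illustration only, not any real wallet software: it generates a random 256-bit value of the kind a paper wallet renders as hex alongside its scannable code (deriving the matching Ethereum address from such a key requires keccak-256 and secp256k1, which are beyond the standard library).

```python
import secrets

def generate_private_key_hex() -> str:
    """Generate a random 256-bit value and render it as 64 hex
    characters -- the raw form a paper wallet would print.
    Uses the OS cryptographic RNG via the `secrets` module."""
    return secrets.token_bytes(32).hex()

key = generate_private_key_hex()
print(key)       # 64 hex characters; anyone holding this controls the funds
print(len(key))  # 64
```

The point of cold storage is simply that this string never touches a networked machine after it is printed; anyone who reads it can spend the funds.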

To trade tokens on these new blockchain blog sites, you usually need Ethereum, and you have to supply a private key. I am under the impression that a wallet key is expected, but maybe you can use the Coinbase account if you don’t mind leaving the tokens on an exchange. 

If you have other people wanting to pay you with Ethereum, conceivably you could use the wallet with no Coinbase or similar account, but that doesn't seem to be recommended. 
I'll do this soon and let everyone know how it turns out. 

Update:  April 4

Got Coinbase set up, with $10 worth of Ethereum deposited (2 coins).  The system didn't accept my Wells Fargo account because it had trust accounts on it (I believe), and I had to go with Bank of America, which does not. 

Monday, April 01, 2019

"Stochastic" content is dangerous; now more western countries talk like they want to shut down all user generated content for the crimes of the few

I had not heard the term “stochastic terrorism” before today, but the concept is quite disturbing, and it seems to explain the draconian actions, for example, of the New Zealand government in not allowing individuals to possess the “manifesto”, as if it were contagious and like a virus.

The term occurs in a long video by “NonCompete” that starts out by examining the perpetrator's mention of PewDiePie early in the document.  I won’t link to it today (I have a different one below), since I haven’t had time to watch all of it, and might prefer to handle it on a more isolated blog. Admittedly, if you look at the entire channel, it describes itself as anarchist-leftist. But the concept itself is very disturbing, and defending against it would take a real commitment to free speech (even with risks). 

The concept seems to have originated on Blogger, ironically, in 2011. Certain conservative pundits are named as instigating it, but some of Donald Trump's behavior during the 2016 campaign would also qualify under this definition. The whole concept recalls the idea of a "Manchurian candidate", although here the person is unknown (unlike someone recruited as in the famous two films). Wikipedia has a sublink, which seems to point to left-wing sources in the footnotes for the origin of the term. The term can be applied to both radical Islam and white supremacy, but left-wing sources in particular (like "NonCompete" and "ContraPoints") now tend to fear the latter more as particularly insidious, malignant, hard to isolate, and something that cannot be engaged with normal free speech. 
At the time, Obama had been in office two years, and the political climate was generally more stable than it is now.  I had just commemorated the passing of my own mother and was starting the next phase of my life, and here we are, now.  The Arab Spring was yet to happen, and Osama bin Laden would soon be found.

ISIS would become publicly notorious around 2014, as an aftermath of the US pullout from Iraq and the whole situation with Assad in Syria, leading to the migrant crisis in Europe. Later in 2014 (actually starting in 2013 with the Trayvon Martin case), racial tension (particularly over police profiling) would erupt in Ferguson and other cities. Even with Obama as president, tensions increased.  Then, as we know retrospectively, foreign meddling in social media would add to polarization and a group reaction from what we call the “identarian right” or alt-right. Donald Trump would take advantage of working-class people “left behind” by the capitalist, “elitist” economy in a way that Hillary Clinton didn’t try for the urban poor (Sanders would have taken care of this).

Gradually, the “alt-right” became the threat to use this technique, allegedly for white supremacy purposes, which seemed to become a much more dangerous threat than most of us had thought, when we had been justifiably focused on radical Islam (and Pulse had happened in June 2016).  In August 2017 Charlottesville would happen.

The essential technique is to build up a set of code words or dog whistles inside otherwise normal-looking writing, to attract a certain radical audience.  At some unpredictable time, some person may go off.  This process would be related to so-called “shitposting”. I see that on Nov. 24, 2018 I covered a video by ContraPoints that describes this kind of process, recognizing developing fascism in disguise.

Once an incident occurs, the intervening speakers have “plausible deniability”.  The Norway (2011), Comet Ping Pong, Pittsburgh, and now Christchurch events might be construed as stochastic, aligned with some aspects of the alt-right. Pulse, aligned with radical Islam, probably is not; but maybe Paris 2015 should be so regarded, as should the Cartoon Controversy.

One problem with articulating this theory is that it envelops what used to be considered very mainstream ideas, such as opposition to forced school busing in the past, or to quotas or some forms of affirmative action, talk of reverse discrimination, maybe opposition to reparations.  Any criticism of the most radical Left agenda could be construed this way and then silenced.   The video mentioned earlier views the Covington kids as presenting this threat, even despite the revised interpretation of what really happened (and the litigation now).

Rolling Stone referred to the practice in discussing Donald Trump’s rough language at the convention and in the 2016 campaign, in an article by David Cohen here.   Jonathan Keats, in a Wired article in January 2010 (paywall) indicates that the stochastic process lets “bullies operate in plain sight”.

I think there is another related concept, what I have called “implicit content”, in relation to COPA and also in connection with a bizarre incident that occurred in the fall of 2005 when I was working as a substitute teacher (see July 27, 2007 post).

There is still another potentially disturbing glitch with citizen commentary and journalism.  Once someone (even I) has a reputation for covering all major issues under the “connect the dots” (or “keeping them honest”) theory, an actor could create an incident simply to force journalists to draw attention to his grievances, even if they don’t publish the actor’s name (he may die in the incident or spend life in prison anyway).

This is a disturbing theory, right from the blogosphere.

But that's not all for right now.  I glanced at Timcast, and I guess it's a good (or bad) thing that I just did. 

As if all this were not enough, Damien Cave wrote in the New York Times on March 31 about some countries, especially Australia, where there are apparently proposals to require tech executives to vet all user-generated content.

Mark Zuckerberg, in his recent Washington Post op-ed, called for more regulation of the Internet in a few areas, including privacy and “harmful content”.  

His ideas seem reasonable enough on their face.  But, when making his proposal, does Zuckerberg understand “stochastic” harm?  Not many people do. 

Timcast has already weighed in on Cave's NY Times piece.  He has developed the concept of “Publication by Omission” in connection with Section 230.
There is a related concept called "steganography", a concern after 9/11, in which covert terror instructions might be hidden on hacked, low-volume websites.  I don't recall that this has actually happened. 

Friday, March 29, 2019

Congress Homeland Security committee warns about extending platform liability for terror content

Jason Kelly and Aaron Mackey have an important warning essay today (March 29, 2019) on the Electronic Frontier Foundation site, “Don’t Repeat FOSTA’s Mistakes”, here

House Homeland Security Chair Bennie G. Thompson (D-MS) wrote a constituent letter warning platforms that he might press for further weakening of Section 230 protections regarding terrorist content. 

The EFF article points out that filters don’t know how to distinguish advocacy from meta-speech:  urging a destructive behavior is not the same thing as discussion about the behavior.  It might be easier in some foreign languages than in English, where verb conjugation endings for the subjunctive mood make context clearer than is possible in English.
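The advocacy-versus-meta-speech problem can be seen in a toy sketch (the blocked phrase and the rule here are hypothetical, not any platform's actual system): a naive keyword filter flags a news report about an incitement just as readily as the incitement itself, because it has no notion of grammar, mood, or context.

```python
# Hypothetical single-phrase blocklist for illustration only.
BLOCKLIST = {"attack the server"}

def naive_filter(text: str) -> bool:
    """Flag text if it contains any blocked phrase -- no grammar,
    no mood, no context, which is exactly the weakness described."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

advocacy = "Everyone should attack the server tonight."
meta = "The indictment says he urged followers to attack the server."

print(naive_filter(advocacy))  # True -- correctly flagged
print(naive_filter(meta))      # True -- false positive: reporting, not urging
```

Real moderation classifiers are far more sophisticated than substring matching, but the underlying failure mode, flagging speech *about* a behavior along with speech urging it, is the same one the EFF essay describes.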

There is also a problem with literacy of human readers, who don’t understand that mention of something by an amateur speaker (when not representing an established non-profit) is not the same thing as advocacy.
There are those who urge, “why not just raise money for an organization to be your voice?”  Yes, really.  The Left is not ashamed to do this; non-identarian conservatives are generally more likely to see the need for forced solidarity as personally shameful.
We’ll have to watch for Homeland Security hearings on this problem.

Thursday, March 28, 2019

Facebook bans some forms of nationalism and separatism as essentially racist (FB's explanation)

Although I added a link to a previous blog post (March 20) on this, I think it’s useful to link to Facebook’s own blog post on its policy change, to regard (essentially ban) content advocating “white separatism” or “white nationalism” as indistinguishable from “white supremacy”.  

The title of the post is “Standing Against Hate”, and it links back to well-known prohibitions against certain hate (or terror) groups using the platform.  Twitter had announced a similar “purge” on Dec. 18, 2017.

Vox has a long article by P.R. Lockhart here. As an anti-tribal non-identarian myself, I personally have no interest in supporting the identarian group aims of anyone; but I am concerned about the principles underneath this regarding the credibility and objectivity of public speech. 
It is true that federal law (as administered by the DOJ) prohibits organizing for the purposes of criminal activity (whether drug or sex trafficking, money laundering, or actual terror or other violent crime, or even overthrowing the government).
So up to a point, banning “groups” or certain organizing by platforms (or even their hosting, as banned by most host AUPs) does make sense. 
The problem, however, is that typically social justice issues (even ones we now see as legitimate) tend to lead non-profits to try to organize and enlist everyone.  The First Amendment guarantees both free speech and free assembly, and in practical situations assembly and individual speech can sometimes come into conflict.  Individuals often turn to group organizing when they fall behind economically as individuals or families (e.g., Prager U on “Why Trump Won”).  Others who are better off will resent the pressure from others to join up.

So it seems very objectionable to say that PoC can organize and sometimes go to the edge with some objectionable goals, but “whites” may not – even understanding the historical context in the US specifically (slavery and segregation), as it might be compared with that in other countries (Germany, Israel, South Africa, etc).

Facebook says it had wanted to consider ideas like patriotism and nationalism as legitimate; but given US history, it could not do that with “white” issues specifically, given partly the history of privilege in the past, and the constant connection to probably unlawful “groups” (and incidents like OKC, Atlanta (1996), Charleston SC, and Christchurch). That position would seem to apply to Europe, where in some countries the problem (with respect to separatism) sounds similar (consider Poland, Hungary, “soft fascism”).  It is also a big controversy in Russia among the former republics.  The comparison to Zionism in Israel is said to be a canard.

There’s another particularly sensitive issue: population demographics.  For a number of years, right-wing publications have complained that white families (in Europe and in North America) don’t have enough children, with ties into the migrant issue. But there is a larger context.  In the United States, immigrants (even non-white immigrants from Mexico) often lower their birth rates and, when faced with the economic pressures of middle-class life, often delay having children in the same way.  The low birth rate issue can be seen in terms of increasing eldercare burdens for everyone (not just whites), as with Social Security and Medicare, and even the ability to find and hire caregivers. Low (native ethnic) birth rates also fuel anti-gay sentiments in some countries, especially Russia (even now with the politicization of “Eugene Onegin”).  This was a particular issue for me back in 1961 because I am an only child (the importance of lineage to parents). Some people will see the birth rate issue as racist, but it is not; it is more about basic economics.

So at a certain level, Facebook’s action, while understandable and probably not having much effect on most users, is still disturbing from a free speech perspective. It's a little concerning that it manipulates and redirects search results rather than even simply banning them. 

Wednesday, March 27, 2019

Will Europe's radical Copyright "Directive" eventually affect speakers in the US, too?

I thought I would share ThioJoe’s video “Europe Just Ruined the Internet”, that is, politicians who have no idea how it really works.

The Verge (Vox) offers some analysis by Casey Newton, as it predicts the Web will be trisected into Europe, authoritarian countries (like China) and the western hemisphere. This requires each country's mapping the directive into its own laws, one sovereign state at a time.  Hopefully it will take a few years. 
ThioJoe pretty much predicts the same thing toward the end of his video.   Especially galling is the idea that politicians think they are doing individual users a favor by transferring liability to platforms, which means on its face that platforms can no longer risk letting "ordinary peons" speak for themselves. (Maybe the politicians really do want to keep the right-wing separatists who would break up the EU offline.) 
One observation strikes me:  most platforms are expected to make their best efforts to prevent copyrighted material from being uploaded through their services.  Theoretically this applies to text as well as music, images and video.
“Best efforts” might include (rather than impossible filters) prescreening who is allowed to post at all.  That could start in the countries most affected (France could be one of the worst, and the UK, or “Whoops, England”, might not even escape).  In the worst case, only established organizations or companies could post. Or individuals might have to be screened – maybe with some kind of social credit system, which would invite political bias (especially from the Left). Or they might have to show that their websites support themselves – no more free content. Companies could be set up to offer paywall subscription bundles, and the bundles could screen and indemnify individuals who wanted to publish. But then consumers would have to know to look for them, so you'd have to set up free trials to be found, or some way for newbies to announce themselves. We're back into the loop that Axel Voss wants: most ordinary people no longer speak to the whole world, unless you can figure out some real entry points. 
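Why the filters are "impossible" can be shown with the crudest possible sketch: exact-hash matching against a registry of known copyrighted files (the file bytes and registry here are invented for illustration). Real systems like YouTube's Content ID use perceptual fingerprints precisely because exact hashing fails the moment a single byte changes.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of the file contents."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of hashes supplied by rightsholders.
known_hashes = {sha256(b"original song bytes")}

def blocks_upload(upload: bytes) -> bool:
    """Reject an upload only if its hash exactly matches a known work."""
    return sha256(upload) in known_hashes

print(blocks_upload(b"original song bytes"))   # True: exact copy caught
print(blocks_upload(b"original song bytes!"))  # False: one byte changed, missed
```

An exact-match filter is trivially evaded by re-encoding, and a fuzzier fingerprint filter over-blocks fair uses like quotation and parody; that gap is what pushes platforms toward prescreening the speakers rather than the content.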

The problem then becomes that, if this becomes widespread in Europe, platforms have every incentive to apply the same system in the US, Canada, and elsewhere.   Sites with poor performance would simply be shut down because of the unknown risk they create.

This also has political consequences (in the US, for example) which the Left wants: curtailing individual speech (which tends to be meritocratic and conservative – although the claims that this leads to white extremism are simply false) because it incorrectly thinks that will force more solidarity and give non-profits more control over what is said.  We’re already seeing some evidence of this, as Facebook boldly inserts requests into users’ timelines to run fundraisers for non-profits (which would destroy the integrity of their own speech).

And we don’t know how normal webhosting platforms (the old Web 1.0 stuff), where people pay to be hosted, would be affected.  Maybe their content would be blocked from the EU.  (Curiously, all my sites are available in China despite their politically subversive, libertarian, anti-communist and anti-identarian content.  Blogger is not available there, but I seem to get traffic on Blogger from China anyway.)

Tuesday, March 26, 2019

New Zealand's law jailing people for possessing even "the Manifesto" follows the debate on gun control; the idea of a paper trigger "weapon"

The dark news keeps coming.

Today, the EU passed its copyright directive.  I’ve explored the possible consequences on another blog, link here.  

New Zealand, as widely reported already, has made it a crime to possess a copy of the “manifesto” of the perpetrator of the March 15 attack in Christchurch.  On Monday, I wrote a post on my COPA blog comparing this kind of “possession” offence, in concept, to the normal laws that put people in prison for possessing child pornography.

But a more apt comparison might be for illegal possession of weapons, and in some sense in the US the first two amendments are logically linked.

New Zealand (as in a link on the COPA blog) claims that the “manifesto” included precise directions and locations for the attack.  When the attack was reported on the US East Coast late on March 14, I didn’t pay much attention at first, as I was preparing for a weekend trip.  By the time I got to Union Station on March 15 I realized, from various text messages, the magnitude of what had happened; I read the manifesto on my phone and then on my laptop, and, yes, I saved a PDF of it, which is now in my cloud.  The Document Cloud link now returns a 403 Forbidden, which means you have to be a registered and approved user to see it (which the New Zealand law allows) from anywhere in the world.  For example, you can see it as an actual academic or an employed journalism professional; you can’t see it as a self-declared blogger.   I plan to travel to Canada in May, and it is conceivable I could get into trouble if I don’t remove the copy from the laptop (and its cloud backup), because there has already been a legal case in Ontario.   One reason I read it and wanted a copy was to make sure there was no covert reference to me or my own work in the document.

This idea (of requiring press credentials) itself is troubling (it reminds me of a series of tweets by Ford Fischer establishing that you don’t need official press credentials to photograph the police). 
I take the NZ “censor’s” word on the claim that there were specific directions (I don’t recall reading that specifically).  So I can see how NZ could view this as a specific threat or blueprint for an intended event – even though the attack has happened, another one could be intended. 

It is illegal in any democratic country to transmit a threat or even a hyperlink to one (although in the past it has been unusual for people to get into trouble over “mere” hyperlinks). So there is at least the theoretical possibility that linking to it now would be illegal even here.
But NZ’s point is that the document is somewhat like a paper weapon, or (by curious analogy) a 3-D printed weapon (or, as an even more bizarre metaphor, a cryptocurrency paper wallet). If transmitted, it could lead to another incident. So possession of a copy is like possessing a weapon illegally.

After 9/11, there was talk that even ordinary websites could be targets for terrorist hackers wanting to transmit “steganographic” information to other possible attackers.  The NZ situation reminds me of that talk. 

There is also the question of whether a reporter, or even an amateur blogger, who receives classified information without asking for it (it simply arrives unsolicited) and then publishes it is committing a crime (in the US).    Anders Corr discussed this in May 2017 with respect to Trump here.

In April 2002 an old site of mine (whose material was later moved) was hacked at a sensitive spot (a discussion of nuclear terror), and the overlaid material might have been classified.  I have republished that in the past.  I did contact the FBI.  On at least three other occasions (the most recent in 2005) I received information that was probably classified, called the FBI each time, and did not publish it.  In 2008 I received information about a threatening situation in Africa but decided to publish it.

The freedom of self-defense is very important to a sense of personal identity for many people.  

Confiscation of weapons after an incident caused by “somebody else’s grievances” is a horrible experience. I certainly support closing all the loopholes (David Hogg has constantly battled the NRA over its attempts to hide these loopholes). In most cases (except for some unusual ones in rural areas) there is no legitimate reason for civilians to have military-style high-firepower weapons (or bump stocks – which have now suddenly been outlawed in the US).

But by analogy, NZ is claiming that words alone can sometimes amount to a weapon, or at least a trigger. So it makes some logical sense to control possession of these keys or triggers.  But that can have serious ramifications for freedom of users to post their own content eventually, as platforms have to be so wary of the unpredictable possibility that another violent video or trigger manifesto will suddenly appear.

We have a world where the freedom we are allowed – to express ourselves in individually tailored ways and even to defend ourselves – can be misused by others with catastrophic effects.  David Hogg has certainly expressed that idea with respect to weapons possession; but Hogg has used social media to advance his own agenda (as I have with mine), relying on a medium that could be largely shut down for public safety if the threat were compelling enough. The parallel should be noticed. This observation makes social solidarity, and even participation in tribalism, a practical necessity – something the far Left preaches and conservatives find shameful. Yet at some point we’re left with the possibility that, if something bad happens “to you”, well, it was just your karma.  We have no other answer that works.
(Tuesday, March 26, 2019 at 4 PM EDT)

Update: Wednesday, March 27

Matt Christiansen weighs in on this.  Evil exists. The Chief Censor is following the terrorist's script. 

Sunday, March 24, 2019

Pressure on platforms to censor content and deplatform some speakers drives everyone toward identarianism

James Langford, business editor of the Washington Examiner, writes “Social media companies under pressure to censor violent content,”  p 42 of the March 26, 2019 issue. 

He quoted New Zealand prime minister Jacinda Ardern as saying that (social media) platforms are “the publisher, not just the postman.”

But that goes against all the provisions in US law, like Section 230 and DMCA Safe Harbor, that enable user-generated content as we know it today to exist at all.  This statement is an existential threat to all individual speakers, confronting them with the need to let others speak for them through groups. It plays into identarianism and challenges (Ayn Rand-style) individualism.
The same battle happens now in Europe, especially with Article 13 (in yesterday’s post) because the EU is about to make platforms legally the publishers of the posts.  In the US, the users are legally the publishers.

Facebook and YouTube were overwhelmed trying to stop the spread of the “snuff” videos of the killings. Some measures, like one-minute delays on all live-streams, might help the algorithms stop the worst incidents.

But the Langford article reports that Congress will be looking at more regulation soon.
Little discussed with all this is the major difference in how social media platforms work compared to shared hosting, which users pay for, and for which there is much less censorship, although that has been changing since Charlottesville.

But if all platforms were truly publishers, no one could self-publish.  Everything posted for the whole world to see on an index would have to pass suitability for publication just as in the past with most books and magazines.  It could be expanded, but individuals would have to show social credit before being allowed to be heard as individuals.  Which is what a lot of activists want – to force everyone to take sides and fight.
New Zealand has even made “possession” of a private copy of the “Manifesto” a crime, as if it were child pornography. The analogy is not logical however; a text document with no images did not require a victim to be created. Speakers may need to have private copies to know if they were mentioned, in order to anticipate any legal problems themselves.

Saturday, March 23, 2019

Protests in Germany over Article 13; YouTube CEO predicted most YouTubers could be shut down by liability risk

The turmoil over the EU Copyright Directive goes on. Matt Reynolds of Wired gives an overview, published March 12, 2019, here. 

In the previous post, I linked a breaking story about a bloc, led by Poland, that opposes Article 13 (and probably Article 11) and could prevent the directive from passing in the EU Parliament on Wednesday, March 27.  People in the US will know what happened when they get up Wednesday morning, because of the time difference.

Reynolds indicates that there probably would not be another vote before the May elections, and that EU member countries would have about two years to implement the directive into their local laws.  It would appear that individual companies (especially the large platforms) would have to set themselves up with each individual country.

That would be like a company having to do the same in the U.S. for a state with unusual laws, which gives New York and California tremendous power.  In practice this hasn’t usually been a problem (although there have been questions about how Section 230 is applied in some states, and other concepts, like defamation in fiction, can vary among states).

It’s also obvious that the wording of the law is so vague and open to interpretation that in some countries somewhat opposed to it (like Italy), there might be little change in practice.  The countries vary a lot: Spain was quite strict on the link tax, out of protectionism not allowing publishers to opt out of requiring payment, and unconcerned that Google News would pull out. France is said to be one of the worst enemies of amateur speech, but it is so distracted by the yellow vest uprisings that it is hard to say what will happen. It’s unclear now whether the UK will be affected or whether it will leave the EU “in time”, which, ironically (compared to 2016), seems like a turnabout for free speech.

The obvious question will be, if the measure passes Wednesday morning US time (and given the Poland issue the outcome is really unpredictable now), what will US companies say to US users about it? 

The US media has hardly discussed it;  will it be shocked on Wednesday?

The Wired article links to several sources on YouTube, including posts by YouTube CEO Susan Wojcicki, like this one from Oct 22, 2018.   She bluntly states that Article 13 threatens to force platforms like YouTube to limit publishers to a handful of larger, trusted companies.  Does she mean only within Europe, or everywhere?  Would most vloggers in the US get shut down too?  Would my own videos embedded in my blogs go blank?  She then says that European users would lose access to most user-generated content, so I guess she really meant her first sentence to apply “within Europe”.  She should write more carefully.

I can think of solutions.  You could imagine intermediary companies, which might set up paywall bundles and actually manage the link taxes.  But they could also screen potential publishers before they are allowed to post.  You could require a user to pass a quiz on copyright (YouTube already has a “copyright school”).  You could screen for social acceptability – but then the new Internet publishing gatekeeper would be able to assign people “social credit scores”, as in China.  Imagine how free speech could be corrupted by requiring evidence of volunteer service or the ability to raise money for nonprofits.  Yes, this could happen.  Some people would think it was OK.  It would have to be thought out very carefully, and it would create a new fight.  The very idea that I can suggest it with a straight face makes me Milo-dangerous. 
Note that protests are starting in Europe, today in Cologne, Germany. They will spread.  How will this mix with the yellow vest riots in Paris?  What about other Brexit-like separation movements from the Right?  Most of the support for Article 13 seems to come from the corporatized mainstream moderate Left (Macron).