Sunday, June 30, 2019

YouTube, Facebook "hate speech" rules could conceivably shut down discussions of low birth rates among wealthier people, and we need that conversation


Is talking about population demographics now presenting an “unacceptable ideology” (as YouTube would define it)?

In recent months, a lot of us have become informed about the objectionable racial idea or conspiracy theory of the “great replacement”, which originated in France and was re-emphasized by Renaud Camus’s 2012 book.  Sometimes the theory also leads to anti-Semitism.  The hate-speech rules would bar presenting any ideology that allows one group to claim superiority over another, even as an abstract notion well known from centuries of world history.  Until Charlottesville and then New Zealand, I had barely thought about the concept as such at all.  It has no chance of coming about through any reasonable political process. 
    
However, for perhaps the past two decades we have heard repeated warnings about lower birth rates among wealthier people (who in Europe and the Americas are more likely to be Caucasian). The reluctance to have children leads to fewer workers to support existing retirees and increases the strain on Social Security and retirement systems.

The trend is largely motivated by financial caution and career development, but also by individualism, feminism, and gay rights. People who feel less “tribal” attachment tend to have fewer children.


In fact, the concern over lower birth rates has contributed to homophobia.  That’s obviously the case in Russia today (with the 2013 law).  But it also explained a lot of hostility toward homosexuals before Stonewall (and the tendency to conflate the problem with the Cold War and communism). 
  
In my own situation, since I was an only child, my announcement of “latent homosexuality” at William and Mary in 1961 was seen as a death penalty to the idea that my parents’ marriage would lead to a lineage.

People who live in relatively closed communities or “tribes” tend to have more children, but the men (husbands) especially feel more committed to lifelong marital intimacy if they believe that others around them have to follow the same moral rules of religious purity. These rules tended to demand a level of gender-conforming performance and communal risk-sharing among all persons in the group, the sort of thing that leads to drafting only men.

That’s a tough reality to follow.  Gay activists over the years have taken the intellectually lazy but culturally safe path of treating gender- or sexually non-conforming persons as members of separate (protected) classes rather than looking at what happens at a psychological level.

And now looking at what was really happening a half-century ago is almost forbidden by Big Tech policy, out of fear of instigating violence and fascism all over again. The radical Left wants supposedly settled questions taken out of the purview of free speech, which it sees as re-igniting the risk to its vulnerable protected classes. But, surprise, male-only Selective Service registration is not settled.

The Washington Post has a distantly tangential article by Hugh Ryan, "How Eugenics Gave Rise to Modern Homophobia", link. There is an odd twist in how the "born this way" meme works. 

Thursday, June 27, 2019

We are heading toward "certifying" independent content creators, but maybe that could turn out to be a good sustainable thing


It looks to me like we’re heading toward some kind of certification of “trusted” independent journalists (quasi "authoritative sources").

It sounds reasonable to me that YouTube could set up a separate subsidiary and let content creators apply for consideration with resumes (including technical equipment, certifications, education, paid work experience) and maybe a portfolio.

Many of the best ideas would be interview and panel discussion shows.  Content creators would be expected to have access to professional video interview studios. 

Facebook could also do this.

They could set up some sort of neutral (international) outside group to do the screening.

The end result would be that YouTube would be the “publisher” or “distributor” (like a movie studio) of the content and would lose Section 230 protections for this group (so it needs a separate trademark for this operation), but it would be able to offer opportunities to many more content creators than the traditional film, television, and cable industry as we know it now (including Netflix).  It might operate a bit like Netflix, offering bundled subscriptions to visitors, or offer pay-per-rental (like Amazon or YouTube Originals now).  It could also find advertisers who were willing to advertise on this kind of content.

Tim Pool today warned that Silicon Valley is quickly creating a "new nobility" of those who are allowed to be heard (partly because their business model now has to please woke activists), but at least a pseudo-certification could expand the "nobility" and include some vassals (as in Schoenberg's "Gurrelieder"). Twitter's labeling of otherwise unacceptable tweets from politicians (Trump), as well as other actions by Reddit and others, are examples of online "feudalism" (Washington Post story). 
  
In addition to interview programs, raw footage of events could be shown by creators with this business sub-model (although there would be limitations or prohibitions on including violent material from demonstrators, crimes in progress, and the like).

This operation would include pure “journalists” (like Ford Fischer/News2share, and for that matter Gary Younge from the UK) and “commentators and interviewers” (Dave Rubin, David Pakman, Tim Pool, Matt Christiansen) and could include people seen as controversial (Jordan Peterson, Milo Yiannopoulos), as well as technicians and educators or students (ThioJoe, John Fish) whose material is usually not political. (But now everything is political!)

David Pakman (who is monetized still) made a very important announcement today that is related to this idea, in my opinion at least.


This proposal would also help YouTube deal with Article 17 in the EU, which is gradually being implemented. YouTube had warned it might have to do something like this last October. 

It does leave open the question of “free content” of a political nature from “amateurs”, and I’ve talked about that.  For the time being, the rest of YouTube (not part of the Partner program), Facebook (similarly viewed), and other similar platforms (those that want to recognize international, not US, standards for free speech, including hate-speech prohibitions) would have to follow the TOS and censorship exposure as recently defined; given all the problems, it is only getting more restrictive for persons not officially pre-screened as "trusted".  

This idea does leave open the idea of a creator's earning "social credit" by some international standard, but I wonder if it's coming no matter what we do. It also would tend to tell "amateurs" (like me) that they have to join established non-profits and support them in a conventional manner to be heard.  The Left wants this -- more solidarity to protect its more vulnerable subgroup members. That could lead to my dropping out -- but I'm almost 76!  Actually, I have plans (through the end of 2021) on a WordPress site.   No one can become my voice for me. 

Tuesday, June 25, 2019

Sci-fi writer offers future op-ed about the end of user-generated content, and more



Science Fiction writer Cory Doctorow (who also writes for Electronic Frontier Foundation) has an op-ed from the future, which he says he should not have to publish in the New York Times. 

The op-ed suggests that soon social media sites and even webhosts will prohibit “amateurs” from talking about politics on their own.  Section 230 protections will be removed (and maybe DMCA Safe Harbor too – look at what happened in the EU with Article 17) and all platforms will be treated as publishers.  Self-publishing will be prohibited unless it can pay for itself with legitimate commerce (not patronage).
  
If you want to have a voice, you will have to join a “registered” non-profit and be willing to support things you don’t personally believe in, on behalf of other “oppressed” intersectional groups, in order to be protected yourself.
  
Some businesses want this, too:  they want door-to-door and telemarketing work and high-pressure salesmanship to be socially legitimate again.  I found that out in various unsolicited job interviews in the 2000s.  The Internet MGTOWs were destroying solidarity.  Even Economic Invincibility (Martin Goldberg) admits it.


Facebook’s prodding of people to run non-profit fundraisers under their own name publicly is symptomatic of this problem.

  
This is what the radical, authoritarian, “Stalinist” Left wants now – it wants to force people to join them.

I’ll turn 76 soon, maybe I will be gone by then, but my “soul” will still know from whatever Universe my afterlife takes place in.

Quillette has a similar piece online, “How Free Speech Dies Online”, by Daniel Friedman.    The article recognizes the stochastic problem with speech:  it isn’t just an individual piece of content, but the intentions of the speaker. That’s why YouTube has started banning ideologies, and Facebook “dangerous individuals and organizations”.

The small social network Ravelry, which appeals to knitters and people who make quilts and run quilting bees (like the AIDS quilts of the past), bans accounts of people who express support for Trump. Here is a slippery slope:  a social network practically demanding political loyalty to its candidates (NPR, Vanessa Romo).  My own cousin, who passed away in Ohio from ALS in early 2018, was very big on quilting. (The site does say "don't talk about it here", but it also says it can't be inclusive if it allows talk supporting Trump. It says "Support of the Trump administration is undeniably support of White supremacy." That just isn't true.  Eduardo Sanchez-Ubanell made a comedy video about dating a Trump supporter.  Is such satire a way of saying this statement by Ravelry is untrue?) 

Another group, "rpg.net", had banned talk of Trump, considering his administration "an elected hate group" (Timcast, 2019/6/24; also Vice). 
There is also a report from Project Veritas about an attempt within a major tech company to prevent a “Trump situation” with the 2020 election.  This video (“Machine Learning Fairness”) is on Bitchute and it can’t be embedded. 
  
 The video from Ben Shapiro on June 6 talks about YouTube’s “fuzzy line” and goalpost-moving and vulnerability to a “heckler’s veto”.
  
The Verge, in an article by Julia Alexander, explains why "demonetization" of big (possibly extremist or stochastic) YouTube channels "doesn't work". Carlos Maza already tweeted it. But so did I. 

Saturday, June 22, 2019

"Wired" article shows that Congress wants to expand FOSTA-like legislation to other problems like opioids and deep fakes


Christine Biederman has a detailed article in this month’s “Wired” detailing the history of how the “fibbies” seized Backpage, after a nearly two decade legal battle, right after FOSTA passed in April 2018. The link is here.  This has a free-article paywall (I have a print subscription).  The initial raid in Arizona was quite sudden and broke up a wedding. The title is “Inside Backpage.com’s vicious battle with the Feds”.
  

Section 230 was used to defeat most attempts to seize it.  At some point, Backpage began to “sanitize” ads to remove keywords and memes so that they would not appear to be supporting sex trafficking. It is arguable that such activity would have canceled their “moderation privilege” (or “good Samaritan clause”) even under Section 230 before FOSTA. It is certainly arguable that such behavior amounts to contributing to the crime intentionally (it is no longer “stochastic”).

The government also used civil asset forfeiture against Backpage to make it harder for it to defend itself legally in terms of financial burden.  Civil asset forfeiture is unusual in free speech cases with digital assets.

The long article warns about another state attorneys-general letter to weaken Section 230 with respect to other crimes, including opioid distribution, identity theft (another blog in my setup), election meddling, and a new problem, deep fakes.

Thursday, June 20, 2019

Hawley's bill on Section 230 provokes alarming podcasts and blog posts, including mine and Tim Pool's


A lot of talk in the past 24 hours.


First, a note on the video above: Section 230 relates to the CDA, not the DMCA.

OK, I gave a 100-minute billcast on “Stephen Ignoramus” yesterday, link

A little comment about my work on the EMP and electric grid problem was accidentally lost in editing. I would also add that I “announced” Hawley’s bill at the start of the video but forgot to really explain the “moderator paradox”: Section 230 does allow reasonable moderation and algorithmic manipulation of content. The problem is that the major social media platforms do so much of it that many view them as publishers.

I discussed a couple of predictions of mine:  that we will see YouTube selecting whom it wants not only to monetize but even to publish at all on its platform, partly because of the EU Copyright Directive issues but also now because “hate speech” (especially its stochastic nature on the right) is impossible to define in a way that can be moderated objectively.  I also explained that I need to keep my own speech; I can’t be forced into a situation where I have to work for somebody else first (and serve their ends).

Well, all of this follows one of my own more controversial posts from Tuesday, about my plans after 2021, as well as an earlier one there (May 30) proposing bundled paywalls as part of a solution (Stephen was already familiar with that).

This morning, Tim Pool made a particularly alarming post predicting that his channel will be banned from YouTube (although he could continue on Minds, Bitchute, etc).   He discussed the ban of Black Pigeon Speaks, which was reversed by pressure. The channel had no strikes, but was suddenly accused of hate speech by YouTube’s new standards. Pool reinforces the idea that YouTube wants to be more like Netflix, with content from already professional sources, and fewer “amateurs” competing with people who have jobs in media.  Pool introduced an article by David Auerbach, “The Coming Gentrification of YouTube”. 

Generally Google companies give plenty of notice when they plan major changes in policy which can affect creators. But YouTube seems to be moving erratically and suddenly.
  
YouTube has also made statements about “keeping people safe”.  Some minority groups are able to claim that their members are personally more vulnerable to attacks from radicalized persons  (and need much more of a zero-tolerance approach) than most other users, who will not see anything terribly wrong with the speech that is being taken down.

Wednesday, June 19, 2019

GOP Senator introduces bill to remove Section 230 protections from large platforms, unless they allow audits showing political neutrality


Senator Josh Hawley (R-MO) has introduced a bill to blow up the business models of social media companies, whereby larger platforms become liable for the content of their platforms unless they submit audits to show that they are “politically neutral”.  The bill is called the Ending Support for Internet Censorship Act.


It is not clear if this would apply to conventional hosting providers like the host for my Wordpress blogs.

Here is the CNBC story (Mary Catherine Wellons).

Makena Kelly has a story on the Verge.

Elizabeth Nolan Brown has a perspective on Reason here.
  
This story is so new that it will certainly change rapidly.  I will keep tabs on it. Tim Pool has a comprehensive response to it already.  He is largely right so far. At 13:20 he discusses putting Section 230 in the USMCA.



Update: June 21

Elliot Harmon of Electronic Frontier Foundation argues that the Hawley bill will be unconstitutional and quickly fail in court and provides background from the Prodigy case in 1995 that led to Section 230.  Remember AOL didn't fully open up Hometown AOL until Oct 1996, after Section 230 had been passed. 

Tuesday, June 18, 2019

YouTube appears ready to ban some subject matter altogether because of user literacy issues



Attorney Viva Frei discusses how one post on his video channel, in which he discussed the legal details of a deposition related to the Sandy Hook families’ lawsuit against Alex Jones, was removed by YouTube after first being shadow-banned.


Toward the end of the video, Frei admits that some topics might have to be off the table on YouTube and other user generated content in the future. But if so, then a journalist who uses the platform at all might call his own reputation for objectivity into question.

Here is a rather long Twitter thread from this morning. 
  
I had suggested that I looked at Redicetv and didn’t see anything obviously white-supremacist, just normal conservatism.

A couple of tweets followed about the idea that (1) I was willing to give possibly hidden or stochastic Nazi-sympathetic sites and YouTube channels the benefit of the doubt in order to protect free speech, but (2) I would not give Carlos Maza the benefit of the doubt in insisting that deliberately anti-gay channels be deplatformed.
   
The calculation seems to be that with (2) Carlos believes that some material like Crowder’s will tempt “enemies” or unstable people to attack more vulnerable LGBT people (trans and fluid and low-income and POC), even though the same people probably don’t threaten me as someone who is better off. Protecting human life takes a higher priority than protecting individualized free speech.  But if this is true, individuals like me must become more comfortable with functioning as parts of groups than on their own.

Monday, June 17, 2019

Harvard really shouldn't let social justice mobs drive its decisions


Robby Soave on Reason discusses Harvard’s rescinding of its admission to “conservative” Parkland survivor Kyle Kashuv. 

Two years previously, at age 16, Kyle had apparently indulged in some silly behavior with racial slurs on Google Docs.  It appears that he thought this chat was “private”. It also appears that social justice warriors, on both the far Left and far Right, dug it out.

Of course, as a private institution Harvard can do this lawfully, but I think that it is a serious mistake for universities to pay attention to information sent to it deliberately by “mobs”.  We are seeing Patreon all over again. 


It is also disturbing that Harvard invited Kyle to write an explanation and apology letter, and still rescinded admission.

I remember saying some pretty horrible things myself, particularly in ninth grade (age 14), which got me a rare call into the office.  Had that happened today it would have meant expulsion and being sent to an alternative school. Dr. Phil, in some programs back around 2007 centered on Myspace, would talk about “Internet mistakes” and warn that the teen brain is not mature enough to see around corners. True, some teens are mature enough now to invent anti-cancer tests or fusion reactors.  The best teens are more mature today than they were when I was growing up.

This is a very regrettable incident. Reason reports that right-wing mobs have tried to get David Hogg disinvited, and it’s hard to ignore the likelihood that Kyle’s association with conservative views contributed to the disinvitation.

David Brooks weighs in with a paradox on how we develop morality as we grow up. 

At this point, it's important to realize that this incident happened on a Google Docs study-group document that was not supposed to be published in public mode;  it was intended to remain private. We all know Dr. Phil's warnings, again, about seeing around corners. Maybe this would have been a matter for school discipline had it been caught at the time.  Private messages really should not become fodder for SJWs in the future. 

Zack Beauchamp of Vox gives a detailed explanation of how conservatives and liberals view this incident.  In general, the problem is that racism (and maybe sexism and homo/transphobia) is seen as such a structural problem that enhanced awareness of behavioral codes is demanded of everyone, even with private communications. 

Forbes Richard Vedder regards Harvard as an "embarrassment". 

Monica Hesse writes a piece in the Style section of the Washington Post (about the "Kyles" of the world) that might sound vengeful. What does happen now to Kyle? 

We need to pay attention to how the far Left, especially right now, acts as if it has found that combativeness works, rationalized by the need to protect its own tribe from random violence from “enemies”.
  
If you want to look at a great channel by a Harvard undergrad, look at John Fish (from Canada).  99% non-political (he seems to follow Jordan Peterson’s idea of individualism), but he has talked about the “attention economy” and the dangers of social media recently.

Sunday, June 16, 2019

Pinterest creates "privacy claim" risk for bloggers criticizing them (over pro-life position)


I won’t try to give all the details of the Pinterest mess. 

But a pro-life whistleblower was fired, after Pinterest marked a pro-choice group as “porn” internally and he exposed it, and then a Project Veritas video on the matter was removed for a “privacy violation” regarding one of the employees (???)  Even Tim Pool’s video on the same was removed for showing a clip from that violation.


There were also claims that the “privacy” violation was simply reporting someone’s name from a news story, as if any blog post which does that could be taken down, even if the name had been disclosed in an article already linked to.

A right-to-life argument ought to be in favor of abolishing Selective Service, for moral consistency.

I sometimes see anti-abortion demonstrators in front of an office building in Falls Church VA on Lee Highway.

Friday, June 14, 2019

EFF discusses big platforms with respect to transparency in censorship policies, post Voxadpocalypse


Electronic Frontier Foundation reports “Social media platforms increase transparency about content removal, but many users are kept in the dark when their speech is censored,” basic link , by Gennie Gebhart.  The series is called "Who Has Your Back?"
   
The article links to a table listing major social media platforms and rating them on (1) responding to legal requests, (2) responding to platform requests, (3) giving notice, (4) appeals notice, (5) appeals transparency, and (6) the Santa Clara Principles.

YouTube, Facebook and Twitter were reported as OK on Santa Clara Principles but all have had serious problems.


Oddly, Wordpress was not listed as complying with the principles. Vimeo was not listed as complying with anything.  

There seems little progress on YouTube’s responding to Ford Fischer’s situation with respect to allowing some monetization of live (neutral) reporting of critical historical events (as they develop) including more extremist speech.  

Perhaps YouTube believes this sort of material should be produced only by larger platforms, particularly for documentary film (HBO, Participant Media), even though that model no longer works well.  I would think those platforms will fund more documentary film on Charlottesville.

YouTube is very concerned about misuse of their material by less educated users for radicalization, being blamed for making profit off it. It also seems to be concerned about speakers who reject the idea of protecting people specifically because they belong to specific groups rather than on individualistic values. 
  
Reason has an important op-ed by Nick Gillespie, that muscular censorship is really bad. 

Thursday, June 13, 2019

"Deep Fake" videos could set people up for ruined reputations, even deplatforming


Ben Collins of NBC News demonstrates the potential “deep fake” problem with a video in which Bill Hader (SNL) impersonates Arnold Schwarzenegger, the transition starting at about 10 seconds into the video and taking six seconds. 


I have to say, I would not have noticed the transition.

Similar tricks have been played with Mark Zuckerberg and Nancy Pelosi.

This is said to be a national security or homeland security problem.

Tim Pool has pointed out that it would be possible to discredit someone by deep-faking their making an overtly racist remark. Social justice activists might really try to do this.
  
It’s interesting to wonder if libel and defamation law already cover this.

Wednesday, June 12, 2019

House Judiciary Committee holds hearing with legacy newspapers on the idea of a pseudo-link tax on social media companies (effectively)



The House Judiciary Committee opened hearings (June 11) on a request from the legacy news industry for an exemption from antitrust laws for old legacy media.   Cecilia Kang has the New York Times story here.
  
Newspapers want to be able to bargain for better prices when their stories are copied into social media.


Yet to me this sounds like a variation of the “link tax” going into effect in the European Union as part of its Copyright Directive.

Tuesday, June 11, 2019

The demonetization of independent journalists (over embedded toxic content) on YouTube is rapidly coming to a head, and could spread to other areas if not "contained"


Today, I just sent three tweets to YouTube executives as a formal reply to the “conflation” or “meta-content” problem posed  by Ford Fischer (News2Share) and other independent journalists when they report on protests and demonstrations and some people they film articulate “unacceptable content”, which would be embedded in the “meta-content” (that is, the reporting with embedding of toxic content from speakers or protesters at a public event).
  
Here is the Twitter link. I've also documented this in Minds ("'/jboushka"). 

We’re waiting to see how YouTube will answer.

Again, there are several elements.  The main problem is with demonetization; independent journalists can’t make a living at this, or the content can’t pay for itself, which in turn causes other problems we have already talked about. YouTube fears that advertisers will object to content that contains livestream of violent extremism even if posted for journalism.

And YouTube could reasonably be concerned that some viewers will not understand the idea of journalism and will be radicalized by the embedded speech anyway.  Imagine if the Internet had been available in 1933.

But YouTube especially seems caught off guard with not having thought through a problem like this. Sundar Pichai (CEO of Google) called it a "hard computer science problem".  Ironically, John Fish, a Harvard undergraduate with a video channel on undergraduate life with a tech major (computer science), while usually non-political, has dropped hints on his channel that these kinds of problems are coming for the whole industry. It's as if Ford had created a "final exam" problem for a university journalism course. 
  
The problem could conceivably become a concern on Blogger, and in general on hosted platforms. But generally text (as opposed to video) is not viewed as so enticing (although remember how the Christchurch “manifesto” was feared and quarantined in New Zealand).  The notion that certain “ideas” must be quarantined as “viruses” seems novel but maybe the point of some “stochastic terrorist” activity.

    
I am beginning to believe that the demonetization of many journalistic or political commentary-oriented YouTube channels (otherwise not supporting commerce) on June 5 is somewhat coincidental to the Maza-Crowder problem and may even go back to 2018 when Susan Wojcicki made an alarming statement about the EU Article 13.

There are important stories today on Twitter, too, about embedding other people's tweets in direct messages, and about little known ways to troll with empty honeypot accounts;  Twitter will address these but I did not get around to reporting about them as issues in detail today. 

Sunday, June 09, 2019

Discovery process in Damore lawsuit could have implications for social media; a new bill balances traditional media against big tech


As I got home from Pride last night, I found this interesting video from Tim Pool, that Google will have to go through discovery for a lawsuit from James Damore over bias in its employment practices.


It is certainly correct that “conservatives” are not a protected class (although in immigration law political belief is a social group, interesting distinction).  I don’t personally like doing things by protected class.  But that’s part of what the 14th Amendment means as applied today.

The important point is that discovery will examine Google’s and YouTube’s business models. This could call into question many of their products and how user-generated content is monetized further, even on this platform (Blogger), where I have concerns over the long term. 
   
Pool noted the problems of depending on ads to pay for content, and suggested making big tech platforms "utilities" where you pay for metered use.
  
 I have suggested "bundled paywalls" and there is some evidence coming my way that this idea (needing funding for a startup) is getting attention. 
    
CNN this morning mentioned a new bill, HR 2054, the “Journalism Competition and Preservation Act”, which reverses previous antitrust law and allows local newspapers to bargain collectively with large tech companies. There will be hearings in the House soon on this.  The News Media Alliance has promoted it.  The group notes that today only 20% of revenue for newspapers comes from subscriptions (down from historical levels) and that dependence on advertisers is getting as precarious for newspapers as for tech sites, since it is easily manipulated by extremist activists. 

Friday, June 07, 2019

Twitter simplifies its TOS rules; Vox seems to double down on demands on YouTube, putting embedded political content in more danger everywhere?


Two days after YouTube Purge 2.0, Twitter took a good step and simplified its rules, reducing the word count from 2500 to 600, story here

The rules, as stated, seem fair and neutral enough, and if applied as written, may show that Dorsey et al. take seriously their confrontational interview with Tim Pool last fall.  One of the more interesting rules has to do with election integrity. But this is not an existential problem for users, because the fears over election influence that had surfaced back around 2004 would be much more applicable to bloggers and vloggers.

The controversy over the YouTube purge continued Friday, as Vox seemed to double down on its “demands” of YouTube regarding GLBTQ+ creators, but in a way that seems identitarian and polarizing.

A piece by Aja Romano on Vox seems to object to the presence of hate speech even if embedded for journalistic report, at least when applied to Crowder’s case.  But that isn’t so much about argument as about comedy and parody, for which there are other precedents.

But YouTube had already carried out its sudden purge of monetization of controversial news videos even when justifiable by context.  Presenting someone talking about Nazism was seen as promoting Nazism because it seems gratuitous. But then, libraries still have “Mein Kampf”, right?  Or is it the quick access on the Internet that means that homemade journalism about extremism is no longer to be allowed, at least as a career? 
  
  
That seems to be the case for now, as Carey Wedler interviews Ford Fischer. Let’s hope that YouTube rethinks this again.  Without independent journalism, Nick Sandmann would still be wrongfully seen as a pariah, because the mainstream trusted fourth estate didn’t do its job and it took independent journalists to provide a new set of eyes on what they had missed.

A Rolling Stone piece by Matt Taibbi is even more blunt on the meta-content problem. 

Wednesday, June 05, 2019

YouTube announces a purge or demonetization of a lot of political content, amid controversy associated with a Vox journalist; Russian trolls may be using American blogs



 I was “on the road” today, and the mice played.

Seriously, YouTube (on its Creator Blog) announced a policy that will demonetize a lot of political content on YouTube and remove content that promotes “ethnic supremacy” (mostly white nationalism in the US and Europe). The blog post is “Preventing Harm to the Broader YouTube Community”. YouTube says it will also shadow-ban marginal content, such as claims that major historical events didn’t happen. Would Logan Paul’s satire about Flat Earthers get downgraded?

There is a lot of detail here.  Look at the Twitter feeds of Carlos Maza, Tim Pool, and Ford Fischer.  I’ll have to come back to some of this again.  There is a theory that Vox (Maza’s employer) is doing this to force YouTube to demonetize low-cost independent journalists who are seriously eroding larger established companies’ ability to make revenue and causing layoffs.

One of the problems is the idea that the only safe way to handle domestic extremism (especially white supremacist ideology) is to quarantine it, keeping people from talking about it in order to look smart themselves or to make money on their channels. Some of Maza’s tweets suggest he believes that merely giving an extremist a voice at all, like an interview on a news clip, will lead immature or mentally unstable people back into gun violence – and then we get back to the activism of David Hogg and his friends.
  
In fact, Ford Fischer (owner of News2Share) reports that his channel was demonetized almost immediately after the announcement, and one or two of his videos were removed. 

I personally think the public is safer if people know what has been said in a rally in a public place.
     
But there is an issue with defining “trusted content providers”, which, in the case of independent individuals, sounds like social credit: making sure that indie providers don’t behave in ways that make extremists feel they will get attention if they protest and propagate their views.

After the recent Facebook Purge 4.0 I feared a YouTube Purge 2.0 was coming.  It’s here.
   

Before the news about the Purge reached my cell phone (as I left a movie in Woodbridge), I had seen, at a Starbucks, an NBC story about the trolls at the Internet Research Agency in Russia. I’ve noticed that three of my 16 Blogger entities (Bill Retires, Bill on Major Issues, and Identity Theft) have much higher per-post access counts than the others, as confirmed by Analytics.  
     
That could suggest that Russian or even Chinese bots could be accessing them for use in their own internal propaganda, for example, low birth rates and social security as an issue transposed to Russia.  
   
This is disturbing.
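As a back-of-the-envelope illustration, one could flag blogs whose pageviews-per-post rate sits far above the median across all of one’s blogs.  This is only a sketch with entirely made-up numbers (the blog names match mine, but the counts are invented, not real Analytics data):

```python
from statistics import median

def per_post_rate(blogs):
    """Compute pageviews per post for each blog.

    blogs: dict mapping blog name -> (total_pageviews, post_count)
    """
    return {name: views / posts for name, (views, posts) in blogs.items()}

def flag_outliers(blogs, factor=3.0):
    """Return names of blogs whose per-post rate exceeds `factor`
    times the median per-post rate, sorted alphabetically."""
    rates = per_post_rate(blogs)
    m = median(rates.values())
    return sorted(name for name, rate in rates.items() if rate > factor * m)

# Hypothetical numbers for illustration only: (pageviews, posts)
blogs = {
    "Bill Retires": (9000, 30),          # 300 views/post
    "Bill on Major Issues": (10000, 40), # 250 views/post
    "Identity Theft": (4800, 20),        # 240 views/post
    "Films": (1000, 50),                 # 20 views/post
    "Books": (900, 45),                  # 20 views/post
    "Drama": (550, 25),                  # 22 views/post
    "Trademark": (360, 20),              # 18 views/post
}

print(flag_outliers(blogs))
```

With these invented figures, the three high-traffic blogs stand well above three times the median rate and get flagged; that kind of skew is what would prompt a closer look at where the traffic is coming from.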

Update:  June 6 (on "Voxadpocalypse")

The Verge (a Vox imprint) has an article by Elizabeth Lopatto discussing further actions YouTube could consider. 

YouTube has a page on advertiser-friendly policies.  YouTube says a content creator should voluntarily flag controversial videos for no-ads to keep his channel monetized. That means that a creator would need to create a lot of non-controversial content and earn ad revenue from that to pay for the controversial content.
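The cross-subsidy arithmetic here is easy to sketch.  Assuming hypothetical per-video figures that I am inventing purely for illustration (a flat production cost per video and a flat ad-revenue rate per monetized video), the minimum number of monetized videos needed to cover the costs of the whole channel, self-flagged videos included, works out as follows:

```python
from math import ceil

def monetized_videos_needed(flagged, cost, revenue):
    """Minimum number of ad-carrying videos n such that total ad revenue
    covers the production cost of all videos, flagged ones included:
        n * revenue >= (n + flagged) * cost
    Requires revenue > cost, or the channel can never break even."""
    if revenue <= cost:
        raise ValueError("ad revenue per video must exceed production cost")
    return ceil(flagged * cost / (revenue - cost))

# Hypothetical figures: 10 self-flagged (no-ads) videos, $40 production
# cost per video, $50 ad revenue per monetized video
print(monetized_videos_needed(10, 40, 50))
```

Under those invented numbers, each monetized video nets only $10 after its own cost, so a creator would need 40 of them to carry 10 demonetized ones, which is the point: the policy quietly demands a large volume of safe content to subsidize a small amount of controversial content.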

Again, some of the important policy or news-based independent channels do this for a living (for their owners).  I don't.  That creates even another set of problems that I have taken up before and will have to revisit again.  "Adsense" sometimes won't display ads on Blogger posts with more sensitive topics.

Here are the hate speech policies with examples.  Most of them are straightforward.  But there can be nit-picky problems.  For example, in the real world, being transgender has to be recognized as a medical issue before insurance will pay for treatment and surgery, which is what most liberals want. There are a couple of other logical fallacies there if you look for them.

(June 7: Note:  I just replaced the wrong link on the YouTube blog post: I had cached an older link.) 

Tuesday, June 04, 2019

Big Tech seems to welcome global regulation to keep out competitors, and might not need user generated content in the future the way it has


There is a lot of talk again about breaking up Big Tech with antitrust laws, but a more comprehensive explanation is probably the May 3 piece by Matthew Yglesias on Vox.   Elizabeth Dwoskin and Tony Romm have a newer discussion in the Washington Post, June 4. 

Yglesias points out that while a lot of it is about consumer prices and competition, and a lot more is about the unhealthy incentives for tech companies to mine consumer personal information, much of it is more fundamental.  You’re not supposed to be able to control the platform that hosts things (like media content from users) and then make the same things yourself.  The problem is partly that the “algorithm” model of driving people into common chambers blurs that line.

I actually think, as I’ve said, there are fundamental problems with too much content being free, and that paywalls are a good thing.  But we should offer the ability to bundle the paywalls so that users have more varied reading experiences and more balanced information sources.


“Kneecapping them sometimes helps them.” (Nick Thompson in this Wired video on CBS).
  
More regulation might, however, favor big companies and hamper startups and individuals, because of the controversy over layered speech (and over downstream liability, as in the EU with copyright).

In the meantime, Lior Leser is pursuing an FTC action against possible collusion among some tech companies and payment processors.
  
There is an even more fundamental problem: many more-educated people stay in echo chambers where others think in layers the way they do, but they may become fixed on certain ideas (like “free speech means all legal speech”, or, on the other hand, “minorities must be protected from hate”).
   
But there is a tendency for “the masses” to be unable to follow layered thinking on the web, and to think that bringing something up (even alt-right ethno-philosophy) in discussion is the same thing as promoting it.  So it is hard to define “toxic content”.

Today, the Electronic Frontier Foundation offers a striking page, “The Impact of ‘Extremist’ Speech Regulations on Human Rights Content”, by Jillian C. York.  This leads to another paper, “Caught in the Net”, by EFF, the Syrian Archive, and Witness, in response to the “Christchurch Call to Action”.  I’ll come back to the details soon.   We’ve still got a problem of “meta-speech”, where mere mention of something (ethno-nationalism) is seen as inherently toxic. 

Saturday, June 01, 2019

David Brooks presents Solutions Journalism Network as a bridge between talk and action (maybe for less sociable people)


David Brooks, the Canadian commentator who tells us how “to be good”, has a few recent commentaries that seem to point to the need for more singleton-like people (like me) to become re-socialized. 

The latest post (as of today) talks about trolls (I might even be one, given his flexible definition) and crybullies (which at first glance refers only to the ragebait social media users from the identitarian Left).

But the most important one recently seems to be the one on May 16, “The Big Story You Don’t Read About” with the tagline “Journalists don’t always cover what’s really going on”.

He talks about the Solutions Journalism Network, whose people respond to problems, largely locally, as what Brooks calls “weavers” (#WeaveThePeople), builders of social capital. 

 

I have to admit that for some time I have worked largely alone, without responsibility for anyone, which, according to some interpretations of Nassim Nicholas Taleb’s “skin in the game” theory, would mean I shouldn’t be allowed to keep my own individual voice online at all.  I’ll come back to that later; it seems like an extreme, ironically Marxist theory. Yet we’re seeing it in China already, and Big Tech is starting to think about “social credit” even here.  Should people be expected to engage in big community service projects (maybe even traveling and camping out at disaster sites) with nonprofits, among people they don’t know?  Should they just jump in, enlist, and take orders again?

Recently, WJLA in Washington DC reported on a case where a young boy needed a kidney transplant and, even with a donor organ, could not get one as long as he “lived in a hospital” (like in the recent film “Five Feet Apart”).  So one of his teachers (a male) became a foster parent and took him in. That is off the charts for me, but it is engagement at a personal level that was unimaginable in the way I was brought up.