Friday, November 30, 2007

News associations pressure search engines to adopt "Automated Content Access Protocol" (ACAP)


News associations and publishers are advocating that search engine companies adopt an extension to the “robots.txt” convention, one that is supposed to tell robots what they may and may not do with a publisher's content. The enhancement is called the Automated Content Access Protocol (ACAP).
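For illustration, a robots.txt file carrying ACAP-style directives might look something like the sketch below. The crawl directives follow the published ACAP 1.0 proposal, but the paths here are made up, and the full set of usage permissions (indexing, snippet display, time limits) should be checked against the spec itself.

```
# Conventional robots.txt rules, honored by legacy and ACAP-aware crawlers alike
User-agent: *
Disallow: /archive/

# ACAP-style extensions (hypothetical paths; see the ACAP 1.0 spec for the
# full directive set covering indexing and display permissions)
ACAP-crawler: *
ACAP-allow-crawl: /news/
ACAP-disallow-crawl: /premium/
```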

In recent years, book publishers have been concerned about the way the insides of books are readily displayed by search engines, and news organizations (like the Associated Press, Reuters, and the European Publishers Council) have objected to search engines' displaying pictures and significant blocks of content almost immediately after publication. Nevertheless, the AP has sometimes given permission for search engine companies to host specific stories quickly.

The AP has expressed concern that professional journalists risk their lives (or sometimes imprisonment in foreign countries) to provide stories, and that reproduction of those stories without permission can make their reporting economically unviable.

The Washington Post story on this item ("Publishers Seeking Web Controls: News Organizations Propose Tighter Search Engine Rules") is by Anick Jesdanun (of the AP) and appears on page A2, Business, of the Nov. 30, 2007 Washington Post, link here.

As covered several times on this blog, deep linking by bloggers and other sites and publishers appears to be legal as long as the quoting site does not “frame” the link as if the content were its own. The legal background has been covered several times before (for example, on Feb. 7, 2006, here, and on March 9, 2007, here). EFF’s main link appears to be here.
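To make the distinction concrete, here is a minimal HTML sketch (the newspaper URL is made up): the first form is an ordinary deep link, which sends the visitor to the paper's own page; the second “frames” the same story so that it appears to be part of the quoting site, which is where the legal trouble has arisen.

```html
<!-- Ordinary deep link: the visitor leaves for the newspaper's own page -->
<a href="http://www.example-paper.com/news/2007/story123.html">
  Publishers press search engines to adopt ACAP
</a>

<!-- Framing: the story is embedded so that it looks like part of this site -->
<iframe src="http://www.example-paper.com/news/2007/story123.html"
        width="100%" height="400"></iframe>
```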

Linking is essentially like a bibliographic reference or footnote in a high school or college term paper. Some news sites have clauses about “republishing or redistributing” stories, but facts by themselves are not copyright protected (in a few cases, though, they could be trade secrets or security classified). The writer is on safest ground when he or she adds perspective to the item, comparing it to other items previously published on the topic (as is accepted research). A writer’s own personal experience with an issue also adds perspective, although that can sometimes raise other issues (for employers, family, or others connected to the person who might believe they could be affected by the information or the connection); this was an issue in the Nov. 29 blog posting (below).

Thursday, November 29, 2007

Epiphany: All that Personal Stuff (or All that Jazz): constructors and self-instantiation on Web 2.0


I remember a particular epiphany. On a Saturday in early August 1994, I walked into a family restaurant in Sterling, CO (known for rumors of alien cattle mutilations), picked up a rag and had a burger. I vaguely remember now a story about some local soldier discharged under the new “don’t ask don’t tell.” As I walked out of the restaurant and got into my rental stick-shift car to drive to Scottsbluff and Cheyenne, I knew that I would write my book. I knew that I would put myself in the public limelight, whatever the consequences, over this issue. I felt that events earlier in my life (well documented on these blogs) justified it and could affect the long-term political and even judicial debate. Later that evening, at the motel bar in Cheyenne, I would actually meet a man who claimed to be gay and an LTC in Commandant Carl Mundy’s Marine Corps. Later in the trip I would circle back to Colorado Springs, given the furor over Colorado’s “Amendment 2,” and, within earshot of the Focus on the Family property, have lunch with another migrated friend who asked if I was out there to “run around.” There would be more “planes, motels, and rental automobiles” in the mid 90s as I burned vacation time “running around” the country digging up more stories (and mountain driving with a clutch).

The book would lead to the websites and blogs of today (and the screenplay treatments); the DADT issue would bud out into practically every other issue in the culture wars: the transition of a society based on family and tribal ties, based in part on emotion and automaticity, into one in which individualism and objectivism have developed new world views for younger generations. Economically, extreme capitalism and globalization would accompany the psychological transformations, banging up against international realities of the limitations of this planet and of the problems of being perceived as living off the sacrifices of less developed peoples. 9/11 would wake us up. Issues like terror, war, the draft, the idea of mandatory national service, global warming, energy shortages, pandemics, disasters, workplace risks (including recent reports on graveyard-shift dangers) and the wide array of individual human tragedies and misfortunes (now including eldercare demographics) portrayed in the media – all of these remind us that “burdens” are not shared equally and can in time threaten or undermine the “individual sovereignty” model of freedom – and they are all interconnected (ultimately back to "DADT") and sometimes seem to justify older notions of “public morality.” Variation in unelected responsibility does help explain some reckless behavior (the subprime mess, for openers). However, individual freedom should not be taken for granted, and everyone, regardless of how "born," should take some personal responsibility in addressing these problems, sometimes requiring "sacrifices" outside normal notions of a market economy (and not necessarily starting only when one has children).

By the end of the 90s, it was clear that something profound had happened because of Internet technology. Anyone with a good enough message could make himself (or herself) a celebrity in cyberspace, simply because the topology (or, in the language of advanced calculus, measure) of cyberspace turned upside down all the economic and other valuation notions that had previously run the bricks-and-mortar world. We had a “second life” that was not exactly “real life” but could certainly affect “reality.” Domain names could be hoarded and sold almost as if they were real estate. For one thing, by the end of the 90s, the pretense of “privacy” that was supposed to be protected by “don’t ask don’t tell” had been shredded by search engines. The world had morphed that quickly.

A sociologist would call this phenomenon “asymmetry.” One individual, because of a twist in circumstances (here, leveraging suddenly available technology for self-publishing, or “self-instantiation,” to borrow from Java), gains influence way beyond his normal competitive ability or what would seem appropriate for his social responsibilities. Shawn Fanning demonstrated this point when he created Napster, which, despite its period of legal copyright problems, certainly taught the music industry that it needed to change its business model quickly.
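Since the post's title borrows “constructors and self-instantiation” from Java, a toy sketch may help make the metaphor concrete: ordinarily an object comes into being only when other code calls its constructor, but a class can also construct an instance of itself and push it into public view unasked, which is the analogy to self-publishing here. The class name below is, of course, invented.

```java
// Toy illustration of "self-instantiation": the class publishes an
// instance of itself rather than waiting for someone else to create one.
public class Pamphleteer {
    private final String message;

    // The constructor: how instances normally come into being.
    private Pamphleteer(String message) {
        this.message = message;
    }

    // The class instantiates itself and exposes the result to the world.
    public static final Pamphleteer SELF =
            new Pamphleteer("One person, one domain, global reach");

    public String getMessage() {
        return message;
    }
}
```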

What I found was that one Internet domain, because of the effect of search engines (which started to become important in 1998), could have a real influence on political issues, if it was structured concentrically around the relationships among the issues and if it kept up with the news. It took just one person, very little money, no real profits, and, most of all, no organization or employees. Previously, political advocacy had always been organizational and usually partisan. People had always been encouraged to band together in “solidarity” and engage in single-minded campaigns to get their way. Minority groups tended to present themselves as “victims,” and GLBT groups tended to depend on superficial arguments about immutability because that had worked with race. People generally had depended on lobbyists or organizations to represent them and boil issues down to litanies of need; and often, people could not personally speak out on specific topics because of possible conflicts with the workplace (potentially legal) or sometimes family; there was sometimes a lot of social pressure to spin certain issues in a slanted fashion to manipulate political or social comfort. Now, there was the capability to present issues in a compact but structured fashion, for one person or a small company to “control” them, and to keep them deployed at low or no cost. In this state of affairs, politics could change forever. This might not bode well for the future of K Street.

It’s important to realize how quickly things on the web get indexed and found everywhere on the planet (and that can include even the tribal areas of Pakistan), even when there are umpteen million new blogs or sites created every day or week. Well-organized and presented material (it doesn’t have to be fancy – in fact, static content often gets loaded and found more quickly) gets found and has an impact.

In a world where book publishers exercised so much legal due diligence before going to press, it seems like an accident that “amateurs” were turned loose to post their wares on the Web with no real understanding of the possible legal consequences. In theory, intellectual property law (copyright, libel, right of publicity, etc.) works the same way on the web as in print, but in practice only defendants with deeper pockets are usually pursued (the exception is illegal downloading through P2P, which is a totally different issue, but arose with Napster, as above). Now, I do follow the expectations of i.p. law as normally understood in my postings. I fully credit sources, and I don’t target people. (I generally don’t publish the names of unconvicted defendants even though they were published in conventional newspaper stories first. I don't report on "private" matters of named celebrities, even though sometimes I can "see" them even in public places.) Much of my argument comes from personal material, most of which happened years or decades ago. With the passage of time, I believe, the value to the public of knowing about a historical incident that bears on an issue today (like the military DADT) can be considerable, while the practical effect on others involved in the incident (usually unnamed or given pseudonyms, though conceivably they could recognize themselves, or others could, from the circumstances) diminishes.

However, in the past few years, and especially since about 2005, we have heard a barrage of reports about people getting into serious trouble over their behavior on the web. It's true that the "wild west" mentality of the early web days, and the suddenly apparent capability (unknown before the mid 1990s) to broadcast content globally without much obvious personal accountability, set an example that some people misinterpret as an invitation to risky or harmful behavior online. It used to be that the most common problem was people getting fired for badmouthing their employers, who would then find the postings through search engines. Sometimes people tried to influence share prices with rumors and got into trouble with the SEC.

Then the nature of the problems started to expand as social networking sites and Web 2.0 became all the rage. Teenagers (and sometimes adults, especially women) got into trouble giving away provocative personal information and attracting bad people. Teens may say they want the public limelight, but often they do not fully understand the global nature of “fame,” are more concerned about in-crowd popularity, and see social networking sites as a primary social tool. Teenagers (and sometimes adults, as in the recent case in Missouri) would use the web to harass and tease each other. Sometimes people, in fun or to protest social convention, would post derogatory pictures of themselves (especially of drug use or underage drinking) and lose jobs, or at least job opportunities, when employers trolled for this. People, sometimes ignorant of libel or privacy law, would post derogatory information or pictures of others on the web, and employers would find that too. People began to confuse the concepts of social interaction with legitimate publication. Social networking sites have responded by helping and encouraging customers to whitelist their material (keep it restricted to known lists of users), but even so a lot of derogatory material would get passed around into the wrong hands, just as in Gossip Girl. “Reputation defense” became a new industry.

Legal battles have raged since the mid 1990s about Internet pornography (the CDA and later COPA, the latter of which I became involved with as a plaintiff because other material can be confused with pornography), but the creeping problem is much more subtle. Some lawyers call it “implicit content” – the “between the lines” meaning that visitors may attribute to material that they find out of context by searches on the web. Implicit content may concern the motives of the speaker, as they would be perceived by the visitor based on what the visitor is likely to know about the speaker’s familial, social or business position. A careless rendering of the concept could mean that a speaker is presumed to have a particular personal problem because he or she addressed a certain controversial topic in a posting that itself is legitimate and presents no legal problems in the usual sense. Socratic online “thought experiments” can cause real trouble if they are perceived the way Orson Welles's "War of the Worlds" broadcast was.

That’s why, for employers especially, the presence of “self-instantiation” on the web can present previously unknown problems, with many ambiguous and gray areas. It’s especially important in jobs where the person makes decisions about others, holds an unusual public trust (teachers), has to accept forced intimacy on the job (the military, fire departments, law enforcement), or has access to very sensitive information (intelligence). We may be coming to realize that “free speech” and the “right to publish” should not be identical concepts. For example, one can propose a rule that people in sensitive jobs should accept “professional” supervision of all their personal web activities or whitelist all of their material – the job would be the only “portal” to the public and the employer would have legal ownership of the person’s “right of publicity.” Another, more common-sense rule would be that one should not make a posting that compromises others' confidence that he can do his job, even if he has a legitimate political motive (he may have to consider resigning instead if the issue is compelling enough).

Some people say that I should “keep a low profile” unless I am willing to take the risks (emotional and otherwise) that others believe they must take (often for family). (Some have asked why I don’t have the guts to run for office rather than remain an Internet pamphleteer; then I’d have to be partisan and beg for money.) This notion is just unacceptable. I do what I think I have to do to contribute to my world in my own way.

I look upon that "before and after" 1994 moment in Colorado as the selection of my second career, a commitment to rationalize the content and process of debate (starting with DADT). A future job could force me into a “quiet period” of some sort, or major restructuring. Money matters (no more specifics here). But I won’t do something where the point is to manipulate others into buying something that I had nothing to do with. No “we give you the words” stuff – I want to “give” the words rather than “take” them. The resolution of my career must remain with content that comes from me before I can sell it. It does seem like I am pushing "the knowledge of good and evil," as I believe that what people know should not depend just on their familial, tribal, religious or even business associations.

Sunday, November 25, 2007

New ventures try to sell user-generated content, while many surfers lurk


Last June, at the Digital Media conference at the AFI theater in Silver Spring, MD, user-generated content was all the rage in the symposia and discussions.

Saturday, Nov. 24, the Business Day section of The New York Times featured a story by Evelyn Nussenbaum, “Publisher Gets Web Readers to Fill Pages of Its Magazines,” here.

The story first discusses Halsey Minor and his technology info site CNET. I have been familiar with it for a number of years, and am on the email list. Shortly after my end-of-2001 layoff and buyout retirement, I remember getting a lot of emails at home about the world of an information technology manager, about all of the security and productivity issues. A manager of someone else’s world was the last thing, then at age 58, that I wanted to be. I had always been an individual contributor. The tone of those articles, along with the message boards and discussions that followed, struck me.

The article goes on to discuss the collaboration with Paul Cloutier and several ventures, including JPGMag and Everyman. JPG encourages members and visitors to submit material (mainly photographs and visuals) that other members rate or vote on. The “best” material gets selected for a print magazine, which does appear to have some challenges staying in the black. Advertising in both the magazine and on the website is kept within certain bounds so as not to disturb or annoy visitors or readers. This could be a way for some people to get "published" by a third party without working with agents in the usual manner or making more conventional formal submissions. This is real publication of work offering royalties, not just payment-free publication of LTEs or forum commentaries.

Then, in today’s New York Times Magazine (Nov. 25), p. 26, Virginia Heffernan has a picaresque essay, “The Medium: In Defense of Lurking: Solitary Consumption, not interactivity, may be the best thing about the Web,” link here.

Actually, I used to think that was what “surfing” was all about, especially back in the 90s. In these Patriot Act days, you wonder if someone could get too interested in tracking the sites you went to by IP number – and there are some things that are patently illegal to have images of on your computer. It’s a much bigger issue at work, but it could be one at home, or on a family computer with other family members. Still, her article discusses a couple of varied issues, one of which is our loss of reading – especially literary materials – for pleasure. (I remember reading, in an English literature class at George Washington University, a Wordsworth poem about how poetry was supposed to give pleasure.) She mentions that professional actors (especially stage actors) have little time to read things other than the scripts they perform or audition for (even more so for Shakespeare-type actors – who at least can get as far as Marlowe), and that teachers (especially in the humanities) have little time, after grading papers and preparing lessons, for material that goes outside approved school curricula – even for themselves – and that creates certain tensions among teachers when outside influences (substitutes) come in the door. She also talks about how some “lurkers” insist on enforcing their constitutional right to anonymity when they blog (that’s a good thing in these days when employers mine Myspace and Facebook pages) – and that a New Republic editor got sacked for blogging anonymously on the magazine’s site, and that a foods executive got fired for promoting his stock anonymously on an investor trash board.

So it goes.

Picture: The new Washington-Lee High School in Arlington VA (I graduated in 1961), taken Thanksgiving Day.

Wednesday, November 21, 2007

Wi-Fi linked to autism -- something to this, or an urban legend?


The media has a story tonight: "Wi-Fi causing autism?" One immediate source is from John Biggs, and the "CrunchGear" link is this.

The study appeared in the Australasian Journal of Clinical Environmental Medicine, authored by Dr. George Carlo, and claims that exposure to Wi-Fi signals causes heavy metal ions (like those you study in high school chemistry) to be trapped in certain brain cells in infants or toddlers.

The Computer Weekly article is by John-Paul Kamath and is called "Wi-Fi linked to childhood autism." The link is here. There is a suggestion that autism in young boys has increased in the same time frame that wireless became widespread. There is also concern about living near "hot spots."

The story mentions another study at "Penn University" that disputes these claims.

The story was posted on AOL today here, and suggested that the parents in the study had not been adequately screened about drug use. This suggestion brought very angry comments on AOL from parents of affected kids.

I had mentioned autism in my posting about libertarianism yesterday, where it seems that our actions and behaviors and expressions have effects that we don't take into account -- a questioning of "consequentialism".

Distantly related is a story by Martin Fackler in the Nov. 18 New York Times, "In Korea, a Boot Camp Cure for Web Obsession" for teens with cyber-addiction, link here.

Is this real, or is it an urban legend? After all, "I read it on the Internet!"

Tuesday, November 20, 2007

Yes, it's getting harder to remain a libertarian


I wrote on July 30 that it is getting harder to remain a libertarian – and actually LP party people have been saying that privately ever since 9/11. Then, I wrote about this from a mostly personal perspective. Now some of the recent trends that accompany globalization make it useful to look at the “big problems” categorically.

The most important problem seems to be that the Western world consumes, per person, and pollutes out of proportion to population. It’s obvious that the competition from big countries with large populations (most of all China) who want our standard of living can produce huge political and security problems.

That concern decomposes into two overlapping areas: global warming, and oil consumption. Global warming, caused apparently by carbon emissions from developed countries, is likely to impact less developed parts of the world (floods in Bangladesh, droughts in Africa) most severely – meaning that the consequences aren’t apparent at once to the “perpetrators” but do impact the “victims” by social class. Of course, all of this gets said as we realize that nature alone can produce huge climate changes (as it has over hundreds of millions of years – that’s why there’s oil in the desert Middle East). Still, the evidence that man has contributed to the current situation is overwhelming, and so is the unpredictability of what may actually happen (what about the possible sudden loss of the Gulf Stream?).

The affiliated concern is oil consumption. It’s possible to reduce oil consumption while increasing global warming, and it’s possible that developed countries, when consuming fossil fuels, may pollute less (and emit less carbon) because of better technology. But it’s obvious that China and other countries are becoming competitive consumers of oil, and that Western consumers are open to condemnation from Muslim zealots who claim that their lands have been occupied for cheap oil. This sort of thing leads to tirades like that of far left-wing Venezuelan president Hugo Chavez, who recently threatened that oil would go to $200 a barrel if the US went on any more "redacted" military misadventures. ABC World News Tonight, on "A Closer Look," ran a comparative simulation of oil use once China has the same rate of automobile ownership as America (without re-engineering away from petroleum fuel).

Another area where “collective” forces reign is public health and the new concerns about pandemics. In the 1980s, the religious right tested the waters with the idea that private sexual behaviors (spreading HIV) could jeopardize the lives of the “innocent.” That died out, but in the past few years we are seeing new concerns about “healthy carriers” (or the moral issues associated with personal “typhoid Marys”) of truly communicable diseases. It’s a complicated picture. Some of it is related to overuse of antibiotics – again, an example where “personal actions” complicate future public health. But with most viruses and many bacteria, the practical reality is that people normally develop resistance as they go through life. A person can be sickly as a child and then have very little work or productivity disruption as an adult because resistance is built up over a lifetime of crowds, gatherings, bars, and intimacy. The problem is that some populations (kids, the elderly, the immunosuppressed) are vulnerable to infections normally tolerated when carried by others – and there are some sudden surprises, like the unpredictable behavior of “superbugs.” Even so, draconian government actions against those identified with TB seem hard to justify, since TB is normally not easy to transmit.

And on top of all this, there is the concern of the fatal pandemic: bird flu, or the emergence of a virus so novel that the world population is unprepared for it. The mysteries of the 1918 Spanish flu have been decoded, and we realize it can happen again. But it is not personal behavior in the West that is the issue (other than increased mobility, as with air travel); it seems that it may be agricultural practices in less developed parts of the world, where people live close to poultry and farm animals. But why hasn’t that been more of a problem in the past? Well, maybe it really has.

We find ourselves in a tug of war – can market driven technology keep us ahead of the threats? Why don’t we have an effective avian flu vaccine yet for mass use? One reason could be careless implementation of product liability laws. That’s a good libertarian argument. Technology can reduce threats and save lives. John Stossel makes that point all the time.

The social history of the past few decades has been the development of personal autonomy and individual sovereignty, at least for consenting adults. This has a tendency to leave a lot of people behind (those who are personally less “competitive” or more dependent on the socialization of others around them), and that leads to backlashes and demands that everyone be accountable to their own family and community (even if they don’t have their own children – and falling birth rates among the affluent are creating still another collective demographic problem with eldercare). The sociologist’s name for those demands is “public morality.”

An interesting sidebar to this could be the unexplained epidemic of autism (also Asperger's) in children (especially boys), about which an important letter appears in today's (Nov. 20, 2007) Washington Times. There are unproven theories about vaccines and toxins, but it's also possible that culture (and excessive infant media exposure) has something to do with it, making toddler brains decide to withdraw from what seems like gratuitous or unnecessary "real" social stimulation.

The challenges ahead (global warming, terrorism, pandemics -- as well as the usually overblown shocks presented on the History Channel's "Mega Disasters" series) may well force people to accept more socialization and local or familial interdependence than has been expected of the past couple of generations. We even hear that said now in conjunction with the possibility of running out of oil without a global technological infrastructure that can support globalization on “current sunlight” – Matthew Simmons (in his book “Twilight in the Desert”) wrote that we need to learn to produce more locally and trade and travel less (in contradiction to the ideas of Thomas Friedman’s “flat world”). But we heard that before, back in the “collectivist” 1970s with the oil shocks then, and we muddled through it, and personal freedom got stronger, along with productivity. Will that be true again?

Note: Feb 21 2008. Since this post was written, there has emerged more medical evidence linking autism and related conditions to genetics.

Friday, November 16, 2007

More major news sites offer a lot of free content


I’ve written about the previous “controversy” over deep linking, such as on Feb. 7, 2006 on this blog.

Today, Nov. 16, 2007, Frank Ahrens has a story on page D01, Business, The Washington Post, “Web Sites Tear Down That Wall,” here.

Ahrens discusses the trend for more newspapers and other online publications to offer more of their online content for free and depend on ad revenue. Some papers offer recent content free (even though you pay to purchase a hardcopy of essentially the same content). Most papers charge for full stories from “archives” that have aged a certain amount, but recently some papers have lengthened the free period. Some papers allow a certain monthly volume of stories (by email address or IP address) free, then require registration, and then, for a higher volume, require payment. A few may charge for internet-only content that is not available in print.
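That metering model (some free stories per reader per month, then registration, then payment) is simple enough to sketch in code. The thresholds and names below are invented for illustration, not any paper's actual system.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a metered paywall, assuming hypothetical thresholds:
// 10 free stories a month per visitor, 30 with registration, paid beyond that.
public class StoryMeter {
    enum Access { FREE, MUST_REGISTER, MUST_PAY }

    private static final int FREE_LIMIT = 10;
    private static final int REGISTERED_LIMIT = 30;

    // Keyed by email or IP address; monthly reset logic omitted.
    private final Map<String, Integer> viewsThisMonth = new HashMap<>();

    public Access recordView(String visitorId, boolean registered) {
        int views = viewsThisMonth.merge(visitorId, 1, Integer::sum);
        if (views <= FREE_LIMIT) return Access.FREE;
        if (!registered) return Access.MUST_REGISTER;
        if (views <= REGISTERED_LIMIT) return Access.FREE;
        return Access.MUST_PAY;
    }
}
```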

An issue arises for papers now because much of their content is found by search engines, in what we call “sideways” access. That means that the visitor misses many of the intended ads. Bloggers (like me) often give direct deep links, as a practical matter, to give the visitor immediate verification of the truth of the comment, and immediate access to more details behind the story. (It’s still like footnoting.) I usually try to give the page and section and exact date of the print version, if known. It would be possible to give only the root link and then quote the exact URL without linking it (to encourage a visit through the paper's home page), but this seems silly and unproductive. Newspapers can, of course, try to target their ads to the content of each specific page, as there are plenty of software products in the advertising market that do this.

Since I live in the DC area, my radar screen often starts with The Washington Post and Washington Times, but I try to look at as many sources as practical. I have lived in Minneapolis and Dallas and am familiar with the publications there, as well as The New York Times and WSJ (discussed in Ahrens's story). AP, Reuters and other wire services often provide stories to many papers, which often can be carried only for a couple of weeks. I try to link to the originals if I can find them, but sometimes I cannot.

Some papers will defeat deep links by redirecting them to the home page, in order to force the visitor to see all the ads and go through their own search engines. The Washington Times has done this recently, and I find that even with their own search many of their older stories have disappeared (they don't even show up to be purchased with a credit card).
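Mechanically, a paper can do this with a referrer check in its web server configuration. A minimal Apache mod_rewrite sketch (with a made-up domain) might look like this; the effect is exactly what the paragraph describes: the deep link still "works" but lands on the front page.

```apache
# Hypothetical .htaccess rules: visitors arriving at a story page from
# any outside site get redirected to the paper's home page instead.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example-times\.com/ [NC]
RewriteRule ^stories/ http://www.example-times.com/ [R=302,L]
```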

Of course, other Internet businesses have gone to the ad model, including major ISPs like AOL, and this has been traumatic for employees.

Picture: From Sharpton's demonstration in Washington DC today, which I went to and covered myself (see Issues blog).

Thursday, November 15, 2007

Why "don't ask don't tell" is still a galactic-central issue


I recall in the mid 1990s that a trial attorney from a major Washington DC (Pennsylvania Ave) law firm told a GLBT audience that the military gay ban (aka the “don’t ask don’t tell” policy) was one of the most important issues the community faces, and that it could potentially become one of the nation’s most important national security and public policy issues. That is so despite the fact that the number of people affected is, compared to many issues, relatively small (although close to 12,000 people have been discharged since 1993 under the policy).

Some of the effects of the ban are practical. The military often provides careers and college educations to those who are otherwise disadvantaged, and the tuition recoupment cases are particularly troubling. From a security perspective, the ban seems to be causing the military to lose scarce language and medical skills.

But the biggest concern is somewhat more symbolic. The ban, at least until studied in detail, implies to the public that GLBT people are not fit to do their share in defending the country, a point that has become more important again in a post 9/11 world where we cannot take our freedom for granted. If they are not fit to serve, why should they have equal rights in other areas?

I came of age when we had the Vietnam-era military draft. I remember the controversy about student deferments, which were eliminated with the lottery in 1969. I also remember the moral dichotomies: it seemed that a draft penalized the healthy and fit. It penalized men and assumed that they had to prove they could protect women and children before they had rights to their own lives. A lot of this kind of thinking started to recede after Nixon ended the draft around 1973, but since 9/11 we have heard a lot of talk about restoring it and about how the “sacrifices” should be shared equally. It gets expanded into discussions of universal national service. You wouldn’t need formal conscription (although the Selective Service System is still intact and ready to go); you could have a system with much stronger carrots, and some leading Democrats are pushing that angle. At least that answers the objection of “involuntary servitude.” Still, the obvious question is, what about gays? “Civilian” service only? What about the Peace Corps in Muslim countries?

That brings up the central justification for the “ban” as it emerged in the 1993 debate: that it was necessary to protect the “privacy” of straight soldiers in conditions of forced intimacy. That also led to discussions about unit cohesion. I had an upside-down relationship with this whole debate. I had been thrown out of William and Mary as a freshman in November 1961 after telling the Dean of Men that I was gay, and it was assumed that my presence and knowledge of my predisposition threatened the delicate “masculinity” or morale of normal boys. (Although, after a stay-at-home college education and dorm graduate school, I would personally serve in the Army stateside for two years without incident, not exactly openly.) That’s similar to what the military was thinking. But we run into forced intimacy in other occupations: fire departments (this was a debate in NYC in the 1970s), law enforcement, intelligence. It seems that these concerns have not been as serious in those areas. One wonders about medicine, nursing, and even teaching, where at work one might have to attend to the custodial needs of a non-consenting adult.

To its credit, when Congress passed the 1993 law, it tried to say that the military is “different” from these other areas. Nevertheless, the message was clear: gay people, men especially, should not be trusted in some intimate same-sex environments.

I even thought at the time that some sort of more “humane” kind of “don’t tell” could work. After all, back then, the concern was “having a life” (we called it a “private life” and civil libertarians talked about “private choices” as “fundamental rights”) and not having to bring it into the workplace. This debate was going on just before the general public caught on to the Internet, and in a few years search engines were making everything public, to the point that public openness and candor became a virtue that seemed more important to many people than “privacy”, even in living quarters or in barracks.

It is true, there has been steady improvement in defense-related civilian employment for gays. I applied for a top secret clearance when I had a civilian job in 1972, and the outcome was indeterminate. By the early 1990s, the situation had become much better, and in 1995 the Clinton administration issued an executive order that supposedly protected gays in security clearances and civilian defense employment.

I was in an odd position, working in the 1990s for a company that specialized in selling life insurance to military officers when I decided to do the book. I was not comfortable with this, but fortunately the company was acquired, and I took a position with the acquiring company in Minneapolis. Still, this raises “moral” questions. If I am unsuitable to take up arms for my country, should I make a living off of it as a civilian? There are some people who would say no.

We get to the recent issue of ENDA which, along with hate crimes legislation, represents a practical advance in rights for many GLBT people, but it is far from perfect because of the various exclusions (transgender, military, disparate impact, etc.). But, really, we come back to the question of sharing responsibility.

It’s interesting how history tracks. Clinton’s attempt to lift the military ban came up after the “first” 1990-1991 Persian Gulf War, which represented more of a threat to security than maybe many of us realize today. The war showed (just as World War II and Vietnam had -- indeed, the Army had stopped “asking” by 1966), that the military can look the other way on gays when it needs people deployed in real combat. It was about this time – when there was real need – that security clearances were being handled more fairly, too. And, to close the circle, the American success (in the Persian Gulf War) then may well have helped contribute to the pressure on the Soviet Union at the end of 1991, leading to the collapse of old-style communism. And now, elements of that War helped set the stage for 9/11 and our situation with Iraq, Afghanistan, etc. and Al Qaeda today. (I leave the conduct of Bush on Iraq for another time.) All of these historical trends relate. And, by the way, this was all accomplished with a volunteer military, until Bush started the repeat Reserve and Guard deployments to Iraq and the “back door draft” leading to calls for a real draft again.

Not long after Clinton raised the issue of the military ban, gay marriage (at least in Hawaii and then Vermont) percolated as an issue in the mid 1990s, to explode with the Massachusetts decision in 2004. But even in the 90s we were starting to see “shared responsibility” as part of the debate. Here the domestic responsibility includes raising the next generation (the constant call for adoptive and foster parents, the publicity over the teacher shortage and “no child left behind”) and caring for the last one (the demographics of eldercare with a lower birthrate, and how filial responsibility could come to affect GLBT people). All of these issues are interconnected, like routes in a graph theory problem.

It's encouraging that individualism has held together somewhat even after 9/11, with Lawrence v. Texas (2003) and the successful outcome in getting COPA declared unconstitutional at a district court trial last year. The political pressure to lift the ban through democratic legislative processes, given how things are evolving (and the evangelical political meltdown after all the scandals), makes Marty Meehan's bill to lift the ban seem credible, when a few years ago it sounded like a fantasy (constitutional challenges to the ban have not fared well in appeals courts because of "deference to the military"). A good blockbuster film on it could help seal its demise.

In the period after Stonewall, the real gains were in being able to lead our lives within reason, at least in larger cities. In the 80s, many of us fought for our lives. In the 90s, the thinking changed, away from the emphasis on protecting private choices to the importance of full equality because equality is an important part of sharing responsibility (and sharing psychological “risk taking” too). In general, full public equality is important especially in situations where one needs to be counted on as an authority figure or role model (like a teacher). All of this only became clearer after the 9/11 events. The sensitivity of so many people to having their feelings protected when they make and try to keep marital commitments and raise families, while other people "tell" and speak with such candor on global media never before available, seems to be a major stumbling block.

There is an LCR-SLDN announcement today about a display on the National Mall in Washington around the end of Nov. 2007, discussed here.

Sunday, November 11, 2007

Loyalty to blood used to drive "religious morality"


A couple days ago I wrote (on the issues blog) about stories about the decline of evangelicalism as a political force – itself a questionable topic.

One aspect of this really stands out. That is, the way many people assume that loyalty to blood family and living as part of family is itself a primary moral virtue. When that is so, then lifelong monogamous marriage and children encapsulate the carrying out of this moral obligation. Family responsibilities that go with procreation are no longer “burdens” – they become “assets” because they help someone carry out responsibilities that already exist. One is expected to continue lineage as a basic biological responsibility, or (if unable because of some circumstance) provide close support to other family members who do. Marriage and children often provide men with social validity.

This mindset recognizes that "station in life" often tracks to family circumstances (which may provide some protection from external events beyond a person's control), and that parents and other family members are owed something back for the achievements and advances of their children. In the legal area, this view could come back if some states start enforcing their filial responsibility laws.

This moral view has been replaced over the past few decades with individual sovereignty and with a moral system based on harmlessness or non-aggression, and consent. This works best for people who define their own purpose before forming romantic commitments to others. It does not suit well the many people who need the experience of family for meaning and support; hence the tension of the culture wars.

People who do not marry and have their own children sometimes find that others expect them to “sacrifice” for those who do (as with the use of benefits and unpaid overtime in the workplace), and sometimes those with families don’t perceive this as a “sacrifice” – it is just part of a natural moral order. This may become more of an issue as people live longer and have fewer children, meaning more childless people will wind up with heavy eldercare burdens, and we might find filial responsibility laws in many states being enforced.

Philip Longman wrote about this in his 2004 book “The Empty Cradle” and notes a common, village-like dependency that we all have “on the quantity and quality of other people’s children.” If family responsibility is seen as an obligation that exists before one has children, then there may be more reason to have one's own children (marrying even at some risk of later divorce) even before resolving one's other plans in life. At the other end of the scale is the socially pejorative word "breeder".

People vary as to what makes them tick ("different strokes for different folks"). Since human beings have complex social and political structures and subtle variations among individuals in brain wiring (the "immutability" debate), some people find a cultural legacy more important personally than a biological one. Sometimes there is social tension over this; at other times there is resolution within the personality. Apparently Leonardo da Vinci did not beget children, and probably Beethoven didn't either, though Beethoven jealously fought to care for his orphaned nephew. J.S. Bach, on the other hand, had many children.

All of this is not to say that freedom and passion don’t happen within the confines of older views of familial morality. But this view makes people vulnerable to jealousy, and to the political, religious, economic and patriarchal manipulations of others – well documented in soap operas.

Perhaps an underlying moral principle is supposed to be, "Don't expect to be heard from until you establish an immediate value to other people in a real way." That sounds anti-objectivist, to be sure. Usually, but not always, this sort of socialization appears as a developmental process in childhood and adolescence, and it may be resisted when the young person is expected to care for others on terms that make him or her uncomfortable about his or her own competitiveness and station in life. Nevertheless, it seems, it must happen. (It doesn't make somebody "grow up straight," the religious right's beliefs or convictions notwithstanding.) There are a couple of anecdotes -- about almost compulsory sibling responsibility, and then about family solidarity in the wake of rights being taken away (from the Japanese Nisei families during World War II) -- in English teacher Erica Jacobs's column in the DC Examiner today (Nov. 12, p. 29), "When essays tell a story," here.

Thursday, November 08, 2007

Italy/UK roommate case underscores concerns about social networking profiles


There have been various news reports today about a young woman who apparently confessed to having been involved in the killing of her British roommate in Italy. Since she has not been convicted and the case is in its early stages, I’m not including her name here for search engines. I might later. That’s part of the whole problem of ethical Internet behavior and reputations, as well as of hard evidence in a criminal case, as we see even with this case.

What’s interesting, at least a little, is the reporting about the young woman’s Myspace profile. Apparently a young man held in the case had just commented on one of her blog posts.

The profile is now set to private, and Myspace says that the owner would have to add me (or anyone) as a “friend” to see it (“This user must add you as a friend to see his/her profile.”). The same goes for the attached blogs and comments.

All of this gives empirical suggestion of a couple of things. Although Myspace is said to be much more “wide open” to the public than Facebook (which is predicated on pre-existing affiliations or groups, often schools), in practice it appears that Myspace gives its users plenty of opportunity to keep content restricted to specific known lists (“friends”), a practice that we can call “whitelisting.” This practice has been recommended for people in jobs where public “reputation” or credibility is critical because the job involves leading or making important decisions about other people, or representing a company publicly.
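As a minimal sketch of what “whitelisting” means mechanically (invented class and method names, not Myspace's actual code): the profile keeps a set of approved viewer IDs, and anyone not on the list gets the “must add you as a friend” refusal instead of the content.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of profile whitelisting: only approved "friends" may view.
public class PrivateProfile {
    private final Set<String> friends = new HashSet<>();

    public void addFriend(String userId) {
        friends.add(userId);
    }

    public String view(String viewerId) {
        if (friends.contains(viewerId)) {
            return "full profile content";
        }
        return "This user must add you as a friend to see his/her profile.";
    }
}
```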

Obviously, in this case at least, the material from her profile became easily available anyway to the police and other parties concerned with the case. That corresponds also to media advice, to the effect that one shouldn't assume “whitelisted” profile information doesn't get around. Information on such profiles tends to behave like “rumor” – somewhat privately held but “unclassified” information – did in all the years before the Internet, when social norms were set by the bricks-and-mortar world (as in plenty of Hitchcock films -- this case is unraveling like a Patricia Highsmith novel).

Here is a typical story on the Seattlepi blog, link:

Seattlepi has another entry today (link) that starts with the warning “If there is an unwritten Law of Internet Privacy, it is this: Anything you post can and will be used against you in the court of public opinion,” and goes on to discuss the idea that even semi-private information online can become self-incriminating, and that we, as a culture, have little experience in ethical behavior in this regard. Echo.

My own Myspace profile is minimalist and is here. I believe that the visitor can see it without logging on (it has not been made "private" or whitelisted). If someone can't access it, let me know.

Update: Nov 9


The NBC Today show discussed the issue of teenage and college girls seeking "fame" on the Internet with racy pictures and depictions of binge drinking and drug use and sometimes nudity, and the commentators said that the need or craving for instant global "15 minutes of fame" is something "we have never seen before." Again, the show emphasized that most social networking users have little concept of the long term consequences -- as with employment.

Along these lines, it's worth noting that Sony Screen Gems (dir. Gregory Hoblit) will, in early 2008, offer a movie "Untraceable" in which visitors to an Internet website actually cause the death of a victim by clicking on a site. (Don't confuse with M. Night Shyamalan's "Unbreakable").

Update: Nov. 16, 2007

ABC Nightline covered this case and mentioned also a book by a local Italian author with a similar story. Again, art predicts life.

Sunday, November 04, 2007

Journals, diaries, weblogs: documenting changes in social values


Whenever I subbed in a high school English class, it seemed that the first item on the lesson plan was always ten or fifteen minutes of journaling. Students kept not-too-personal “diaries” that teachers would check for progressing writing skills. Sometimes the quality of what gets written is quite good and the content significant. Indeed, at least one major publishing event (“Freedom Writers”) resulted from journaling in California inner-city schools.

Journals do give the reader a street-level feel for what is really going on in a culture. Fifty years from now, I would guess that some of them could be valuable literature, to tell future generations what really happened to real people in a historical period (this one). After all, many literature anthology reading assignments are prose narratives of life in colonial or frontier America or some period in England. Usually the student needs some background in the social history of the period to appreciate the narrative.

The modern incarnation of the diary is, in part, the blog. Most of the better ones are detailed looks at some facet of some issue, often spiced up with some personal narrative that somehow links to the issue. The mainstream, professionally credentialed press, having to be all things to all people, can’t get to the level of nuance and intimacy an individual may reach in a blog (even though professional journalists write “professional” blogs about specific topics, like computer security, and everybody knows about political blogs tracking candidates).

That gives some insight into a blogger’s motives. I, for one, want people to know what is really going on, and I want future generations to know what really went on. I also want people to get beyond the obvious “self interest” when dealing with issues. Lobbying and advocacy organizations must, out of practical necessity, go for the lowest common denominator and present ads (and encourage form letters to politicians) making points in the simplest manner. Most modern politics and a lot of modern business, sad to say, must engage in manipulative sales efforts that contradict the notions of critical thinking we are supposed to learn in school (including the mandatory four years of high school English). Even many legal concepts (especially trademark law, with its "brand confusion" concerns, and even right of publicity) are founded on an expectation of public reactivity without capacity for critical thinking.

A blogger’s motives have become a serious concern for employers, families, and possibly even the legal system. As I’ve noted, social networking sites have contributed to a culture that presents online self-presentation as a way to get people to do things (first, interact, possibly in person, with the speaker). The immediate “benefit” to the speaker becomes more important than the content itself (especially with "free" or minimally revenue-generating content outside industry and guild norms). Employers then tend to look at a person’s online presence as a clue to what makes the person tick rather than for the objective merit of the content. This is a development that we really did not foresee until a few years ago, and the results have sometimes been costly for some job applicants and employees, sometimes unjustly. I don’t think it was even intended by Mark Zuckerberg and others who developed online social networking; it’s just a real-world result of a huge social experiment.

In my own weblogs, I’ve been documenting several trends that I have noted, developments that can have a surprising effect on our notions of individual liberty. One of these trends is what I’ve just noted, the way personal sites have been perceived as a social rather than a publishing phenomenon. That gets into a murky area of intellectual property law called “implicit content,” which over time may turn out to be more important than censorship itself. As the COPA (Child Online Protection Act of 1998) litigation moves into its ninth year, I’m struck by how lawyers are paid six-figure salaries (maybe not that much) to draft arguments that rehash the same things over again, and miss the real point: that technology and society are changing faster than we can predict. Bloggers can do this on their own better than professionals can – until bloggers get jobs where they can no longer say what they want in public.

One of the trends has to do with the way we perceive personal autonomy vs. “personal responsibility.” In the past few decades, we've gotten used to thinking about responsibility for the results of choices. Earlier generations accepted a more collective view of the concept, modulated by the family and group. What we’re having to deal with, in part, is that some responsibility for others comes at us and we have to accept some of it whether chosen or not. Family responsibility doesn’t always start with procreation. The demographic changes – longer life spans and lower birth rates – are making eldercare a moral issue for some childless people. We could come to the point of viewing filial responsibility as fundamental the way child support is. We’re not there yet, but we could well be heading in that direction. The major media hasn’t yet faced the impact of eldercare on the childless. Inevitably, it will have to, but you’ll see it first in blogs.

My venture into all of this started in the 1990s with the “don’t ask don’t tell” issue for gays in the military. What makes an issue like this (as well as the marriage and parenting issues for GLBT people) so pivotal is the returning notion that some responsibilities (and even sacrifices) have to be shared to defend freedom. The “greatest generation” that won World War II understood this, but in the 60s the “me generation” of boomers started to lose focus on it. But it is coming back with a vengeance.

Update: Nov. 7, 2007


NBC4 had a story tonight on making money through blogging, and for now I've quickly incorporated it into my blog on network neutrality (which would affect it) here.
However, I expect to look into this in more detail and probably post more information on this blog later.

Update: Nov 9, 2007

I am starting to develop a more "professional" blog with WordPress and a MySQL database on the confluence of technical and legal issues. The opening blurb is here. There will be more details (and advertising-related information) later.

Thursday, November 01, 2007

"Do not track" proposals with FTC could be gaining traction


Today, November 1, 2007, Catherine Rampell has a story in the Business section, p. D1, of The Washington Post: “’Do Not Track’ Registry Proposed for Web Use: Online Behavior Used to Tailor Ads.” Link is here. "Do not track" as a buzzword has a public emotional appeal similar to "do not call."

A two-day conference on behavioral advertising, sponsored by the Federal Trade Commission, will occur soon. There is supposed to be a self-policing system called the Network Advertising Initiative, and it has a response to this “do not track” (similar in spirit to “do not call”) proposal here.
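To make the idea concrete, here is a hypothetical sketch (invented class and method names, not any real registry's API) of how an ad server might honor such a list: visitors who have opted out get an untargeted, page-contextual ad instead of a behaviorally targeted one.

```java
import java.util.Set;

// Hypothetical sketch of honoring a "do not track" registry: skip
// behavioral targeting for any visitor who has opted out.
public class AdSelector {
    private final Set<String> doNotTrackRegistry;  // opted-out identifiers

    public AdSelector(Set<String> doNotTrackRegistry) {
        this.doNotTrackRegistry = doNotTrackRegistry;
    }

    public String chooseAd(String visitorId, String pageTopic) {
        if (doNotTrackRegistry.contains(visitorId)) {
            return contextualAdFor(pageTopic);  // untargeted, page-based ad
        }
        return behavioralAdFor(visitorId);      // ad based on tracked history
    }

    private String contextualAdFor(String topic) { return "ad about " + topic; }
    private String behavioralAdFor(String id)    { return "ad tailored to " + id; }
}
```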

The World Privacy Forum maintains that only about 25% of corporate advertisers are members of the NAI.

Louise Story has a New York Times story Oct. 31, “Online Marketers Joining Internet Privacy Efforts,” here.

The Electronic Privacy Information Center has a short blurb on this Oct. 31, here.

Some people feel that more control of “tracking” could improve the effectiveness of Internet advertising, directing it to people who actually want to see it.

Internet advertising is important in supporting blogging (although it doesn’t yet appear to me that a “do not track” list would affect it) and citizen networked journalism, and could be an important source of funds for improving the quality of encyclopedia-like database repositories (linking opinions to facts, as I have often discussed) in the future. This issue bears careful watching.