Monday, January 23, 2006

Parsing grownups and kids on the Web: COPA, The Child Online Protection Act of 1998

One of the most troublesome problems with the Internet has been the fact that it is one undifferentiated space. Companies and individuals can post material on any subject, and without some sort of intervention, any child can find material inappropriate for his age, especially through search engines. Inappropriate material is definitely not limited to pornography, the usual meaning of "adult content." Many parents believe that their kids should not see material about certain subjects until they are older teens. This raises ethical questions, because some parents want their kids to grow up to ratify the parents' own beliefs.

This is particularly troubling for new writers or content providers who want to promote themselves but whose material is appropriate only for more mature audiences. Books can be shelved separately, and even mainstream books with somewhat adult material must be thumbed through and read manually for that material to be found. Movies can be audience-screened according to MPAA rating. But web content is often free and typically can be viewed by anyone. That openness is a great opportunity for newcomers, but it also exposes them to the accusation of promoting adult material in front of children.

An important legal point is that the First Amendment protects the right of adults to produce and see material that is not legally obscene (and is not visual child pornography, and is not illegal for some other specific reason such as copyright infringement). But children clearly should not have easy access to some materials.

It is no surprise that there has been a flood of federal and state laws to deal with the issue. In 1996, Congress tried to protect minors with the Communications Decency Act, which tried to ban “indecency” (as opposed to obscenity) on the web. That was quickly struck down, but in 1998 Congress passed the Child Online Protection Act (COPA), which defined “harmful to minors” (HTM) as essentially “obscene with respect to minors” and then banned commercial websites from displaying HTM material where minors could find and view it without a credit card or adult-id check. A number of states followed with similar copycat laws.

COPA has been through some complicated challenges and made it to the Supreme Court twice. Right now, enforcement is enjoined, and there will be a trial on the merits of the claims of its unconstitutionality (in late 2006). But rather than belabor the legal points, what matters here is a survey of the various ways in which different audiences can see content that is appropriate for them.

First, what is wrong with an adult-id system? For one thing, that would drive away readers who fear loss of anonymity. Most adult-id cards as such are sold by true pornography sites. The other vehicle for identification in COPA was to be the credit card. But banks generally offer cards for financial transactions, not for identification. If many small companies (and especially pornography operations) were to store credit card information themselves, that would increase the risk of identity theft. And most pornography sites actually do require credit cards or memberships to see most of their content. The concern over pornographic teasers has been overblown in practice, and there are many forms of non-pornographic content that raise age-suitability issues.

Another issue was community standards, well known from obscenity law. With Internet content, a prosecutor from the country’s most conservative jurisdiction could set the rules for everybody. Some companies, such as quova.com, have developed systems that could limit access by geographical area. These systems are expensive and may not be suitable for small webmasters.
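To make the geographical idea concrete, here is a minimal sketch of how a site might gate pages by the visitor's jurisdiction, in the spirit of services like Quova. The IP prefixes, region names, and page policy here are all invented for illustration; a real service resolves IP ranges to locations with a large, maintained database.

```python
# Invented mapping of IP prefixes to jurisdictions (illustration only).
IP_PREFIX_TO_REGION = {
    "203.0.113.": "region-strict",    # a jurisdiction with strict standards
    "198.51.100.": "region-lenient",
}

# Assumed per-page policy: which regions each page may be served to.
PAGE_ALLOWED_REGIONS = {
    "/mature-essay.html": {"region-lenient"},
    "/index.html": {"region-strict", "region-lenient"},
}

def region_for_ip(ip: str) -> str:
    """Resolve an IP address to a jurisdiction via prefix lookup."""
    for prefix, region in IP_PREFIX_TO_REGION.items():
        if ip.startswith(prefix):
            return region
    return "unknown"

def may_serve(page: str, ip: str) -> bool:
    """Serve the page only if the visitor's region is on its allowed list."""
    allowed = PAGE_ALLOWED_REGIONS.get(page, set())
    return region_for_ip(ip) in allowed
```

Even this toy version hints at why the approach burdens small webmasters: every page needs a policy entry, and the IP-to-region data must be bought and kept current.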

What is more appropriate is to give parents the ability to filter content that they don’t want their kids to see. Participation in a pluralistic, technological society requires more responsibility from parents than in the past. So that brings us to several more techniques.

The simplest technique is for parents to set up kids-only accounts. AOL and various other large ISPs offer accounts graded by several age ranges. AOL will allow separate levels for different screennames, with different passwords, on the same master account. Some domains are excluded, or “blacklisted,” for some age categories. For the youngest children, only domains on an approved “whitelist” can be viewed. For example, when I tried the young teen account, I found that Yahoo! was blocked but CNN was allowed. My own domains were allowed when they probably should not have been.

A second technique is to use any one of a number of content filters, such as Netnanny, on a kid’s account. These filters work in a similar way, with whitelists and blacklists, and also screen pages for inappropriate content. They are far from foolproof. Some filtering companies allow webmasters to submit their own sites for blacklisting.
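The combination of list lookups and content screening described above can be sketched in a few lines. The domains, keywords, and age levels here are placeholders, not anything a real filter like Netnanny actually ships:

```python
# Minimal sketch of a list-plus-screening filter; all data is invented.
WHITELIST = {"cnn.com", "nasa.gov"}
BLACKLIST = {"example-adult-site.com"}
BLOCKED_KEYWORDS = {"explicit-term"}   # placeholder flagged terms

def allow_page(domain: str, page_text: str, age_level: str) -> bool:
    """Decide whether a page may be shown at a given age level."""
    if domain in BLACKLIST:
        return False
    # Youngest children: whitelist-only mode, as with AOL's kid accounts.
    if age_level == "young_child":
        return domain in WHITELIST
    # Older kids: allow any non-blacklisted domain, but screen the text.
    text = page_text.lower()
    return not any(word in text for word in BLOCKED_KEYWORDS)
```

The sketch also shows why such filters are far from foolproof: keyword screening both over-blocks innocent pages and misses anything phrased around the list.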

But the most promising concept is probably content rating. This approach considers a variety of reasons that content may be inappropriate for younger viewers, including violence, drugs, and psychological tension as well as sexually explicit materials (what we usually call “pornography”). A participating webmaster rates a site and can rate individual pages on a site differently. The parent installs hooks into the browser for his kid’s accounts to disallow content with various combinations of ratings. This effectively makes Internet content selectable in a manner conceptually similar to movie ratings.
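The selection logic a browser hook would apply can be sketched simply: a page declares a level on each content axis, and the parent's profile caps the allowed level per axis. The axis names and the 0–4 scale here are assumptions for illustration, not any rating body's actual vocabulary:

```python
# Assumed parental profile: maximum permitted level per content axis.
PARENT_LIMITS = {"violence": 1, "sex": 0, "drugs": 1, "tension": 2}

def page_permitted(page_rating: dict, limits: dict = PARENT_LIMITS) -> bool:
    """Block the page if any axis exceeds the parent's limit.

    Axes the page does not declare are treated as level 0.
    """
    return all(page_rating.get(axis, 0) <= limit
               for axis, limit in limits.items())
```

This is what makes the approach conceptually similar to movie ratings, except that the parent, not a board, picks the cutoff on each axis.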

There are presently two main players in this approach. ICRA, the Internet Content Rating Association, has the most comprehensive setup. The webmaster places an xml file in his root directory that holds the content labels, and then places a link statement in the metatags of each file to attach the rating to that file. There are some difficulties with this approach. ICRA requires every file (even files without troublesome content) to carry an individual html link statement, which may not be practical for older websites that were pieced together manually without a template, or for sites with many frequently updated files. Microsoft now offers the ability to generate ICRA labels with FrontPage; however, html files derived from Word lose their link statements when maintained in Word (the statement would have to be copied back in with Notepad), while labels for sites edited only with FrontPage or another modern templating tool are easier to maintain. Some kinds of files, such as pdf files, can be rated only with server-based labeling, which is not available to all webmasters. For images, ICRA supports a watermarking technique from Digimarc.
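The per-file link statement is exactly the kind of chore that invites automation. Here is a sketch of a helper that inserts such a statement into an html file's head; the tag's attribute values and the labels filename follow the general ICRA pattern of pointing at an RDF labels file in the site root, but should be treated as an assumption, not ICRA's exact output:

```python
# Assumed form of the per-file label link statement (illustration only).
ICRA_LINK = ('<link rel="meta" type="application/rdf+xml" '
             'title="ICRA labels" href="/labelled.rdf" />')

def add_label(html: str, link: str = ICRA_LINK) -> str:
    """Insert the label link just after <head>, skipping already-labeled files."""
    if link in html:
        return html
    return html.replace("<head>", "<head>\n" + link, 1)
```

A small script looping this over a directory would spare the Notepad round-trip for hand-built sites, though files without a literal `<head>` tag would still need manual attention.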

Once a site is certified by the ICRA, the webmaster may use its label publicly, which gives the visitor a visual cue and gives the webmaster some public evidence of good faith. It is clear that labeling could be useful in identifying other kinds of content, for example, separating fiction from non-fiction narratives. In certain situations, a writer’s intention in placing certain materials in a public space could be clarified by a label (for example, a disclaimer label that maintains that a screenplay script is fiction and should not be presumed, if accessed directly by a search engine, to represent real events). There is much more that could be done with the content labeling concept, but it is clear that some of the potential problems come from the ease with which content on the Internet can be found (especially by search engines), compared to physical media, which presume that someone will pay for content and experience the whole work, not just self-selected pieces.

Safesurf.com has a somewhat simpler system for labeling individual files, with similar capabilities for parents to place hooks into browsers.

I have labeled my own doaskdotell.com home page and some other files with both Safesurf and ICRA, and set up Content Advisor under Internet Options in Internet Explorer (when logged on to AOL), and I have found that subordinate directories are blocked by the Safesurf labels.

Content rating is probably the most promising concept to address this whole issue. There would be many development problems to address, such as having major software vendors provide labeling utilities in their products. Content rating will hopefully become a growth area that could generate jobs.

You can find a lot more details at this link. I certainly welcome comments, as this is an ongoing issue.
