Tuesday, January 30, 2007

Web 2.0 overview


Web 2.0

There has been a lot of hype around this buzzword in the last couple of years. The term was proposed by O’Reilly Media in 2004, and it has become formal enough that the United States Patent and Trademark Office registered "Web 2.0" as a service mark for CMP Media in 2006. It is supposed to represent a migration from a paradigm in which the World Wide Web is largely a repository of rapidly and cheaply published but static content to one in which sites interact with visitors and provide sophisticated services. The original web seemed like an adjunct to the low-cost desktop publishing that developed in the 1990s and that let individuals self-publish books and periodicals.

One development that helped lead to “2.0” is the search engine. I originally viewed my own site as a repository of expanded footnotes for those who had bought my books. Around 1998 and 1999, search engines started to become much more effective at making novice content likely to be found by others. This is largely an artifact of the way binary searches and exponents work in mathematics. This possibility is gradually being recognized as affecting the way people receive and interpret information, and it could have major legal consequences. It is significant that most search engines no longer require publishers to code metatags with search terms in order to be effectively indexed.

Some of the components are wikis, social networking sites, folksonomies, and a variety of mobile services that enable rapid communication between parties on the road. The latter has included developing hypertext language conventions suitable for display in a smaller space, as on a cell phone. A variety of tools, such as syndication with RSS feeds, allow publishers to interact more directly with end visitors.
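To make the syndication idea concrete, here is a minimal sketch of how a publisher might generate an RSS 2.0 feed for visitors to subscribe to. It uses only the Python standard library; the channel title, link, and item data are hypothetical examples, and a real feed would also carry dates, descriptions, and GUIDs.

```python
# Minimal sketch of an RSS 2.0 feed, built with the standard library.
import xml.etree.ElementTree as ET

def build_rss(channel_title, channel_link, items):
    """Build a minimal RSS 2.0 document; items is a list of (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = channel_link
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("Example Blog", "http://example.com/",
                 [("First post", "http://example.com/1")])
```

A feed reader polling this document would notice new `item` elements and alert the subscriber, which is what lets publishers push updates to visitors rather than waiting to be revisited.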

Wikis, of course, bring up the idea of the online encyclopedia Wikipedia. In general, wikis are sites that allow users to modify content with minimal or no registration or professional certification. They even use specialized human-readable markup languages like MediaWiki’s, instead of hand-coded HTML or even HTML generated in the more usual manner from XML components. Wikis emphasize collaborative authoring. They have been criticized by educators and journalists as unreliable sources of research information, but these criticisms could be met by providing bibliographic links and references to more conventional “old school” sources.
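The appeal of a wiki markup language is that ordinary contributors can write readable shorthand and let the software emit the HTML. This sketch converts a tiny illustrative subset (bold, italic, and external links in roughly MediaWiki style); real wiki engines handle far more syntax and many edge cases, so this is an assumption-laden toy, not the actual MediaWiki parser.

```python
import re

def wiki_to_html(text):
    """Convert a tiny illustrative subset of wiki-style markup to HTML:
    '''bold''', ''italic'', and [http://url label] external links."""
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)   # bold first, so ''' is not eaten by ''
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    text = re.sub(r"\[(\S+) ([^\]]+)\]", r'<a href="\1">\2</a>', text)
    return text
```

The point is that the contributor never touches angle brackets; the markup layer is what lowers the barrier to collaborative authoring.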

One capability of interest to me is the ability to correlate different lines of argument, and the associated history of incidents, in a database, and then display them in a format where the visitor can get an idea of the full scope of a problem (like, say, “gays in the military,” or “employers checking social networking sites”). I have tried to do that on my own sites (however static and “Web 1.0” they are) by organizing the arguments around the chapters of my first book. I’ve also proposed another hypothetical organization in a Mockup Database, here. I have also been experimenting with this idea with MySQL and Java.
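A small sketch of what such an incident database might look like. The post mentions MySQL and Java; this example substitutes Python’s built-in sqlite3 so it is self-contained, and the table layout, chapter labels, and incident summaries are all hypothetical illustrations of the idea, not the actual schema.

```python
import sqlite3

# Hypothetical schema: each incident is filed under the line of
# argument (book chapter) that it illustrates.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE incidents (id INTEGER PRIMARY KEY, chapter TEXT, summary TEXT)")
rows = [
    ("free speech", "Employer checks an applicant's public blog"),
    ("free speech", "School disciplines a student for a web page"),
    ("privacy", "Search engine caches a deleted page"),
]
conn.executemany("INSERT INTO incidents (chapter, summary) VALUES (?, ?)", rows)

def incidents_for(chapter):
    """Return every incident summary filed under one line of argument."""
    cur = conn.execute(
        "SELECT summary FROM incidents WHERE chapter = ?", (chapter,))
    return [r[0] for r in cur]
```

Grouping incidents by argument this way is what would let a visitor page see the full scope of a problem at a glance.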

A folksonomy is a user-generated content label. Wikipedia contrasts this with a “taxonomy,” which is a static label assigned by an author, content originator, or some kind of controlled bureaucratic process. The idea of content labels to protect children, such as those from ICRA (discussed on my other blog), is essentially related to the idea of taxonomy. Content controls have always, until now, been applied in the home (and the new Microsoft operating system Vista greatly increases this capability) or by businesses at work. Enabling users (outside of whitelists and blacklists) to label the content of others’ sites would represent a potentially significant advance in protecting minors, and could be legally significant in the constitutional battle going on over Internet censorship (COPA).
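The mechanics of a folksonomy are simple to sketch: many users each apply free-form tags to an item, and the site merges them into a ranked consensus label set. The function and data below are hypothetical, assuming each user tags an item at most once per tag.

```python
from collections import Counter

def aggregate_tags(user_tags):
    """Merge per-user tag lists for one item into a ranked folksonomy.
    user_tags maps a user name to the list of tags that user applied."""
    counts = Counter()
    for tags in user_tags.values():
        counts.update(set(tags))  # each user counts a given tag at most once
    return counts.most_common()   # [(tag, votes), ...] most popular first

ranked = aggregate_tags({
    "alice": ["web2.0", "wiki"],
    "bob":   ["web2.0", "folksonomy"],
})
```

The contrast with a taxonomy is visible in the code: no authority defines the vocabulary in advance; the ranking simply emerges from whatever labels users chose.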

The social networking site was conceived as a way for people to meet and interact within specific communities (like college campuses). However, because the profiles and blogs are often available on the web in the public space, they have become controversial, and employers have become concerned about them, especially since about the middle of 2005. Social networking companies have been developing models to restrict these sites to members known to be legitimately associated with their respective communities, so that the sites need not be viewed as a “publication” tool.
