Tuesday, November 01, 2011

Search engine companies try to emphasize originality and premium quality; controversy over tracking grows more complex

TechRepublic has an interesting “Tech Sanity Check” essay (by Jason Hiner) on search engine performance, especially Google’s, which has reportedly had some mishaps in trying to ferret out “link farms” from its search results. The link is here. Google explains its efforts on its corporate blog, here. There has been particular controversy about Demand Media, which Google addresses here.

I don’t provide many links among my various domains and blogs for this reason – I don’t want to look like I’m doing something “wrong.” But search engines could offer a facility that lets an entity owning many domains and blogs identify them, so that the engine can disregard cross-links among them (in the spirit of “honesty”) when calculating a page’s popularity. Search engine companies have to keep their ranking algorithms as trade secrets, but providing such a facility just sounds like common sense; a rough sketch of the idea follows below. Within Blogger or WordPress, it would seem that a search engine could easily determine which cross-linking blogs belong to the same entity and discount those links.
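To make the suggestion concrete, here is a minimal sketch in Python. The owner-declared domain set, the link list, and the simple link count are all hypothetical illustrations of the concept, not any engine’s actual method.

    # A minimal sketch of the facility suggested above. The declared
    # domain set, the links, and the scoring are hypothetical; real
    # engines keep their ranking algorithms as trade secrets.

    # Domains one entity has (hypothetically) declared as its own.
    SAME_OWNER = {"doaskdotell.com", "vetsdoaskdotell.com"}

    # Inbound links as (source_domain, target_domain) pairs.
    LINKS = [
        ("vetsdoaskdotell.com", "doaskdotell.com"),       # same-owner cross-link
        ("independent-review.example", "doaskdotell.com"),
        ("some-blog.example", "doaskdotell.com"),
    ]

    def popularity(target, links, same_owner):
        """Count inbound links, disregarding cross-links between co-owned domains."""
        count = 0
        for source, dest in links:
            if dest != target:
                continue
            if source in same_owner and dest in same_owner:
                continue  # an "honest" engine would skip this link
            count += 1
        return count

    print(popularity("doaskdotell.com", LINKS, SAME_OWNER))  # prints 2, not 3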

In the late 1990s, webmasters were told to code meta tags with the phrases they wanted to be found under (an example follows below) – that has turned out to be unnecessary, as has paying services to improve “search engine ranking.” I say this of sites that have a lot of text content and a lot of specific information. Sites that sell only one product or service and are predicated on volume (as many were early in the dot-com bubble, before it imploded) may look at this differently.
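For readers who never saw the old practice, here is what those tags looked like, with a small standard-library Python parser that pulls the phrases back out. The page and the keyword phrases are made-up examples.

    # Illustrative only: a 1990s-style "keywords" meta tag (the kind
    # webmasters were once told to write) and a small parser, using
    # nothing but Python's standard library, that extracts the phrases.
    from html.parser import HTMLParser

    PAGE = """<html><head>
    <meta name="keywords" content="search engines, link farms, content quality">
    </head><body>...</body></html>"""

    class KeywordFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.keywords = []

        def handle_starttag(self, tag, attrs):
            d = dict(attrs)
            if tag == "meta" and d.get("name") == "keywords":
                self.keywords = [k.strip() for k in d.get("content", "").split(",")]

    finder = KeywordFinder()
    finder.feed(PAGE)
    print(finder.keywords)  # ['search engines', 'link farms', 'content quality']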

CNET has a detailed story by Peter Yared on content quality, “Has content become advertising for advertising?”, here. Yared discusses the tricky problems and misunderstandings behind public calls for “do not track” (a sketch of the proposed browser signal follows below), which could wind up hurting non-profits and politicians as much as old-fashioned dot-com-style sales sites. He’s a bit critical of the Huffington Post’s style of journalism for AOL – I’m not sure I agree with his criticism, as the Huffington Post has brought out a lot of new stories very quickly. He also discusses “retargeting” (and all the financial incentives for it in the workplace) and Google’s (and probably Bing’s and Yahoo!’s) efforts to reduce the effect of repetitive, subpar content.
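For the curious, “do not track” as it stood in 2011 was just a one-line HTTP request header sent by the browser; honoring it is entirely voluntary on the server’s side. A minimal Python sketch (the URL is a placeholder):

    # Sends the proposed "DNT: 1" request header, the browser-side half
    # of "do not track" circa 2011. Whether tracking actually stops is
    # up to the server and its ad partners. The URL is a placeholder.
    import urllib.request

    req = urllib.request.Request(
        "http://example.com/",
        headers={"DNT": "1"},  # 1 = this user opts out of tracking
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get("Content-Type"))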

There’s another problem, which even I have noticed: older items tend to stay at the top of search results, because they have had more time to establish “legitimate” popularity (a toy illustration follows below). It used to be that simple text or HTML pages ranked above pages pulled from databases and rendered with CSS or XSL, but that seems to have changed.
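One toy way to see the age effect: raw link counts reward a page merely for being old, while a links-per-month rate does not. The numbers below are invented, and nothing here reflects any engine’s actual ranking.

    # Invented numbers: the old page wins on raw inbound links simply
    # because it has had five years to collect them; normalizing by age
    # (links per month) is one hypothetical correction.
    pages = [
        {"url": "old-essay.html", "inbound_links": 120, "age_months": 60},
        {"url": "new-essay.html", "inbound_links": 30, "age_months": 6},
    ]

    for p in pages:
        p["links_per_month"] = p["inbound_links"] / p["age_months"]

    by_raw = max(pages, key=lambda p: p["inbound_links"])
    by_rate = max(pages, key=lambda p: p["links_per_month"])
    print(by_raw["url"])   # old-essay.html
    print(by_rate["url"])  # new-essay.html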

In my own case, I note another curious result: my "doaskdotell.com" moved above "vetsdoaskdotell" after I simplified its home page recently.

The best advice for newbie webmasters is, as always, originality. Besides just reporting on the rumored dangers to the Web of PROTECT IP or SOPA, dig in and figure out whether the dangers are really there – and report some original conclusions. Get out and do some interviewing or reporting with your own happy feet.

Note also that Hiner (in the original TechRepublic article) links to a July story on why he likes the look of Google+. Facebook, he says, is a "walled garden" and Twitter is a "controlled ecosystem," while Google+ will become "connective tissue" (muscle) for the Web, driving people back to Google Search. Again, many will disagree (including Facebook).

Check out the October 2011 feature story in Fast Company by Farhad Manjoo, "The Great Tech War of 2012," comparing the strategies of Google, Facebook, Apple, and Amazon, link here.
