A new Google patent has surfaced, and as always it makes for fascinating reading.
It describes editorial and automated detection of favored and non-favored sources, and methods for ranking sites accordingly. One of the key phrases in the document describes Google improving search results “by providing a mechanism that enhances the ranking of search results by integrating editorial opinion”.
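To make the idea concrete, here is a minimal sketch of what blending an algorithmic relevance score with an editorial opinion might look like. Everything here — the function names, the score scales, and the weighting — is an illustrative assumption, not anything specified in the patent.

```python
# Hypothetical sketch: combine a machine-computed relevance score with a
# human editorial opinion. All names and weights are illustrative
# assumptions, not taken from the patent.

def blended_score(algorithmic_score, editorial_opinion, editorial_weight=0.3):
    """editorial_opinion is in [-1.0, 1.0]:
    negative = non-favored, positive = favored, 0 = no signal."""
    return algorithmic_score * (1.0 + editorial_weight * editorial_opinion)

# (url, algorithmic_score, editorial_opinion)
results = [
    ("siteA.example", 0.82, 0.5),   # favored by editors
    ("siteB.example", 0.90, -1.0),  # flagged as non-favored (e.g. spam)
    ("siteC.example", 0.75, 0.0),   # no editorial signal
]

# Re-rank: the non-favored site drops despite its high algorithmic score.
ranked = sorted(results, key=lambda r: blended_score(r[1], r[2]), reverse=True)
```

Note the design choice: the editorial signal adjusts rather than replaces the algorithmic score, preserving the scalability of the machine ranking while letting human input tip close calls.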
Fundamentally, this is no surprise at all. We have already been through a couple of major generations of search. The first was the Yahoo! directory, a directory of the web assembled entirely by hand. A great idea, but not scalable. Then came Google, a completely algorithmic solution with highly scalable algorithms that greatly improved overall search. However, this approach was shown to have limitations as well, chief among them the determination of spammers to find and exploit the limits of the algorithms.
Google has been using human editors to review search result quality for some time. As far as we know, this has been limited to verifying search result quality using editors in Asia. And recently Google released the Google Co-Op program, which allows users to personalize search in a number of ways. This program certainly enables users to express opinions, in an indirect manner, about content quality.
An integrated approach makes sense. Leveraging the scalability of algorithms while finding scalable ways to incorporate human input is where end-user search nirvana lies. It’s also where nirvana lies for the non-spamming publisher. Improving search quality is like motherhood and apple pie.
An example of leveraging human input might be the measurement of popular opinion, which is specifically referenced in the patent. One way to measure popular opinion might be the click-through rate on existing search results: if you are in the #1 position and do not get a high enough click-through rate for a particular search query, you could be moved down. Of course, the opposite would apply too.
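The CTR idea above can be sketched as comparing each result’s observed click-through rate against a baseline expected for its position. The baseline figures and function names below are made-up assumptions for demonstration, not anything Google has published.

```python
# Illustrative sketch: re-rank results by how their observed click-through
# rate compares to an assumed baseline CTR for their current position.
# The baseline numbers are invented for demonstration purposes.

EXPECTED_CTR = {1: 0.30, 2: 0.15, 3: 0.10}  # assumed baseline per position

def ctr_adjusted_rank(results):
    """results: list of (url, position, observed_ctr).
    A ratio > 1 means the result over-performs its slot and should rise;
    a ratio < 1 means it under-performs and should fall."""
    def performance(r):
        url, pos, ctr = r
        return ctr / EXPECTED_CTR[pos]
    return [r[0] for r in sorted(results, key=performance, reverse=True)]

results = [
    ("top-result.example", 1, 0.12),     # #1 but weak CTR: under-performing
    ("second-result.example", 2, 0.22),  # #2 with strong CTR: over-performing
    ("third-result.example", 3, 0.09),
]

reordered = ctr_adjusted_rank(results)
```

Normalizing by position matters: a #1 result naturally gets more clicks than a #3 result, so raw CTR alone would simply entrench whoever is already on top.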
An example of dynamically determining favored vs. non-favored status would be a search query such as “free downloads”. If a site ranking for that query after the first-pass analysis were determined not to actually offer a free download, it could be downgraded. Similarly, a lower-ranked site that does offer a free download could be upgraded.
Of course, there are lots of ways to use human and machine input. It remains to be seen how Google’s use of this patent will materialize. But it’s exciting stuff. It’s where the next frontier in search will unfold.
Eric Enge leads the Digital Marketing practice for Perficient Digital. He designs studies and produces industry-related research to help prove, debunk, or evolve assumptions about digital marketing practices and their value. Eric is a writer, blogger, researcher, teacher, and keynote speaker and panelist at major industry conferences. Partnering with several other experts, Eric served as the lead author of The Art of SEO. Learn More About Eric Enge