Category: SEO

Google crosses 70% search market share in US

Posted by – August 12, 2008

Research firms Nielsen Online and Hitwise have released traffic data showing strong Google growth in the US market. Google’s audience across its various web properties grew by a million from June to July, reaching 129 million, the largest audience in the US. But even more damning for its rivals (namely Microsoft and Yahoo), and arguably for the webmaster community, is the data Hitwise released today about the most coveted traffic of all: search traffic. Hitwise announced that Google has crossed the 70% mark (they cite 70.77%), up 10% from July of last year and 2% from June of this year. Here’s the rest of the search landscape that Google dominates:

  • Google – 70.77%
  • Yahoo – 18.65%
  • MSN/Live – 5.36%
  • Ask – 3.53%

And here’s a graph from Hitwise:

Live Search’s Webmaster Center comes out of beta

Posted by – August 9, 2008

Microsoft’s Live Search Webmaster Center came out of beta today, with new features showing backlinks and crawl errors. Most agile webmasters won’t have much use for the crawl error tool, since they already have all the data they need in their own server logs, but backlink metrics are very useful to SEO efforts, and the more data you can get about your backlinks the better.

Until now, Yahoo’s Site Explorer has been the most useful tool, with the most accurate backlink data, and it’s nice to see more transparency from the search engines.

SEO Friendly Titles – The first WordPress plugin you should install

Posted by – August 5, 2008

One of the very first things I do when installing a WordPress blog is to hack the titles. By default, the page titles WordPress generates are not SEO friendly: individual post pages put the title of the post after the blog name and archive wording.

The All in One SEO Pack plugin allows you to modify the blog’s meta tags through the WordPress admin panel, as well as on individual posts through the post editor. Its defaults are sensible, and because of the abstraction the WordPress plugin architecture provides, it’s a cleaner solution than hacking at the code yourself.

It’s now part of my standard WordPress install and should really be a part of the core software.
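
The title reordering the plugin performs can be sketched in a few lines (Python used purely for illustration; the separator strings are hypothetical, not what WordPress or the plugin literally emits):

```python
def default_title(blog_name, post_title):
    # WordPress's old default: blog name and archive wording come first
    return f"{blog_name} » Blog Archive » {post_title}"

def seo_title(blog_name, post_title):
    # SEO-friendly order: the post's keywords lead the title
    return f"{post_title} | {blog_name}"

print(default_title("Example Blog", "SEO Friendly Titles"))
# Example Blog » Blog Archive » SEO Friendly Titles
print(seo_title("Example Blog", "SEO Friendly Titles"))
# SEO Friendly Titles | Example Blog
```

The point is simply that the words you want to rank for should appear at the front of the title element, not buried behind boilerplate.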

Yahoo search update for August 2008

Posted by – August 4, 2008

Quick heads up for the SEO crowd: Yahoo search is being updated with new indexing and ranking algorithms today. You should see significant changes in their results as they are being rolled out.

Google testing related queries under individual search results

Posted by – August 2, 2008

Aaron Wall noted on his blog that Google is testing related search terms under individual sites in the results pages. Have a look at his screenshot here for an example:

Google gives details on its customized search results

Posted by – July 30, 2008

Google has been collecting searchers’ browsing and searching habits for years. They have their search logs, clickstream information from every site serving their AdSense ads and from every site using their free web analytics program, the browsing history of their toolbar users who do not opt out, and that of all the users of their Web Accelerator proxy. And unlike many other companies that collect user data, Google actually uses their data in fundamental ways. So it comes as no surprise that they’d want to find ways to employ user information in their search algorithms. Clickstream data and folksonomy are some of the big areas search algorithms are expected to use. Right now all major search engines use ranking algorithms primarily based on the PageRank concept Google introduced and became famous for. They all use links on the web to establish authority, and no fundamental change has taken place in search algorithms in many years. They get better at filtering malicious manipulation and at tweaks that eke out relevancy, but nothing groundbreaking.

So authority based on clickstream analysis and social indexing seemed like good ways to use data to further diversify how authority is allocated to web pages. What Google learned early is that they needed scale, and their initial data efforts (things like site ratings by their toolbar users) didn’t end up in their search algorithm. Folksonomy and social indexing don’t yet have enough scale to rely on and have potential for abuse, but the clickstream has scale and is harder to game, because the traffic essentially is the authority and the people gaming the authority want traffic. If they need traffic to rank well in order to get traffic, those manipulating rankings face a significant challenge: they need the end for their means.

But Google is cautious with their core search product and has tweaked their algorithm very conservatively. It has been hard to tell just how much of a role clickstream data plays in their search results, and it will stay that way as long as it’s such a minor supplement to their algorithm. Today, Google posted a bit of information about this on their official blog as part of their effort to shine more light on how they use private data. You can read it here in full, but the basics hold no big surprises:

  • Location – Using geolocation or your input settings, they customize results slightly for your location. My guess is that they are mainly targeting searches with local relevance. An example of such a search would be “pizza”: “pizza” is more local than “hosting” and can benefit greatly from localization. Hosting, not so much.
  • Recent Searches – Because many users learn to refine their searches when they don’t find what they are looking for, this session state is very relevant data. A user who’s been searching for “laptops” for the last few minutes is probably not looking for fruit when they type in “apple” next. They reveal that they store this information client side and that it is gone when you close your browser, but since that means cookies, anyone who’s seriously looked under the hood already knew this.
  • Web History – If you have allowed them to track your web history through a Google account, they use it to personalize your results. They don’t say much about what they are really doing, but this is where the most can be done, and there are far too many good ideas to list here. One example is knowing which sites you prefer: do you always click the Wikipedia link when it’s near the top of the search results, even if there are higher-ranked pages? Then they know you like Wikipedia and may promote its pages in your personalized results. Do you always search for home decor? Then maybe they’ll take that into consideration when you search for “design” and not give you so many results about design on the web. There are a lot of ways they can use this data, and this is probably an area they will explore further.

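As a thought experiment, the Web History idea above could work as a re-ranking step that blends a user’s domain affinity into the base ranking. Everything here (the scores, the weight, the formula) is invented for illustration; Google has published nothing of the sort:

```python
def personalize(results, click_history, boost=1.0):
    """Re-rank search results by boosting domains the user has
    clicked often in the past. Purely illustrative scoring."""
    total = sum(click_history.values()) or 1
    def score(item):
        rank_score = 1.0 / item["rank"]  # base relevance derived from rank
        affinity = click_history.get(item["domain"], 0) / total
        return rank_score + boost * affinity  # blend in user affinity
    return sorted(results, key=score, reverse=True)

results = [
    {"rank": 1, "domain": "example.com"},
    {"rank": 2, "domain": "wikipedia.org"},
]
# A user who clicks Wikipedia results far more often than anything else:
history = {"wikipedia.org": 90, "example.com": 10}
print([r["domain"] for r in personalize(results, history)])
# → ['wikipedia.org', 'example.com']
```

With no history the base order is untouched; a strong enough affinity lifts a favored domain past a higher-ranked page, which is roughly the behavior the post describes.
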
In summary, right now I’d say they are mainly going with simple personalizations and not really employing aggregate data and aggregate personalization to produce aggressive differences. They are careful with their brand and will use user history with caution. After all, if personalization leads to less relevant results, they fail, and because personalization can be unpredictable (there must be some seriously weird browsing histories out there) they are going to be cautious and subtle with this.

The secret to SEO

Posted by – July 29, 2008

It often seems that everyone working online claims expertise in Search Engine Optimization. That isn’t surprising given the returns (traffic and money), but because SEO is a winner-take-most game, knowing a little bit of SEO is about as useful as being a “little bit” pregnant. And no matter how much someone knows about SEO, if they don’t execute better than their competitors, they lose.

And the thing about losing in SEO is that it’s a loser-take-none game. The returns from SEO only begin once you achieve top rankings. Moving from, say, 50th place to 20th represents virtually no traffic gain (seriously, it can be as little as a dozen extra visitors a month).

So SEO is winner-take-most, where only the top spots get any significant traffic, and in a competitive SEO market the secret is all about execution. You are competing directly against people who understand SEO (remember, everyone’s an “expert” here), and what matters isn’t which industry names you can drop or how well you can argue the pedantry of on-page optimization. It comes down to focusing your efforts more accurately and executing better than your opponents. Does your team execute more swiftly, efficiently and accurately than others? Does it do so without running an increased risk of penalization? Does it cope with imitation and sabotage?

If not, please remember that there is no silver bullet except having the best people in a fast-paced team who can outmaneuver the competition and the search engines’ improving algorithms. If you really want to be competitive in SEO (and given the nature of the game, you shouldn’t bother if you don’t plan to be competitive), you need good people. And if things get competitive and the other guys have good people, you need better people.

So the secret to SEO is finding the technical rock stars who can lead the critical portions of your online efforts. Businesses can be built around the right online marketing gurus, and when you find one, you get more than talk: we won’t just tell you how to conquer your niche; unlike the other “experts”, we’ll actually do it.

Microsoft’s BrowseRank alternative to Google’s PageRank

Posted by – July 25, 2008

Cnet broke a story about Microsoft’s BrowseRank (pdf link), an authority-ranking algorithm proposal coming out of Microsoft’s Chinese R&D labs that proposes “Letting Web Users Vote for Page Importance”. There isn’t much new here other than the term “BrowseRank”: Microsoft has long viewed clickstream data as a potential way to outdo Google’s search algorithm, which, like those of all other major search engines, revolves around the page-ranking system Google introduced, using links on the web to determine authority.

Using user traffic has potential if you can aggregate enough scale, but much of the data sits behind walled gardens. You can easily look at public web pages to count links, but access to clickstream data is not as simple. Your options are to buy ISP data, to sample traffic and extrapolate, or to collect as much as you can on your own properties (and only a few companies have the scale to gather useful data).

So ultimately this kind of algorithm is unlikely to be a groundbreaking difference and seems destined to be a supplemental part of general ranking algorithms. We’ll see more of it, and it shows promise in smoothing out link anomalies like link farms, but it isn’t likely to be the core of a major search engine any time soon.
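
For the curious, the core idea (rank pages by where browsing sessions tend to go rather than by who links to whom) can be sketched as a toy random walk over observed click transitions. This is a heavy simplification: the actual BrowseRank paper models dwell time with a continuous-time Markov process, which this sketch ignores entirely.

```python
def browse_rank(transitions, n_pages, restart=0.15, iters=100):
    """Toy stationary distribution over pages, computed from observed
    (source, destination) click transitions. Illustration only."""
    # Count observed transitions between pages
    counts = [[0.0] * n_pages for _ in range(n_pages)]
    for src, dst in transitions:
        counts[src][dst] += 1
    # Row-normalize into probabilities, with a random-restart term;
    # pages with no outgoing clicks jump uniformly
    P = []
    for row in counts:
        total = sum(row)
        if total:
            P.append([restart / n_pages + (1 - restart) * c / total for c in row])
        else:
            P.append([1.0 / n_pages] * n_pages)
    # Power iteration toward the stationary distribution
    rank = [1.0 / n_pages] * n_pages
    for _ in range(iters):
        rank = [sum(rank[i] * P[i][j] for i in range(n_pages))
                for j in range(n_pages)]
    return rank

# Sessions where most users end up on page 2
clicks = [(0, 2), (1, 2), (0, 1), (2, 0), (1, 2)]
print(browse_rank(clicks, 3))
```

The shape is deliberately the same as PageRank’s power iteration; the only conceptual change is that the edges come from user clicks instead of hyperlinks, which is why a click-based ranker slots in so naturally as a supplement to the existing link-based ones.
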

Google’s bringing it out to measure again… 1 trillion URLs!

Posted by – July 25, 2008

Google announced a new milestone of 1 trillion URLs, which is impressive enough that we might as well forgive them for bringing us back to the index-measuring wars of yesteryear. In the past, search engine bragging rights were about how much of the web an index contained. Then Google stopped publishing its index total on its home page, saying it was quality (of search relevance) and not quantity that mattered.

But a trillion is a bit much to keep mum about, so there you go. It doesn’t mean much, but it’s interesting that it comes right before a stealth competitor launches a search engine they will claim to be the biggest (I think it’s a coincidence).

Matt Cutts announces new Google Toolbar PageRank coming

Posted by – July 24, 2008

Matt Cutts, a Google engineer who leads their webspam team, announced that Google will be updating its toolbar PageRank soon.

What is “toolbar PageRank”?

In the good old days of SEO, Google’s index was periodically updated, and SEOs all rushed to check their PageRank and positions after each update. Back then the correlation between the two was much stronger, and the PageRank shown in the toolbar was what a lot of SEO professionals fixated on.

Over time, Google got better at discerning the context of pages, and rankings depended much less on a URL’s general PageRank. Google also started delaying its toolbar PageRank while computing the real ranks continuously. One reason Google does this is to make it harder for SEOs to manipulate rankings: since the feedback is delayed, it’s harder to tell what worked and what didn’t.

So now Google periodically (every 3–4 months) takes a snapshot of its PageRank data and pushes it out to the public through the toolbar (and the various other sites that query the toolbar servers). For more information, read Matt’s explanation here.
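
The toolbar value itself is widely believed to be a logarithmic bucketing of the underlying score into a 0–10 scale. Here is a toy sketch of that idea; the base and the relative scaling are community guesses, not anything Google has confirmed:

```python
import math

def toolbar_bucket(pagerank, max_pr, base=8):
    """Map a raw PageRank score to a 0-10 toolbar value.
    The logarithmic base is a popular guess in the SEO
    community; Google has never published the real scale."""
    if pagerank <= 0:
        return 0
    # Log-scale the score relative to the web's top page
    score = 10 + math.log(pagerank / max_pr, base)
    return max(0, min(10, round(score)))

print(toolbar_bucket(1.0, 1.0))         # 10 (the top page itself)
print(toolbar_bucket(1.0 / 8**4, 1.0))  # 6 (four base-8 orders of magnitude down)
```

If something like this is right, it explains why moving from toolbar 3 to 4 is far easier than moving from 6 to 7: each step up requires several times the underlying score, and the snapshot cadence means you only see the jump months later.
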