Tag: microsoft

Live Search’s Webmaster Center comes out of beta

Posted by – August 9, 2008

Microsoft’s Live Search Webmaster Center came out of beta today, with new features showing backlinks and crawl errors. Most agile webmasters won’t have much use for the crawl-error tool, since they often have all the data they need in their own server logs, but backlink metrics are very useful to SEO efforts, and the more data you can get about your backlinks the better.
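To illustrate the server-log point: here’s a minimal sketch (in Python, against a hypothetical Apache combined-format access log) of pulling the same crawl-error data yourself by filtering for Live Search’s crawler, msnbot. The log path and the 20-row cutoff are assumptions for the example.

```python
# A minimal sketch, assuming an Apache combined-format access log.
# Filters requests from msnbot (Live Search's crawler) and tallies
# 4xx/5xx responses -- roughly the data the crawl-error tool reports.
import re
from collections import Counter

LOG_PATH = "access.log"  # hypothetical path
line_re = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) .*"(?P<agent>[^"]*)"$'
)

errors = Counter()
with open(LOG_PATH) as log:
    for line in log:
        m = line_re.search(line)
        if not m:
            continue
        # msnbot identifies itself in the user-agent field.
        if "msnbot" in m.group("agent").lower() and m.group("status").startswith(("4", "5")):
            errors[(m.group("status"), m.group("path"))] += 1

for (status, path), n in errors.most_common(20):
    print(f"{status} {path} x{n}")
```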

Until now, Yahoo’s Site Explorer has been the most useful tool, with the most accurate backlink data, and it’s nice to see more transparency from the search engines.

Microsoft launches new Live Search homepage design

Posted by – July 31, 2008

Microsoft has launched a new design on the home page of its Live.com search engine, which you can see in the screenshot at left.

The image has squares that link to different searches (image, map, and web). For example, in the screenshot here the tooltip reads “What will you see on your Safari to Botswana?” and links to an “animals in Botswana” search.

I expect they’ll use other images and searches in the future, and this has a lot more to do with the branding of their search assets than any functional change. Because there is a lower cost of entry to mapping and image search, Microsoft’s maps and supplemental search services sometimes have more bells and whistles than Google’s equivalent features, and Microsoft is keen to get these in front of people.

Microsoft’s BrowseRank alternative to Google’s PageRank

Posted by – July 25, 2008

CNET broke a story about Microsoft’s BrowseRank (PDF link), an authority-ranking algorithm proposal coming out of Microsoft’s Chinese R&D labs that proposes “Letting Web Users Vote for Page Importance”. There isn’t much new here other than the term “BrowseRank”: Microsoft has long viewed clickstream data as a potential way to outdo Google’s search algorithm, which, like every other major search engine’s, revolves around the page-ranking system Google introduced that uses links on the web to determine authority.
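To make the contrast with PageRank concrete, here’s a minimal sketch (in Python, with hypothetical toy session data) of the core idea as the paper describes it: build a Markov chain from observed page-to-page transitions rather than from links, find its stationary distribution by power iteration much as PageRank does, then weight each page by how long users actually stay on it.

```python
# A minimal sketch of the BrowseRank idea, assuming toy session data;
# the real system aggregates enormous clickstream logs. Importance ~
# stationary visit frequency of the browsing graph x mean dwell time.
from collections import defaultdict

# Hypothetical sessions: (from_page, to_page, seconds spent on from_page)
transitions = [
    ("a", "b", 30), ("a", "c", 5), ("b", "c", 60),
    ("c", "a", 10), ("c", "b", 45), ("b", "a", 20),
]

pages = sorted({p for t in transitions for p in t[:2]})

# Estimate transition probabilities and mean dwell times from the log.
counts = defaultdict(lambda: defaultdict(int))
dwell_total, dwell_n = defaultdict(float), defaultdict(int)
for src, dst, secs in transitions:
    counts[src][dst] += 1
    dwell_total[src] += secs
    dwell_n[src] += 1

def prob(src, dst):
    total = sum(counts[src].values())
    return counts[src][dst] / total if total else 1 / len(pages)

mean_dwell = {p: dwell_total[p] / dwell_n[p] if dwell_n[p] else 1.0 for p in pages}

# Power iteration for the stationary distribution of the click graph,
# with a small teleport term (like PageRank's damping) for ergodicity.
damping = 0.85
rank = {p: 1 / len(pages) for p in pages}
for _ in range(50):
    rank = {
        p: (1 - damping) / len(pages)
        + damping * sum(rank[q] * prob(q, p) for q in pages)
        for p in pages
    }

# BrowseRank-style importance: visit frequency times mean staying time,
# renormalized. Pages users reach often AND stay on score highest.
raw = {p: rank[p] * mean_dwell[p] for p in pages}
z = sum(raw.values())
print({p: v / z for p, v in raw.items()})
```

The interesting twist over link counting is the dwell-time term: a page that gets clicks but is immediately abandoned scores lower than its raw traffic would suggest.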

Using user traffic has potential if you can aggregate it at enough scale, but a lot of the data sits behind walled gardens. You can easily crawl public web pages to count links, but access to clickstream data is not as simple. Your options are to buy ISP data, to sample traffic and extrapolate, or to collect as much as you can on your own properties (and only a few companies have the scale to gather useful data that way).

So ultimately this kind of algorithm is unlikely to make a groundbreaking difference and seems destined to be a supplemental signal within the general ranking algorithms. We’ll see more of it, and it shows promise in smoothing out link anomalies like link farms, but it isn’t likely to be the core of a major search engine any time soon.