Archive for Search Engine Optimization

Yahoo User Interface Library

I am continually surprised by how many otherwise good web designers and developers are unaware of the Yahoo! User Interface Library. I hope this post will help more of them discover this useful resource. If you are a site owner, you may want to pass the information on to your designer. Who knows, they may thank you for it :)

The Yahoo! User Interface Library (YUI) is a collection of JavaScript and CSS resources that make it easier to build interactive applications in web browsers. Some, like the Event Utility, simply make in-browser programming easier, while others, like the Menu family of components, make it a snap to add fly-out menus, customized context menus, or application-style menu bars to your website or web application.

Not only is the YUI Library the same high-quality code that Yahoo uses on its own web properties, but it is also free for both commercial and non-profit use (subject to minor restrictions).

An additional bonus: if you’re using YUI for your own project, Yahoo offers free hosting for YUI components, both JavaScript and CSS, gzipped and served with good cache-control headers from their state-of-the-art network.
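As a sketch of what the hosted option looks like in practice (the version number, element id and handler below are illustrative, not a recommendation), a page can pull the aggregated yahoo-dom-event file straight from Yahoo’s servers and then use the Event Utility:

```html
<!-- Load the YAHOO, Dom and Event components from Yahoo's free hosting.
     The version number is illustrative; check the YUI site for the
     current release. -->
<script src="http://yui.yahooapis.com/2.5.2/build/yahoo-dom-event/yahoo-dom-event.js"></script>
<script>
// Attach a click handler without worrying about cross-browser
// event differences.
YAHOO.util.Event.addListener("my-button", "click", function (e) {
    alert("Clicked!");
});
</script>
```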

Support is provided through a Yahoo! User Interface Library Group and there is a YUI Blog for announcements.

I played around with the DataTable control, which provides a powerful API for displaying screen-reader-accessible tabular data on a web page with sortable columns.
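For readers who want to try the same thing, a minimal DataTable set-up looks roughly like this (the element id and data are invented for illustration, and the datatable and datasource files must be loaded along with their dependencies as described in the YUI docs):

```html
<div id="prices"></div>
<script>
// Define sortable columns, wrap a plain JavaScript array in a
// DataSource, then render the table into the div above.
var columnDefs = [
    { key: "item",  label: "Item",  sortable: true },
    { key: "price", label: "Price", sortable: true }
];
var data = [
    { item: "Widget", price: 9.99 },
    { item: "Gadget", price: 14.50 }
];
var dataSource = new YAHOO.util.DataSource(data);
dataSource.responseType = YAHOO.util.DataSource.TYPE_JSARRAY;
dataSource.responseSchema = { fields: ["item", "price"] };
var dataTable = new YAHOO.widget.DataTable("prices", columnDefs, dataSource);
</script>
```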

This is what I was able to produce in 15 minutes.

Comments (1)

Image Search

Why would you want to optimize for image search, and how do you do it? Here are some reasons for doing it:

Image search results are being increasingly used by the search engines in contextual search results to improve usability.

Google is serious about images. In October last year Google added an opt-in to Enhanced Image Search in Webmaster Tools. In conjunction with Image Labeler this allows Google to associate the images included in your site with labels that will improve indexing and search quality of those images.

If you search in Google or Yahoo for |pictures of diamond earrings|, |images of mowers|, |hammer image| or something similar, just above the organic listings you will get a row of relevant images. In Yahoo’s case the images are sometimes from Flickr rather than Yahoo image search; for example, search Yahoo for |funny pictures|.

All three of the major search engines have a separate image search: Google image search, Yahoo image search and MSN image search.

Image search statistics from Hitwise show that image search is growing at 90% year on year and represents nearly 0.5% of all internet visits.

Traffic from image search can be targeted. It may not convert as well as organic search traffic, but it’s free!

OK, so how do you do it? The easiest way to explain is by example, so I have created a new image for this page:

Image search

We will be optimizing this image for the term |image search| which currently has 5,660,000 results on Google Image Search.

  • Put the search term in the page url. In this case it’s
  • Put the search term in the page title. In this case it’s <title>Image Search</title>
  • Use the search term in close proximity to the image. In this case the search term appears twice in the sentence immediately following the image.
  • Make sure the page topic corresponds to the search term. In this case the page topic is definitely image search!
  • Make sure the image size is non-standard. In this case it’s 304 x 203 pixels.
  • Save the image in .jpg or .gif format. In this case it’s a .jpg
  • Name the image with the search term. In this case it’s image-search.jpg
  • Use the search term in the alt attribute, the title attribute and make sure that you have included the width and height declarations. In this case we have <img src="" alt="Image search" title="Image Search graphic signed by Googlebot" width="304" height="203" />
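Putting the on-page items from the checklist together, the relevant markup for this post looks something like this (the src path and the sentence text are illustrative; the file name, dimensions and attributes are the ones given above):

```html
<head>
  <!-- Search term in the page title -->
  <title>Image Search</title>
</head>

<!-- Keyword-named file, non-standard size, alt and title set,
     width and height declared -->
<img src="/images/image-search.jpg" alt="Image search"
     title="Image Search graphic signed by Googlebot"
     width="304" height="203" />
<p>Image search traffic is worth having, and image search
   optimization takes only a few minutes per image.</p>
```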

I am not suggesting that you do all of the above for every image on your site, but if you choose some key pages and optimize the image(s) on those pages (or create them specially and then optimize them), there will be a tangible benefit.

The search engines’ image databases are not updated all that frequently, but when the image above ranks I will post an addendum here.

Comments (1)

Hidden Text (Revisited)

Just over a year ago I posted on the dangers of hidden text and concluded with the advice "…don’t use hidden text to try to improve your rankings".

Here is a practical example of what may happen if you do.

Yesterday John Frost, who runs the very popular Disney Blog, posted that his blog had been delisted from the Google index, and sure enough it had:

Google search shows no record of

Such is the power of popular blogs that within a couple of hours of John’s plea for help there was an explanation and a resolution from none other than Google engineer and spam-fighter-in-chief Matt Cutts. He explains in a diplomatic and friendly comment that hidden text was responsible for the ban. Specifically, this page code:

<h2 id="banner-description">Informing Disney Fans the World Over with the latest news and updates from all Disney companies, divisions, and related stories. Disney World, Disneyland, Disney Cruises, Disney Animation, Pixar, ESPN, and more are covered in as much detail as I can muster.</h2>

With this in the external CSS file:

overflow: hidden;
width: 0;
height: 0;
margin: 0;
padding: 0;
text-indent: -1000em;

As it happens, this appears to be a generic Typepad problem: when you set up a Typepad blog you are asked to enter a Weblog description, which ends up being hidden by the CSS. However, after Matt had pointed it out and John had removed the text, Matt helpfully submitted a reinclusion request.
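For anyone checking their own templates, the fix is simply to stop pulling the text off-screen. A sketch of a safe replacement for the rules above, assuming you actually want the description to render:

```css
/* Let the weblog description display normally instead of being
   hidden by zero dimensions and a negative text-indent. */
#banner-description {
    overflow: visible;
    width: auto;
    height: auto;
    margin: 0;
    padding: 0;
    text-indent: 0;
}
```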

Matt has gone off to talk to Six Apart, the Typepad developers, and The Disney Blog will be back in the index sometime next week.

The moral of the story is still the same - don’t use hidden text to try to improve your rankings.

Comments (5)

Five Questions for Web Designers

“The physician can bury his mistakes but the architect can only advise his client to plant vines”. Frank Lloyd Wright (1869 - 1959), New York Times, October 4, 1953.

If Frank Lloyd Wright were alive today, I wonder what he would say about web designers’ mistakes. I see thousands of prospective clients’ and their competitors’ websites over the course of a year, and although web design is improving, I am still left thinking that 95% of web designers and web design firms just don’t understand the basics.

I have had to become an expert in diplomacy while explaining to prospective clients that the website for which they have paid hard-earned money is (to put it politely) not as good as it might have been.

There seem to be five web design and build failures that come up again and again and require discussion with website owners. I rarely, if ever, get to talk through these points with the designers, so I have listed them here as questions.

If you are thinking of having a new site built or revamping your existing one, you may want to make sure that these questions will be unnecessary before you appoint someone to carry out the work.

Here are the five questions for web designers:

1. Why don’t you learn what goes in the HEAD element?

Just because your client is unlikely to peruse the HEAD element doesn’t mean you should ignore it or fill it with garbage.
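For the record, a clean, minimal HEAD needs very little. Something along these lines (the title, description and file path are invented for illustration):

```html
<head>
  <!-- A unique, descriptive title for each page -->
  <title>Diamond Earrings - Acme Jewellers</title>
  <!-- A meta description the search engines can use as a snippet -->
  <meta name="description"
        content="Hand-made diamond earrings with free UK delivery." />
  <!-- Styles in an external file, not inline garbage -->
  <link rel="stylesheet" type="text/css" href="/css/main.css" />
</head>
```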

2. What’s so difficult about producing search engine friendly urls?

Dynamically generated urls can cause problems for search engine crawlers and may be ignored. Why not generate search engine friendly, human readable urls instead?
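Generating such urls is not hard. As a sketch (the function name and rules here are my own, not from any particular framework), a page title can be reduced to a readable slug like this:

```javascript
// Reduce a page title to a lowercase, hyphen-separated URL slug.
function slugify(title) {
    return title
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, "-") // collapse runs of other characters
        .replace(/^-+|-+$/g, "");    // trim stray hyphens at the ends
}
```

So a product page titled “Diamond Earrings (Sale!)” can live at /products/diamond-earrings-sale rather than /product.php?id=1234.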

3. Why large logos?

Logos that take up 25% of the home page are a waste of valuable real estate. Users want to see what they came for not pictures of models staring up at the camera.

4. Do you leave blank alt attributes for a reason?

Alt attributes really do have a purpose. They are there for the many users who use talking browsers, screen readers, text browsers or browsers on small devices.

5. Why don’t you use web standards like those from the W3C?

Did you know that separating structure from presentation makes it easy for alternative browsing devices and screen readers to interpret the content? Or that using semantic and structured HTML makes for simpler development and easier maintenance? Or that less HTML means smaller file sizes and quicker downloads? Or that a semantically marked up document is easily adapted to alternative browsing devices and print? Or that if you use standards and write valid code you reduce the risk of future web browsers not being able to understand the code you have written?


Nofollow in Google, Yahoo and MSN

“If we value the pursuit of knowledge we must be free to follow wherever that search may lead us”. Adlai E. Stevenson II from a speech at the University of Wisconsin, Madison, October 8, 1952.

This is a compendium of our experiments and the experiments of others to determine how the major search engines currently treat the rel=”nofollow” attribute.

A few months ago I placed a rel=”nofollow” link on one of the existing test pages that we had used in the past to determine the search engine indexing behavior of keywords in urls. The link pointed to a new test page and was styled with style=”text-decoration:none” to reduce the possibility of someone clicking it and signaling the existence of the new ‘linked to’ page as a referrer. Here is a partial screen shot of the page in Firefox using the SearchStatus extension, which highlights rel=”nofollow” links. There are no other links to the new test page.

Nofollow link in the test page
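For clarity, the test link was ordinary HTML of roughly this shape (the URL and anchor text here are placeholders, not the real test page):

```html
<a href="http://www.example.com/test-page.html"
   rel="nofollow"
   style="text-decoration:none">some unique anchor text</a>
```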

Google, Yahoo and MSN are now showing a recent cache of the page and we can see how they handled the link.

We know that Google and Yahoo follow rel=”nofollow” links in the sense that they will visit the ‘linked to’ page. Valentin Agachi reported this in detail some time ago in his post Does rel=nofollow work? So, for our own experiment, starting with the simplest behavior first:

MSN appears not to have spidered and certainly has not indexed the ‘linked to’ page:

MSN nofollow experiment result

Yahoo has spidered and indexed the ‘linked to’ page:

Yahoo nofollow experiment result

Yahoo also shows the page in the SERPs at 14/64 for an exact search on the anchor text.

Yahoo nofollow experiment serps result

Google has spidered but not indexed the page:

Google nofollow experiment result

Mark Barrera in his post “nofollow” - Does it Really Work Like Google Claims? has shown that if the ‘linked to’ page is in the index already then Google will rank the page for the anchor text. Google will also acknowledge the link on the cached page with “These terms only appear in links pointing to this page”.

Here is a summary of all our findings:

rel="nofollow" action              Google                               Yahoo   MSN
Follows the link                   Yes                                  Yes     Not proven
Indexes the ‘linked to’ page       No                                   Yes     No
Shows the existence of the link    Only for a previously indexed page   No      No
In SERPs for anchor text           Only for a previously indexed page   Yes     No

What we can’t know for sure is whether the search engines completely disregard rel=”nofollow” in their ranking algorithms. Google says in the Official Google Blog, “When Google sees the attribute (rel=”nofollow”) on hyperlinks, those links won’t get any credit when we rank websites in our search results”. MSN appears to disregard rel=”nofollow” links in every aspect, and Yahoo seems to treat rel=”nofollow” links the same way as any other link, though they are probably disregarding them for ranking purposes.

Comments (1)
