
Does Search Engine Optimization (SEO) Actually Work?


There are many things web page designers can do to increase the ranking of the pages they create.  Historically, there were technical aspects of search algorithms that web developers could leverage to boost a page's ranking.  As search technology has matured, all of those methods can increasingly be summed up in a single sentence: make a web page that people want to see in their search results.

Chances are, though, if you were asking whether a web design firm provides SEO, you weren't intending to ask whether they design web sites that people want to see.  You were asking whether they offer services to improve search rankings after they create an awesome website.  Why was the term SEO ever invented if there is no special sauce that web designers can add to a web page to increase its ranking?  Because there used to be.


How Search Engines Work


Each search engine has its own carefully guarded algorithm that changes on a daily basis.  As the engines gain more information about how to perform searches successfully, they update their algorithms.  Generally, however, search can be described in three parts: a crawler, an indexer, and a queryer.  Each of those parts contributes something to the process, and good web designers need to know about all three of them.


The Crawler


A web crawler is the explorer of the search engine.  It boldly sets its eight nimble feet onto the freshly spun threads of the world wide web to find out what lies beyond the borders of yesterday's obsolete map.  At least, that's the imagery that was intended to be evoked when they chose the name.  The technical reality is that a web crawler is a program that follows web page links.  Each time it follows a link, it checks whether the page that link points to is in the search engine's database and, if it is, whether it has changed significantly since the last time it was seen.  If it's a new page, or a page that has changed since it was last seen, the crawler sends a copy of that page to the indexer.  The crawler then follows the links on that page to discover new pages.
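For the technically curious, here is a minimal sketch of that core loop in Python: follow a link, check whether the page has been seen, hand new pages off to the "indexer," and queue up the links it contains.  It is purely illustrative; the seed URL is a placeholder, and real crawlers handle politeness rules, change detection, and scale far more carefully.

```python
# A minimal, illustrative crawler loop (not production code).
from urllib.parse import urljoin
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    seen = {}              # url -> page content already handed to the "indexer"
    frontier = [seed_url]  # links waiting to be followed
    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue       # already known; a real crawler would also check freshness
        try:
            page = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except OSError:
            continue       # unreachable or unreadable page; skip it
        seen[url] = page   # stand-in for "send a copy to the indexer"
        extractor = LinkExtractor()
        extractor.feed(page)
        frontier.extend(urljoin(url, link) for link in extractor.links)
    return seen

# crawl("https://www.example.com/")  # placeholder seed URL
```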


The web crawler's behavior is the part of a search engine that web designers have the least influence over, and that has always been the case.  Either the crawler is going to tell the indexer about your web page or it isn't.  As long as the web site is placed in a part of the internet that crawlers search, it will be found and indexed unless you explicitly ask that it not be.  The only other question is whether your web page is regularly re-indexed.  Whether that's something you want is a question I'll address in a little while.
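The standard way to "explicitly ask" is a robots.txt file at the root of your site.  As an illustration, here is how a well-behaved crawler might check that file before fetching a page, using Python's standard library; the example.com URLs are placeholders.

```python
# Checking robots.txt the way a polite crawler would (URLs are placeholders).
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

if rp.can_fetch("*", "https://www.example.com/private/page.html"):
    print("Crawlers may fetch and index this page.")
else:
    print("The site has asked crawlers to stay away from this page.")
```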


The Indexer


The indexer is where SEO really got its start.  Imagine a search engine's database as a gigantic library filled with over a billion books.  Once a crawler has discovered a brand new web page to add to the library, it's the indexer's job to decide where to put it.  Modern search engines use categorization systems that are far more complicated than the index used by a normal library.  But that wasn't always the case.  The "card catalog" used by search engines to index the web started as a giant word association database.  Much like the children's game where I say "bird" and you say "song," search engines circa 1996 would associate a word with a web site.  You say "president" and AltaVista says "www.whitehouse.gov."  A keyword index allowed the search engine to answer queries using very simple algorithms.  All the engine had to do was give users the pages associated with the words used in a search query.  The more closely a web page was associated with a keyword, the higher on the results list that page appeared.
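To make the word-association idea concrete, here is a toy inverted index in Python, roughly the kind of "card catalog" a mid-90s engine might have built.  The pages and their text are invented for illustration.

```python
# A toy keyword ("inverted") index: each word maps to the pages that contain it.
from collections import defaultdict

pages = {
    "www.whitehouse.gov": "the president of the united states",
    "www.example-tools.com": "buy the best tool for every job",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

# Answering a query is then just a lookup:
print(index["president"])  # {'www.whitehouse.gov'}
print(index["tool"])       # {'www.example-tools.com'}
```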


One simple way to build a keyword index is just to count the words on a web page.  If one out of every fifty words on a web page is "tool," then the page is probably relevant to a search query with the word "tool" in it.  Web pages have a lot of words on them, though, and computers were a lot slower twenty years ago.  To get around that problem, indexers often only skimmed web pages and relied more heavily on a web page's meta keywords.  Meta keywords are words hidden inside a web page's HTML that users never see.  In the early days of the internet, only the programs that form the infrastructure of the world wide web and the programmers who wrote them knew of their existence.  These super secret magic words are the mother sauce from which Search Engine Optimization was born.  Naive web designers built web sites that provided quality content and hoped that was enough to get the search engines to notice their page and list it when relevant terms were searched.  The enlightened practitioners of SEO scoffed at such naivete because they knew the truth.  Search engines couldn't actually figure out whether a web page was relevant to a query.  They relied on all sorts of shortcuts that usually worked.  If you knew which shortcuts the search engines took, you could take advantage of that knowledge and game the system.
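Here is a small sketch of both shortcuts: counting how often a keyword appears on a page and reading the hidden meta keywords tag.  The HTML snippet is invented, and real indexers of the era were considerably more involved.

```python
# Two early-indexer shortcuts: keyword density and the meta keywords tag.
from collections import Counter
from html.parser import HTMLParser

html = """
<html><head>
  <meta name="keywords" content="tool, tools, best tool, cheap tool">
</head><body>
  <p>Our tool is the best tool for any job that needs a tool.</p>
</body></html>
"""

class MetaKeywordParser(HTMLParser):
    """Pulls out the meta keywords and the visible text of a page."""
    def __init__(self):
        super().__init__()
        self.keywords = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "keywords":
            self.keywords = [k.strip() for k in attrs.get("content", "").split(",")]

    def handle_data(self, data):
        self.text.append(data)

parser = MetaKeywordParser()
parser.feed(html)

words = [w.strip(".,!?") for w in " ".join(parser.text).lower().split()]
density = Counter(words)["tool"] / len(words)

print(parser.keywords)                            # the hidden meta keywords
print(f"keyword density for 'tool': {density:.0%}")
```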


The Queryer


When you go to your local library you can usually use the card catalog to find a specific book you’re looking for.  If you aren’t looking for a specific book there are usually signs telling you which kinds of books you can expect to find in which section.  The indexing system of a community library is usually simple enough that most people can easily use it.  The “card catalog” built by search engines is unfortunately not so easy to use.  Finding a book in the library only occasionally requires the assistance of a librarian.  When searching the web you need the “librarian” almost every time.  The Queryer is that librarian.


Originally, the Queryer of a search engine was its simplest part.  Once the index was built, all that was left was to check which websites were associated with which words and give those to the user.  SEO killed this simplicity.  In order to achieve higher search rankings for a word like "value," web designers would literally cover their websites with the word "value" written in invisible ink.  Search results were of predictably low quality because of this.  That's where Google comes into the story.  Like everyone else, they used word association to figure out which websites were relevant to a query, but their Queryer was able to discriminate between high quality sites and low quality sites by taking advantage of one simple idea: people link to good sites, not bad ones.
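That insight became PageRank.  The sketch below is a heavily simplified, illustrative version of the idea in Python: a page's score is spread among the pages it links to, so pages that attract links from well-linked pages rise to the top.  The link graph is invented, and real ranking systems combine this signal with many others.

```python
# A toy, PageRank-style ranking: score flows along links for a few iterations.
# The link graph is invented for illustration.
links = {
    "site-a": ["site-b", "site-c"],
    "site-b": ["site-c"],
    "site-c": ["site-a"],
    "spam-farm": ["site-a"],   # nobody links back to the spam farm
}

damping = 0.85
scores = {page: 1.0 / len(links) for page in links}

for _ in range(20):  # a handful of iterations is plenty for a graph this small
    new_scores = {page: (1 - damping) / len(links) for page in links}
    for source, targets in links.items():
        share = damping * scores[source] / len(targets)
        for target in targets:
            new_scores[target] += share
    scores = new_scores

for page, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```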


Putting invisible words on web pages no longer worked to increase search rankings, but the wise practitioners of SEO were not deterred.  They understood that search engines still weren't actually capable of figuring out whether a web page was one that a user wanted to see.  Google's innovation was still only an indirect indicator of quality.  They rolled up their sleeves and began developing crafty ways to exploit the new rules of the search ranking game.  They would create networks of web pages that did nothing but link to the pages they wrote.  They would automatically generate comments on the message boards of prestigious sites, copying and pasting their sites' URLs into the text.  These practices didn't help users get to sites they wanted to see, so Google found new ways to get around them.  Google would change its search algorithm.  SEO wizards would learn the rules and figure out new ways to game the system.  This arms race continued for the better part of a decade, until one day SEO experts woke up and realized that Google had turned them into the naive web designers they had once scoffed at.  Their job was to design quality web sites and market them well.  SEO as its own specialty was dead.


Modern Search Engines


The decade-long arms race between Google and SEO resulted in something that no one could have predicted but which, in hindsight, is obviously how it always had to turn out.  Search engines are the smartest programs that someone can blog about without having to flee to Russia afterwards.  The base for SEO's secret sauce was the fact that search engines didn't really know which web sites were relevant to a user's query, but that is no longer true.  Google, Microsoft, Baidu, Yahoo, Facebook, and others have spent billions of dollars on artificial intelligence research in the last decade.  When Google's Peter Norvig was asked recently what percentage of the world's artificial intelligence experts are employed by Google, he said "less than 50% but more than 5%."  In 1996, when someone said that a search engine reads web pages, they were speaking figuratively.  Today, statements like that are either already literally true or soon will be.


The indices of search engines are no longer keyword association lists.  The meta keywords are now completely ignored by most, if not all, search engines.  The new organizational systems are based on semantics.  What information is contained in a website?  How might someone use that website?  What types of graphics does this particular user prefer to see on websites?  How recently has the content of a website been updated?  Does the user want content that’s been updated recently or would they prefer content that has been unchanged for a decade?  There is no way to game a system that can actually answer those questions.  All you can do is build content that users legitimately want to see in their search results, confident in the knowledge that search engines really can tell the difference now.


References:


The Wikipedia page on SEO gives a decent overview, but some of the information is rather dated.

http://en.wikipedia.org/wiki/Search_engine_optimization


Google’s current “Search Engine Optimization” guide is really nothing more than a guide on how to write good web sites and market them well.

http://static.googleusercontent.com/media/www.google.com/en/us/webmasters/docs/search-engine-optimization-starter-guide.pdf


Articles from the early to mid 2000s can give you a good idea of what the state of the art was like during the period when SEO actually was its own specialty.

http://www.clickz.com/clickz/column/1717475/the-most-important-seo-strategy


The semantic web gave search engines lots of new tools for index creation and results quality.  This article is good for people interested in a more technical look at modern indexers.

http://blog.ahrefs.com/google-processes-queries-semantic-web-environment/


The Knowledge Graph is the new system being developed by Google to translate both web sites and queries into abstract information.  Once that's accomplished, determining which websites are relevant to which queries becomes a matter of measuring the distance between them.

http://www.google.com/insidesearch/features/search/knowledge.html


I didn’t get into too many details about what kinds of services are usually being provided when someone offers SEO.  This article does a decent job of that.

http://searchenginewatch.com/article/2316240/SEO-Really-is-Dead-Long-Live...Uh...What-Should-We-Call-This


Google and Bing both agree that most SEO practices from a decade ago, or even a few years ago, are useless unless they actually contribute to a site's real value.

http://searchenginewatch.com/article/2325632/Google-Bing-Agree-Past-SEO-Success-Guarantees-You-Nothing-Today

http://www.bing.com/blogs/site_blogs/b/webmaster/archive/2014/01/23/seo-past-performance-is-no-guarantee-of-future-results.aspx


Google's Matt Cutts explains when freshness improves search ranking and when it doesn't.

http://www.youtube.com/watch?v=o4hH4ZQ_19k


Andrei Broder wrote the definitive paper on the types of search queries that exist.  If you are interested in maximizing your site's applicability to one particular use case or another, it can be very useful.

http://www.sigir.org/forum/F2002/broder.pdf
