Luca Vassalli's Website - Home page





2.6 Content

Obviously the main means you can use to reach the highest ranks is a good, clean web page design with interesting and original content. The search engine is looking for the content that best matches the user's needs: if you have a website with tons of pages of original content, you will achieve the top ranks. Design is important, but users are more willing to forgive many design issues if the content is good than to surf a very appealing website full of junk information.
Even if they are useless from a strict SEO point of view, you should follow some general common-sense rules to create a website that users, whether they arrive from a search engine or elsewhere, will enjoy browsing: the theme of the website must be something the surfers are interested in; the employees who write the content must be interested in, and familiar with, the topics they cover; typos and dead links have to be detected and corrected as soon as possible; and the pages should not be filled with too much information, too many keywords, or too much "stuff" (images, animations, banner ads and the like).

It is also important that your website is kept up to date: if you update it every day, the spider will probably visit it at least once a week, but if you update it once a year, not only will the spider rarely visit your website, you will also drop positions in the result list. The search engines want to provide their users with the most up-to-date information. This is one of the reasons newspaper websites have very good ranks: they have lots of content and they are updated daily.
If you run a website for a big company, there are two ways to accomplish this task. The first is to have a news section which is updated regularly, but unfortunately not every website is suitable for that kind of section. The second is to allow and encourage the employees to keep a blog in a separate area of the website; it will furnish fresh, regularly updated content with an informal point of view on what is going on in the company, in the industry as a whole, or in the world in general.

When you add content to your website you also need to make sure it is not duplicated from other sources on the Web, because if a search engine spots the duplication, it hurts. Thus, in case you need to duplicate content from some other website, you have to spend enough time modifying it, perhaps by adding comments with your own opinion or the like. Even if you do not duplicate from other sources, it may happen that another website copies yours; in this case you can detect it with appropriate tools and ask the webmaster to remove the information he copied from you. Sometimes it may even be worth modifying your own content to avoid risking a ban for duplicate content.
It is really worth spending time avoiding duplicate content, since the major search engines are getting better and better at detecting it.
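As a rough illustration of how duplicate or near-duplicate text can be spotted, here is a minimal sketch using word shingles and Jaccard similarity, a classic technique from the near-duplicate detection literature. The sample texts and the shingle size are illustrative assumptions, not a description of how any particular search engine actually works.

```python
# Minimal near-duplicate detection sketch: compare pages by the overlap
# of their k-word "shingles" (contiguous word sequences).

def shingles(text, k=3):
    """Return the set of k-word shingles of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical page texts: page2 is a lightly edited copy of page1,
# page3 is unrelated content.
page1 = "buy our great widget today the best widget on the market"
page2 = "buy our great widget now the best widget on the market"
page3 = "a completely unrelated page about gardening tips and tools"

sim_near = jaccard(shingles(page1), shingles(page2))  # high: near-duplicate
sim_far = jaccard(shingles(page1), shingles(page3))   # zero: no shared shingles
```

A real system would hash the shingles (e.g. with MinHash) to compare millions of pages efficiently, but the underlying idea of measuring shingle overlap is the same.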
There are four main kinds of duplicate content.
Websites with identical pages are the main target of duplicate content filters. If two websites are affiliated and for that reason share common pages, perhaps selling the same product with the same page, they will probably see a drop in ranking for the whole website and for that page in particular. The reason for the duplication does not matter; the search engine cannot understand the motive behind it.
Scraped content is another flavour of duplicate content. It consists of taking content from a website and repackaging it to make it look different; it increasingly affects blogs and their syndication, and it is becoming really difficult for search engines to detect.
E-commerce product descriptions are also dangerous. It may happen that different websites sell the same product and use the same technical description, perhaps taken from the producer's website. Avoiding the penalty involves commenting on the description, but it can also require writing your own original descriptions, which is costly but is the only way not to be penalised.
Finally, there is a risk also in the distribution of articles. Consider what may happen if an article is so good that many websites decide to publish it. If the author does not allow the article to be modified, all the websites carrying it risk the duplicate content ban. In this case you have to consider how relevant the article is to your web page and to the site as a whole: the more relevant it is, the less likely it is that just a comment will be enough.
Once you have detected duplicate content on your website, consider that the search engine will look at the entire web page and its relationship to the whole site to decide whether someone has to be punished, and who is guilty. Since this is not a trivial task, there are cases of legitimate websites which were unfairly banned because other, bigger websites scraped their content. Bigdaddy, the recent change in Google, was supposed to fix the problem, but it seems it could hardly be going worse. It still has problems deciding which website originally published the content, and detecting the legitimate uses of duplicate content. Think of news feeds, for instance: in that case there is duplicate content because lots of users are actually looking for the same information. So it may still happen that you are banned because a bigger website copied your original content, or because you write about a very common piece of recent news. We will have to wait for some future updates to see these problems fixed.