You Should Create Unique Content
Since Panda’s initial launch, Google has been gradually updating the algorithm. These changes have primarily been aimed at reducing duplicate content while returning better results for searchers. The importance of quality content cannot be stressed enough: Panda filters out low-quality content from content farms, along with content that is thin or poorly written.
Creating unique content is an essential part of SEO today. If you were an article spinner, you may have been hit pretty hard by the Panda updates. A large amount of the content being created was written for the search engines and not for the people reading it. The Internet has always had an exorbitant amount of spam, and this spam was being created at an alarming rate.
First of all, the reduction in content allows the search engines to index content faster, which in turn produces better results. Today’s search engines are trying to produce the best results for searchers, and those results have improved because of quicker indexing and better-quality content and sources. Panda amplified this by forcing better content and quality.
Content Spin off Test
This need to produce quality over quantity is still of the utmost importance today. Writing informative, helpful content that educates and adds value for the reader is a good example of quality content. During Panda’s first stages, I wrote one article and spun it into five different articles for testing purposes. This allowed me to measure Panda’s impact as the rollout was starting. Although I did not see a significant decline in rankings, there definitely were some adverse effects from this testing.
I did manage to alter the different versions of the content enough that they were arguably fairly unique. I think the real adverse effect came into play when Google Panda started penalizing the sites the articles were submitted to; that is likely the only reason my site decreased in ranking for a short time. The sites the articles were submitted to may have lost some of their luster, or juice, because of Panda. All in all, I am not sure of the exact cause, but the site has since recovered.
Content Spin off Results
Since completing the testing, it is clear that writing unique content is for sure the correct way to go. There are several reasons for this:
- Proving you’re an authority in your industry
- Makes it easier to claim your content
- No risk of declining results
- Knowing you followed guidelines
- Not being a spam contributor
- Not being hit with a DMCA takedown
Duplicated Content on Site
Duplicate content within a website is another area Panda has affected. An example of this type of content is having more than one page with the same information on each page. Although this is not a copyright issue, you are essentially producing the same content when you place the same information on multiple pages within the same domain. Why should this matter, you might ask? Panda’s goal here was to discourage creating pages just for the sake of creating pages. Some websites were posting the same content on different pages to target more keywords in titles and descriptions and so generate more results. This is a spam method/strategy focused on capturing more search engine results through mass production.
Although this was one of the principles behind Panda, I cannot say I saw much impact concerning duplicate content within websites. For example, some sites have multiple landing pages directed at different locations, where the content is identical except for the meta titles and descriptions and a few minor changes to location names on the pages. Panda does not seem to pay much attention to the fact that these sites have essentially the same content on those pages, hence duplicate content. The only differences, as noted, were the titles and descriptions, and some of the descriptions were identical except for a location name.
Avoiding Duplicate Content Issues
- Make use of canonical URLs
- Use noindex and nofollow where appropriate
- Avoid identical content across pages
- Do not scrape content from other sites
- Make sure body, title, and description tags are different
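The first two items above can be illustrated with a couple of tags placed in a page’s head section. The URL here is hypothetical, just to show the shape of the markup:

```html
<!-- On a duplicate or near-duplicate page: point search engines
     at the preferred version of the content. -->
<link rel="canonical" href="https://www.example.com/widgets/" />

<!-- Or keep a page out of the index entirely and tell crawlers
     not to follow its links. -->
<meta name="robots" content="noindex, nofollow" />
```

The canonical tag consolidates ranking signals onto one URL, while noindex simply removes the page from consideration; which one fits depends on whether the duplicate page still needs to exist for visitors.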
Conclusion of Duplicate Content
I am not exactly sure how much has to be changed to make content unique enough not to matter, but apparently not much from what I can tell at this point. The proof is in the pudding: I can come to this conclusion because I have done and tested the things written about in this post. One site in particular still has solid results in the SERPs and has plenty of duplicate content to this day. I believe it is safe to say that websites with high authority were not hit too hard even with duplicate content on the sites. Although the pages with duplicate content from the same domain may have been penalized on a page-by-page basis, the overall domain was not impacted.
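On the question of how much has to change before two versions count as unique, one rough way to measure it yourself is to compare them with Python’s standard library. The sample texts and the 0.8 threshold below are my own illustration; Google has never published any such number:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two texts, word by word."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Two hypothetical versions of a spun sentence.
original = "Panda rewards unique helpful content that adds real value for readers"
spun     = "Panda rewards unique helpful content that adds real value to readers"

score = similarity(original, spun)
print(f"similarity: {score:.2f}")

# Purely illustrative cutoff -- not an official Google threshold.
if score > 0.8:
    print("These versions are probably too close to count as unique.")
```

A one-word swap leaves the two versions overwhelmingly similar, which matches my experience that light spinning does not really produce unique content.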
Were you affected by duplicate content, or were your outcomes the same?
Google Panda Release Dates
Here is a complete list of the Panda updates provided by Search Engine Land:
Panda Update 1.0, Feb. 24, 2011 (11.8% of queries; announced; English in US only)
Panda Update 2.0, April 11, 2011 (2% of queries; rolled out in English internationally)
Panda Update 2.1, May 10, 2011 (no change given; confirmed, not announced)
Panda Update 2.2, June 16, 2011 (no change given; confirmed, not announced)
Panda Update 2.3, July 23, 2011 (no change given; confirmed, not announced)
Panda Update 2.4, Aug. 12, 2011 (6-9% of queries in many non-English languages; announced)
Panda Update 2.5, Sept. 28, 2011 (no change given; confirmed, not announced)
Panda Update 3.0, Oct. 19, 2011 (about 2% of queries; belatedly confirmed)
Panda Update 3.1, Nov. 18, 2011 (less than 1% of queries; announced)
Panda Update 3.2, Jan. 18, 2012 (no change given; confirmed, not announced)
Panda Update 3.3, Feb. 27, 2012 (no change given; announced)
Panda Update 3.4, March 23, 2012 (about 1.6% of queries impacted; announced)
Panda Update 3.5, April 19, 2012 (no change given; belatedly revealed)
Panda Update 3.6, April 27, 2012 (no change given; first update within days of another)
Panda Update 3.7, June 9, 2012 (1% of queries; belatedly announced)
Panda Update 3.8, June 25, 2012 (about 1% of queries; announced)
Panda Update 3.9, July 24, 2012 (about 1% of queries; announced)
Panda Update 3.91, Aug. 20, 2012 (about 1% of queries; belatedly announced)
Panda Update 3.92, Sept. 18, 2012 (less than 0.7% of queries; announced)
Panda Update 20, Sept. 27, 2012 (2.4% of English queries impacted)
Panda Update 21, Nov. 5, 2012 (1.1% of English queries in US; 0.4% worldwide; confirmed, not announced)
Panda Update 22, Nov. 21, 2012 (0.8% of English queries were affected; confirmed, not announced)
Panda Update 23, Dec. 21, 2012 (1.3% of English queries were affected; confirmed, announced)
Panda Update 24, Jan. 22, 2013 (1.2% of English queries were affected; confirmed, announced)
Panda Update 25, 2013 (confirmed as coming; not confirmed as having happened)
Thank you for reading; all questions or comments are welcome. If this was helpful, pass it along, join our newsletter, or just share a little. I will try to update this list when Google puts out the next release.
“Do not let the Panda eat all your website traffic.”
As you may or may not know, Google does not provide keyword data when users are logged in to their accounts. This has been an ongoing topic throughout the SEO world for some time now. Link Worx Seo wrote about this topic on 10/23/2011 and 11/03/2011 and the effect it is having on SEO analysis. As you can see from a post by Search Engine Land, the numbers are increasing month by month. This has yet to be implemented worldwide, but Google plans to do so in the near future, according to sources.
Steve Myers shared what he found after checking Poynter.org’s analytics: “Keywords were hidden in 29 percent of searches in April. That’s up from 22.5 percent in November, shortly after the change was made. Now ‘(not provided)’ makes up the largest category of search terms, dwarfing the second-place term: Poynter. Overall, 6 percent of inbound traffic now comes from a black box.” (Source: Search Engine Land)
If you are a large business relying on high amounts of keyword data, this can be very disruptive to your business. As an SEO specialist, it presents an annoying challenge in collecting adequate data. This data was said to remain in the single digits but, of course, has surpassed the single-digit numbers originally announced by Google. Currently, I have clients asking Link Worx Seo what the “not provided” entry means in the keyword list in Google Analytics. Explaining to some clients why Google will not show the keywords being used to find their business is challenging.
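To see how big the problem is for a given site, you can tally the “(not provided)” share from an exported keyword report. The numbers below are made up for illustration; real figures would come from your own analytics export:

```python
# Hypothetical keyword report: (keyword, visits) pairs as exported
# from an analytics tool. These numbers are invented for the example.
keyword_visits = [
    ("(not provided)", 420),
    ("link worx seo", 150),
    ("seo services", 95),
    ("google panda update", 60),
    ("duplicate content", 25),
]

total = sum(v for _, v in keyword_visits)
hidden = sum(v for kw, v in keyword_visits if kw == "(not provided)")
share = hidden / total * 100

print(f"(not provided) accounts for {share:.1f}% of {total} keyword visits")
# → (not provided) accounts for 56.0% of 750 keyword visits
```

When that share climbs past half of all organic visits, as in this made-up report, keyword-level analysis for the hidden portion is simply no longer possible.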
Some of these clients instantly ask, “Why would Google not show you the data?” They say it makes no sense, yet they quickly realize how it can affect their businesses once you have explained it in a way they can understand. For several of the businesses Link Worx Seo provides SEO services for, “not provided” is the #1 keyword in the list. As an original theory, it seemed Google may be trying to measure certain areas of user behavior.
I am sure there are more reasons, but this seems to make sense. There is no problem with wanting to provide higher security for Internet users; I totally agree with these new measures of greater security. Along with Google and Firefox announcing the new secure search agreement, encrypted data has become significantly important with regard to the privacy issues surrounding Google and other companies. If you are logged in to an account, this can be tracked by Google for obvious purposes. Of course, if you are logged in to your Google account, you always have the option to toggle the social and non-social button provided by Google; your results are then filtered based on your selection. I still believe that if you are logged in, Google records the keyword data no matter which toggle choice you select.
At this point, I have found no information about whether the keyword data is recorded in Google Analytics as “not provided” when a user is logged in with the toggle set to the non-social selection. If you have information on this, please notify us so we can make this post more accurate. Thank you in advance.
Google SSL Encrypted Searches Will Go Beyond Just Google in the Future
Google has implemented these changes to improve privacy, but so far they have only been applied to Google’s English version. Eventually they will be rolled out to all Google search versions, which will produce a larger percentage of “not provided” data. Is it possible that Google is working toward securing all search data? What would that mean in detail? What about all those custom search boxes on websites that allow people to perform searches directly on a site? It may be happening already, considering you must have a Google account to obtain those custom search boxes for your website. However, there is no sure-fire answer to this one. At this point, let’s say Google has already been collecting this keyword data by means of tracking code generated when you fetch the form code for the custom search box. It would make perfect sense to secure data integrity for these custom search boxes as well. What is your take on custom search boxes?
“Make use of all the data you can from analytics.”