Archive

Archive for the ‘WEB CONTENT’ Category

How to Increase Traffic to Your Website With Images and Sitemaps

July 31, 2013 1 comment


Our Blogging Delay Update

First off, if any subscribers have been wondering why the newsletter has been the same for about two months now, it is because things have been extremely busy. An extensive amount of work has been completed elsewhere for another company over the last few months. Secondly, behind-the-scenes programming has slowed for a while due to its complexity and that other client work.

Currently, programming is under way to add tags and categories along with website search functionality. Most of the programming is complete, except for some minor details with the indexing of the pages and documents being searched. The project has been extended for several reasons and will be finished as time allows. It has definitely been more than I bargained for.

This new search function and tagging system ties back to a previous post about external versus internal blogging platforms and which type is better. Because of this, the transfer of the blog posts is still not complete, primarily due to the amount of web content involved and the ongoing work elsewhere. Sooner or later it will all get done, but I am not holding my breath for it any time soon.

“Now, to the main purpose of the article.”

XML and Image Sitemaps

After researching several areas of creating sitemaps for images and covering other areas of image optimization, it appears that including images in sitemaps is extremely important for getting them indexed. Over the years I have noticed that images on sites without a sitemap entry often got indexed anyway, but the results were random and depended on image file names, width and height attributes, alt and title text, and proper URL formatting.

So figuring out the significance of placing images in a sitemap took a little thought and research. The biggest benefit is that images listed in a sitemap are virtually certain to be indexed, rather than picked up only by chance.

I learned a great deal about inserting images into sitemaps and found some great resources while doing so. Most of the good sources I found required some kind of payment to get the images and pages properly formatted into a sitemap without any coding effort.

I ended up creating my own sitemap from scratch so the images for each page were correlated to the specific pages without any additional cost. Although this was much more time-consuming, the benefits are worth it since there were no out-of-pocket costs and the images are reliably indexed.

Between inserting the images into a sitemap and making sure the relevant image directories are not blocked in the robots.txt file, my images are climbing in the organic image results, which means more exposure overall. I also added a filter in Google Analytics to show the progress in the data. This is leading to better results and brings more value to the sites at hand.
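For reference, here is a minimal robots.txt sketch along those lines. The /images/ directory name is only a placeholder for wherever your images actually live, and the Sitemap line assumes the sitemap sits in the root of the domain, so adjust both to your own site:

User-agent: *
Allow: /images/

User-agent: Googlebot-Image
Allow: /

Sitemap: http://www.yourdomain.com/sitemap.xml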

Image Optimization Tips

- Check that every image has its size dimensions (width and height) defined in the HTML

- Give each image a file name relevant to your page or the image itself

- Do the same for the alt text and title text

- Check your robots.txt file for image directories blocked from being crawled

- Check for proper image paths (no broken links)

How to Check Images are Being Indexed

To easily check whether your images are being indexed by Google, try doing a site search with your domain name (site:yourdomain.com) at images.google.com. Any relevant images that are indexed for your site will appear in the image search results. Try it, it works.

Creating Image XML in Your Current Sitemap

Option one for letting the search engines know your images exist for indexing: add the image XML information to the sitemap you are already using.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.yourdomain.com</loc>
    <lastmod>2013-07-13</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>http://www.yourdomain.com/images/jpg/imagename.jpg</image:loc>
      <image:caption>My Image Caption</image:caption>
    </image:image>
  </url>
</urlset>

You are allowed up to 1,000 images per page for indexing purposes, so listing several images under a single URL entry is not a problem for the search engines.
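As a quick illustration, here is what multiple images under one page entry might look like; the page and file names are placeholders only:

  <url>
    <loc>http://www.yourdomain.com/gallery.html</loc>
    <image:image>
      <image:loc>http://www.yourdomain.com/images/jpg/photo-one.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>http://www.yourdomain.com/images/jpg/photo-two.jpg</image:loc>
    </image:image>
  </url>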

A Separate Image Sitemap

Option two for letting the search engines know your images exist for indexing: create a separate, stand-alone XML sitemap that lists images only.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.yourdomain.com</loc>
    <image:image>
      <image:loc>http://www.yourdomain.com/imagename.jpg</image:loc>
    </image:image>
  </url>
</urlset>

The two look very much alike, and both declare the image namespace (xmlns:image) on the <urlset> element so the search engines can read the image tags. The significant difference is that option one extends the page sitemap you already have, with its lastmod, changefreq and priority entries, while option two lists image information only. You can name the image sitemap something like sitemap_images.xml and place it in your root directory for indexing. These sitemaps allow for more accessible and faster crawling by the search engine bots. Overall, they improve the efficiency of the crawl process and help ensure all of the important pages and images are indexed.

Extending the Sitemap

The <image:image> and <image:loc> tags are both required for image indexing to work properly. The <image:caption> tag is not required, but it is an added feature the search engines could make more use of in the future, so it may be worth adding now instead of going back and adding it later. There are also the optional <image:geo_location>, <image:title> and <image:license> tags, which can be added for each and every image. For the full list, see Google's page on Image Sitemaps.
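Here is a rough sketch of a fully extended entry using those optional tags; the values shown are placeholders, so check Google's Image Sitemaps page for the exact formats it accepts:

    <image:image>
      <image:loc>http://www.yourdomain.com/images/jpg/imagename.jpg</image:loc>
      <image:caption>My Image Caption</image:caption>
      <image:title>My Image Title</image:title>
      <image:geo_location>City, Country</image:geo_location>
      <image:license>http://www.yourdomain.com/image-license.html</image:license>
    </image:image>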

Sitemap Tools and Webmaster Guidelines

Plus, here is a list of tools that will generate a sitemap for you. If you run a WordPress site, you can install a plugin (e.g. Google XML Sitemap for Images) that will create the specialized image sitemap for you.

Conclusion

My overall experience from experimenting has led me to prefer adding images to one combined sitemap rather than maintaining two separate sitemaps. That is my preferred method, but it could be different for you, so play around with what works best and then decide. Images have always been a great way to get more traffic to your website, especially since the search engines have placed more emphasis on image search in recent years.

“Image optimization adds value”.

About Ty Whalin

Full-time web designer, SEO manager, ColdFusion developer, business owner and Internet marketing entrepreneur. Educational background includes Central Florida Community College, where I majored in Computer Information Administration, and Rasmussen College, where I studied Computer Information Technology. I am the Founder and CEO of Link Worx Seo, have played drums for almost 30 years and am the proud father of one daughter.


Are you Claiming Your Web Content to Prevent Duplicated Articles?

If you are the originator of your content, be sure to claim it before someone else claims it.

Claim Web Content

The first thought that comes to mind: try making use of the rel=author attribute. For example, a WP blog allows for authorship, as do sites like BloggerSpot and ArticlesBase.com. From experience, when sites are joined together, or better put, used as assets to support one another, duplicate content does not seem to be a problem as long as you have the ability to claim the content.

Because of the push for rel=authorship, making sure you claim your content, no matter what site it is placed on, is very important. Because of this, there has been discussion about how helpful article directories really are nowadays.

Another thought revolves around the rel=me attribute. If you look at the example provided by Google, websites allow for separate author profile pages that link directly to the article, and the article links directly back to the author profile page. This cohesive linking allows Google to determine that you are the author of the content.
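As a minimal markup sketch of that cohesive linking (the author page URL and profile ID here are hypothetical placeholders, not anything from Google's example):

<!-- On the article page: a byline link to the on-site author profile -->
<a href="http://www.yourdomain.com/author/ty-whalin" rel="author">Ty Whalin</a>

<!-- On the author profile page: a link back out to the Google+ profile -->
<a href="https://plus.google.com/YOUR_PROFILE_ID" rel="me">My Google+ profile</a>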

So, How Important Are Article Directories?

Well, this is the tricky part. After some testing and further research, article directories appear to be more susceptible to duplicate web content issues than in earlier days. Although, from what I can gather from my research, I suspect Google has a cutoff point for how many times an article can be posted on other sites before it is considered duplicate web content.

I have placed the same content on the sources mentioned in the first paragraph above and have not received any notifications about duplicate web content to this point, nor have I seen a decrease in the stats. It leads me to believe this is fine because rel=authorship has been established and is firmly in place to and from these article sites.

On the other hand, the fact that posting articles in directories is still a service provided by tons of SEO companies raises the question of whether these directories are harmful or not. I have posted articles in some of these directories and have seen no loss in results from doing so, and no harmful outcomes. Nor have I ever seen any alerts from Google or found these articles flagged as duplicate web content through the CopyScape service.


But one thing to note: CopyScape only matches a percentage of the words associated with the links being scanned. For example, a previous article posted on Google+ shows as duplicated content on CopyScape when scanning the URL. This is odd to me, since the post is from a WP blog and only a link was posted on Google+, not the content. This is, of course, the basic free version of CopyScape, but at this point I am not placing much faith in it.

Other than doing some checking with CopyScape, setting up Google Alerts is relatively simple and only takes a few minutes. It is just another one of those little tools in your arsenal that provides some more protection from those dubious web content scrapers and plagiarizers. Although I cannot think of one right now, there are also places on the web where you can find snippets of code that attach a web address to content when someone copies it from your pages. There is also another script that shows a popup message when someone tries to copy your content; that little script prevents right-clicking on your page.
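As a rough sketch of the first idea, a small piece of JavaScript in newer browsers can append a source link to whatever text a visitor copies. This is only an illustration, not a substitute for whichever snippet or plugin you end up using:

document.addEventListener('copy', function (event) {
  // Grab the text the visitor selected
  var selection = window.getSelection().toString();
  // Append an attribution line pointing back to this page
  var attribution = '\n\nSource: ' + document.location.href;
  event.clipboardData.setData('text/plain', selection + attribution);
  // Stop the browser from writing the original, unmodified selection
  event.preventDefault();
});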

What Constitutes Duplicate Content or Spun Content?

I have played around with spun content: one unique article and four other articles generated from it, which were said to be unique. While the articles were spun from one original, they do appear to still be unique for the most part. But I had to improve the actual writing on each one created from the original, and I believe that extensive editing was one of the primary reasons they did not show as duplicated content. Here are some checkpoints to help you…

If you have any of the below, you were most likely hit by Panda…

- Keyword stuffing

- Poorly-written content

- Spun content

- Illegible content

- Duplicate content

- Thin content

- Links pointing to you from sites that contain poor content


You Should Properly Cite Articles or Content

Properly citing the article you are pulling content from is necessary to avoid being penalized. Make use of the canonical method, and place anything on your pages that is not yours inside block quotes. The block quotes, along with a link back to the original source page and canonical usage, will prevent the duplicate content issue. Just in case you are not familiar with it, the canonical link element is placed inside the <head> section of a web page. Learn more about canonical here.
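Here is a minimal markup sketch of both pieces; the URLs are placeholders for the original source you are citing:

<head>
  <link rel="canonical" href="http://www.originalsource.com/original-article.html" />
</head>

<blockquote cite="http://www.originalsource.com/original-article.html">
  The borrowed passage goes here, followed by a visible link back to the
  <a href="http://www.originalsource.com/original-article.html">original source</a>.
</blockquote>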

Although if you are searching for a way to do it quickly you could use an automated software option, you will regret it eventually. The truth is, there is no real way to do it fast, and it actually should not be done too fast, otherwise it will not look natural. That is another reason to stay away from the automated software bundles.
“Leave your thoughts here…”


Duplicated Web Content – Google Authorship and Article Directories

December 29, 2012 1 comment
Duplicate Web Content

The Role of Google Authorship

If you are the author of the article, can Google tell the difference when you use article submission services? The facts point to no at this point, but more and more sites are starting to follow the same authorship and business site verification standards. With authorship becoming more common, it is not as much of an issue for the larger sites if you place the same article on them without changing it. For example, ArticlesBase.com, which you may have already heard of, is a large website that allows article posting. Furthermore, this site allows for authorship credentials that prove you are the author of the content.

On the other hand, smaller sites, or sites that just do not provide quality, would be cause for concern. That is what I consider the most damaging when it comes to duplication. It also brings to light that plenty of SEO businesses are still offering article directory submissions for purposes of content syndication and link building. Would you consider this to be a form of duplication as well?

Granted, you are taking one unique article and spinning it into more than one article, which supposedly makes each spin-off unique. Is it really unique though? I say no; these articles are not unique, and at the same time they are being duplicated throughout hundreds of article directories in the process. The reason I bring up this topic is that it still surrounds the notion of duplication and authorship.

With that in mind, does Google draw a line in the algorithm for article directories, or should they be a thing of the past? I believe that if Google knows you are the author of the content, then it does not consider it to be duplicated, since Google knows it was produced by you – but that is only a theory.

Article Testing and Measurement – Experiment #1

A while back, around the inception of the Google Panda release in early 2011, I ran an experiment on five articles that were then submitted to approximately 500 different article directories. While each article was different, they were derived from the same article, more or less spin-offs. There were gains in the results during the submission process. This was also before Google authorship became a big player in the area of content duplication worries (a.k.a. Panda).

Since that experiment, consistent content has also been written with no article directory submissions for the normal web blog posts. These blog posts produced steadily increasing stats on a monthly basis the entire time and had absolutely no directory submission services whatsoever. The same blog posts were also posted on several high quality sites such as Tumblr, ArticlesBase, BloggerSpot and a few other high priority sites without worries about content duplication.

I also utilized CopyScape and set Google Alerts to notify me of any duplicate content issues afoot. Neither CopyScape nor Google Alerts flagged duplicate content issues during that time period. The content is now appearing in the results with the authorship profile image from the Google+ profile showing in conjunction with the post.

This truly backs up the idea that Google authorship plays a much larger role in duplicate content concerns. At this point it may appear that article directory submissions could be becoming a thing of the past, although whether that proves true remains to be seen. This is somewhat of a hunch.

Articles Base Profile Search Engine Result and Byline

Article Testing and Measurement – Experiment #2

My next experiment may be the same as the first, or I may decide to use one post from this blog and submit it to 500 directories. The goal will be to monitor the stats for gains and losses and find out whether it outperforms the articles from the previous experiment, as well as the posts already on this blog that have had no article submissions at all and rely only on Google+ authorship and quality article sites.

Secondly, another goal of this experiment is to find out whether Google will recognize this blog as the originator of the content, or consider it duplicated content because it finds the same full article within these article directories. By doing this, the aim is to figure out whether Google authorship prevents it from being treated as duplicate content, since authorship is now in place for this blog and shows me as the author.

What do you think the outcome of this experiment will be? Do you think Google will know the article is yours even though it is posted in article directories, or will Google consider it duplicate content?


Conclusion

When article submissions are done, do they include the entire article for each submission, or do they just create a link to an article on a blog? The point of this question is to define the difference between link building, which is simply anchor text links pointing to a particular article, and posting the full article on an article site. Building a link and posting a duplicate article have defining differences, and by now you should understand what is considered duplicate web content versus a simple link building tactic. The terms are sometimes used interchangeably, but they are different in practice. Article submission is still a form of link building, because you are essentially placing links within the article and building links to articles.

For the purposes of submitting links to articles for link building, this is not considered a duplicate content issue. The real concern is whether a full article being posted on more than one article site takes a negative hit from duplicated content. This once again leads us to the same point about Google authorship: how much weight and authority does Google place on authorship? Only time will tell.


Unplugging With SmallBox Web Design

December 2, 2012 Leave a comment


How did you feel?

Redbull GP Indianapolis

Well, considering I spent almost no time doing anything but working on the business, this is sort of a tough one to answer. Most of the time when I want to unplug, I may disappear for an entire weekend to get away from the daily work habits that flow over into the weekend. This has been a regular occurrence for me considering the time and effort being put into the business. With long hours of work and no play, it can be somewhat of a downer at times. One thing that manages to keep me afloat is finishing a project I have been working on, or some cool new design work. Knowing I just completed an almost painstaking task can bring great reward.


What did you do?

Monterrey, CA.

If I had to say when I unplug the most, it is on Sundays, at game time. I never miss the Colts games and usually try to watch other games throughout the day. This is one of the main reasons I appreciate football season. It breaks the norm of sitting around all day working on projects for clients and myself.

Of course, football season is not around all year long, so I try to find other ways to unplug. One of the things I like to do every year, and actually missed this year, is going to the Red Bull GP races. Going to the races is a great way to leave everything behind and just flat out have some fun for a day or two. Of course, this usually involves beer, food and taking pictures all day. If you have not seen my Facebook page or some of the other profiles listed around the web, there are tons of photos showing different race days.


The Suzuki Team

Reflect on Your Experience

I greatly miss riding and plan on getting back to it someday. The joy of riding started when I was in San Diego, California. We used to ride all of the time, every chance we could get. At one point we rode the coastline from San Diego to Monterrey, CA. to watch my first Super Sport bike race. It was great because there were no worries; just riding and having fun.


Unplugging in the Coming Year – 2013

Considering I did not unplug much this year, putting some effort into attending events such as concerts or the races again is probably a good idea. Those are just a few ideas for unplugging this coming year. It would probably be beneficial to make it to other events as well; one of those would be an SMX conference. I wanted to go this year but just was not able to make it, with too much going on to break away.


How to Win the War Against Black Hatters and Spam

November 30, 2012 2 comments

Spam Prevention

Free is Killing the Internet

This is because fake accounts on Facebook, Google, Pinterest, LinkedIn, Twitter and blogs are the main source of the problem. These companies should join Google and Bing in creating structured data. We will not even get into the scams and spammers coming from fake e-mail accounts on Yahoo!. There was a time when, every day, someone with a Yahoo! account would contact me about how their family was hurt or dead and they needed money to get to the States or to help save their business. I usually do not pick on people, but most of these fraudsters originate from non-U.S. countries.

Although spam cannot be blamed only on those people, it definitely is not helping the Internet. There are also issues at hand concerning link builders selling services cheaply. These cheap SEO services are causing problems as well; there are too many one-article blogs and one- or two-word comments out there. This just happens to be another area where spam is out of control. More preventive measures are definitely still needed.

Can it be Done With Authorship?

I am really happy Google has decided to implement this new rel="author" attribute. Finding ways to eliminate spam is gaining ground, but there are still too many would-be users trying to beat the system. I think it would actually amaze the ones trying to beat the system how well they might do by following the rules, avoiding black hat techniques and putting their resources into doing it the proper way.

Maybe it is the challenge black hatters find so cool about it. It is difficult to say why they cannot do it the same way a reputable SEO company does. They are missing the big picture here: if they would just follow the rules, everyone could benefit. Think about it. If the search engines no longer had to spend time preventing it, what a better system we could already have in place.


Fewer resources and less time would be needed, and the Internet could be a much more manageable and happy place to work and play. Black hatters are the plague of the Internet; they know it and we know it. Ever heard the saying "Can't we all just get along"? Well, this makes perfect sense to me.

Although the rel="author" attribute is still taking shape, it only requires a little coding work to implement. Google actually created this attribute years ago; when it was first thought of it had a different name, which I cannot recall at the moment. The rel="author" attribute is great because it not only improves the system in place, but also strengthens profiles by reassuring readers who find the information on the search engines.

Google Authorship

The rel="author" attribute also provides a combative presence against black hatters. Recently, more and more websites have started to implement the same types of systems; one of the most recent is Pinterest. Although providing a way for companies to verify their profiles and accounts is great, it still does not eliminate the problem of duplicate and fraudulent accounts.

Fraudulent accounts run rampant on the Internet, which is an evident sign that these companies must do something to prevent them. For example, anyone can create thousands of Google+ pages for the same company or different profiles. This is just a bunch of spam as far as I am concerned. What if people had to pay for a Facebook account, or Google+ started charging for account creation? At this point, that is one of the only ways I can think of to prevent spam.

Another Possible Solution for Prevention

This is probably not a workable solution, but it does come to mind and makes sense. What about assigning an I.P. address to each person, something like a mailing address? It would be completely traceable and would surely prevent fake accounts. You could move anywhere and it would not matter; it would never need to be transferred like a change of address and could be accessed from anywhere in the world. Another idea would be to allow only one or two e-mail addresses per person and force some form of verification for each e-mail address registered.

Do you have any suggestions on preventing fake accounts and profiles?
We would love to hear your ideas…

“Eliminate duplicate and fake accounts and we will win the war on spam.”

