Monday, May 20, 2013

URL Optimization: 5 Best Practices for SEO


The following screenshot is a sample URL with ideal anatomy for site SEO.
A Sample of an SEO-Friendly URL
When people talk about SEO for on-page content, the discussion most often concerns keywords. The URL of a page is an integral part of SEO as well and should also contain keywords that are consistent with the other content on the website.
The following are some things that you need to consider when structuring your URLs for SEO:

1. Words Used in URLs:

As shown in the diagram above, your URL consists of some important elements that require the presence of keywords to gain optimum SEO benefits for your site. Within the different elements of a URL, the domain, sub-domain, folder and page elements can contain keywords. It is not mandatory to use keywords but if you can name folders and pages with keywords that appear in the content of that particular page, search engine crawlers will easily index and return the pages for the appropriate keywords.
Along with keywords, there are other factors that need to be considered for the words in the URL structure:
  • Descriptive URLs: If you do not use keywords, use words that efficiently describe the contents of the page. An obvious URL scores high in usability and often in SEO.
  • Shorter URL Length: The fewer words the better. A short URL is quicker to type and read. Avoid using words such as a, our, for, the, etc. Also, the fewer the words, the more value each word receives from a search engine spider.
  • Important Keywords at the Beginning: Put the most important words at the beginning of the URL, as search engine spiders do not give much significance to words toward the end of a longer URL.
  • No Repetition: Do not repeat words, for example a section and sub-section name, in the URL. Rather than www.domain.com/services/services.php, name the sub-section differently, as in www.domain.com/services/web-services.php.
  • Not Necessarily Identical to Page Title: In the case of a blog page, the URL is not required to be exactly the same as the page title or the title of the blog post.
  • Unnecessary URL Parameters: Characters such as ?, & and % should be avoided in URLs. Read our post on A Guide to Clean URLs for SEO and Usability to learn more.
  • Long Keywords: For pages with long keywords, avoid using category and sub-category names in the URLs.
  • Keyword Stuffing: Do not stuff your URL with keywords.
  • Capital Letters: Do not use CAPITALS in words in URLs.

2. Dynamic vs. Static URLs:

A dynamic URL is one that is created on the fly by a CMS or web server; the page does not exist as a static file until the browser requests the URL. Once the URL is requested, the CMS generates the page dynamically, and the resulting URL is often loaded with parameters and unwanted characters, making it non-SEO-friendly and causing it to look something like the example below:
http://www.domain.com/gp/detail.html/602-9912342-3046240?_encoding=UTF8&frombrowse=1&asin=B000FN0KWA
With an advanced CMS, such as WordPress, you can change the permalink structure and include the page name/title in the URL structure, as shown below:
Permalink Feature in WordPress
Using a static URL that is human-edited while keeping in mind all the factors discussed above will assist both people and search engine crawlers in deciphering your URLs easily.
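If your site does not run on a CMS with a permalink feature, you can often achieve a similar effect with URL rewriting on the web server. The snippet below is only a rough sketch for an Apache server with mod_rewrite enabled; the product.php script and its name parameter are hypothetical examples, not part of any particular CMS:
# .htaccess sketch: serve a clean, keyword-rich URL from a dynamic script
RewriteEngine On
# /products/blue-widget is handled internally by product.php?name=blue-widget
RewriteRule ^products/([a-z0-9-]+)/?$ product.php?name=$1 [L,QSA]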

3. Hyphens vs. Underscores:

As discussed in our earlier post, Underscores in URLs: Why are they Not Recommended?, Google considers hyphens to be word separators but has not yet programmed its search bots to treat underscores the same way. For search engines such as Bing it does not make a difference whether you use underscores or hyphens; even so, we recommend you use hyphens in your URL structure or no word separators at all. Underscores in URLs are neither SEO-friendly nor user-friendly. If you already have URLs that contain underscores, it is better to leave them untouched rather than change them, as these pages may already have been indexed by search engines and have an established link structure. If you use 301 redirects to point URLs with underscores to hyphenated versions of the same URLs, you will lose some link juice, which is not ideal. Watch for our upcoming blog on link juice for more information.

4. Use of Sub-domains:

Use sub-domains for completely different parts of your website, such as a blog that receives user-generated content on a regular basis. Remember that a sub-domain can be treated as a separate entity rather than as part of your website, so it is not advisable to use multiple sub-domains. In the case of a blog, you can build an extensive interlinking structure with the main site and not lose link juice. Be careful about using sub-domains for other purposes, such as category pages on an e-commerce site, for example woman.domain.com/blue-dresses.html. Although the URL is reader-friendly, search engines will not consider it part of the main domain, so your website's link juice is segregated.

5. Duplicate URLs:

Make sure to avoid duplicate URLs. When URLs are dynamically generated, duplicate URLs are sometimes created for the same content. Your website may also have www and non-www versions of your URLs pointing to the same content, creating a duplicate content problem. Oftentimes duplicate content is created unintentionally by session IDs, affiliate codes and sorting options (for example, sort by price and sort by color on e-commerce sites) in URLs. There are two ways to cope with duplicate URLs. One is to choose the best URL and add a rel="canonical" tag on the duplicates pointing to it. The other is to add 301 redirects, most often to redirect multiple home page URLs to one preferred version. This causes less confusion and also protects your site from duplicate content penalties.
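To illustrate both approaches, here is a minimal sketch; the domain and file names are placeholders. The canonical tag goes in the head of each duplicate page and points to the preferred URL:
<!-- On each duplicate page, point search engines to the preferred URL -->
<link rel="canonical" href="http://www.domain.com/blue-dresses.html">
The redirect approach, assuming an Apache server with mod_rewrite enabled and www as your preferred version, could look like this in the .htaccess file:
# 301 redirect non-www requests to the www version of the same URL
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]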
URL cleaning and optimization for easy indexing and navigation by search engines is an important part of your on-site SEO. It is worth spending time on your URLs for both SEO and usability purposes.
Source: woorank.com

Saturday, May 18, 2013

Underscores in URLs: Why are they Not Recommended?


Search engines treat dashes and underscores differently from one another. Google has clearly stated that when it comes to URL structure, using hyphens rather than underscores makes it much easier for them to identify what the page is about. Take a look at an excerpt below from the Google support blog on URL structure.
Google Claiming that Hyphens are Preferable to Underscores in URLs
Senior Google Engineer Matt Cutts clearly explains in this Google Webmaster Help Video about underscores in URLs that hyphens are used as word separators while underscores do not specify any function. Search engine bots have a different way of interpreting your punctuation when crawling and indexing sites.  Search engines have not been programmed to interpret underscores the way that we do. This difference in interpretation is not only limited to URLs but also applies to image alt tags.
For example, if your URL includes tips_for_instant_weight_loss (with underscores), search engines read it as tipsforinstantweightloss. Obviously someone typing in these words would include spaces. Conversely, when you use hyphens to separate the keywords in your URL, as in tips-for-instant-weight-loss, search engines can return the words in various combinations, as follows:
  • Tips for instant weight loss
  • Tips for weight loss
  • Instant weight loss
  • Weight loss
  • Tips
  • Weight
  • Loss
  • Instant
  • Tips-for-instant-weight-loss
  • Tipsforinstantweightloss
So, the probability of your website being shown in the SERPs is lower when underscores are used than when hyphens are used. Even if you are not concerned with optimizing your website for search, here are some reasons why hyphens in URLs are also preferable for people.
If your URL contains underscores the link will look similar to this:
http://www.tips_for_instant_weight_loss.com
Whereas if your URL contains hyphens, the link will look similar to this:
http://www.tips-for-instant-weight-loss.com
A user may mistake the underscores for spaces, as the underlining in the link hides the underscores. On the other hand, hyphens are clearly visible, so users are more likely to remember to type them. So, the use of underscores in URLs impacts usability as well as SEO.
Google will still crawl and index URLs that already contain underscores and it is not necessarily advisable to change your URLs if they currently contain underscores. As long as you have other ranking factors working well, you should have no problem ranking high in the SERPs.  For instance, look at the screenshot below of a Wikipedia URL for the term cloud computing. It uses underscores, and yet Wikipedia takes the top spot in search results for almost all keywords.
Wikipedia URLs Use Underscores
As seen in the Matt Cutts video mentioned above, Google says that they will begin working out a way for search engines to interpret underscores in URLs as separators once they have finished modifying the other high-impact search ranking signals they are currently working on. The general advice remains, however, that if you have yet to choose a domain name, do not use one with underscores, and if you are building inner pages on your website, make sure your URL structures contain hyphens rather than underscores.
If you already have a website URL that uses underscores and its SERP rankings are not improving, you can use a 301 permanent redirect to a URL with hyphens. For example, if your old URL is http://www.yoursite.com/old_page.html, 301 redirect it to http://www.yoursite.com/new-page.html. You do not need to do this, however, if your website fares well in the SERPs, as 301 redirects can reduce a bit of the link juice that you obtain by building links to your site.
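On an Apache server, for example, that redirect could be added to the .htaccess file with a single mod_alias line; this is only a sketch using the example file names above:
# 301 (permanent) redirect from the old underscore URL to the hyphenated version
Redirect 301 /old_page.html http://www.yoursite.com/new-page.html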

Source: woorank.com

Friday, May 17, 2013

Robots.txt: A Beginner's Guide


Robots.txt is:

A simple file that contains components used to specify the pages on a website that must not be crawled (or in some cases must be crawled) by search engine bots. This file should be placed in the root directory of your site. The standard for this file was developed in 1994 and is known as the Robots Exclusion Standard or Robots Exclusion Protocol.
Some common misconceptions about robots.txt:
  • It stops content from being indexed and shown in search results.
Disallowing a certain page or file in robots.txt does not guarantee it will stay out of search results: if the URL to the page is found in external resources, search engine bots may still index that URL and show the page in search results. Also, not all robots follow the instructions given in robots.txt files, so some bots may crawl and index pages listed in a robots.txt file anyway. If you want an extra indexing block, add a robots Meta tag with a 'noindex' value in the content attribute to those specific web pages, as shown below:
<meta name="robots" content="noindex">
  • It protects private content.
If you have private or confidential content on a site that you would like to block from the bots, please do not only depend on robots.txt. It is advisable to use password protection for such files, or not to publish them online at all.
  • It guarantees no duplicate content indexing.
As robots.txt does not guarantee that a page will not be indexed, it is unsafe to use it to block duplicate content on your site. If you do use robots.txt to block duplicate content make sure you also adopt other foolproof methods, such as a rel=canonical tag.
  • It guarantees the blocking of all robots.
Unlike Googlebot, not all bots are legitimate, and illegitimate bots may simply ignore the robots.txt instructions meant to keep them away from a particular file. The only way to block these unwanted or malicious bots is to block their access to your web server through server configuration or with a network firewall, assuming the bot operates from a single IP address (a sketch of the server-configuration approach follows this list).
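As a rough illustration of the server-configuration approach, an Apache .htaccess file can deny a misbehaving bot by its User-Agent string or IP address. This sketch uses Apache 2.2-style directives and made-up values ("BadBot" and 192.0.2.10); adjust them to the bot you actually need to block:
# Flag requests whose User-Agent contains "BadBot" (example name)
SetEnvIfNoCase User-Agent "BadBot" bad_bot
# Allow everyone else, deny the flagged bot and one example IP address
Order Allow,Deny
Allow from all
Deny from env=bad_bot
Deny from 192.0.2.10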

Uses for Robots.txt:

In some cases the use of robots.txt may seem ineffective, as pointed out in the above section. This file is there for a reason, however, and that is its importance for on-page SEO.
The following are some of the practical ways to use robots.txt:
  • To discourage crawlers from visiting private folders.
  • To keep the robots from crawling less noteworthy content on a website. This gives them more time to crawl the important content that is intended to be shown in search results.
  • To allow only specific bots access to crawl your site. This saves bandwidth.
  • Search bots request robots.txt files by default. If they do not find one they will report a 404 error, which you will find in the log files. To avoid this you must at least use a default robots.txt, i.e. a blank robots.txt file.
  • To provide bots with the location of your Sitemap.  To do this, enter a directive in your robots.txt that includes the location of your Sitemap:
      Sitemap: http://yoursite.com/sitemap-location.xml 
You can add this anywhere in the robots.txt file because the directive is independent of the user-agent line.  All you have to do is specify the location of your Sitemap in the sitemap-location.xml part of the URL. If you have multiple Sitemaps you can also specify the location of your Sitemap index file.  Learn more about sitemaps in our blog on XML Sitemaps.

Examples of Robots.txt Files:

There are two major elements in a robots.txt file: User-agent and Disallow.
User-agent: The user-agent line specifies which bots the instructions apply to. It is most often set to a wildcard, the asterisk sign (*), which signifies that the instructions are for all bots. If you want certain bots to be blocked or allowed on certain pages, you can specify the bot name in the user-agent directive.
Disallow: When disallow has nothing specified, it means the bots can crawl all the pages on a site. To block a certain page you must use only one URL prefix per disallow line; you cannot include multiple folders or URL prefixes in a single disallow line.
The following are some common uses of robots.txt files.
To allow all bots to access the whole site (the default robots.txt) the following is used:
User-agent: *
Disallow:
To block the entire server from the bots, this robots.txt is used:
User-agent: *
Disallow: /
To allow a single robot and disallow other robots:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
To block the site from a single robot:
User-agent: XYZbot
Disallow: /
To block some parts of the site:
User-agent: *
Disallow: /tmp/
Disallow: /junk/
Use this robots.txt to block all content of a specific file type. In this example we are excluding all PowerPoint files. (NOTE: The dollar ($) sign matches the end of the URL):
User-agent: *
Disallow: /*.ppt$
To block bots from a specific file:
User-agent: *
Disallow: /directory/file.html
To crawl certain HTML documents in a directory that is blocked from bots you can use an Allow directive. Some major crawlers support the Allow directive in robots.txt. An example is shown below:
User-agent: *
Disallow: /folder1/
Allow: /folder1/myfile.html
To block URLs containing specific query strings that may result in duplicate content, the robots.txt below is used. In this case, any URL containing a question mark (?) is blocked:
User-agent: *
Disallow: /*?
Sometimes a page will get indexed even if you include it in the robots.txt file, for reasons such as being linked to externally. In order to completely block that page from being shown in search results, you can include a robots noindex Meta tag on each of those pages individually. You can also add a nofollow value to instruct the bots not to follow the outbound links, by inserting the following code:
For the page not to be indexed:
<meta name="robots" content="noindex">
For the page not to be indexed and links not to be followed:
<meta name="robots" content="noindex,nofollow">
NOTE: If you block these pages in robots.txt and also add the above Meta tags to them, the pages will not be crawled, but they may still appear in the URL-only listings of search results, because the bots were blocked from crawling the pages and therefore never read the Meta tags within them.
Another important thing to note is that you must not include any URL that is blocked in your robots.txt file in your XML sitemap. This can happen, especially when you use separate tools to generate the robots.txt file and XML sitemap. In such cases, you might have to manually check to see if these blocked URLs are included in the sitemap. You can test this in your Google Webmaster Tools account if you have your site submitted and verified on the tool and have submitted your sitemap.
Go to Webmaster Tools > Optimization > Sitemaps and if the tool shows any crawl error on the sitemap(s) submitted, you can double check to see whether it is a page included in robots.txt.
Google Webmaster Tools Showing Sitemaps with Crawl Errors
If a page is blocked by robots.txt, GWT will describe the error as "Sitemap contains URLs which are blocked by robots.txt."
Alternatively, there is a robots.txt testing tool within GWT. It is found under Webmaster Tools > Health > Blocked URLs as shown in the screenshot below:
Blocked URLs Testing Tool on Google Webmaster Tools
This tool is a great way to learn how to use your robots.txt file. You can see how Googlebots will treat URLs after you enter the URL you want to test.
Lastly there are some important points to remember when it comes to robots.txt:
  • When you use a forward slash after a directory or a folder, it means that robots.txt will block the directory or folder and everything in it, as shown below:
Disallow: /junk-directory/
  • Make sure CSS files and JavaScript codes that render rich content are not blocked in robots.txt, as this will hinder snippet previews.
  • Verify your syntax with the Google Webmaster Tool or get it done by someone who is well versed in robots.txt, otherwise you risk blocking important content on your site.
  • If you have two user-agent sections, one for all bots and one for a specific bot, let's say Googlebot, keep in mind that the Googlebot crawler will only follow the instructions in the user-agent section for Googlebot, not the general section with the wildcard (*). In this case, you may have to repeat the disallow statements from the general user-agent section in the Googlebot-specific section as well. Take a look at the text below:
User-agent: *
Disallow: /folder1/
Disallow: /folder2/
Disallow: /folder3/

User-agent: Googlebot
Crawl-delay: 2
Disallow: /folder1/
Disallow: /folder2/
Disallow: /folder3/
Disallow: /folder4/
Disallow: /folder5/

10 Free US Local Business Listing Sites


Local SEO is a must when you have a business with a physical location and expect customers to visit. Apart from the generic SEO essentials, you will need to pay special attention to location-specific SEO requirements for your site. In the WooRank Project tool you will find a whole range of actionable SEO tips to get your business the online exposure it deserves. More than 20 percent of Google searches are local, as Google and other search engines sort location-specific businesses into localized search results. These results have much less competition than generic search results, but it takes a few smart steps to rank at the top of them. One important step is to enter your business in local-business listing sites. In this blog we will consider the top ten directories that will help your local business acquire higher rankings in the localized SERPs (Search Engine Results Pages).
NOTE: It is important that you manually list your business in these directories by choosing the right business categories and entering consistent information throughout all the directories. Also, even though not all directories pass link juice to your site, search engines such as Google notice the citation of your business on these sites and this affects the local ranking algorithm.

1. Google Places:

To be accepted as a local business on Google, you have to make your presence known in three places.
  • Google Webmaster Tools
  • Google Places
  • Google+
Google Webmaster Tools: When your site has a generic top-level domain and you intend to target a certain geographic location, you need to submit your site to Google Webmaster Tools and set your geographic target there in order to appear in localized searches.
Google Places: Create an account with Google Places, fill in your local business information and submit it to Google. You will receive a PIN number that you must enter in your Google Places Dashboard for business verification.
Google+: Create a Local Business Brand Page on Google+ by clicking the Local Business or Place option, as shown below. Add your business information and you will be taken through a verification process to get started.
Local Business or Place Option on Google+

2. Yahoo Local:

Yahoo Local Listing is a great free service that displays your local business address, phone number and URL and gives you the option of choosing up to five business categories. The basic free listing can include additional information such as working hours, email address, payment options and years in business, and you also earn a link to your site. The paid listing option, which costs $9.95 per month, gives you added advantages; you can see the number of times your listing was viewed, you can upload one small photo onto the business detail page and ten large photos onto a separate page, and you can use up to two text links in search results. To sign up, you need to have or create a Yahoo ID.
Basic Listing Option on Yahoo Local

3. Yelp:

Yelp has a large user-community which is independent from popular search engines. Hence, it is necessary for your business to feature on Yelp in order to gain attention from this completely different and targeted audience. A Yelp business account directs textual and visual representation of your business to the categories of Yelp users that are most likely interested in your business. You can also add a link to your website on your Yelp profile and there is no limit to the number of business photos you can upload. Yelp allows for customer reviews and gives you the option to respond to them. Moreover, you also get personalized analytics to monitor the traffic. Before you sign up, check to see if your business is already listed on Yelp. Sign Up for Free to either claim your existing business listing or create a new one.
Free Listing on Yelp

4. Bing:

For local businesses, being listed on Bing is as important as being listed on Google+ Local. You can claim an existing business listing or create a new one by clicking the Get Started Now link shown in the screenshot below. To create a new listing you need to sign up for a Hotmail account. After you pass through a series of registration steps, you will be prompted to enter your contact information and communication preferences. Once you have accepted the terms and conditions (called the Bing Business Portal Offer Guidelines), you will be prompted to fill in information about your local business, such as name, address, city, state, zip code, business email, Facebook and Twitter addresses and a logo. In the subsequent steps there is an option to get a free mobile website and QR code. Your listing is finally verified by means of a postcard sent to the address you registered your business with.
Free Business Listing on Bing

5. MerchantCircle:

MerchantCircle is a popular free American local-business listing service. It retrieves information from yellow pages listings. Check to see if your business is already listed by clicking the business page and finding the ‘Is This Your Business Listing?’ option on the blue box at the left column of that page. After signing up you get to access a comprehensive dashboard on your account that has plenty of editing options to add and modify your business details. The dashboard consists of options such as a blog, products and services, sales, deals and coupons, newsletters, pictures, videos, widgets, answers and advertisements. With a MerchantCircle listing you can network with other businesses and engage with potential contacts in your niche. Most importantly, this site allows you to post a link to your website, thus providing a high quality inbound link to your site.
Free Listing on MerchantCircle

6. Local:

Local.com is a search engine that lists local US businesses and currently is home to 16 million business listings. You can claim your business listing if it already exists on the local.com database or create an entirely new listing for free.
Free Listing on Local.com

7. ExpressUpdateUSA (InfoUSA):

Infogroup has a massive database of businesses in the US and partners directly with top search engines and local business directories to provide information on local businesses. By claiming and optimizing your listing on ExpressUpdateUSA.com (previously known as InfoUSA), you can improve your position in localized SERPs. To get listed, first check whether your business already exists in the database and claim it if it does. Create a free account with ExpressUpdateUSA and add the same business details that appear on your website. Here is the categorized database of businesses according to the city and state they are located in.
Free Listing on ExpressUpdateUSA

8. MapQuest:

MapQuest is yet another free local business listing service that has an audience of its own and helps people find local businesses on the web and mobile devices. You can sign up for free and add information about your business. Once again, if your business listing already exists, claim it. Add photos, hours of operation, parking tips and even mark your business on the map. Here is a detailed blog post by MapQuest that describes the information you need in your MapQuest Local Business Center profile to acquire maximum exposure.
Free Listing on MapQuest

9. InsiderPages:

This is a fairly new business listing site, but it is popular among users for finding health and medical services, home and garden businesses, hair and beauty services, auto services and pet-related businesses in the US. Here is a step-by-step tutorial and video on how to add your business to the InsiderPages database. The live business page includes your local business contact information, physical address (the same as that on your website), hours of operation, business description, Google Map marker, plenty of reviews with star ratings and a Facebook Like button for users to 'like' your business listing. You can add up to 20 images. Most importantly, your business page on InsiderPages allows you to provide a link back to your website.
Free Listing on InsiderPages

10. FourSquare:

Although featured at the end of this list, FourSquare is no ordinary business listing service: it is a mobile application that has become the fastest growing small business mobile marketing platform. It provides businesses with free listings that connect them to a widespread range of smartphone users who check in at physical business locations. The application is not limited to US businesses alone and has an ever-growing community of 25 million users worldwide. It provides free tools that allow you to engage with your customers and fans. Claim your business on FourSquare and get started with a FourSquare business account for free. The Merchant Dashboard on FourSquare allows you to keep tabs on daily check-ins via the app, recent visitors and much more.
Free Listing on FourSquare
If you want a single place to check your local listings on major search engines try GetListed.org.

Wednesday, May 8, 2013

How to Use Heading Tags for SEO


Heading tags, as their name suggests, are used to differentiate the headings of a page from the rest of the content. These tags are also known to webmasters as HTML header tags, head tags, heading tags and SEO header tags. The most important heading tag is the h1 tag and the least important is the h6 tag. In HTML coding the header tags from h1 to h6 form a hierarchy. This means that if you skip any of the tag levels (i.e. jump from h1 to h3) the heading structure will be broken, which is not ideal for on-page SEO.

Diagrammatic Representation of Heading Tag Hierarchy
For example, if your site is introduced with a heading in h1, and a sub-heading in h3, the hierarchy will be broken, meaning the heading structure is not SEO-friendly. The coding should be something like what is shown below:
<h1>Main Heading</h1>
<h2>Secondary Heading 1</h2>
<h3>Sub-section of the secondary heading 1</h3>
<h2>Secondary Heading 2</h2>
<h3>Sub-section of the secondary heading 2</h3>
The h1 tag is the most important tag. Every page must have an h1 tag.

Advantages of Using Heading Tags:

The heading tag is used to represent different sections of web page content. It has an impact on both the SEO and usability of your site.
Header tags from an SEO point of view:
  • Relevancy: Search engine spiders check the relevancy of the header tag with the content associated with it.
  • Keyword Consistency: The search engine spiders check the keyword consistency between the header tags and other parts of the page.
  • The Importance of an h1 Tag: The h1 is the most important tag and it should never be skipped on a page. Search spiders pay attention to the words used in the h1 tag as it should contain a basic description of the page content, just as the page <title> does.
  • Enriched User Experience: Heading tags give the user a clear idea of what the page content is about. Search engines give much importance to user-experience on a site, meaning the presence of heading tags becomes an important component of SEO.
Header tags from a usability point of view:
  • For users of the web who must use a screen reader, it is easier to navigate sections of content by referring to properly structured headings on a page.
  • The h1 heading tag (main heading) of a page gives users a quick overview of the content that is to follow on the page.
  • By reading the different heading tags, users can scan a page and read only the section they are interested in.
  • The primary use of heading tags is for structure and SEO, not to gain larger, more prominent fonts, but the presentation of a web page does look cleaner with these tags in place.

Things you should not be doing with heading tags:

  • Do not stuff your heading tags with keywords.
  • Do not use more than one h1 tag on a page unless it is really necessary. A page usually has a single h1 heading, and including two might make search engines see it as an attempt to stuff more keywords into multiple h1 tags. It is better to divide the content into two separate topics on individual pages, each with its own h1 tag. This makes more sense to both readers and search engine spiders; note, however, that using multiple h1 tags is technically allowed.
  • Do not use heading tags as hidden text. Any hidden text can result in penalties for your site, especially if the hidden part is a component that affects SEO.
  • Do not repeat heading tags on different pages of your site. It is a good practice to have unique heading tags throughout your site.
  • Do not use the same content in your page’s h1 tag as in your meta title tag.
  • Do not use heading tags to style text; use them to present organized and structured content on pages, and use CSS stylesheets for styling, as in the sketch below.
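For instance, rather than wrapping ordinary text in an h3 tag just to make it larger, mark it up normally and style it with CSS. The sketch below uses a made-up .standout class purely as an example:
<!-- In the HTML: a normal paragraph with a presentational class, not a heading -->
<p class="standout">This text is emphasized visually, not structurally.</p>
And in the stylesheet:
/* The .standout class name is hypothetical */
.standout {
  font-size: 1.4em;
  font-weight: bold;
}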

Wednesday, May 1, 2013

20 Quick Tips to Optimize Page Load Time


Web page loading speed is one of the most crucial aspects of a site's usability and SEO. Google considers page speed to be one of the 200 ranking factors that influence a website's position in organic search results, and fast-loading pages are known to enrich the user experience. With numerous other websites in your niche, the competition to earn site traffic and keep people impressed with rich usability is becoming tougher every day. If your website does not load quickly, chances are you will lose site visitors to your competition in a matter of seconds.
Here are 20 quick tips aimed at optimizing your website’s loading time:

1. Optimize Image Size:

The images on your site account for a lot of your page size and can eventually affect the loading time of your page. It is not enough to downsize your website's images in the HTML editor, because that only changes the appearance of the image on the front-end and not its actual file size. Use an external image editor, such as Photoshop, to resize the images.
There are also several free online tools available for optimizing your site's images.

2. Image File Format:

For optimized loading time of your page it is ideal to stick to standard image formats such as JPG, PNG and GIF.

3. Avoid Text Graphics:

Some sites may need stylized text to make the web page look attractive. However, you must remember that text rendered as an image can take up a lot of the web page size and is of no use for SEO. It is thus ideal to define text styles in CSS and keep everything in text format instead.

4. Avoid Unnecessary Plugins:

A site that relies on many plugins may load more slowly. Not all plugins are unnecessary; social share plugins, for example, are a must-have for every site these days. That said, always check to see if there is a better alternative to a plugin, for example a CMS with built-in social plugins.

5. Avoid Inline JS and CSS files:

It is good practice to place your website's JS and CSS in external files. When the page loads, the browser caches these external files, which reduces the page load time on subsequent visits. Moreover, keeping the JS and CSS files external allows for easier site maintenance.

6. Optimize Caching:

Every time a visitor loads a site, your web page's image files, CSS and JavaScript files load as well, taking up a lot of page load time. When you use HTTP caching on your website, these file resources can be cached, or saved, by the browser or a proxy. On repeated page loads the files can then be retrieved from the cache rather than downloaded all over again from the network. Optimizing your website's caching also tends to reduce your bandwidth and hosting costs.
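As a rough sketch, on an Apache server with mod_headers enabled you could give static resources a long browser-cache lifetime like this; the one-month max-age is only an example value:
# .htaccess sketch: let browsers cache static files for 30 days
<FilesMatch "\.(jpg|jpeg|png|gif|css|js)$">
  Header set Cache-Control "max-age=2592000, public"
</FilesMatch>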

7. Place JavaScript at the end of the Document:

Ensure that your JS files are placed at the end of the document, as loading JS files can block the loading of the subsequent files.
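A minimal HTML skeleton illustrating this (together with tips 5 and 15) might look like the following; styles.css and scripts.js are placeholder file names:
<!DOCTYPE html>
<html>
<head>
  <!-- External stylesheet in the head so the page can render progressively -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <p>Page content loads and displays first.</p>
  <!-- External script just before </body> so it does not block rendering -->
  <script type="text/javascript" src="scripts.js"></script>
</body>
</html>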

8. Avoid Redirects:

Avoiding redirects increases serving speed. Some redirects are unavoidable and need to be in place, but remember that each one requires an additional HTTP request, which increases the page load time. Check for broken links and fix them immediately.

9. Reduce DNS Lookups:

A DNS (Domain Name System) lookup occurs when a URL (hostname) is typed into a browser and a DNS resolver returns that server's IP address. The time needed for this process is around 20 to 120 milliseconds. Multiple hostnames are often used for the various elements of a website, including the URL, images, script files, style sheets and flash elements, and every unique hostname adds a DNS lookup, increasing the page load time. Keep in mind that reducing the number of unique hostnames also reduces the amount of parallel downloading, which can itself slow a page down, so it is a trade-off; as a rule of thumb, use a single host when you have fewer than six resources. You can also use URL paths instead of hostnames. This means that if you have a blog hosted on blog.yoursite.com, you can instead host it on www.yoursite.com/blog.

10. Remove Unnecessary CSS and HTML:

Lighten the code of your website by removing any HTML or CSS that is not required. If your site is built on a CMS, chances are you have pre-installed CSS class and id ‘stubs’ that help design the theme. Remove unused class and ID declarations or combine multiple declarations into one.

11. Avoid Multiple Tracking Scripts:

While it is wise to keep tabs on your website's traffic stats, it is not advisable to use multiple tracking scripts, as this may hinder the page load time. If you are using a CMS such as WordPress, you could let WP stats run its script on your pages or use Google Analytics, but never both. E-commerce shopping cart CMSs tend to have their own default tracking script that cannot be removed even if you choose to use Google Analytics instead.

12. Set up G-Zip Encoding:

Similar to files on your PC that are zipped and compressed to reduce their total size for online file transfers, heavy files on your website can be compressed with Gzip compression. This saves bandwidth and download time and, most of all, reduces your page loading time. You should configure the server so that it returns zipped content.
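On an Apache server this is typically handled by mod_deflate. A minimal sketch, assuming the module is enabled, compresses the common text-based content types:
# .htaccess sketch: return gzip-compressed responses for text-based files
AddOutputFilterByType DEFLATE text/html text/css text/plain application/javascript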

13. Reduce HTTP Requests:

Use CSS Sprites to reduce the number of image requests. Combine background images into a single image by using the CSS background-image and background-position properties. Combine inline images into your cached stylesheets. HTTP requests are multiplied when there are duplicate scripts in the code, so make sure you identify and remove any duplicate scripts. To keep duplication under control, include each script once with a script tag in your HTML page, as shown in the example below:
<script type="text/javascript" src="menu_1.0.17.js"></script>
You may use an insertScript function in your PHP page, as shown in this example:
<?php insertScript("menu.js") ?>

14. Use Expires/Cache-Control Header:

You can use Expires headers for static components of the site and Cache-Control headers for dynamic ones. Using these headers makes the various components of a site, including images, stylesheets, scripts and flash, cacheable. This in turn minimizes HTTP requests and thus improves the page load time. With the use of Expires headers you can actually control the length of time that components of a web page can be cached, as shown in the example below:
Expires: Wed, 20 Apr 2015 20:00:00 GMT
If your server is Apache you can set the time for cached content by using the ExpiresDefault directive. This sets the expiration date as a certain number of years from the current date:
ExpiresDefault "access plus 15 years"

15. Place Style Sheets at the top of Documents:

It is standard practice to place style sheets at the top of a document, in the head. The page elements rendered from the server then appear progressively in the browser, guided by those style rules. From the navigation bar and logo to the page content, the visible progression of a loading website gives a rich user experience, even for users on a slow internet connection, and improves the perceived page load time.

16. Minification of JavaScript and CSS:

Minification is the process of removing unused characters from the code which helps to reduce its size and the subsequent loading time.
There are several free minification tools available online to do this for you.

17. Use GET Requests instead of POST:

When requesting data in the browser, the HTML method GET generally processes data faster than POST. Although both HTTP methods can achieve the same result, POST sends the headers first and then sends the data, while GET needs only one TCP packet to send a request. GET is also recommended for AJAX requests because responses can be cached and the request remains in the browser history.
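As a simple illustration, a search form that only reads data can safely use GET; the action URL and field name below are placeholders:
<!-- GET: parameters go in the URL, so the response can be cached and bookmarked -->
<form method="get" action="/search">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>
Reserve POST for requests that change data on the server, such as submitting an order or updating a record.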

18. Avoid unnecessary DOM elements:

Crowded markup, for example a page with a lot of <div> elements, can significantly slow Document Object Model (DOM) access in JavaScript. Instead of using nested tables for layout, you can use stylesheet-based layouts such as grids.css, fonts.css and reset.css. You can check the number of DOM elements by typing the following in Firebug's console:
document.getElementsByTagName('*').length
You might also want to minimize DOM access by caching references to accessed elements, updating nodes offline and avoiding the use of JavaScript to fix layouts.
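For example, caching a reference and building the markup in a string before touching the DOM avoids repeated lookups; the list element id below is hypothetical:
// Slow: looks up the element and rewrites the DOM on every iteration
for (var i = 0; i < 100; i++) {
  document.getElementById('list').innerHTML += '<li>Item ' + i + '</li>';
}

// Faster: cache the reference, build the string, then update the DOM once
var list = document.getElementById('list');
var items = '';
for (var j = 0; j < 100; j++) {
  items += '<li>Item ' + j + '</li>';
}
list.innerHTML = items;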

19. Reduce Cookie Size:

The data stored in cookies is exchanged between servers and browsers. Hence, by reducing the size of your cookies you reduce the size of the data that is transferred and improve the page load time. Eliminate unnecessary cookies, and set an earlier Expires date, or no Expires date at all, to reduce cookie size.

20. Update CMS Software:

If you are using a CMS such as WordPress, it is recommended to check frequently for updates to the software, but do not apply them directly on a live website. First carry out upgrades on a separate server to test them. Keeping abreast of software updates also improves a site's speed.
You can test your website’s page load time and page size with the WooRank Website Review tool. A sample test is shown in the screenshot below:
Page Load Time Tested on the WooRank Website Review
You can check for speed factors that are present or absent in your site with the same test:
Check Speed Factors Affecting Your Website on the WooRank Website Review
For more tutorials on page speed optimization check out a list of related Google Articles.

Article Source: woorank.com