Why audit your site, and how often should you do it?

An SEO audit is a comprehensive investigation into the current state of a site. From a technical perspective, you can use it to look for any issues that could impact crawling and indexing, as well as to report on areas where on-page optimisation is being missed. From an analytical point of view, you can use it to gather information for content optimisation and competitor analysis.

An initial audit should focus on the issues that need the most urgent attention; smaller, lower-impact issues can be reserved for future instalments. An audit should look at technical issues, as well as current traffic, backlinks and keyword visibility.

It’s important to complete an SEO audit as soon as possible, so you can establish a baseline against which to measure progress towards your goals. It’s also crucial to find any critical errors quickly, as these could prevent indexation. The outcome of an audit is usually a backlog of issues to fix, which will help you establish a roadmap for meeting your goals.

What you will need access to

Webmaster Tools

To see what real data search engines have collected on a site, you need access to webmaster tools like Google Search Console. 

They give an insight into how a site is ranking for various queries, as well as any errors encountered by bots. Manual penalty messages, security warnings and smartphone data are also available in Search Console. It’s an invaluable tool for technical checks because it allows you to compare crawl data against real Google reports.

Bing Webmaster Tools is also recommended, as a point of comparison against Google Search Console.

Traffic Analysis

Services that track and analyse traffic (like Google Analytics) are invaluable, because they allow you to see how users act on a site. You can use them to identify trends such as peak times and popular areas, as well as the devices and countries users are accessing a site from.

Whitelisting

If possible (and not already implemented), you should ask for your IP address(es) to be whitelisted. This can help you avoid restrictions, such as throttling or blocking, being applied to your crawling software.

What do you need to know?

Associated domains

To audit duplicate content thoroughly, it’s important to be aware of any domains owned by, or associated with, the site you’re auditing. Whether they’re international versions or a sister company, the more information you have, the more comprehensive your analysis can be.

Domain history

Knowledge of domain history can help you identify any problems which may have been caused by a previous migration, or investigate how your current domain handles users attempting to reach legacy URLs.

What tools to use for an SEO audit

Crawling software

To get an insight into how a site treats bots, you need to crawl the site. Ideally, an audit crawl should be set up to emulate Googlebot as closely as possible.

Crawling software such as Screaming Frog’s SEO Spider or Sitebulb’s Website Crawler has configuration settings that can be tailored to act like Googlebot. Both tools report on a number of site aspects important to SEO, including response codes and metadata.

The crawl gives a high-level view of the site, which can help guide your focus towards the areas that need deeper analysis.
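
As an illustration, here’s a minimal Python sketch (the URL is a placeholder) that fetches the same page with and without a Googlebot-style user agent, so you can spot servers that respond differently to bots:

```python
import requests

# Placeholder URL; substitute a page from the site you're auditing.
URL = "https://www.example.com/"

# Googlebot's published desktop user-agent string.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

# Fetch the same URL twice: once with default headers, once as Googlebot.
for label, headers in [("default", {}), ("googlebot", {"User-Agent": GOOGLEBOT_UA})]:
    response = requests.get(URL, headers=headers, timeout=10)
    print(label, response.status_code, len(response.text))
```

A large difference in status code or response size between the two fetches is worth investigating before you configure your full crawl.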

Browser

While it may seem obvious that you need a browser to audit a website, the browser you choose is actually very important.

Google’s PageSpeed metrics (and associated ranking signal) are based on the Chrome User Experience Report. It’s important to audit a site using Google Chrome so that browser-specific issues can be seen.

Google Chrome has a set of built-in software tools known as Chrome DevTools. These allow you to view and edit the DOM and CSS, as well as debug JavaScript. Chrome DevTools is useful when diving deeper into issues flagged by crawlers or Google Search Console.

Other analysis tools

For further analysis, such as keyword research and gap analysis, you can use tools like SEMRush.

If you want to examine backlinks, you can use Majestic or Ahrefs. (These will be covered in more detail later in the course.)

Auditing technical issues

A technical audit typically involves a crawl of the site. Any issues found will then be put into a technical backlog which can be passed on to a developer. (Don’t worry, we’ll explain how to create a technical backlog in another module).

Response codes and error handling

You can use your crawl data to check the HTTP status codes of internally linked URLs, including 3xx, 4xx and 5xx responses, as well as URLs which don’t respond at all. It’s important to identify these and add them to the backlog, as they can harm both user and bot experience, and potentially waste a site’s crawl budget. (More on understanding and auditing HTTP status codes later.)

Manual checks can be carried out to ensure errors are handled with the correct HTTP status codes (for example, that a missing page returns a 404 rather than a 200). This is important, because failure to do so can result in a confusing experience for both users and bots, and can cause duplicate content issues.
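
If you want to spot-check this outside a crawler, a short Python sketch like the one below (all URLs are placeholders) can report status codes and probe for soft 404s:

```python
import requests

# Placeholder URLs; in practice, export internally linked URLs from your crawl.
urls = [
    "https://www.example.com/",
    "https://www.example.com/old-page",
]

for url in urls:
    try:
        # allow_redirects=False so 3xx responses are reported, not followed.
        response = requests.head(url, allow_redirects=False, timeout=10)
        print(url, response.status_code)
    except requests.RequestException as exc:
        # URLs that don't respond at all also belong in the backlog.
        print(url, "no response:", exc)

# Soft-404 probe: a clearly non-existent URL should return a 404,
# not a 200 with an error-page body.
probe = requests.get("https://www.example.com/does-not-exist-xyz", timeout=10)
print("soft-404 probe:", probe.status_code)
```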

Crawl issues

Your crawl will help you audit issues that can only be seen by bots or could be hard to identify with manual checks. Issues such as spider traps, redirect chains and broken hreflang links can easily be highlighted by crawl data.
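
Redirect chains in particular are easy to confirm manually. A sketch like this (the URL is a placeholder) uses the requests library’s redirect history to print each hop:

```python
import requests

# Placeholder URL; run this over redirecting URLs from your crawl data.
url = "http://example.com/"

response = requests.get(url, timeout=10)

# response.history holds each intermediate redirect in order, so a long
# history indicates a chain worth flattening into a single redirect.
chain = [r.url for r in response.history] + [response.url]
print(" -> ".join(chain))
if len(response.history) > 1:
    print("redirect chain detected:", len(response.history), "hops")
```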

Internal linking

From your crawl data, you can get an overview of page crawl depth and highlight pages linked from a small percentage of the site. Aggregating this data will help you identify subfolders and sections of the site where internal linking is consistently poor.
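
One way to do this aggregation is with pandas, assuming your crawler can export a CSV with URL and crawl depth columns (the file name and column names below are assumptions; adjust them to your tool’s export):

```python
from urllib.parse import urlparse

import pandas as pd

# Assumed export file and column names; adjust to your crawler's CSV.
crawl = pd.read_csv("internal_html.csv")

# Group URLs by their first subfolder to spot sections of the site
# that sit consistently deep in the architecture.
crawl["section"] = crawl["Address"].map(
    lambda url: "/" + urlparse(url).path.strip("/").split("/")[0]
)
print(crawl.groupby("section")["Crawl Depth"].agg(["mean", "max", "count"]))
```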

Schema errors and optimisation opportunities

When auditing, you should investigate the schema in place on the site. You can use a variety of methods to check for errors, but we’ll touch more on this in the technical auditing module. It’s important to ensure schema is as optimised as possible to increase your chance of rich results, which can improve URL appearance in Search Engine Results Pages (SERPs).
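
One quick method, sketched below with a placeholder URL, is to pull each JSON-LD block out of a page and confirm it at least parses, before running it through a dedicated validator:

```python
import json

import requests
from bs4 import BeautifulSoup

# Placeholder URL; point this at a template you want to check.
url = "https://www.example.com/product"

soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Extract each JSON-LD block; a JSONDecodeError here means the markup
# is broken before you even get to schema validation.
for script in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(script.string or "")
        schema_type = data.get("@type") if isinstance(data, dict) else "(list)"
        print("valid JSON-LD, @type:", schema_type)
    except json.JSONDecodeError as exc:
        print("invalid JSON-LD:", exc)
```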

Unoptimised metadata

Your crawl data can also help you identify trends in meta titles and meta descriptions. This includes sections of the site where large numbers of pages are missing them, or where the metadata is not of optimal length. It’s also where duplication issues can be found.
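
If you want to script a spot check, a sketch like the one below flags missing or overlong metadata (the length limits are common heuristics rather than fixed rules, and the URL is a placeholder):

```python
import requests
from bs4 import BeautifulSoup

# Rough character limits; treat these as heuristics, not hard rules.
TITLE_MAX = 60
DESCRIPTION_MAX = 155

# Placeholder list; in practice, iterate over URLs from your crawl export.
for url in ["https://www.example.com/"]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = (meta.get("content") or "").strip() if meta else ""

    if not title or len(title) > TITLE_MAX:
        print(url, "title issue:", repr(title))
    if not description or len(description) > DESCRIPTION_MAX:
        print(url, "description issue:", repr(description))
```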

Proper legacy URL handling

You’ll typically check legacy URLs when auditing to make sure that any deprecated areas of the site redirect to useful pages. Correct implementation can help retain users when they attempt to access content which no longer exists, as well as ensuring that the value of deprecated sections is passed on to an appropriate location.
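
A simple way to verify this at scale is to keep a mapping of legacy URLs to their expected destinations (the entries below are hypothetical) and confirm each one redirects to the right place:

```python
import requests

# Hypothetical legacy-to-current mapping; source legacy URLs from old
# sitemaps, analytics or a migration document.
legacy_urls = {
    "https://www.example.com/old-category/": "https://www.example.com/new-category/",
}

for old, expected in legacy_urls.items():
    response = requests.get(old, timeout=10)
    first_hop = response.history[0].status_code if response.history else None
    # Ideally the first hop is a 301 and the final URL matches the mapping.
    print(old, "->", response.url,
          "| first hop:", first_hop,
          "| as expected:", response.url == expected)
```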

Page speed

Page speed is a minor ranking signal for mobile content and a strong factor in user engagement, so it’s important to audit it properly. You can investigate page speed using a variety of methods (we’ll cover this more in other modules), looking at the speed of both individual pages and the site as a whole.
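
One programmatic option is Google’s PageSpeed Insights API, which returns Lighthouse lab data alongside CrUX field data where available. A minimal sketch (the page URL is a placeholder; an API key is recommended for regular use):

```python
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Placeholder page to test; add a "key" parameter for authenticated use.
params = {"url": "https://www.example.com/", "strategy": "mobile"}

result = requests.get(API, params=params, timeout=60).json()

# The Lighthouse performance score is a float between 0 and 1; field
# data (when available) sits under "loadingExperience".
score = result["lighthouseResult"]["categories"]["performance"]["score"]
print("mobile performance score:", score)
```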

Thin and duplicate content

By crawling the site, you’ll be able to look at content data en masse. This includes the word count of pages, which, if low, can be a strong indicator of thin content.

You can also use metadata and URL reports to help identify duplicate or overly similar content, as well as checking Google Search Console to make sure no duplicate content issues are flagged.
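
For a quick pairwise check on two pages your reports flag as similar, a sketch like this (the URLs are placeholders) compares word counts and a simple similarity ratio:

```python
import difflib

import requests
from bs4 import BeautifulSoup

def page_text(url):
    """Fetch a URL and return its text content, stripped of scripts and styles."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

# Placeholder URLs; compare pages your metadata or URL reports flag.
a = page_text("https://www.example.com/page-a")
b = page_text("https://www.example.com/page-b")

print("word counts:", len(a.split()), len(b.split()))
# A ratio near 1.0 suggests near-duplicate content.
print("similarity:", difflib.SequenceMatcher(None, a, b).ratio())
```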

Mobile

Mobile usability is even more important following Google’s move to mobile-first indexing and the ranking implications that came with it. You can check a site’s mobile usability using a variety of manual checks, as well as the Google Search Console Mobile Usability Report (we’ll tell you more in another module).

You should also check for mobile/desktop parity, to make sure the links available to your desktop users are also available to mobile ones. 
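
On sites that serve different HTML by user agent, you can sketch a parity check by fetching the same URL with a desktop and a mobile user agent and diffing the links (the user-agent strings and URL below are simplified placeholders):

```python
import requests
from bs4 import BeautifulSoup

# Simplified user-agent strings; any current desktop and mobile UA will do.
DESKTOP_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
MOBILE_UA = "Mozilla/5.0 (Linux; Android 13) Mobile"

def links(url, user_agent):
    """Return the set of hrefs served to a given user agent."""
    html = requests.get(url, headers={"User-Agent": user_agent}, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {a.get("href") for a in soup.find_all("a", href=True)}

url = "https://www.example.com/"
desktop, mobile = links(url, DESKTOP_UA), links(url, MOBILE_UA)

# Links present on desktop but absent from the mobile response are
# parity issues worth adding to the backlog.
print("missing on mobile:", desktop - mobile)
```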

Analytics and Search Console performance report

Use Google Search Console to find current clicks and impressions, and look to Google Analytics to identify current traffic distribution by subfolder, device and country. Both will help you establish your baseline and confirm if the majority of traffic is from your target audience.
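
If you prefer to pull this data programmatically, the Search Console API exposes the same clicks and impressions data. A minimal sketch, assuming a service account with access to the property (the key file, dates and property URL are placeholders):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder key file; the service account needs access to the property.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=credentials)

# Clicks and impressions by query for the period you're baselining.
report = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-03-31",
        "dimensions": ["query"],
        "rowLimit": 10,
    },
).execute()

for row in report.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"])
```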

You should also check your keyword visibility against your competitors’. This way you can identify overlaps and gaps, and work out where your content can be optimised.

For further analysis, investigate the types of content with the most clicks and impressions, as well as your most-linked content and the content you have for your top keywords.

You may also need to check your backlink profile and compare it with your competitors to identify a benchmark. Further analysis might be necessary to identify spammy backlinks which could negatively impact your site.

Audit time requirements

The time it takes to complete an audit depends on the size of the site and the depth of the audit. A high-level look at a large site may involve a sample crawl, whereas a deep dive into its issues will likely require a full crawl (to ensure all areas can be seen). Typically, your level of analysis will determine the number of manual checks you’ll need to perform on a site.

However you approach it, the process of auditing and creating a backlog will likely take a minimum of 28 hours of work.

Re-auditing

An audit will likely prompt you to make a number of changes, but remember that the longevity of the audit data depends on how quickly the recommended changes are implemented.

Whenever you want to make updates, perform implementation checks both before and after you’ve made changes to the live site. This will help you identify any issues as quickly as possible. Just remember, these checks will not give an overall view of the site.

We recommend repeating a technical audit every six months, and completing a full audit annually, to check for any new errors, push forward unfinished changes and redefine your goals. There’s always room for improvement!
