Technical SEO · 7 min read

Noindex Tag: When and How to Use It for SEO

The noindex tag prevents pages from appearing in Google search results. Learn when to use it, how to implement it correctly, and how it differs from robots.txt disallow rules.

By SearchRankTool · 09 April 2026

The noindex tag is a powerful SEO tool that gives you direct control over which pages appear in Google's search results. Used correctly, it prevents thin, duplicate or private content from cluttering your site's presence in search and helps concentrate your SEO authority on the pages that matter. Used incorrectly, it can accidentally remove important pages from Google's index. This guide explains exactly when and how to use the noindex tag.

What Is a Noindex Tag?

The noindex tag is a meta robots tag placed in the <head> section of a web page that tells search engines not to include that page in their index. A page with a noindex tag can still be crawled by Googlebot, but it will not appear in search results.

The noindex tag looks like this:

<meta name="robots" content="noindex" />

You can also combine noindex with nofollow (which tells Googlebot not to follow links on the page):

<meta name="robots" content="noindex, nofollow" />

According to Google's indexing documentation, the noindex directive is respected by Googlebot as long as the page is accessible — meaning the page must NOT also be blocked by robots.txt (a common mistake that causes the noindex to be ignored, because Googlebot never crawls the page and so never sees the tag).

How to Implement Noindex

There are two main ways to implement noindex on a page:

1. Meta robots tag in HTML — place in the <head> section:

<meta name="robots" content="noindex" />

2. X-Robots-Tag HTTP response header — for non-HTML resources (PDFs, images, etc.) or when you cannot modify the HTML:

X-Robots-Tag: noindex

In WordPress, noindex is commonly set via SEO plugins (Yoast SEO, Rank Math). Yoast adds a "Search appearance" toggle per post/page and per content type (you can noindex all tag archive pages in bulk, for example).

If you are using Laravel (as this site does), you add the meta tag in your Blade template's <head> section, using a conditional passed from the controller, or use a dedicated package.
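As a minimal sketch of the Laravel approach described above — the `$noindex` flag and the layout path are illustrative, not a prescribed convention — the conditional in the Blade layout's <head> might look like this:

```blade
{{-- e.g. in resources/views/layouts/app.blade.php, inside <head> --}}
{{-- $noindex is a hypothetical boolean passed from the controller --}}
@if (!empty($noindex))
    <meta name="robots" content="noindex" />
@endif
```

A controller would then pass `'noindex' => true` to the view for pages such as internal search results or thank-you pages.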

When Should You Use Noindex?

Noindex is appropriate for pages that should remain accessible to users but not appear in search results. Common use cases include:

  • Thank you pages — post-conversion pages (after form submission, after purchase) that provide no value to search visitors
  • Login and registration pages — authentication pages that require a logged-in user; searchers cannot use these pages
  • Admin and dashboard pages — any page behind a login that is not meant for public access
  • Thin content pages — pages with very little unique content that add no value to your search presence (e.g., tag archives in WordPress that show only a list of post titles)
  • Duplicate content pages — parameter-based URL variations where you cannot use a canonical tag or 301 redirect
  • Pagination beyond page 2 — deep pagination pages (/category/page/10, /category/page/11) provide little unique content and thin search value
  • Staging and development environments — staging subdomains should be noindexed or blocked at the server level to prevent test content from appearing in search results
  • Internal search results pages — /search?q=keyword pages are usually thin, duplicate-risk content that serve users but not searchers

When Should You NOT Use Noindex?

Noindex should never be applied to pages you want to rank. This sounds obvious, but accidental noindex is one of the most common and damaging technical SEO mistakes. Common scenarios where noindex is applied accidentally:

  • Migrating from staging to production without removing the noindex directive added during staging
  • A CMS plugin set to noindex "all pages" during development that was never turned off
  • A developer adding noindex to the entire site in a global header template
  • Noindex added to a post type (e.g., all blog posts) by mistake in an SEO plugin

Always check new pages for noindex tags using the Google Search Console URL Inspection tool before publishing. If a page is not appearing in search results despite being well-optimised, a misplaced noindex tag is the first thing to check.

Noindex vs Robots.txt: Key Differences

Both noindex and robots.txt can prevent content from appearing in search results, but they work differently and have a critical difference in behaviour:

| Feature | Noindex meta tag | Robots.txt disallow |
| --- | --- | --- |
| Prevents indexing | Yes | Not directly (see note) |
| Prevents crawling | No (page is still crawled) | Yes (page is not crawled) |
| Googlebot can read it | Yes (Googlebot crawls the page and reads the tag) | Yes (Googlebot reads robots.txt first) |
| Can appear in results without content | No | Yes — a URL-only result can appear if the page is linked from other sites |

Critical rule: if you block a page in robots.txt AND add a noindex tag, Googlebot cannot crawl the page to read the noindex tag — and the noindex is therefore ignored. Pages you want noindexed must be crawlable but not indexed. Only use robots.txt to block pages you want to prevent from being crawled entirely (admin pages, API endpoints, private content).
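To illustrate the correct division of labour described above, a robots.txt might block crawling only for areas that should never be fetched at all (the paths are illustrative), while noindexed pages are deliberately left crawlable:

```
# robots.txt — block crawling entirely for private areas.
# Do NOT list noindexed pages here: Googlebot must be able to
# crawl them to read the noindex tag.
User-agent: *
Disallow: /admin/
Disallow: /api/
```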

X-Robots-Tag HTTP Header

The X-Robots-Tag is an HTTP response header that serves the same function as the meta robots tag, but is useful for resources where you cannot add HTML (PDFs, images, ZIP files, video files). It can also be used as a site-wide noindex via Apache or Nginx configuration.

Example in Apache .htaccess to noindex all PDFs:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
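For Nginx (mentioned above as an alternative to Apache), an equivalent sketch adds the header in a location block, assuming the PDFs are served directly by Nginx:

```nginx
# Send X-Robots-Tag: noindex with every PDF response
location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex" always;
}
```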

The X-Robots-Tag is particularly useful for e-commerce sites that want to prevent auto-generated product variant pages (different colour, size variations at unique URLs) from appearing in search results without modifying each individual page template.

How to Remove a Noindex Tag Safely

When you remove a noindex tag from a page (to allow it to be indexed), follow this process:

  1. Remove the noindex meta tag from the page HTML
  2. Confirm the page is accessible to Googlebot (not blocked in robots.txt)
  3. Use Google Search Console's URL Inspection tool → "Request Indexing" to prompt Google to recrawl the page quickly
  4. Check back in GSC after a few days to confirm the page's status in the Page indexing report has changed from "Excluded by 'noindex' tag" to "Indexed"

Do not rush the process — Google needs to recrawl the page to discover the noindex has been removed. Requesting indexing via URL Inspection is the fastest way to trigger this recrawl. For urgent cases, also ensure the page is included in your XML sitemap.

Frequently Asked Questions

Can a noindexed page still receive backlink equity?

Noindexed pages are crawled but not indexed. Google can follow links on a noindexed page and pass link equity through those links to other pages, but the noindexed page itself is excluded from search results. If you need to preserve link equity to a noindexed page, consider whether a 301 redirect to an indexable page would be more appropriate.

What is the difference between noindex and nofollow?

Noindex tells Google not to include the page in search results. Nofollow tells Google not to follow links on the page. They are independent directives you can use separately or together. A page with noindex but without nofollow will still have its links followed by Googlebot — useful when you want to pass link equity through a page but not have the page itself indexed.

Will my noindexed pages still be crawled and use up crawl budget?

Yes. Noindexed pages are still crawled — Googlebot needs to crawl them to read the noindex directive. For very large sites with many noindexed pages, this can represent meaningful crawl budget usage. If pages genuinely should not be crawled at all (admin areas, API endpoints), use robots.txt to prevent crawling rather than noindex to prevent indexing.

How do I check if my page has a noindex tag?

Use Google Search Console's URL Inspection tool and look for the "Indexing" section. Alternatively, view the page source and search for "noindex." You can also use browser extensions like SEO Meta in 1 Click or check the HTTP headers using browser developer tools to see if an X-Robots-Tag noindex header is being served.
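For bulk checks, the "view the page source" step above can be automated. A minimal sketch in Python, using only the standard library's HTML parser (the function name is illustrative; a production crawler would also inspect the X-Robots-Tag response header):

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Scans an HTML document for <meta name="robots"> tags
    whose content attribute includes the noindex directive."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        if attr_map.get("name", "").lower() == "robots":
            content = attr_map.get("content", "").lower()
            if "noindex" in content:
                self.noindex = True


def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a robots meta tag with noindex."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex
```

Feeding it the fetched HTML of each URL in a sitemap would quickly surface accidentally noindexed pages.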

Put This Into Practice

Use our free SEO tools to apply what you just read. No signup required.
