Noindex Tag: When & How to Remove Your Website Pages from Search Engine Results

Noindex is a rule set with a <meta> tag or HTTP response header to prevent unwanted pages from appearing in search engine results.
Amplivize Staff
March 12, 2025

While most of our efforts in SEO and content marketing focus on getting pages indexed and ranking, another consideration is what to do when you don’t want a page to be indexed.

Depending on your CMS, you can simply use a plugin (in WordPress) and toggle a page not to be indexed, configure the page's settings in Webflow, or follow a similar route in most other platforms.

But what’s going on under the hood, and what type of page wouldn’t we want to index?

HTML Meta Tag to Get Your Page Out of the SERPs

Let’s start with sourcing Google directly: “noindex is a rule set with either a <meta> tag or HTTP response header and is used to prevent indexing content by search engines that support the noindex rule, such as Google. When Googlebot crawls that page and extracts the tag or header, Google will drop that page entirely from Google Search results, regardless of whether other sites link to it.”

Here’s what that looks like:

  • <meta name="robots" content="noindex, nofollow">

Essentially:

  • noindex → Tells search engines not to index the page (removes it from search results)
  • nofollow → Tells search engines not to follow any links on the page (best for when you don’t want to pass link equity)

The tag goes inside the <head> section of your page or HTML document:

<head>
<meta name="robots" content="noindex, nofollow">
</head>

Then, the next time Google recrawls the page, the noindex rule will take effect. To speed up removal, use Google Search Console → Removals and request a temporary removal.

Be precise here: you don’t want to apply this at the template level when you only mean to noindex a single page, since that could drop an entire section of your website from search and cost you traffic. The important thing is to be intentional about it.

Google also notes: “For the noindex rule to be effective, the page or resource must not be blocked by a robots.txt file, and it has to be otherwise accessible to the crawler. If the page is blocked by a robots.txt file or the crawler can't access the page, the crawler will never see the noindex rule, and the page can still appear in search results, for example if other pages link to it.”
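
To make that concrete, here's a hypothetical robots.txt rule that would undermine a noindex tag on a thank-you page (the path is just an example):

# What NOT to do if /thank-you/ carries a noindex meta tag:
# blocking the path means Googlebot never crawls it and never sees the tag.
User-agent: *
Disallow: /thank-you/

If you want Google to honor the noindex, leave the page crawlable and let the tag do the work.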

Keep that caveat in mind as you choose among noindex and the related options below:

  • X-Robots-Tag (HTTP header): Block PDFs, images, and other non-HTML files (see the sketch after this list)
  • Robots.txt: Block crawling, but not indexing
  • Canonical Tag: Avoid duplicate content indexing
  • JavaScript Noindex: For dynamically generated pages
  • HTTP 410: Permanently remove deleted pages
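
For the X-Robots-Tag option mentioned above, here's a rough sketch, assuming an Apache server with mod_headers enabled (adapt the file match to your setup):

# Apache config (e.g., .htaccess); requires mod_headers
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

Every PDF served then carries the X-Robots-Tag: noindex, nofollow response header, which Google treats the same way as the meta tag in <head>.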

Why Noindex Some Website Pages from Search Engines

Most of the pages on your website should be indexed, but below are the common types of pages where you'll want to include noindex.

Common Page Types for Noindex

Internal and Admin Pages

  • /login, /admin, /dashboard
  • Because they don’t really provide value in search results and could be a security risk

Search & Filtered Results Pages

  • /search?q=seo+tips, /products/?color=blue&size=large
  • Because Google may see these as thin or duplicate content

Thank You & Confirmation Pages

  • /thank-you/, /order-confirmation/
  • Because post-conversion pages offer no value as search entry points

Staging & Test Environments

  • staging.example.com, /coming-soon/
  • Because indexed test environments can leak unfinished content and compete with your live site

Cart & Checkout Pages

  • /cart/, /checkout/
  • Because there's no need to index transactional pages

Expired or Discontinued Product Pages

  • /product-old-model/
  • Because if the product is permanently gone, HTTP 410 tells Google it's removed (see the sketch at the end of this list)

Affiliate or Syndicated Content

  • /republished-article/
  • Because we don't want duplicate content
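
And for the expired-product case above, a minimal sketch of returning HTTP 410 on an Apache server (the path is a placeholder for your discontinued product URL):

# Apache config (e.g., .htaccess): return 410 Gone for a removed product page
Redirect gone /product-old-model/

A 410 explicitly signals the page is permanently gone, rather than temporarily missing.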

Summary & Conclusion

Implementing noindex tags is an essential SEO practice for controlling which parts of your website search engines include in their index and, ultimately, what users will see. While we often focus on getting pages indexed, there are situations where preventing indexing is equally beneficial.

The goal is to apply these practices strategically to enhance site structure, get the garbage out of the SERPs, and protect sensitive information, while being intentional so you don't suffer any unintended ranking losses.

If you’re unsure which pages to noindex, performing an audit can help determine the right approach.

And we’ll leave you on one last note from Google: “We have to crawl your page in order to see <meta> tags and HTTP headers. If a page is still appearing in results, it's probably because we haven't crawled the page since you added the noindex rule. Depending on the importance of the page on the internet, it may take months for Googlebot to revisit a page. You can request that Google recrawl a page using the URL Inspection tool.”
