
How to Bypass Google's Anti-Scraping Restrictions

Learn how to scrape Google search results using a straightforward approach with the Impit package and custom parsing logic.


Google has recently tightened its grip on search results, making life harder for SEOs, scrapers, and tool builders. If you’ve noticed strange redirects or missing parameters when scraping Google, you’re not alone. In this post, I’ll show you what changed, why it matters, and how you can still scrape Google search results reliably.

Google’s Lockdown on Search Results

Over the last few months, Google rolled out a couple of major changes:

JavaScript requirement for search results

If you visit a Google search page without JavaScript enabled, you'll see something like this instead of real results. Unless your bot runs a full browser, that's a dead end:

<meta content="0;url=/httpservice/retry/enablejs?sei=QtvSaN8zop6-vQ_B1dvwDQ" http-equiv="refresh" />
<div style="display: block">
  Please click <a href="/httpservice/retry/enablejs?sei=QtvSaN8zop6-vQ_B1dvwDQ">here</a> if you are not redirected within a few seconds.
</div>
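If you want to detect this interstitial programmatically before parsing, one cheap check is the retry URL it embeds. This is a sketch based on the page above; the path is Google's and could change at any time:

```javascript
// Sketch: detect Google's "enable JavaScript" interstitial by looking
// for the retry URL it embeds. The exact path is taken from the page
// shown above and is not guaranteed to be stable.
function isJsWall(html) {
  return html.includes("/httpservice/retry/enablejs");
}
```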

The removal of the num parameter

For years, SEOs relied on &num=100 to grab 100 results per query. That parameter is now gone, and tools like Ahrefs and Semrush had public issues when it broke. Search Engine Land covered the update. Translation: Google is making it harder to scrape. Much harder.

The Workaround: Using the AdsBot User-Agent

First, shoutout to Jacob Padilla for figuring this out and telling me about it.

The good news? There’s still a way around the JavaScript wall.

When you send requests to Google with this user agent, the normal HTML SERP is returned, no redirect required:

"User-Agent": "AdsBot-Google (+http://www.google.com/adsbot.html)"

This tricks Google into thinking you’re their own AdsBot crawler, which is allowed to fetch results directly.
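Here's a minimal sketch of such a request using Node 18+'s built-in fetch. Only the User-Agent string comes from above; the helper names and the bare /search?q= endpoint are my assumptions:

```javascript
// Sketch: request a Google SERP with the AdsBot user agent.
// ADSBOT_UA is the header shown above; buildSearchRequest and
// fetchSerp are illustrative helpers, not an official API.
const ADSBOT_UA = "AdsBot-Google (+http://www.google.com/adsbot.html)";

function buildSearchRequest(query) {
  return {
    url: `https://www.google.com/search?q=${encodeURIComponent(query)}`,
    headers: { "User-Agent": ADSBOT_UA },
  };
}

async function fetchSerp(query) {
  const { url, headers } = buildSearchRequest(query);
  const res = await fetch(url, { headers }); // Node 18+ global fetch
  if (!res.ok) throw new Error(`Google returned ${res.status}`);
  return res.text(); // plain HTML, no JS interstitial
}
```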

Parsing the Results

Once you bypass the redirect, you can parse Google’s search results like before. Here’s an example of how I extract titles, links, and snippets from the returned HTML.

This works without needing Puppeteer or a headless browser. Faster, cheaper, and less resource-intensive.

const cheerio = require("cheerio");

function adriansParseResults(html) {
  try {
    const $ = cheerio.load(html);
    const results = [];
    const seenUrls = new Set();

    // Keep only anchors whose element holds exactly two <span>s
    // (title + displayed URL). x.com results nest the spans one
    // level higher, so check the parent instead.
    const links = $("*:has(> a) > a").filter((_, el) => {
      const url = $(el).attr("href");
      if (url?.includes("x.com")) {
        return $(el).parent().children("span").length === 2;
      }

      return $(el).children("span").length === 2;
    });

    links.each((_, el) => {
      const rawUrl = $(el).attr("href");
      if (!rawUrl) return;

      // Google wraps outbound links as /url?q=<target>&...; unwrap them.
      let url = decodeURIComponent(
        rawUrl.startsWith("/url?q=")
          ? rawUrl.split("&")[0].replace("/url?q=", "")
          : rawUrl
      );

      // Drop query strings so the same page isn't counted twice.
      url = url.split("?")[0];

      if (seenUrls.has(url) || url === "/search") {
        return;
      }
      seenUrls.add(url);

      // The first <span> inside the anchor is the result title
      // (one level up for x.com results).
      let title = $(el).find("span").first().text().trim();

      if (url.includes("x.com")) {
        title = $(el).parent().find("span").first().text().trim();
      }

      // The snippet lives in a <table> two levels above the anchor.
      const description = $(el)
        .parent()
        .parent()
        .find("table")
        .first()
        .text()
        .trim();

      results.push({ url, title, description });
    });

    return results;
  } catch (error) {
    console.error("error at adriansParseResults", error.message);
    throw new Error(error.message);
  }
}
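The trickiest step in the parser is normalizing Google's redirect links. Here it is pulled out on its own (same logic as above, just standalone, with a hypothetical function name):

```javascript
// Standalone version of the URL-normalization step from the parser:
// unwrap /url?q=<target>&... redirect links, then strip query strings
// so duplicates collapse to one URL.
function normalizeGoogleHref(rawUrl) {
  const unwrapped = decodeURIComponent(
    rawUrl.startsWith("/url?q=")
      ? rawUrl.split("&")[0].replace("/url?q=", "")
      : rawUrl
  );
  return unwrapped.split("?")[0];
}
```

For example, normalizeGoogleHref("/url?q=https://example.com/post&sa=U") yields "https://example.com/post", while a direct href like "https://example.com/a?b=1" just loses its query string.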

Why This Matters for SEOs and Scrapers

  • SEO tools need reliable access to SERPs for keyword research and rank tracking.
  • Scraping projects depend on bulk data collection.
  • Developers want efficient ways to analyze search results without spinning up costly browser clusters.

Bypassing these blocks keeps your workflows running smoothly.

Final Thoughts

Google keeps making scraping harder, and they’ll keep experimenting with new restrictions. But with the right techniques, you can stay one step ahead.

If you’re looking for easy social media scraping APIs (Instagram, TikTok, YouTube, Reddit, Twitter/X, and more), check out Scrape Creators. We handle the messy parts of scraping so you don’t have to.


Written by Adrian Horning

Founder of ScrapeCreators. I write about social data APIs, scraper reliability, and turning public creator data into useful products.
