Category: Search Engine Optimization

  • Why Your GSC Impressions Dropped: Google’s War on Scraping Kills &num=100

    Summary: Google has permanently disabled the &num=100 search parameter. This change ends the ability for SEO tools to request 100 search results in a single query. The immediate impact is a 10x increase in operational costs for rank trackers and SERP scrapers, which must now paginate through 10 separate pages. This is causing tools to break or return errors. A secondary effect is a sudden, sharp drop in desktop impressions reported in Google Search Console. This drop is not a loss of human traffic but the removal of bot impressions from these tools, revealing that historical impression data has been inflated for years. This is a deliberate, strategic move by Google to combat large-scale data scraping, and SEO professionals must immediately audit their tools and re-baseline their impression data.

    Google Kills &num=100 Parameter: Why Your SEO Tools Are Breaking and Your Data Is Wrong

    Have your SEO tools started failing in the last few days? Are your rank-tracking reports full of errors or incomplete data?

    Perhaps you logged into Google Search Console and saw a sudden, terrifying drop in your desktop impressions.

    Your first thought might be a penalty or a massive algorithm update. The truth is simpler and has more profound consequences for our industry.

    Google has effectively killed the Google &num=100 parameter.

    This isn’t a temporary bug. It’s a deliberate, permanent change. It represents a defensive move by Google to combat data scraping. The SEO industry, which has relied on this function for over a decade, is collateral damage.

    This single change fundamentally breaks the economics of most SEO tools. It also forces us to accept a hard truth: our historical impression data has been wrong for years.

    What Was the Google &num=100 Parameter (And Why Did It Matter)?

    For as long as most SEOs can remember, you could add a simple string to a Google search URL to change the number of results.

    That string was &num=100.

    A typical search for “London digital agency” would return 10 results.

    A search using google.com/search?q=london+digital+agency&num=100 would return 100 results on a single page.

    This was the backbone of the entire SEO rank tracking industry.

    Think about how your rank tracker works. It needs to check where your site ranks for a keyword, from position 1 to position 100.

    Using the Google &num=100 parameter, it could do this with one single request. One query. One page load. One proxy IP address.
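
    For anyone who never saw it in the raw, here is a minimal sketch of that single-request model. The helper name and example keyword are ours, not taken from any particular tool.

    ```python
    from urllib.parse import urlencode

    def serp_url_top_100(keyword: str) -> str:
        """Build the old one-shot URL: &num=100 asked Google for 100 results on one page."""
        return "https://www.google.com/search?" + urlencode({"q": keyword, "num": 100})

    print(serp_url_top_100("london digital agency"))
    # https://www.google.com/search?q=london+digital+agency&num=100
    ```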

    It was fantastically efficient. It allowed tool providers like Semrush, Ahrefs, Moz, and hundreds of others to collect massive amounts of data at a relatively low and predictable cost.

    That entire model of efficiency is now gone.

    The New Reality: A 10x Cost for SEO Tool Data

    As of this week, the parameter is dead. Sending a request with &num=100 now simply returns the default 10 results.

    What does this mean for a tool that needs to check the top 100 positions?

    It means it must now make ten separate requests.

    1. It requests page 1 (results 1-10).
    2. It requests page 2 (results 11-20).
    3. It requests page 3 (results 21-30).
    4. …all the way to page 10 (results 91-100).
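
    A rough sketch of the new request pattern, using Google’s long-standing start= pagination offset; the function is illustrative, not lifted from any vendor’s codebase.

    ```python
    from urllib.parse import urlencode

    def serp_urls_top_100(keyword: str) -> list[str]:
        """Build the ten paginated URLs now needed to cover positions 1-100."""
        return [
            "https://www.google.com/search?" + urlencode({"q": keyword, "start": offset})
            for offset in range(0, 100, 10)  # offsets 0, 10, 20, ... 90
        ]

    # Ten URLs, ten requests, ten chances to be blocked, for every keyword on every check.
    for url in serp_urls_top_100("london digital agency"):
        print(url)
    ```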

    This is a catastrophic shift in operational mechanics.

    The cost of Google SERP scraping has just multiplied by ten overnight.

    Every tool provider now faces a 10x increase in the number of requests they must make. This means 10x the proxy IPs, 10x the bandwidth, 10x the infrastructure load, and 10x the risk of being blocked or served CAPTCHAs by Google.

    This is why your tools are breaking. They are failing to get data, timing out, or returning incomplete reports. Their entire scraping infrastructure is being re-engineered in a panic.

    This 10x cost will not be absorbed by the tool providers. It cannot be.

    Prepare for a new wave of price increases across the entire SEO tool industry. The era of cheap, daily, top-100 rank tracking is over.

    The ‘Desktop Impressions Drop’ Mystery Explained

    This tool crisis is running parallel to a second, confusing trend: the widespread desktop impressions drop in Google Search Console.

    At our London agency, we’ve seen clients’ GSC reports showing a sharp decline in desktop impressions, starting on the exact same day the parameter was disabled.

    The immediate fear is a loss of visibility or human traffic.

    This is not what is happening.

    You are not losing human visitors. You are losing bot impressions.

    For years, every time an SEO tool used the &num=100 parameter to scrape a keyword, it registered as a “desktop impression” for every single one of the 100 sites it found.

    Your site, ranking at position 78 for a high-volume keyword, might have been getting thousands of “impressions” per day. These were not humans. These were bots from Ahrefs, Semrush, and countless other rank trackers.

    Now that the &num=100 parameter is dead, that scraping has become 10x harder. The scraping volume has fallen off a cliff as tools scramble to adapt.

    The bot impressions have vanished from your GSC reports.

    The “drop” you are seeing is the removal of this long-standing data inflation. What you are left with is a much more accurate, clean baseline of actual human impressions.

    This is, in a way, a good thing. Our data is finally cleaner.

    The bad news? All your historical desktop impression data is inflated. Any report, any chart, any year-on-year comparison you’ve ever made using that data is based on a contaminated metric.

    This Is Not a Bug. This Is Google’s War on Scraping.

    Some in the SEO community have suggested this is a temporary test or a bug that will be reversed.

    This is wishful thinking.

    The widespread, uniform, and global nature of this change points to a deliberate policy decision. The 10x cost implication is not an accident; it is the entire point.

    This Google search parameter change is a strategic, defensive move.

    Google is in a data war, primarily against AI companies. Large language models (LLMs) are being trained by aggressively scraping Google’s search results. This taxes Google’s infrastructure and threatens its core business.

    By killing the &num=100 parameter, Google makes large-scale scraping 10 times more expensive and 10 times easier to detect. It’s harder for a scraper to hide when it has to make 10 distinct page requests per keyword instead of one.

    The SEO industry is not the target. We are simply the collateral damage in this larger conflict. Google is protecting its data asset, and it is willing to break our tools to do it.

    What SEO Professionals and Agencies Must Do Now

    We must accept this new reality and adapt immediately. Waiting for a fix is not an option.

    Here are the four actions you need to take this week.

    1. Audit Your SEO Tools. Contact your SEO rank tracking provider. Ask them directly how they are handling the removal of the Google &num=100 parameter. Are they stable? Are they paginating? Are they limiting tracking depth? Will their prices be increasing? You need these answers to trust your data.
    2. Re-baseline Your Impression Data. Go into Google Search Console immediately. Add an annotation for the date this change occurred. This is your new “Day Zero” for desktop impressions. You must explain to your clients, bosses, and stakeholders that the desktop impressions drop is a data correction, not a performance loss. All future reports must be benchmarked against this new, lower, more accurate baseline. (A short Search Console API sketch for quantifying the before-and-after gap follows this list.)
    3. Re-evaluate Your Tracking Needs. Do you really need to track 100 positions for every single keyword, every single day? For many keywords, tracking the top 20 or top 30 is more than enough. Reducing your tracking depth will be the primary way to manage the new costs that will be passed down from tool providers.
    4. Prepare for a More Expensive Future. The 10x infrastructure cost for SEO tool data is real. It will be passed on to us, the end-users. Budget for increased subscription fees for all your SERP-dependent tools.
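
    If you want to quantify the correction rather than eyeball it, a minimal sketch along these lines can pull desktop impressions either side of your Day Zero via the Search Console API. It assumes you already have an authorised client from google-api-python-client; the property URL and cutoff date are placeholders to replace with your own.

    ```python
    from datetime import date, timedelta

    # from googleapiclient.discovery import build
    # service = build("searchconsole", "v1", credentials=creds)  # creds: your own OAuth or service-account credentials

    SITE_URL = "https://www.example.com/"  # placeholder: your verified Search Console property
    DAY_ZERO = date(2025, 1, 1)            # placeholder: the date the drop appears in your data

    def desktop_impressions(service, start: date, end: date) -> int:
        """Sum desktop-only impressions between two dates via the Search Analytics query method."""
        body = {
            "startDate": start.isoformat(),
            "endDate": end.isoformat(),
            "dimensions": ["date"],
            "dimensionFilterGroups": [
                {"filters": [{"dimension": "device", "expression": "DESKTOP"}]}
            ],
        }
        rows = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute().get("rows", [])
        return sum(row["impressions"] for row in rows)

    # Compare the 28 days either side of Day Zero to quantify the data correction:
    # before = desktop_impressions(service, DAY_ZERO - timedelta(days=28), DAY_ZERO - timedelta(days=1))
    # after  = desktop_impressions(service, DAY_ZERO + timedelta(days=1), DAY_ZERO + timedelta(days=28))
    ```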

    The end of the &num=100 parameter marks a new chapter in our industry. It’s one with more restricted, more expensive, but ultimately more accurate data. The sooner we adapt, the better.

  • The Unified Mandate: Why CWV, GEO, and AEO are Non-Negotiable for LLM Optimization

    Your current, siloed SEO strategy is obsolete. Relying on separate teams for technical SEO, content, and local optimization is a failing model in an AI-driven search world. Google’s Search Generative Experience (SGE) and other LLM-driven models do not just “rank” your content; they “ingest” and “synthesize” it to form direct answers. Winning in this new era requires a single, unified framework. This new, holistic SEO model merges technical performance (Core Web Vitals), local context (GEO), and answer-first content (Answer Engine Optimization) into a cohesive LLM Optimization strategy. This article explains why this pivot from keyword optimization to intent fulfillment is essential for survival and how to begin implementing it.

    Your technical SEO team just spent a month shaving 200ms off your Largest Contentful Paint (LCP). Your content team published five “keyword-optimized” articles. Your local agency is busy managing Google Business Profile reviews across your London offices.

    And yet, your visibility in AI-generated answers is zero.

    Why? Because these efforts are completely disconnected. You are meticulously optimizing for a search engine that is rapidly being replaced. The age of “10 blue links” is ending. The new battleground is the AI-generated answer box, and it plays by an entirely different set of rules.

    Surviving this shift demands a radical pivot. We must stop chasing keywords and start mastering “intent fulfillment.” This requires a holistic strategy where technical performance (CWV), local context (GEO), and answer-first content (AEO) are all optimized for ingestion and validation by Large Language Models (LLMs).

     LLMs Don’t “Crawl,” They “Ingest”: Your New Content Mandate

    For two decades, SEO has been about “crawling.” We built sites for Googlebot. We used keywords to help it index and rank a document.

    That process is now secondary.

    LLMs and generative AI experiences like SGE operate on a different principle: ingestion. They do not want to just list your page; they want to consume it. They extract its information, validate its authority, and synthesize its facts into a new, combined answer.

    If your content is not built for this ingestion process, it will be ignored.

    AI-driven search values your content differently. Success is no longer about keyword density. It is about:

    • Structured Data: Schema (like FAQPage, Article, LocalBusiness, Product) is no longer a “nice to have.” It is the instruction manual you give the LLM. It explicitly tells the AI what your content is, what your business does, and how to use your information correctly. Without it, the AI has to guess. It will not guess. It will use a competitor’s content that is structured. (A minimal FAQPage sketch follows this list.)
    • Clear E-E-A-T Signals: Experience, Expertise, Authoritativeness, and Trustworthiness are the primary validation signals for an LLM. An AI model is trained to identify and prefer sources that demonstrate authority. This means clear, detailed author biographies, a robust “About Us” page, external citations from reputable sources, and transparent contact information. A page with “By Admin” is a page that an LLM will rightly judge as untrustworthy.
    • Answer Engine Optimization (AEO): This is the “AEO” pillar. You must stop writing “articles” and start providing “answers.” Your content must be formatted for synthesis. This means using clear, descriptive headings (H2s, H3s) that map to user questions. It means using concise paragraphs, bulleted lists, and tables. If a user asks a question, your page must provide the most direct, well-supported, and easily-extracted answer to that question.
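
    To make the structured-data point concrete, here is a minimal, illustrative sketch of the kind of FAQPage markup involved, generated with nothing but the Python standard library. The question and answer are invented examples, not a template from any specific site.

    ```python
    import json

    def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
        """Return a <script> tag containing FAQPage structured data for the given Q&A pairs."""
        data = {
            "@context": "https://schema.org",
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": question,
                    "acceptedAnswer": {"@type": "Answer", "text": answer},
                }
                for question, answer in qa_pairs
            ],
        }
        return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

    print(faq_jsonld([
        ("What is Answer Engine Optimization?",
         "AEO structures content as direct, extractable answers to user questions."),
    ]))
    ```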

    LLM Optimization begins here. You are no longer writing for a user; you are writing to be the source for an AI that is serving the user.

    Core Web Vitals and AI: Why Technical Performance is Now a Trust Signal

    For years, many marketing directors have viewed Core Web Vitals (CWV) as a separate, technical chore. A box to be ticked by the IT department to keep Google happy.

    This is a critical, and now dangerous, misunderstanding.

    A slow, janky site (poor LCP, high Cumulative Layout Shift) is, first and foremost, a bad user experience. AI models are trained on massive datasets to associate poor user experience with low-quality, untrustworthy content.

    Think of it from the AI’s perspective. Its primary goal is user satisfaction. If it synthesizes an answer and provides a link to your site for more information, and that page takes five seconds to load or shifts around as ads pop in, the user is frustrated. This frustration reflects poorly on the AI, not just your brand.

    The AI model infers this. It understands that a site that invests in a stable, fast, and secure user experience (good CWV, HTTPS) is more likely to be a legitimate, authoritative operation. A site that cannot be bothered to fix its technical foundation is probably not a reliable source of information.

    Core Web Vitals are no longer just a “Google” metric. They are a foundational trust signal.

    A technically sound site is the price of entry to be considered a trusted source for LLMs. A poor CWV score is a high-friction signal. The LLM will simply get its information from a lower-friction, higher-quality source. Your excellent, well-researched content will never even be ingested because your technical foundation failed the first test.
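
    A practical starting point is to check the field data Google itself publishes. The sketch below queries the Chrome UX Report API for p75 LCP and CLS; it assumes you have a CrUX API key, the page URL is a placeholder, and the API only returns data for URLs with enough real-user traffic.

    ```python
    import requests  # third-party: pip install requests

    CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
    API_KEY = "YOUR_API_KEY"                        # placeholder
    PAGE_URL = "https://www.example.com/services/"  # placeholder page

    def field_vitals(url: str) -> dict:
        """Return the p75 LCP (ms) and CLS for a URL from real-user CrUX field data."""
        body = {
            "url": url,
            "formFactor": "PHONE",
            "metrics": ["largest_contentful_paint", "cumulative_layout_shift"],
        }
        resp = requests.post(f"{CRUX_ENDPOINT}?key={API_KEY}", json=body, timeout=30)
        resp.raise_for_status()
        metrics = resp.json()["record"]["metrics"]
        return {name: m["percentiles"]["p75"] for name, m in metrics.items()}

    # print(field_vitals(PAGE_URL))
    ```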

    Context is King: How GEO and AEO Create Relevance for LLMs

    LLMs thrive on context. A query like “best Sunday roast” or “compliance software” is functionally meaningless on its own.

    In the old model, the user would have to refine their search. In the new model, the AI does it for them.

    AI models are integrating user data by default. The most important contextual signal is location (GEO). That “best Sunday roast” query, coming from a user in London, is instantly understood as “best Sunday roast near me” or “best Sunday roast in Islington.”

    A query for “compliance software” from a device located in the City of London is understood as “MiFID II compliance software for UK-based financial firms.”

    Your content must be explicitly optimized for this contextual intent. This is where GEO (Local SEO) and AEO (Answer Engine Optimization) converge into a single, powerful tool for LLM Optimization.

    Look at your current content.

    • Bad Content: A blog post titled “Our 10 Favorite Sunday Roasts.”
    • Good Content: A local landing page titled “The Best Sunday Roast in Islington, London.” This page is structured with clear AEO-driven Q&As (“What time is Sunday roast served?”, “Is it kid-friendly?”, “What is the average price?”, “What are the vegetarian options?”). It is marked up with LocalBusiness and Restaurant schema, has an embedded map, and lists opening hours.

    This “Good Content” example is now the perfect, ingestible source for an AI. When a user asks their phone, “Where can I get a good Sunday roast near Angel station that’s good for kids?”, the AI can confidently synthesize an answer directly from your page.

    You are no longer just optimizing for a user searching on Google Maps. You are optimizing to be the definitive source that the AI uses to answer that user’s specific, location-aware, and high-intent query.
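
    For illustration, the markup behind that “Good Content” page might look something like the sketch below: Restaurant structured data (a LocalBusiness subtype) carrying the local details an AI needs to resolve a location-aware query. Every value is an invented placeholder.

    ```python
    import json

    restaurant_schema = {
        "@context": "https://schema.org",
        "@type": "Restaurant",                 # Restaurant is a subtype of LocalBusiness
        "name": "Example Roast House",         # placeholder business name
        "servesCuisine": "British",
        "priceRange": "££",
        "address": {
            "@type": "PostalAddress",
            "streetAddress": "1 Example Street",
            "addressLocality": "Islington, London",
            "postalCode": "N1 0XX",
            "addressCountry": "GB",
        },
        "openingHoursSpecification": [{
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": "Sunday",
            "opens": "12:00",
            "closes": "21:00",
        }],
    }

    print(f'<script type="application/ld+json">{json.dumps(restaurant_schema, indent=2)}</script>')
    ```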

    This Isn’t More Work, It’s Smarter Work: The Compounding Returns of Holistic SEO

    I speak to marketing directors and in-house SEO managers in London every week. The immediate pushback is predictable: “My teams are already at capacity. We cannot manage another ‘optimization’ trend. We are stretched thin managing our current SEO, content, and technical backlogs.”

    This reaction is based on a false premise. This is not another trend to add to the pile. It is the unification of your existing, scattered, and inefficient efforts.

    Right now, you have three different teams (or agencies) running on three separate treadmills, producing three separate, low-impact assets:

    1. Tech Team: Fixes a sitewide CLS issue. (Impact: Marginal)
    2. Content Team: Writes a 1,500-word blog post on a broad keyword. (Impact: Low)
    3. Local Team: Updates holiday hours on GMB. (Impact: Minimal)

    This is a massive waste of resources.

    The new, unified model creates a single, high-impact asset. Imagine your team is building a new page for a key commercial service.

    • The Process: The Content Strategist, GEO Specialist, and Technical SEO work together from the start.
    • The Asset:
      • The page structure is pure AEO. It is built as a series of direct answers to the most common user questions (“What is [service]?”, “Who needs [service]?”, “How much does [service] cost in London?”, “What is the process?”).
      • The content is enriched with GEO signals. It explicitly mentions the London boroughs or industries it serves. It includes LocalBusiness schema, client testimonials with locations, and embedded maps.
      • The page is validated by CWV. The technical team ensures this specific page loads instantly, is perfectly stable, and is flawless on mobile.

    The Payoff: This single asset now creates compounding returns. It serves all search masters simultaneously.

    • It ranks in traditional search for its target keywords.
    • It appears in local search and map packs for its GEO-specific terms.
    • It is now the perfect, ingestible, validated source for an LLM to use in an AI-generated answer.

    You have stopped bailing water with three different buckets. You have unified your team to build a single, faster boat. This isn’t more work; it’s smarter, more focused work.

    Stop Auditing in Silos: Your First Step to a Real AI Search Strategy

    Your current reports are lying to you.

    A “green” Core Web Vitals score means nothing if your content is unstructured mush that an AI cannot ingest. A high-ranking blog post is a vanity metric if an AI bypasses it entirely by providing a direct answer sourced from a competitor.

    The fundamental problem is that you are measuring the components, not the system. You are admiring the individual bricks while your house is being redesigned by someone else.

    The first step is to get an honest baseline. You must stop commissioning a “Technical Audit,” a “Content Audit,” and a “Local SEO Audit” as if they are unrelated. You must see how these elements perform together, in the context of your main competitors.

    Stop auditing your site in silos. At OnDigital, we have moved beyond these fragmented, outdated reports. It is time for a unified “AI Readiness Audit” that benchmarks your CWV, GEO, AEO, and LLM signals against your top competitors.

    This is the only way to see the real gaps and build a strategy that works for the next decade of search, not the last. The AI search era is here. You can either be the source it quotes or the link it forgets.