
September 21, 2025 | Product • Marketing • Search Strategy | Canonicals, Anonymization, and Truth

How to See What Keywords a Page Is Ranking For (U.S., 2025) — An Accuracy-First Guide that Also Calls Out the Dirty Realities


See every keyword a single page truly earns in the U.S., without tool hype. We reconcile GSC with third-party estimates, factor in AI Overviews and ad-first layouts, and make decisions that protect revenue and clicks, not vanity “average position.”


Let’s be blunt. “How to see what keywords a page is ranking for” is not hard—measuring it correctly and interpreting it sanely in a rigged attention economy is. Google’s product choices optimize for its ad machine and AI surfaces first; publishers and even advertisers are negotiating over the crumbs. This guide gives you (1) the correct, defensible workflow to get per-URL keyword data, and (2) the pragmatic truth about what’s stacked against you—and what to do anyway.
What you’re actually trying to answer
 
For this exact URL, which queries does it get impressions/clicks for in the United States, how stable are those queries, and how should I reconcile GSC with third-party tools and reality on the SERP?
Short version:

  • Use Google Search Console (GSC) first for your own page: Performance → Pages → pick the URL → Queries. Segment by country (United States) and device; remember “position” is an average over impressions, not a spot check you can reproduce in a browser. (Google Help)

  • Use third-party databases (Ahrefs, Semrush) to see competitor page keywords or discover candidates you can’t see in GSC. Treat their positions/volumes as modeled estimates and verify the important ones. (Ahrefs)

  • Use manual checks sparingly to annotate SERP context (snippets, shopping units, news modules, AI Overviews). Expect variance by time, location, device, and personalization. (Google Help)

Method A — Google Search Console (the primary source for your page)
Path: Search Console → Performance (Search results) → Pages → select the exact URL → Queries. Filter Country = United States. Compare Desktop vs Mobile; don’t average them for diagnosis. (Google Help)
What the metrics actually mean (and why they confuse teams):

  • Average position = the average of where your link appeared across all impressions; it will not match the one manual search you just ran. (Google Help)

  • Not every query is shown. Google applies privacy filtering and other processing limits, so totals can exceed the sum of listed queries. Don’t force a reconciliation that the system itself says won’t balance. (Google for Developers)

  • Data window is ~16 months. Export routinely if you want a longer baseline or cohort analyses; GA4’s Search Console reports inherit the same cap. (Google Help)

Canonicalization reality check: GSC attributes performance to the Google-selected canonical. If your URL is a duplicate (parameters, alternate versions), metrics consolidate under the canonical—which may even live outside your verified property. Verify with URL Inspection. (Google Help)
U.S. workflow that won’t gaslight you:

  1. Filter Page = your URL, Country = United States, Search type = Web.

  2. Run Desktop and Mobile separately (two exports).

  3. Pull Last 28 days and Last 90 days to check stability; archive monthly to beat the 16-month ceiling (a scripted version of this pull follows the steps). (Google Help)
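
If you run this workflow monthly, script the pull instead of clicking through the UI. Here is a minimal Python sketch against the Search Console API, assuming you already hold OAuth credentials with read access to the verified property; fetch_page_queries and its parameters are our naming, not Google's:

```python
# Minimal sketch: per-URL query pull from the Search Console API.
# Assumes `credentials` is an authorized google-auth credential with the
# webmasters.readonly scope and `site_url` is your verified property.
from googleapiclient.discovery import build

def fetch_page_queries(credentials, site_url, page_url, start_date, end_date, device):
    """Return per-query rows for one URL: US only, one device class, web search."""
    service = build("searchconsole", "v1", credentials=credentials)
    body = {
        "startDate": start_date,   # e.g. "2025-06-23" (28 or 90 days back)
        "endDate": end_date,       # e.g. "2025-09-21"
        "type": "web",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [
                {"dimension": "page", "operator": "equals", "expression": page_url},
                {"dimension": "country", "operator": "equals", "expression": "usa"},
                {"dimension": "device", "operator": "equals", "expression": device},  # "DESKTOP" or "MOBILE"
            ]
        }],
        "rowLimit": 25000,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    # Each row carries keys (the query), clicks, impressions, ctr, position.
    return response.get("rows", [])
```

Run it once per window and per device and you have the same four exports as steps 1 to 3, without touching the UI. The same privacy filtering applies: the API will not return queries GSC hides.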

Method B — Third-party databases (best for competitors and discovery)
You can’t open a competitor’s GSC, so you’ll use crawled/modeled datasets:

  • Ahrefs — Site Explorer → Organic keywords, switch Location = United States; you can do this at URL level. Use it to find query candidates; verify important ones. (Ahrefs)

  • Semrush — Organic Research → Pages, pick the page to view its keywords; again set US. (Semrush)

Why numbers won’t match GSC: different keyword universes, crawl recency, and location/device assumptions. Treat as estimates—useful for reconnaissance, not as a source of record. (Ahrefs)
Method C — Manual checks (qualitative confirmation only)
Open a clean/incognito profile and search disputed queries to observe the SERP: featured snippets, “People also ask,” Top Stories, Shopping, map packs, AI Overviews. Results differ by time, place, device, and context, so use this to annotate presentation—not to “measure rank.” (Google Help)
Reconciling the sources (the step that separates pros from amateurs)

  1. Export GSC (URL filter, United States, Web): Desktop 28/90-day + Mobile 28/90-day; keep Clicks, Impressions, Avg position. (Google Help)

  2. Export Ahrefs/Semrush for the same URL + US. (Ahrefs)

  3. Join on query (a pandas sketch of this join follows the list); flag:

    • GSC-only queries (real but long-tail/anonymized beyond tool coverage).

    • Tool-only queries (discovery candidates).

    • Δ position ≥ 3 (investigate).

  4. Manually check the top disputes; record timestamp/device and whether AI Overviews or other modules appear (they often explain CTR swings). (blog.google)

  5. Final list = all GSC queries for that page plus tool-only items you verified or that later appear in GSC. Keep Desktop/Mobile separate for actioning. (Google Help)
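
Here is a minimal pandas sketch of steps 1 to 3, assuming both exports were saved as CSVs sharing a lowercase query column and a position column; file names are placeholders, and real export headers vary by tool, so normalize them first:

```python
# Minimal sketch: join a GSC export with a tool export and flag disputes.
import pandas as pd

gsc = pd.read_csv("gsc_us_desktop_90d.csv")      # query, clicks, impressions, position
tool = pd.read_csv("tool_us_url_keywords.csv")   # query, position

merged = gsc.merge(tool, on="query", how="outer",
                   suffixes=("_gsc", "_tool"), indicator=True)

merged["flag"] = "match"
merged.loc[merged["_merge"] == "left_only", "flag"] = "gsc_only"    # real but beyond tool coverage
merged.loc[merged["_merge"] == "right_only", "flag"] = "tool_only"  # discovery candidates
delta = (merged["position_gsc"] - merged["position_tool"]).abs()
merged.loc[delta >= 3, "flag"] = "investigate"                      # Δ position ≥ 3

# Manual-check queue: biggest disputes (by impressions) first.
disputes = merged[merged["flag"] == "investigate"].sort_values("impressions", ascending=False)
```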

The dark side you must plan around
1) Google’s incentives don’t align with your organic ambitions
Alphabet’s filings are unambiguous: Google Services (Search, YouTube, Network) earns money primarily from advertising; that’s the growth engine they manage toward. Product choices (layout, modules, and, now, AI surfaces) reflect this. (SEC)
Practical consequence: more screen real estate goes to ads and “keep-users-here” answers. Even ad placement mechanics matter: Google runs separate auctions for different ad locations (top vs. bottom), and Top ads generally sit above the organic results. (Google Help)
2) Zero-click is not an edge case—it’s the center of gravity
Independent clickstream studies show large, sustained shares of zero-click searches, and multiple 2024–2025 updates point the same way—even before AI Overviews expanded. (Search Engine Land)
With AI Overviews rolling out broadly, several datasets and trade-press analyses report rising zero-click behavior and declines in publisher referral traffic (while Google publicly says “click quality” is up). Expect contradictory narratives; plan for volatility. (Digiday)
3) AI Overviews change the click-through math
Google’s own posts claim AI Overviews send “higher-quality clicks” and include more prominent links; publishers and analysts counter with measured traffic drops since mid-2024, particularly for information queries (news, travel guides, health, product reviews). (blog.google)
When you audit a page’s keywords today, you’re not just competing with other websites; you’re competing with Google’s summary layer. Treat it as a SERP feature you must annotate and work around. (blog.google)
4) Even advertisers are squeezed
If you think “just pay for the clicks,” welcome to a crowded auction. Benchmarks show rising cost-per-lead in many categories year-over-year as Google leans into automation and inventory changes; advertisers face higher acquisition costs and less transparency about where impressions originate. (WordStream)
And Google continues to tweak how and where ads can appear (top and bottom, different auctions), and what assets can bloat an ad’s footprint—further pushing organic down. (Google Help)
5) Measurement is noisy by design
GSC averages position over impressions, hides some queries for privacy, and attributes performance to canonicals (which might not be the URL you’re staring at). Users see different results by time/location/device/personalization. Treat your end report as a best current estimate, not a fantasy of precision. (Google Help)
Worked example (copy this process)

  1. Pick a page (e.g., /pricing).

  2. GSC: Page filter = that URL → Queries; United States; Desktop then Mobile; export Last 28 and Last 90 days. (Google Help)

  3. Ahrefs/Semrush: Pull URL-level keywords; set United States; export. (Ahrefs)

  4. Join on query; flag GSC-only, tool-only, Δ position ≥ 3; manually check top disputes and note AI Overviews/SERP modules. (blog.google)

  5. Prioritize (a scoring sketch follows the list):

    • High-impression, low-CTR queries first (title/meta work, snippet suitability).

    • Verified tool-only queries with commercial intent (content expansion, internal links).
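
A minimal sketch of that triage over the reconciled GSC rows; the 200-impression floor and 2% CTR cutoff are illustrative assumptions, not Google guidance, so tune them to the page's traffic:

```python
# Minimal sketch: surface high-impression, low-CTR queries first.
import pandas as pd

df = pd.read_csv("gsc_us_desktop_90d.csv")  # query, clicks, impressions, position
df["ctr"] = df["clicks"] / df["impressions"]

# Illustrative thresholds (assumptions): enough impressions to matter,
# CTR low enough to suggest a title/meta or snippet problem.
priority = df[(df["impressions"] >= 200) & (df["ctr"] < 0.02)]
print(priority.sort_values("impressions", ascending=False)
              .loc[:, ["query", "impressions", "ctr", "position"]]
              .head(20))
```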

Bouncebeam’s operating stance

  • First-party first. Your GSC per-URL Queries report (U.S., device split) is the source of truth for presence and actual clicks; tools provide discovery and competitor intel. (Google Help)

  • Stability before action. A query “counts” when it appears in at least 2 of 3 windows (7/28/90 days); a short sketch of this rule follows the list. This dampens long-tail noise and the privacy-driven omissions that GSC itself documents. (Google for Developers)

  • Canonical hygiene. Confirm Google-selected canonical before attributing wins/losses to a specific URL. (Google Help)

  • SERP reality logging. For high-value queries, log whether AI Overviews or other modules appear at audit time; expect CTR distortions even at similar “average positions.” (blog.google)
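
The stability rule is trivial to automate. A minimal sketch, assuming the same URL's queries were exported for the three windows into three CSVs; file names are placeholders:

```python
# Minimal sketch: keep queries that appear in at least 2 of the 3 windows.
import pandas as pd

window_files = {
    "7d":  "gsc_us_desktop_7d.csv",
    "28d": "gsc_us_desktop_28d.csv",
    "90d": "gsc_us_desktop_90d.csv",
}

counts = {}
for label, path in window_files.items():
    for query in pd.read_csv(path)["query"].unique():
        counts[query] = counts.get(query, 0) + 1

stable = sorted(q for q, n in counts.items() if n >= 2)  # "counts" per the 2-of-3 rule
```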

Implementation checklist (U.S.-focused)

  • GSC → Performance → Pages → select URL → Queries; Country = United States; Desktop + Mobile exports for 28d and 90d. (Google Help)

  • Ahrefs/Semrush → URL-level keywords; United States export. (Ahrefs)

  • Join on query; flag tool-only and Δ position ≥ 3.

  • Manual review of top disputes; record timestamp/device and AI Overviews/SERP features. (blog.google)

  • Build the final reconciled list per device; prioritize high-impression, low-CTR items.

  • Monthly archive to beat GSC’s ~16-month window (an archiving sketch follows this checklist). (Google Help)
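
A minimal archiving sketch, reusing the hypothetical fetch_page_queries() helper from the GSC workflow above; the one-CSV-per-month layout is our convention, not a GSC feature:

```python
# Minimal sketch: archive one calendar month of per-URL query data to CSV.
import calendar
import csv
import os
from datetime import date

def archive_month(credentials, site_url, page_url, year, month, device):
    start = date(year, month, 1)
    end = date(year, month, calendar.monthrange(year, month)[1])  # last day of month
    rows = fetch_page_queries(credentials, site_url, page_url,
                              start.isoformat(), end.isoformat(), device)
    os.makedirs("archive", exist_ok=True)
    path = f"archive/{year}-{month:02d}_{device.lower()}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["query", "clicks", "impressions", "ctr", "position"])
        for row in rows:
            writer.writerow([row["keys"][0], row["clicks"],
                             row["impressions"], row["ctr"], row["position"]])
```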

Methods comparison (at a glance)
Method | You/Competitor | Presence accuracy | Position accuracy | Strengths | Limits | Best use
GSC (URL → Queries) | Yours | High | Medium (averaged) | First-party clicks/impressions; country/device filters | Privacy filtering; canonical roll-ups; 16-mo cap | Always your starting point. (Google Help)
Ahrefs/Semrush (URL) | Both | Med–High | Medium (modeled) | Works for competitors; fast discovery | Coverage/model bias; location/device assumptions | Recon + discovery; then verify. (Ahrefs)
Manual checks | Both | Medium | Low (volatile) | See SERP & AI Overviews | Time/place/device variability | Validate disputes; annotate features. (Google Help)
The uncomfortable but useful conclusions

  • Organic is a diminishing slice of the viewport. Ads (top/bottom, bigger assets) and AI layers crowd the fold, by design. Measure what matters (clicks and conversions), not a romantic idea of “position.” (Google Help)

  • Zero-click is structural, not cyclical. Multiple datasets show sustained or rising no-click behavior; plan for less traffic for “answerable” queries. (Search Engine Land)

  • Google says “quality clicks are up”; many publishers’ traffic is down. Both can be true depending on query classes. Segment your expectations. (blog.google)

  • Advertisers aren’t “the escape hatch.” Benchmarks show rising acquisition costs in many verticals as auctions and AI products change; paying more isn’t a strategy. (WordStream)

Sources woven into this guide

  • Google Search Console — Performance report (filters, dimensions), “position” is an average, privacy filtering deep dive, canonicalization & URL Inspection, differing results (time/location/device), 16-month data window. (Google Help)

  • Ahrefs / Semrush — Per-URL Organic keywords and Organic Research → Pages documentation. (Ahrefs)

  • Google Ads — Top vs bottom ad locations, separate auctions by location, sitelinks/callouts expanding ad footprint, Ad Rank thresholds. (Google Help)

  • Alphabet filings — Ad revenue centrality to Google Services (10-K, annual report). (SEC)

  • Zero-click / AI Overviews impact — SparkToro 2024 study; Search Engine Land coverage; Digiday analysis; WSJ reporting on publisher declines; broader traffic-shift reporting. (SparkToro)

  • Google’s AI Overviews claims — “More queries” and “higher-quality clicks” statements; expansion posts. (blog.google)

Final operating advice

  1. Anchor every decision in GSC per-URL data for the United States, split by device. (Google Help)

  2. Use Ahrefs/Semrush for competitor pages and discovery, but validate before you rewrite or report wins. (Ahrefs)

  3. For high-value queries, log the SERP context (including AI Overviews) at audit time; evaluate performance on clicks and conversions, not on an abstract “rank.” (blog.google)

