Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Wandering Bear Coffee's market — your job is to tell us what we got right, what we got wrong, and what we missed.
Before we measure citation visibility in the organic cold brew coffee space, the three signals below tell us whether AI crawlers can access and trust your site. They set the baseline the audit measures against.
Wandering Bear Coffee operates in the "organic cold brew coffee delivered directly to offices and homes" category across three distinct channels: office delivery, direct-to-consumer subscriptions, and retail grocery. The knowledge graph identifies 5 primary competitors (RISE Brewing Co., Chameleon Cold Brew, Commonwealth Joe, Stumptown Coffee Roasters, and La Colombe) and 4 secondary competitors, with 6 buyer personas spanning all three channels. Marcus Webb (VP of People Operations) is the dominant office-channel decision-maker, while Linda Park (Grocery Category Buyer) controls retail shelf placement and Priya Nair (COO) represents the startup executive buyer.
Layer 1 reveals two high-severity findings. "A/B Testing Script Returns JavaScript Instead of Page Content on All Pages" indicates the Shoplift A/B testing script may be serving minified JavaScript to AI crawlers instead of Wandering Bear's actual page content — if confirmed, AI citation engines cannot access the site's product descriptions, organic certifications, office delivery program details, or shelf-stability messaging, making the site effectively invisible to buyer queries. "Office Delivery, About, and Retail Pages Not Updated Since 2017–2018" means four core commercial pages predate the shelf-stable keg launch (2019), expanded retail distribution into Target and Walmart, and the current subscription program.
Two actions before the validation call: (1) Business validation: Wandering Bear needs to confirm how revenue splits across office delivery, DTC, and retail, since this determines how the audit's 150–200 queries are distributed across channel-specific buyer language; a DTC-heavy revenue mix would shift approximately 40% of queries from office-buyer phrasing to individual consumer language. The team should also confirm whether the Workplace Experience Manager (Sofia Reyes, medium confidence) is a distinct buyer persona or overlaps with the Office Manager role. (2) Engineering: the Shoplift A/B testing script rendering behavior should be investigated immediately by testing pages with JavaScript disabled; this is the single highest-impact action and does not require waiting for the validation call.
What This Document Is
This document presents the research foundation for your GEO visibility audit in the organic cold brew coffee space. It maps the competitive landscape, buyer personas, product capabilities, and buyer pain points that will drive 150–200 test queries across AI platforms including ChatGPT and Perplexity. Every entity below feeds directly into query generation — getting them right means the audit measures what actually matters to your buyers.
What We Need From You
Where you see purple boxes like this one, we're flagging something that needs your judgment. Every correction you make improves the precision of the audit queries — a wrong persona or mistiered competitor doesn't just waste queries, it produces misleading competitive visibility data. Focus on purple boxes and medium-confidence items.
Confidence Badges
Confidence badges (High, Medium, Low) appear throughout. High confidence means the data comes from a verified source — Wandering Bear's product pages, G2 reviews, or retailer listings. Medium means it's inferred from category patterns or indirect sources. Low means it's a best guess. Focus your review time on Medium and Low confidence items — those are where your knowledge matters most.
The foundation of every audit query starts here — getting the company profile right ensures we're testing the right category language and competitive frame.
Validation Question
Wandering Bear serves three distinct channels — office delivery (kegs and boxes shipped to workplaces), DTC subscriptions (individual consumers ordering for home), and retail grocery (Whole Foods, Target, Walmart shelf placement). Each channel has different buyers, different competitors, and different query language. How does revenue break down across these three channels today? If office delivery dominates, we weight ~60% of audit queries toward facilities and People Ops buyer language. If DTC is the growth engine, we shift toward individual consumer queries like "best organic cold brew subscription box." The query architecture changes substantially based on the answer — this is the single most consequential input for audit design.
These are the people who evaluate, compare, and sign off on cold brew purchases — whether that's office delivery contracts, DTC subscriptions, or retail shelf placement. Each persona drives a distinct query cluster in the audit.
Critical Review Area
Personas have the highest downstream impact of any KG element. Each persona generates 15–25 queries tailored to their search behavior, seniority, and buying stage. A wrong persona wastes those queries entirely. A missing persona means an entire buyer segment goes untested. Please scrutinize every card below.
Data Sourcing Note
Persona names, roles, departments, seniority, influence levels, and veto power are sourced directly from the knowledge graph. Buying jobs, query focus areas, and role descriptions are synthesized from the persona's KG attributes and Wandering Bear's category context. Fields marked with a medium confidence badge were inferred from category patterns rather than directly sourced.
→ In your sales cycle, does the Office Manager initiate cold brew vendor evaluations proactively (searching, requesting samples, comparing options), or does she execute on a directive from People Ops or executive leadership? If Tara's role is primarily execution — placing orders and managing deliveries rather than evaluating vendors — we should reclassify her as an influencer and reduce the vendor-comparison query cluster targeting her search behavior, reallocating those queries to the decision-maker who actually triggers the evaluation.
→ We've classified Marcus as the primary budget holder for office cold brew. In practice, at what company size does a VP of People Operations personally evaluate cold brew vendors versus delegating the comparison to an Office Manager or Workplace Experience lead? If Marcus approves but doesn't evaluate, the queries targeting his role should emphasize ROI justification ("is office cold brew worth the cost") and approval-stage criteria rather than product comparison and vendor discovery.
→ Sofia was inferred from category patterns — the "Workplace Experience Manager" title is increasingly common at mid-market and enterprise companies but may not appear in Wandering Bear's actual sales pipeline. Does this role show up as a distinct buyer in your deals, or do the Office Manager (Tara) and VP People (Marcus) cover this territory? If Sofia overlaps with Tara, we should merge them into a single persona and reallocate those 15–20 query slots to a different buyer type — possibly an Executive Assistant or Events Coordinator.
→ James represents the individual DTC subscriber — someone buying cold brew for home, not an office. His search behavior ("best organic cold brew subscription," "strongest cold brew delivered") is fundamentally different from office buyers. Does Wandering Bear's DTC channel generate enough revenue to justify a dedicated persona with 15–20 of its own queries in the audit, or should we consolidate DTC intent into a lighter query cluster and reallocate those slots to office or retail buyer language?
→ Linda was inferred as a retail grocery buyer — the person deciding whether Wandering Bear gets shelf space. Does Wandering Bear actively pitch to category buyers directly, or is retail distribution managed through a broker or distributor? If the retail relationship is broker-mediated, Linda's persona needs different query language ("organic cold brew velocity data," "cold brew category growth trends for buyer presentations") than if Wandering Bear sells direct. The answer also determines whether we include retail-defense queries ("Wandering Bear vs. Chameleon Cold Brew shelf comparison") or skip them.
→ We've included a startup COO because Wandering Bear's origin story and marketing reference this buyer type. At what company stage does a COO personally decide on office cold brew — Series A with 20 employees, or does this decision get delegated once headcount crosses ~50? If Priya represents a narrow company-stage window (say, 20–75 employees), we should scope her query cluster to early-stage startup language rather than treating her as a universal executive buyer. If COOs at larger companies also make this call, the query set expands to include scale-up operational language.
Missing Personas?
Three roles that commonly appear in cold brew office delivery sales but aren't in this KG: (1) Executive Assistant — at companies under 100 employees, the EA often manages office perks, orders supplies, and runs the break room; if this is a common Wandering Bear buyer, they'd generate discovery-stage queries like "easy cold brew for the office." (2) Hospitality or Events Coordinator — if Wandering Bear supplies coworking spaces, conferences, or corporate events, this buyer has entirely different query language around bulk ordering and event catering. (3) Corporate Wellness Manager — if cold brew is positioned as a wellness benefit (organic, no sugar, clean caffeine) rather than just a perk, this role evaluates against wellness program criteria. Who else shows up in your deals?
These are the brands AI systems will compare Wandering Bear against when buyers ask queries like "best cold brew for the office" or "organic cold brew subscription." Tier assignments determine which queries test direct head-to-head differentiation.
Why Tiers Matter
Tier assignments determine how the approximately 30–40 competitive queries split between direct differentiation (primary competitors in head-to-head comparisons like "Wandering Bear vs. RISE Brewing" or "best cold brew keg delivery for offices") and broader category awareness (secondary competitors in landscape queries). We've identified 5 primary and 4 secondary competitors. All four secondary competitors — High Brew Coffee, Grady's Cold Brew, Blue Bottle Coffee, and Lucky Jack Coffee — have medium confidence on tier assignment. If any of them actually appear in head-to-head evaluations in your deals, promoting them to primary would add approximately 6–8 direct comparison queries per competitor.
Validation Questions
Three questions for the call: (1) Missing vendors: Are we missing any brands that regularly appear in your competitive evaluations? Joyride Coffee Distributors surfaces in several competitor positioning summaries as a distribution partner for La Colombe and Blue Bottle — does Joyride compete directly in office cold brew delivery, or are they purely a distributor? (2) Tier accuracy: All four secondary competitors (High Brew, Grady's, Blue Bottle, Lucky Jack) have medium confidence on tier assignment. Do any of them actually appear in head-to-head evaluations in your sales conversations? Promoting one to primary adds 6–8 direct comparison queries. (3) Irrelevant primaries: Is Commonwealth Joe's regional limitation (East Coast metro areas only) narrow enough that they never appear in your deals outside those markets? If so, moving them to secondary would free approximately 6–8 query slots for a more relevant competitor.
These are buyer-level capabilities — not technical specs. Each feature determines which capability queries are tested in the audit. Strength ratings indicate how Wandering Bear compares to the competitive set from an outside-in perspective.
Cold brew that actually wakes you up — 2x the caffeine of regular cold brew, each glass equals two espresso shots
100% certified organic cold brew with no artificial ingredients, no added sugar, and ethically sourced beans
Cold brew shipped directly to the office — boxes, kegs, or dispensers with auto-delivery and a 10% subscription discount
No refrigeration needed until opened — 180-day shelf life on kegs means fewer deliveries and no spoilage risk
Multiple formats to match how you drink — on-tap boxes for home, kegs for big offices, concentrate for custom dilution
Beyond black — caramel, seasonal, decaf, and limited-edition flavors for variety-seeking teams
No nitrogen tanks, no keg taps, no equipment to install — just pull the tab and pour from the box
Find it at Whole Foods, Target, Walmart, Amazon, or subscribe direct — available wherever your team already shops
Subscribe and save with auto-delivery you can cancel, skip, or reschedule anytime without calling anyone
Creamy, cascading nitro cold brew on tap — the draft beer experience without the beer
Validation Questions
We've rated six features as strong, three as moderate, and one (Nitrogen-Infused Cold Brew) as absent. Two questions: (1) Retail Availability: Is the "moderate" rating on Retail Availability & Distribution Footprint still accurate? If Wandering Bear's distribution has expanded significantly since our data pull — particularly into new retail chains beyond Whole Foods, Target, and Walmart — this should be upgraded to "strong," which shifts retail-presence queries from defensive positioning to offensive differentiation against Stumptown and La Colombe. (2) Nitro absence: The "absent" rating on Nitrogen-Infused Cold Brew reflects that Wandering Bear does not offer nitro. Is this absence a competitive vulnerability that comes up in sales conversations (office buyers asking "do you have nitro?"), or do buyers see the no-equipment shelf-stable format as a complete replacement for nitro? The answer determines whether we frame nitro absence as a known gap to defend against or as a positioned advantage ("you don't need nitrogen tanks") in audit queries against RISE Brewing and Commonwealth Joe.
These are the frustrations that make buyers search for cold brew solutions. The buyer language below is how audit queries will be phrased — if the language doesn't match how your buyers actually talk, the audit tests the wrong queries.
Validation Questions
Two items to verify: (1) "Retail Shelf Competition" is rated high-severity but sourced from inference (medium confidence) — does Wandering Bear's team actually experience crowded-shelf pressure as a top-3 challenge, or is shelf placement relatively secure at current retail partners like Whole Foods and Target? If this pain point is lower-severity than rated, we'd deprioritize retail-defense queries and reallocate those slots to office or DTC buyer language. (2) "Supply Running Out" is rated medium-severity and inferred — is unpredictable beverage depletion a real complaint from office customers, or does the subscription auto-delivery model effectively solve this? If it's solved, we'd drop these queries entirely. Missing pain points to consider: order minimum frustrations (if small offices of 10–15 people can't meet minimum order thresholds), temperature sensitivity during shipping (if summer deliveries arrive warm and compromise taste), or taste consistency concerns across production batches.
These are technical findings from the Layer 1 site analysis. Each finding includes what we found, why it matters for AI visibility, and a recommended fix.
Engineering Action Required
Two high-severity technical findings require attention. The most urgent is the Shoplift A/B testing script behavior: every page tested returned JavaScript rather than rendered content to non-JS fetches, the access pattern many AI crawlers rely on. Engineering should immediately test representative pages with JavaScript disabled in Chrome DevTools. If pages show no visible content without JS, this is a structural blocker for AI citation. The second priority is sitemap cleanup — approximately 40 non-commercial utility pages can be noindexed and removed from the sitemap in under a day. Both tasks are independent of the validation call and should start now.
What we found: Every page analyzed — homepage, all /pages/* static pages, all /collections/* pages, and all /products/* pages — returned the Shoplift A/B testing JavaScript bundle (8,000–12,000 tokens of minified JS) rather than visible page content when accessed via automated HTTP fetch. This was consistent across all 41 pages tested. Google has successfully indexed page content (confirmed via search result snippets), suggesting Googlebot's JavaScript rendering resolves the issue. However, non-JS fetches — which simulate how many AI crawlers operate — return only the Shoplift bundle.
Why it matters: AI crawlers including GPTBot (ChatGPT), ClaudeBot (Claude), and PerplexityBot do not guarantee JavaScript execution during their crawl passes. If these crawlers receive only the Shoplift A/B testing bundle rather than the rendered page content, Wandering Bear's marketing copy, product descriptions, organic certifications, shelf-stability messaging, and office delivery program details are effectively invisible to AI systems.
Recommended fix: (1) Use Google Search Console's URL Inspection tool to fetch and render sample pages. (2) Test with JavaScript disabled in Chrome DevTools — if pages render no visible text without JS, that confirms a client-side rendering (CSR) dependency. (3) Work with Shoplift support to ensure the A/B testing script loads asynchronously and does not intercept server-rendered Shopify Liquid HTML. (4) Confirm Shopify's native Liquid-rendered HTML is present in the initial HTTP response before any JavaScript executes.
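The manual non-JS check in step (2) can also be scripted across many pages. Below is a minimal sketch in standard-library Python that classifies a raw HTTP response the way a crawler without JavaScript execution would experience it; the function names and the 200-character threshold are our own assumptions for illustration, not part of any tool named above:

```python
from html.parser import HTMLParser


class VisibleTextExtractor(HTMLParser):
    """Collects text outside <script> and <style> tags,
    roughly what a non-JS crawler 'sees' on the page."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # >0 while inside script/style
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())


def visible_text(html: str) -> str:
    parser = VisibleTextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


def looks_script_only(html: str, min_chars: int = 200) -> bool:
    """True if the response carries almost no visible text,
    i.e. it is likely a JS bundle rather than page content."""
    return len(visible_text(html)) < min_chars
```

To run it against live pages, fetch each URL with a plain HTTP client (e.g. urllib) and pass the response body to looks_script_only; any page flagged True warrants the DevTools check in step (2).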
What we found: Four core commercial pages have sitemap lastmod dates from 2017–2018: /pages/office-delivery (2018-08-03, 7.5 years stale), /pages/about (2018-07-23, 7.5 years stale), /pages/find-wandering-bear-cold-brew-in-stores (2018-07-30, 7.5 years stale), and /pages/wholesale (2017-12-14, 8+ years stale). These pages predate the shelf-stable keg launch (2019), the bag-in-box office keg expansion, expanded retail distribution into Target and Walmart, and The Pack membership program.
Why it matters: AI systems weight content freshness in training data and real-time web retrieval. An 8-year-old /pages/office-delivery page almost certainly omits the shelf-stable keg (Wandering Bear's key differentiator vs. nitrogen-dependent competitors), current subscription options, and updated retail partners. Stale pages are deprioritized by freshness algorithms, reducing the likelihood they surface in AI citations.
Recommended fix: Rewrite /pages/office-delivery to reflect the current product lineup (96oz box, 1-gallon box, 5-gallon shelf-stable keg), emphasize no-nitrogen/no-equipment as the primary competitive differentiator, and add current subscription terms and The Pack membership tiers. Refresh /pages/about with current company scale. Update /pages/find-wandering-bear-cold-brew-in-stores to include Target, Walmart, and Amazon. Aim for >400 words on each page with specific, factual claims.
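The lastmod sweep that surfaced these stale dates is easy to reproduce so the team can re-check after the refresh. A standard-library Python sketch; the two-year threshold is an assumption, and the sitemap snippet in the test is illustrative rather than Wandering Bear's actual file:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Default sitemap namespace per the sitemaps.org protocol.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def stale_urls(sitemap_xml: str, today: date, max_age_days: int = 730):
    """Return (url, lastmod) pairs whose lastmod is older than max_age_days."""
    root = ET.fromstring(sitemap_xml)
    flagged = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod:
            # lastmod may carry a time component; the date prefix is enough here.
            age_days = (today - date.fromisoformat(lastmod[:10])).days
            if age_days > max_age_days:
                flagged.append((loc, lastmod))
    return flagged
```

Running this against sitemap_pages_1.xml on a schedule would catch the next page that silently ages out of freshness.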
What we found: Of approximately 98 pages in sitemap_pages_1.xml, roughly 40 are non-commercial utility pages: 15+ email capture landing pages, promotional funnel flow pages, testing/staging pages (/pages/replo-testing, /pages/test-new-homepage, /pages/okendotest), QR redirect pages, internal account pages, and promotional campaign pages.
Why it matters: Including non-commercial pages in the sitemap wastes AI crawlers' limited crawl budget on content with zero commercial value. AI systems that crawl the sitemap may surface these pages in buyer responses — a query about "Wandering Bear subscription discount" could surface /pages/smsbear10-subscribe-page instead of the actual membership page. Test pages indexed in production risk surfacing placeholder content.
Recommended fix: Add <meta name="robots" content="noindex"> to all email capture landing pages, promotional funnel pages, test pages, and internal account portal pages. Remove these URLs from sitemap_pages_1.xml. The cleaned sitemap should contain only commercially valuable URLs.
What we found: The commercial blog cluster on /blogs/articles/ contains 16+ articles targeting office coffee buyers, but 15 of 16 were last modified in 2021–2022 (3–5 years ago). Key articles include "How to Begin an Office Coffee Program" (2022), "Subscription Coffee Service for Offices" (2021), "Why Get Office Cold Brew" (2021), and "Remote Office Coffee Program" (2022). Only one article was updated within the past 12 months.
Why it matters: AI models trained on 2024–2025 web crawl data will have limited citations from Wandering Bear's blog on timely buyer topics: return-to-office coffee perks, hybrid work schedules, 2025–2026 office wellness trends. Competitors who publish fresh content on these themes will receive disproportionate AI citation share.
Recommended fix: Refresh the top 5 office-focused articles for 2025–2026: update the office coffee program guide with return-to-office framing and Pack membership details, rewrite the subscription article with current tiers and the shelf-stable keg format, and update the remote office article to reflect hybrid work patterns. Each refresh should aim for >1,000 words with specific, citable claims.
What we found: Due to the A/B testing script rendering behavior, structured data (JSON-LD schema), meta description tags, Open Graph tags, canonical URLs, and definitive client-side rendering status could not be assessed through our analysis method. The Shoplift JS bundle behavior prevented access to page-level HTML signals across all 41 pages analyzed.
Why it matters: Product schema markup enables rich results in Google Search and provides structured product data that AI product aggregators consume. Missing or incomplete schema on product pages, blog posts, or FAQ posts could prevent Wandering Bear from appearing in rich snippets or being correctly parsed by AI knowledge bases.
Recommended fix: (1) Use Google's Rich Results Test to verify Product schema on all product pages. (2) Check meta descriptions and OG tags using browser Developer Tools. (3) Verify Article schema on /blogs/articles/ posts. (4) Consider adding FAQ schema to /blogs/general-coffee-questions/* posts. (5) Run Screaming Frog with JavaScript enabled vs. disabled on 5 representative pages to confirm server-side rendering status.
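As a quick complement to the Rich Results Test, the presence of JSON-LD in the raw server response (before any JavaScript runs) can be checked directly. A standard-library Python sketch; the class and function names are our own:

```python
import json
from html.parser import HTMLParser


class JSONLDCollector(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.blocks.append(data.strip())


def schema_types(html: str) -> list[str]:
    """Return the @type values of every parseable JSON-LD block in raw HTML."""
    collector = JSONLDCollector()
    collector.feed(html)
    types = []
    for block in collector.blocks:
        try:
            doc = json.loads(block)
        except ValueError:
            continue  # malformed JSON-LD is itself a finding worth logging
        items = doc if isinstance(doc, list) else [doc]
        types.extend(item.get("@type", "?") for item in items if isinstance(item, dict))
    return types
```

A product page whose raw response yields no Product type here either lacks schema markup or injects it client-side — both outcomes feed directly into the rendering finding above.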
Scores Affected by Rendering Issue
All page-level scores above were derived from automated analysis that encountered the Shoplift A/B testing script on every page. Content depth, passage extractability, and schema coverage scores may underestimate the actual content quality if the script is preventing access to rendered HTML. Once engineering resolves the A/B testing script behavior, these scores should be re-evaluated against the actual server-rendered content.
The full audit will measure citation visibility across 150–200 queries in the organic cold brew coffee space, including queries like "best cold brew delivery for offices," "organic cold brew subscription box," "cold brew with no sugar for the office," and "Wandering Bear vs. RISE Brewing." You'll see exactly which queries return results that include your competitors but not Wandering Bear — and what it would take to appear in them. Resolving the Shoplift A/B testing script behavior and refreshing the stale commercial pages before the audit runs will improve the baseline the audit measures against.
45–60 minutes to walk through this document. We confirm channel revenue weighting, validate persona roles, verify competitor tiers, and lock in feature strengths. Every correction sharpens the query set.
150–200 queries built from validated personas, competitors, features, and pain points. Executed across ChatGPT and Perplexity to measure where Wandering Bear appears — and where it doesn't.
Complete visibility analysis with competitive positioning, content prioritization by actual citation impact, and a three-layer action plan: technical fixes, content creation priorities, and strategic positioning recommendations.
Start Now — Engineering Tasks
These don't depend on the validation call and will improve your baseline visibility before we even measure it: (1) Investigate the Shoplift A/B testing script rendering — test 3–5 representative pages with JavaScript disabled in Chrome DevTools; if no visible content renders, work with Shoplift support to ensure the script loads asynchronously without replacing server-rendered HTML. (2) Clean the sitemap — add noindex tags to the ~40 utility, test, and promotional funnel pages in sitemap_pages_1.xml and remove them from the sitemap. (3) Verify schema markup and meta tags — use Google's Rich Results Test on product pages and view-source on key static pages to confirm Product schema, Article schema, meta descriptions, and OG tags are present.
Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.