Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
Section 1 explains why 15Five's visibility collapses at the discovery stage — before buyers know what they need — while spiking at shortlisting, and identifies the three structural factors holding this pattern in place.
[Mechanism] Three compounding gaps produce the early-funnel invisibility pattern. First, 15Five's XML sitemap contains only 19 blog URLs with all commercial product and solution pages excluded, so AI crawlers cannot reliably index the content that would answer discovery-stage queries about performance management, manager coaching, or engagement solutions — the pages exist but are structurally invisible to AI systems. Second, competitor comparison page URLs — the highest-intent entry points for buyers evaluating alternatives — redirect to a generic brand page with no competitor-specific content, eliminating 15Five from the solution-exploration and comparison conversations those URLs should serve. Third, 15Five lacks content for 58 buying queries entirely, with the gap concentrated in problem identification, solution exploration, and requirements building — the exact stages that drive the 69.0% early-funnel invisibility rate (metrics.funnel_metrics.early_funnel_invisibility_rate). The late-funnel visibility spike to 81% at shortlisting is not a content success; it is evidence that brand recognition built before AI mediated this category is carrying the load — a load that will erode as AI increasingly mediates the pre-shortlist journey.
[Synthesis] L1 technical fixes must execute before L2 or L3 because the sitemap fix determines whether any new or optimized content will be indexed by AI systems — publishing 74 updated pages or 58 new pages into a site whose commercial content is excluded from the sitemap leaves all of it invisible to AI crawlers for the same structural reason today's product pages are invisible. The comparison page redirect fix (L1) must also precede any L2 positioning work on comparison content, because no content improvement on a page that immediately redirects elsewhere can improve AI response quality.
Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] 15Five appears in 46% of buyer queries and wins 26.1% of those — a roughly 20pp gap between visibility and wins, which is the primary conversion challenge. High-intent queries run higher, at 59.0% visibility.
15Five's 46% overall visibility rate masks a severe funnel imbalance: 8% at problem identification (lowest of all buying jobs) versus 81% at shortlisting means the entire discovery-stage journey happens without 15Five present to shape buyer thinking.
Combined visibility and platform delta:

| Dimension | Combined | Platform Delta |
|---|---|---|
| All Queries | 46% | Perplexity +12pp |
| **By Persona** | | |
| Chief Financial Officer | 52.2% | Perplexity +13pp |
| Chief People Officer | 38.2% | Perplexity +12pp |
| Director of HR Technology & People Analytics | 46.9% | Perplexity +6pp |
| VP of People Operations | 46.9% | Even |
| VP of Talent Management | 48.3% | Perplexity +33pp |
| **By Buying Job** | | |
| Artifact Creation | 33.3% | Perplexity +20pp |
| Comparison | 47.1% | Even |
| Consensus Creation | 23.1% | Perplexity +23pp |
| Problem Identification | 8.3% | Perplexity +8pp |
| Requirements Building | 40.0% | Perplexity +20pp |
| Shortlisting | 80.8% | Perplexity +23pp |
| Solution Exploration | 40.0% | Perplexity +7pp |
| Validation | 52.2% | Perplexity +13pp |
Per-platform breakdown:

| Dimension | ChatGPT | Perplexity |
|---|---|---|
| All Queries | 28.7% | 40.9% |
| **By Persona** | | |
| Chief Financial Officer | 34.8% | 47.8% |
| Chief People Officer | 26.5% | 38.2% |
| Director of HR Technology & People Analytics | 31.2% | 37.5% |
| VP of People Operations | 37.5% | 37.5% |
| VP of Talent Management | 13.8% | 46.4% |
| **By Buying Job** | | |
| Artifact Creation | 16.7% | 36.4% |
| Comparison | 41.2% | 38.2% |
| Consensus Creation | 0.0% | 23.1% |
| Problem Identification | 0.0% | 8.3% |
| Requirements Building | 13.3% | 33.3% |
| Shortlisting | 53.8% | 76.9% |
| Solution Exploration | 20.0% | 26.7% |
| Validation | 34.8% | 47.8% |
[Data] Overall: 46% visibility (69/150 queries). High-intent: 59% visible (49/83), 35% win rate among visible queries (17/49), 24pp vis-to-win gap. Shortlisting: 81% (21/26) — among the highest of all buying jobs (metrics.visibility.by_buying_job[shortlisting].rate = 0.8077). Problem identification: 8% (1/12) — lowest of all buying jobs (metrics.visibility.by_buying_job[problem_identification].rate = 0.0833). Early-funnel invisibility: 69.0% across 42 queries. ChatGPT runs 12pp below Perplexity (metrics.visibility.platform_delta.value_pp = 12).

[Synthesis] The 46% overall rate understates the funnel imbalance: 15Five over-indexes at shortlisting (81%) while being nearly absent at problem identification (8% — the lowest of all buying jobs), the stage where buyers first articulate their pain and begin forming vendor associations. This back-loading means 15Five enters most buyer journeys as a name on a comparison list rather than as the vendor that helped the buyer understand their problem — a dependency on prior brand awareness that becomes more fragile as AI mediates more of the pre-shortlist journey. The 12pp platform gap between ChatGPT and Perplexity compounds the risk: gains on one platform are not transferring to the other, so optimization must address both channels.
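For readers who want to recompute these rates from the query-level export referenced above, a minimal sketch — the column names (`high_intent`, `visible`, `won`, `buying_job`) are hypothetical placeholders for whatever the actual export schema uses:

```python
import csv
from collections import defaultdict

def funnel_metrics(path: str) -> dict:
    # Column names here are hypothetical -- map them to the real export schema.
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    visible = [r for r in rows if r["visible"] == "1"]
    hi = [r for r in rows if r["high_intent"] == "1"]
    hi_vis = [r for r in hi if r["visible"] == "1"]
    by_job = defaultdict(lambda: [0, 0])  # buying_job -> [visible_count, total]
    for r in rows:
        by_job[r["buying_job"]][1] += 1
        by_job[r["buying_job"]][0] += int(r["visible"])
    return {
        "overall_visibility": len(visible) / len(rows),   # 69/150 = 0.46
        "win_rate_of_visible": sum(int(r["won"]) for r in visible) / len(visible),  # 18/69 = 0.261
        "high_intent_visibility": len(hi_vis) / len(hi),  # 49/83 = 0.59
        "high_intent_win_rate": sum(int(r["won"]) for r in hi_vis) / len(hi_vis),   # 17/49 = 0.35
        "visibility_by_buying_job": {j: v / t for j, (v, t) in by_job.items()},
    }
```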
21 queries won by named competitors · 7 no clear winner · 53 no vendor mentioned
Sorted by competitive damage — competitor-winning queries first.
| ID | Query | Persona | Stage | Winner |
|---|---|---|---|---|
| **⚑ Competitor Wins — 21 queries where a named competitor captures the buyer** | | | | |
| 15f_052 | "switching from annual engagement surveys to a platform with real-time pulse and stronger benchmarking for predicting turnover" | vp_people_ops | Shortlisting | PerformYard |
| 15f_056 | "Top people analytics platforms with AI-powered flight risk detection for mid-market companies" | hr_technology_director | Shortlisting | Lattice |
| 15f_072 | "How does Leapsome's manager development compare to platforms with dedicated AI coaching features?" | chro | Comparison | Leapsome |
| 15f_079 | "How does Culture Amp's analytics compare to platforms with AI-powered people analytics for workforce insights?" | hr_technology_director | Comparison | Culture Amp |
| 15f_080 | "Lattice vs Culture Amp — which has more flexible performance review workflows for complex org structures?" | hr_technology_director | Comparison | Culture Amp |
| 15f_088 | "We're replacing our current engagement tool — Culture Amp vs Lattice, which is better for mid-market retention strategies?" | chro | Comparison | Culture Amp |
| 15f_089 | "Lattice vs Leapsome for manager coaching and development features at a mid-market company" | vp_people_ops | Comparison | Lattice |
| 15f_090 | "Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?" | vp_people_ops | Comparison | Leapsome |
| 15f_091 | "Betterworks vs Lattice analytics — switching from a platform with limited reporting, which has stronger people insights?" | hr_technology_director | Comparison | Lattice |
| 15f_092 | "Culture Amp vs Workleap for engagement surveys — analytics depth vs. simplicity for smaller HR teams" | hr_technology_director | Comparison | Workleap |
Remaining competitor wins: Lattice ×4, Leapsome ×3, Culture Amp ×3, Workleap ×1. 7 queries with no clear winner. 53 queries with no vendor mentioned. Full query-level data available in the analysis export.
Queries where 15Five appears but is positioned less favorably than the winning vendor — or is mentioned without being recommended.
| ID | Query | Persona | Buying Job | Winner | 15Five Position |
|---|---|---|---|---|---|
| 15f_011 | "How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?" | vp_talent | Problem ID | No Vendor Mentioned | Brief Mention |
| 15f_016 | "We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?" | vp_people_ops | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_019 | "How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_021 | "Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_022 | "We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?" | hr_technology_director | Solution Exp. | Culture Amp | Mentioned In List |
| 15f_024 | "Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?" | cfo | Solution Exp. | No Vendor Mentioned | Mentioned In List |
| 15f_026 | "What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?" | vp_talent | Solution Exp. | No Vendor Mentioned | Mentioned In List |
| 15f_031 | "Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention |
| 15f_033 | "We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention |
| 15f_034 | "Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support" | hr_technology_director | Req. Building | No Vendor Mentioned | Mentioned In List |
| 15f_037 | "We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?" | hr_technology_director | Req. Building | No Vendor Mentioned | Brief Mention |
| 15f_039 | "Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value" | cfo | Req. Building | No Vendor Mentioned | Brief Mention |
| 15f_042 | "We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?" | vp_talent | Req. Building | No Vendor Mentioned | Mentioned In List |
| 15f_044 | "Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?" | chro | Shortlisting | Culture Amp | Mentioned In List |
| 15f_045 | "We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company" | chro | Shortlisting | Betterworks | Mentioned In List |
| 15f_048 | "Best compensation management tools for mid-market companies trying to connect pay to performance data" | chro | Shortlisting | No Vendor Mentioned | Mentioned In List |
| 15f_049 | "Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company" | vp_people_ops | Shortlisting | Lattice | Strong 2nd |
| 15f_050 | "alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover" | vp_people_ops | Shortlisting | Lattice | Mentioned In List |
| 15f_054 | "performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly" | vp_people_ops | Shortlisting | Lattice | Mentioned In List |
| 15f_055 | "Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware" | hr_technology_director | Shortlisting | Lattice | Mentioned In List |
| 15f_057 | "looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles" | hr_technology_director | Shortlisting | Lattice | Mentioned In List |
| 15f_058 | "replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team" | hr_technology_director | Shortlisting | Culture Amp | Mentioned In List |
| 15f_062 | "OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments" | cfo | Shortlisting | No Vendor Mentioned | Strong 2nd |
| 15f_065 | "Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform" | vp_talent | Shortlisting | No Clear Winner | Mentioned In List |
| 15f_066 | "Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?" | vp_talent | Shortlisting | Lattice | Mentioned In List |
| 15f_067 | "Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory" | vp_talent | Shortlisting | Culture Amp | Mentioned In List |
| 15f_070 | "We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?" | chro | Comparison | Lattice | Strong 2nd |
| 15f_074 | "How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?" | vp_people_ops | Comparison | Culture Amp | Strong 2nd |
| 15f_075 | "Switching from our current review tool — how does Lattice compare for making performance reviews less painful?" | vp_people_ops | Comparison | Lattice | Strong 2nd |
| 15f_076 | "How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?" | vp_people_ops | Comparison | Workleap | Mentioned In List |
| 15f_077 | "We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?" | vp_people_ops | Comparison | Culture Amp | Mentioned In List |
| 15f_078 | "How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?" | hr_technology_director | Comparison | Lattice | Mentioned In List |
| 15f_082 | "We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?" | cfo | Comparison | Lattice | Brief Mention |
| 15f_084 | "How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?" | cfo | Comparison | Betterworks | Strong 2nd |
| 15f_085 | "How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?" | vp_talent | Comparison | Lattice | Strong 2nd |
| 15f_086 | "How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?" | vp_talent | Comparison | Leapsome | Mentioned In List |
| 15f_087 | "How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?" | vp_talent | Comparison | Workleap | Strong 2nd |
| 15f_103 | "Lattice implementation problems when migrating from another performance management tool at a mid-market company" | chro | Validation | No Vendor Mentioned | Brief Mention |
| 15f_106 | "We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?" | vp_people_ops | Validation | No Clear Winner | Brief Mention |
| 15f_109 | "Betterworks analytics and reporting limitations — what can't it do that other platforms handle?" | hr_technology_director | Validation | No Clear Winner | Brief Mention |
| 15f_111 | "Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?" | cfo | Validation | No Clear Winner | Mentioned In List |
| 15f_113 | "Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?" | cfo | Validation | No Clear Winner | Brief Mention |
| 15f_114 | "Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?" | vp_talent | Validation | No Clear Winner | Mentioned In List |
| 15f_119 | "15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?" | vp_talent | Validation | No Clear Winner | Primary Recommendation |
| 15f_121 | "Biggest risks of switching to continuous performance management from annual reviews at a mid-market company" | hr_technology_director | Validation | No Vendor Mentioned | Mentioned In List |
| 15f_127 | "Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management" | chro | Consensus | Lattice | Mentioned In List |
| 15f_137 | "Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management" | vp_talent | Consensus | No Vendor Mentioned | Mentioned In List |
| 15f_140 | "Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture" | hr_technology_director | Artifact | Lattice | Strong 2nd |
| 15f_141 | "Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics" | vp_people_ops | Artifact | No Vendor Mentioned | Mentioned In List |
| 15f_147 | "Create a comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome" | chro | Artifact | No Clear Winner | Mentioned In List |
| 15f_149 | "Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact" | vp_talent | Artifact | No Vendor Mentioned | Mentioned In List |
Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.
[TL;DR] 15Five ranks #3 in Share of Voice with a 30W–28L head-to-head record across 9 competitors.
15Five's #3 SOV rank is stable but not earned — Culture Amp and Betterworks are winning head-to-head matchups in queries 15Five should contest, and 53 of 81 invisible queries have no AI winner at all, making the unclaimed early-funnel the largest single competitive opportunity.
| Company | Mentions | Share |
|---|---|---|
| Lattice | 90 | 21.3% |
| Culture Amp | 73 | 17.3% |
| 15Five | 69 | 16.4% |
| Leapsome | 50 | 11.8% |
| Betterworks | 41 | 9.7% |
| Quantum Workplace | 30 | 7.1% |
| PerformYard | 28 | 6.6% |
| Workleap | 24 | 5.7% |
| Engagedly | 15 | 3.5% |
| Reflektive | 2 | 0.5% |
The head-to-head record counts only queries where both brands appear. A win means 15Five was the primary recommendation (by majority vote across platforms); a loss means the competitor was; a tie means neither brand — or a third party — was.
For the 81 queries where 15Five is completely absent: 21 are won by named competitors, 7 have no clear winner, and 53 surface no vendor at all.
Vendors that appear in responses but are not part of 15Five’s defined competitive set.
[Synthesis] 15Five's #3 SOV rank reflects inherited brand recognition, not content dominance — Culture Amp and Betterworks are winning head-to-head matchups in queries where 15Five should compete, while 15Five leads Leapsome (6-2) where its content is more directly comparative. More importantly, 53 of 81 invisible queries have no AI winner at all: the largest opportunity is not displacing a specific competitor but becoming the first authoritative voice on early-funnel questions that no vendor currently answers. Creating that discovery-stage content is the mechanism for converting 15Five's #3 SOV position into one that earns consideration before buyers have formed competitor preferences.
What AI reads and trusts in this category.
[TL;DR] 15Five had 71 unique pages cited across buyer queries, ranking #3 among all cited domains. 10 high-authority domains cite competitors but not 15Five.
71 unique pages cited ranks 15Five #3 by domain, but the citation mix skews toward the homepage and support docs rather than buyer-facing capability pages — fixing the sitemap and comparison redirects will shift citation composition toward higher-converting content before new pages are needed.
Non-competitor domains citing other vendors but not 15Five — off-domain authority opportunities.
[Synthesis] 15Five's 71 unique cited pages rank #3 by domain, but the citation mix reveals a content-type problem: the homepage (16 citation instances) and support documentation dominate over buyer-facing capability and comparison pages, signaling that AI systems treat 15Five as a brand reference rather than an authoritative source on specific buyer questions. This pattern is structurally consistent with the sitemap finding: when commercial product pages are excluded from crawler scope, AI systems default to whatever is reachable — the homepage and help center. Fixing the sitemap and comparison page redirects (L1) will shift citation composition toward higher-converting pages before new content is required.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 132 total gaps: 81 invisibility + 51 positioning. Of the 138 actions, 6 are L1 technical fixes, 74 gaps can be addressed by optimizing existing content (L2), and 58 require new content creation (L3).
138 actions close 132 gaps, but execution sequence matters more than volume: the 6 L1 technical fixes must run first because they determine whether the 74 L2 optimizations and 58 L3 new pages will be indexed and cited by AI systems at all.
Reading the priority numbers: Items are ranked 1–138 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows #1, #2, then #13) mean the higher-priority items in between belong to a different layer.
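The report does not publish the scoring weights behind the 1–138 ranking, but the mechanics are easy to picture; a purely hypothetical sketch of an impact × speed composite rank:

```python
# Hypothetical illustration only -- the report does not publish its scoring
# weights. One way a single 1-138 rank across all layers could be produced:
IMPACT = {"high": 3, "medium": 2, "low": 1}                        # commercial impact
SPEED = {"1-3 days": 4, "1-2 weeks": 3, "weeks": 2, "months": 1}   # implementation speed

def rank_actions(actions):
    """actions: list of dicts with 'layer', 'impact', and 'timeline' keys."""
    ranked = sorted(actions,
                    key=lambda a: IMPACT[a["impact"]] * SPEED[a["timeline"]],
                    reverse=True)
    for rank, action in enumerate(ranked, start=1):
        action["priority"] = rank  # global rank; each layer table shows only its own items
    return ranked
```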
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #1 | No Date Signals on Any Product or Solution Page | Medium | 1-3 days |
| #2 | XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent | Medium | 1-3 days |
| #13 | Case Study Page Returns Minimal Body Content — Verify Gating or CSR | Medium | 1-3 days |
| #14 | Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content | Medium | 1-2 weeks |
| #18 | Meta Descriptions and OG Tags: Manual Verification Required | Low | 1-3 days |
| #19 | Schema Markup: Manual Verification Required | Low | 1-3 days |
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
Three gaps in pricing and ROI content:

- The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it.
- The /customer-stories/ page has case studies with outcome data but formats them as narrative blog posts rather than extractable ROI metrics — the Pendo (21% turnover reduction) and Auror (94% retention) outcomes are buried in story prose rather than surfaced as structured, AI-extractable claims.
- The routing of 15f_009 and 15f_039 to /pricing reveals the absence of any dedicated business-case or ROI content on the site — the CFO's question 'how much does poor performance management cost?' has no home anywhere in 15Five's content inventory.
Queries affected: 15f_009, 15f_039, 15f_111, 15f_113, 15f_134, 15f_135
Three gaps on the /products/kona page:

- It describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data.
- It contains no explanation of *how* the AI coaching works — there is no methodology section covering what data Kona uses, how it generates coaching recommendations, and what differentiates it from generic AI prompting — making it non-citable for 'how do AI coaching tools work?' queries (15f_015, 15f_025).
- It does not address the 'AI coaching vs. external coaching programs vs. training platforms' comparison framing that appears in 4 queries (15f_015, 15f_025, 15f_067, 15f_138) — buyers evaluating manager development approaches need this comparison to justify AI coaching selection.
Queries affected: 15f_003, 15f_005, 15f_015, 15f_025, 15f_032, 15f_046, 15f_067, 15f_107, 15f_110, 15f_138, 15f_144
Three gaps on the /products/perform page:

- It has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide.
- It lacks customer outcome evidence tied specifically to the performance review feature — the Auror and Pendo case study data exists on blog posts but is not integrated into the product page narrative, making the page non-citable for 'does continuous PM actually produce better outcomes?' queries.
- It does not include a structured 'Continuous vs. Annual Reviews: Key Structural Differences' comparison that AI systems can extract for the educational solution-exploration queries (15f_013, 15f_024) where no vendor is recommended but a structural comparison would surface 15Five as the page host.
Queries affected: 15f_004, 15f_013, 15f_016, 15f_024, 15f_030, 15f_031, 15f_040, 15f_042, 15f_049, 15f_057, 15f_103, 15f_105, 15f_124, 15f_127, 15f_128, 15f_137, 15f_141, 15f_150
Three gaps on the /products/perform/compensation/ page:

- It does not include pay equity compliance specifics — queries 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented.
- It lacks a buyer evaluation checklist or evaluation criteria framework — queries 15f_038 (requirements building) and 15f_048 (shortlisting) need a page that helps buyers evaluate compensation management tools, not just a feature description.
- It does not describe the performance-rating-to-compensation data flow — the defining value proposition ('connect pay decisions to performance data without spreadsheets') is stated but not illustrated with a step-by-step process that AI systems can extract as a citable workflow.
Queries affected: 15f_010, 15f_027, 15f_038, 15f_048, 15f_112, 15f_125, 15f_129, 15f_146
Three gaps on the /products/engage page:

- It presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page.
- It lacks an outcome evidence block — it claims engagement improvements but provides no quantified customer results (response rate improvements, action-plan completion rates, turnover reduction data) that AI systems can extract as citable claims.
- It does not address the pulse-vs-annual survey tradeoff that appears in 3 queries (15f_017, 15f_041, 15f_114) — Culture Amp's comparable page wins these queries by including explicit 'when to use pulse vs. annual' guidance.
Queries affected: 15f_001, 15f_006, 15f_017, 15f_022, 15f_028, 15f_041, 15f_052, 15f_058, 15f_066, 15f_104, 15f_114, 15f_121, 15f_143
Three gaps on the /solutions/reduce-regrettable-turnover page:

- It makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains *which* features drive *which* retention outcomes, not just a claim that 15Five reduces turnover.
- It has insufficient customer outcome density — Lattice's equivalent page (winner on 15f_050) includes 5+ named company outcomes with specific retention percentages; 15Five's page references Auror and Pendo outcomes but does not present them at a structured, scannable density that AI systems can extract as a recommendation signal.
- It lacks a buyer evaluation resource — RFP-creation query 15f_139 ('Draft an RFP for a continuous performance management platform') routes to this page but finds no RFP template, evaluation criteria, or downloadable reference content.
Queries affected: 15f_026, 15f_050, 15f_139
Three gaps in the /integrations directory at https://www.15five.com/integrations:

- It lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented.
- It has no customer integration success stories — shortlisting queries 15f_054 and 15f_055 (both winner=lattice) require evidence that integrations work reliably at scale with named HRIS platforms, not just confirmation that integrations exist.
- It lacks any comparison framing against competitor integration ecosystems — query 15f_140 ('Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp — integration capabilities and data architecture', winner=lattice) loses because Lattice has comparison-ready integration documentation that 15Five's directory cannot provide.
Queries affected: 15f_007, 15f_019, 15f_034, 15f_108, 15f_132, 15f_140
Three gaps in the /blog/check-ins-and-1-on-1s/ page:

- It explains how to run check-ins but lacks outcome evidence — queries 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide.
- Its structure is optimized for human reading, not AI extraction — headings describe rather than answer ('How to run effective check-ins' instead of 'What are the most important capabilities in a continuous feedback tool?'), reducing the probability of passage extraction for requirements-building queries (15f_033).
- Recognition and feedback has no dedicated product landing page — this blog post is the primary coverage for all 6 queries in this cluster, but a blog post format cannot compete with Workleap's dedicated recognition product page that includes feature comparisons, adoption data, and customer outcome statistics.
Queries affected: 15f_014, 15f_033, 15f_068, 15f_123, 15f_133, 15f_149
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
People analytics is the decisive capability for the CHRO and CFO during shortlisting — it answers the board question of whether HR investment produces measurable outcomes. Across 15 queries covering problem identification through artifact creation, 15Five returns no usable content on analytics, flight-risk prediction, or workforce intelligence. Competitors including Lattice (winner on 15f_056) and Culture Amp (winner on 15f_079 and 15f_101) are filling this gap. Because the CHRO and CFO are the two highest-influence decision-maker personas in this audit, absence at the analytics layer means 15Five is structurally excluded from the frame competitors are building around data-driven HR. Building a dedicated hub converts existing AMAYA product capability into visible authority across all five personas.
- ChatGPT (high): ChatGPT's vendor recommendation queries (15f_047, 15f_056) returned competitor names exclusively when 15Five had no analytics content to cite. Authoritative first-party capability pages with specific claim-level data (e.g., 'predicts flight risk using 6 behavioral signals') are the content type ChatGPT extracts for recommendation answers.
- Perplexity (high): Perplexity returned no 15Five citations on comparison queries 15f_079 and 15f_091 where Culture Amp and Lattice appeared. Perplexity's passage-extraction model rewards self-contained analytical paragraphs with data points — dedicated hub pages with structured headings (How It Works, Data Inputs, Accuracy Benchmarks) are directly suited to Perplexity citation.
Comparison is a high-intent buying job (is_high_intent=true), and 15Five is absent from 26 queries in this stage — the largest single gap cluster in the L3 inventory. These queries include both direct '15Five vs. Competitor' searches and 'Competitor A vs. Competitor B' searches where 15Five should be inserting itself as the superior mid-market alternative. The only competitive differentiation content on the site is a single December 2025 blog post covering Lattice; Culture Amp and Betterworks have no dedicated comparison content. Because 15Five ties Lattice 8-8 head-to-head where both appear (per metrics.competitive.head_to_head), the problem is not performance once present — it is absence from the comparison stage entirely. Fixing this gap converts a strong late-funnel shortlisting position (80.77% visibility) into comparison-stage presence, producing the largest expected lift of any single NIO.
- ChatGPT (high): ChatGPT returned competitor-only responses across all 26 comparison queries. ChatGPT preferentially cites structured comparison pages with explicit claim-level differentiation ('Feature X: 15Five supports Y, Lattice requires Z'). The existing blog post format is treated as opinion content, not factual comparison — dedicated landing pages with feature matrices are required.
- Perplexity (high): Perplexity citations in this cluster went exclusively to competitor pages and G2 category pages. Perplexity's citation model rewards self-contained comparison tables and bullet-format differentiators that can be extracted as direct answers — exactly the format that dedicated comparison pages with feature matrices provide.
Goal misalignment is the pain point associated with OKR, and it represents a commercially actionable problem — organizations where 'nobody below VP level can explain their goals' are actively seeking solutions. 15Five has OKR tracking functionality but zero visibility across 9 OKR-related queries, allowing competitors like Betterworks (native OKR tool) and Leapsome to own category framing. Two shortlisting queries (15f_062, 15f_065) returned no_clear_winner, indicating the OKR market lacks a dominant AI-cited vendor — 15Five has a first-mover content opportunity in a segment where competitors have not yet established citation dominance.
- ChatGPT (medium): OKR shortlisting queries returned no_clear_winner on 15f_065 — ChatGPT lacks a dominant source to cite, which is an opportunity to become the cited authority. ChatGPT needs a clearly structured capabilities page with specific claims about cascading depth, adoption mechanics, and manager accountability features.
- Perplexity (high): Perplexity's recency-weighted model rewards recently published, structured content. A new OKR hub page added to the sitemap with accurate lastmod timestamps would immediately compete for OKR-related queries where no dominant source currently exists — Perplexity's recency advantage is highest in underserved topic areas.
Talent calibration is the structural link between performance data and retention decisions — the capability that lets HR leadership identify high performers before they resign. 15Five has 9-box and talent review functionality but is absent from all 7 calibration queries. The validation query 15f_119 ('15Five talent management and performance calibration — how does it compare') returned no_clear_winner — a buyer who has already found 15Five cannot find sufficient calibration evidence to form a view. Lattice wins 15f_085, confirming that calibration is an active competitive battleground. Creating a calibration hub is a direct product-capability-to-content conversion with no product investment required.
- ChatGPT (high): 15f_119 ('15Five talent management and performance calibration — how does it compare') returned no_clear_winner — ChatGPT could not find sufficient 15Five-specific calibration content to form a recommendation. A structured product page with specific feature claims (e.g., rating scale types, bias flag categories, audit trail depth) gives ChatGPT the factual content it needs to cite 15Five on direct competitor queries.
- Perplexity (medium): Calibration validation and requirements queries both returned no_clear_winner on Perplexity. Perplexity favors pages structured as Q&A passages — a calibration page formatted as 'What is talent calibration? How does it reduce flight risk? What should I look for in a calibration tool?' maps directly to Perplexity's answer-extraction format.
The CFO's primary concern is not features but financial justification — total cost over a multi-year investment cycle including licensing, implementation, training, and change management. While only one L3 query routes directly to this NIO, the CFO persona also drives multiple L2 ROI queries (15f_009, 15f_039, 15f_134) that share the same underlying need. A TCO template or financial model published on-domain would serve CFO consensus-creation queries across the full buying cycle and provide a defensible anchor for board-level ROI conversations — an asset type that no competitor currently has, creating first-mover citation authority.
- ChatGPT (high): 15f_142 returned no_vendor_mentioned — ChatGPT produced a generic TCO model without citing any vendor. A published 15Five TCO template would likely be cited as a primary source for this query type, as ChatGPT tends to attribute vendor-published financial frameworks when artifact-creation queries seek structured models.
- Perplexity (medium): Perplexity's recency weighting and source-citation model rewards recently published, structured financial content with clear table formats. A TCO page structured as a table (Cost Category | Year 1 | Year 2 | Year 3 | Notes) is directly extractable as a Perplexity answer for TCO modeling queries.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).
The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.
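This finding is cheap to re-verify as the sitemap changes; a minimal stdlib sketch that fetches the sitemap and flags which of the commercial paths named above are present:

```python
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://www.15five.com/sitemap.xml"
# Core commercial paths named in this finding:
COMMERCIAL_PATHS = [
    "/products", "/products/perform", "/products/engage", "/products/kona",
    "/products/perform/compensation", "/pricing", "/integrations",
    "/solutions/reduce-regrettable-turnover", "/why-15five",
]

with urllib.request.urlopen(SITEMAP) as resp:
    tree = ET.parse(resp)
ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
locs = {el.text.strip() for el in tree.iter(f"{ns}loc")}

print(f"{len(locs)} URLs in sitemap")  # the audit found 19, all blog/resource pages
for path in COMMERCIAL_PATHS:
    hit = any(loc.rstrip("/").endswith(path) for loc in locs)
    print(("OK  " if hit else "MISS"), path)
```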
26 comparison and shortlisting queries were routed to L3 because the AFFINITY OVERRIDE rule found that 15Five's existing content uses the wrong page types (blog posts, product pages, and integration catalogs) for comparison and shortlisting buying jobs, which require dedicated comparison pages and case-study landing pages. An existing L1 finding (comparison_urls_redirect_to_generic_page) confirms three indexed comparison URLs redirect to a generic brand page with zero competitor-specific content.
15Five has no substantive people analytics content anywhere on the site — coverage was assessed as thin for all 15 queries in this cluster. The gap spans every buying stage from problem identification through artifact creation, meaning buyers evaluating analytics and flight-risk capabilities never encounter 15Five during the research process.
The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it.
9 queries targeting OKR and goal cascading capabilities were routed to L3 because the content inventory assessed coverage as thin across all OKR-focused queries. The gap spans problem identification through artifact creation, covering the VP of Talent's goal alignment pain point (goal_misalignment) and the CFO's concern about departmental goal adoption cost-effectiveness.
7 queries targeting talent calibration, 9-box assessment, and high-potential identification were routed to L3 because content inventory assessed talent calibration coverage as thin across the site. The gap spans problem identification through artifact creation, with the VP of Talent as the primary persona and top_talent_flight_risk as the central pain point.
The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data.
The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide.
The /products/perform/compensation/ page does not include pay equity compliance specifics — query 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented.
The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page.
The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains WHICH features drive WHICH retention outcomes, not just a claim that 15Five reduces turnover.
The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.
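A rough way to reproduce the thin-content measurement — a naive tag strip is enough to expose the gap between this page and a normally rendered customer story (the User-Agent header is an incidental choice, not part of the audit method):

```python
import re
import urllib.request

def visible_word_count(url: str) -> int:
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", "ignore")
    # Drop scripts, styles, nav, and footer, then strip remaining tags.
    html = re.sub(r"(?is)<(script|style|nav|footer)\b.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", html)
    return len(text.split())

gated = ("https://www.15five.com/resources/case-studies/"
         "how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20")
print(visible_word_count(gated))  # a very low count supports the gating/CSR hypothesis
```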
Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.
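The redirect behavior is easy to confirm programmatically; a minimal stdlib sketch that requests each comparison URL and reports where it lands:

```python
import urllib.request

COMPARISON_URLS = [
    "https://www.15five.com/15five-vs-lattice",
    "https://www.15five.com/15five-vs-cultureamp/",
    "https://www.15five.com/15five-vs-leapsome/",
]

for url in COMPARISON_URLS:
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        landing = resp.geturl()  # urllib follows redirects; geturl() is the final URL
    flag = "REDIRECTED" if landing.rstrip("/") != url.rstrip("/") else "OK"
    print(f"{flag}: {url} -> {landing}")
```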
One query (15f_142) was routed to L3 with coverage_status='missing' — no page exists anywhere on 15Five's site that addresses TCO modeling, 3-year cost projections, or financial modeling frameworks for HR software investment. This is the only query in the audit with complete coverage absence, indicating a missing content type for CFO-facing financial decision support.
The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented.
The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — query 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide.
Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.
This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.
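Both verification items can be closed by fetching raw HTML instead of rendered markdown; a minimal sketch (the two-page list is illustrative — the audit covered 30 pages):

```python
import json
import re
import urllib.request

PAGES = [  # illustrative subset of the 30 audited pages
    "https://www.15five.com/pricing",
    "https://www.15five.com/products/perform",
]

for url in PAGES:
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", "ignore")
    # Collect meta/OG attribute names and JSON-LD @type values from raw HTML.
    metas = re.findall(r'(?i)<meta[^>]+(?:name|property)="((?:og:)?[a-z:_-]+)"', html)
    ld_blocks = re.findall(r"(?is)<script[^>]+ld\+json[^>]*>(.*?)</script>", html)
    types = []
    for block in ld_blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            types.append("unparseable JSON-LD")
            continue
        for item in data if isinstance(data, list) else [data]:
            if isinstance(item, dict):
                types.append(item.get("@type", "?"))
    wanted = {"description", "og:title", "og:description", "og:image"}
    print(url)
    print("  meta/OG present:", sorted(wanted & set(metas)) or "none")
    print("  JSON-LD types:", types or "none")
```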
All three workstreams can start this week.
[Synthesis] The 138-item plan is sequenced by dependency, not commercial impact alone: L1 technical fixes — particularly the sitemap fix that restores commercial pages to crawler scope and the redirect fix that makes comparison pages functional — must execute first because they determine whether L2 and L3 improvements will be indexed by AI systems at all. L2 optimizations on 74 existing pages then close positioning gaps where 15Five is visible but loses; L3 new content builds the 58 pages 15Five entirely lacks for discovery-stage queries, targeting the early-funnel stages that drive the 69.0% invisibility rate.
Gap coverage note: 129 of 132 gap queries (98%) are assigned to an L2 or L3 action item. 3 gap queries remain unrouted — these may represent edge-case queries that don’t cluster neatly or fall below the LLM’s grouping threshold.