How I Got a SaaS Client Cited in 47 ChatGPT Answers in 90 Days (Full Teardown)
Month-by-month playbook of how an HR tech SaaS went from zero ChatGPT citations to being cited in 47 buyer-intent prompts. Real tactics, real timeline, no fluff.
In late 2025, an HR tech SaaS client came to me with a simple but uncomfortable question: 'Why are our competitors being recommended by ChatGPT and we're not?' We had a ranked Google presence (DR 52, 800 keywords in top 10), strong product, and a marketing team that knew what they were doing. But on every buyer-intent ChatGPT prompt — 'best HRIS for 50–200 person companies', 'HRIS comparison', 'how to choose HR software' — we were invisible. This is the month-by-month story of going from zero citations to 47 in 90 days, with everything we did and everything that didn't work.
How long did it take to get cited by ChatGPT?
First citations appeared in week 5 on long-tail definitional prompts. By week 8, the brand was cited in 12 prompts. By week 12 (end of the 90-day program), the brand was cited in 47 of the 600 tracked ChatGPT responses across the priority prompt set — a 7.8% citation share in a category dominated by 4 incumbent brands.
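As a sanity check on the headline number, citation share here is just cited responses divided by total tracked responses. A minimal sketch (the per-prompt run count is an assumption for illustration; the post only states 600 total responses across the priority set):

```python
# Citation share = cited responses / total tracked responses.
# The 15-runs-per-prompt figure is an assumption; the post only
# reports 600 total tracked responses and 40 priority prompts.
PRIORITY_PROMPTS = 40
RUNS_PER_PROMPT = 15          # assumed: 40 * 15 = 600 tracked responses
cited_responses = 47

total_responses = PRIORITY_PROMPTS * RUNS_PER_PROMPT
citation_share = cited_responses / total_responses
print(f"{citation_share:.1%}")  # → 7.8%
```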
Starting baseline (Week 0)
0 ChatGPT citations on 40 priority prompts. 0 Perplexity citations. 3 AI Overview appearances out of 200 tracked queries. Site had partial schema (Article on blog, Organization on home), no FAQPage anywhere, no Person schema for authors, sporadic dateModified updates. 4 named authors on the blog, none with sameAs links or external bylines.
Month 1: Clarify and Index foundations
Week 1: Built the prompt set. Pulled 200 questions from sales call transcripts, support tickets, and Reddit. Narrowed to 40 priority prompts across definitional, comparison, and how-to categories. Week 2: Audited top 30 site pages against my AEO checklist. Identified 18 pages that needed Index work and 6 that needed full rewrites. Week 3–4: Started Index pass. Added FAQPage schema to 12 pages, rewrote intros with 50-word quotable answer blocks, refreshed dateModified across the priority set.
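The FAQPage markup added during the Index pass is standard schema.org JSON-LD. A minimal sketch of one such block, built as a Python dict for clarity — the question and answer text are invented placeholders, not the client's actual page content:

```python
import json

# Minimal schema.org FAQPage JSON-LD. The Q&A text is a hypothetical
# placeholder, not the client's actual content.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is an HRIS?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Keep the answer near the ~50-word quotable-block
                # length the post recommends for page intros.
                "text": "An HRIS (Human Resources Information System) is "
                        "software that centralizes employee data, payroll, "
                        "benefits, and compliance workflows in one system.",
            },
        }
    ],
}

# Embed the output in the page head as <script type="application/ld+json">.
print(json.dumps(faq_page, indent=2))
```

Note this only pays off when the Q&A content is genuine — as the "What didn't work" section below shows, schema on thin pages produced zero lift.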
Month 2: Trust building
Week 5: First two ChatGPT citations appeared on long-tail definitional prompts. Used this as proof the structural work was paying off. Week 5–8: Trust track activated. Created Person schema for all 4 authors with sameAs to LinkedIn, secured 6 third-party industry bylines (CHRO Magazine, HR Dive, two niche blogs), claimed the Google Knowledge Panel, updated Crunchbase with current product positioning. Week 7: Created a Wikidata entry for the brand. Week 8: Citation count hit 12.
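The Person schema from the Trust track follows the same JSON-LD pattern, with `sameAs` links tying the author entity to profiles elsewhere on the web. A minimal sketch — the name, title, and URLs are hypothetical placeholders, not the client's real authors:

```python
import json

# Minimal schema.org Person JSON-LD with sameAs links, in the spirit of
# the author markup described above. All values are placeholders.
author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",                       # placeholder author name
    "jobTitle": "Head of People Operations",  # placeholder title
    "sameAs": [
        # Profiles that disambiguate the author as a real entity;
        # both URLs are invented examples.
        "https://www.linkedin.com/in/jane-doe-example",
        "https://example.com/authors/jane-doe",
    ],
}

print(json.dumps(author, indent=2))
```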
Month 3: Echo loop and competitive plays
Week 9: Echo loop went weekly. Identified 15 prompts where competitors were cited and we weren't. Reverse-engineered the cited pages and rebuilt our equivalent pages with stronger quotable blocks, more entities, and proprietary statistics. Week 10–11: Rolled out a benchmark report ('State of HRIS 2026') with original survey data. This single asset got picked up by 3 industry sites and generated 9 new ChatGPT citations within 2 weeks. Week 12: Final tally — 47 citations on 40 priority prompts, with 11 prompts now showing the brand as the lead citation.
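Mechanically, the week-9 gap check is a set difference: prompts where a competitor is cited minus prompts where the brand is. A minimal sketch with invented prompt data:

```python
# Weekly Echo-loop gap check: prompts where a competitor is cited
# but we are not. Prompt strings are invented examples.
our_cited = {"what is an hris", "hris implementation checklist"}
competitor_cited = {
    "what is an hris",
    "best hris for 50-200 person companies",
    "hris comparison",
}

# These are the pages to reverse-engineer and rebuild.
gap_prompts = sorted(competitor_cited - our_cited)
print(gap_prompts)
# → ['best hris for 50-200 person companies', 'hris comparison']
```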
What worked best
(1) The 50-word quotable answer block — every page that got citations had one. (2) Person schema with sameAs — citation count for author-bylined pages was 3× higher than unbylined. (3) The original benchmark report — single highest-leverage asset of the entire 90 days. (4) Weekly Echo loop — without it, we wouldn't have caught the competitive prompt gaps until month 4.
What didn't work
Mass FAQ generation. We added FAQ schema to 8 thin pages in week 3 and got zero lift — schema without genuine Q&A content does nothing. Generic 'we're great' brand mentions in third-party bylines didn't move citations either; substance and topic-relevance mattered far more than mention volume.
What this cost
Roughly 60 hours of consultant time over 90 days, plus internal content team effort for the rewrites and benchmark report. The benchmark survey itself cost ~$2,000 in panel respondents. Total program cost was in the mid-five-figures — and the pipeline impact (qualified demos sourced from AI surfaces) paid it back inside 6 months.
Honest caveats
The client started with strong fundamentals — a ranked Google presence, real product traction, and a marketing team that could execute fast. A brand starting from zero domain authority would not have hit 47 citations in 90 days. The CITE framework still works for them, but the timeline is 6–9 months, not 90 days.
What I'd do differently
Start the Trust track in week 1, not week 5. Entity authority has the longest lag time of any AEO lever — every week you delay it, you delay your head-term citation ceiling.
Frequently asked
Can you name the client?
Not in a public post — the case study is shared with their explicit permission as anonymized. Reach out for a private reference call.
What did you use to track citations?
Profound for ChatGPT/Perplexity/Gemini, plus manual weekly spot-checks on AI Overviews. ~3 hours per week of tracking time.
Did classic Google rankings improve too?
Yes — about a 12% increase in top-10 keywords over the same 90 days. The AEO work compounded with classic SEO.
Does this playbook work for B2C?
Yes, with adjustments. B2C prompts skew more emotional and less factual, so the quotable answer blocks need to lean into specificity and named comparisons rather than definitions.
How much ongoing work does it take to hold the citations?
Roughly 10 hours/month of Echo loop work plus 2 new content pieces per quarter to stay ahead of competitor catch-up.
What's the next milestone?
Doubling to 100 citations and entering the top-3 cited brands in their category by end of Q3 2026.
