For officeholders

Chatbots are describing your record to constituents, journalists, and donors.
You have no idea what they're saying.

ChatGPT, Google AI Overview, and Perplexity answer questions about your voting record, your policy positions, your committee work, and your public statements every day. The answers are sourced from Wikipedia, Ballotpedia, news archives, and third-party databases — not from you.

What chatbots find when someone asks about you

Seven chatbot platforms.
Answering without you.

Seven major chatbot platforms are regularly answering questions about officeholders and members of Congress. When a constituent, journalist, or donor asks about your record, each platform assembles an answer from Wikipedia, news archives, and sources you didn't choose. The answer is delivered confidently. You were not consulted.

7 chatbot platforms measured
Google AI Overview · ChatGPT · Perplexity · Meta AI · Microsoft Copilot · Grok · Google AI Mode

The kinds of questions people ask about you on chatbot platforms include:

voting record on [issue] · policy positions · committee roles and work · biography and background · endorsements received · controversies and criticisms · campaign finance and donors · legislative achievements · party affiliation and record · public statements and floor speeches

The sources chatbots tend to draw from when building these answers:

Wikipedia · Ballotpedia · News archives · Official .gov pages · Third-party vote trackers · Opposition PAC content · Academic and policy databases · Social media archives
The problem is not that chatbots exist — it's that these sources are often incomplete, outdated, or assembled by others. Your most recent work may not be indexed. Prior-cycle attack content may feature prominently. The chatbot answer is presented as fact. You were not consulted.
Federal officeholders face a compounding problem. Your campaign website and your official .gov site are both high-authority sources — and they don't always say the same thing. When chatbots draw from both and find contradictions, the answers they produce can be significantly worse than either source alone.
The record gap

What you know about your record.
What chatbots say about it.

There is a consistent gap between the record an officeholder maintains and the record chatbots retrieve and surface. The gap is not random — it follows predictable patterns that Kyanos documents across every AUDIT engagement.

What you know
How you actually voted and why
Your current policy positions in your own words
Your committee work and legislative achievements
Corrections to prior mischaracterizations
Your recent public statements and floor speeches
What chatbots often say
Voting record framed by opposition sources from prior cycles
Policy positions described in outdated or third-party language
Committee work absent from the sources chatbots draw from
Corrections that exist but don't register against the original characterization
Older statements cited over more recent positions

Three gap types appear most commonly in Kyanos AUDIT findings for officeholders:

Gap type 1: Voting record framed by opposition sources. A vote your office framed one way was framed differently by an opposing PAC — and the PAC's version has been cited more often in news coverage and indexed sources. Chatbots tend to reflect the more-cited version.
Gap type 2: Policy positions described in outdated language. Positions evolve. Official .gov pages are updated. But older descriptions — from third-party vote trackers, archived news articles, or prior-cycle profiles — persist in indexed sources and continue to surface as current.
Gap type 3: Achievements not present in the sources chatbots draw from. Committee work, legislative accomplishments, and constituent service outcomes often don't appear in the sources chatbots index. The work exists. The chatbot doesn't know about it.
How Kyanos works for officeholders

Four phases.
AUDIT · GROUND · REMEDY · DEFEND

Kyanos works in four structured phases — from initial measurement through ongoing monitoring. Nothing is built before the foundations are right. Every deliverable traces back to what the AUDIT found and what the GROUND brief established.

AUDIT
Measure what every major chatbot says about your record
We run the questions your constituents, journalists, and donors are actually asking — about voting record, policy positions, committee work, and biography — across all seven chatbot platforms. Findings are documented by platform, by topic, by source. You see exactly what is being said, verbatim, for the first time.
GROUND
Establish how you want your record represented
A structured facilitation — with your communications team — that establishes tone, style, and approach before any content is drafted. GROUND is not a writing session. It is the decision-making process: which issues to engage, which to hold, what voice to use, what to correct first. The output is a locked brief that sets the parameters for every piece of owned content written in REMEDY. Style and tone are established here. Copy is written later.
REMEDY
Build the information layer chatbots should be finding
Wikipedia edits. Structured data for your official pages. Press materials formatted for chatbot retrieval. Official statement pages that document positions in your own words. The work of making your record accurately represented — finding by finding.
DEFEND
Bi-weekly monitoring — catch drift before it compounds
We re-audit chatbot output across all seven platforms on a bi-weekly cadence. When something changes — a new attack piece gets indexed, a characterization shifts — we flag it before it compounds into a narrative problem. The ongoing monitoring that keeps the chatbot layer current.
Why this matters for officeholders specifically

The people who matter most
are using chatbots to learn about you.

Chatbots are not a niche research tool. For constituents, journalists, donors, and opposing campaigns, a chatbot search is often the first step — and sometimes the only one.

Constituents use chatbots to research their representatives before town halls, before calling the district office, and before deciding how to vote.
Journalists use chatbots to background-check before interviews, before filing, and when establishing context for a story. A chatbot characterization of your record shapes how a journalist approaches you before the conversation begins.
Donors research before writing checks. Major donors and bundlers — the ones whose support depends on alignment with your record — are using chatbots to verify positions and track your votes.
Opposing campaigns use chatbot output as opposition research source material. If a chatbot is mischaracterizing your record, that characterization can make its way into attack ads without anyone verifying the original source.
The chatbot answer is often the first answer. And it was written without you.
Get started

Find out what chatbots
are saying about you.

Your record is being described to constituents, journalists, and donors right now. See exactly what the platforms are saying — before someone else uses it.

Pick a time
Schedule a conversation

Choose a time that works. We'll walk through what AI is saying about your record and what the options are.

Send a message
We'll reach out to you

Tell us your name and office. We'll follow up directly.

We'll show you one real finding — verbatim chatbot output for your record, sourced and explained.
Request received.
We'll send you one real finding — the verbatim chatbot output for your record, sourced and explained.