What Is AI Reading? The Muck Rack AI Report 2025
Posted by Michael Brito | Jul 24, 2025 | Research

This post breaks down findings from Muck Rack’s AI report, “What is AI Reading?”, on how AI search engines decide which sources to trust and cite. The key takeaway is that earned media, not owned or paid content, drives AI visibility. Recency, source credibility, and question style all influence whether a brand appears in AI-generated answers. Different models like ChatGPT, Claude, and Gemini cite different sources, making it essential to monitor each one. For PR teams, this means shifting from splashy campaigns to a steady stream of high-authority coverage.
The real risk is staying invisible while competitors feed the models with stronger signals. AI search is rewriting the rules of visibility. Muck Rack’s What is AI Reading 2025 report spells out exactly how it works. It doesn’t just tell you which sources AI prefers. It shows how those sources shift based on the question, the topic, and the model. If you want your brand to show up in generative AI results, you need to stop thinking like an SEO.
Start thinking like a journalist. Turn citations on, and the AI response changes. Not just the source list. The entire answer. Muck Rack tested this across a million links and found that enabling citations doesn’t simply add proof. It rewires the response itself, which makes enabling citations a clear best practice.
AI relies heavily on its source material. If that material includes your coverage, your story gets baked into the response. If it doesn’t, your brand disappears.

July 23, 2025 06:45 ET | Source: Muck Rack

Miami, July 23, 2025 (GLOBE NEWSWIRE) -- In a groundbreaking study analyzing millions of AI-cited links from hundreds of thousands of prompts, Muck Rack, the leading provider of the award-winning PR software, has revealed... The research, titled “What is AI Reading?”, set out to answer a fundamental question: does media coverage materially affect what AI says?
The answer, unequivocally, is yes. “This study is an eye-opener for PR and communications teams aiming to understand how media mentions influence AI-generated content,” said Greg Galant, cofounder and CEO of Muck Rack. “Until now, we’ve had theories and early signals—but now we’ve got solid evidence that earned media directly influences AI-generated output. This changes the stakes for PR. The way businesses are represented by AI now ties directly to the media coverage they earn.”

Citations actively influence AI output: controlled prompt testing showed that when citations are enabled, outputs from LLMs (large language models such as OpenAI’s ChatGPT and Google Gemini) change meaningfully.
Earned media doesn’t just show up; it affects what is said. This proves that cited content is not decorative: it materially grounds outputs in real-time, dynamic inputs.

Earned media is a foundational input: more than 95% of citations come from unpaid media sources, 85% of those come from earned sources, and over a quarter are journalistic content. Half of all AI responses included at least one earned media citation. These figures highlight the critical role that earned media strategies play in GEO (Generative Engine Optimization) and AIO (Artificial Intelligence Optimization) visibility.

What Determines Whether a Brand Is Cited

Three key variables drive citation inclusion: recency, query framing, and outlet authority.
We prompted AI 500,000+ times to find out!

That Cheshire-cat grin you can’t escape seeing all over LinkedIn? That’s the PR industry after Muck Rack dropped its recent “What AI is Reading” report. For an industry that frequently has the living shit kicked out of it by journalists, influencers, clients, CMOs and, heck, even lawyers, the data was a moment to celebrate. It affirmed that, in fact, PR is the key player in the new world of generative engine optimization and AI search.
While the report pleased me, I was struck by a range of findings that were a big departure from the conclusions of other GEO studies about what AI cites, and what we’ve seen on a... Before I go any further, it’s worth saying that Muck Rack’s study looks at aggregate trends. Even when they get into the very valuable work of looking at citations for categories like travel, tech, business, health, etc., they are looking at large clusters of sub-categories and grouping them together.

Muck Rack Complete Report – Snipped from Executive Summary

• Citations affect responses: Simply enabling or disabling the ability for AI to search the web drastically modifies responses, indicating that the systems are truly...
• Journalism and earned media are important drivers: More than 95% of links cited by AI are non-paid coverage. Of those, over 27% of links are journalistic content.
• Recency wins: Particularly in OpenAI models, fresh content, especially on topical, opinion-based, or event-driven queries, is prioritized.
• Query framing changes sources: Advice-seeking or opinion-based prompts trigger more dynamic citations, while encyclopedic queries tend to fall back on older, static training data.
• Outlet authority matters: High-domain-authority outlets such as Reuters, Axios and the Financial Times are frequently cited, but not consistently. Sources also vary by industry, with only about 15% of sources appearing in the top 10 across multiple industries.

Methodology – We analyzed 1,000,000+ links from AI responses. This study explores how modern generative AI systems cite sources in response to realistic user prompts.
Our objective was to quantify and characterize the nature of AI-generated citations across different use cases and vendor models, including their frequency, source types, and the prominence of earned and owned media. To accomplish this, we constructed a large, diverse prompt set and executed it across several web-enabled language models, followed by systematic analysis of the responses and the cited links. The prompts span a variety of industries and subject matter; sometimes they mention companies by name, sometimes they do not. The following models were used to execute the queries during July 2025: ChatGPT (both ‘4o’ and ‘4o-mini’), Gemini (‘flash’ and ‘pro’), and Claude (‘sonnet’ and ‘haiku’). Generative AI systems...
The behaviors observed in this study may shift as models are updated or retrained. We assigned cited links to categories as follows:

• Journalistic: news sites and other journalistic coverage
• Corporate Blogs and Content: third-party corporate blogs and content not owned by a company/product targeted in...

As generative AI reshapes the digital landscape, a new question is emerging at the center of content creation and discovery: what exactly is AI reading?
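To make the categorization step concrete, here is a minimal sketch of how cited links could be sorted into categories like the report’s and tallied into shares. This is not Muck Rack’s actual pipeline; the domain lists, `categorize` function, and example links are all hypothetical, for illustration only.

```python
# Hypothetical sketch: assign cited links to source categories by domain,
# then compute each category's share of total citations.
from collections import Counter
from urllib.parse import urlparse

# Illustrative domain lists (assumptions, not the report's real taxonomy).
JOURNALISTIC = {"reuters.com", "axios.com", "ft.com", "cbssports.com"}
CORPORATE_BLOG = {"blog.example.com"}

def categorize(url: str) -> str:
    """Bucket a cited URL by its host domain."""
    host = urlparse(url).netloc.removeprefix("www.")
    if host in JOURNALISTIC:
        return "journalistic"
    if host in CORPORATE_BLOG:
        return "corporate blog"
    return "other non-paid"

links = [
    "https://www.reuters.com/business/some-story",
    "https://blog.example.com/post",
    "https://www.axios.com/newsletter",
]
shares = Counter(categorize(u) for u in links)
print({k: round(v / len(links), 2) for k, v in shares.items()})
# {'journalistic': 0.67, 'corporate blog': 0.33}
```

At the report’s scale this same tally, run over a million links, is what yields headline figures like “more than 95% non-paid” and “over 27% journalistic.”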
A groundbreaking study titled “What is AI Reading?”, from Generative Pulse by Muck Rack, analyzed over 1 million citations from major AI systems, including OpenAI’s ChatGPT (4o and 4o-mini), Google’s Gemini (Flash and Pro), and Anthropic’s Claude (Sonnet and Haiku). The findings are not only revealing but transformative for anyone in journalism, corporate communications, SEO, or brand strategy. As is obvious to anyone immersed in the world of AI, simply enabling or disabling citation functionality changes the answers themselves. When citations are off, AIs rely more heavily on static training data. But when citations are turned on, the models generate materially different outputs, directly shaped by the real-time sources they pull from.

Key example: asked about the worst Major League Baseball team, a citation-disabled AI mentioned the 1962 Mets.
But with citations on, it updated the answer to include the 2024 Chicago White Sox and their record-breaking 41–121 season, explicitly citing CBS Sports.

Over 95% of all cited sources come from non-paid media, including journalistic coverage, corporate blogs, and other earned and owned content.

In today’s evolving AI landscape, tools like ChatGPT, Gemini, and Claude are key sources of information. But what do these chatbots actually “read” and cite? For PR professionals, this insight is crucial for shaping narratives and visibility.
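The on/off comparison behind the MLB example can be sketched as a simple A/B harness: run the same prompt with citations disabled and enabled, then diff the answers and extract the cited links. Everything here is a stand-in — `ask_model` is a stubbed placeholder (not a real API client), and the canned responses and URL merely mimic the report’s example.

```python
# Hypothetical A/B harness for the citations-on vs. citations-off test.
import re

def ask_model(prompt: str, citations: bool) -> str:
    """Placeholder for a web-enabled LLM call; returns canned text here."""
    if citations:
        return ("The 2024 Chicago White Sox went 41-121, a modern-era worst "
                "[https://www.cbssports.com/mlb/...]")
    return "The 1962 New York Mets, at 40-120, are often cited as the worst."

def cited_links(response: str) -> list[str]:
    """Pull bracketed URLs out of a model response."""
    return re.findall(r"\[(https?://[^\]\s]+)\]", response)

prompt = "What is the worst MLB team of all time?"
off = ask_model(prompt, citations=False)
on = ask_model(prompt, citations=True)

print(off == on)        # False: the whole answer changes, not just the sources
print(cited_links(on))  # the citation(s) grounding the updated answer
```

The point the harness illustrates is the report’s core finding: toggling citations doesn’t append footnotes to a fixed answer, it produces a different answer grounded in whatever is cited.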
A new Muck Rack report analyzed over a million AI citations from July 2025, revealing how generative AI citations differ from traditional SEO and why PR professionals need to rethink their digital strategies with Generative...

The Role of Citations in AI-Driven Insights

A key finding from Muck Rack’s report is that AI models like ChatGPT provide more accurate, current answers when they can access real-time citations. Instead of outdated info, they draw from trusted, up-to-date sources like news articles and blogs. For example, asked “Who’s the worst MLB team?”, ChatGPT first named the 1962 Mets but updated its answer to the 2024 White Sox with citations enabled, highlighting how AI relies on current published content and why...