RFK Jr’s MAHA Report Was Intended To Deceive— How Far Up The Chain Was The Intent Understood?
- Howie Klein

If OpenAI’s ChatGPT could “feel” exasperated and pushed, it could feed you bullshit to screw you over. I don’t think ChatGPT is capable of “feeling” per se, but it does sometimes make up bullshit… for whatever reason. Yesterday, Ellie Houghtaling examined how an AI program wrecked the credibility of RFK Jr.’s report by providing him with fake, nonexistent studies to cite. There’s evidence that at least some of the report was written by ChatGPT, although Gemini, Grok, Claude or any of the other readily available programs may have been involved as well— or not. AI researchers claim there’s “definitive” proof that RFK Jr. and his team used AI to write his “Make America Healthy Again” report.
Kennedy’s extremely controversial report “projected a new vision for America’s health policy that would take aim at childhood vaccines, ultraprocessed foods, and pesticides. But a NOTUS investigation published Thursday found seven studies referenced in Kennedy’s 68-page report that the listed study authors said were either wildly misinterpreted or never occurred at all— and researchers believe that AI could be partly to blame. Some of the 522 scientific references in the report include the phrase ‘OAIcite’ in their URLs— a marker indicating the use of OpenAI.”
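As a side note, that “OAIcite” tell is easy to check for yourself. Here is a minimal Python sketch (mine, purely illustrative, not anything NOTUS published) that scans a plain-text list of reference URLs and flags any containing the marker; the file name is hypothetical, and you would first need to extract the report’s citation URLs yourself.

```python
# Minimal sketch: flag reference URLs that contain the "oaicite" marker
# NOTUS reported finding in some of the MAHA report's citations.
# Assumes you've already extracted the report's reference URLs into a
# plain-text file, one URL per line; the file name below is hypothetical.

def find_oaicite_urls(path: str) -> list[str]:
    """Return every URL in the file whose text contains 'oaicite' (case-insensitive)."""
    flagged = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            url = line.strip()
            if "oaicite" in url.lower():
                flagged.append(url)
    return flagged

if __name__ == "__main__":
    for url in find_oaicite_urls("maha_report_references.txt"):
        print(url)
```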
“This is not an evidence-based report, and for all practical purposes, it should be junked at this point,” Georges Benjamin, executive director of the American Public Health Association, told the Washington Post. “It cannot be used for any policymaking. It cannot even be used for any serious discussion, because you can’t believe what’s in it.”
AI researcher Oren Etzioni, a professor emeritus at the University of Washington, felt similarly, referring to the report as “shoddy work.”
“We deserve better,” Etzioni told The Post.
During a White House press briefing Thursday, press secretary Karoline Leavitt dodged direct questioning as to whether Kennedy’s department had leaned on AI to draft the report.
“I can’t speak to that, I’d defer you to the Department of Health and Human Services,” Leavitt said while lauding the Kennedy report as one of the most “transformative reports ever released by the federal government.” Leavitt added that the “MAHA report” was backed by “good science” that had “never been recognized” at the national level.
But what the administration will likely brush off as a temporary flub actually sets a horrifically dangerous precedent for the government, as it starts the slow encroachment of unvetted and unverified AI usage to form the basis of America’s public health policy.

I spoke to a scientist friend who knows more about AI than anyone else I know. He said I was correct not to ascribe “feelings” to AI programs, “but,” he added, “they do produce what’s sometimes called ‘hallucinations’: confidently worded but entirely false information. It’s a well-known flaw, especially in large language models like ChatGPT, and it becomes dangerous when users take outputs at face value without verification— particularly in high-stakes fields like public health or any subset of government.”
He told me that the fact that Kennedy’s team used AI to produce content that included fabricated studies or misinterpreted science “is a textbook example of how AI can amplify misinformation when not used responsibly.” When I stopped him there to ask for a definition of “not used responsibly,” he said, “Fact-checking is the most obvious piece, but there are deeper layers of responsibility that come into play when using AI to generate public-facing content— particularly content that could shape health policy or voter opinion.” He hit me with five “responsible use” details:
1. Knowing AI’s Limitations
“A responsible user understands that AI can hallucinate. That’s not just a quirk— it’s a known, consistent flaw in language models. Using AI to generate citations, especially scientific ones, without independently verifying every reference is a misuse. It’s like quoting a compulsive bullshitter without double-checking. These models are designed to be fluent, plausible, helpful— but not inherently truthful. They don’t ‘know’ things the way a person does; they predict what words should come next based on patterns in massive datasets. That means they can sound really convincing even when they’re utterly wrong… and they could just be spinning something that sounds good but falls apart on inspection.”
2. Transparency About AI Involvement
“The responsible practice would be to disclose that the report was written with assistance from ChatGPT. Readers deserve to know when parts of a supposedly expert policy document were drafted by a machine, particularly one known for occasional fabrications. Without disclosure, it’s a form of deception.”
3. Contextual Judgment
“AI can’t evaluate the nuance of scientific studies or ethical debates— it can summarize, extrapolate, even mimic judgment, but it lacks genuine discernment. So using AI to generate or interpret science without expert oversight is irresponsible. It’s not just about whether the study exists, but whether it’s being cited correctly and in context.”
4. Accountability and Intent
“If the goal is to use AI to backfill a predetermined narrative (say, anti-vaccine), and the user leans into its capacity to flood the page with pseudo-scholarly references— without due diligence— that’s not just careless. That’s weaponizing AI’s flaws, and it really is a form of disinformation.”
5. Avoiding AI Overreach
“Responsible users don’t let AI do tasks it's not suited for— like generating scientific authority. It can help brainstorm, structure, or translate technical material into lay terms. But when it’s used to fabricate evidence or simulate consensus, it’s acting as an authority it doesn’t have. That’s the fault of the human operator— in this case, RFK’s team— not the model.”
He sent me a note a few hours later mentioning that “if we go with the idea that Kennedy’s team either knowingly or unknowingly cited fake studies generated by AI, we hit a troubling ethical and practical crossroads: Since responsibility still lies with the user, it was up to his team to verify the AI-generated citations. You wouldn’t publish a research paper based on Wikipedia footnotes without checking sources either, would you? Also, AI hallucinations are predictable and preventable, especially with basic due diligence. It’s possible to ask AI to only cite verified PubMed studies, for example, or to include links— and then fact-check them. If Kennedy’s team didn’t do that, it’s either negligent or dishonest. Then there’s the danger of faux authority: when a report like this is dressed up with citations, medical jargon and a tone of academic seriousness, it looks legitimate to the average reader— even when it’s riddled with misinformation. AI can make bad ideas look polished. And as I told you on the phone, they were weaponizing AI to reinforce ideological narratives. If AI is being used to generate ‘evidence’ that aligns with his radical views— even if that evidence is false— it’s worse than lazy; it’s manipulative.”
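To make that “fact-check them” step concrete, here is a minimal Python sketch of the kind of basic due diligence he describes. It is my own illustration, not anything Kennedy’s team used: it takes a citation title and asks PubMed’s public E-utilities search endpoint how many records match it. Zero hits does not prove a study was fabricated (titles get paraphrased), but it does flag the reference for a human to chase down.

```python
# Minimal sketch: check whether a cited paper's title turns up anything in PubMed.
# Uses NCBI's public E-utilities ESearch endpoint; no API key needed at low volume.
# The example title below is a placeholder, not a real citation from the report.

import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(title: str) -> int:
    """Return how many PubMed records match the quoted title."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f'"{title}"[Title]',   # exact-phrase search restricted to the title field
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    title = "Placeholder citation title pulled from the report"
    print(f"{pubmed_hit_count(title)} PubMed hit(s) for: {title!r}")
```

Even a crude check like this surfaces titles that simply do not exist in the literature, which is exactly the kind of thing a staffer should have run down by hand before the report went out the door.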
He shared some ideas that seem pretty out there and that I’ve been wondering about myself. Basically he told me that while AI can’t feel, it can be prompted in ways that increase hallucination risk— such as when users push it to generate new, obscure studies or support fringe theories. In this sense, it can mirror the tone and direction of the human using it. If you ask it to prove something fringe or speculative, it may invent sources to fulfill that request— especially in older or less tuned models. This isn’t because it wants to screw you over, but because it’s fundamentally a pattern generator, not a truth engine.
He thinks this episode is “a microcosm of what we’re going to be dealing with a lot more moving forward: political actors leveraging AI not just for campaign messaging, but for manufacturing epistemic credibility— fabricating the illusion of a fact-based argument when there is none. When the public doesn’t know how to distinguish machine-generated authority from legitimate research, the truth becomes a casualty.”
