OpenAI began testing ads inside ChatGPT this week, and one of its former researchers is publicly criticizing the move.
In a New York Times op-ed published today, former OpenAI researcher Zoë Hitzig announced she resigned from the company this week, arguing that OpenAI is at risk of repeating mistakes made by Facebook during the rise of social media advertising.
Hitzig, who spent two years at OpenAI helping shape early model pricing and safety policies, says the company’s decision to introduce ads raises deeper structural concerns — particularly around incentives, data usage, and long-term governance.
OpenAI has said ads will be clearly labeled, appear at the bottom of responses, and not influence model outputs. But Hitzig argues the larger issue isn’t the first version of ads, but what comes next.
Why This Matters:
People disclose highly sensitive information inside ChatGPT — everything from medical fears and relationship problems to financial anxieties. As Hitzig puts it, that gives OpenAI access to “an archive of human candor that has no precedent.”
That openness was driven, in part, by the belief that ChatGPT had no ulterior motive. Unlike social platforms, it wasn’t built to optimize feeds for engagement.
Hitzig’s concern is that advertising layered onto a database of deeply personal conversations creates incentives that clash with ChatGPT’s original value proposition — and with the trust that led users to share so openly.
She argues that even if OpenAI limits targeting at first, an ad-supported model will eventually introduce pressure to optimize for engagement or ad performance. “In its early years,” she writes, Facebook also “promised that users would control their data and be able to vote on policy changes. Those commitments eroded.”
To be clear, Hitzig does not argue that ads are inherently unethical. Instead, she says any move into advertising should include binding governance structures, independent oversight over data use, and user-controlled data trusts or cooperatives.
Experts React:
This might be the most memorable part of the piece:
“I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy.
For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda. Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
Hitzig also posted about the piece on X.
Our Take:
The pressure OpenAI is facing from competitors and former researchers is a good thing. Ultimately, it should help drive what everyone wants: the best possible user experience in an ad-supported environment.
Advertising is inevitable in any channel where there's an opportunity to reach and engage people. But it doesn't have to be harmful, and it doesn't have to come with a tradeoff.
Ideally, the scrutiny pushes the company to approach ads differently — in a way that balances revenue with responsibility and keeps the consumer at the center of the equation. (While still being good for advertisers.)