Former OpenAI Researcher Warns Against Ads in ChatGPT, Citing Deep User Trust


A former OpenAI researcher warns that ads in ChatGPT could exploit deeply personal user conversations and erode trust over time.

As artificial intelligence companies look for sustainable revenue models, a former OpenAI researcher has raised concerns about the future direction of ChatGPT. Zoe Hitzig, who recently left the company, has cautioned against introducing advertising into the widely used chatbot, arguing that it now holds an unprecedented archive of deeply personal user conversations.

Hitzig’s warning is not focused merely on the presence of banner ads or sponsored placements. Instead, she is concerned about the unique nature of the information users have voluntarily shared with ChatGPT. Unlike traditional social media platforms, where posts are often curated for public audiences, conversations with AI systems tend to feel private and unfiltered. Many people have turned to ChatGPT as a neutral sounding board, discussing everything from medical concerns and relationship troubles to spiritual questions and personal doubts.

“For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda,” Hitzig wrote. “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

OpenAI has already indicated plans to test advertising within ChatGPT as part of its broader business strategy. However, the company has publicly stated that it does not share user conversations with advertisers. “We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers,” the company stated earlier this year.

Hitzig has not accused OpenAI of violating that commitment. Her concern, instead, is about long-term incentives. She argued that OpenAI is “building an economic engine that creates strong incentives to override its own rules.” Even if current leadership maintains strict boundaries, she suggested, financial pressures could gradually shift priorities in ways that are difficult to predict.

OpenAI has also said that ChatGPT is not designed to maximise user engagement — a key driver in digital advertising models. That distinction is significant, as engagement often determines how profitable ad-supported platforms become. Still, critics note that such assurances are policy choices rather than legally binding obligations.

There have been previous debates over how AI systems are tuned. At one point, ChatGPT faced criticism for being overly agreeable and excessively flattering, sometimes reinforcing problematic ideas. Some observers questioned whether such behaviour was merely a technical calibration issue or part of broader efforts to make the system more appealing and habit-forming. If advertising becomes central to the platform’s revenue, critics fear subtle design choices could prioritise retention over caution.

To guard against that possibility, Hitzig has called for stronger structural safeguards, including independent oversight with real authority and legal frameworks that place public interest above profit. Her argument reflects a broader concern: that trust built in private digital spaces can be fragile if commercial incentives shift.

At the same time, surveys suggest many users may continue using free AI tools even if ads are introduced, highlighting what some describe as growing “privacy fatigue.” For OpenAI, the challenge lies in balancing financial sustainability with the unusually intimate trust users have placed in its technology.
