OpenAI Opts Out of Watermarking ChatGPT Text Amid User Concerns
OpenAI has developed a system for watermarking text generated by ChatGPT and a tool to detect these watermarks, as reported by The Wall Street Journal. However, internal debates have stalled the release of this feature. While watermarking seems like a responsible step, it could impact the company's user base and, consequently, its revenue.
Watermarking in ChatGPT involves subtly biasing how the model chooses each next word, so the output carries a statistical pattern that is invisible to readers but detectable by a matching tool. For a more detailed explanation, you can refer to Google's analysis of Gemini's text watermarking. The potential benefits of watermarking are significant, particularly for educators looking to prevent students from submitting AI-generated assignments. The Journal notes that watermarking does not compromise the quality of ChatGPT's text output. Moreover, a survey commissioned by OpenAI revealed that global support for an AI detection tool outweighs opposition by a four-to-one margin.
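To make the idea concrete, here is a minimal sketch of a "green-list" watermark in the style of published LLM-watermarking research, not OpenAI's actual (undisclosed) method. A toy vocabulary, the `green_list` partitioning, and the fully biased `generate` sampler are all illustrative assumptions; a real model would only nudge token probabilities toward the green set.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (assumption)

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Seed an RNG with a hash of the previous token and carve out a
    'green' subset of the vocabulary that generation will favor."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int, seed_token: str = "tok0") -> list:
    """Toy 'model': always picks a green token. A real sampler would
    merely bias its logits, preserving text quality."""
    rng = random.Random(42)
    out = [seed_token]
    for _ in range(length):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out

def detect(tokens: list, fraction: float = 0.5) -> float:
    """Count tokens landing in the green list implied by their
    predecessor; return the z-score versus the chance rate."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, fraction))
    n = len(tokens) - 1
    expected, var = n * fraction, n * fraction * (1 - fraction)
    return (hits - expected) / var ** 0.5
```

Watermarked output yields a large z-score, while ordinary text scores near zero; this also illustrates why rewording by another model defeats detection, since substituted words no longer track the green lists.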
After the Journal's story, OpenAI confirmed in a blog post that it has been working on watermarking, a fact also highlighted by TechCrunch. According to the company, the method is 99.9% effective and resists localized tampering such as paraphrasing. However, OpenAI acknowledges that rewording the text with another model can easily strip the watermark, making evasion trivial for bad actors. The company also expressed concern that detection tools could stigmatize the use of AI, particularly among non-native English speakers.
One significant worry for OpenAI is user response: nearly 30 percent of surveyed ChatGPT users said they would use the software less if watermarking were implemented. That hesitation underscores the delicate balance OpenAI must strike between ethical responsibility and user satisfaction.
Despite these concerns, some OpenAI employees remain convinced that watermarking works. Nevertheless, the Journal reports that alternative methods, potentially less controversial but still unproven, are also under consideration. In its recent blog post, OpenAI mentioned it is in the early stages of exploring metadata embedding. Because the metadata is cryptographically signed, the approach promises no false positives, though it is too soon to determine its overall effectiveness.
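OpenAI has not described how its metadata scheme would work, but the "no false positives" property follows from standard signing: a verifier either recomputes a matching signature or it does not. The sketch below illustrates that principle with an HMAC over the text plus provenance fields; the key name, metadata fields, and function names are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"provider-signing-key"  # hypothetical key held by the provider

def embed_metadata(text: str, model: str = "demo-model") -> dict:
    """Attach provenance metadata and a signature computed over a
    canonical serialization of the text plus that metadata."""
    meta = {"model": model, "origin": "ai-generated"}
    payload = json.dumps({"text": text, "meta": meta},
                         sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"text": text, "meta": meta, "sig": sig}

def verify(doc: dict) -> bool:
    """Recompute the signature: any edit to the text or metadata makes
    verification fail, and a valid signature cannot arise by chance,
    which is why the approach produces no false positives."""
    payload = json.dumps({"text": doc["text"], "meta": doc["meta"]},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, doc["sig"])
```

The trade-off versus watermarking is visible here too: the signature travels alongside the text rather than inside it, so simply copying the text without its metadata discards the provenance entirely.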
In summary, while OpenAI acknowledges the value of watermarking for identifying AI-generated text, user feedback and practical concerns have led the company to explore other, less invasive methods. The decision reflects a careful weighing of ethical responsibility against user engagement.