There are increasing concerns that cheaters, spammers, and others may abuse artificial intelligence. That is why OpenAI released a tool for identifying AI-produced text. Yet by OpenAI's own account, its AI classifier fails to detect AI-generated text nearly three-quarters of the time.
The company launched ChatGPT in November and confirmed a multiyear, multibillion-dollar collaboration with Microsoft. Then, the San Francisco-based firm released the detection tool. According to the firm's blog post, the tool should help individuals distinguish human-written text from the output of a variety of artificial intelligence tools, not just ChatGPT. In OpenAI's evaluations, the tool correctly classified 26% of AI-authored text as such. The company also said the classifier produced false positives 9% of the time, incorrectly flagging human-written text as AI-generated.
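To put those figures in context, the 26% and 9% numbers correspond to the standard true-positive and false-positive rates of a binary classifier. A minimal sketch, using hypothetical sample counts (not OpenAI's actual evaluation data) chosen to reproduce the reported percentages:

```python
# Hypothetical evaluation of a binary AI-text detector.
# The counts below are illustrative, not OpenAI's actual data.

def detection_rates(tp, fn, fp, tn):
    """Return (true_positive_rate, false_positive_rate)."""
    tpr = tp / (tp + fn)   # share of AI-written texts correctly flagged
    fpr = fp / (fp + tn)   # share of human-written texts wrongly flagged
    return tpr, fpr

# 100 AI-written samples: 26 flagged as AI, 74 missed.
# 100 human-written samples: 9 wrongly flagged, 91 passed.
tpr, fpr = detection_rates(tp=26, fn=74, fp=9, tn=91)
print(f"true positive rate:  {tpr:.0%}")   # 26%
print(f"false positive rate: {fpr:.0%}")   # 9%
```

Framed this way, the reported performance means the detector misses 74% of AI-written samples while still occasionally accusing human authors.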
Isn’t using unreliable tools to determine AI content unethical?
The tool isn't good enough on its own. However, it may be used to help identify the source of a piece of text, according to OpenAI. "Our classifier isn't trustworthy," the firm said, admitting that it is a work in progress.
The classifier is even less reliable on short texts, and OpenAI admits that even longer texts are sometimes incorrectly labeled. Another limitation is highly predictable text: the tool cannot reliably tell whether something like a plain list of facts was written by a human or by AI.
Many AI-assisted tools help people create professional content, such as text editors. They flag issues in human-written text and suggest ways to resolve them. The OpenAI tool often labels the resulting text as AI-generated as well. Unfortunately, such misclassifications further undermine the tool's effectiveness at this point.
OpenAI seeks to enhance the tool by incorporating user feedback. It plans to discuss the tool's pros and cons with a wider audience and to develop it into a fully trustworthy tool that helps all those negatively affected by AI-generated text.