AI text detectors are showing up everywhere, scanning essays, emails, and social media posts to catch machine-written content. But how well can they actually spot a bot? While these tools often get it right, they're far from foolproof, sparking a growing debate about their real reliability.

Could you tell that the above paragraph was written by ChatGPT? If not, you're not alone; only three of the 10 online detector tools we fed that passage to (along with some more AI output to give it a fairer chance) flagged it as having a high probability of being AI-generated. Another two guessed it was mixed, and five found no evidence of AI.

(Editor's note: Tech Brew would never publish an AI-written lede were it not to prove a point. The reporter noted it took much honing and back-and-forth to generate something serviceable.)

Since ChatGPT first thrust text generators into the mainstream almost two years ago, a cottage industry of tools has promised to suss out AI-generated text. Educators, platform moderators, editors, and hiring managers have turned to these models in hopes of restoring a semblance of order amid an onslaught of AI-generated student essays, social media posts, book submissions, and other mass-produced copy.

But the capabilities of these tools can vary widely. A paper earlier this year from researchers at the University of Pennsylvania found that many text detectors exaggerate their prowess. Rather than the up to 99% accuracy that some of their creators claim, the research found that performance often fluctuates depending on the type of text and the model used to produce it. Error rates might be acceptable in certain contexts, but false positives can be ruinous in education, where students can face baseless accusations of cheating because of faulty detection.

Keep reading here.—PK