Table of Contents
- Why Are AI Detectors Failing? How Can You Spot Fake Content Without Relying on Flawed Software?
- Recommendation
- Take-Aways
- Summary
- The dawn of ChatGPT means readers can no longer trust that the author of a given text is human.
- Be alert for the presence of certain words and phrases that aren’t commonplace in regular parlance.
- Watch out for overly ornate wording.
- AI-generated texts are full of falsified information, but they don’t contain any typos.
- AI detectors are not as accurate as advertised.
- About the Author
Why Are AI Detectors Failing? How Can You Spot Fake Content Without Relying on Flawed Software?
Can you spot the difference between human and machine? Read Jake Peterson’s guide on Lifehacker to learn the secret ‘tells’ of AI writing—from the overuse of words like ‘tapestry’ to the suspicious absence of typos—and why you can’t trust detection software.
Don’t get fooled by a bot: Read the full summary below to master the manual checks that catch what flawed AI detectors miss.
Recommendation
Do you feel confident that you could tell a real text from a fake one? The dawn of ChatGPT means that you can no longer trust that a human is the author of the book, essay, article, or report you are reading. So how can you navigate this new world? Happily, Jake Peterson, the senior technology editor at Lifehacker, has become something of a Sherlock Holmes when it comes to uncovering AI-generated text. He has identified several revealing tells that help even the least tech-savvy reader spot AI-generated content. So don your skeptic’s hat and do your due diligence.
Take-Aways
- The dawn of ChatGPT means readers can no longer trust that the author of a given text is human.
- Be alert for the presence of certain words and phrases that aren’t commonplace in regular parlance.
- Watch out for overly ornate wording.
- AI-generated texts are full of falsified information, but they don’t contain any typos.
- AI detectors are not as accurate as advertised.
Summary
Upon the release of ChatGPT in late 2022, it became clear that it was no longer a given that a text, whether published online or elsewhere, had a human author. ChatGPT can churn out text on demand within seconds. While most people find the idea of ChatGPT-generated texts unsettling, that hasn’t stopped publishers from experimenting with the medium. In 2023, digital media publication CNET quietly published AI-generated articles alongside articles by human journalists, much to the dismay of the company’s employees. And when digital publishing company G/O Media published AI-generated text, it faced a backlash over the poor quality of the content.
“No matter how you coax the AI into responding, it is beholden to its training, and there will likely be signs such a piece of text was generated by an LLM.”
Current AI tools, such as ChatGPT, are powered by large language models (LLMs): deep-learning algorithms trained on enormous volumes of data, which inform ChatGPT’s responses to user queries. Generative AI is thus not truly intelligent: It merely deconstructs a question into its main components and returns results based on its grasp of the “relationship between words,” predicting what the user wants to know.
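To make the “relationship between words” idea concrete, here is a toy sketch in Python. It is not how ChatGPT actually works (LLMs use large neural networks trained on vast corpora), but it illustrates the same underlying move: predicting the next word from patterns in previously seen text.

```python
from collections import Counter, defaultdict

# Toy corpus for illustration only; a real model trains on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which (a simple "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("the"))  # 'cat' (ties broken by first occurrence)
print(predict_next("sat"))  # 'on'
```

The point of the sketch is that nothing here “knows” what a cat or a rug is; the output is driven entirely by statistical relationships between words, which is why fluency and factual accuracy are two very different things.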
Given the right instructions, ChatGPT is capable of producing convincing texts. For instance, chatbots can be primed with carefully written prompts that make the AI-generated output seem smoother and more human. So how exactly can you distinguish between a human-authored text and an AI-generated one?
Be alert for the presence of certain words and phrases that aren’t commonplace in regular parlance.
As you grow more accustomed to using ChatGPT, you’ll notice some of the telltale characteristics that give it away. Since ChatGPT generates output according to the relationships between words, the more you use the tool, the more you will notice its tendency to use certain words more frequently than a human would. While there is no definitive list of such words, ChatGPT users have noticed that the tool is partial to “delve,” “underscore,” “relentless,” “groundbreaking,” “mosaic” and “tapestry.” The appearance of these words in a text is not proof that ChatGPT was the author, but it might arouse your suspicions.
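If you want to automate this kind of manual check, a minimal sketch follows. The word list is the informal one mentioned above, not an official or exhaustive set, and a match is a hint to look closer, never a verdict.

```python
import re

# Informal heuristic: words ChatGPT users have flagged as suspiciously common.
TELLTALE_WORDS = {"delve", "underscore", "relentless", "groundbreaking", "mosaic", "tapestry"}

def flag_telltale_words(text: str) -> dict[str, int]:
    """Count occurrences of each telltale word in `text`, case-insensitively."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w: words.count(w) for w in TELLTALE_WORDS if w in words}

sample = "This groundbreaking report will delve into the rich tapestry of modern life."
print(flag_telltale_words(sample))
# e.g. {'groundbreaking': 1, 'delve': 1, 'tapestry': 1} -- worth a closer look, not proof
```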
Watch out for overly ornate wording.
AI output can seem quite natural at first blush, but the more AI-generated texts you read, the more easily you’ll spot their hyperbolic adjectives and grandiose writing style. For example, ChatGPT might describe a negative outcome as “devastating” rather than merely “bad.”
“AI has a bad habit of using flowery language…, as if it was mostly trained on marketing copy.”
AI’s use of ornate language is most obvious when it’s trying to strike a casual tone. While ChatGPT’s GPT-4o model has reined in this foible, giving more concise responses, Meta AI’s chatbot still suffers from the affliction, generating overly friendly, counselor-style replies to queries. You might also find that AI’s responses are superficial; they offer shallow summaries, rather than deep analyses, of arguments.
AI-generated texts are full of falsified information, but they don’t contain any typos.
AI has a tendency to “hallucinate,” that is, to make up information. That’s because the tool doesn’t “know” anything: It makes connections between the words in your query and the words in its training data, then produces a response that isn’t necessarily anchored in fact. If you spot an outlandish statement that you know is untrue, treat it as a red flag.
“If you feel you’ve walked away from the piece having learned nothing at all, that might be AI’s doing.”
Meanwhile, AI never makes any typos or grammar mistakes. If the text you are reading has flawless spelling and punctuation, it might be too good to be true.
AI detectors are not as accurate as advertised.
AI detectors are also powered by LLMs, but they are trained on AI-generated text rather than general text. In theory, AI detectors ought to be able to identify an AI-generated text, but they often fall short. ZeroGPT, a popular AI detector, flags Article I, Section 2 of the US Constitution as 100% AI-generated, for instance. AI detectors will likely improve with time, but don’t rely on them; always perform your own due diligence.
About the Author
Jake Peterson is the senior technology editor at Lifehacker, an online source of tech-based tips and life advice. He writes on a broad range of tech-related issues, including devices, software, and subscriptions.