Sep 27, 2023
How I Almost Lost My Client, Their Trust, and My JSS Score Because of AI-Content Detectors

One day after submitting a new article to a client, I was surprised by one of her messages. She wanted me to edit the article so that it would be flagged as "man-made" by a certain ChatGPT content detector. Apparently, the article I had submitted to her the day before was 71% AI-made.


I hadn't used ChatGPT at all while writing the article. To put it in a way that ChatGPT content detectors understand, I was 100% sure that the article was 100% human-made. So, how was it possible that it was flagged as 71% AI-made? How was this online detector capable of determining the AI-ness of an article so accurately? And was my client ever going to believe me?




ChatGPT is a blessing to all content creators, especially freelance copywriters such as myself. It can act as a virtual personal assistant, a right-hand man, and a platform for finding quick solutions to new problems. Although some aspects of AI can be worrying, there's nothing intrinsically wrong with ChatGPT—even when it seems like it's going to "steal your job."


Sadly, ChatGPT content detectors aren't such a fascinating new technology. Most are based on a terrifyingly simple principle, most succinctly explained by this quote I found on an SEO website: "ChatGPT detection tools grade content based on how predictable the phrase choices are within a piece of content." Or, in even more succinct terms, detection tools assume all predictable language is AI-made. It's hard not to see the crucial flaw in this line of thought: human language, like AI language, is extremely predictable. We understand each other because we share common languages that function according to predictable patterns (grammar and syntax) built from largely static elements (nouns, adjectives, concepts, and so forth).
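To see how thin that principle is, here's a toy sketch of predictability scoring. This is not the code behind any real detector (those use large language models to estimate word probabilities); it's a deliberately simple bigram model I made up to illustrate the idea: the more often each word follows the previous one in some reference corpus, the more "predictable" (and thus "AI-like," by this logic) the text is scored.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows another in a reference corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.lower().split()
    for prev, curr in zip(words, words[1:]):
        counts[prev][curr] += 1
    return counts

def predictability(text, counts):
    """Average probability the model assigns to each next word.
    Higher score = more 'predictable' = flagged as more AI-like
    by detectors built on this principle."""
    words = text.lower().split()
    probs = []
    for prev, curr in zip(words, words[1:]):
        total = sum(counts[prev].values())
        probs.append(counts[prev][curr] / total if total else 0.0)
    return sum(probs) / len(probs) if probs else 0.0

counts = train_bigrams("the cat sat on the mat the cat ate")
# Ordinary, grammatical phrasing scores as highly "predictable"...
print(predictability("the cat sat", counts))
# ...while scrambled nonsense scores as "unpredictable" (i.e., "human")
print(predictability("mat ate cat", counts))
```

Notice the perverse incentive: under this scheme, the clearest, most natural writing (which is exactly what a good copywriter produces) scores as the most machine-like, while word salad scores as human.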


Using predictability to evaluate whether something was made by a human or by ChatGPT is dumb not only because of the predictable way in which all human language unfolds. It's also dumb because AI software such as ChatGPT learns from large sets of data produced by humans. All AI knows is what humans already know: it's meta-tagged information smashed together in a virtual environment and spit out in the form of a ChatGPT message. There's no clear distinction between human language and AI language because AI language is human language! Whether we like it or not, ChatGPT already sounds like a human.


With some exceptions, humans aren't capable of accurately distinguishing between something I wrote and something ChatGPT wrote. ChatGPT content detectors have the same handicap. That's not their fault. Humans and content detectors are ineffective at distinguishing between human and AI language because there's no absolutely certain way of separating the two—let alone evaluating with precision what percentage of an article was made by AI.


This is important, but not in a good way. Especially for freelance copywriters. The 100% human article I submitted to my client was flagged as 71% AI, and that made me look like an impostor. One of the requirements of the article was that it wasn't AI-written, so I didn't use ChatGPT at all. Nevertheless, my article was falsely flagged as AI by a random ChatGPT content detector. A random ChatGPT detector my client believed to be quite reliable…




This could have made me lose the trust of my client, my job, my Upwork job success rate, and my precious time. Luckily, my client was happy to see the logic in my words and ignored the Copyleaks results. I was never able to prove the human-ness of my work; instead, I decided to show my client that ChatGPT detectors are far from reliable (I even sent her a link to a very cool article on the topic). I couldn't write a 0% AI article even if I tried, because humans and ChatGPT use the same predictable phrase choices within a piece of content. Ironic as it sounds, that's precisely why humans and ChatGPT communicate with one another so neatly.


If all of this is a bit too over-the-top for you, let me ask you: how was it that, in a couple of years or less, so many people came up with percentage-point-accurate detectors for software as complex as ChatGPT? How were they so fast at analyzing what a beastly and ever-changing machine sounded like? ChatGPT is such an impressive tool that it caused (and still causes) shockwaves all over the world. Yet, it supposedly got easily beaten by shoestring-budget AI detectors on websites such as Copyleaks and Hugging Face. Does this make sense to you?


In the end, all I'm asking is that you think twice before trusting the AI-ness score of a piece of content. If you're working with freelancers or hiring creatives who may be using ChatGPT, don't waste your time running their work through AI detectors. The best detector, AI software included, is still the human detector. If the content is good, is there any point in knowing who (or what) wrote it?