Everyone Is Wrong About AI Writing Tools
Well, maybe not *everyone*, but I didn’t write this headline…
Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0
I’m an AI writer. As such, I have a unique dual perspective on the topic: My “I know AI” mind and my “I write” mind have strong conflicting opinions.
My AI mind constantly repeats, “don’t worry, AI is dumb; it can’t reason or understand.” But then my writing mind says, “hey, I’ve had problems distinguishing GPT-3 from humans before.”
Of course, each of those identities cares about what matters to it. It's at the intersection of their views that I realize both are right, at least partially.
On the one hand, even the most sophisticated AI writing tools are, however impressive, nothing more than the appearance of mastery: a mirage that feeds on our gullibility and on our limited access to the reality behind their perfected spell.
On the other hand, unluckily for us, appearances can be more than enough: As long as this feels like my writing and the illusion resists your scrutiny, you'll be satisfied.
It doesn’t matter to you if I wrote this or if an AI did.
Or does it?
The secret of human language
We humans write to communicate something, as professors Emily M. Bender and Alexander Koller eloquently argued in a 2020 paper on AI language models.
There’s always an intention — a purpose behind the words.
Yet, words alone can’t convey that intent. Whatever effect I want to cause in you, reader, remains hidden in my mind. As soon as these letters leave my fingertips to stay forever immutable on this page, they become an empty casing.
Unless another mind — your mind — comes across these symbols to give them a new meaning. Maybe similar to mine. Maybe not.
Your task, as a reader, is to reverse-engineer the message I intended these words to carry, using the meaning you bestow on them, combined with the linguistic system we share (English) and your world knowledge.
And here comes the key.
It’s at that very moment — when your meaning overrides mine as you reconstruct the original intent and communication happens — that it suddenly becomes critical that these words came from my thinking mind and not from an AI “stochastic parrot.”
Why? Because if it wasn’t a human who wrote this, you’d be pursuing a pointless search for something that isn’t there:
No meaning within. And no intent to be retrieved.
This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between algorithms and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.
Where AI writing tools can’t reach
I’ve seen GPT-3 write high-quality prose and engaging philosophical essays (examples credit: OpenAI).
And yet, despite its undeniable mastery of the form of language, there are places AI will never reach — regardless of how good it gets (under current paradigms).
These are places where intent matters more than words.
If you know me, you’re not here just because you want to read something or get some undefined value. You’re here because you want to read what I have to write and extract the value that I can provide.
The fact that I’m the writer has inherent importance to you because you want the means, through my words, to access the communicative intent I hid inside. You want to take a peek at my mind.
These words are nothing more than the means to that.
Putting it plainly: If GPT-3 had written these words instead of me, it would defeat the very purpose of writing them in the first place. Reading this wouldn’t give you the same value because the means (the words) wouldn’t take you to your end (retrieving my intent).
Let me use this quote to illustrate why (it’s from a teacher asking students not to use GPT-3 to write their essays):
“As someone who teaches, I can say that this is something I dread. If I learned that my students were submitting AI-written papers, I’d quit. Grading something an AI wrote is an incredibly depressing waste of my life.”
The reader values being in a trustworthy relationship with the writer, not an apparently trustworthy one.
But I’ll concede — it depends.
The relative weight readers give to intent vs words depends on many factors: Are they reading a book or an ad? Do they know the author or do they only care about what the content provides? Do they care about the argument and the thesis that’s being defended, or only about spending some time reading?
Reading can be passive consumption. In those cases, words hold the most value.
But when reading becomes a timeless active conversation with the writer, words matter little — it’s the underlying intention that is valuable.
And neither GPT-3 nor any other similarly built AI language model — however good at stringing tokens together coherently — will ever be able to provide that.
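To make the “stochastic parrot” point concrete: a model that only tracks which word tends to follow which can produce fluent-looking strings with no intent behind them. Here’s a deliberately toy sketch — a bigram chain, vastly simpler than GPT-3’s transformer, but the same statistical spirit of sampling the next token from observed patterns:

```python
import random

# Tiny corpus: the only "knowledge" this toy model will ever have.
corpus = ("the reader wants meaning . the writer hides intent in words . "
          "the model predicts the next word . the model has no intent .").split()

# Bigram table: for each word, the words that have followed it in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n_words, seed=0):
    """Sample a plausible-looking word sequence with zero communicative intent."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n_words - 1):
        # Pick any word that has followed the current one; no goal, no meaning.
        word = rng.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The output reads like English because the statistics of English are in the table — but there is nothing the “writer” is trying to say, which is the essay’s point about scale: GPT-3 does the same trick with billions of parameters instead of one lookup table.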
But there’s a human behind the AI!
You may argue that human intent can be effectively preserved with prompts.
In the end, the AI doesn’t come up with the words by itself; a human guides it through the possible completions with clever nudges in the form of natural-language inputs.
However, because of the arguments I laid out above, prompts can’t contain the intent of the person behind the AI. And, even if they did, no AI could capture the original purpose.
AI systems not only lack the ability to write intentionally. They also lack the ability to retrieve intent from human-written text. The reason? They can’t access meaning, which mediates the relationship between intent and words by grounding the latter in real-world entities.
The bottom line is that whenever an AI is present in the communication chain, there’s a defective link.
As soon as a writer (prompter) accepts the AI’s output as valid, they’re giving up their original intent — if there was any.
This is probably the most valuable insight in this article for writers: In using AI writing tools, you risk replacing your sensible, interesting, or useful communicative intent with whatever the AI decides to output.
As soon as you start saying, “that’s good enough,” your presence in the finished piece starts to shrink.
Final thoughts
In this article, I’ve highlighted just how fundamentally flawed the comparison between humans and AI writing tools is.
This analysis is key to understanding who is at risk and which tasks may end up being automated: for instance, tasks where scale matters more than style, and where the effect on the reader matters more than the intent of the writer.
It is under those conditions that the risk is highest. If you’re reading Shakespeare, you care that it’s him and not an AI emulating his voice. If a marketing agency wants a clever ad for its next campaign, neither you nor they care.
As a newsletter writer, I’m likely on the safer half of the spectrum. But copywriters, ad marketers, generic content creators, freelance writers, and even ghostwriters — among others — may face a harsher future.
To end on a high note, remember this:
Can AI write a seemingly human-made text? Yes. Can readers obtain value from AI-written pieces? Yes. Can AI writing tools enhance writers’ abilities? Yes. Can AI writing tools impact the demand for human writers? Sadly, yes.
But can AI write the way humans write, or for the reasons humans write? No. Not now, nor ever.
Something to remember for both writers and readers (also applicable to other generative AI like text-to-image models).
Subscribe to The Algorithmic Bridge.