Credit: Author via Midjourney

Today I bring you a fresh perspective on a topic I’ve written about a lot before: AI imperfection.
But, instead of enumerating the ways in which AI systems fail, as I typically do, I’m going to change my point of view to give you a new — and rather convincing — argument that I haven’t seen written anywhere else.
Let’s start from the beginning. A few days ago, before ChatGPT was a thing, I was scrolling Twitter and saw this picture (try to recognize what you’re looking at):
It took me a whole minute to realize it’s just a little doggy.
Then it struck me: humans are nowhere near perfect.
It’s ironic that I write so much (maybe too much? let me know!) about AI ethics, bias, misinformation, unreliability, and systems wreaking havoc, and it turns out that humans fail a lot, too.
If we’re so imperfect, why am I demanding such perfection from AI? Why do I set such high standards before considering an AI system good enough to be out in the world?
Am I being reasonable when I argue that companies should show restraint in turning research into products and services, and devote more resources to fixing these issues?
This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between algorithms and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.
The impossibility of perfection
The image above is one example among countless others that reveal how the world tricks our senses and reasoning.
Perceptual illusions (visual, auditory, tactile, proprioceptive, etc.) affect how we interpret — or misinterpret — the information that’s coming into our brains:
This is not a gif. (Source)

But even when information is processed adequately, our cognitive centers may still fail to make sense of it. Cognitive biases, defined as “systematic patterns of deviation from norm and/or rationality in judgment,” are pervasive and unavoidable for humans.
Anthropomorphism — our tendency to ascribe human traits to things — is an example of a cognitive bias that appears time and again around AI. Another is automation bias, which makes us overly trusting of information that comes from an automated system (e.g. ChatGPT) in contrast to non-automated sources (e.g. our priors).
As Professor Talia Ringer argues, it’s not the “novelty” of ChatGPT that makes us trust its outputs, but the “phenomenon of automation bias.”
Interestingly, we’re almost never aware of our biases (a phenomenon that is, in itself, a metacognitive bias called the bias blind spot) unless we truly pay attention — and then we start to see them everywhere (which, funnily enough, is yet another cognitive bias, the frequency illusion). As you can see, cognitive biases are ubiquitous.
In summary, humans suffer from subjective perception and cognition that affect our thoughts, beliefs, and ultimately our behavior.
We evolved through an optimization process spanning millions of years that made us well-suited for the natural world (many biases appeared long after we left that world behind), but in no way did that process turn us into perfect entities — that wasn’t the goal.
The question, then, is: can we design AI — free from the imperfections of evolution — to be perfect? Should we strive to, or should we settle instead for good-enough AI systems that are as imperfect as we are?
Move fast and break things
Big tech companies (many of which rely heavily on AI) share a common answer to that: “move fast and break things.” Meta’s Mark Zuckerberg coined the expression and used it as a “prime directive” in Facebook’s early years. Others followed suit.
This is, in my view, a modern version of “the ends justify the means”: if you want to progress and survive, you’d better be breaking things.
This may be fine advice in some cases, but with AI it can be risky. For instance, a broken recommender system that can’t protect kids isn’t ready to decide what they see online 24/7, and an AI system whose supposed goal is to “organize science” shouldn’t be as prone to making things up as Galactica is.
Companies have reasons to be careful about deploying AI into the world, but its capacity to cause harm sits far too low on that list (whereas investors’ anger is probably near the top). OpenAI, by contrast, has been especially careful with alignment — which they consider a top priority — and safety (so much so that people are reportedly angry at the way they’re doing things).
Still, Sam Altman tweeted this on Saturday:
Companies optimize for progress and only minimize collateral harm afterward. If harm minimization gets in the way of progress, they adopt a “better to ask forgiveness than permission” mindset (as Google and Meta have been doing forever).
Could it be that these companies are right and their reasons to prioritize progress are reasonable? First, it’s undeniable that we’d progress more slowly if companies were more careful. Second, they have incentives not to make flawless AI their goal. Finally, if models work just fine most of the time, that may be acceptable.
The question now is: if humans fail a lot and companies have reasons to favor “messy” progress, why is AI-related harm deemed so dangerous?
There’s one argument that, as I see it, explains this very well.
Why an imperfect AI is more dangerous than an imperfect human
I want to clarify two things here: First, I don’t see any of this as black and white. I think that, under adequate circumstances, ChatGPT is an awesome tool. And it’s obvious people agree with this. Second, I don’t think we should strive for perfection when building AI.
What I believe is that harm prevention should be much higher on our (their) list of priorities — although humans are full of biases, companies should strive to make AI more reliable than we are.
There are many ways to illustrate this (e.g. AI systems can produce misinformation at a much larger scale), but I’ll stick to the argument that I think best encompasses all the others: it’s not the amount of AI imperfection that is dangerous, but the way in which AI is imperfect.
Humans fail, but the biases that govern our mistakes are shared by all of us.
When you saw that picture at the beginning, maybe you thought it was a deformed animal, maybe a goat, and maybe you got it right. In any case, although we may have differed in the specifics of our interpretations, there’s an underlying coherence (i.e. it looks like an animal, most likely a mammal, and there’s something weird about it).
AIs don’t do that. Even if the degree of imperfection of an AI system like ChatGPT is similar across tasks and domains — as measured by some benchmark — the failure modes of AI often feel alien to us.
Those of you who have tried ChatGPT firsthand know this (I illustrate it with ChatGPT because it’s the best AI chatbot out there). When language models fail, they often do so in distinctly inhuman ways:
This kind of mistake feels bizarre because a human would never make it. We’ve seen this so many times (with GPT-3, LaMDA, BlenderBot 3, Galactica, etc.) that it gets tiring to repeat the same arguments over and over again.
More importantly, we can’t predict these errors because we don’t know, for the most part, how, why, where, or when an AI system will fail (we’re very limited by the only testing method at our disposal: sampling). Their “cognitive” structure, internal functions, and learning processes are too alien to us.
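To make that limitation concrete, here’s a minimal sketch of what testing by sampling looks like from the outside; the `query_model` function and the tiny probe set are hypothetical stand-ins for illustration, not any real vendor’s API:

```python
# A minimal sketch of black-box testing by sampling. `query_model` and the
# probe set are hypothetical stand-ins, not any particular vendor's API.

def query_model(prompt: str) -> str:
    """Pretend to call a black-box language model.

    We have no access to the model's internals, only to sampled outputs.
    The canned answers below simulate a typical failure: a confident,
    plausible-looking reply that happens to be wrong.
    """
    canned = {
        "What is 17 * 24?": "17 * 24 = 418",            # plausible but wrong
        "Who wrote 'Don Quixote'?": "Miguel de Cervantes",
    }
    return canned.get(prompt, "I'm not sure.")


# A tiny probe set with known answers. Real evaluations use thousands of
# prompts and still cover only a sliver of the possible input space.
probes = {
    "What is 17 * 24?": "408",
    "Who wrote 'Don Quixote'?": "Miguel de Cervantes",
}

failures = []
for prompt, expected in probes.items():
    answer = query_model(prompt)
    if expected not in answer:
        failures.append((prompt, answer))

# All we learn is *that* the model failed on these samples,
# not *why* it failed or where else it will fail.
print(f"{len(failures)}/{len(probes)} probes failed")
for prompt, answer in failures:
    print(f"  prompt: {prompt!r} -> answer: {answer!r}")
```

The point of the sketch is what it can’t tell you: a failed probe says nothing about why the model got it wrong or which neighboring prompts it will also get wrong.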
Human biases, although ubiquitous and universal, are more understandable and tractable, and thus less dangerous.
All of this would be fine if it were just a theoretical observation about how AIs and humans differ from one another. But the truth is that, contrary to what AI pioneer Yann LeCun seems to think, second- and third-order consequences are already having an impact in the real world.
The Verge’s James Vincent published an article yesterday on Stack Overflow’s decision to temporarily ban ChatGPT. The reason? “The volume of incorrect but plausible-looking replies was just too great for [moderators] to deal with.”
Compared to previous models (like GPT-3), ChatGPT makes fewer mistakes. And here’s the key: companies are building AI models that are less prone to failure, but they’re not working to make them fail more like we do.
ChatGPT’s failure rate is lower, but its alienness isn’t getting any better. If anything, it’s getting worse: it’s weirder to encounter a chatbot that randomly shifts from super-intelligent to completely clueless than one that’s always dumb.
Taking this argument to the extreme, imagine an AI that was human in each and every sense (appearance, behavior, thinking patterns, intelligence, etc.) but made random mistakes at random times.
The most likely outcome? We’d grow an undeserved trust in it, only for it to catch us off guard with weird and unpredictable behavior. We can’t even begin to imagine the potential implications.
If you accept these arguments, we have two options:
We can either re-adapt society to AI (e.g. by educating people on what these systems can and can’t do, and evolving accordingly) or bring AI closer to us — in how it’s built and how it learns — so that it adapts to us, to society, and to our ways of doing things.
Which one do you think makes more sense?