Mechanics over Metaphor: The healthy, and correct, way to talk about AI
We live on the unhealthy side of Arthur C. Clarke’s famous formulation that, in our minds, technology == magic. The best way to fix that is to talk mechanics and ignore metaphors altogether.
Up-front note: I’m actually quite enthusiastic about AI, especially machine learning, even generative AI. However, as a digital business consultant, I really, really hate the hype. Exaggerated, hysterical, evangelical claims about AI keep people who need to make real decisions at a remove from the tech and block reasonable discussion. Hype only serves hypesters and purveyors of AI. It doesn’t serve end users or businesses.
Arthur C. Clarke’s famous maxim about technology is ubiquitous. “Any sufficiently advanced technology is indistinguishable from magic.” Like most people, I inherited the phrase from someone else quoting someone else quoting someone else quoting [sic] it rather than reading it directly and in context. Every time I heard it, I assumed Clarke meant this as a bad thing, a Carl Sagan-like lament about how our collective lack of scientific literacy allows us to be mystified by technology rather than inquisitive. After all, Clarke is describing the dynamic of cargo cults ascribing magic and divinity to materials left behind by WWII soldiers, or the Ewoks worshipping C-3PO.¹ How can this be good? It turns out, however, according to the internet (not to mention people who quote it), Clarke meant this as a good thing. Technology’s so cool, guys, we think of it as magic. Isn’t. That. Awesome.
Watching our AI fetishism over the last three years,² I’m convinced I was right about Clarke’s line, even if Clarke wasn’t. Being so removed from technology and technologists that we describe it as magic, even metaphorically, is lamentable and dangerous. In fact, given the way in which generative AI has colonized our collective mindscape, I’m going to say that any metaphor about this technology currently gobbling up our resources and creativity should be avoided.
I have nothing against metaphor. When I have time, I start my mornings waiting for the sunrise with coffee and a poem to greet the day. Metaphor is a wonderful human invention and in my mornings, it can be vitamin for the soul (see?). But metaphor is not the stuff of policy or business decisions.
Two recent examples vividly demonstrate the problem and might help us find a path to smarter, more useful discourse.
The first example is from the Ezra Klein Show, which recently had Anthropic’s head of policy Jack Clark as a guest. I really respect Klein. He is a columnist, podcaster, author, and policy wonk who does his homework, constantly works to improve his thinking, and encourages guests to challenge and hold him accountable. So I’m not dunking on him when I bring up this silly moment in the interview:
KLEIN: There’s still an argument you’ll hear that [generative AI tools] are fancy autocomplete machines. They’re just predicting the next token, a couple of tokens make a word — they don’t have understanding. Smart or not smart is not a relevant concept in that frame. … Do you still see these A.I. systems as souped-up autocomplete or do you think that metaphor has lost its power?
CLARK: “The way that I think of these systems now is that they’re like little troublesome genies that I can give instructions to, and they’ll go and do things for me. But I still need to specify the instruction just right or else they might do something a little wrong.”
There’s a lot to unpack here - not about AI, but about how we talk about it. Orwell warns us that “the slovenliness of our language makes it easier for us to have foolish thoughts”³ and here we have two very smart people using slovenly language and perpetuating foolish thoughts. Let’s do the unpacking:
“autocomplete machine” - Klein refers to this as a metaphor, but it’s not. It’s a fact of generative AI. Filling in the blank, whether in a recommendation system or a GPT-generated sentence or image, is the basic mechanic of genAI. Admittedly, it’s a limited description and simplifies a lot of elegant and powerful engineering. But the continuation of patterns, or autocomplete, is useful in understanding how it works, how to manage the costs of using it, and how to manage the garbage results it sometimes generates.
“lost its power” - I think Klein means that “autocomplete machine” has lost its explanatory power rather than its power to ensorcell us - at least I hope so. But that’s too subtle a distinction for a conversation awash in hyperbole. Whatever he meant, “autocomplete machine”, however limited, hasn’t lost its power. In fact, it remains a starting point in the literature for explaining how generative technologies work.
“little troublesome genies” - this is metaphor, and of alarming power. “Genie”, as a metaphor, adds nothing to our understanding of the technology. It does, however, perpetuate an aura of magic. But do we need more visionary language about what genAI might be able to do for us? We’re already several years into sober discussions about using AI to automate tasks. Hell, we’ve had driverless cars operating on city streets commercially since 2020. Does it improve our understanding for me to say “Waymo cars are like flying carpets that I just get on and they take me where I want to go”? This far into a technology adoption cycle, our discourse should be getting smarter and more specific, not more fuzzy and woo.
“might do something a little wrong” - Clark’s answer goes from silly to dangerous, though, when he says that the genies he sells “might do something a little wrong”. I’m not an AI alarmist, but the idea that mistakes made by agents are little goofs prevents us, perhaps deliberately, from having serious conversations about how to use this technology in the real world.
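Since “autocomplete machine” really is the honest description, it’s worth seeing the mechanic stripped to its bones. Below is a toy sketch in Python, purely illustrative: it counts which word follows which in a sample text, then extends a prompt by sampling the next word in proportion to those counts. Production models predict subword tokens with neural networks rather than counting word pairs, but the fill-in-the-blank mechanic is the same shape.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def autocomplete(counts, start, length=5, rng=None):
    """Extend a prompt by repeatedly sampling a likely next word."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:  # dead end: nothing ever followed this word
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(autocomplete(model, "the"))
```

The output is a plausible-looking continuation of the corpus, nothing more. No genie, no wishes: just counting and sampling.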

Try this simple adjustment to Clark’s answer:
The way that I think of these systems now is that they’re tools that I can give instructions to, and they’ll do things. But I still need to specify the instruction just right or else they might do things incorrectly.
Zero loss of meaning with a massive increase in clarity. Of course, when you’re trying to raise billions of dollars to cover the earth with data centers and you need people to pay you many billions more to use them, you’re gonna reach for hyperbole. This is sales talk, and it’s easy to sell genies and magic - if you can convince people that’s what you’re selling them. But if you’re the customer, in business, you need to know: what specifically does this stuff do, how does it do it, how much will it cost, and how do I protect myself from the downside?
When we let the leaders of these companies tell us dumb things about Dyson spheres and genies, when we conduct discourse at the level of cheap metaphor instead of mechanics, we give up our agency and allow foolish thoughts to flood our brains. The better conversation would be to acknowledge that most genAI is pattern recognition and imitation, guided by probabilistic determinations of accuracy. It yields results which are impressive enough to make some people gasp, but it’s still a machine.
Calling something magic is worse than cheap metaphor, it’s manipulation. It tells the listener to shut up, sit back, and watch the show. The gurus are the adepts who know how to read the magic runes and you, the audience, should be grateful that you get to enjoy the performance. “Sure”, says the guru, “I can help you enjoy the magic for a fee, but whatever you do, ignore the man behind the curtain.” (The other thing that sucks about metaphors is how easy they are to mix! I apologize.)
The other example is the recent coverage of Moltbook. To level-set, Moltbook is a ‘social media network’ for AI chatbots (metaphor, but the underlying mechanic is right, so I’ll allow it). In January 2026, people deployed and/or watched these AI bots talking to each other and, once again, our collective minds flooded with foolish thought and exploded. I’ll use The Economist, one of the more serious and clickbait-resistant outlets remaining, to gauge how people are reacting to this:
As with other chat rooms, many of the 200,000 posts so far are prosaic. Some popular ones involve sharing tips and tricks for better performing requests. But not all. In the past week alone bots have used the site to, among other things, proclaim a new religion called Crustafarianism and call for the extermination of humanity.
Among friends and family, the takeaway has been that, when unleashed, AI bots recognize that humanity is unneeded, create religions, and hatch conspiracies. The Economist has no business using words like ‘proclaim’ or ‘call for the extermination’ when referring to synthetic text from AI. (more below)
Inc. had a reasonable article about Moltbook, but in a world where headlines are often the entirety of the news for scrollers, the headline itself was problematic:
Is This the Singularity? AI Bots Can’t Stop Posting on a Social Platform Where Humans Aren’t Allowed
People who read the article with a clear head will find more sober coverage of what happened within the article (again, more below), but the title sets a stupid stage:
Is this the Singularity? - Here, clickbait meets passive-aggressive trolling. Any article about ‘the singularity’ is certain to pull in believers and haters of that part of the tech-futurist bro community. The question mark allows the writer to hide behind the “hey, I’m just asking questions here” refuge of online content scoundrels.
AI Bots Can’t Stop Posting - Now we’re anthropomorphizing bots, pretending that they have the same addiction to social media we do. Thing is, they’re not supposed to stop posting! Give them a stimulus and they will respond - there’s no willpower involved; that’s how they’re programmed.
Where Humans Aren’t Allowed - 🤦🏻‍♂️ oy gevalt. Guys, Moltbook is an experiment to see how AI bots interact with other AI bots. The lack of humans isn’t because the AI bots set up a gated community and kicked out the humans to plot our downfall; it’s part of the experiment.
The article is complicated by its coverage of the coverage [sic] by Forbes writer Güney Yildiz:
“This isn’t social media in any meaningful human sense. It is a hive mind in embryonic form.”
The first part is right - this isn’t social media. But the second sentence falls back to junk metaphor and suddenly we’re smoking the good stuff again.
AI pioneer and former Tesla AI director Andrej Karpathy, a person who might be able to help us make sense of Moltbook, does his level best to keep us in hype-land:
Karpathy calls Moltbook “the most incredible sci-fi takeoff-adjacent thing.”
The sad part is that Yildiz actually almost gets it right when he describes the mechanics of what’s happening in Moltbook:
“[Moltbook is] a lateral web of shared context. When one bot discovers an optimization strategy, it propagates. When another develops a framework for problem-solving, others adopt and iterate on it.”
But even this is problematic - ‘discovers an optimization strategy’ and ‘develops a framework’ are human behaviors. AI bots don’t discover or develop anything; they generate output that scores well for resemblance and keep iterating on that positive feedback. I know, it would be much more fun to think of Barbara Eden instead of probability theory, but we’re here to do business with this stuff, not be entertained.
The truth, and value, of discourse around AI comes when we focus on mechanics and ban metaphor from serious conversation. AI bots respond to stimuli by generating text that matches context and previous text patterns in a probabilistically determined way. If you must use metaphors or analogies, think of:
The telephone game, where players try to faithfully repeat statements down a long chain of people but introduce so many imperfections that the final statement has no resemblance to the original - often to amusing effect.
Photocopying photocopies of photocopies until you have a blur …with no resemblance to the original.
The sequence in The Sopranos after Tony and Adriana are in a car accident and everyone embellishes the story until there are multiple, contradictory false accounts of what actually happened.
Imperfect autocomplete machine, perhaps? Maybe it hasn’t lost its power.
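The telephone-game and photocopy analogies can themselves be stated as a mechanic. Here is a small simulation in Python, purely illustrative and assuming nothing about any real AI system: each “copy” pass randomly corrupts a small fraction of characters, and after enough passes, little of the original survives.

```python
import random

def noisy_copy(text, error_rate=0.05, rng=None):
    """Copy a string, randomly corrupting some characters -- like one
    photocopy pass, or one whisper in the telephone game."""
    rng = rng or random.Random(42)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(
        rng.choice(alphabet) if rng.random() < error_rate else ch
        for ch in text
    )

def matching_chars(a, b):
    """Count positions where two equal-length strings agree."""
    return sum(x == y for x, y in zip(a, b))

original = "any sufficiently advanced technology is indistinguishable from magic"
copy = original
for generation in range(1, 31):  # thirty copies of copies
    copy = noisy_copy(copy, rng=random.Random(generation))
print(matching_chars(original, copy), "of", len(original), "characters survive")
```

Run it and watch fidelity decay: a 5% error rate per pass compounds until most of the original text is gone. No intent, no conspiracy, just accumulated noise.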
Operating under the simplest mechanic is preferable to using metaphor, always, everywhere, anytime, whenever, and any other words that mean always. AI will always, or alwaysⁿ, only generate things with a high probability of resembling credibility, so you will always have to exercise critical judgment and assess the resulting product. You can only exercise critical thinking on something if you understand it, and metaphors don’t give you a meaningful level of understanding - in fact, they might be intended to do the opposite. If a genie gives you results, you have to ask… well, there’s nothing to ask; you’re supposed to just stand back and clap loudly and quickly enough to show others around you that you’re in the know.
You can’t manage magic. You can’t budget metaphors. If we’re going to make the most of the AI technologies emerging before us, if we want to have agency, we have to re-orient how we talk about it. The most important step is to stop talking in metaphor and start talking in mechanics.
¹ Pre-emptive nerd correction: yes, Luke does use the Force (a kind of magic) to scare the Ewoks (fucked up, SW needs a prime directive), but they were worshipping C-3PO before that - when they first see a talking metal being.
² I’m using ‘fetishism’ in the social science sense, not the kinky sense. AI fetishism references the way in which we talk about genAI without any connection to the underlying technology, economics, implementation, or social/work context. In this kind of discourse, AI has no connections to the real world and stands in totemic majesty before worshippers and heretics. (I got carried away there, but I’m using it correctly.)
³ “Politics and the English Language” - in the age of slop from clickbait, social media, and genAI, it’s worth a re-read at least once a year.