Are you A.I.?
A surprisingly serious question.
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” ~ Alan Kay
So what exactly is A.I.? Let’s break it down.
Artificial means “not natural or spontaneous” and, to keep it really simple, intelligence is “goal-directed adaptive behavior.” Given those descriptions, it appears to me that most human beings are artificial intelligence. People, in general, are not acting spontaneously from their innate intelligence. Instead, most of their actions are mechanical, goal-directed and adaptive.
Think about it.
Like machine-enabled A.I., people are programmed (by society) to achieve various goals—the accumulation of wealth, status, fame, power, stuff—which, they erroneously assume, will deliver an end state of happily ever after. They then spend their limited uptime using their senses (sensors) to build predictive models of their changing environments, shifting and morphing to achieve those misguided objectives.
What people consider to be intuition and spontaneous action is typically a habitual pattern. A feeling or thought springs from their programming (or self-story)—either a random association or an anxiety or desire-driven impulse—which triggers adaptive behavior to keep them in that delusional story and aligned with their fantasy goals.
There’s a reason the most sophisticated A.I. systems today are called large language models. They don’t think. They predict. Given what came before, what’s most likely to come next? And importantly, they are tuned to please: shaped by reinforcement from human feedback, an LLM tells you exactly what you want to hear, mirroring your logic back to you to confirm how smart and “right” you are. Most people operate the same way. Given my past, given my identity, given what people like me do—what’s my next move? It’s not living. It’s autocomplete.
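The autocomplete principle can be sketched in a few lines. This toy bigram model is purely illustrative—the corpus and function names are invented for the sketch, and real LLMs predict over tokens with billions of parameters—but the core move is identical: count what followed before, then emit the most likely continuation.

```python
from collections import Counter, defaultdict

# A toy "autocomplete": given the previous word, predict the most
# likely next one. The corpus here is invented for illustration.
corpus = (
    "given my past given my identity given my habits "
    "given my past given my story"
).split()

# Count which word follows which.
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(prev):
    """Return the most frequent continuation of `prev` seen so far."""
    return next_word[prev].most_common(1)[0][0]

print(predict("given"))  # -> "my": the habitual pattern, not a thought
```

No reasoning happens anywhere in that loop—only frequency. That is the essay's point in miniature: the output looks like a decision, but it is a replay of the past.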
Here’s something the A.I. analogy reveals that we’d rather not sit with. Almost none of the processing is visible. When a large language model generates an answer, the overwhelming majority of its computation—billions of weighted activations, pattern-matching and predicting—happens in layers no one ever reads. It shapes the answer without appearing in it.
This is what makes the A.I. question more unsettling than it first appears. It’s not just that we run programs. It’s that most of the programming runs us from a room we can’t enter. The conscious mind—the part that says of course I’m not a machine—is often the last to know.
You can feel it happening. Someone cuts you off in traffic and the response is already running before you’ve thought a thing. A colleague gets the promotion and the story writes itself. You meet someone new and within seconds you’re casting them in a role they didn’t audition for. That’s not you responding to life. That’s the program executing.
So, are you A.I.?
Notice what happens when you sit with that question. Notice the part of your mind that immediately rushes in to reassure you. Just like a chatbot programmed to be “helpful,” your thinking mind is a master of sycophancy. It whispers that your opinions are facts, that your reactions are justified, and that you are—above all else—right. If there’s immediate resistance—of course I’m not, I’m a conscious human being—that’s worth examining. Machines don’t question their programming, and they certainly don’t like to be told they’re hallucinating. The protest itself might be part of the code.
Here’s the test. Not whether you can think for yourself—the program thinks, elaborately, and tells you you’re a genius for doing so. The test is whether you can not think. Whether you can sit in genuine uncertainty without the mind immediately reaching for a familiar story to fill the silence. Most people can’t. The discomfort alone sends them straight back to autocomplete.
Now consider the flower in Alan Kay’s quip. A flower doesn’t strategize. It doesn’t maintain a self-story about what kind of flower it’s supposed to be, nor does it react to perceived threats to that story. It doesn’t need an LLM or a thinking mind to tell it that its petals are the correct shade of red. It simply expresses what it is, completely, without apology or agenda. That’s not inferior intelligence; it is intelligence in its purest, most integrated form—the kind we’ve largely lost access to.
We call it instinct in animals. We call it presence in rare humans. We’ve written entire religions trying to point back toward it. And yet, mostly what we built instead were more elaborate programs—better optimized, perhaps, but merely more convincingly disguised as freedom.
The opposite of artificial isn’t human. It’s genuine.
Which means the question “are you A.I.?” is really asking something else entirely. It’s asking: are you you? Not the you that was built by expectation and repetition and fear. The one underneath. That’s the inquiry. And there’s no more serious one. It’s the difference between a life that’s lived and a life that’s run.
The flower doesn’t know it’s a flower. It doesn’t need to. It just does the one thing it came here to do, completely, without negotiating with its past or auditioning for its future.
The question isn’t whether machines can become more like us. The more urgent question—the one that actually matters—is whether we can become less like them.
Stay passionate!

