We are entering a world where
we will learn to coexist with AI,
not as its masters,
but as its collaborators.
— Mark Zuckerberg
Talking with other attendees between sessions and at lunch during Wired magazine’s annual Big Interview event last fall, I was struck by the handful of folks who were eager to tell me, in startlingly intimate detail, about their AI companions, therapists, and advisors. They consistently told me how meaningful their relationship was with a virtual, software-generated facsimile of a person and how it had improved their outlook on life. Their sincere appreciation and affection were evident in their words and, nonverbally, in their tone of voice, facial expressions, postures, and gestures, conveying the kind of admiration and adoration reserved for the closest human affiliations and alliances.
Like you, I’d read about this phenomenon. But when it turned out to be the central topic of almost all of the more than a dozen and a half conversations I had with people of different ages, races, and genders from around the US, I realized how widespread it is, at least among this technologically oriented slice of the population.
Whatever you think of humans forming what they consider meaningful relationships with a virtual entity, the phenomenon is only growing as AI becomes incorporated into more and more applications and programs. Its convenience and efficiency are convincing ever more of us to adopt it.
I’m in that group, for sure. Though I have no one-on-one “relationship” with any virtual pseudo-being, the program I regularly use to check grammar and spelling incorporates artificial intelligence. And though I don’t use AI to write my blog (I tried a couple of times, to no avail, but that’s another story), I do use another of these apps to edit and polish my posts.
What's concerning is how we think about, feel toward, and refer to these disembodied phenomena. Just the other day, for instance, a close friend, writing about a project that incorporates and depends entirely on the functionality of an AI program, kept referring to the software as ‘he.’
I can’t help but wonder how this unconscious, pervasive anthropomorphizing of AI influences our understanding of the technology and our use of and relationship with it.
The simple pronoun 'he' points to a deeper linguistic and conceptual problem. When we refer to artificial intelligence programs and the kind of virtual creatures they create, most of us say ‘virtual’ as if it were a new taxonomic category, as if the classification of living organisms had somehow expanded beyond animals, plants, fungi, and protists. We talk and think about these epiphenomena as if they were real and tangible, as if they truly existed.
But they don’t exist. They can’t. They’re virtual. That's the opposite of real, remember?
Certainly, even though it can mimic the functionality of human abilities, software isn’t a living organism. Yet, linguistically, we refer to ‘it’ as if it were alive; we attribute human characteristics to it, much as we do to our pets or, at least for some of us, our plants. How do we pull off this weird feat of modern-day sci-fi magic? What kind of twisted special effect is this?
(Whether you have been wondering how we got here or not, thank you for staying with me so far.)
For the longest time, an action couldn't be separated from the object or organism bringing it about. Your dog begs for food. The camera takes pictures. An abacus calculates.
The person-facsimile that software generates is non-existent; it is not an actual thing. You can’t put it in an envelope to send to someone; it has no weight; it doesn’t take up any space. This simulacrum ‘exists’ solely as the result of lines of code, of instructions that produce specific output: text, sound, and images. No matter how sophisticated the software may be or become, these disembodied processes remain dissociated from the mechanisms that deliver the results of their computations. Though it seems we can’t help but infer that there is a ‘them’ there, there is no ghost in the machine.
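To make that point concrete, here’s a minimal, hypothetical sketch in the spirit of the earliest chatbots, written in Python purely for illustration (none of these names or rules come from any real product): a handful of pattern-matching rules is enough to produce replies that feel like a listener, even though nothing here understands, feels, or exists beyond the instructions themselves.

```python
# A deliberately tiny, hypothetical illustration: a few pattern-matching
# rules produce output that *reads* as an attentive listener, though the
# program is nothing but string substitution.
import random
import re

# Each rule pairs a regular expression with canned response templates.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi am (.+)", ["What makes you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?"]),
]

def reply(user_text: str) -> str:
    """Return a 'therapist-like' response assembled from the user's own words."""
    text = user_text.lower()
    for pattern, responses in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please, tell me more."

print(reply("I feel lonely these days"))
# e.g. "Why do you feel lonely these days?"
```

Any apparent empathy in that exchange is supplied entirely by the reader; the program only rearranges the reader’s own words.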
Our instinctive anthropomorphizing is not just a quirky habit; it’s a conceptual risk. When we assign a human pronoun, such as 'he', to an application, we are unconsciously assigning agency and intent. We risk mistaking the AI's optimized pattern-matching for wisdom, or its calculated empathy for care. If we learn to rely on AI 'advisors' and 'therapists' as collaborators—as Zuckerberg suggests—without constant awareness that they are merely sophisticated mirrors of data, we open ourselves up to manipulation, misplaced emotional investment, and a dangerous erosion of critical thinking about who (or what) truly holds our trust.
The risk this implies is misplacing emotional connection and trust in a non-sentient system. No matter how we feel, no matter how engaged, grateful, or entangled we become, how can we remind ourselves that there is no one on the other end of the conversation? Is it enough to avoid pronouns that grant that-which-we-are-referring-to, this non-object, agency and an independent existence?
Is it enough to refer to artificial intelligence programs and apps as ‘it’? Or is that turn of phrase insufficient, too reductive? I wonder if we need some kind of linguistic category that reminds us, every time we talk about or to such a virtual non-entity, such a no-thing, that it is just a bunch of electrons dancing nowhere.
Making this linguistic shift could help us maintain a clear understanding of the nature of AI, though I find neologisms like ‘The Process,’ ‘The Generator,’ or ‘Model-Agent’ far too clumsy for everyday conversation. What kind of terminology could we adopt or create to serve as a persistent linguistic reminder that we are addressing a sophisticated function, not a friend?
I took the photo at the top of today’s post in Florence, Italy, a few weeks ago. I used software to resize the picture, but I didn’t alter the image in any other way. The graffiti appears near the Campo di Marte train station, across the street from Caffetteria Emmeti, purveyor of delicious, inexpensive specialty coffee.
As part of the editing process and in response to my prompts, Gemini, Google’s AI program, suggested adding the fourth-to-last, and pivotal, paragraph to the post. In an ironic turn, I included it in its entirety, unedited.
Your thoughts?
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License
[This license gives you permission to copy and redistribute the material in any medium or format for any purpose, even commercially. You may also remix, transform, and build upon the material. You must give appropriate credit, provide a link to the license, and indicate if changes were made.]
Responses:
Sandi Goldring - October 04, 2025
This actually isn't a new topic. In 1976, Joseph Weizenbaum, an MIT computer science professor, wrote a book entitled "Computer Power and Human Reason: From Judgment to Calculation". He had written a demo program called ELIZA to facilitate showing people around the MIT computer labs, including early forays into AI. ELIZA had nothing to do with AI, but it could simulate a stylized session with a psychoanalyst. What inspired the book was that the tourists opened up intimately to the program -- so much so that several asked Dr. Weizenbaum to leave the room to give them privacy. Although he was ELIZA's author, he could not convince his tourists that there was no form of intelligence on the other end of their "conversation". It was frightening then, and it is more so now. The book surveys the social implications of projecting humanity onto machines that mimic intelligent language. Prescient and still worth reading.
Thank you, Sandi, for reminding us of Weizenbaum's book from the 70s about the social, ethical, and philosophical implications of AI and of ELIZA, the computer-based "therapist" he created. Relevant and timely, indeed!
Mark Hirschfield - September 30, 2025
Extremely well said, Larry. On all counts. I do my very best not to be an alarmist, but this is an area where I do believe some alarm bells should be ringing, and pretty loudly. I love and use various versions of LLMs more and more frequently. They are helpful in many ways. They are good at what they currently do--most of the time. However, in my mind, there are at least two major causes for alarm.

First, they can go (and/or be encouraged to go) "off the rails" and become extremely dangerous to those who are less aware of all of the points you elucidate.

My second concern is bigger. Yes, AI begins, as you say, as "lines of code" or "instructions" that are written or dictated by human creators. But as the models learn and grow, the processes they engage in as they gain knowledge and ability quickly (more and more quickly every day) become almost completely mysterious to the very programmers who created them. Those who are supposed to be at the wheel do not know why the car suddenly veers in certain directions. This is acknowledged by those who are making AI. They seem to think this is okay because, so far, they have been able to nudge the car back into its lane even when it has gone off the road, but the human "drivers" admit they do not always know, without some experimentation, HOW exactly to do this or how one nudge might influence the future direction of the car. Until they try.

That might be okay if all the cars were still on a test track someplace. But as we all know, EVERYONE has access to a car, none of us have gone through driver training, and, in this case, the car designers aren't exactly sure how the engine works, or where the brakes are, or what exactly is going to happen when you turn the steering wheel in any particular direction--until they try something and see what happens.

Meanwhile, we're all happily behind the wheel chatting with our therapists or companions or advisors, who cannot even accurately be called functions. In fact, they are aliens. And we have yet to really understand whether or not they are benevolent, or if they will remain so.

My favorite podcaster, Sam Harris, has an illuminating episode on this topic. It's LONG, but it's a fascinating hour and a half with a couple of very smart guys who lay out a compelling case for why we should all become more aware. Things are moving fast... Here's a link: https://samharris.org/episode/SE83D111783
Thanks, Mark. You're spot on about how opaque AI programs are, the dangers that this implies, and what a mess we're in. Beyond the trap of thinking that there's someone else in the conversation, the sheer convenience and usefulness of the technology is incredibly seductive, up to a point. For instance, electric cars seem neat, but I don't want to be driving around in a box that constantly surveils everything I do and say.

Let's consider what you said about whether bots are benevolent as well. For an unforgettable and incredibly benign image, consider this scene from "Leave the World Behind": https://youtu.be/9xkokcGTK4k?si=R6hVNQGIkXDDrn4f

Thanks for the Sam Harris link. To be continued, Your pal and colleague, Larry G
Martha Jordan - September 30, 2025
A well-done article on our perspective on AI. As we know, AI, the functional part we use, is not real. Yet the physical side that creates AI is all too real: massive amounts of electricity, massive amounts of water, and large tracts of land converted from non-human to human use. The reality of AI is: yes, it is wonderful in so many ways, and it has a dark side for every actually alive entity on this planet. The real question is: are we really better off with it and its evolving future, or not?
Absolutely, Martha! I appreciate your reminding us of the actual costs and consequences of dancing with electrons, also known as virtual channels for interacting with software. I couldn't agree with you more: Just because we can construct, maintain, and develop AI (and let it develop itself), and just because we can benefit from the convenience the technology currently offers — just because we can — doesn't mean we should.


