AI Isn't the Problem. People Are.
Misused tools don’t make villains—people do. Here’s how (and why we need) to keep the story honest.
“AI is theft. Full stop.”
That was the headline of a recent essay by someone I admire deeply. The piece itself was nuanced, but the headline was reductive and framed the issue incorrectly – fanning the flames for the hundreds of commenters who felt similarly.
Their responses are what buoyed me to write this – although old insecurities are haunting me. Despite the encouragement of you readers, I still feel ill at ease in this literary world – a world of authors and academics who hear “Edgar” and think publishing award – whereas I think of the SEC database.
But it’s precisely because I am new in this world that I feel a need to speak up. Because the future of creative work isn’t just shaped by those who have “made it” – but also by those of us who are trying to forge our own paths.
I’ve spent almost two decades of my life working for companies that developed some form of AI – and I can tell you, AI itself is not theft. If we want honest public discourse, words matter. The headline not only skewed the piece unnecessarily; it foreclosed the debate we need to have.
Artificial intelligence (AI) is a collection of tools that relies on pattern recognition and probability. The term AI is a broad cloth that blankets everything from spam filters to finding an Italian restaurant near you; from flagging passive sentences to (hopefully) having your Substack surface on Google.
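To make "pattern recognition and probability" concrete, here is a minimal sketch of the idea behind a spam filter: a tiny naive Bayes classifier. The training messages and word counts are invented purely for illustration – real filters use the same principle over millions of messages.

```python
# A toy spam filter: count word patterns in known spam and known "ham"
# (non-spam), then classify a new message by which class makes its
# words more probable. All example data is invented for illustration.
from collections import Counter

spam = ["win money now", "free money offer"]
ham = ["lunch meeting tomorrow", "see you at lunch"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)

def score(message, counts, total):
    # Laplace-smoothed probability of each word under this class
    p = 1.0
    for word in message.split():
        p *= (counts[word] + 1) / (total + 2)
    return p

def is_spam(message):
    s = score(message, spam_counts, sum(spam_counts.values()))
    h = score(message, ham_counts, sum(ham_counts.values()))
    return s > h

print(is_spam("free money"))      # True  – matches spam patterns
print(is_spam("lunch tomorrow"))  # False – matches ham patterns
```

No understanding, no judgment – just counting which patterns a message resembles more.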
Each of those “AI” capabilities built on the previous one (albeit in a quiet, non-threatening way) – and each was acceptable. That is, until generative AI (GenAI) started generating text, music, images, and code. It wasn’t just flagging passive voice; it was generating entire sentences, paragraphs, and images.
But it’s not creating with any sentient feeling.
AI Is Math and Pattern Recognition
I mentioned pattern recognition. At the heart of AI is a lot of math and a lot of repetition. If you think about it, how AI learns is akin to how humans learn. If we are writers, musicians, or artists, we studied the masters who came before us, learned their techniques, and practiced them for hours – until we began applying them. We adopted and then adapted them – embellishing them with our own soulful touches, our own lived experiences.
In the AI world, large language models (LLMs) are fed countless documents and images and learn the patterns underlying linguistic, musical, and artistic rules. The model generates output, and humans rate the results with a thumbs up or down. Rinse and repeat.
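The "learning patterns from text" part can be sketched in a few lines. The toy below is a bigram model: it counts which word tends to follow which, then predicts the most likely next word. Real LLMs operate at vastly larger scale, over tokens and with neural networks, but the underlying idea is the same – probability estimated from observed patterns, with no understanding attached. The training sentence here is made up for illustration.

```python
# A toy "next word" predictor: learn which word most often follows
# another in the training text, then predict from those counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, what follows it
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the word most often observed after `word`
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" – it followed "the" most often
```

Swap counting for a neural network and scale the corpus up by billions, and you have the skeleton of a language model – pattern and probability, rinse and repeat.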
While humans take years to learn, the computer processes far faster. And the computer does learn and can mimic with astounding accuracy. It can also hallucinate – like creating compelling – yet fraudulent – titles attributed to real authors.1
The computer cannot discern. It can’t judge. It can’t feel empathy.
Does that make AI inherently bad? Not at all. Like the internet, it is neither good nor bad. That is why the headline “AI is theft” is so grating – not to mention that it suggests anyone using GenAI is somehow abetting a crime.
AI isn’t theft, but using stolen material to train it is.
Theft is an action – perpetrated by humans. Humans make bad decisions with the tools. Humans decide to train LLMs on copyrighted materials; humans decide to fire people and use machine-generated copy, code, and images. Humans decide to forgo editors. Humans create forgeries. Humans steal using a variety of tools, including AI.
The efficiency of the computer means bad things can happen at scale – just like good ones can. As AI mimics styles, it can also identify tumors invisible to the human eye. Like any tool, AI reflects the intent of its user.
Currently, AI is recognizing certain types of tumors better than many of the best radiologists. On the other hand, it may trigger unnecessary rescans. Understanding its limits is key – which is why radiologists are still needed.
[I do worry that health insurers will decide doctors aren’t needed at all and will only use machine learning to diagnose tumors – or decide to program the AI to refuse claims for treatments. That is a different fight, and one we need to have.]
Are kids using generative AI to do homework and write essays?
Are companies using copyrighted works to train their LLMs?
In both cases, the answer is yes, and they are doing so at appalling rates. They are stealing from creators – and in the case of students – cheating themselves.
But the argument that AI (or GenAI) is theft is decidedly the wrong one – and absolves the abusers of their guilt.
Valid Arguments Against Generative AI
Here are a variety of better statements around GenAI:
AI should not be trained on copyrighted works – without permission
AI should not mimic a person’s appearance or voice – without consent
AI can use a tremendous amount of water and energy (should we be using it?)
AI can cause people to lose their jobs
In all these cases, AI is not the guilty party – the bad actors are the large corporations and individuals who decide to use it in malevolent or careless ways.
Here’s another compelling argument:
AI has no soul.
Authenticity Requires Being Personal
A year ago I published a piece stating that in order to write authentically – you must embrace the “I.” Your personal experiences, your shared knowledge and vulnerabilities, are what resonate with others. I’m not an artist, but I would extend that to “you must embrace the ‘eye’” – which as we know, is the window to the soul.
Indeed, the person who wrote the headline I am taking issue with addressed the very things I mentioned in my essay – whatever your creative endeavor, success is a culmination of:
Your failures
Your skills
Your life experiences
Those who have influenced you
This isn’t just theoretical for me. As a writer (who really can’t draw), I face the challenge of visually representing my work without a design team or trust fund. I have used AI to create some hero images for my essays. I paid an artist to create an image of an older woman in an anime style, and I developed a color palette.
Using Microsoft Designer2, I uploaded that anime image to serve as “a model” for my illustrations, along with the palette and my concept. I would revise my prompts until it produced a satisfactory image.
Here are some of them.


Although I never received any direct feedback, I stopped generating them after reading many arguments that people were sick of looking at sterile AI-generated illustrations. I get that. There is no soul in AI-generated art.
That point should give pause to companies that want to circumvent human beings in favor of AI-generated content. AI is masterful at pattern recognition – but as both the essayist and I have stated, it cannot draw on a lifetime of emotional human experience to create the nuance that connects with others.
So where do I think generative AI belongs? Here’s what I think it is excellent for:
Analyzing vast amounts of information (prose and/or spreadsheets) and distilling it to the point where you, the human, can make a judgment
Analyzing vast amounts of imagery to find an abnormality – so a human can make a judgment
Removing tedious tasks – that leave time for creators to do more high-value work
Menu-planning
Being creative with your family
Staying on budget
Identifying where to go in Sicily with a family member in a wheelchair or stroller.
In each case, AI is merely returning a suggestion to you. Individuals then weigh the suggestions and decide the course of action.
Like all the technological advancements in our lives, there are good ways of using AI and bad ones. But the advancement itself is agnostic. The key is in how individuals (and organizations) choose to use it – and whether they do so in a way that respects others.
AI is not theft. People steal and cheat.
To live, create, and define what’s real with integrity, we need stronger guardrails—and better questions. Because the tools won’t ask them for us.
If you are enjoying this, please consider sharing with a friend or leaving a comment – it’s how others find my work, and it keeps me inspired. Thank you!
What I Am Reading
I adored Ruth Ozeki’s meta-biographical A Tale for the Time Being. And then the library pinged that the seventh and final book in the Sarah J. Maas Throne of Glass series was available. Kingdom of Ash is a whopping 992 pages in hardcover (1,300 on my Libby app). There are just shy of 100 characters woven into the multi-setting battles of darkness versus light. Somehow, the sometimes-verbose text holds my attention. Battle lovers decried the lack of killing scenes. That doesn’t bother me at all.
No idea what I am reading after this one - I just hope it’s a little shorter. My eyes need a bit of a break …
Unbelievable. The Chicago Sun-Times published a summer reading list of fake books attributed to real authors. The Sun-Times says it bought the syndicated piece from King Features. King Features says it fired the freelancer. No word yet on whether they fired the person who decided that freelance work doesn’t need to be copyedited.
Microsoft indemnifies users from copyright violations – my assumption is that it is not training its LLMs on copyrighted material.
I'm glad you liked Ruth Ozeki's novel. I still think about it.
Diane,
As you know, anyone who uses spellcheck or a calculator has been using AI.
My nephew has built a career with AI. He works for Thomson Reuters in their technology research department. I don't fully understand what he does, but he talks about how much better doctors can diagnose symptoms. He traveled to San Francisco, where he won the Generative AI Agents Developer Contest by NVIDIA and LangChain. He also posted on his LinkedIn that he took a class and was certified in Technoethics. On his About Me he says: "Philosophically, I view creating software as an art form similar to writing music or creating traditional art."
I totally agree with you. It's not the technology; it's the people using it and how they use it.
I do worry that if everyone uses AI to write, from students' papers to emails, it will homogenize how people talk and write.
Another thing AI can do is create an Excel graph for someone participating in a swim challenge. And help one figure out how many yards, feet, and miles one has swum. I know I could do this myself because I know how to create Excel graphs, and I know how to calculate feet, yards and miles. It will "show its work," so you can see how it arrived at its answer.
I wonder if the younger generations will be hurt by relying too much on AI. They used to tell kids you will not always have a pocket calculator available, but they were wrong. Nowadays, just about everybody has a calculator in the smartphone in their pocket. What if the technology to develop those calculators was considered stealing from the original mathematicians who figured out the formulas?