AI for Normal People - Where to Be Wary
Generative AI is a powerful tool that can be exceedingly efficient at short-cutting certain tasks. But here's where you need to be cautious.
For the last two decades I called the tech sector home. In that world you learn about new technologies that sometimes take years to reach consumers. Generative AI (GenAI) is one of them. It’s a powerful tool - and like many technologies, it is exceedingly efficient at removing tedious tasks. But that doesn’t mean it comes without warnings. Here’s what you need to be aware of.
In my prior post AI for Normal People, I gave a smidgeon of background on artificial intelligence (AI), Natural Language Processing (NLP), and GenAI. As I mentioned, none of these are really new technologies, as each has been explored and tinkered with for at least 50 years. What is new is that each is tickling the public’s imagination with new applications that are leap-frogging into our lives.
When we give voice commands to Siri and Alexa, we are doing so through NLP technologies. Rather than using a keyboard or arcane Boolean commands (like quotation marks and operators such as AND and OR), you just use your voice. Simply say “Siri” or “Alexa” … and ask your question.
Of course there are times when those voice assistants don’t understand us — if our questions are too complex, we speak with an accent, or there is background noise. But they seem to get more reliable with each passing month.
GenAI is a type of artificial intelligence that uses Natural Language Processing to understand human-readable questions and generate human-like outputs. It produces its answers using an LLM - a Large Language Model.
An LLM is a collection of machine learning algorithms that become “smart” by training on massive volumes of data. And when I say massive - I mean, most of the internet. Like our brain, the LLM then encodes and decodes what it has “consumed” to generate answers. It learns for example, that following the words “Mary had a little …” the answer is usually “Lamb.”
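For the curious, the “Mary had a little …” idea can be sketched in a few lines of code. This is a toy illustration only - real LLMs use neural networks trained on billions of documents, not simple word counts, and the tiny corpus below is made up for the example.

```python
# Toy sketch of next-word prediction: count which word most often
# follows a given phrase in a tiny (made-up) training corpus.
from collections import Counter

corpus = [
    "mary had a little lamb",
    "mary had a little lamb its fleece was white as snow",
    "mary had a little dog",
]

def predict_next(prompt: str) -> str:
    """Return the word that most often follows `prompt` in the corpus."""
    prompt_words = prompt.lower().split()
    followers = Counter()
    for sentence in corpus:
        words = sentence.split()
        # Slide over the sentence looking for the prompt phrase,
        # and tally whichever word comes right after it.
        for i in range(len(words) - len(prompt_words)):
            if words[i:i + len(prompt_words)] == prompt_words:
                followers[words[i + len(prompt_words)]] += 1
    return followers.most_common(1)[0][0] if followers else ""

print(predict_next("mary had a little"))  # -> lamb ("lamb" follows twice, "dog" once)
```

An LLM does something conceptually similar, but instead of counting exact matches it learns statistical patterns across essentially the whole internet - which is why it can complete phrases it has never seen verbatim.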
There are literally dozens of LLMs - and more popping up all the time. Some of the most popular are OpenAI’s GPT (the model behind ChatGPT), Google Gemini, Anthropic’s Claude, Meta’s Llama, and Cohere’s Coral. Applications that boast machine learning are likely using one of these LLMs at their core.
As I mentioned, an LLM trains on massive amounts of text data to learn about the world at large - and then parrots back answers to almost any question. It’s fun to watch the conversations flow from it.
It can be exceedingly polite when it responds — “Is this the right answer, Diane?” so much so, it can fool you into thinking it’s sentient and all-knowing.
Fake News
But here’s the thing: while LLMs can create lovely, well-constructed sentences, they are parroting the semantic connections they make - without any concern for whether or not those connections are truthful.
The tech world calls these fabricated answers hallucinations. And they are a huge problem.
You usually don’t have to worry about simple facts like “what is AI?” - so much has been written about the topic that LLMs will look across all the sites that define AI and parrot back a few sentences that are pretty much uniform. It’s when you go deeper, asking the model to expound, that it is likely to spew out run-on sentences that might look good to a desperate teen trying to get a paper done with a little - ahem - assist. But in actuality, all those words will send up an alarm that screams “THIS WAS DONE WITH AI.”
Plagiarism
Schools are cracking down on the use of GenAI. Educators (and publishers) have plenty of tools that can suss out the probability that your prose is AI-generated. Both consider it plagiarism. If your kid turns in an AI-generated essay, chances are you will be hearing from the school - and it will be a very unpleasant conversation.
Bias Perpetuation
Those large volumes of public data are not vetted in any way. If a large body of content implicitly or explicitly states that a certain demographic behaves a certain way, the LLM can easily accept that as the norm. Be careful not to perpetuate prejudices.
Security & Privacy Breaches
Security can be a huge problem with LLMs. If you feed a confidential document into an LLM to see if there is a better way of phrasing it, you may accidentally end up exposing that sensitive information. Tax returns, Social Security numbers, contact information (personal private data), and any proprietary business data or creative assets - like songs or manuscripts that you don’t want published - SHOULD NEVER be fed into a public LLM.
Microsoft Copilot is able to work across all your Office and Windows files. You might see reference to the word “grounding.” It means the LLM’s output is heavily influenced by your own information - which helps to lessen hallucinations. Microsoft claims that your information will not be used to train a public LLM. However, it is not clear that any new work Copilot creates - like a PowerPoint or Word doc - won’t be shared. Firms like this one keep a running list of security concerns. But I would make sure any derivatives are cleansed of proprietary information.
Provenance of Research
One of the joys of an LLM is that you have a ready made research assistant. Or do you?
Since answers are generated by aggregating information across potentially millions of documents, it is very hard to validate results. And again, a lack of citations is a red flag that your document is plagiarized. My suggestion: use your LLM to get the gist of a subject - and then look for established, scholarly research to support or refute the findings.
Economic Abuses of the Technology
While Natural Language Processing has been around for almost 50 years, the use of Generative AI by the masses is pretty new. And being so new, likely there will be a slew of legal and ethical concerns raised. For instance, newspapers are furious that these language models are being trained on news archives for commercial purposes — with no compensation going back to the original creators.
Job Taking
There is tremendous fear from workers that AI - including GenAI - will impact their jobs. ADP recently asked 35,000 private-sector workers in 18 countries how they felt about artificial intelligence. Eighty-five percent were convinced it would impact their jobs - with 44% saying it would benefit them, and 41% saying - nope, it’s going to hurt. Fifteen percent said it would have no impact, or they weren’t sure of the impact.
When looking at just folks in North America, uncertainty of impact more than tripled, with 38% saying it would benefit, 33% saying it would hurt, and 30% saying they weren’t sure.
Thinking It Sounds Better
This is sort of a follow-on to the point above. Google just came out with an ad for its GenAI offering, Gemini. In it, a dad is helping his young daughter write a fan letter to Olympic track and field star Sydney McLaughlin-Levrone, and turns to Gemini to get it done. The fictional dad says: “I’m pretty good with words, but this has to be just right.”
There is nothing heartfelt about AI-generated text, as it is void of human experience.
Syracuse University Professor Shelly Palmer was rightfully appalled.
“It is one of the most disturbing commercials I’ve ever seen. This is exactly what we do not want anyone to do with AI. Ever.”
The sad part is that the (hopefully) fictional father had a real opportunity to help his daughter write a heartfelt letter. He could have used AI to learn more about Sydney and find commonalities. And he could have helped his daughter explore and explain why she was so intrigued by Sydney. Instead, he cheated at being a parent and taught his daughter that her own voice isn’t good enough.
The Upshot
Being nervous about new technologies is normal. If you are in the workplace, or are the parent of young adults — I encourage you (and them) to experiment with LLMs and the many applications that are sprouting up. Understanding how they can be useful - while being mindful of the downsides - is the best way to prepare yourself for a world that will eventually embrace AI.
At the very least, AI will help eliminate tedious tasks and give you shortcuts to understanding virtually any topic out there.