Originally published on CUInsight.
Can you feel it? We’re in a bubble, and it is expanding rapidly. What makes up this enveloping sphere?
AI writing tools. ChatGPT-powered chatbots.
Incredible technology, right? It’s sci-fi, made real! Finally, we have the Enterprise’s computer or HAL…hmm, bad example.
These are impressive tools. But it’s important to recognize what they are…and what they aren’t. Despite seeming incredibly smart at times, these large language models (LLMs) are exceedingly dumb. They’re not AI by any means (a pop term that has now lost most of its original meaning).
At their core, they are advanced autocorrect and autofill platforms. Yet that can still be useful. I’ll get to how this impacts credit unions and your members, but first, we have to understand what these systems really are. Lots of hype, not enough data.
Sidenote: I am not going into the image generation models here. They exist, they probably have more downsides than benefits, and I make fun little graphics for blog posts with them. I’m also leaving aside their enormous energy needs, which aren’t great either.
Better. Good enough?
Take out your phone (if you’re not already using it now). Start writing an email. Notice how it suggests the next word above your keyboard? If you keep tapping an option, you’ll eventually get a run-on sentence that may or may not be total gibberish. (iOS 17 promises to improve that dramatically.)
This is how ChatGPT, Bard, or anything based on them works, too. It predicts the next word based on what has already been output. “But Joe, not only did you have a whole interview with ChatGPT, it can do all sorts of complicated things!”
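To make the “advanced autofill” idea concrete, here’s a toy sketch (Python, with a hand-built probability table I invented for illustration) of the basic loop: look at what was just output, pick a likely next word, repeat. Real LLMs learn billions of parameters from massive text corpora and condition on far more than one prior word, but the loop is the same idea.

```python
# Toy "autofill" loop: pick the most likely next word given the previous one.
# These probabilities are invented for illustration; a real LLM learns them
# from huge amounts of text and looks at much more than the last word.
bigrams = {
    "your":    {"account": 0.6, "loan": 0.4},
    "account": {"balance": 0.7, "is": 0.3},
    "balance": {"is": 1.0},
    "is":      {"available": 1.0},
}

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        # Greedy choice: always take the highest-probability continuation.
        out.append(max(options, key=options.get))
    return " ".join(out)

print(autocomplete("your"))  # your account balance is available
```

Keep tapping the suggestion bar on your phone and you’re running roughly this loop by hand.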
You’re right. And there’s absolutely a place for this technology. We just need to recognize where it does and does not fit.
What makes sense for a ChatGPT-type platform:
- Organizing tasks
- Laying out a social media or blogging schedule
- Adopting a certain tone in a piece
- Helping write a first draft of code, formulas, or other content
- Creating frameworks for replicated processes
- Diving into a topic you’re already familiar with
- Member interactions that are a step above first-generation chatbots, but still worse than a person
And here’s what you should not do with these systems:
- Replace copywriters (or anyone, really)
- Copy/paste content from it for use in any context not explicitly presented as “ChatGPT-generated”
- Trust their answers without having other sources available
- Get medical advice
- Use them in legal arguments (unless you want to forever be known as the lawyer who presented made-up cases in front of a judge)
Hallucinations: On being wrong
My greatest concern with these platforms is that they’re extremely convincing and certain. I’ve had conversations where one assured me its answer was correct…even when I knew it was wrong. Gaslighting to the extreme. This has ranged from math questions to factual challenges.
In the legal arguments referenced above, ChatGPT convincingly made up cases which never happened. Why? How?
Remember what it is: An autofill system.
The platform recognized how case records look, how they’re formatted, even trends in how they’re resolved. Then it made up its own to fit that mold. They seemed convincing because, even to trained eyes, they looked the way real case citations should.
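To see how pattern-matching produces plausible fakes, here’s a hedged toy sketch (Python; every surname, reporter, and number below is invented). It assembles strings that have the *shape* of a case citation with zero connection to any real case, which is roughly all a next-word predictor is optimizing for.

```python
import random

# Pattern completion without facts: every "citation" produced here is
# fabricated. It looks right only because it matches the visual shape
# of a real citation, not because any such case exists.
random.seed(7)  # fixed seed so the output is repeatable

SURNAMES = ["Harmon", "Delacroix", "Okafor", "Whitfield", "Ames"]
REPORTERS = ["F.3d", "F. Supp. 2d", "U.S."]

def fake_citation():
    plaintiff, defendant = random.sample(SURNAMES, 2)
    return (f"{plaintiff} v. {defendant}, {random.randint(100, 999)} "
            f"{random.choice(REPORTERS)} {random.randint(1, 999)} "
            f"({random.randint(1960, 2020)})")

for _ in range(3):
    print(fake_citation())
```

Swap the templates for billions of learned text patterns and you have, in spirit, the hallucinated citations that landed that lawyer in trouble.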
There’s no motivation towards being “right” or “wrong” with an LLM. The goal of the system is to generate the next word, not to fact-check itself. Thus, the tech industry came up with a harmless-sounding term for these errors: hallucinations.
In other words, when your chatbot hallucinates, it’s creating new “facts” to fit the response. Lying, you could say, but that implies intent. Once again, a chatbot has no “desire” to be factual or not, only to provide a response that seems appropriate.
Sidenote: Interestingly, “hallucinate” was originally a good thing, showing how the chatbot could “imagine” itself as a programming language or other platform. The original definition had nothing to do with making stuff up.
There’s also a danger of bias. While answers about Stalin mention the massive loss of life and human rights violations, the same questions about some current world leaders do not. Once again, it’s because the responses are built from content accessible online.
If certain people, policies, or ideas are associated with a lot of disinformation or propaganda, you can bet it will be reflected in answers. Without context or note. What exists, persists.
The Growing Bubble
Ok, with some background, we can go back to the bubble. You’ve noticed just how fast ChatGPT became, well, everywhere. And every tech company (besides Apple) has thrown its hat into the LLM ring as well.
Microsoft has shoved ChatGPT (it invested billions) into Bing search. Google has Bard and is also testing its “Search Generative Experience”. Nearly every smaller tech company I know is launching an “AI feature” of some kind: Canva, ActiveCampaign, Spark, Adobe, Zoom, and many more.
When a single product concept expands this quickly, across that many industries, I see a bubble. Eventually, interest will stagnate, people will tire of the “convenience”, and these generative chat systems will settle into a more permanent, and subtle, part of our tech world.
Of course, the companies profiting from them want this bubble to grow forever.
Google wants to replace web links, keeping you on its sites (and seeing its ads, if you’re not using an ad blocker). Nvidia, the graphics card maker whose hardware powers many of these systems, is keen for the bubble to grow: after the crypto crash, AI gives it a new reason to sell more units.
Smaller players don’t want to look like laggards, so they launch things as well, typically generators for writing or content layout. If these help your overworked marketing team, awesome.
The rush to fake “AI” is a bubble, and running too fast has consequences. Take care with anything you read, see, or hear. You can be sure unscrupulous actors will be using these tools to further drive their agendas and create the illusion of fact or reality.
Your Credit Union
We’re here, as promised.
Even after all these cautions, for any credit union not yet using these LLM platforms, I’d still ask, “why not?” Their use cases are substantial, so long as you do not share member or internal information with them (most systems feed that data back into the model).
To me, it’s the mundane, repetitive tasks where these systems shine. Need to figure out Excel formulas for some data calculation? Ask (then check that the formulas really do what it claims). Want a marketing strategy for a promotion with a rock ‘n roll flair? Describe it.
If you have a blog that suffers from a lack of content, and you have no desire to create quality human-generated material for it, guess what? Your neighborhood LLM can help. Plan out a post schedule, get topic suggestions, and then draft out ideas.
My recommendation would be to create better, human-written, more relevant, and impactful articles, but if that’s not an option, ChatGPT is there.
Human or ChatGPT?
I read a lot of content in our industry. Not all pieces are stellar. Sometimes, on the particularly terrible ones (that I know were written by a human), I create a short prompt for ChatGPT to replicate. Then, I send both results to a friend to play our game: Human or ChatGPT?
Over half the time, we agree the ChatGPT one was the better article, even if we could recognize some of the LLM traits.
For any institution using staff resources to write these posts, I ask: When a free chat system can do better, why would your members care?
Here’s what you should do
Keep a close eye on ChatGPT and other LLMs. Watch the field to see progress, as well as challenges. Understand the benefits and downsides. Talk to your staff to learn if and how they are using them in their own life.
Throughout your day, make a point of asking one of these writing tools to compose what you have to do next. Get better at clarifying your request to the system (side effect: you get better at explaining what you want!). Compare what it creates to what you made.
If it’s as good or better, with no concern of “hallucinations”, use it moving forward!
I’ve commented on LinkedIn how we will eventually get to a point where so much content is generated that the new trendy thing will be to have a human make it! Bubbles and cycles.
Remember, like every other new technology, LLM chatbots are tools. A hammer can be used to bang in a nail or hit someone over the head. It’s up to the owner to decide the use case. Take the same approach with these. Also, please do not hit anyone over the head.
And to send us off, here’s a ditty from ChatGPT:
In the realm of finance’s digital tide,
Credit unions embrace tech as their guide.
With ChatGPT’s aid, they forge a new way,
Empowering members, trust paving the day.
Together they thrive, on innovation they ride.