I don’t live in fear of AI. Instead, I fear people and what they will do with chatbots like ChatGPT and the best AI image-generators like Stable Diffusion, just as I fear what they’ll do with any other powerful technology or object. Case in point: how some people are using artificial intelligence chatbots to generate fake images and stories and, naturally, present them as fact.
It’s horrible, shameless, and dangerous.
Recently, the German magazine Die Aktuelle printed what appeared to be a new and exclusive interview with former Formula 1 driver Michael Schumacher, who was injured in a 2013 skiing accident, and splashed it across its front page. It didn’t take long for people to figure out that the interview was a total fabrication, one that may have fooled more than the usual number of gullible readers because Schumacher’s answers were generated by an AI chatbot called Character.ai.
Character.ai is a ChatGPT rival that lets you talk to famous figures both dead and alive. While not quite in the same Large Language Model class as the Bards and Bing AIs of the world, it’s smart enough to recreate conversations with a wide swath of historical people, and to do a pretty convincing job of it.
What it isn’t meant to be is accurate: it’s available largely for entertainment, and its creators are careful to state that “Everything Characters say is made up! Don’t trust everything they say or take them too seriously.”
Naturally, someone had to abuse it.
In recent months, I’ve spent a lot of time with all the major chatbots, including ChatGPT, Google Bard, and Bing AI, asking them to create all manner of creative, silly, and mostly useless content. I take nothing at face value, knowing that there is still a tendency toward AI chatbot hallucinations (presenting fiction as fact).
What I didn’t imagine is that anyone would pass off content spit out by these AIs as original stories, articles, videos, and even music from existing and long-deceased artists. It’s not just distasteful; it’s dangerous.
Again, the idea that this is somehow the fault of the AIs (which, unlike people, aren’t capable of fault) is ludicrous. The fault is, as always, in our humanity. People don’t know how to use anything, good, bad, or indifferent, responsibly.
As a species, we are prone to excess, and any pleasurable thing we’re offered, be it sex, money, drugs, or AI, becomes a clarion call to do more of it – a lot more of it.
We’re currently so obsessed with the incredible capabilities of these chatbots that, like any addiction, we cannot stop using them. I admit, I have a habit of loading up ChatGPT (the GPT-3 version) and asking it a mix of important and ridiculous questions.
The other day I asked it to create a logline for a new TV series about a time-traveling mailman. It did a nice job, though I do wonder how much was sourced from other people’s work. You just never know how much ChatGPT is inadvertently plagiarizing. At least this isn’t important stuff (unless the show gets picked up. 😁).
I think ChatGPT is onto something here. #ClimateEmergency pic.twitter.com/sJn5nj76KX – April 19, 2023
This week, I asked it to solve the climate crisis. ChatGPT’s ideas about moving to a plant-based diet were interesting but, as people noted on Twitter, ChatGPT left out a lot of other real climate-altering factors. As always, I was careful to present this as ChatGPT’s response and not use it, say, in my reporting.
Sure, I’m not a climate reporter, but what if I were? It would be insane for me to blend what I ‘learned’ from ChatGPT about climate change into a news story intended to inform millions of people.
That is my point. ChatGPT and other chatbots like it are not a substitute for real, original research and reporting. In journalism, we often say, “Go to the source.” That means you find where the information or interview originated and/or conduct the interview yourself and use that as the foundation of your story.
Simply put, nothing an AI chatbot produces can be used in a serious way. Honestly, I do not care if the German magazine was yellow journalism at its finest and “no one cares.” We all should care, because someone who is not necessarily a bad journalist will eventually stumble on a story that stupidly re-reported those AI-generated quotes and may try presenting them as fact.
Decades from now, Schumacher might be quoted as saying things he never spoke of in his life. Will we know the difference?
The point is, we have to draw a hard line now. AI chatbots are recreational tools that can help us build products, maybe write code, and offer insights and direction, but they cannot be trusted as sources. They just can’t. So don’t. Please.