Thanks to image generators like OpenAI's DALL-E 2, Midjourney and Stable Diffusion, AI-generated images are more realistic and more available than ever. And technology to create videos out of whole cloth is rapidly improving, too.
The current wave of fake images isn’t perfect, however, especially when it comes to depicting people. Generators can struggle with creating realistic hands, teeth and accessories like glasses and jewelry. If an image includes multiple people, there may be even more irregularities.
Take the synthetic image of the Pope wearing a stylish puffy coat that recently went viral. If you look closely, his fingers don't actually seem to be grasping the coffee cup he appears to be holding, and the rims of his eyeglasses are distorted.
AI-generated image: a fake image of the Pope wearing a white puffer jacket.
Pablo Xavier; Annotation by NPR
Another set of viral fake photos purportedly showed former President Donald Trump getting arrested. In some images, hands were bizarre and faces in the background were strangely blurred.
AI-generated image: a fake image of former President Donald Trump being arrested.
Eliot Higgins; Annotation by NPR
This story is adapted from an episode of Life Kit, NPR's podcast with tools to help you get it together.
Synthetic videos have their own oddities, like slight mismatches between sound and motion and distorted mouths. They often lack facial expressions or subtle body movements that real people make.
Some tools try to detect AI-generated content, but they are not always reliable.
Experts caution against relying too heavily on these kinds of tells. The newest version of Midjourney, for example, is much better at rendering hands. The absence of blinking used to be a signal a video might be computer-generated, but that is no longer the case.
“The problem is we’ve started to cultivate an idea that you can spot these AI-generated images by these little clues. And the clues don’t last,” says Sam Gregory of the nonprofit Witness, which helps people use video and technology to protect human rights.
Gregory says it can be counterproductive to spend too long trying to analyze an image unless you’re trained in digital forensics. And too much skepticism can backfire — giving bad actors the opportunity to discredit real images and video as fake.
Use SIFT to assess what you're looking at
Instead of going down a rabbit hole of trying to examine images pixel-by-pixel, experts recommend zooming out, using tried-and-true techniques of media literacy.
One model, created by research scientist Mike Caulfield, is called SIFT. That stands for four steps: Stop. Investigate the source. Find better coverage. Trace the original context.
The overall idea is to slow down and consider what you’re looking at — especially pictures, posts, or claims that trigger your emotions.
“Something seems too good to be true or too funny to believe or too confirming of your existing biases,” says Gregory. “People want to lean into their belief that something is real, that their belief is confirmed about a particular piece of media.”
A good first step is to look for other coverage of the same topic. If it’s an image or video of an event — say a politician speaking — are there other photos from the same event?
Does the location look accurate? Fake photos of a non-existent explosion at the Pentagon went viral and sparked a brief dip in the stock market. But the building depicted didn’t actually resemble the Pentagon.
Google recently announced it’s making it easier to see when a photo first appeared online, which could help identify AI-generated pictures as well as photos that are shared with misleading or false context — like that viral image of a shark swimming on a flooded highway that often appears after hurricanes.
Pause and think in other situations, too. Scammers have begun using spoofed audio to impersonate family members in distress and trick people into sending money. The Federal Trade Commission has issued a consumer alert and urged vigilance. It suggests that if you get a call from a friend or relative asking for money, you call the person back at a known number to verify it's really them.
Check your sources
AI images aren’t the only way you might be fooled by a computer. Chatbots like OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard are really good at producing text that sounds highly plausible. But that doesn’t mean what they tell you is true or accurate.
That’s because they’re trained on massive amounts of text to find statistical relationships between words. They use that information to create everything from recipes to political speeches to computer code.
While the text chatbots spit out may sound convincingly human, they do not learn, think, or create in the ways we do, says Gary Marcus, a cognitive scientist and professor emeritus at New York University.
“They don’t have models of the world. They don’t reason. They don’t know what facts are. They’re not built for that,” he says. “They’re basically autocomplete on steroids. They predict what words would be plausible in some context, and plausible is not the same as true.”
ChatGPT fabricated a damaging allegation of sexual harassment against a law professor. It’s made up a story my colleague Geoff Brumfiel, an editor and correspondent on NPR’s science desk, never wrote. Bing invented quotes from a Pentagon spokesman. Bard made a factual error during its high-profile launch that sent Google’s parent company’s shares plummeting.
That means you should double-check anything a chatbot tells you — even if it comes footnoted with sources, as Google’s Bard and Microsoft’s Bing do. Make sure the links they cite are real and actually support the information the chatbot provides.
Use generative AI tools responsibly
In its early phase, AI can be unreliable and even risky. But it’s also fun and interesting to experiment with. And like it or not, generative AI tools are being integrated into all kinds of software, from email and search to Google Docs, Microsoft Office, Zoom, Expedia, and Snapchat.
Playing around with chatbots and image generators is a good way to learn more about how the technology works and what it can and can’t do.
“My main piece of advice to everybody is, do use this stuff,” says Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School. “You absolutely should be making things. You should absolutely spend an hour on ChatGPT…You should try and automate your job.”
Mollick requires his students to use AI. And while he’s an enthusiastic user of chatbots and other forms of AI, he’s also wary of the ways they can be misused.
“You’ve got to figure this thing out because we’re in a world where there’s nobody with great advice right now. There isn’t like a manual out there that you can read,” Mollick says.
If you are going to experiment with generative AI, here are a few things to keep in mind.
- Privacy: Be smart about sharing personal information with AI software. Systems may use what you enter to train future models, and company employees may be able to review it.
- Ethics: What are you using the software to create? Are you asking an image generator to copy the style of a living artist, for example? Or using it in a class without your teacher’s knowledge?
- Consent: If you’re creating an image, who are you depicting? Is it parody? Could they be harmed by the portrayal?
- Disclosure: If you’re sharing your AI creations on social media, have you made it clear they are computer-generated? What would happen if they were shared further without that disclosure?
- Fact check: As explained above, chatbots get things wrong. So double-check any important information before you post or share it.
“You can think of it as like an infinitely helpful intern with access to all of human knowledge who makes stuff up every once in a while,” Mollick says.
The audio portion of this episode was produced by Thomas Lu and edited by Brett Neely and Meghan Keane.