Myths and misinformation run rampant on the internet these days, but never more reliably than on April 1.
People have celebrated April Fools' Day for centuries with all sorts of jokes and pranks, and while old-school traditions (hello, rubber snakes) remain plenty popular, gags have grown considerably more high tech over the years.
And fake news and announcements — whether from a major company, a public figure, a random social media user or your childhood best friend — can take off quickly and morph wildly, thanks to social media.
It can be tough to tell whether something online is real, especially with artificial intelligence making it increasingly easy for anyone to create fake images, video, audio and text.
Sam Gregory, for example, told NPR it took him under a minute to write the text prompt needed to create a fake image of the Easter Bunny for his kids last year. As the executive director of WITNESS, a human rights nonprofit that helps people recognize and respond to deceptive AI, he's well-acquainted with the challenges of combating deepfakes.
"We're in this moment where it's much easier to make both personalized and individualized, realistic images and audio and increasingly video, and in the hands of many more people," Gregory said. "And then the flip side of that is that the tools are not easily available on the technical side to spot them."
There are some resources out there — from news literacy nonprofits to trusted media sources — that can help sort fact from fiction. But much of that responsibility falls to internet users themselves.
Part of that involves understanding the moments where myths tend to spread, like in the immediate aftermath of a breaking news event or on April Fools' Day, says Dan Evon, the senior manager of education design at the News Literacy Project and a former Snopes fact-checker.
He says April Fools' Day, with its trend of advertisements masked as jokes, is a perfect time for people to get in the mindset of anticipating and investigating misinformation.
"April Fools' Day jokes in general don't try to persuade your politics or make you angry or target the negative emotions that are dangerous online," he told NPR. "And a lot of the things that you find are humorous. So from a news literacy perspective, it's kind of fun to encourage people to practice your skills."
Plus, he notes, some of the rumors that originate on April Fools' Day could have staying power or resurface much later, like the fake image of an elephant carrying a lion cub that circulated years after it was first posted as a prank.
Here are some steps you can take to reduce your chances of getting fooled online, on Monday and beyond.
Slow down
The biggest piece of advice that Evon tells people is to simply slow down.
"Social media is really fast, and there is so much information that comes at us at once," he says. "You don't have to go through this stuff so quickly, you can take some time — just a few extra seconds — to examine these posts."
Gregory similarly says to stop when you see something that's "too good to be true, or too crazy to be believable, or too anger-inducing."
He cites the SIFT methodology for evaluating information, developed by researcher Mike Caulfield. It stands for: Stop, investigate the source, find better coverage and trace claims, quotes and media to the original context.
Once a piece of media gives you pause, he says, consider who shared it. Are they friend, foe or stranger?
"Is it your friend who's sharing it, or someone you know? And is it something they made themselves?" Gregory adds. "Or is it online and it's just a random X account that is trying to explain to you that the King of England has just died, but they seem to generally tweet gossip, and they're based in California?"
In other words: Are they a credible source for the context in which you're encountering the information?
This step is a little trickier on April Fools' Day, Evon adds, because a lot of the jokes are likely to be coming from verified official accounts. That's why it's especially important to consider the context.
"If you're going to encounter an AI image, you don't just see the AI image," he says. "You see where it's been posted, you see the comments that are attached to it, you see the caption that's presented — on April Fools' Day, you see the date. And maybe that makes you a little skeptical of whether or not it's real."
See what others are saying
The next step is to do what experts call "lateral reading," which is basically seeing what else is out there.
"If your only source of information is the one post that you're seeing, there's good reason to be skeptical of that," Evon says.
For instance, he says, if a major tech company is actually announcing a new initiative, there's likely to be news coverage of it from at least some credible sources.
This isn't a foolproof test. Just because people are talking about something online doesn't make it reliable — take all the amateur forensic experts analyzing Kate Middleton's controversial family photo, Gregory notes.
But seeing what — and how much — people have to say in the comments section of an X post or TikTok video can be a helpful clue, both experts agree.
"I think it's often worth looking at the comments not because the comments tell you the truth, but the comments tell you if there's a debate around this that merits further investigation," Gregory says.
Look for the original
Another tactic is to try to track down the original image, something Gregory says is easier to do with "shallow fakes," or photos manipulated with basic editing software.
Oftentimes people will take an existing image or video and just say it's from another time or place. Doing a reverse image search can help you challenge or back up that claim.
That basically involves "taking a screenshot and plugging into a search engine and then seeing what pops up," according to Evon. He recommends using sites like Google Images, TinEye and Bing.
Making sense of those results requires some more critical thinking, Gregory explains.
"It's going to pop up and say, 'Wait a second, someone told you this image was from yesterday, but we have an earlier version that's from last year,' " he says. "Now, it doesn't mean the image is from last year, but it certainly tells you it's not from yesterday."
When it comes to AI deepfakes, Gregory says there are plenty of telltale signs people have been told to look for to spot manipulated photos and videos — from hands that don't look quite right, to garbled writing in an image, to eyes that don't blink properly.
The problem, he adds, is that most of those glitches are going away as companies get better at AI.
"If we had talked a year ago, it would have been more reliable to say, look at the hands. The hands have got better," he says. "If we talked a year ago it'd been more reliable to say, look at the writing. But then companies have introduced ways to write more accurately."
Amplify responsibly
Internet trickery doesn't mean you can never retweet a funny post or play a harmless prank again. But experts urge caution when amplifying information, no matter the date on the calendar.
"There are going to be jokes that people are going to circulate, in a lot of instances these are going to be funny," acknowledges Evon. "I don't think it's ever good to intentionally lie to someone or mislead someone, so if you do share something, maybe comment and remind people that it's fake."
He offers this broader rule of thumb: If you're skeptical of something, don't help it go viral. And if you do encounter accurate information — like a fact-check or correction — help amplify that instead.
When it comes to reposting, Gregory recommends pausing "in proportion to your emotional reaction." So if you're about to share something inflammatory, defamatory or that reinforces a worldview in a "highly emotional way," he says stop first to consider your motivations for sharing it — and whether that post is the best way to achieve them.
Consider patterns
April Fools' Day may be a unique day in many ways, but it also reflects broader trends in misinformation.
It's not the only day of the year when people should be bracing for falsehoods online, Evon says, noting that bad actors tend to exploit major breaking news stories (like the collapse of the Baltimore bridge) to spread misinformation.
"What we really want people to do is, we want to learn the patterns that these things follow, so that they can better recognize them in the future," he explains.
This year, Gregory most expects to see a proliferation of AI images — because they're so easy to make — and AI audio, which is already turning up in plenty of other contexts, from phone scammers using voice clones to an election-related robocall purporting to be from President Biden.
"I bet we're going to see many, ho-ho-ho April Fools' jokes with audio clones, some of which you and I will never hear because it'll just be me making one for my friend and sending it to them," he says. "And of course if you go on TikTok you're gonna see fake AI audio everywhere. And it cuts across that whole spectrum, from humorous and prank to financial scam to political upheaval."
He also notes that the uncertainty created by AI hasn't just made it easier for people to falsify things; it's also made it easier for them to dismiss real footage as fake. That's just another reason, he says, to pay close attention.