By Sam Wineburg
Note from the OER Project Team
The blog posts shared here, authored by independent contributors, are intended to provide diverse perspectives on issues that have a significant impact on the field of education. These opinions are solely those of the authors and may not reflect the views of the OER Project team. We encourage you to engage with these ideas critically and to use them as a springboard for thoughtful discussion and inquiry.
In the aftermath of Hurricane Helene, which ravaged parts of the southeastern United States, a photo—one picture that captured the misery of this disaster—went viral: a grief-stricken little girl clutches a puppy as floodwaters rise around her. The image became an instant icon of human anguish and, in some corners of the web, a symbol of a feeble federal response to a natural disaster.
The image spread far and wide, thanks to social-media posts by activists and politicians. Amy Kremer, a cofounder of Women for Trump, wrote that the picture was “seared into” her mind. Senator Mike Lee (R-Utah) shared the image with his 744,000 X followers, asking them to “caption this photo,” an invitation to heap scorn on the administration’s response to this calamity.
There was only one problem with this gush of sympathy, concern, and, let’s admit it, politicking: Neither the little girl nor her puppy was real. The picture was a rank fiction, an AI-generated ruse. Examine the image closely and you can make out an irregular gloss on the child’s face, as well as a male figure lurking in the background who appears to be missing a limb.
One of the most pervasive myths in American history is that units of African American soldiers fought for the Confederacy in the Civil War. The myth has been decisively debunked, but ChatGPT will generate a realistic-seeming photograph of just such a unit if you ask. Generated with DALL-E by Trevor Getz on February 19, 2025.
Guides for pinpointing such flaws have become something of an internet cottage industry. Although it warns that rapid improvements in AI may render these techniques “fruitless,” the Southern Poverty Law Center’s “Tips for Spotting AI-Generated Election Disinformation and Propaganda” nonetheless offers 10 guidelines for determining an image’s authenticity. Among them: “Is the subject’s skin too smooth or wrinkled? Are objects in the frame not proportional? Are there shades in strange places? Are the subject’s mouth movements out of sync with the audio?”
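If you’re curious what such sleuthing looks like under the hood, here is a minimal sketch in Python using the Pillow imaging library (my choice of tool, not one the SPLC names, and the filename is hypothetical). It checks whether an image file carries the camera EXIF metadata a genuine photograph usually has. AI generators typically produce files with none, but beware: social platforms strip that metadata on upload too, so its absence proves nothing by itself.

```python
# A minimal sketch using Python and Pillow (pip install Pillow).
# Checks an image for camera EXIF metadata. AI-generated images usually
# carry none -- but platforms like X strip EXIF on upload as well, so an
# empty result is a weak clue, not a verdict.
from PIL import Image
from PIL.ExifTags import TAGS

def readable_exif(path):
    """Return EXIF tags keyed by human-readable names, or {} if none."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = readable_exif("suspect_image.jpg")  # hypothetical filename
if not tags:
    print("No EXIF metadata: consistent with AI generation OR re-encoding.")
else:
    print("Camera:", tags.get("Make"), tags.get("Model"))
    print("Taken:", tags.get("DateTime"))
```

In other words, even the programmatic checks come hedged with the same caveat as the SPLC’s visual ones.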
However, before you sign up to become a reality-verification sleuth, consider a second example, this one created with Grok, X’s chatbot. NPR recently instructed Grok to generate an image of vote tampering shot from the perspective of a CCTV surveillance camera. Two shady figures stuff envelopes into ballot boxes. Look closely: Can you pick out the flaws in this image? Or try this one: an image from the assassination attempt on Donald Trump in which Secret Service agents appear to be grinning—not the response, claimed a post on X, one would expect if the attempt hadn’t been staged. What jumps out when you examine this one?
Don’t feel bad if you can’t identify anything suspicious in either. Repositioning objects, correcting lighting, adding missing pixels, lightening dark shadows, and so on have become child’s play with the advent of sophisticated graphic design tools. Eddie Perez, formerly an information integrity director at Twitter, explained how easy it is to tweak an AI-generated image: “I clean it up a little more and then I make [it] go viral.” The recipe is simple: Generate an image, fix it in Photoshop, and off to the races you go.
Incendiary images and videos swarm across social media. Some of these are authentic. But many aren’t. Should you play CSI, squinting at your screen, hunting for misplaced pixels and scanning for unnatural skin sheens? Do you really have the chops of a seasoned Bellingcat investigator who’s spent years perfecting the OSINT craft? If the answer is no, let me suggest an alternative course of action: critical ignoring.
One great strategy for avoiding the spread of disinformation is critical ignoring: Resist the urge to share right away and wait for more information to come in. CC0.
Critical ignoring doesn’t mean you are forbidden to look at images and watch videos. You’re even allowed to feel your emotions because, after all, that’s the whole point. Visual stimuli are designed to stir your emotions by circumventing your prefrontal cortex and landing a sucker punch to your solar plexus. But unless you’re a digital forensic expert, obey Rule #1 of critical ignoring: Exercise a bit of humility. Admit it: You really can’t tell if that image of a bombed-out hospital is real or not. So, take that finger of yours off “Share” even if the image supports what you believe is real and true.
Rule #2: Exercise a bit of patience. The great thing about the internet is that along with troll armies, there are legions of well-meaning, smart people who know more than you do. If an image goes viral, rest assured that someone out there with the requisite skills is trying to get to the bottom of it. That puppy picture? It didn’t take long before a team of internet good Samaritans determined that none of the people posting it provided links to an original source, an immediate tell that something was fishy.
AI has lowered the barrier to spreading mischief and filling the information stream with gunk. But the vast majority of misleading videos floating in cyberspace are not deepfakes at all, but cheapfakes—snippets of actual footage grabbed from other contexts and recycled with a caption to make it appear as if the event happened this morning. The Swiss Army knife of cheapfakes is selective editing, as when the internet exploded with a video of climate activist Greta Thunberg seemingly telling fans that climate change does not exist. Four seconds of video had been excavated from nearly six minutes of footage which, when viewed in context, irrefutably showed that Thunberg believed nothing of the sort.
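This is also why cheapfakes are often easier to unmask than they first appear: fact-checkers don’t squint at pixels, they match the “new” clip or image against archives of older ones. Here is a minimal sketch of the underlying trick, using the Python imagehash library with Pillow (my choice of tools, and the filenames are hypothetical): perceptual hashes barely change when an image is resized or recompressed, so a small hash distance flags a likely recycled copy.

```python
# A minimal sketch using Pillow and imagehash (pip install Pillow imagehash).
# Perceptual hashing underlies reverse-image search: near-duplicate images
# yield near-identical 64-bit hashes, even after resizing or recompression.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_today.jpg"))    # hypothetical files
archived = imagehash.phash(Image.open("archive_2019.jpg"))

distance = viral - archived  # Hamming distance between the two hashes
print(f"Hash distance: {distance}")
if distance <= 8:  # a rough, commonly used threshold for "same image"
    print("Likely the same picture recycled from an earlier context: a cheapfake.")
```

None of which you need to run yourself; the point is that the people doing this work have tools far sharper than your squinting eyes.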
As AI-detection tools get better, so does AI image generation, leading to an arms race between the software that makes deepfake images and the tools meant to detect them. Generated with DALL-E by Trevor Getz on February 19, 2025.
Selective editing didn’t start with the internet; it’s been around since Gutenberg. Historical analysis demands that, before taking a quote at face value, we restore words to their original context: What did the speaker say before and after the quoted excerpt? That’s golden advice for online videos as well. The shorter the clip, the more likely the context has been disfigured.
A bit of humility leavened by a dash of patience guarantees that you will have less apologizing to do, unlike Senator Mike Lee, who found himself scrubbing the puppy post from his social-media feed.
Back in the pre-internet Stone Age, people would drive to fast-food restaurants, eat dinner in their cars, and then toss the wrappers out the window. Most of us, thank goodness, no longer litter our highways with garbage. Let’s do our best to not litter our information supply either.
About the author: Sam Wineburg is a cofounder of the Digital Inquiry Group, and Margaret Jacks Professor of Education Emeritus at Stanford University. His latest book, with coauthor Mike Caulfield, is Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online.
Header image: This AI-generated image was shared in the aftermath of Hurricane Helene. Presented as a real, unaltered photograph, its intent appears to have been political manipulation. Olga Robinson, Assistant Editor at BBC Verify and an expert in combating disinformation, writes: “Tell-tale signs [of AI generation] include the unnatural sheen, a disappearing green boat and a man with a seemingly missing limb in the background.”