Associate Teaching Professor of Linguistics at UC San Diego
Director of UCSD's Computational Social Science Program
Some thoughts on ‘Detecting AI Content’
Note: This was originally posted on my LinkedIn page, but I figured I should put it someplace a bit more permanent too

This is a great meme. Let me overexplain it until it’s no longer funny, and share a bit of my thinking on ‘detecting AI content’.
For those who haven’t seen it, the plane below illustrates common patterns of damage to World War II bombers returning from missions. It’s tempting, at first glance, to start armoring the places that ‘get damaged more often’, until you remember that you’re only seeing the planes that made it home, and it’s the damage on the planes that *didn’t* make it home that’s the real problem.
It’s worth remembering, especially as the internet is increasingly flooded with machine-generated content, users, influencers, and opinions, that there is a huge difference between “I can reliably recognize ‘AI’ content” and “I can reliably recognize ‘AI’ content which has recognizable ‘AI’ problems”.
Sure, obvious ‘AI’ problems will often be there for lazy AI content creators using less advanced platforms, and there will be ‘easy to spot’ content. But also remember that folks who can generate difficult-to-detect ‘AI’ content have every incentive to also generate easy-to-detect ‘AI’ content, because then everybody can feel good about their ability to ‘spot’ it. Then, if we get really complacent, we start to assume that every image with the right number of fingers, every video with object permanence, and every piece of text without em-dashes, markdown bullet subheadings, perfect spelling, and ‘delve’, is human generated.
It’s easy to pretend that there’s a technical solution, an “AI detector”, or even just sharp wits, that will catch every instance, and plenty of people want to sell you that fantasy. But ultimately, this is a trust problem and a social problem, not a technical problem, and if you think your instincts will save you from ‘AI’ content, you’re going to find your hopes get shot down sooner or later.
If you’re interested in what other smart people are thinking about this, here’s an interesting panel on the Challenge of AI Content featuring representatives from companies, the FBI, the DoD, and others. Also, amusingly, Will Corvey from DARPA and I went to graduate school together at CU Boulder.