So here’s that “honest version” of how Generative AI is affecting journalism, courtesy of Claude, unedited, as promised in my previous post (my notes follow at the end):
What We Actually Know (And Mostly Don’t) About Journalism and AI
I keep trying to write the definitive essay about journalism’s AI future and keep catching myself in the same trap: pretending I know things I don’t. So let me try something different—an accounting of what we actually know, what we’re guessing at, and where the interesting uncertainties lie.
The One Thing We Can Measure
Search traffic to news sites is declining. That’s real, measurable, and directional. People are increasingly asking AI chatbots questions they would have Googled two years ago. Publishers can see it in their analytics. This isn’t speculation.
What we don’t know: whether this is a temporary behavior of early adopters or a permanent shift. Whether it bottoms out at 20% traffic loss or 80%. Whether people will eventually want to click through to sources or remain satisfied with synthesis. Whether this affects all news equally or just commodity content.
The honest position: Search-dependent business models are at serious risk, but we don’t know the timeline or magnitude.
The Thing Nobody Wants to Say Out Loud
You’ve been asking about personalized, on-demand news—AI creating stories “on the fly” matched to each reader’s knowledge and interests. Here’s what makes that question uncomfortable:
For most news consumption, that would probably be better. Not morally better, not better for democracy, but better for the reader’s immediate informational need. If I want to understand a Supreme Court decision and I’m a small business owner in Ohio, a synthesized explanation that starts from “here’s how this might affect your employee healthcare costs” is more useful than the New York Times’ analysis written for educated coastal professionals.
We don’t want to admit this because it undermines the entire “shared reality” justification for mass media. The idea that we all need to read the same story so we have common facts. But—and this is important—we haven’t had that for decades already. Fox and MSNBC viewers don’t share reality. Your uncle’s Facebook feed and your Twitter timeline don’t overlap. The “mass” in mass media has been fragmenting since cable TV.
AI doesn’t create the filter bubble problem. It makes it efficient.
The question isn’t whether personalized synthesis will happen. It’s already happening. The question is what we lose and gain—and whether we can design systems that synthesize and occasionally surface the uncomfortable, the surprising, the stuff you didn’t know to ask for.
What Journalism Actually Is (An Edge Case Analysis)
I’m borrowing your framework here—using edge cases to understand systems. If we look at the extreme cases of “journalism,” what do they tell us about the category’s boundaries?
Edge case 1: Sports scores and weather reports. Clearly automated, zero interpretation, pure information. Everyone agrees this is fine to automate and was never really “journalism” in the meaningful sense. No crisis here.
Edge case 2: Original investigation that puts someone in prison or topples a government. Everyone agrees this is the core—valuable, non-automatable, worth protecting.
Edge case 3: The weird middle—your local city hall reporter who attends boring meetings and writes summaries nobody reads until suddenly something matters. This is where the real uncertainty lies. Could AI transcribe meetings and flag anomalies? Technically, yes. Would it? Would it catch what matters? Would anyone fund it?
Here’s what the edge case analysis reveals: We call lots of different activities “journalism” and they don’t all have the same AI vulnerability or the same social value. Lumping them together makes the problem unsolvable. “Save journalism!” means something different if you’re talking about investigative consortiums versus local calendar listings versus opinion columnists versus sports bloggers.
The honest position: We need different strategies for different types of information work, and some of what we currently call journalism probably doesn’t need “saving” in its current form.
The Business Model Question Has No Answer Yet
Here are several plausible futures, all of which I can argue myself into:
Future 1: Subscriptions for prestige brands. Twenty major outlets globally survive on loyal subscribers who want the brand association and depth. Everyone else folds. This seems to be the consensus bet.
Future 2: Public funding as infrastructure. We collectively decide accountability journalism is a public good and fund it like roads. This requires political will that doesn’t currently exist.
Future 3: The API future. AI companies need current information (not just training data) and pay news organizations for real-time feeds. This becomes the new wire service business. Publishers serve AI, not readers.
Future 4: Attention never mattered. The most valuable journalism was always funded by rich people’s vanity (Bezos/WaPo), billionaire philanthropy (Craigslist guy), or foundations. We just stop pretending it should be a market business.
Future 5: The tools get democratized. AI makes high-quality information work so cheap that one person can do what required a newsroom. Local coverage comes back through micro-entrepreneurs using AI to scale their work. Quality varies wildly.
Future 6: Everything’s fine, actually. People overestimate disruption. Articles survive because humans like narrative. Newspapers figured out websites; they’ll figure out AI. Journalism is always in crisis; it always survives.
I find problems with every scenario. I can’t confidently advocate for any of them. The people who are confident are either grifting or not thinking hard enough.
What I Actually Wonder About
The questions that keep me up aren’t the ones people write about:
1. Who asks the AI what to investigate?
If investigative journalism survives as public infrastructure funded by foundations, who decides what to investigate? Currently, journalists choose based on news judgment, personal interest, tips. Do we want that gatekeeping? Do we want foundation officers deciding? Democratic input somehow? This is a harder problem than “fund more journalism.”
2. What happens to accidental discoveries?
You were reading about housing policy and stumbled on a corruption scandal in the footnotes. You were investigating the mayor and discovered environmental fraud. Serendipity is a feature of curious humans following threads. Does AI-directed investigation optimize that away? Do we lose the weird tangents that produce important revelations?
3. Can there be adversarial AI?
The most important journalism is adversarial—making powerful people uncomfortable. But AIs are trained to be helpful, harmless, and honest. They avoid controversy. Could you build an AI agent that’s genuinely skeptical of official sources, that asks hostile questions, that protects whistleblowers? The technical challenges are smaller than the corporate/legal ones. Nobody’s funding aggressive AI interrogators.
4. What’s the unit of news consumption in 10 years?
Articles made sense for newspapers. Videos made sense for TV. What’s native to AI interfaces? Conversations? Personalized briefings? Something we haven’t imagined? And if the format changes, does the underlying work change too?
5. How do you build trust in synthesis?
Right now, journalism’s credibility comes partly from bylines—you trust Maggie Haberman because you’ve read her work over time. How do you build that relationship with a system generating custom responses? Do we need “certified” AI journalists whose models you can trust? Is that even coherent?
The Parts That Scare Me
Not the job losses—those are awful but solvable with policy (though we won’t). What scares me:
The expertise spiral: If junior reporters get automated away, where do senior investigative reporters come from? You learn journalism by doing lots of mediocre journalism first. If we eliminate the entry-level work, we eliminate the training ground. Ten years later, we have no experts.
The imperceptible degradation: AI makes mistakes differently than humans. It confabulates plausibly. If we’re ingesting synthetic content at scale, how do we catch the slow drift away from accuracy? The errors might be too distributed to notice until the entire information ecosystem is subtly corrupted.
The motivation question: Why does anyone become a journalist if the market says it’s worthless? You can’t build public infrastructure on jobs nobody wants. If journalism becomes a poorly paid public service gig, you get DMV energy, not watchdog energy.
The geopolitical split: Maybe U.S. journalism collapses while China’s state-funded model thrives. Maybe European public media adapts better than American commercial news. If information infrastructure diverges globally, what happens to shared reality across borders? We’re already seeing authoritarian states use AI for propaganda at scale.
What I Think We Should Actually Do
Not “solutions” but experiments worth trying:
Fund the experiments: We need 50 different attempts at AI-native journalism, most of which will fail. Foundation money should go to weird pilots, not propping up legacy institutions.
Make the infrastructure transparent: If AI is ingesting and synthesizing news, we need to see what sources it’s using, how it’s weighting them, where it’s uncertain. Treat this like we treat election systems—boring but crucial infrastructure that demands transparency.
Preserve the adversarial function somehow: This might mean protecting human journalists specifically doing accountability work. It might mean building institutional independence into AI systems. I don’t know what it looks like, but “AI makes everyone a journalist” won’t work if nobody’s willing to get sued or threatened.
Separate the different functions: Stop treating “journalism” as one thing. Informational needs are different from accountability needs are different from community-building needs. They might need different solutions, different funding models, different AI integration.
Invest in information literacy: If everyone’s getting personalized synthesis, they need to understand what that means—what’s being left out, how algorithms work, when to seek multiple perspectives. This is education infrastructure, not journalism per se.
The Uncomfortable Conclusion
I don’t know what happens next. Neither does anyone else. The people with confident predictions are trying to manifest their preferred future, not forecast the likely one.
What I’m fairly sure of:
- The current system is unsustainable
- Something important will be lost in the transition
- Something potentially valuable might emerge
- The people most harmed will be working journalists, not readers
- By the time we understand what happened, it’ll be too late to course-correct
Maybe the honest work isn’t predicting the future but building feedback mechanisms—ways to notice quickly when we’re losing something important, ways to experiment and evaluate, ways to preserve optionality.
The worst outcome isn’t that journalism dies. It’s that we optimize for engagement and efficiency and only later realize we’ve lost the strange, inefficient, occasionally brilliant work of people trying to figure out what’s true and why it matters.
But I don’t know. And I’m suspicious of anyone who claims they do.
Gina’s notes: I’m not sure I agree with everything here. But I’m equally sure I don’t disagree with most of it. The critical lines to take away, I think, are these:
The worst outcome isn’t that journalism dies. It’s that we optimize for engagement and efficiency and only later realize we’ve lost the strange, inefficient, occasionally brilliant work of people trying to figure out what’s true and why it matters.
And what I think we need to focus on — and a key part of what I plan to do at the Tow-Knight Center at CUNY — is how to nudge whatever systems come along toward public service rather than simply efficiency.
But also: A chatbot system wrote this. Think about that.