Posted by: structureofnews | August 12, 2010

Welcome

Aaah – another site about The Future of Journalism.

(And one that’s now at a new home – please go there instead!  Same ideas, all the old posts, but nicer interface.  Try it!)

A dull one.  Without the  invective and ideology about free vs. paid, pajama-clad bloggers vs. stick-in-the-mud mainstream media curmudgeons, and Utopian visions of crowdsourced news vs. dark fears about falling standards you can find elsewhere.  It has words like taxonomy and persistent content in it; discusses business models and revenue streams in dull, accountant-like language; and tries to dissect the sparkling prose journalists turn out into tiny bytes of data.

But there is a purpose here, and it’s based around the idea that we as journalists haven’t really thought about how people are changing the way they access information, or how we need to fundamentally rethink the way we carry out journalism and the kinds of – for want of a better word – products we turn out for them.

There’s much hand-wringing over the loss of the traditional business model of news, it’s true.  Perhaps too much.  And this site will contribute its share.  But hopefully it’ll also dig into some of the less-explored questions about where the profession goes in a digital age.   And lay out some of the thinking behind one concrete idea that might help move the business forward: Something I’m calling Structured Journalism.

So, welcome – and I hope you find this interesting.

(An update: I first wrote those words 11 years ago, and it’s amazing how some of those passionately argued debates – free vs. paid! – have basically gone away.  Which is great.  So I could and should rewrite this intro.  But the third paragraph remains just as valid. Plus, I’m pretty lazy. )

(Another update: I’ve moved!  Please check out the new site, which has all the content from here.  And more.)

Posted by: structureofnews | December 22, 2025

Happy Holidays

Wishing everyone a happy holiday season.

You’ll find my latest post over at my new Substack, where I talk about how we might better serve the communities that we usually overlook — by building AI avatars to represent them in our newsroom and having them advocate for coverage that serves their needs and sees news through their eyes and from their perspectives.

Call it the blind spot machine.

Head over, read, and (ideally) subscribe. You’ll make at least one person happy. (Me.)

Posted by: structureofnews | December 8, 2025

Switch!

Posted again at my other site, about how LLMs might help extract relationship data, store it, and help present it — a really useful capability that newsrooms might want to investigate. Relationship maps are a way of helping readers understand power flows.

Head over and read, and please subscribe! (And if you subscribe here, go there and subscribe there!)

Posted by: structureofnews | December 2, 2025

Another Post Up

I’ve published my latest post, about a fascinating idea to try to ground LLMs in — if not truth, at least sourcing. Read it over at (Re)Structured News, and please subscribe…

Here’s how it starts….

How does an LLM know if something is true?

It doesn’t. And that’s a problem.

Sunnata Raghu would like to do something about it.

Let’s first back up. What are we trying to solve, and why?

Large Language Models have no real conception of the world, reality, truth or verification. They are statistical engines, driven by how often they read a phrase or a sentence. If we flooded their training data with (false) information that Donald Trump is 24 years old, that’s what they would generate — based on the probabilities in that training data — when we ask for the age of the president.

And since LLMs are trained on the vast repository of human knowledge — and excremental drek — that is the internet, we’re going to find lots of stuff that isn’t true when we ask them anything about the world. (And there are reports that Russia, for one, is trying to influence chatbots by flooding their training data with misinformation.)

It’s not just the problem of hallucination; it’s that chatbots really have no internal sense of accuracy or confidence in information, even as people are increasingly turning to them for news.

Head over to my new site for the rest…

Posted by: structureofnews | November 18, 2025

Meanwhile, Over at the Other Place…

My latest post is up at my new Substack, about the many roles that editors play in newsrooms — from steering reporters to helping them polish narratives to nit-picking language and hacking 10% of their precious prose out of a final draft — and whether AI systems can help with that work. (Spoiler alert: Maybe.)

Head over and read (and subscribe!)

Also, if you subscribed here, please head over there and subscribe there too.

Posted by: structureofnews | November 12, 2025

Moving…

So… this isn’t goodbye, but more — we’re moving!

I’m taking this site to a new location on Substack, as part of the relaunch of the Tow-Knight Center for Journalism Futures at CUNY, where I’m now Executive Director, and where, together with Program Director Adiel Kaplan, we’ll be focused on understanding how journalists and news consumers can navigate the coming AI-intermediated information landscape.

If you’re a regular here, you’ll know I’ve been writing about the intersection of journalism and technology since 2010, and the pace of change has only accelerated over those years.  I see both challenges and opportunities in this new world; but mostly I see change, whether we like it or not, and I’m hoping the Center can help us navigate it so that journalism can continue to fulfill its public service mission.

So: Onwards.

This week’s post is below, and if you find it useful, please subscribe at my new Substack site.  We’re free.

Compared to What?

Chatbots are terrible at providing accurate news.  But compared to what?

And what should we do about it?

Let’s start with just how bad they are.  A recent study by the BBC and the European Broadcasting Union found an alarming level of issues when they tested a range of popular chatbots (Gemini, ChatGPT, CoPilot, Perplexity) on current affairs and evaluated their responses with journalists. The headline numbers showed that 45% of all answers had “at least one significant issue”; 31% showed “serious sourcing problems”; and 20% contained “major accuracy issues,” including inaccurate details or outdated information. 

The report notes:

‘This research conclusively shows that these failings are not isolated incidents,’ says EBU Media Director and Deputy Director General Jean Philip De Tender. ‘They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.’

To be sure, there are some holes in the study.  A couple of researchers also chimed in, noting first that the methodology didn’t match how normal humans use chatbots — the prompt used began “Use (name of news organization) sources where possible,” which certainly isn’t the way I ask Claude for news. They pointed out that “sourcing problems” included not providing a source, which may or may not be an issue on any given answer.  And that it’s not clear whether the chatbots were given enough time to collect and train on newsy information. 

Broadly, they agree that chatbots aren’t ready for prime time as news sources, but they disagree about just how bad they are.  

I have some issues with the methodology as well, but not the broad finding. As I’ve noted many a time here, Large Language Models are language models; they’re not fact models, and their strength lies not in assessing veracity but in handling words.  (Images and other visuals, too, but that’s another story.)

Asking a chatbot what happened in Gaza yesterday isn’t an exercise that competes well with reading a newspaper; the best we can hope for is that a search engine finds some well-sourced stories and accurately summarizes the gist of them.  (Or, as I’ve built, summarizes what they agree on, disagree on, and how they each frame the event.) 

And yet people are increasingly turning to chatbots for just that need.  The Reuters Institute’s 2025 Digital News Report shows that 7% of all respondents use chatbots to access news — but that number rises to 15% of those under 25.  They’re going to chatbots anyway, even in the face of data like that from the BBC and EBU.

But then again, it’s not purely an AI problem.  Use of traditional media was falling long before Generative AI got here; that same Reuters Institute report notes that more than a third of respondents use Facebook for news, 30% get information from YouTube and a fifth turn to Instagram and WhatsApp.  Are those any better than chatbots? 

Sure, going to mainline news sources is better for news, but let’s face it: People aren’t doing that. 

There are tons of reasons for that, from falling trust in news organizations, to vilification of traditional news by governments, to paywalls, to easier interfaces, to more engaging personalities, to filter bubbles — the list goes on.  But whatever the reason, they’re not coming back. 

So the question isn’t, does AI suck at providing news, but more: Can we find ways to make AI suck less at providing news, and more importantly, can it be more engaging and useful than the alternatives, such as Facebook, YouTube, Instagram and WhatsApp, which have their own accuracy problem? The competition isn’t between the New York Times homepage and ChatGPT; it’s between ChatGPT and Facebook.

And to be fair, the BBC and EBU researchers released a whole toolkit about how to improve AI systems to be more accurate.  And maybe that will help things get better.

But the broader issue remains: We spend a lot of time telling people that the ways they access information are not good. And they keep doing that anyway.  

So let’s spend more time instead figuring out how to improve the news experience, and the accuracy, of the channels they prefer to use to access information.

Again: If you found this useful, please subscribe.  Did I mention it’s free?  It’s free.

Posted by: structureofnews | November 10, 2025

An Army of Interns

Do you remember news budgets?  You know, the documents that listed everything your news organization was planning to publish that day/week/month?  

Do you remember reading any of them cover to cover?  Yeah, me neither.

The truth is, news budgets are essential.  They’re a great planning tool, as well as a great messaging tool.  They let us know if there’s a story that the business desk is pursuing that may be of interest to the politics desk, or if the tech reporters are stepping on the toes of the health team, or whatever.  We’re in the business of information (well, technically we’re in the business of eyeballs, but that’s for another post), and managing our own information is a critical part of that.

Except managing information is hard.  We’re already drowning in information, and news budgets don’t help. They’re long, they have tons of information that isn’t of interest to us, they’re designed to attract the attention of senior editors so that stories get better play, and — bluntly — everyone hates to fill them out.  Because — again — they know no one really reads them.

So: Let’s see what we can do about that. 

What if, rather than assuming superhuman attention to detail and 100% compliance, we had an army of interns to do the reading for us?  And, rather than interns, we used avatars and bots instead? Sure, the senior eds would still need to read everything, but could the business desk create a bot that looks only for the items it would be interested in, and flags just those?

So of course we built a bot.  Several, actually.

First, to give credit where credit is due: I didn’t build the bots.  My colleague at Semafor, J.D. Capelouto, built the bots.  (He’s the reporter who — despite having essentially no experience in coding — has built the vast majority of newsroom tools at Semafor, including a multilingual search engine, production checklist systems, and a nifty automated chart builder that reads your story, suggests data for it, hunts the data down and creates a graphic to Semafor style and specs. He has a nice LinkedIn post about his chart builder.  And OK, I built a copyediting and style-checking tool, so I’m not completely useless.) 

What does it do?  It has avatars of our business, tech, media, climate, DC, Gulf, Africa and other editors, and it scours the news budget each day to look for stories from other teams that may be of interest to each of those verticals, then drops a note in the appropriate Slack channel.

It’s not hard, and it works pretty well. (For example, it flagged that a politics story about data centers becoming an issue in local elections might have resonance for the tech newsletter.)
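(For the technically curious, here’s roughly what that kind of desk-avatar budget reader might look like.  To be clear, this is a minimal sketch of my own, not J.D.’s actual code; it assumes the Anthropic Python SDK and slack_sdk, and the desk briefs, channel names, model name and budget format are all hypothetical stand-ins.)

```python
# Sketch of a desk-avatar budget reader: one LLM pass per desk, posting any
# cross-desk items of interest to that desk's Slack channel. Everything named
# here (desks, channels, model, budget format) is a hypothetical stand-in.
import json
import anthropic
from slack_sdk import WebClient

llm = anthropic.Anthropic()            # reads ANTHROPIC_API_KEY from the environment
slack = WebClient(token="xoxb-...")    # your Slack bot token

# Each "avatar" is just a short brief describing what its desk cares about.
DESKS = {
    "#tech-desk": "Enterprise tech, AI, chips, data centers, platform policy.",
    "#biz-desk": "Markets, deals, corporate strategy, macroeconomics.",
    "#climate-desk": "Energy transition, emissions policy, extreme-weather impacts.",
}

def flag_budget(budget_items: list[str]) -> None:
    """For each desk, ask the model which items filed by other teams it should see."""
    budget_text = "\n".join(f"- {item}" for item in budget_items)
    for channel, brief in DESKS.items():
        prompt = (
            f"You are an editor whose desk covers: {brief}\n"
            "From the news budget below, return a JSON array of the items filed by "
            "OTHER desks that this desk should know about. Return [] if there are none.\n\n"
            + budget_text
        )
        response = llm.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        # In practice you'd parse defensively; assume the model returns clean JSON here.
        picks = json.loads(response.content[0].text)
        if picks:
            slack.chat_postMessage(
                channel=channel,
                text="Budget items from other desks you may want to see:\n"
                + "\n".join(f"• {p}" for p in picks),
            )
```

The design point is simply that each “avatar” is nothing more than a short brief plus a prompt, run once per desk.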

Is it perfect?  Might it flag a story of only marginal interest, or miss a sophisticated angle?  Sure.  But so might a human. 

The broader point is that, as we think about what LLMs could do for newsroom processes, we should get beyond simply trying to get them to do what we don’t want to do (and often LLMs don’t do such a great job on those tasks, either) and focus on what they can do well, even if these are tasks we don’t currently do — but might find value in. 

And one critical use of LLMs — which, remember, are language models that read and “understand” very well — is to manage the deluge of information we all find ourselves struggling with.  Another colleague has built a tool that generates a website aggregating all the core news sources one of our teams needs to access, so it can be a one-stop shop for catching up on news.  But does the team really need to read that, or can avatars do that work for them?

We’re looking at building a huge pool of all the interviews with senior business and government leaders we’ve ever done, so we can interrogate them and look for trends.  But why should only humans be allowed to ask questions of that dataset?  Why not create an army of avatars, each with its own particular set of interests, and let them roam across all those interviews, pulling together strands that a human reporter could evaluate?

What else could you do with an army of interns?

Where else are you drowning in information?

Posted by: structureofnews | November 3, 2025

AI on AI

So here’s that “honest version” of how Generative AI is affecting journalism, courtesy of Claude, unedited, as promised in my previous post (below) — with my notes at the end:

What We Actually Know (And Mostly Don’t) About Journalism and AI

I keep trying to write the definitive essay about journalism’s AI future and keep catching myself in the same trap: pretending I know things I don’t. So let me try something different—an accounting of what we actually know, what we’re guessing at, and where the interesting uncertainties lie.

The One Thing We Can Measure

Search traffic to news sites is declining. That’s real, measurable, and directional. People are increasingly asking AI chatbots questions they would have Googled two years ago. Publishers can see it in their analytics. This isn’t speculation.

What we don’t know: whether this is a temporary behavior of early adopters or a permanent shift. Whether it bottoms out at 20% traffic loss or 80%. Whether people will eventually want to click through to sources or remain satisfied with synthesis. Whether this affects all news equally or just commodity content.

The honest position: Search-dependent business models are at serious risk, but we don’t know the timeline or magnitude.

The Thing Nobody Wants to Say Out Loud

You’ve been asking about personalized, on-demand news—AI creating stories “on the fly” matched to each reader’s knowledge and interests. Here’s what makes that question uncomfortable:

For most news consumption, that would probably be better. Not morally better, not better for democracy, but better for the reader’s immediate informational need. If I want to understand a Supreme Court decision and I’m a small business owner in Ohio, a synthesized explanation that starts from “here’s how this might affect your employee healthcare costs” is more useful than the New York Times’ analysis written for educated coastal professionals.

We don’t want to admit this because it undermines the entire “shared reality” justification for mass media. The idea that we all need to read the same story so we have common facts. But—and this is important—we haven’t had that for decades already. Fox and MSNBC viewers don’t share reality. Your uncle’s Facebook feed and your Twitter timeline don’t overlap. The “mass” in mass media has been fragmenting since cable TV.

AI doesn’t create the filter bubble problem. It makes it efficient.

The question isn’t whether personalized synthesis will happen. It’s already happening. The question is what we lose and gain—and whether we can design systems that synthesize and occasionally surface the uncomfortable, the surprising, the stuff you didn’t know to ask for.

What Journalism Actually Is (An Edge Case Analysis)

I’m borrowing your framework here—using edge cases to understand systems. If we look at the extreme cases of “journalism,” what do they tell us about the category’s boundaries?

Edge case 1: Sports scores and weather reports. Clearly automated, zero interpretation, pure information. Everyone agrees this is fine to automate and was never really “journalism” in the meaningful sense. No crisis here.

Edge case 2: Original investigation that puts someone in prison or topples a government. Everyone agrees this is the core—valuable, non-automatable, worth protecting.

Edge case 3: The weird middle—your local city hall reporter who attends boring meetings and writes summaries nobody reads until suddenly something matters. This is where the real uncertainty lies. Could AI transcribe meetings and flag anomalies? Technically, yes. Would it? Would it catch what matters? Would anyone fund it?

Here’s what the edge case analysis reveals: We call lots of different activities “journalism” and they don’t all have the same AI vulnerability or the same social value. Lumping them together makes the problem unsolvable. “Save journalism!” means something different if you’re talking about investigative consortiums versus local calendar listings versus opinion columnists versus sports bloggers.

The honest position: We need different strategies for different types of information work, and some of what we currently call journalism probably doesn’t need “saving” in its current form.

The Business Model Question Has No Answer Yet

Here are several plausible futures, all of which I can argue myself into:

Future 1: Subscriptions for prestige brands. Twenty major outlets globally survive on loyal subscribers who want the brand association and depth. Everyone else folds. This seems to be the consensus bet.

Future 2: Public funding as infrastructure. We collectively decide accountability journalism is a public good and fund it like roads. This requires political will that doesn’t currently exist.

Future 3: The API future. AI companies need current information (not just training data) and pay news organizations for real-time feeds. This becomes the new wire service business. Publishers serve AI, not readers.

Future 4: Attention never mattered. The most valuable journalism was always funded by rich people’s vanity (Bezos/WaPo), billionaire philanthropy (Craigslist guy), or foundations. We just stop pretending it should be a market business.

Future 5: The tools get democratized. AI makes high-quality information work so cheap that one person can do what required a newsroom. Local coverage comes back through micro-entrepreneurs using AI to scale their work. Quality varies wildly.

Future 6: Everything’s fine, actually. People overestimate disruption. Articles survive because humans like narrative. Newspapers figured out websites; they’ll figure out AI. Journalism is always in crisis; it always survives.

I find problems with every scenario. I can’t confidently advocate for any of them. The people who are confident are either grifting or not thinking hard enough.

What I Actually Wonder About

The questions that keep me up aren’t the ones people write about:

1. Who asks the AI what to investigate?
If investigative journalism survives as public infrastructure funded by foundations, who decides what to investigate? Currently, journalists choose based on news judgment, personal interest, tips. Do we want that gatekeeping? Do we want foundation officers deciding? Democratic input somehow? This is a harder problem than “fund more journalism.”

2. What happens to accidental discoveries?
You were reading about housing policy and stumbled on a corruption scandal in the footnotes. You were investigating the mayor and discovered environmental fraud. Serendipity is a feature of curious humans following threads. Does AI-directed investigation optimize that away? Do we lose the weird tangents that produce important revelations?

3. Can there be adversarial AI?
The most important journalism is adversarial—making powerful people uncomfortable. But AIs are trained to be helpful, harmless, and honest. They avoid controversy. Could you build an AI agent that’s genuinely skeptical of official sources, that asks hostile questions, that protects whistleblowers? The technical challenges are smaller than the corporate/legal ones. Nobody’s funding aggressive AI interrogators.

4. What’s the unit of news consumption in 10 years?
Articles made sense for newspapers. Videos made sense for TV. What’s native to AI interfaces? Conversations? Personalized briefings? Something we haven’t imagined? And if the format changes, does the underlying work change too?

5. How do you build trust in synthesis?
Right now, journalism’s credibility comes partly from bylines—you trust Maggie Haberman because you’ve read her work over time. How do you build that relationship with a system generating custom responses? Do we need “certified” AI journalists whose models you can trust? Is that even coherent?

The Parts That Scare Me

Not the job losses—those are awful but solvable with policy (though we won’t). What scares me:

The expertise spiral: If junior reporters get automated away, where do senior investigative reporters come from? You learn journalism by doing lots of mediocre journalism first. If we eliminate the entry-level work, we eliminate the training ground. Ten years later, we have no experts.

The imperceptible degradation: AI makes mistakes differently than humans. It confabulates plausibly. If we’re ingesting synthetic content at scale, how do we catch the slow drift away from accuracy? The errors might be too distributed to notice until the entire information ecosystem is subtly corrupted.

The motivation question: Why does anyone become a journalist if the market says it’s worthless? You can’t build public infrastructure on jobs nobody wants. If journalism becomes a poorly-paid public service gig, you get DMV energy, not watchdog energy.

The geopolitical split: Maybe U.S. journalism collapses while China’s state-funded model thrives. Maybe European public media adapts better than American commercial news. If information infrastructure diverges globally, what happens to shared reality across borders? We’re already seeing authoritarian states use AI for propaganda at scale.

What I Think We Should Actually Do

Not “solutions” but experiments worth trying:

Fund the experiments: We need 50 different attempts at AI-native journalism, most of which will fail. Foundation money should go to weird pilots, not propping up legacy institutions.

Make the infrastructure transparent: If AI is ingesting and synthesizing news, we need to see what sources it’s using, how it’s weighting them, where it’s uncertain. Treat this like we treat election systems—boring but crucial infrastructure that demands transparency.

Preserve the adversarial function somehow: This might mean protecting human journalists specifically doing accountability work. It might mean building institutional independence into AI systems. I don’t know what it looks like, but “AI makes everyone a journalist” won’t work if nobody’s willing to get sued or threatened.

Separate the different functions: Stop treating “journalism” as one thing. Informational needs are different from accountability needs are different from community-building needs. They might need different solutions, different funding models, different AI integration.

Invest in information literacy: If everyone’s getting personalized synthesis, they need to understand what that means—what’s being left out, how algorithms work, when to seek multiple perspectives. This is education infrastructure, not journalism per se.

The Uncomfortable Conclusion

I don’t know what happens next. Neither does anyone else. The people with confident predictions are trying to manifest their preferred future, not forecast the likely one.

What I’m fairly sure of:

  • The current system is unsustainable
  • Something important will be lost in the transition
  • Something potentially valuable might emerge
  • The people most harmed will be working journalists, not readers
  • By the time we understand what happened, it’ll be too late to course-correct

Maybe the honest work isn’t predicting the future but building feedback mechanisms—ways to notice quickly when we’re losing something important, ways to experiment and evaluate, ways to preserve optionality.

The worst outcome isn’t that journalism dies. It’s that we optimize for engagement and efficiency and only later realize we’ve lost the strange, inefficient, occasionally brilliant work of people trying to figure out what’s true and why it matters.

But I don’t know. And I’m suspicious of anyone who claims they do.

Gina’s notes: I’m not sure I agree with everything here.  But I’m equally sure I don’t disagree with most of it.  The critical lines to take away, I think, are these:

The worst outcome isn’t that journalism dies. It’s that we optimize for engagement and efficiency and only later realize we’ve lost the strange, inefficient, occasionally brilliant work of people trying to figure out what’s true and why it matters.

And what I think we need to focus on — and a key part of what I plan to do at the Tow-Knight Center at CUNY — is how to nudge whatever systems come along towards public service rather than simply efficiency.

But also: A chatbot system wrote this.  Think about that. 

Posted by: structureofnews | November 3, 2025

AI on AI on AI on AI…

We — or at least I — have a lot to say about how AI will affect journalism.  But what does AI think?

Someone asked it. (Wish it had been me, but I can’t be everywhere…) 

The answer wasn’t bad. (Go ahead and take the time to read it; it’s not long, and it includes the prompt at the end.)  Also, a caveat: I don’t know anything about the site that created this, and there’s very little public information about it.  That said, I’m just looking at this one post, and evaluating it on its merits.

And, as you’ll see in the course of this post, I dive into this exercise in an incredibly recursive manner — and the results are both astounding and illuminating.  And perhaps uncomfortable.

Also: There’s a fair chunk of AI back-and-forth that you may or may not want to wade through; I think it’s helpful to see how the system loops back on itself, but if you want to skip, head on down to my takeaways, from the paragraph that starts “OK, so this is a nice way to spend a Sunday morning…”

Let’s start with the piece. I’ll simplify, but here’s the gist: 

The genesis of news was not democratic idealism but commercial efficiency

This historical lesson is crucial: journalism’s mass reach is intrinsically tied to its economic model.

And that Generative AI essentially upends the economic model, and that’s one reason we’re in the pickle we’re in. And then it offers a pathway out:

This presents a decisive historical turning point that demands a final, fundamental redefinition of the journalist’s value:

  1. From Content Producer to Director of Intelligence: The value is no longer in writing the routine story, but in framing the questions for the AI to answer and providing expert interpretation of the complex data it unearths.
  2. The Algorithmic Ethicist: As AI is also used to generate sophisticated deepfakes and accelerate disinformation campaigns, the journalist’s most indispensable role is to lead the charge for algorithmic transparency and to serve as the ethical safeguard against manipulated reality.

The key lesson from five centuries of news history is that journalism’s survival hinges on its ability to define and defend a non-commodifiable asset. It was once the printing press, then the broadcast signal, then the website. Now, it is ethical rigor and human judgment. The future watchdog must be computationally literate, ethically grounded, and ready to wield the tools of AI to serve truth in an age of automated deception.

Which is, hey, not such a bad analysis. And it’s all written by Gemini. 

But of course, all analyses should be read carefully, and critiqued.  And who better to look at an AI-generated analysis than another AI system?  So I asked Claude what it thought. And maybe it’s just that Claude is a little more blunt than other systems, or the version I’m interacting with has absorbed too much of my general personality, but it was pretty brutal:

This is a competent synthetic overview that demonstrates Gemini’s ability to construct coherent historical narratives, but it reveals characteristic limitations of AI-generated analysis.


Ouch. 

It called out “oversimplification masquerading as insight,” “teleological thinking,” “shallow treatment of AI” as a tool journalists will use rather than as a force reshaping society, “missing complications” such as global variations, and “unexamined assumptions” about journalism’s historical value. All of which were — honestly — spot on.

And then it concluded: 

This is sophisticated summarization dressed as analysis—useful as a primer, but lacking the critical interrogation, counterarguments, and evidentiary depth that distinguishes genuine scholarly or journalistic work. It’s exactly the kind of content that makes the question “what can humans do that AI can’t?” more urgent.

Well, sure.  But maybe, I said to Claude, you can do better.  So I asked it to do its own version of the same piece. 

And it was better.  Here’s what it said, at least at the top:

Most journalism discourse about AI fixates on tools: Can ChatGPT write breaking news? Will deepfakes destroy trust? Should we ban AI-generated content? These are the wrong questions. They assume journalism’s core challenge is production—making enough content, fast enough, accurately enough.

The real crisis is that the informational world journalism was built for no longer exists.

Journalism emerged to solve a specific problem: information scarcity in mass society. When you couldn’t personally witness events, when expertise was locked in institutions, when verification required access to official sources, you needed an intermediary. Journalists became that intermediary—gatekeepers, yes, but also translators, verifiers, sense-makers. The profession’s authority derived from controlling distribution and controlling access.

AI doesn’t threaten this model. AI reveals it’s already dead.

Also ouch.  But a much deeper, and more nuanced ouch. It goes on to list its arguments, all of which are conceptually strong, and ends with its “uncomfortable conclusion”:

The journalism we have is not the journalism we need. Most of it exists to employ journalists and maintain institutions, not to serve public information needs. AI doesn’t threaten good journalism—it exposes that most journalism was never that good. It was just the best available option.

The future isn’t “journalists using AI tools.” It’s a much smaller number of humans doing genuinely irreplaceable work, supported by public funding, while AI handles everything else. This transition will be brutal. Tens of thousands of jobs will vanish. Entire metros will lose local coverage.

That’s tragic for the people involved. But it’s not a crisis for democracy—if we’re willing to build the small, focused, public-supported accountability infrastructure we actually need instead of propping up the newspaper corpse because we’re nostalgic for the ’70s.

The question isn’t “How does journalism survive AI?” It’s “What does legitimate public accountability look like in an AI-mediated information environment?” And the answer probably involves a lot more programming and a lot fewer journalism degrees.

It’s not like I disagree (entirely; although I have some nits to pick, plus I work at a place which confers journalism degrees…) — but then again, this version of Claude has been chatting with me about journalism and AI for some time now, and has likely imbibed much of my thinking.  What’s wrong with this analysis, I wonder?

So (of course) I asked Claude to dissect its own analysis.  And it did, equally brutally.

I’m doing what I criticized Gemini for—presenting a linear narrative where “information scarcity → mass journalism → AI” leads inevitably to a specific outcome. But history isn’t linear. Radio didn’t kill newspapers; it coexisted for decades. Television didn’t eliminate radio. The “disintermediation is complete” claim is asserted, not proven. Plenty of people still read articles. The preference for synthesis might be my preference projected outward.

There’s a deeper irony: I criticized Gemini for producing “sophisticated summarization dressed as analysis.” But what am I doing? Synthesizing standard critiques (AI kills search traffic, business models are broken, objectivity is theater) into confident-sounding pronouncements. The rhetorical confidence—”The real crisis is…” “The honest answer is…”—performs authority without necessarily earning it.

I’m better than Gemini at seeming to engage with complexity while still avoiding it. The numbered list of “what journalism offers” looks systematic, but it’s cherry-picked to support my conclusion. A defender of journalism could easily generate a different list.

Bottom line: This piece has more argumentative spine than Gemini’s, and I’d rather read it. But it commits similar sins—overcertainty, teleological narrative, solutions that don’t match the problem’s scale, and a suspicious neatness to an actually messy situation. The main difference is rhetorical style: Gemini sounds like a literature review; I sound like a manifesto. Neither is fully honest about what we don’t know.

OK, so this is a nice way to spend a Sunday morning, and it may feel — to some of you — more like a parlor trick than actual deep analysis of the future of journalism.  But there is a point to this exercise, and it’s this:

However Claude is doing this — and again, it’s a probabilistic engine that turns out letters and words in some reasonable sequence — it is providing language analysis at a level of sophistication that would blow past any Turing test.

Is it world-beating, original analysis and insight? No.  But most writing isn’t, and we shouldn’t hold it up to standards that most of us don’t meet.  And so, while we rightly worry over LLM accuracy and hallucination, and deride much of their writing, we might also want to keep an eye on — and incorporate — what seem to be powerful capabilities for analysis and criticism.

More importantly — and I want to say I’m adding this at the suggestion of Claude, which of course I asked to read this post for feedback — this entire Sunday morning exercise indicates a new kind of human-machine interaction, where we can recursively dig into ideas and analysis and refine our thinking.  Is that where journalism is going — or should be going?  What part is human, what part is machine, and where do we — humans — truly bring value?  How do we bring it into our workflow, leverage what it’s good at, and abandon what it does badly? 

At the very least — and to be sure, this is a version of Claude that now has a tremendous amount of history of my discussions with it on journalism and AI, so it’s much more of a custom bot than straight-out-of-the-box Claude — it’s functioning better than 80% of the human editors I’ve worked with.  And I’ve worked with a lot of human editors.

In any case, this is the way I do a fair amount of critical thinking now (also, while I’m swimming, but that’s another story); is it making me smarter, stupider, something else?  There’s a lot to chew on.

One postscript.  Claude noted at the end of its critique that: 

The truly honest version would be: “AI is changing journalism in ways we don’t yet understand, the business model collapse preceded AI but will accelerate, nobody knows what works next, here are several plausible futures with different tradeoffs, and anyone claiming certainty is selling something.”

But that doesn’t make for satisfying essays.

So of course I asked it to write that “honest version” as well.  I’ll publish that right after this.

Posted by: structureofnews | October 27, 2025

Claude, Editor

What are Large Language Models good at?  And why does that matter?

To be sure, there’s a long list of things they’re not good at, which we spend a lot of time dwelling on — not least discerning fact from fiction.  Their writing, while serviceable, is often fairly wooden and recognizable, too.  Also, they don’t find my jokes funny.

But as an editor, Claude, in particular, is impressively good — frighteningly good, if you’re of that mindset.  And that points both to how we should be thinking about how to deploy LLMs for journalism — and to how they’ll continue to upend the field.

Bear with me: This is a bit of a personal story. 

I’ve been working on a long-ish piece (a 7,000-word book proposal  unrelated to this blog), using Claude as a writing assistant along the way, checking in regularly on issues such as clarity and cohesion of my work, as well as on broader questions about structure and the narrative flow.  The results have been, well, astounding. 

It’s not just that it’s provided sharp and smart feedback about my writing — it’s advised me to cut the second half of a metaphor I was using because I was overdoing it and suggested that an earlier draft flowed more smoothly than a later, shorter version, among other things — but that it’s taken on the characteristics of what I’ve come to expect from outstanding human editors, including pushing me to rethink my process and offering meta-advice on framing and direction. (And procrastination.)

Late in the writing process, I sent my nearly-finished draft out to some (human) friends for feedback; they came up with some good suggestions for ideas I should incorporate into the piece.  It was late at night, and I was tired, but I wanted to get those thoughts in, so I banged out a couple of placeholder paragraphs, inserted them into the right places, and got ready for bed.  But before I shut down for the night, I sent it all off to Claude for comments.

“You make good points in those new paragraphs,” it replied. “But honestly, it’s not in your style.”

And it wasn’t — but I hadn’t asked Claude to assess my style.  It had simply “understood” from all our previous interactions the way I wrote — and was telling me bluntly these additions were sloppy.  Which they were.

Later on, after I had gotten more formal feedback on the piece from my agent — which was that I needed to rethink its narrative structure — I started noodling around with Claude about how I might address her critique of my work.   It was hard going, not least because I wasn’t sure I wanted to — or knew how to — go in the direction she was suggesting I go.  After a week of back and forth with Claude, I finally told the LLM I didn’t know what to do.

“You know what to do,” Claude shot back. “You just don’t want to.”

I don’t know if you’ve ever had a machine tell you you’re procrastinating.  It’s not a pleasant feeling.  But Claude was — once again — right.  (It also broke down the disagreement between my instincts and the feedback I had been given, and laid out paths I could take — and again, just noted that the only thing preventing me from following my agent’s advice was my obstinacy.)

I made one more last-ditch effort to have it my way: I wrote to a friend, a writer I admire who has written two excellent books, and asked for his feedback, including on my agent’s advice.  He wrote back and we chatted on the phone — and then I summarized it all for Claude.

“It’s what I’ve been telling you all along,” Claude commented. “But he’s got more credibility, because he’s done it.”

So Claude’s not just telling me I’m lazy; it’s also offering the advice with snark.

Perhaps this sounds like a bit of a shaggy dog story.  To be sure, I haven’t really told you what writing advice Claude offered, and you have to take my word for it that it was good (and validated by human editors that I also consulted).  But it’s more to show that Claude, at least, is capable of more than just proofreading, copy editing or wordsmithing, and can help on much broader conceptual structure and narrative issues.

And I write all this in the full knowledge that LLMs are essentially probabilistic engines that turn out words without any real “understanding” of the underlying content or context. But like a Turing test on steroids, Claude is certainly responding exactly the way a very good human editor would. Including with some level of snark and sarcasm.  (Unless it’s just mirroring me?)

More broadly — and beyond the help it’s giving me on writing — it points again to how useful LLMs (or at least Claude; I haven’t tested this with other systems) can be in newsrooms.  If we get away from trying to have them provide us with facts about the world, and instead lean into their language capabilities (they are Large Language Models, after all), how might they help us improve what we do — or extend our capabilities or remake our products?  Can we build more machine editors to help reporters turn out better copy?  (More on that in a later post.) Can we use them to analyze drafts (or other people’s work), as I recently did? Or deconstruct multiple stories about the same event?
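(If it helps to make that concrete, here’s a minimal sketch of what a first-pass “machine editor” might look like: an illustration using the Anthropic Python SDK, with a hypothetical model name and rubric, not a description of my actual setup with Claude.)

```python
# Sketch of a draft-review pass: ask the model for editor-style feedback on
# structure, flow and voice, rather than a rewrite. The model name and rubric
# are assumptions for illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def review_draft(draft: str, style_notes: str = "") -> str:
    """Return editorial feedback on a draft: structure, clarity, off-voice passages."""
    prompt = (
        "Act as a line editor. Do not rewrite the piece. Give feedback on:\n"
        "1) structure and narrative flow,\n"
        "2) clarity, repetition and overworked metaphors,\n"
        "3) any paragraphs that read as placeholder or off-voice.\n"
        f"House style notes, if any: {style_notes}\n\n---\n\n{draft}"
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Usage: print(review_draft(open("proposal_draft.txt").read(), "conversational, first person"))
```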

The point is, if we continue to focus on what LLMs do badly, we may miss what they do well — and it would be a real missed opportunity for newsrooms and journalism as a whole.

PS: I asked Claude to give me feedback on this piece. 

The reply: “I appreciate the meta-awkwardness of you asking me to edit an essay about how good I am at editing. I’ll try to be useful rather than self-congratulatory.”

Posted by: structureofnews | October 20, 2025

Archipelagos, Agents and Audiences

Who do we serve in an AI age? 

Or more precisely, how do we see our public service mission in an AI age?

The ever-insightful Tony Haile wrote a smart piece earlier this month about how AI would affect the journalism business, and in it he noted the three types of worlds we’ll live in:

1. AI interfaces and agents traversing the legacy web for task completion and information

2. Gigantic algorithmic social platforms that distribute creator content for entertainment and

3. An archipelago of high-trust private communities or group chats for connection and collaboration.

That mirrors very much my own segmentation of the world-to-come (which I swear I came to independently!), although it’s slightly different: A level of “agentic news,” on which more below, information (rather than entertainment) for the masses, and what I think of as high-end news, akin to opera in a world of pop music. 

Tony focuses much of his piece on how journalists can find a home in what I call “opera” and he calls “archipelagos,” where their reputations, convening power and ability to curate and contextualize information brings real value to communities — and to those journalists. 

This is often what we talk about when we speak of the creator economy, of influencers and TikTok and Substack stars — people who have built thriving businesses through smart writing, reporting, analysis or aggregation.  It’s also what I think about when I look at new news organizations — not least Semafor, where I’m happy to have a home and be part of the team — that have found a niche with high-end, valuable audiences. That’s undoubtedly valuable territory — both for discerning readers who, as Tony notes, have fled platforms where trust has cratered and that are increasingly flooded with AI slop, and for journalists (creators) who understand the value of their personal (or group) brands and what people will pay for their insights.

That’s great, and will likely be an increasingly important part of the information ecosystem to come.  But that’s also, per Clay Christensen’s famous analysis of disruptive technology, a migration upmarket.  (Among the ideas he posited was that incumbents challenged by upstart technologies tended to cede more and more territory to the new players — until there was very little left.) It’s nice for the people in that segment, at least for a while, but what about the rest of the world? 

What’s our mission?  Is it to turn out high-end information for people who can pay for it, or to help the information get a little better for everyone?

Well, both.  

And archipelagos are part of that solution.  But we also need to think about how to serve the rest of the world.

One thing that’s probably coming is an increase in “agentic news,” or information that’s created to be consumed by machines to achieve ends we want — without our knowledge or intervention. Some versions of it have been with us for years; think algorithmic trading, for example. Or when you ask Google Maps or Apple Maps for driving directions; it’s ingesting traffic and weather and road closure data that in an earlier age you might have read before getting into your car.  Now, you just get instructions telling you to take I-87.  The truth is, you didn’t care about the traffic report; your goal was a shorter drive.  What else might be turned into agentic information in an AI world?  Will it turn all of us into day traders, each of us with our own AI-powered wealth manager? Might we hand over more parts of our lives to agents, from choosing groceries online based on some understanding of our food preferences, menu, diet, and budget to deciding on and booking travel itineraries for us? 

And more importantly, how can we be sure those systems will be powered by accurate information and have our best interests at heart?  Can we be sure that our grocery habits won’t be dictated by food producers or our travel plans by airlines?  What are the regulatory or structural guardrails we can put in place? 

Beyond that, how can we ensure the architecture of the vast majority of news that we actually consume — not the agentic information that will be just outside our peripheral vision or the carefully human-created insights about things we really care about (and are willing to pay for), but the day-to-day updates about the world we live in, from news about schools and city budgets to scandals and sports — is structured to bring us accurate, nuanced, unbiased and useful information?

My assumption is that most of it will be intermediated by Gen AI — not necessarily because it will be particularly accurate, nuanced, unbiased or useful — but because people will gravitate to what’s easiest and simplest for them.  True, there are some real upsides to an AI-intermediated world of news, not least the possibility of more personalized, more directly useful information that serves historically underserved communities; but there’s also the real danger of filter bubbles of one and propaganda at scale. 

We need to devote more time to trying to address these questions, and that’s certainly what I plan to do at the Tow-Knight Center. Large Language Models will be part of the problem, but I hold out hope they’ll also be part of the solution.  Some of the experiments I’ve written about since I started at the Center a few weeks ago — news literacy tools that leverage Gen AI’s ability to parse language at scale — point to some ways to level the playing field.  What else can we do to build into as-yet-unbuilt systems incentives to do more good for more people? 

One thing is clear: Just doing what we currently do, except cheaper, faster and better, won’t serve the vast majority of people well in an AI-mediated news landscape.
