Wednesday, January 14, 2026

Rational Functions Solved!

It's not every day that one of my open problems is solved, especially one that I asked about over three decades ago. Matt Kovacs-Deak, Daochen Wang and Rain Zimin Yang just posted a paper showing that if you have a Boolean function \(f\) and two polynomials \(p\) and \(q\) of degree at most \(d\) such that \(f(x)=p(x)/q(x)\) for every \(x\) of length \(n\), then \(f\) has decision tree complexity at most \(2d^4\).

Noam Nisan and Mario Szegedy had a beautiful paper in the early 90s showing that if a low-degree polynomial approximates a Boolean function, then its decision tree complexity, the number of bits you need to query from the input to determine the output, is polynomially bounded in the degree. For rational functions that's false: the majority function can be approximated by the ratio of two low-degree polynomials, even though its decision tree complexity is maximal. This rational approximation was the main tool used by Beigel, Reingold and Spielman when they showed that the complexity class PP is closed under intersection.
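
To put numbers on the contrast (a standard example, not from the new paper, and I'm hedging on the exact constants): Paturi showed the approximate polynomial degree of majority is linear, while Newman-style rational approximations of the sign function give constant-error rational degree logarithmic in \(n\):

\[ \widetilde{\deg}_{1/3}(\mathrm{MAJ}_n) = \Theta(n) \qquad\text{while}\qquad \mathrm{rdeg}_{1/3}(\mathrm{MAJ}_n) = O(\log n), \]

where \(\mathrm{rdeg}_{1/3}\) denotes the minimum \(d\) such that some ratio \(p/q\) of degree-\(d\) polynomials comes within \(1/3\) of \(\mathrm{MAJ}_n\) on every Boolean input.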

Motivated by the complexity class C\(_=\)P\(\cap\)co-C\(_=\)P, I asked Mario whether, if a low-degree rational function exactly computes a Boolean function, that function must have low decision tree complexity. Mario thought about it, said it was an interesting problem, and added it as an open question to the journal version of his paper with Nisan.

I gave the problem to every student who asked me for an open combinatorial problem, as well as to every reasoning AI model I could get my hands on. Iyer et al. posted a paper that examined this problem and proved it for symmetric functions, but the general case remained open until now.

The main insight of Kovacs-Deak, Wang and Yang is to consider the minimum block sensitivity, usually an uninteresting property of a function. The majority function, for example, has large maximum and average block sensitivity but minimum block sensitivity of one. Nevertheless, their proof bounds the min block sensitivity by twice the degree of the rational function, and uses that bound to reduce the degree of the rational function by querying a small number of bits of the input. Nice.
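
To see concretely why min block sensitivity usually gets ignored, here's a small brute-force sketch (my own illustration, not from their paper) that computes the block sensitivity of 5-bit majority at every input. The maximum, 3, occurs at balanced inputs, where flipping any single bit on the winning side changes the answer; the minimum, 1, occurs at the all-zeros and all-ones inputs, where any sensitive block needs at least 3 bits, so no two sensitive blocks can be disjoint.

    from itertools import combinations, product

    def maj(x):
        # Majority of a 0/1 tuple of odd length.
        return int(sum(x) > len(x) // 2)

    def block_sensitivity(f, x):
        # bs(f, x): the maximum number of pairwise disjoint blocks of
        # coordinates such that flipping a block changes f's value.
        # Brute force, fine for tiny n.
        n = len(x)
        sensitive = []
        for size in range(1, n + 1):
            for b in combinations(range(n), size):
                y = list(x)
                for i in b:
                    y[i] ^= 1
                if f(tuple(y)) != f(x):
                    sensitive.append(frozenset(b))
        best = 0
        def extend(count, used, rest):
            # Search over collections of pairwise disjoint sensitive blocks.
            nonlocal best
            best = max(best, count)
            for j, b in enumerate(rest):
                if not (b & used):
                    extend(count + 1, used | b, rest[j + 1:])
        extend(0, frozenset(), sensitive)
        return best

    vals = [block_sensitivity(maj, x) for x in product((0, 1), repeat=5)]
    print(min(vals), max(vals))  # prints: 1 3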

I always joked that if I told people the complexity application of the problem they would lose interest. The question came out of work with Steve Fenner, Stuart Kurtz and Lide Li on An Oracle Builder's Toolkit. C\(_=\)P is a generalization of co-NP where we ask if a GapP function (the difference of two #P functions) is zero. So C\(_=\)P\(\cap\)co-C\(_=\)P is a generalization of NP\(\cap\)co-NP. The new theorem implies the technical result that if P = PSPACE unrelativized, then P = C\(_=\)P\(\cap\)co-C\(_=\)P relative to a generic oracle. That in turn gives an oracle where P = C\(_=\)P\(\cap\)co-C\(_=\)P and the PH is infinite, and another where C\(_=\)P\(\cap\)co-C\(_=\)P has no complete sets.
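
Unpacking that definition: a language \(L\) is in C\(_=\)P if there is a GapP function \(f\) such that

\[ x \in L \iff f(x) = 0. \]

Taking \(f\) to be the number of accepting paths of a nondeterministic polynomial-time machine recovers co-NP, which is why C\(_=\)P generalizes co-NP.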

If you are more of a quantum person, C\(_=\)P\(\cap\)co-C\(_=\)P = PostEQP while PostBQP = PP, and this work shows quite a difference between these classes. PP has complete sets and if P = PP then the polynomial-time hierarchy and even the counting hierarchy collapses to P.

Sunday, January 11, 2026

Is 'smells like' commutative?

1) Smells Like... Something

In many TV shows having to do with murder (and there are plenty of them), I’ve heard lines like the following:

        His breath smells like bitter almonds. So he was poisoned with cyanide.

They’re either saying

        bitter almonds smell like cyanide

or

        cyanide smells like bitter almonds.

If you say X smells like Y, you mean that X is the new smell and Y is the familiar one. However, on these shows, people seem to smell cyanide a lot, yet I’ve never seen them smell or taste bitter almonds. That's good, since bitter almonds can be lethal (see here). So there should be mystery stories where bitter almonds are the poison and the cops say

             His breath smells like cyanide. So he was poisoned with bitter almonds.

I don’t know what either one smells like.

2) Rotten Eggs

In real life: My Darling grew up in Pittsburgh when it was still a steel-mill city.
She said she often smelled something that

        smelled like rotten eggs.

It was sulfur. But in telling me this, she assumed I’d smelled rotten eggs.
I haven’t. But I have smelled other things that I was told smell like rotten eggs.

I think the phrase

        smells like rotten eggs

is often used by people who’ve never actually smelled rotten eggs.

3) Cardboard and Matzoh

A blog post by Scott (see here), and my post about his post (see here), brought up the question:

        Does matzoh taste like cardboard?

I doubt any of us have actually tasted cardboard.


My proofreader once accidentally did, while eating takeout from a paper container. He says
(1) it doesn’t taste like matzoh, and
(2) it doesn’t taste like food — which matzoh does.


4) Dog Food

I’ve heard the cliché insult:

        Your cooking is so bad that it tastes like dog food.

I’ve never eaten dog food.  Maybe it tastes good.

5) When X Smells Like Y

If someone says X smells like Y, then:

a) If people know what Y smells like but not X, that’s informative.
b) If people know what X smells like but not Y, that’s not informative.
c) If I hear that X smells like rotten eggs and Y smells like rotten eggs, then I know X and Y smell the same —
even though I don’t know what rotten eggs smell like.
Oh wait — I do. They smell like X or Y!

6) How do the following fit into this discussion?

a) The Nirvana song Smells Like Teen Spirit, video here.
b) The Weird Al song Smells Like Nirvana, video here.


Friday, January 09, 2026

Computational Depth

I'm posting from Oxford University, where I will be spending the "Hilary Term" (through late March) as a visiting fellow at Magdalen College. If you are relatively local, reach out if you'd like to connect.

I plan to get back into research after thirteen years of administration, working primarily with Rahul Santhanam and his group. I haven't had a significant sabbatical or leave since Amsterdam 30 years ago, which is what comes from changing jobs too often.

Today I'd like to talk about a 2006 paper on a topic I first thought about in Amsterdam, one that will likely play a role in this visit: Computational Depth: Concept and Applications by Luis Antunes, Dieter van Melkebeek, Vinod Variyam and myself.

In Amsterdam, I was hosted by Paul Vitányi and Harry Buhrman at CWI, and naturally worked on Kolmogorov complexity, the algorithmic study of randomness, as well as various problems in computational complexity.

Very simple strings don't have much information. Completely random strings have maximal Kolmogorov complexity but aren't particularly useful either, as we can create our own random strings by flipping coins. Is there some way to measure useful information?

In particular I had this question: If NP reduces to a sparse set then it has polynomial-size circuits. If NP reduces to a random set then it has polynomial-size circuits. Is this just coincidence or is there a broader principle here?

Charlie Bennett developed a notion of logical depth that has a similar philosophy, but it is a difficult definition to work with. Luis Antunes, a student from Porto, came to work with me at NEC Research in New Jersey when I was there around the turn of the century. We developed the simpler notion of computational depth as the difference of two Kolmogorov measures, like time-bounded Kolmogorov complexity minus traditional Kolmogorov complexity. This is small for both very easy strings and fully random strings. With then NEC postdoc (and now Nebraska professor) Vinod Variyam, we found a connection to finding SAT witnesses, and with DIMACS and IAS postdoc Dieter van Melkebeek (now a Wisconsin professor), we came up with the notion of shallow sets, which generalize both random and sparse sets and, as hoped, if NP reduces to a shallow set then it has polynomial-sized circuits. Luis would title his PhD thesis "Useless Information".
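
In symbols (roughly, suppressing the precise time bounds and constants the paper is careful about): for a time bound \(t\),

\[ \mathrm{depth}^t(x) = K^t(x) - K(x), \]

where \(K^t(x)\) is the length of the shortest program producing \(x\) within time \(t(|x|)\) and \(K(x)\) is plain Kolmogorov complexity. Easy strings have both terms small, random strings have both terms near \(|x|\), so only strings whose structure takes time to uncover are deep.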

Luis and I would later find a nice connection of computational depth to average-case complexity. 

Computational Depth had a resurgence of popularity with the rise of meta-complexity, with 50 of the original paper's 90 citations coming since 2020.

So I hope to find new applications of computational depth working with Rahul, who's at the center of meta-complexity. Maybe computational complexity can help us understand machine learning better, based on my frozen concentrate analogy, where the learning process removes the randomness, leaving structured information behind.

Monday, January 05, 2026

AI and Research Papers

2026 will be a year of AI disruption across all of academia. Let's start by talking about how AI is changing the way we write research papers. Not the research itself (another post), just the dissemination thereof.

Technology has changed research distribution since the invention of paper, and the printing press, typewriters, journals and conferences have all had dramatic effects. We've seen major changes just within my research career, from authors doing their own formatting (TeX/LaTeX), to instant distribution of papers (email/web), to collaborative writing (email/shared repositories/Overleaf).

How AI affects research writing follows a trend not unlike other AI applications. 

The Slop Phase

This is where AI does more harm than good. It's useful for finding grammar mistakes and making fancy LaTeX tables, but if you use it for writing it may make stuff up, create citations out of thin air and leave phrases like "as a large language model" in the text. Things have gotten much better with the latest models (I'm currently a fan of Gemini 3), but we haven't fully left the slop phase and anything AI-written needs close verification.

The Verification Phase

2026 will be the year we go from humans having to look over AI-generated content to AI having to look over human-generated content. Already you should give your paper, ideally in LaTeX, to the best model you have access to with a prompt asking for a critical review, or use a specialized service. You don't have to follow what the AI says, but you should be open to improving your paper. Google provided an optional tool for those preparing STOC 2026 submissions that many found valuable. In 2026 some conferences and journals, especially those overwhelmed by submissions, will start requiring submitted papers to go through an AI review as well as a human review (for now).

At some point, we may require authors, with AI help, to have their theorems verified by theorem provers like Lean or Rocq. It will be a while, especially in computational complexity, where it is hard to properly state a theorem in formal logic, let alone its proof.
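
To give a flavor of the gap, here is a toy machine-checked Lean 4 proof; nothing in it is specific to complexity, and formalizing a real complexity theorem would first require formal definitions of machines, resources and classes:

    -- A toy statement about Nat addition, stated and checked end to end.
    -- Lean won't accept the proof until the statement is pinned down exactly.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]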

The Collaborative Phase

Not just AI critiquing papers, but researchers and AI working together on paper writing. We already see pieces of this: having AI fill in details of arguments, give a first draft of an introduction, help create diagrams, find relevant citations, etc. Right now anything the AI writes requires a careful lookover by the researcher. As these models improve, we'll see more and more generated by the AI, and researchers will start to trust the output, maybe more than they should. This will lead to...

The Generation Phase

Here the AI writes the LaTeX from scratch based on a proof sketch given by the researchers. As an experiment, I "vibe-coded" a minor research paper and it went better than expected.

The researcher will ask for updates and touch up the LaTeX at the end, but at some point the human won't ever touch the LaTeX file, just a prompt or a markdown file that generates it.

By the end of the 1980s, researchers just trusted the LaTeX for formatting, fixing only syntax errors and overfull boxes. Most researchers (but not all) stopped fiddling with the look of the paper. At some point, and at higher stakes, we'll do the same with the paper content as well.

Friday, January 02, 2026

The Betty White Award for 2025

The Betty White Award goes to people who die at the end of the year---too late to be in those articles with titles like

        people we lost this year.

The point of the award is that news outlets and blogs should WAIT until Jan before having any articles that look back on the prior year. (I tell Lance we should have our end-of-the-year post in January just in case someone solves P vs NP the last week of December. [Lance: I'm not worried])

I made up the award in 2023. More recently, I looked at the web to see who would have won it in past years. My chart is here. As I was writing this post I wondered if there already was a Betty White Award for something else, so I asked ChatGPT. The response is here. Short version: ChatGPT says that there is an informal award by that name for reading cue cards really well on Saturday Night Live. As opposed to my award, which is formal(?).

This year's list of winners so far has only one person:

Brigitte Bardot, who died on Dec. 28, 2025, at the age of 91. She was an actress, singer, model, and activist (she was for animal rights and against interracial marriage---see her Wikipedia page for other views she had). As with most celebrities who die when they are over 90, I suspect many people either don't know who she was or thought she was already dead. For more on her see her Wikipedia page here. Or type her name into chatty, though be warned that you may get some false statements.

1) This year I am taking NOMINATIONS for additions to the list.  The number of winners can be more than 1. The max in a year has been 4 so far.  SO 

If you know of someone who

a) died between Dec. 20 and Dec. 31

b) is famous

then leave a comment.  DEADLINE: Jan. 8 at noon East Coast time. On Jan. 8 I will update this blog post with more winners if need be. 

I've already looked at Wikipedia's list of who died in 2025, here. If there is either (1) someone on there who is more famous than I thought (that happened last year with Manmohan Singh) or (2) someone not on the Wikipedia list who is famous to our community, let me know.

2) As noted above, the point of the award is to point out that publications should WAIT until Jan to have those lists. I am happy to say that Entertainment Tonight has found a good way to handle this---they had an article titled Celebrity deaths 2025: Remembering those who've died this year, and they are updating it (at least the online version). The updated version includes Bardot! See here. Paper publications won't be able to do that. This may not matter---will there still be paper publications at the end of 2026? I blogged on paper-free stuff here.



Monday, December 22, 2025

Complexity Year in Review

An easy choice for paper of the year: a paper that has nothing to do with randomness, interaction, quantum, circuits or codes. Just a near-quadratic improvement in the amount of memory you need to simulate time.

Simulating Time with Square-Root Space by Ryan Williams

Any time-\(t(n)\) algorithm can be simulated in space \(O(\sqrt{t(n)\log t(n)})\), greatly improving the \(O(t(n)/\log t(n))\) result from the 70s. Ryan's work makes strong use of last year's space-efficient tree evaluation by James Cook and Ian Mertz. More in my February post and a Quanta article, which did a better job explaining the importance of the result than I could.

Bill is also excited by the new \(O(m\log^{2/3}n)\) single-source shortest path algorithm by Ran Duan, Jiayi Mao, Xiao Mao, Xinkai Shu and Longhui Yin that beats out Dijkstra on sparse graphs.

Last year I wrote

We're heading to a perfect storm for US higher education with the oncoming trains of the new administration, artificial intelligence, fiscal challenges and the demographic cliff. Hang on tight, it's going to be a bumpy ride.

Bumpy is an understatement, and we're just starting the ride. Limited immigration, National Science Foundation woes in its 75th-anniversary year, and a drop in computer science enrollments as AI continues to suck up the atmosphere. Do we buckle down or should we completely rethink our institutions?

In the spirit of all the AI wrapped content, I asked Claude to put together a full year in review for this blog. This is getting scarily good.

We remember George Foreman, Frank Gehry, Ray Laflamme, Tom Lehrer, Charles Lin, Pradyut Shah and Tom Stoppard.

We thank our guest posters Eric Allender, Daniel Fernández and Alberto Fraile, Clyde Kruskal and Nick Sovich.

See you all in January!

Wednesday, December 17, 2025

A Place Away From Tech


The Fine Arts Building

Last week, I partook of the second-Fridays open house in the Fine Arts Building, ten floors of offices all related to the arts and creatives in some way. Art studios of all kinds, from fine art to photography, musical instrument sales, repairs and instruction, an opera company and various music ensembles, puppetry, jewelry makers, authors, interior decorators, a store that sells printed music scores and an independent bookstore, and much more.

On the evenings of the second Friday of the month, the building has an open house, so you can visit the studios and hear small music performances. You can stop by the Exile in Bookville bookstore and have Keir Graff sign a copy of his recent coffee table book about the building.

The building started as the Studebaker Building in 1885, a factory and showroom for horse-drawn carriages. By 1896, the Studebaker family had converted the building to studios for artists, architects, musicians and others, and it has more or less remained that way ever since. The big theater in the building, recently renovated, still bears the family name.

As a one-time tuba player, I appreciate that Arnold Jacobs, principal tubist of the Chicago Symphony for 44 years and perhaps the greatest to play the instrument, taught tuba from an office in the building.

Marker for Arnold Jacobs' Studio

The building is so low tech, it has the last remaining manually operated elevators in the city. It's not that everyone is anti-technology. Most studios have a computer to keep up with finances, emails, web pages and social media. But that's tech in the background. In a world focused on technology, it's nice to see a building in Chicago devoted to those who still create, in a magical place that stands the test of time.

Sunday, December 14, 2025

Weird Al vs Weird AI

ONE

The following headline confused me:

                  Trump, 79, Deletes Weird AI Video Shilling Magic Beds (see here). 

Was Weird Al selling magic beds? Magic beds?! How does that relate to President Trump? What’s going on?

The problem is the font: a capital I (as in AI) can look like a lowercase l (as in Al).
So the headline should really be:

Trump, 79, Deletes Weird Artificial Intelligence Video Shilling Magic Beds.

This case is particularly confusing because:

a) Weird Al videos clearly means videos that Weird Al made to go with his songs.


While we’re on this topic, here are some Weird Al videos related to computer science or math:

Don't Download This Song

It's All About the Pentiums

Polka Patterns

The first two songs are dated but I still like them. They also show that Weird Al has had (and still has) a long career.

b) The story, even when understood completely, is really weird. 

TWO

I pointed out the Weird AI vs Weird Al issue to a fellow Weird Al fan, and he emailed me the following links:

McDonald's Making Job Applicants Take Weird AI Personality Tests

MAGA Rep Weighs in on Sydney Sweeney Jeans Debate with Weird AI Ad

A Weird AI Camera With a Human Name is Coming for Your Cell Phone

Leith Ross Denounces Weird AI Songs Uploaded to Their Spotify Profile

Why We Don't Believe MIT Nanda's Weird AI Study

Anthropic Researchers Discover the Weird AI Problem: Why thinking longer makes models dumber

Samsung phones are getting a weird AI shopping platform nobody asked for

THREE

0) The one with Weird AI Songs is the most confusing since that clearly means songs by Weird Al. 

1) Am I the first person to notice this? Of course not---see here.

2) Has someone used this confusion? Of course---see here.

3) Will this confusion help or hurt Weird AL's career? Time will teLL.