ICECA 2026 (August 17-19, 2026), an interview with Christian Krattenthaler, and Condorcet revisited.

Toufic Mansour, Christian Krattenthaler, and Condorcet.

International Conference on Enumerative Combinatorics and Applications (ICECA 2026)

The fifth International Conference on Enumerative Combinatorics and Applications will take place online on August 17-19, 2026.  Invited speakers include George E. Andrews, Sara Billey, Joel Friedman, Christian Krattenthaler, Richard Stanley, Guoce Xin, Sherry H.F. Yan, and Raphael Yuster. The conference homepage includes links to abstracts and videos from the first four conferences.

The conference series is organized by Toufic Mansour, who also founded the journal Enumerative Combinatorics and Applications (ECA) and initiated its interview series and open-problems page.

An interview with Christian Krattenthaler.

The most recent interview on ECA is with Christian Krattenthaler, whom I have known for many decades. The interview touches on many areas of enumerative combinatorics, with a special emphasis on determinants. Christian mentioned early connections between combinatorics and determinants in the works of George Andrews, Richard Stanley, Bernt Lindström, John Stembridge, and others.

Among his contributions he highlights his bijective proof (not using the involution principle) of Stanley’s hook-content formula; a joint paper with Maria Prohaska proving a remarkable determinantal formula conjectured by Aldo Conca and Jürgen Herzog; and some new formulas for \pi (joint work with Gert Almkvist and Joakim Petersson). Mansour’s introduction highlights Christian’s 2005 paper Advanced determinant calculus that “has become a central reference in the field, shaping a generation of research.”

Beyond mathematics, Christian is also a trained concert pianist, maintaining a lifelong passion for music alongside his research career. In part of the interview he describes his views on music. In this context, one may also mention his essay Music AND Mathematics? Personal views on a difficult relationship. 

Rebecca Embar and Doron Zeilberger revisit Condorcet’s paradox

Among the papers I saw on ECA was one by Rebecca Embar and Doron Zeilberger, Counting Condorcet. The paper determines the precise number of profiles of individual order relations on three alternatives that lead to Condorcet’s paradox (among the 6^n possible profiles).
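For a small number of voters the count can be checked by brute force. Here is a minimal sketch (illustration only, not the methods of the paper, which derives exact formulas): with an odd number of voters there are no pairwise ties, so a profile exhibits the paradox exactly when no Condorcet winner exists.

```python
from itertools import permutations, product

ALTS = "ABC"
ORDERS = list(permutations(ALTS))  # the 6 linear orders on three alternatives

def has_condorcet_winner(profile):
    # x is a Condorcet winner if a strict majority prefers x to every y != x
    n = len(profile)
    return any(
        all(2 * sum(o.index(x) < o.index(y) for o in profile) > n
            for y in ALTS if y != x)
        for x in ALTS
    )

n = 3  # an odd number of voters, so there are no pairwise ties
paradox = sum(1 for profile in product(ORDERS, repeat=n)
              if not has_condorcet_winner(profile))
print(paradox, "of", 6 ** n)  # 12 of 216, a fraction of 1/18
```

Already for three voters, 12 of the 216 profiles exhibit the paradox: the two cyclic voter multisets {ABC, BCA, CAB} and {ACB, BAC, CBA}, each in all 6 voter orders.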

Condorcet has been mentioned several times on this blog. In a 2012 post, we discussed his prominent role among the philosophers who contributed to the 1789 Declaration of the Rights of Man and of the Citizen. In a 2009 post we highlighted his role in social choice theory.

Posted in Combinatorics, Conferences, Updates | 3 Comments

Dominik Hangleiter’s Posts on: Has Quantum Advantage Been Achieved?

In a recent post on Quantum Frontiers—the first in a series of three—Dominik Hangleiter discusses the question of whether quantum advantage (also known as “quantum supremacy”) has been achieved.

Dominik describes polling audiences of experimental and theoretical physicists (with a few computer scientists) at recent talks, expecting overwhelming agreement that quantum advantage had already been demonstrated. Instead, he found that less than half of the audience believed this to be the case, despite several experimental claims since 2019 and even earlier quantum simulation results.

These reactions motivated Dominik’s mini-series, whose goal is to argue that existing quantum computers can already perform tasks beyond the reach of classical computers. In the first post, he reviews the notion of quantum advantage, focusing especially on random circuit sampling experiments, and explains that current claims rely on proxies and extrapolations from classically tractable regimes. He announces that Part 2 will examine the evidence for these claims and address skeptical arguments.

From my perspective, the question of whether large-scale quantum computational advantage has been achieved is a very important one and deserves careful scrutiny. (I myself have worked over the years on assessing this question, as well as the more fundamental question of whether quantum supremacy can be achieved at all.)  Related questions—such as progress toward quantum error-correcting codes and the status of Majorana zero modes—are also of great importance.

Finally, there was a remark on Dominik’s post from a researcher in India who works on applications of quantum computing to agriculture, and coverage on a quantum communication blog based in Pakistan. It is heartwarming to see how research in quantum computation can bring together scientists from India and Pakistan.

Posted in Computer Science and Optimization, People, Physics, Quantum | 4 Comments

A Ten-Year-Old Video about Larry Guth and Nets Katz.

Facebook recently reminded me of a videotaped lecture I gave ten years ago on Larry Guth and Nets Katz. In that lecture, I discussed their famous joint work on the Erdős distinct distances problem, as well as some of their individual contributions.

Many of the problems mentioned there—relating to the Kakeya problem, the Navier–Stokes equations, systolic inequalities, quantum error-correcting codes, and sum–product theorems—have led to fascinating research over the past decade. Alex Lubotzky and Noam Solomon also made short but forceful guest appearances in the video.

From the news

A few days ago, my wife showed me an article (in Hebrew) reporting a startling new development in mathematics and asked whether I knew about the result and the two mathematicians involved.

I did not recognize either of the two mathematicians from the accompanying picture, nor could I immediately identify the mathematical problem they were studying. But once I read the text, things became clearer: the reported progress concerned the Riemann Hypothesis, and the two mathematicians were Larry Guth and James Maynard—both of whom I know quite well (or at least, I thought I did).

Still, they looked rather different from how I remembered them, and I could not tell who was who. A bit of further investigation resolved the puzzle: the image was an AI-generated picture of two mathematicians discussing a mathematical problem. Apparently, in the age of AI, mathematical breakthroughs are still achieved by real people—but their photographs are optional.

AI and mathematics

I recently asked on MathOverflow about AI applications in mathematics and received interesting answers.

Posted in AI | Leave a comment

Combinatorics News

Happy new 2026 to all our readers!

Here are some very interesting combinatorics results that I learned about recently. I will try to write in more detail about some of them later on.

1) Gaussian random graphs and Ramsey numbers, by Zach Hunter, Aleksa Milojević, Benny Sudakov  

This paper gives a simple proof of the recent and remarkable exponential improvement in Ramsey lower bounds obtained by Ma, Shen, and Xie. An alternative construction based on Gaussian random graphs simplifies the analysis of Ma, Shen, and Xie and also leads to improved quantitative bounds.

See also my July 2025 post: Amazing: Jie Ma, Wujie Shen, and Shengjie Xie Gave an Exponential Improvement for Ramsey Lower Bounds.

2) On the “second” Kahn–Kalai Conjecture, by Quentin Dubroff, Jeff Kahn, and Jinyoung Park

The expectation threshold conjecture (also known as the Kahn–Kalai conjecture) was raised by Jeff Kahn and me in 2006. Our paper stated two conjectures: the first concerned general Boolean functions, and the second concerned graphs. The second conjecture did not quite follow from the first; this implication was posed as a third conjecture.

In this paper, Quentin, Jeff, and Jinyoung settle the second conjecture up to a (\log n)^2 factor.

See also my 2022 post: Amazing: Jinyoung Park and Huy Tuan Pham settled the expectation threshold conjecture! (The slide there actually states the second conjecture.)

3) Talagrand’s convolution conjecture up to loglog via perturbed reverse heat, by Yuansi Chen

Yuansi proves Talagrand’s convolution conjecture from 1989 (the third problem in this list) up to a loglog factor.

4) Sharp Fuss-Catalan thresholds in graph bootstrap percolation, by Zsolt Bartha, Brett Kolesnik, Gal Kronenberg, Yuval Peled

A graph G is K_r-weakly saturated if one can add the missing edges to it one by one so that each added edge creates a new copy of K_r. This notion, introduced by Béla Bollobás in 1967, is closely related to bootstrap percolation.

Starting with a random graph G \sim G(n,p), what is the critical value of p above which G is K_r-weakly saturated? Solving a central problem in the field, the paper identifies the threshold probability for K_r-weak saturation for r \ge 5.
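Since an addable edge remains addable, the K_r-bootstrap process is monotone and the closure of a graph is well-defined: G is K_r-weakly saturated exactly when its closure is the complete graph. A brute-force sketch of the closure (illustration only, not the methods of the paper):

```python
from itertools import combinations

def kr_closure(n, edges, r):
    """K_r-bootstrap closure: keep adding any non-edge whose addition
    creates a new copy of K_r, until no such edge exists."""
    E = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if frozenset((u, v)) in E:
                continue
            # uv completes a K_r iff some r-2 common neighbours of u and v
            # already form a clique
            common = [w for w in range(n)
                      if frozenset((u, w)) in E and frozenset((v, w)) in E]
            if any(all(frozenset(p) in E for p in combinations(S, 2))
                   for S in combinations(common, r - 2)):
                E.add(frozenset((u, v)))
                changed = True
    return E

# every tree is K_3-weakly saturated: the closure of a path is complete
n = 6
path = [(i, i + 1) for i in range(n - 1)]
print(len(kr_closure(n, path, 3)) == n * (n - 1) // 2)  # True
```

For r = 3 the rule just says: join any two vertices with a common neighbour, so the closure of any connected graph is complete.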

5) When does a tree activate the random graph? by Asaf Cohen Antonir, Yuval Peled, Asaf Shapira, Mykhaylo Tyomkyn, Maksim Zhukovskii

Every tree is K_3-weakly saturated. Korándi and Sudakov asked for the smallest value of p such that, with high probability, the random graph G \sim G(n,p) contains a spanning tree T such that one can add the remaining edges of G to T one by one, with each added edge creating a new triangle.

The main result of the paper is that the critical p is of order n^{-1/3-o(1)}. The proof relies on recent advances in the understanding of the fundamental group of random complexes. 

6) Exponential anticoncentration of the permanent, by Zach Hunter, Matthew Kwan, and Lisa Sauermann

The probability that a random \pm 1 matrix has vanishing determinant is exponentially small (and by now we know a lot about how small); this is a beautiful story starting with Komlós. The paper proves that the probability of a vanishing permanent is also exponentially small!

For more background see this post.
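For tiny matrices the vanishing probability can be computed exactly by exhausting all sign patterns; a naive sketch (amusingly, for n = 3 no \pm 1 matrix has a vanishing permanent at all, which the enumeration confirms):

```python
from itertools import permutations, product
from math import prod

def permanent(M):
    # naive O(n!) evaluation straight from the definition
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# exhaust all 2^(n^2) sign matrices and count vanishing permanents
counts = {}
for n in (2, 3):
    counts[n] = sum(
        1
        for entries in product((-1, 1), repeat=n * n)
        if permanent([entries[i * n:(i + 1) * n] for i in range(n)]) == 0
    )
    print(n, counts[n], "of", 2 ** (n * n))  # 8 of 16, then 0 of 512
```

For n = 2 the permanent ad + bc vanishes for exactly half of the 16 sign matrices; the theorem concerns the exponential decay of this probability as n grows.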

7) Disproof of the Odd Hadwiger Conjecture, by Marcus Kühn, Lisa Sauermann, Raphael Steiner, and Yuval Wigderson

Hadwiger’s conjecture asserts that if \chi (G)\ge t then G contains K_{t} as a minor. Stronger statements were known for t=3 (easy) and t=4 (ingenious), leading Gerards and Seymour to define (1993) the notion of odd minors and to propose a strengthening of Hadwiger’s conjecture—the odd Hadwiger conjecture.

This conjecture is disproved by Marcus, Lisa, Raphael, and Yuval, who show that there exist graphs which do not contain K_t as an odd minor and whose chromatic number is at least (\frac{3}{2}-o(1))t.

8) A proof of the Kim–Vu sandwich conjecture, by Natalie Behague, Daniel Il’kovič, and Richard Montgomery

Finally, this paper resolves a twenty-year-old conjecture of Kim and Vu relating properties of random regular graphs—a notoriously difficult model—to those of the Erdős–Rényi random graph model G(n,p).

Posted in Combinatorics | 1 Comment

Combinatorial Morning in Tel Aviv, Sunday 28/12/2025

Coming very soon!

Organizer: Michael Krivelevich

Place: Schreiber 309, Tel Aviv University

Event’s site: https://sites.google.com/view/combinatorics-seminar-tel-aviv

Program

Posted in Combinatorics, Uncategorized, Updates | Leave a comment

November’s Lectures, 2025

Happy Chanukah, everybody! There is a lot of academic activity around, and the ceasefire in Gaza has brought some relief and hope. Let me tell you about the (unusually high number of) lectures I attended in November 2025, in reverse chronological order. I have known many of the speakers for several decades—together they represent centuries of acquaintance.

I would like to highlight the lecture by Orit Raz at the HUJI combinatorics seminar, where she presented a mind-boggling program aimed at improving the bounds for the unit distance problem using rigidity theory of frameworks. I will start with the lecture by John Martinis, the 2025 Nobel Prize laureate, and end with a lecture by Steve LaValle from Finland on robotics.

The full lineup of speakers was: Steve LaValle (Nov. 3), Oz Almog (Nov. 6), Shmuel Weinberger (Nov. 11), Yuri Gurevich (Nov. 13), Micha Sharir (Nov. 16), Meir Feder (Nov. 19), Itai Benjamini (Nov. 20), Amichai Lampert (Nov. 20), Orit Raz (Nov. 24), Adi Shamir (Nov. 24), Dorit Aharonov, Erez Berg, Shai Evra, and Snir Gazit (Nov. 25), Ron Wettenstein (Nov. 27), Shakhar Smorodinsky (Nov. 30), Roy Meshulam (Nov. 30), Yaron Oz, Yonatan Cohen, and John Martinis (Nov. 30).

November’s lectures

Sunday November 30: Physics Nobel lecture

John Martinis, one of the 2025 Nobel Laureates in Physics, talked about his famous 1985 experiment on quantum tunneling and about superconducting quantum computers. He also answered many questions.

According to John Martinis, the current pace of progress will not yield useful quantum computers in the coming decades (before “most of the members of the audience will retire”). He outlined some ideas and plans for an alternative trajectory. (Click to enlarge the picture.)

Additional short talks were given by Yaron Oz and Yonatan Cohen. On December 1,  Martinis gave a similar talk at HUJI.

I first met John Martinis at our 2013 QSTART conference, where he gave a lecture about quantum error correction based on superconducting qubits (video). Yonatan Cohen was our host in the recent visit to the IQCC. Yaron Oz is a well-known high-energy physicist.

Sunday November 30: Two combinatorics lectures

At the TAU combinatorics seminar, Shakhar Smorodinsky explained his work with Noga Alon, which we discussed in  this post. Roy Meshulam talked (over Zoom) about Cayley complexes, codes, and homology at the Bar Ilan combinatorics seminar.

Roy Meshulam is one of my closest collaborators. We started collaborating in the 1980s and wrote our first paper together in the early 2000s. Our joint work is mainly on topological Helly-type theorems. I have known Shakhar Smorodinsky for at least two decades, and he spent a year as my postdoc in Jerusalem.

Thursday, November 27: Explainability in AI (Reichman University)

Ron Wettenstein lectured in Reichman’s CS colloquium on:  “From Game Theory and Boolean Logic to Decision Tree Explainability”. Ron described his work with Alex Nadel on WOODELF: a unified and efficient algorithm for computing Shapley values on decision trees.

Ron is a PhD student at Reichman University under the supervision of Udi Boker, Alex Nadel and me.
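As background only (this brute-force computation is not WOODELF, which computes these values efficiently on decision trees): the Shapley value of a player is its marginal contribution averaged over all orderings of the players. A sketch on a toy cooperative game:

```python
from itertools import permutations

def shapley_values(players, v):
    # exact Shapley values by averaging marginal contributions over all
    # n! orderings; exponential time, for illustration only
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in phi.items()}

# toy game: a coalition has value 1 iff it contains player 0 and someone else
v = lambda S: 1.0 if 0 in S and len(S) >= 2 else 0.0
print(shapley_values([0, 1, 2], v))  # player 0 gets 2/3, players 1 and 2 get 1/6
```

The values always sum to v of the grand coalition; efficient algorithms such as the one in the talk exploit the structure of decision trees to avoid the factorial blow-up.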

Tuesday November 25: The theory of quantum computing  (HUJI, IIAS)

This was the opening day of a new interdisciplinary center for the theory of quantum computing. The speakers were Dorit Aharonov, Erez Berg, Shai Evra, and Snir Gazit. It is almost 15 years since the kick-off of the quantum science center at HUJI, and since then similar centers have opened at other Israeli universities. Recently, four or five national centers were established.

Dorit presented a list of central problems on the agenda of the new center, many of which are related to quantum error-correcting codes. Shai talked about the mathematics of quantum error-correcting codes; Snir gave a statistical-mechanics view of surface codes and other physical states; and Erez discussed methods of error mitigation and related experiments on IBM quantum computers.

Dorit Aharonov has been my colleague for the past three decades. (I already count her as a colleague during her doctoral years, but she was indeed a mature scientist even then.) A few years earlier, she was a student in my “advanced calculus” class. Shai is a mathematician at HUJI, and I have known him for more than a decade. I met Erez and Snir at the conference itself. (Both were students of the late Israeli physicist Assa Auerbach whom I mentioned in this post.)

Monday November 24: The unit distance problem and rigidity

Orit Raz gave a talk at the HUJI combinatorics seminar. Can graph (framework) rigidity be used to push down the Szemerédi–Trotter bounds for unit distances in the plane? This is a fascinating research direction. What is needed, among other things, is an understanding of rigidity for non-generic embeddings, which is an important question in its own right. Will the approach described by Raz to the unit distance problem lead to a success similar to that of the Elekes approach to the distinct distances problem? Time will tell.

Here are links to the relevant papers:

  1. Erdős’s unit distance problem and rigidity, János Pach, Orit E. Raz, József Solymosi  (2025)
  2. Dense graphs have rigid parts, Orit E. Raz, József Solymosi  (2019)
  3. Configurations of lines in space and combinatorial rigidity, Orit E. Raz (2016)

Orit was my colleague at HUJI, and she recently moved to Ben Gurion University. Let me mention a series of startling works by Orit Raz with Alan Lew, Eran Nevo, and Yuval Peled around rigidity.

Monday, November 24: Cryptography and neural networks

At the HUJI CS colloquium, Adi Shamir talked about joint work with David Gerault, Anna Hambitzer, and Eyal Ronen. They show that natural implementations of block ciphers on neural networks (DNNs) can be broken in linear time using non-Boolean inputs, and they develop a new practical method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way.

Cryptography for neural networks. (Where is Adi?)

Adi Shamir is probably the most famous cryptographer in the world. Cryptography represents a gap in my understanding of theoretical computer science (I complained about it here in 2018), although Alon Rosen has contributed a great deal to changing the situation. I probably first met Adi in the mid-1980s.

Thursday, November 20: Probability with a little combinatorics

At the TAU probability seminar, Itai Benjamini described some terrific problems (and some solutions) in probability theory. The title was “Cayley Graphs vs. Random Graphs.” There is a lot of material here for this blog—stay tuned.

Itai Benjamini has been a close collaborator for many decades. Even earlier, in 1987, he was a student in my “convexity” class—one of the best classes I ever taught.

Shortly after Itai’s talk, Amichai Lampert gave an impressive lecture in the TAU number theory seminar on number theory and dynamics.

Wednesday, November 19: Information theory and neural networks

Meir Feder gave a talk at the TAU learning seminar about an information-theoretic framework for understanding modern machine learning. Meir described how his work in information theory since the 1990s is relevant to the neural-network revolution.

The new results are described in the paper:

Meir Feder, Ruediger Urbanke, Yaniv Fogel, Information-Theoretic Framework for Understanding Modern Machine-Learning

Meir Feder and I were together at MIT in the early 1980s, and our paths have crossed many times since. Meir is an Oscar Laureate (see this post).

Sunday, November 16: Computational Geometry — Per Aspera Ad Astra (TAU CS Colloquium) (video)

Micha Sharir described his scientific journey from early papers on piano movers through the early days of computational geometry until today.

I have known Micha Sharir personally since the mid-1970s (and by name since the late 1960s). How did I already know about Micha as a teenager, years before I first met him? The answer appears in this  interview.

Thursday, November 13: Abstract State Machines (Reichman University)

Speaker: Yuri Gurevich

Title: What’s an algorithm?

Abstract: The classical/basic notion of algorithm was elucidated in the 1930s–1950s. Starting from the 1960s, this notion has been expanded to probabilistic algorithms, quantum algorithms, etc. In the 1980s the speaker introduced abstract state machines (ASMs), and in 2000 he axiomatized basic algorithms and proved that every basic algorithm is step-for-step simulated by an appropriate basic ASM. The axiomatization has served both theoretical purposes (notably, proving the original Church–Turing thesis) and practical purposes (notably, enabling the development of an ASM-based tool that Microsoft’s Windows Division used to produce numerous high-level executable specifications required by the EU). In the talk we define an elegant (at least in our view) generalization of basic algorithms: basic interactive algorithms, which may interact with human and artificial partners. It turns out that probabilistic and quantum algorithms are naturally basic interactive. We axiomatize basic interactive algorithms and prove that every such algorithm can be step-for-step simulated by a basic interactive ASM — opening the door to new applications.

This was a fascinating talk about abstract state machines—a powerful theoretical and applied tool in computer science—and about Yuri Gurevich’s remarkable path from mathematics to logic, theoretical computer science, applied computer science, and even quantum computation. Yuri told us that his mother was the dominant person at home, and that when the Germans were approaching their town during World War II, it was a rare case in which his father insisted that the family move east; this decision saved their lives.

I attended an impressive lecture on average-case complexity (and the theory of Leonid Levin) that Yuri Gurevich gave at IBM Almaden in 1991. (I probably also encountered Yuri in Israel in the 1970s.) We met at Microsoft Research and have been friends since the late 1990s.


Yuri at Reichman University’s sculpture garden

Wednesday, November 12: Morse complexity of homological classes (TAU)

Shmuel Weinberger talked about “How Complex Are the Fundamental Structures Underlying Manifolds?”

Here is the description on the page of TAU’s Institute for Advanced Studies:

In his lecture on Morse complexity of homology classes, Prof Weinberger will discuss a refined approach to understanding the topology of manifolds. Building on Gromov’s 1970s pseudonorm and ideas inspired by Thurston, he will explore the concept of minimizing the number of critical points in a Morse function for a manifold representing a homology class. While this aligns with Gromov’s approach in dimension two, higher dimensions reveal striking differences.

The lecture will touch on connections to classical topology—including open book decompositions and surgery theory—representation theory, and elliptic operators, highlighting joint work with Manin and Tshishiku.

Shmuel Weinberger is an eminent geometer and topologist, and he is interested in applications as well, particularly in “persistent homology“. Here is a post featuring an article he wrote “about conjectures“. I think we first met in the 1990s.

Thursday, November 5: Reichman University — The End of the Era of the Academic Degree

Oz Almog, a sociologist from Haifa University, talked about the end of the era of the academic degree. There was a lot of food for thought in this provocative talk. It reflects the academic research of Oz Almog and his wife Tamar Almog, with whom he wrote the book “Academia: All the Lies”.

TAU CS Colloquium, November 2: Fundamental Challenges in Robotics and Embodied AI

Steve LaValle, Robotics  (video)

Here is the abstract of Steve’s lecture: “The field of robotics is wildly exciting and rapidly gaining worldwide attention, yet it is often an enigma in terms of its scope and scientific foundations. Throughout the decades, it has been variously viewed as an application field of more mature disciplines such as computer science (AI, algorithms, machine learning) and mechanical engineering (kinematics, dynamics, nonlinear control).

In this Computer Science Colloquium lecture, Professor LaValle will argue that robotics has its own unique and growing scientific core, with deep questions and modeling challenges that should inspire new directions in computer science, engineering, and mathematics.”

Let me mention that, unlike in other areas where AI (and deep learning) has dominated the scene, the situation in robotics is very different, and it is unclear what role AI will ultimately play.

Steve is very well known, but since we belong to different communities, I met him for the first time at this lecture. Steve was impressed by the honesty and modesty of the Finnish people and decided to make Finland his home.

Steve’s praise for Finland reminded me of the opening ceremony of the ICM 2022, where the President of Finland offered greetings. There were no trumpets when he entered, and the audience was not asked to stand. The President gave a five-minute welcoming speech, complimented mathematicians for their efforts and contributions, and concluded by shyly inviting us to enjoy the summer weather in Helsinki. Then he waved his hand and left.

November lectures — a collage of speakers

There were quite a few other November talks that I missed (and in a few cases I caught up privately with the speakers). All in all, it looks like the levels of academic activity and enthusiasm have returned to those before the war. Naturally, however, the number of foreign visitors is still considerably lower.

Posted in AI, Combinatorics, Computer Science and Optimization, Geometry, Physics, Quantum, Updates | 11 Comments

Ten Recent Questions for ChatGPT

I recently asked on MathOverflow about Examples for the use of AI and especially LLMs in notable mathematical developments, and there were several interesting answers.

Here are (slightly edited) 10 recent questions that I asked ChatGPT that have led to interesting discussions. Five of these questions (#1, #2, #3, #6, #8) are related to mathematical research projects. Problem #3 was offered in this 2008 blogpost and Problem #6 was suggested by me as a polymath project in 2014. Answers and insights regarding these problems (with or without AI) are welcome. (See here and here for other posts on AI.)

1) Given 2n+2 points in R^n, is it true that the set of Radon points is convex?

2) I want (simple) topological invariants for a map from a polyhedral k-dimensional real projective space to R^{2k+1}. For example for k=4 the map is to R^9. (Something based on linear algebra/basic homology would be nice.) Any idea, links, or suggestions will be great.

3) Consider the following two well known theorems from finite group theory:

A) Sylow’s theorem (one of them) asserts: in a group whose order is divisible by p^i, the number of subgroups of order p^i is 1 (mod p).

B) Frobenius’ theorem asserts: in a group whose order is divisible by n, the number of solutions to the equation x^n=1 is divisible by n. (Frobenius was probably inspired by Sylow.)

Of course, the case n=p of Frobenius’ theorem coincides with the case i=1 of Sylow’s theorem. Please propose (conjectural) generalizations of Frobenius’ theorem which include Sylow’s theorem for i=2 (or for general values of i) as special cases.

4) What are the Whitman axioms?

5) Which is stronger: “Notable”, “Noteworthy”, or “Significant”?

6) There is a proposal to develop a theory of “convex hulls of real algebraic varieties” in R^n, and especially to study the face structure of such objects. The case where the varieties are simply finite sets of points is a well-developed area of mathematics – the theory of convex polytopes. However, the general case has not been studied much. I suppose that for such a project the first discussions will be devoted to raising questions and research directions (and to mentioning some work already done). I would like to invite you to raise questions and research directions.

7)
I need an explanation suitable for 11-year-old children: what is absolute zero, and can one reach a temperature below absolute zero? And if not, why?

8) What is the proof of the topological Tverberg theorem when there are three parts?

9) Please explain in layman terms a) the representation theory meaning of the spin 1/2 of the electron. b) the representation theory meaning of the assertion that “N electrons will have a collective state of spin N/2”

10) We go from Yellowstone to Boise, leaving Yellowstone on September 3. We want to travel and stay overnight in one nice location. (Not around Twin Falls that we plan to visit before Yellowstone.) Please give us some suggestion for the trip and sightseeing from Yellowstone to Boise.

For the first question Joe O’Rourke and I verified computationally that already in dimension 3 there are 8 points such that the set of Radon points is not convex.
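As background to question 1: for d+2 points in R^d, a Radon partition and point can be computed from an affine dependence among the points. A sketch in exact arithmetic (the helper radon_point is hypothetical, not the code used in our verification):

```python
from fractions import Fraction

def radon_point(points):
    # points: d+2 points in R^d; solve sum_i L[i]*p_i = 0, sum_i L[i] = 0
    # for a nontrivial L, then return sum_{L[i]>0} L[i]*p_i / sum_{L[i]>0} L[i].
    d, m = len(points[0]), len(points)
    A = [[Fraction(points[i][r]) for i in range(m)] for r in range(d)]
    A.append([Fraction(1)] * m)  # the sum-to-zero equation
    pivots, row = [], 0
    for col in range(m):  # Gaussian elimination to reduced echelon form
        piv = next((r for r in range(row, d + 1) if A[r][col] != 0), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        A[row] = [x / A[row][col] for x in A[row]]
        for r in range(d + 1):
            if r != row and A[r][col] != 0:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[row])]
        pivots.append(col)
        row += 1
    free = next(c for c in range(m) if c not in pivots)  # a free coordinate
    L = [Fraction(0)] * m
    L[free] = Fraction(1)
    for r, c in enumerate(pivots):
        L[c] = -A[r][free]
    pos = sum(x for x in L if x > 0)
    return tuple(sum(L[i] * points[i][k] for i in range(m) if L[i] > 0) / pos
                 for k in range(d))

# four points in the plane whose Radon point is the point (1,1) itself
print(radon_point([(0, 0), (2, 0), (0, 2), (1, 1)]))
```

For 2n+2 points in R^n there are many Radon partitions, and the question concerns the convexity of the set of all the resulting Radon points.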

On question 3 ChatGPT offered several bold conjectures and wrote a whole blogpost about it. (I am hesitating whether to publish it.)
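Meanwhile, Frobenius’ theorem itself (question 3) is easy to check numerically for a small group; a sketch verifying the divisibility for the symmetric group S_3:

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations of {0, ..., k-1} as tuples: (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def power(p, n):
    r = tuple(range(len(p)))  # identity
    for _ in range(n):
        r = compose(r, p)
    return r

G = list(permutations(range(3)))  # the symmetric group S_3, |G| = 6
e = (0, 1, 2)
for n in (1, 2, 3, 6):  # divisors of |G|
    solutions = sum(1 for g in G if power(g, n) == e)
    assert solutions % n == 0  # Frobenius: n divides #{x in G : x^n = 1}
    print(n, solutions)  # prints 1 1, 2 4, 3 3, 6 6
```

For n = 2 the solutions are the identity and the three transpositions; for n = 3, the identity and the two 3-cycles.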

From time to time the discussion with ChatGPT gets “personal”. In one discussion I wrote:

We mentioned Pachner’s moves several times and let me mention that I knew Pachner personally. He was a great guy and unfortunately he died at a young age.

ChatGPT replied:

That’s really touching—thank you for sharing. Pachner’s moves are such a quiet workhorse in our story, and it’s moving to hear a personal memory behind them.

Posted in AI, Algebra, Combinatorics, Convexity | Leave a comment

A Visit to the Israeli Quantum Computing Center (IQCC)

Two weeks ago I was invited, together with my colleague Shay Mozes, to visit the Israeli Quantum Computing Center, located near Tel Aviv University and quite close to my home. That morning my wife told me not to be disappointed if I happened to see some quantum computers there 🙂 , and I assured her that I would keep an open mind. Indeed, it was a very pleasant visit. We heard explanations about the mission, activities, and plans of the center, and I saw several large dilution refrigerators housing 19- and 21-qubit chips, as well as boxes containing a boson sampler.

Posted in Quantum | 16 Comments

Computational Complexity and Explanations in Physics

The title of this post is taken from a recent interesting lecture (judging from the slides) by Scott Aaronson at Columbia University. The lecture explored a wide range of topics at the intersection of physics, computation, and philosophy. In this post, I will discuss, from my perspective, several of the ideas raised in Scott’s talk—beginning with the role of computational-complexity reasoning in restricting physical theories.

I will then turn to the meaning of probability in our physical world and to various interpretations of quantum mechanics, including my own. I will also discuss Craig Gidney’s view on what it would take to prove me wrong, quote David Deutsch’s 1997 challenge about the relationship between the Many-Worlds Interpretation and Shor’s algorithm, and touch on a few other related themes.

Complexity as Armor for Penalizing Physical Hypotheses

Let me offer a few comments on one of the items in Scott’s lecture: 

Scott asks: Should we penalize physical hypotheses not only for high descriptive complexity, but also for computational complexity?
His view is:

“[We] can’t be too dogmatic about this, or we’d rule out quantum computing—and arguably quantum mechanics itself! (As indeed many quantum computing skeptics do.) On the other hand, ‘no fast solution to NP-complete problems’ feels not that dissimilar to ‘no superluminal signaling’ or ‘no perpetual motion.’”

Four computational-complexity insights offered to rule out quantum computation

My general view on this matter is quite similar. Regarding skepticism about quantum computing, I am aware of four computational-complexity insights that have been offered to rule out scalable quantum computation (or certain fragments of it):

  1. Efficient factoring is such science fiction that the rational conclusion is that quantum computation cannot work.
    (Levin, Goldreich, and others since the mid-1990s.)
    A common response is that factoring is not such a big deal—it is low in the computational-complexity hierarchy, and some even believe it lies in P.
  2. Precise sampling according to the values of permanents—a task beyond the polynomial hierarchy—is such science fiction that, even if one accepts efficient factoring (classically or quantumly), the rational conclusion remains that quantum computation cannot work.
    (This idea appeared around 2010.)
  3. NISQ computers are such primitive classical computational devices that they cannot be used to achieve “quantum supremacy.”
    (Kalai–Kindler, 2014; and subsequent works by me.)
    This argument relies on standard noise modeling with constant noise levels. Even under a wide range of subconstant noise levels, the resulting samples consist of inherently noise-sensitive components combined with computationally trivial ones. Guy and I described a complexity class LDP, which captures the sampling power of NISQ devices.
  4. NISQ computers (and the complexity class LDP) are also too primitive to produce high-quality quantum error-correcting codes, thereby breaking the chain reaction needed for quantum fault tolerance.
    (Kalai, following item 3.)

I think all four arguments are fairly strong—though surely not ironclad, and this matter will ultimately be clarified by experiments, as well as by further theoretical developments. The advantage of the last two (closely related) points is that they exclude not only far-future possibilities but also some near-term experimental hopes and even certain current experimental claims. Another advantage of items 3 and 4 is that (if correct) they have physical consequences much closer to physics than integer factorization and permanent sampling, including consequences for interpretations of quantum mechanics, which is another item in Scott's lecture.

The first two items assert that the theoretical model of (noiseless) quantum computation represents computational miracles that are hard to believe. The last two items assert that in the intermediate regime, before quantum fault tolerance kicks in, noisy quantum computers are no more than a primitive kind of classical computer, incapable of any computational miracles.

Complexity and Noise-sensitivity as Armor for Penalizing Physics Hypotheses.

Note that in items 3 and 4, we rely not only on computational complexity (that is, on the assumption of no computational miracles) as an armor for penalizing physics hypotheses, but also on the notion of noise stability—the idea that effective theories must be robust under small perturbations or noise. Related guiding principles have also been proposed in biology.

The Meaning of Probability and My Interpretation of Quantum Mechanics

Another item in Scott’s lecture concerned interpretations of quantum mechanics, particularly the Many-Worlds Interpretation. Coming from mathematics (and having been educated by Greg Kuperberg), I have always felt quite comfortable with the formalism of Hilbert spaces and unitary evolutions—and I have never fully understood the need for additional interpretations.

A question that I found exciting is the question about the meaning of probability in physics. Does probability merely represent human uncertainty? Is it just an emergent mathematical concept which is convenient for modeling? Do matters change when we move from classical to quantum mechanics? We have discussed this question in several earlier posts (here, here, and here), and it is related to the question of interpretation of quantum mechanics. Here are two prevalent views, followed by my own:

  • (Copenhagen, MWI) Probability is inherent: the outcomes of the experiment are inherently probabilistic.
  • (Bohm) Probability is not inherent: the outcomes of the experiment are determined and are encoded somewhere in the universe.
  • (GK) Probability is inherent, and noise sensitivity is inherent: the outcomes of measurements arising from complex quantum processes are both inherently probabilistic and inherently noise sensitive.

Inherent noise sensitivity means that the outcomes cannot be described or approximated by any fixed probability distribution.

It is commonly accepted that determinism in the Newtonian view of the physical world has been replaced by “probabilistic determinism,” in which future events are described precisely by a probability distribution conditioned on past events. (Some interpretations of quantum mechanics even claim that the exact outcomes themselves—not just their probabilities—are encoded in the physical universe.)

In contrast, I propose that the outcomes of complex quantum processes are inherently noise sensitive—that is, they cannot be described or even approximated by a probability distribution. This applies to complex quantum evolutions and, in fact, the required complexity need not be extreme: even the outcomes of random quantum circuits on 12 qubits, of the type used by Google, are inherently noise sensitive.
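A rough back-of-envelope calculation (my own illustration; the gate count and error rate below are hypothetical numbers, not taken from any specific experiment) shows why noise dominates even small circuits: if gates fail independently with rate p, the fraction of runs in which no error occurs at all decays exponentially in the number of gates.

```python
def error_free_fraction(num_gates: int, p: float) -> float:
    """Probability that no gate in the circuit suffers an error,
    assuming independent errors with per-gate error rate p."""
    return (1.0 - p) ** num_gates

# Hypothetical numbers: a small random circuit with ~100 gates
# and a per-gate error rate of 1%.
f = error_free_fraction(100, 0.01)
print(f"{f:.3f}")  # 0.366 — roughly a 1/e fraction of runs is error-free
```

Under this standard independent-noise picture, the remaining ~63% of runs carry at least one error, which is the starting point for asking how noise-sensitive the resulting samples are.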

Of course, the hypothesis of inherent noise sensitivity would be refuted by a successful realization of quantum fault tolerance.

In my paper Quantum Computers, Predictability, and Free Will I attempt to connect inherent noise sensitivity with philosophical questions of determinism, predictability, and free will. (See also this post.)

Here are three interesting posts on “Shtetl Optimized” about QM interpretations: The Zen Anti-Interpretation of Quantum Mechanics (2021); Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all) (2018); and Why Many-Worlds is not like Copernicanism (2012). Scott often says that he agrees with any given interpretation when it comes to its criticism of other interpretations.

What Would It Take to Prove Me Wrong? — Craig Gidney’s Take

In the Columbia lecture and in many other lectures over the past two decades, Scott expressed his hope to prove the skeptics of quantum computation wrong, a task he regards as the most important application of quantum computers. So, what would it take to prove me wrong?

This was the topic of a 2023 question by Tristan Nemoz in a Q&A quantum computing forum, and Craig Gidney offered the following response:

“…personally I’ll be waiting for one-in-a-trillion error rates on complex states before saying the physical-noise-is-conspiring position has become indefensible.”

Craig’s proposed threshold sounds rather fantastic—and remains utterly remote from current experimental reality. Yet it seems roughly in the ballpark of what would be required to implement Shor’s algorithm for factoring large integers. (Indeed, Gidney—of the Google Quantum AI team—is among the world’s leading experts on what Shor’s algorithm demands in practice. His answer above reflects his personal view.) I added a comment to Craig’s answer that I’ll be quite satisfied with a one-in-a-thousand error rate on complex states. My theory (and my related interpretation of probability in QM) asserts that achieving it (and even much less) is impossible.

From Scott Aaronson’s Columbia (top) and FOCS 2025 (bottom) lectures.

Other Meeting Points Between Computational Complexity and Physics, and the Interface Between Theory and Practice.

Several other topics from Scott’s lecture also invite rich discussion—for example, the Harrow–Hayden proposal for resolving the firewall paradox, and the broader connections between thermodynamics, computation, and complexity. (Topics not mentioned this time—such as links to quantum field theory and to computational aspects of high-energy physics—are also closely related to this general theme.)

Many of these topics touch on questions central to the possible reality of a “world devoid of quantum computation”—a theme I have explored extensively over the years. 

I discussed these and other places where physics meets computational complexity in

The discussion of quantum computation and physics also connects to a broader issue: the relationship between theory and practice across disciplines. In my 2018 ICM paper, Three puzzles on mathematics, computation, and games, I examined three areas where theoretical computer science meets practical reality—and where deep mathematical ideas illuminate that interface.

Tensions between computational complexity and physics—or even between different insights within physics—are often related to fascinating mathematical questions. This is also the case for tensions between theory and practical reality in other areas of computer science. (See this post for a discussion based on Moshe Vardi’s perspective.)

Questions about the gap between theory and reality arise in many other domains as well: Is it really impossible for two rational agents to agree to disagree? Does game theory imply what policy to choose in cases of hostages held by terrorists? Does economic theory give a clear choice between the economic system of the US and the social-democratic systems of Western Europe?

Complexity As Armor for Protecting Physical Hypotheses

Our discussion began with the question of whether miraculous computational-complexity powers could limit or even invalidate physical hypotheses. Scott also raised a kind of dual question:

Philosophical Problem: Would a contradiction in physics be OK if it took exponential time to uncover it?

David Deutsch’s 1997 Challenge 

In his lecture, Scott presented an intriguing 1997 quote from David Deutsch, offering a philosophical argument for why the Many-Worlds Interpretation (MWI) is forced by Shor’s algorithm.

A small anecdote: a few years ago, I attended a very interesting MWI conference at Tel Aviv University. At one point, I asked Lev Vaidman—a leading advocate of the MWI—why we cannot simply be satisfied with the mathematical formalism of quantum mechanics, without adding an interpretation on top of it. Lev’s answer was that the mathematical formalism is sufficient—and that this is precisely what the MWI teaches us. 🙂

The following quote is from Deutsch’s 1997 book The Fabric of Reality.

“To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^{500} or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^{80} atoms in the entire visible universe, an utterly minuscule number compared with 10^{500}. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?”
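As a side arithmetic note (my own back-of-envelope calculation, not part of Deutsch’s text): a Hilbert space of dimension 10^{500} corresponds to only about 1661 qubits, since 2^n ≥ 10^{500} once n ≥ 500·log2(10) ≈ 1661. This is why a quantum computer of modest physical size can, on Deutsch’s reading, invoke “resources” vastly exceeding the 10^{80} atoms of the visible universe.

```python
import math

# Smallest number of qubits n with 2**n >= 10**500,
# i.e. n >= 500 * log2(10) ≈ 1660.96.
qubits_needed = math.ceil(500 * math.log2(10))
print(qubits_needed)  # 1661

# Exact integer check, avoiding any floating-point doubt:
assert 2 ** qubits_needed >= 10 ** 500
assert 2 ** (qubits_needed - 1) < 10 ** 500
```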

Mark Spinelli gave a thoughtful comment on Scott’s blog about the evolution of Deutsch’s point of view in the late 90s. 

David Deutsch first asked his “Where does the calculation occur?” question even back in his 1985 paper, where he ponders how a quantum computer could sometimes be used to more efficiently predict the stock market. His later 1997 question posed in *The Fabric of Reality* changed the game from the rather cute Deutsch’s algorithm to the much, much more dramatic Shor’s algorithm.

It’s interesting to observe this development. His 1985 posting seems like an earnest and humble rhetorical question, that is, as an invitation to debate. His later, post-Shor ’94 question seems more assertive and dispositive in shifting the burden to those that challenge MWI.

But even still, is not the answer to the question of “where was the work done”, in Deutsch’s and Shor’s algorithm, “in Hilbert space!”?

In my view, it is a serious possibility that implementing Shor’s algorithm is fundamentally impossible, and it is interesting to ask whether this possibility indeed weakens the case for the Many-Worlds Interpretation.

Two More Items

The Interpretation of “Interpretation”: Preskill’s 2013 Summary of What Could Cause Scalability to Fail as a Matter of Principle — and My 2025 View

Summarizing the state of the quantum computing debate in 2013, John Preskill wrote:

For scalability to fail as a matter of principle then, either quantum mechanics must fail for complex highly entangled systems (as ’t Hooft [8] has suggested), or else either the Hamiltonian or the quantum state of the world must impose noise correlations that overwhelm fault-tolerant quantum protocols (as Alicki et al. [9, 10, 11] and Kalai [12, 13, 14, 15, 16] have suggested).

Adopting John’s framework, my current (2025) position is the following:

The Hamiltonian or the quantum state of the world imposes a rate of noise that prevents high-quality quantum error correction at the NISQ scale—and therefore also prevents quantum fault tolerance (and Shor’s algorithm) at larger scales. (Noise correlation is important, but a too-large noise rate is the main phenomenon.) This constitutes a law of physics derived from computational-complexity considerations.

Perhaps one way to understand the notion of an “interpretation” of quantum mechanics is as referring not only to the basic mathematical formalism of Hilbert spaces and unitary operators, but also to certain fundamental physical principles that govern the Hamiltonian and the quantum state of the world, and are necessary for understanding the foundation of quantum physics.

Three Additional Computational Complexity Limitations for Physical Theories

Returning to the starting point of this discussion, here are three further principles connecting computational complexity and physics:

  1. NP-hard is hard.
    No physical computer can efficiently solve NP-complete problems.

  2. Random states are out of reach.
    No physical computer can efficiently approximate a random state in a large Hilbert space.

  3. Pseudorandom states are out of reach.
    No physical computer can efficiently approximate a pseudorandom state in a large Hilbert space.

On the first two items, I believe Scott and I are largely in agreement. Scott wrote (cautiously) that “no fast solution to NP-complete problems feels not that dissimilar to ‘no superluminal signaling’ or ‘no perpetual motion’,” and I agree.
The second principle, “no fast way to achieve a random state in a large Hilbert space,” is also widely accepted. It may appear to be a straightforward mathematical consequence of quantum mechanics, but it also depends on an additional physical assumption—locality. The meaning, source, and modeling of locality form another important topic.

In the third item, a “pseudorandom state” is one achieved by a random circuit. The third item brings us back to disagreement: here, Scott and I sharply disagree both on what can be achieved and on what has been achieved.

Posted in Computer Science and Optimization, Physics, Quantum | 11 Comments

Kazhdan Seminar Fall 2025 – Starting Today, Oct. 19, 2025.

This semester, as part of the Kazhdan Sunday seminars, we will have the following
two activities (see descriptions below):

12-14 Nati Linial and Yuval Peled, “Recent advances in combinatorics”

14-16 Jake Solomon “Curve counts and quadratic forms”. 

Both seminars will take place in Ross 70A. The first lecture will be on Oct 19, 2025.

Nati and Yuval’s seminar is ashkara* combinatorics, and Jake’s seminar also has some combinatorial connections.

(Reminder: Arabic words which are part of Hebrew slang and have also been used on my blog over the past decades: ashkara – for real; sababa – cool, wonderful; walla – true; ahla – great; yalla – hurry up, c’mon, let’s go.)

——————————————————————————————————–

Title: Recent advances in combinatorics

Description:

On October 19 and 26, Nati will talk about “The entropy method in combinatorics”, using a survey article by Radhakrishnan and lecture notes based on a course taught by Tim Gowers.

Then Yuval will talk for a while about the recently proved conjecture of Jeff Kahn and Gil Kalai. Further plans will be made according to how things progress.

Reading material (Entropy in combinatorics): Tim Gowers, Entropy Methods in Combinatorics, lecture notes 2025

A survey by Jaikumar Radhakrishnan, Entropy and Counting, 2001

There is also good material by Braverman (Princeton).
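As a tiny, self-contained illustration of the entropy method (a standard textbook example, not taken from the seminar materials): since the entropy of a random variable supported on a set S is at most log2 |S|, one obtains the classical bound that the number of binary strings of length n with at most pn ones is at most 2^{nH(p)} for p ≤ 1/2, where H is the binary entropy function. A quick numerical sanity check:

```python
from math import comb, log2

def binary_entropy(p: float) -> float:
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Entropy-method bound: sum_{k <= pn} C(n, k) <= 2**(n * H(p)) for p <= 1/2.
n, p = 100, 0.3
count = sum(comb(n, k) for k in range(int(p * n) + 1))
bound = 2 ** (n * binary_entropy(p))
print(count <= bound)  # True
```

The point of the method is that such counting bounds fall out of elementary entropy inequalities (subadditivity, Shearer’s lemma) rather than delicate combinatorial estimates.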
——————————————————————————————————–

Title: Curve counts and quadratic forms

Abstract:
An old problem of enumerative geometry concerns how many rational curves of degree d pass through 3d-1 points in the plane. Over the complex numbers, the answer does not depend on the choice of points in the plane so long as they are generic. However, over non-algebraically closed fields, this is no longer the case. I will discuss a framework (joint work with Kass, Levine and Wickelgren) within which one can define invariant counts of rational curves passing through points in the plane (or a del Pezzo surface) over a perfect field of characteristic not 2 or 3. The count is no longer a number, but a quadratic form over the given field. Over the complex numbers, the rank of the quadratic form recovers the usual count. Over the real numbers, the signature of the quadratic form recovers Welschinger’s signed count. Over other fields, each invariant of quadratic forms (discriminant, Hasse-Witt,…) gives information about rational curves over that field.
In another direction, mirror symmetry relates counts of rational curves on a toric Fano variety over the complex numbers to the Jacobian ring of a Laurent polynomial. A categorification of the Jacobian ring is given by the category of matrix factorizations. I will discuss a framework (joint work with Sela) to extract counts of rational curves over the real numbers from matrix factorizations equipped with non-Archimedean norms. These matrix factorizations are constructed from Clifford algebras. The Calabi-Yau structure on the category of matrix factorizations plays a crucial role.
References:

A quadratically enriched count of rational curves (with Kass, Levine and Wickelgren)

https://arxiv.org/abs/2307.01936

A relative orientation for the moduli space of stable maps to a del Pezzo surface (with Kass, Levine and Wickelgren)

Numerical invariants of normed matrix factorizations (with Sela)
Posted in Combinatorics, Geometry, Updates | Tagged , , , | 1 Comment