Quantum computing: too much to handle!
November 13th, 2025
Tomorrow I’m headed to Berkeley for the Inkhaven blogging residency, whose participants need to write one blog post per day or get kicked out. I’ll be there to share my "wisdom" as a distinguished elder blogger (note that Shtetl-Optimized is now in its twentieth year). I’m acutely aware of the irony, that I myself can barely muster the willpower these days to put up a post every other week.
And it’s not as if nothing is happening in this blog’s traditional stomping-ground of quantum computing! In fact, the issue is just the opposite: way too much is happening for me to do it any sort of justice. Who do people think I am, Zvi Mowshowitz? The mere thought of being comprehensive, of responsibly staying on top of all the latest QC developments, makes me want to curl up in bed, and either scroll through political Substacks or take a nap.
But then, you know, eventually a post gets written. Let me give you some vignettes about what’s new in QC, any one of which could easily have been its own post if I were twenty years younger.
(1) Google announced verifiable quantum advantage based on Out-of-Time-Order Correlators (OTOCs)—this is actually from back in June, but it’s gotten more and more attention as Google has explained it more thoroughly. See especially this recent 2-page note by King, Kothari, et al., explaining Google’s experiment in theoretical computer science language. Basically, what they do is this: starting from the all-|0⟩ state, they apply a random circuit C, then a single gate g, then C⁻1, then another gate h, then C again, then g again, then C⁻1, and then measure a qubit. If C is shallow, then the qubit is likely to still be |0⟩. If C is too deep, then the qubit is likely to be in the maximally mixed state, totally uncorrelated with its initial state—the gates g and h having caused a "butterfly effect" that completely ruined all the cancellation between C and C⁻1. Google claims that, empirically, there’s an intermediate regime where the qubit is neither |0⟩ nor the maximally mixed state, but a third thing—and that this third thing seems hard to determine classically, using tensor network algorithms or anything else they’ve thrown at it, but it can of course be determined by running the quantum computer. Crucially, because we’re just trying to estimate a few parameters here, rather than sample from a probability distribution (as with previous quantum supremacy experiments), the output can be checked by comparing it against the output of a second quantum computer, even though the problem still isn’t in NP. Incidentally, if you’re wondering why they go back and forth between C and C⁻1 multiple times rather than just once, it’s to be extra confident that there’s not a fast classical simulation. Of course there might turn out to be a fast classical simulation anyway, but if so, it will require a new idea: gauntlet thrown.
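Here, for concreteness, is a minimal numpy sketch of that echo structure at toy scale: my own illustrative reconstruction, with arbitrary choices of depth, butterfly gates, and measured qubit, not Google’s actual circuits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # toy number of qubits
dim = 2 ** n

def haar_unitary(d):
    """Haar-random d x d unitary via QR decomposition."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_gate(state, gate, qubits):
    """Apply a k-qubit gate to the given qubits of an n-qubit state vector."""
    k = len(qubits)
    psi = np.moveaxis(state.reshape([2] * n), qubits, list(range(k)))
    psi = (gate @ psi.reshape(2 ** k, -1)).reshape([2] * n)
    return np.moveaxis(psi, list(range(k)), qubits).reshape(dim)

# C: a brickwork circuit of Haar-random 2-qubit gates
depth = 4
brick = [(haar_unitary(4), [i, i + 1])
         for layer in range(depth) for i in range(layer % 2, n - 1, 2)]

def apply_C(state, inverse=False):
    for u, pair in (reversed(brick) if inverse else brick):
        state = apply_gate(state, u.conj().T if inverse else u, pair)
    return state

X = np.array([[0, 1], [1, 0]], dtype=complex)
g_qubit, h_qubit = n - 1, n - 2         # arbitrary "butterfly" locations

psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                            # the all-|0> state
psi = apply_C(psi)                      # C
psi = apply_gate(psi, X, [g_qubit])     # g
psi = apply_C(psi, inverse=True)        # C^-1
psi = apply_gate(psi, X, [h_qubit])     # h
psi = apply_C(psi)                      # C
psi = apply_gate(psi, X, [g_qubit])     # g
psi = apply_C(psi, inverse=True)        # C^-1

# Marginal probability that the measured qubit is still |0>: near 1 for very
# shallow C, near 1/2 once the butterfly effect fully scrambles; vary `depth`.
p0 = (np.abs(psi.reshape([2] * n)) ** 2).sum(axis=tuple(range(1, n)))[0]
print(f"Pr[qubit 0 measured as |0>] = {p0:.3f}")
```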
(2) Quantinuum, the trapped-ion QC startup in Colorado, announced its Helios processor. Quick summary of the specs: 98 qubits, all-to-all 2-qubit gates with 99.92% fidelity, the ability to choose which gates to apply “just in time” (rather than fixing the whole circuit in advance, as was needed with their previous API), and an “X”-shaped junction for routing qubits one way or the other (the sort of thing that a scalable trapped-ion quantum computer will need many of). This will enable, and is already enabling, more and better demonstrations of quantum advantage.
(3) Quantinuum and JP Morgan Chase announced the demonstration of a substantially improved version of my and Shih-Han Hung’s protocol for generating cryptographically certified random bits, using quantum supremacy experiments based on random circuit sampling. They did their demo on Quantinuum’s new Helios processor. Compared to the previous demonstration, the new innovation is to send the circuit to the quantum computer one layer at a time, rather than all at once (something that, again, Quantinuum’s new API allows). The idea is that a cheating server, who wanted to spoof the randomness deterministically, now has much less time: using the most competitive known methods (e.g., those based on tensor network contraction), it seems the cheater would need to swing into action only after learning the final layer of gates, so would now have mere milliseconds to spoof rather than seconds, making Internet latency the dominant source of spoofing time in practice. While a complexity-theoretic analysis of the new protocol (or, in general, of "layer-by-layer" quantum supremacy protocols like it) is still lacking, I like the idea a lot.
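Here’s a schematic sketch of the layer-by-layer flow as I understand it, with a mock object standing in for the quantum device. The function names, the layer format, and the deadline value are all illustrative assumptions, not Quantinuum’s actual API.

```python
import secrets
import time

class MockQuantumServer:
    """Stand-in for the quantum device; in reality each layer is streamed to hardware."""
    def __init__(self):
        self.layers = []
    def apply(self, layer):
        self.layers.append(layer)
    def measure(self, n_bits=56):
        # Placeholder: real samples come from measuring the streamed circuit.
        return secrets.randbits(n_bits)

def random_layer(n_qubits=56):
    # Placeholder for a fresh layer of randomly chosen 2-qubit gates.
    return [secrets.randbits(16) for _ in range(n_qubits // 2)]

def certified_round(server, n_layers=20, deadline_ms=5.0):
    for _ in range(n_layers - 1):
        server.apply(random_layer())   # streamed early; reveals little by itself
    t0 = time.monotonic()
    server.apply(random_layer())       # circuit fully determined only now
    sample = server.measure()
    elapsed_ms = 1000 * (time.monotonic() - t0)
    # The point: known spoofing strategies (e.g. tensor-network contraction)
    # seemingly can't begin in earnest until the final layer is revealed, so a
    # tight deadline leaves a cheater only milliseconds, not seconds.
    if elapsed_ms > deadline_ms:
        raise TimeoutError(f"response took {elapsed_ms:.1f} ms; cannot certify")
    return sample

print(certified_round(MockQuantumServer()))
```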
(4) The startup company BlueQubit announced a candidate demonstration of verifiable quantum supremacy via obfuscated peaked random circuits, again on a Quantinuum trapped-ion processor (though not Helios). In so doing, BlueQubit is following the program that Yuxuan Zhang and I laid out last year: namely, generate a quantum circuit C that hopefully looks random to any efficient classical algorithm, but that conceals a secret high-probability output string x, which pops out if you run C on a quantum computer on the all-0 initial state. To try to hide x, BlueQubit uses at least three different circuit obfuscation techniques, which already tells you that they can’t have complete confidence in any one of them (since if they did, why the other two?). Nevertheless, I’m satisfied that they tried hard to break their own obfuscation, and failed. Now it’s other people’s turn to try.
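As a toy illustration of peakedness (with no obfuscation whatsoever, so the secret here is trivially recoverable; hiding it is exactly the part BlueQubit has to work hard on), one can take a Haar-random unitary R and append a reflection P that rotates R|0...0⟩ onto a secret basis state |x⟩:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
dim = 2 ** n

def haar_unitary(d):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

R = haar_unitary(dim)                      # the "random-looking" part
x = int(rng.integers(dim))                 # the secret output string

psi = R[:, 0]                              # R applied to |0...0>
target = np.zeros(dim, dtype=complex)
target[x] = psi[x] / abs(psi[x])           # phase-align so the reflection is exact

v = psi - target                           # Householder reflection mapping psi -> target
P = np.eye(dim) - 2 * np.outer(v, v.conj()) / np.vdot(v, v)

C = P @ R
probs = np.abs(C[:, 0]) ** 2               # output distribution of C on |0...0>
print("secret x:", x, "| argmax:", int(probs.argmax()), "| peak prob: %.3f" % probs.max())
```

In a real proposal, the game is then to compile C = PR into local gates, tune the peak probability below 1, and obfuscate everything so that x stays hidden from classical inspection.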
(5) Deshpande, Fefferman, et al. announced a different theoretical proposal for quantum advantage from peaked quantum circuits, based on error-correcting codes. This seems tempting to try to demonstrate along the way to quantum fault-tolerance.
(6) A big one: John Bostanci, Jonas Haferkamp, Chinmay Nirkhe, and Mark Zhandry announced a proof of a classical oracle separation between the complexity classes QMA and QCMA, something that they’ve been working on for well over a year. Their candidate problem is basically a QMA-ified version of my Forrelation problem, which Raz and Tal previously used to achieve an oracle separation between BQP and PH. I caution that their paper is 91 pages long and hasn’t yet been vetted by independent experts, and there have been serious failed attempts on this exact problem in the past. If this stands, however, it finally settles a problem that’s been open since 2002 (and which I’ve worked on at various points starting in 2002), and shows a strong sense in which quantum proofs are more powerful than classical proofs. Note that in 2006, Greg Kuperberg and I gave a quantum oracle separation between QMA and QCMA—introducing the concept of quantum oracles for the specific purpose of that result—and since then, there’s been progress on making the oracle steadily "more classical," but the oracle was always still randomized or "in-place" or had restrictions on how it could be queried.
(7) Oxford Ionics (which is now owned by IonQ) announced a 2-qubit gate with 99.99% fidelity: a record, and significantly past the threshold for quantum fault-tolerance. However, as far as I know, it remains to demonstrate this sort of fidelity in a large programmable system with dozens of qubits and hundreds of gates.
(8) Semi-announcement: Quanta reports that “Physicists Take the Imaginary Numbers Out of Quantum Mechanics,” and this seems to have gone viral on my social media. The article misses the opportunity to explain that “taking the imaginary numbers out” is as trivial as choosing to call each complex amplitude “just an ordered pair of reals, obeying such-and-such rules, which happen to mimic the rules for complex numbers.” Thus, the only interesting question here is whether one can take imaginary numbers out of QM in various more-or-less “natural” ways: a technical debate that the recent papers are pushing forward. For what it’s worth, I don’t expect that anything coming out of this line of work will ever be “natural” enough for me to stop explaining QM in terms of complex numbers in my undergraduate class, for example.
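(To spell out how trivial the trivial version is: declare that an amplitude is just a pair of reals, with
$$ (a,b)+(c,d)=(a+c,\ b+d), \qquad (a,b)\cdot(c,d)=(ac-bd,\ ad+bc), $$
and you’ve "removed" the imaginary numbers, those being precisely the addition and multiplication rules of a+bi.)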
(9) The list of accepted talks for the annual QIP conference, to be held January 24-30 in Riga, Latvia, is now out. Lots of great stuff as always.
(10) There are probably other major recent developments in QC that I should’ve put into this post but forgot about. You can remind me about them in the comments.
(11) Indeed there are! I completely forgot that Phasecraft announced two simulations of fermionic systems that might achieve quantum advantage, one using Google’s Willow superconducting chip and the other using a Quantinuum device.
To summarize, here are three takeaways:
- Evidence continues to pile up that we are not living in the universe of Gil Kalai and the other quantum computing skeptics. Indeed, given the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next US presidential election getting cancelled, or preempted by runaway superintelligence!
- OK, but what will those quantum computers be useful for? Anyone who’s been reading this blog for the past 20 years, or any non-negligible fraction thereof, hopefully already has a calibrated sense of that, so I won’t belabor it. But briefly: yes, our knowledge of useful quantum algorithms has slowly been expanding over the past thirty years. The central difficulty is that our knowledge of useful classical algorithms has also been expanding, and the only thing that matters is the differential between the two! I’d say that the two biggest known application areas for QC remain (a) quantum simulation and (b) the breaking of public-key cryptography, just as they were thirty years ago. In any case, none of the exciting developments that I’ve chosen to highlight in this post directly address the "what is it good for?" question, with the exception of the certified randomness thing.
- In talks over the past three years, I’ve advocated "verifiable quantum supremacy on current hardware" as perhaps the central challenge right now for quantum computing theory. (As I love to point out, we do know how to achieve any two out of the three desiderata: (a) quantum supremacy, (b) verifiability, and (c) running on current hardware!) So I’m gratified that three of the recent developments that I chose to highlight, namely (1), (4), and (5), directly address this challenge. Of course, we’re not yet sure whether any of these three attempts will stand—that is, whether they’ll resist all attempts to simulate them classically. But the more serious shots on goal we have (and all three of these are quite serious), the better the chances that at least one will stand! So I’m glad that people are sticking their necks out, proposing these things, and honestly communicating what they know and don’t know about them: this is exactly what I’d hoped would happen. Of course, complexity-theoretic analysis of these proposals would also be great, perhaps from people with more youth and/or energy than me. Now it’s time for me to sleep.
UT Austin’s Statement on Academic Integrity
November 6th, 2025
A month ago William Inboden, the provost of UT Austin (where I work), invited me to join a university-wide "Faculty Working Group on Academic Integrity." The name made me think that it would be about students cheating on exams and the like. I didn’t relish the prospect but I said sure.
Shortly afterward, Jim Davis, the president of UT Austin, sent out an email listing me among 21 faculty who had agreed to serve on an important working group to decide UT Austin’s position on academic free speech and the responsibilities of professors in the classroom (!). Immediately I started getting emails from my colleagues, thanking me for my “service” and sharing their thoughts about what this panel needed to say in response to the Trump administration’s Compact on Higher Education. For context: the Compact would involve universities agreeing to do all sorts of things that the Trump administration wants—capping international student enrollment, “institutional neutrality,” freezing tuition, etc. etc.—in exchange for preferential funding. UT Austin was one of nine universities originally invited to join the Compact, along with MIT, Penn, Brown, Dartmouth, and more, and is the only one that hasn’t yet rejected it. It hasn’t accepted it either.
Formally, it was explained to me, UT’s Working Group on Academic Integrity had nothing to do with Trump’s Compact, and no mandate to either accept or reject it. But it quickly became obvious to me that my faculty colleagues would see everything we did exclusively in light of the Compact, and of other efforts by the Trump administration and the State of Texas to impose conservative values on universities. While we couldn’t address current events directly, what we could do was take a strong stand for academic freedom, and more generally, for the role of intellectually independent universities in a free society.
So, led by Provost Inboden, over two meetings and a bunch of emails we hashed out a document. You can now read the Texas Statement on Academic Integrity, and I’d encourage you to do so. The document takes a pretty strong swing for academic freedom:
Academic freedom lies at the core of the academic enterprise. It is foundational to the excellence of the American higher education system, and is non-negotiable. In the words of the U.S. Supreme Court, academic freedom is "a special concern of the First Amendment." The world’s finest universities are in free societies, and free societies honor academic freedom.
The statement also reaffirms UT Austin’s previous commitments to the Chicago Principles of Free Expression, and the 1940 and 1967 academic freedom statements of the American Association of University Professors.
Without revealing too much about my role in the deliberations, I’ll say that I was especially pleased by the inclusion of the word “non-negotiable.” I thought that that word might acquire particular importance, and this was confirmed by the headline in yesterday’s Chronicle of Higher Education: As Trump’s Compact Looms, UT-Austin Affirms ‘Non-Negotiable’ Commitment to Academic Freedom (warning: paywall).
At the same time, the document also talks about the responsibility of a public university to maintain the trust of society, and about the responsibilities of professors in the classroom:
Academic integrity obligates the instructor to protect every student’s academic freedom and right to learn in an environment of open inquiry. This includes the responsibilities:
- to foster classroom cultures of trust in which all students feel free to voice their questions and beliefs, especially when those perspectives might conflict with those of the instructor or other students;
- to fairly present differing views and scholarly evidence on reasonably disputed matters and unsettled issues;
- to equip students to assess competing theories and claims, and to use reason and appropriate evidence to form their own conclusions about course material; and
- to eschew topics and controversies that are not germane to the course.
All stuff that I’ve instinctively followed, in nearly 20 years of classroom teaching, without the need for any statement telling me to. Whatever opinions I might get goaded into expressing on this blog about Trump, feminism, or Israel/Palestine, I’ve always regarded the classroom as a sacred space. (I have hosted a few fierce classroom debates about the interpretation of quantum mechanics, but even there, I try not to tip my own hand!)
I’m sure that there are commenters, on both ends of the political spectrum, who will condemn me for my participation in the faculty working group, and for putting my name on the statement. At this point in this blog’s history, commenters on both ends of the political spectrum would condemn me for saying that freshly baked chocolate chip cookies are delicious. But I like the statement, and find nothing in it that any reasonable person should disagree with. Overall, my participation in this process increased my confidence that UT Austin will be able to navigate this contentious time for the state, country, and world while maintaining its fundamental values. It made me proud to be a professor here.
On keeping a packed suitcase
October 31st, 2025
Update (Nov. 6): I’ve closed the comments, as they crossed the threshold from "sometimes worthwhile" to "purely abusive." As for Mamdani’s victory: as I like to say in such cases (and said, e.g., after George W. Bush’s and Trump’s victories), the silver lining to which I cling is that either I’ll be pleasantly surprised, and things won’t be quite as terrible as I expect, or else I’ll be vindicated.
This Halloween, I didn’t need anything special to frighten me. I walked around all day in a haze of fear and depression, unable to concentrate on my research or anything else. I saw people smiling, dressed up in costumes, and I thought: how?
The president of the Heritage Foundation, the most important right-wing think tank in the United States, has now explicitly aligned himself with Tucker Carlson, even as the latter has become a full-on Holocaust-denying Hitler-loving antisemite, who nods in agreement with the openly neo-Nazi Nick Fuentes. Meanwhile, Vice President J.D. Vance—i.e., plausibly the next President of the United States—pointedly did nothing whatsoever to distance himself from the MAGA movement’s lunatic antisemites, in response to their lunatic antisemitic questions at the Turning Point USA conference. (Vance thus dishonored the memory of Charlie Kirk, who for all my many disagreements with him, was a firmly committed Zionist.) It’s become undeniable that, once Trump himself leaves the stage, this is the future of MAGA, and hence of the Republican Party itself. Exactly as I warned would happen a decade ago, this is what’s crawled out from underneath the rock that Trump gleefully overturned.
While the Republican Party is being swallowed by a movement that holds that Jews like me have no place in America, the Democratic Party is being swallowed by a movement that holds that Jews have no place in Israel. If these two movements ever merged, the obvious “compromise” would be the belief, popular throughout history, that Jews have no place anywhere on earth.
Barring a miracle, New York City—home to the world’s second-largest Jewish community—is about to be led by a man for whom eradicating the Jewish state is his deepest, most fundamental moral imperative, besides of course the proletariat seizing the means of production. And to their eternal shame, something like 29% of New York’s Jews are actually going to vote for this man, believing that their own collaboration with evil will somehow protect them personally—in breathtaking ignorance of the millennia of Jewish history testifying to the opposite.
Despite what you might think, I try really, really hard not to hyperventilate or overreact. I know that, even if I lived in literal Warsaw in 1939, it would still be incumbent on me to assess the situation calmly and figure out the best response.
So for whatever it’s worth: no, I don’t expect that American Jews, even pro-Zionist Jews in New York City, will need to flee their homes just yet. But it does seem to me that they (to say nothing of British and Canadian and French Jews) might, so to speak, want to keep their suitcases packed by the door, as Jews have through the centuries in analogous situations. As Tevye says near the end of Fiddler on the Roof, when the Jews are given three days to evacuate Anatevka: “maybe this is why we always keep our hats on.” Diaspora Jews like me might also want to brush up on Hebrew. We can thank Hashem or the Born Rule that, this time around, at least the State of Israel exists (despite the bloodthirsty wish of half the world that it cease to exist), and we can reflect that these contingencies are precisely why Israel was created.
Let me make something clear: I don’t focus so much on antisemitism only because of parochial concern for the survival of my own kids, although I freely admit to having as much such concern as the next person. Instead, I do so because I hold with David Deutsch that, in Western civilization, antisemitism has for millennia been the inevitable endpoint toward which every bad idea ultimately tends. It’s the universal bad idea. It’s bad-idea-complete. Antisemitism is the purest possible expression of the worldview of the pitchfork-wielding peasant, who blames shadowy elites for his own failures in life, and who dreams in his resentment and rage of reversing the moral and scientific progress of humanity by slaughtering all those responsible for it. Hatred of high-achieving Chinese and Indian immigrants, and of gifted programs and standardized testing, are other expressions of the same worldview.
As far as I know, in 3,000 years, there hasn’t been a single example—not one—of an antisemitic regime of which one could honestly say: “fine, but once you look past what they did to the Jews, they were great for everyone else!” Philosemitism is no guarantee of general goodness (as we see for example with Trump), but antisemitism pretty much does guarantee general awfulness. That’s because antisemitism is not merely a hatred, but an entire false theory of how the world works—not just a but the conspiracy theory—and as such, it necessarily prevents its believers from figuring out true explanations for society’s problems.
I’d better end a post like this on a note of optimism. Yes, every single time I check my phone, I’m assaulted with twenty fresh examples of once-respected people and institutions, all across the political spectrum, who’ve now fallen to the brain virus, and started blaming all the world’s problems on “bloodsucking globalists” or George Soros or Jeffrey Epstein or AIPAC or some other suspicious stand-in du jour. (The deepest cuts come from the new Jew-haters who I myself once knew, or admired, or had some friendly correspondence with.)
But also, every time I venture out into the real world, I meet twenty people of all backgrounds whose brains still seem perfectly healthy, and who respond to events in a normal human way. Even in the dark world behind the screen, I can find dozens of righteous condemnations of Zohran Mamdani and Tucker Carlson and the Heritage Foundation and the others who’ve chosen to play footsie with those seeking a new Final Solution to the Jewish Question. So I reflect that, for all the battering it’s taken in this age of TikTok and idiocracy—even then, our Enlightenment civilization still has a few antibodies that are able to put up a fight.
In their beautiful book Abundance, Ezra Klein and Derek Thompson set out an ambitious agenda by which the Democratic Party could reinvent itself and defeat MAGA, not by indulging conspiracy theories but by creating actual broad prosperity. Their agenda is full of items like: legalizing the construction of more housing where people actually want to live; repealing the laws that let random busybodies block the construction of mass transit; building out renewable energy and nuclear; investing in science and technology … basically, doing all the things that anyone with any ounce of economic literacy knows to be good. The abundance agenda isn’t only righteous and smart: for all I know, it might even turn out to be popular. It’s clearly worth a try.
Last week I was amused to see Kate Willett and Briahna Joy Gray, two of the loudest voices of the conspiratorial far left, denounce the abundance agenda as … wait for it … a cover for Zionism. As far as they’re concerned, the only reason why anyone would talk about affordable housing or high-speed rail is to distract the masses from the evil Zionists murdering Palestinian babies in order to harvest their organs.
The more I thought about this, the more I realized that Willett and Gray actually have a point. Yes, solving America’s problems with reason and hard work and creativity, like the abundance agenda says to do, is the diametric opposite of blaming all the problems on the perfidy of Jews or some other scapegoat. The two approaches really are the logical endpoints of two directly competing visions of reality.
Naturally I have a preference between those visions. So I’ve been on a bit of a spending spree lately, in support of sane, moderate, pro-abundance, anti-MAGA, liberal Enlightenment forces retaking America. I donated 1000ドル to Alex Bores, who’s running for Congress in NYC, and who besides being a moderate Democrat who favors all the usual good things, is also a leader in AI safety legislation. (For more, see this by Eric Neyman of Alignment Research Center, or this from Scott Alexander himself—the AI alignment community has been pretty wowed.) I also donated 1000ドル to Scott Wiener, who’s running for Nancy Pelosi’s seat in California, has a nuanced pro-two-states, anti-Netanyahu position that causes him to get heckled as a genocidal Zionist, and authored the excellent SB1047 AI safety bill, which Gavin Newsom unfortunately vetoed for short-term political reasons. And I donated 1000ドル to Vikki Goodwin, a sane Democrat who’s running to unseat Lieutenant Governor Dan Patrick in my own state of Texas. Any other American office-seeker who resonates with this post, and who’d like a donation, can feel free to contact me as well.
My bag is packed … but for now, only for a brief trip to give the physics colloquium at Harvard, after which I’ll return back home to Austin. Until it becomes impossible, I call on my thousands of thoughtful, empathetic American readers to stay right where you are, and simply do your best to fight the brain-eaten zombies of both left and right. If you are one of the zombies, of course, then my calling you one doesn’t even begin to express my contempt: may you be remembered by history alongside the willing dupes of Hitler, Stalin, and Mao. May the good guys prevail.
Oh, and speaking of zombies, Happy Halloween everyone! Boooooooo!
An Experimental Program for AI-Powered Feedback at STOC: Guest Post from David Woodruff
October 28th, 2025
This year for STOC, we decided to run an experiment to explore the use of Large Language Models in the theoretical computer science community, and we’re inviting the entire community to participate.
We—a team from the STOC PC—are offering authors the chance to get automated pre-submission feedback from an advanced, Gemini-based LLM tool that’s been optimized for checking mathematical rigor. The goal is simple: to provide constructive suggestions and, potentially, help find technical mistakes before the paper goes to the PC. Some important points:
- This is 100% optional and opt-in.
- The reviews generated WILL NOT be passed on to the PC. They are for your eyes only.
- Data Privacy is Our #1 Commitment. We commit that your submitted paper will NOT be logged, stored, or used for training.
- Please do not publicly share these reviews without contacting the organizing team first.
This tool is specifically optimized for checking a paper’s mathematical rigor. It’s a hopefully useful way to check the correctness of your arguments. Note, however, that it sometimes lacks external, area-specific knowledge (like "folklore" results). This means it may flag sections that rely on unstated assumptions, or it might find simple omissions or typos.
Nevertheless, we hope you’ll find this feedback valuable for improving the paper’s overall clarity and completeness.
If you’re submitting to STOC, we encourage you to opt in. You’ll get (we hope) useful feedback, and you’ll be providing invaluable data as we assess this tool for future theory conferences.
The deadline to opt in on the HotCRP submission form is November 1 (5pm EST).
You can read the full “Terms of Participation” (including all privacy and confidentiality details) at the link below.
This experiment is being run by PC members David Woodruff (CMU) and Rajesh Jayaram (Google), as well as Vincent Cohen-Addad (Google) and Jon Schneider (Google).
We’re excited to offer this resource to the community.
Please see the STOC Call for Papers here and specific details on the experiment here.
My talk at Columbia University: “Computational Complexity and Explanations in Physics”
October 16th, 2025
Last week, I gave the Patrick Suppes Lecture in the Columbia University Philosophy Department. Patrick Suppes was a distinguished philosopher at Stanford who (among many other things) pioneered remote gifted education through the EPGY program, and who I was privileged to spend some time with back in 2007, when he was in his eighties.
My talk at Columbia was entitled “Computational Complexity and Explanations in Physics.” Here are the PowerPoint slides, and here’s the abstract:
The fact, or conjecture, of certain computational problems being intractable (that is, needing astronomical amounts of time to solve) clearly affects our ability to learn about physics. But could computational intractability also play a direct role in physical explanations themselves? I’ll consider this question by examining three possibilities:
(1) If quantum computers really take exponential time to simulate using classical computers, does that militate toward the many-worlds interpretation of quantum mechanics, as David Deutsch famously proposed?
(2) Are certain speculative physical ideas (e.g., time travel to the past or nonlinearities in quantum mechanics) disfavored, over and above any other reasons to disfavor them, because they would lead to “absurd computational superpowers”?
(3) Do certain effective descriptions in physics work only because of the computational intractability of violating those descriptions — as for example with Harlow and Hayden’s resolution of the “firewall paradox” in black hole thermodynamics, or perhaps even the Second Law of Thermodynamics itself?
I’m grateful to David Albert and Lydia Goehr of Columbia’s Philosophy Department, who invited me and organized the talk, as well as string theorist Brian Greene, who came and contributed to the discussion afterward. I also spent a day in Columbia’s CS department, gave a talk about my recent results on quantum oracles, and saw many new friends and old there, including my and my wife’s amazing former student Henry Yuen. Thanks to everyone.
This was my first visit to Columbia University for more than a decade, and certainly my first since the upheavals following the October 7 massacre. Of course I was eager to see the situation for myself, having written about it on this blog. Basically, if you’re a visitor like me, you now need both a QR code and an ID to get into the campus, which is undeniably annoying. On the other hand, once you’re in, everything is pleasant and beautiful. Just from wandering around, I’d have no idea that this campus had recently been Ground Zero for the pro-intifada protests, and then for the reactions against those protests (indeed, the use of the protests as a pretext to try to destroy academia entirely) that rocked the entire country, filling my world and my social media feed.
When I asked friends and colleagues about the situation, I heard a range of perspectives: some were clearly exasperated with the security measures; others, while sharing in the annoyance, suggested the measures seem to be needed, since every time the university has tried to relax them, the “intifada” has returned, with non-university agitators once again disrupting research and teaching. Of course we can all pray that the current ceasefire will hold, for many reasons, the least of which is that perhaps then the obsession of the world’s young and virtuous to destroy the world’s only Jewish state will cool down a bit, and they’ll find another target for their rage. That would also help life at Columbia and other universities return to how it was before.
Before anyone asks: no, Columbia’s Peter Woit never showed up to disrupt my talk with rotten vegetables or a bullhorn—indeed, I didn’t see him at all on my trip, nor did I seek him out. Given that Peter chose to use his platform, one of the world’s best-known science blogs, to call me a mentally ill genocidal fascist week after week, it meant an enormous amount to me to see how many friends and supporters I have right in his own backyard.
All in all, I had a wonderful time at Columbia, and based on what I saw, I won’t hesitate to come back, nor will I hesitate to recommend Jewish or Israeli or pro-Zionist students to study there.
Sad and happy day
October 7th, 2025
Today, of course, is the second anniversary of the genocidal Oct. 7 invasion of Israel—the deadliest day for Jews since the Holocaust, and the event that launched the current wars that have been reshaping the Middle East for better and/or worse. Regardless of whether their primary concern is for Israelis, Palestinians, or both, I’d hope all readers of this blog could at least join me in wishing this barbaric invasion had never happened, and in condemning the celebrations of it taking place around the world.
Now for the happy part: today is also the day when the Nobel Prize in Physics is announced. I was delighted to wake up to the news that this year, the prize goes to John Clarke of Berkeley, John Martinis of UC Santa Barbara, and Michel Devoret of UC Santa Barbara (formerly Yale), for their experiments in the 1980s that demonstrated the reality of macroscopic quantum tunneling in superconducting circuits. Among other things, this work laid the foundation for the current effort by Google, IBM, and many others to build quantum computers with superconducting qubits. To clarify, though, today’s prize is not for quantum computing per se, but for the earlier work.
While I don’t know John Clarke, and know Michel Devoret only a little, I’ve been proud to count John Martinis as a good friend for the past decade—indeed, his name has often appeared on this blog. When Google hired John in 2014 to build the first programmable quantum computer capable of demonstrating quantum supremacy, it was clear that we’d need to talk about the theory, so we did. Through many email exchanges, calls, and visits to Google’s Santa Barbara Lab, I came to admire John for his iconoclasm, his bluntness, and his determination to make sampling-based quantum supremacy happen. After Google’s success in 2019, I sometimes wondered whether John might eventually be part of a Nobel Prize in Physics for his experimental work in quantum computing. That may have become less likely today, now that he’s won the Nobel Prize in Physics for his work before quantum computing, but I’m guessing he doesn’t mind! Anyway, huge congratulations to all three of the winners.
The QMA Singularity
September 27th, 2025
Update (Sep. 29): Since this post has now gone semi-viral on X, Hacker News, etc., with people arguing about how trivial or nontrivial was GPT5’s "discovery," it seems worthwhile to say something that was implicit in the post.
Namely, GPT5-Thinking’s suggestion of a function to use “should have” been obvious to us. It would have been obvious to us had we known more, or had we spent more time studying the literature or asking experts.
The point is, anyone engaged in mathematical research knows that an AI that can “merely” fill in the insights that “should’ve been” obvious to you is a really huge freaking deal! It speeds up the actual discovery process, as opposed to the process of writing LaTeX or preparing the bibliography or whatever. This post gave one tiny example of what I’m sure will soon be thousands.
I should also add that, since this post went up, a commenter named Phillip Harris proposed a better function to use than GPT-5’s: det(I-E) rather than Tr[(I-E)⁻1]. While we’re still checking details, not only do we think this works, we think it simplifies our argument and solves one of our open problems. So it seems human supremacy has been restored, at least for now!
A couple days ago, Freek Witteveen of CWI and I posted a paper to the arXiv called “Limits to black-box amplification in QMA.” Let me share the abstract:
We study the limitations of black-box amplification in the quantum complexity class QMA. Amplification is known to boost any inverse-polynomial gap between completeness and soundness to exponentially small error, and a recent result (Jeffery and Witteveen, 2025) shows that completeness can in fact be amplified to be doubly exponentially close to 1. We prove that this is optimal for black-box procedures: we provide a quantum oracle relative to which no QMA verification procedure using polynomial resources can achieve completeness closer to 1 than doubly exponential, or a soundness which is super-exponentially small. This is proven by using techniques from complex approximation theory, to make the oracle separation from (Aaronson, 2008), between QMA and QMA with perfect completeness, quantitative.
You can also check out my PowerPoint slides here.
To explain the context: QMA, or Quantum Merlin Arthur, is the canonical quantum version of NP. It’s the class of all decision problems for which, if the answer is "yes," then Merlin can send Arthur a quantum witness state that causes him to accept with probability at least 2/3 (after a polynomial-time quantum computation), while if the answer is "no," then regardless of what witness Merlin sends, Arthur accepts with probability at most 1/3. Here, as usual in complexity theory, the constants 2/3 and 1/3 are just conventions, which can be replaced (for example) by 1-2⁻ⁿ and 2⁻ⁿ using amplification.
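For the amplification, here’s the standard back-of-the-envelope (which glosses over the genuinely quantum subtlety that Merlin could entangle multiple witness copies, a subtlety handled by Marriott-Watrous-style in-place amplification): have Arthur run his verification on each of k witness copies and take a majority vote. Then by Hoeffding’s inequality,
$$ \Pr[\text{majority vote errs}] \le e^{-2k(1/6)^2} = e^{-k/18} \le 2^{-n} \quad \text{once } k \ge 18n\ln 2. $$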
A longstanding open problem about QMA—not the biggest problem, but arguably the most annoying—has been whether the 2/3 can be replaced by 1, as it can be for classical MA for example. In other words, does QMA = QMA₁, where QMA₁ is the subclass of QMA that admits protocols with "perfect completeness"? In 2008, I used real analysis to show that there’s a quantum oracle relative to which QMA ≠ QMA₁, which means that any proof of QMA = QMA₁ would need to use "quantumly nonrelativizing techniques" (not at all an insuperable barrier, but at least we learned something about why the problem is nontrivial).
Then came a bombshell: in June, Freek Witteveen and longtime friend-of-the-blog Stacey Jeffery released a paper showing that any QMA protocol can be amplified, in a black-box manner, to have completeness error that’s doubly exponentially small, 1/exp(exp(n)). They did this via a method I never would’ve thought of, wherein a probability of acceptance is encoded via the amplitudes of a quantum state that decrease in a geometric series. QMA, it turned out, was an old friend that still had surprises up its sleeve after a quarter-century.
In August, we had Freek speak about this breakthrough by Zoom in our quantum group meeting at UT Austin. Later that day, I asked Freek whether their new protocol was the best you could hope to do with black-box techniques, or whether for example one could amplify the completeness error to be triply exponentially small, 1/exp(exp(exp(n))). About a week later, Freek and I had a full proof written down that, using black-box techniques, doubly-exponentially small completeness error is the best you can do. In other words: we showed that, when one makes my 2008 QMA ≠ QMA1 quantum oracle separation quantitative, one gets a lower bound that precisely matches Freek and Stacey’s protocol.
All this will, I hope, interest and excite aficionados of quantum complexity classes, while others might have very little reason to care.
But here’s a reason why other people might care. This is the first paper I’ve ever put out for which a key technical step in the proof of the main result came from AI—specifically, from GPT5-Thinking. Here was the situation: we had an N×N Hermitian matrix E(θ) (where, say, N=2ⁿ), each of whose entries was a poly(n)-degree trigonometric polynomial in a real parameter θ. We needed to study the largest eigenvalue of E(θ), as θ varied from 0 to 1, to show that this λmax(E(θ)) couldn’t start out close to 0 but then spend a long time "hanging out" ridiculously close to 1, like 1/exp(exp(exp(n))) close for example.
Given a week or two to try out ideas and search the literature, I’m pretty sure that Freek and I could’ve solved this problem ourselves. Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I told GPT5 how I knew it was wrong. It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague. Within a half hour, it had suggested looking at the function
$$ \mathrm{Tr}\left[(I-E(\theta))^{-1}\right] = \sum_{i=1}^N \frac{1}{1-\lambda_i(\theta)}. $$
It pointed out, correctly, that this was a rational function in θ of controllable degree, that happened to encode the relevant information about how close the largest eigenvalue λmax(E(θ)) is to 1. And this … worked, as we could easily check ourselves with no AI assistance. And I mean, maybe GPT5 had seen this or a similar construction somewhere in its training data. But there’s not the slightest doubt that, if a student had given it to me, I would’ve called it clever. Obvious with hindsight, but many such ideas are.
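For anyone who wants to see the identity in action, here’s a quick numerical sanity check: a toy example of my own, not the actual E(θ) from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 8
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
E = (A + A.conj().T) / 2                  # random Hermitian matrix
E = 0.9 * E / np.linalg.norm(E, 2)        # rescale so every |lambda_i| < 1

lhs = np.trace(np.linalg.inv(np.eye(N) - E)).real
rhs = np.sum(1 / (1 - np.linalg.eigvalsh(E)))
print(f"Tr[(I-E)^-1] = {lhs:.6f}   sum_i 1/(1-lambda_i) = {rhs:.6f}")
# As lambda_max(E) -> 1, the term 1/(1-lambda_max) dominates and the trace
# blows up -- so this single rational function of theta "watches" how close
# the top eigenvalue gets to 1.
```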
I had tried similar problems a year ago, with the then-new GPT reasoning models, but I didn’t get results that were nearly as good. Now, in September 2025, I’m here to tell you that AI has finally come for what my experience tells me is the most quintessentially human of all human intellectual activities: namely, proving oracle separations between quantum complexity classes. Right now, it almost certainly can’t write the whole research paper (at least if you want it to be correct and good), but it can help you get unstuck if you otherwise know what you’re doing, which you might call a sweet spot. Who knows how long this state of affairs will last? I guess I should be grateful that I have tenure.
HSBC unleashes yet another “qombie”: a zombie claim of quantum advantage that isn’t
September 25th, 2025
Today, I got email after email asking me to comment on a new paper from HSBC—yes, the bank—together with IBM. The paper claims to use a quantum computer to get a 34% advantage in predictions of financial trading data. (See also blog posts here and here, or numerous popular articles that you can easily find and I won’t link.) What have we got? Let’s read the abstract:
The estimation of fill probabilities for trade orders represents a key ingredient in the optimization of algorithmic trading strategies. It is bound by the complex dynamics of financial markets with inherent uncertainties, and the limitations of models aiming to learn from multivariate financial time series that often exhibit stochastic properties with hidden temporal patterns. In this paper, we focus on algorithmic responses to trade inquiries in the corporate bond market and investigate fill probability estimation errors of common machine learning models when given real production-scale intraday trade event data, transformed by a quantum algorithm running on IBM Heron processors, as well as on noiseless quantum simulators for comparison. We introduce a framework to embed these quantum-generated data transforms as a decoupled offline component that can be selectively queried by models in low-latency institutional trade optimization settings. A trade execution backtesting method is employed to evaluate the fill prediction performance of these models in relation to their input data. We observe a relative gain of up to ∼ 34% in out-of-sample test scores for those models with access to quantum hardware-transformed data over those using the original trading data or transforms by noiseless quantum simulation. These empirical results suggest that the inherent noise in current quantum hardware contributes to this effect and motivates further studies. Our work demonstrates the emerging potential of quantum computing as a complementary explorative tool in quantitative finance and encourages applied industry research towards practical applications in trading.
As they say, there are more red flags here than in a People’s Liberation Army parade. To critique this paper is not quite “shooting fish in a barrel,” because the fish are already dead before we’ve reached the end of the abstract.
They see a quantum advantage for the task in question, but only because of the noise in their quantum hardware? When they simulate the noiseless quantum computation classically, the advantage disappears? WTF? This strikes me as all but an admission that the “advantage” is just a strange artifact of the particular methods that they decided to compare—that it has nothing really to do with quantum mechanics in general, or with quantum computational speedup in particular.
Indeed, the possibility of selection bias rears its head. How many times did someone do some totally unprincipled, stab-in-the-dark comparison of a specific quantum learning method against a specific classical method, and get predictions from the quantum method that were worse than whatever they got classically … so then they didn’t publish a paper about it?
If it seems like I’m being harsh, it’s because to my mind, the entire concept of this sort of study is fatally flawed from the beginning, optimized for generating headlines rather than knowledge. The first task, I would’ve thought, is to show the reality of quantum computational advantage in the system or algorithm under investigation, even just for a useless benchmark problem. Only after one has done that, has one earned the right to look for a practical benefit in algorithmic trading or predicting financial time-series data or whatever, coming from that same advantage. If you skip the first step, then whatever “benefits” you get from your quantum computer are overwhelmingly likely to be cargo cult benefits.
And yet none of it matters. The paper can, more or less, openly admit all this right in the abstract, and yet it will still predictably generate lots of credulous articles in the business and financial news about HSBC using quantum computers to improve bond trading!—which, one assumes, was the point of the exercise from the beginning. Qombies roam the earth: undead narratives of “quantum advantage for important business problems” detached from any serious underlying truth-claim. And even here at one of the top 50 quantum computing blogs on the planet, there’s nothing I can do about it other than scream into the void.
Update (Sep. 26): Someone let me know that Martin Shkreli, the “pharma bro,” will be hosting a conference call for investors to push back on quantum computing hype. He announced on X that he’s offering quantum computing experts 2ドルk each to speak in his call. On the off chance that Shkreli reads this blog: I’d be willing to do it for 50ドルk. And if Shkreli were to complain about my jacking up the price… 😄
Darkness over America
September 22nd, 2025
Update (September 24): A sympathetic correspondent wrote to tip me off that this blog post has caused me to get added to a list, maintained by MAGA activists and circulated by email, of academics and others who ought to "[face] some consequences for maligning the patriotic MAGA movement." Needless to say, not only did this post unequivocally condemn Charlie Kirk’s murder, it even mentioned areas of common ground between me and Kirk, and my beefs with the social-justice left. If someone wants to go to the Texas Legislature to get me fired, literally the only thing they’ll have on me is that I "maligned the patriotic MAGA movement," i.e. expressed political views shared by the majority of Americans.
Still, it’s a strange honor to have had people on both extremes of the ideological spectrum wanting to cancel me for stuff I’ve written on this blog. What is tenure for, if not this?
Another Update: In a dark and polarized age like ours, one thing that gives hope is the prospect of rational agents updating on each others’ knowledge to come to agreement. On that note, please enjoy this recent podcast, in which a 95-year-old Robert Aumann explains Aumann’s agreement theorem in his own words (see here for my old post about it, one of the most popular in the history of this blog).
From 2016 until last week, as the Trump movement dismantled one after another of the obvious bipartisan norms of the United States that I’d taken for granted since my childhood—e.g., the loser conceding an election and attending the winner’s inauguration, America being proudly a nation of immigrants, science being good, vaccines being good, Russia invading its neighbors being bad, corruption (when it occurred) not openly boasted about—I often consoled myself that at least the First Amendment, the motor of our whole system since 1791, was still in effect. At least you could still call Trump a thug and a conman without fear. Yes, Trump constantly railed against hostile journalists and comedians and protesters, threatened them at his rallies, filed frivolous lawsuits against them, but none of it seemed to lead to any serious program to shut them down. Oceans of anti-Trump content remained a click away.
I even wondered whether this was Trump’s central innovation in the annals of authoritarianism: proving that, in the age of streaming and podcasts and social media, you no longer needed to bother with censorship in order to build a regime of lies. You could simply ensure that the truth remained one narrative among others, that it never penetrated the epistemic bubble of your core supporters, who’d continue to be algorithmically fed whatever most flattered their prejudices.
Last week, that all changed. Another pillar of the previous world fell. According to the new norm, if you’re a late-night comedian who says anything Trump doesn’t like, he’ll have the FCC threaten your station’s affiliates’ broadcast licenses, and they’ll cave, and you’ll be off the air, and he’ll gloat about it. We ought to be clear that, even conditioned on everything else, this is a huge further step toward how things work in Erdogan’s Turkey or Orban’s Hungary, and how they were never supposed to work in America.
At risk of stating the obvious:
- I was horrified by the murder of Charlie Kirk. Political murder burns our societal commons and makes the world worse in every way. I’d barely been aware of Kirk before the murder, but it seems clear he was someone with whom I’d have countless disagreements, but also some common ground, for example about Israel. Agree or disagree is beside the point, though. One thing we can all hopefully take from the example of Kirk’s short life, regardless of our beliefs, is his commitment to “Prove Me Wrong” and “Change My Mind”: to showing up on campus (or wherever people are likeliest to disagree with us) and exchanging words rather than bullets.
- I’m horrified that there are fringe figures on social media who’ve celebrated Kirk’s murder or made light of it. I’m fine with such people losing their jobs, as I’d be with those who celebrate any political murder.
- It looks like Kirk’s murderer was a vaguely left-wing lunatic, with emphasis on the “lunatic” part (as often with these assassins, his worldview wasn’t particularly coherent). Jimmy Kimmel was wrong to insinuate that the murderer was a MAGA conservative. But he was “merely” wrong. By no stretch of the imagination did Kimmel justify or celebrate Kirk’s murder.
- If the new rule is that anyone who spreads misinformation gets cancelled by force of government, then certainly Fox News, One America News, Joe Rogan, and MAGA’s other organs of support should all go dark immediately.
- Yes, I’m aware (to put it mildly) that, especially between 2015 and 2020, the left often used its power in media, academia, and nonprofits to try to silence those with whom it disagreed, by publicly shaming them and getting them blacklisted and fired. That was terrible too. I opposed it at the time, and in the comment-171 affair, I even risked my career to stand up to it.
- But censorship backed by the machinery of state is even worse than social-media shaming mobs. As I and many others discovered back then, to our surprised relief, there are severe limits to the practical power of angry leftists on Twitter and Reddit. That was true then, and it’s even truer today. But there are far fewer limits to the power of a government, especially one that’s been reorganized on the principle of obedience to one man’s will. The point here goes far beyond “two wrongs don’t make a right.” As pointed out by that bleeding-heart woke, Texas Senator Ted Cruz, new weapons are being introduced that the other side will also be tempted to use when it retakes power. The First Amendment now has a knife to its throat, as it didn’t even at the height of the 2015-2020 moral panic.
- Yes, five years ago, the federal government pressured Facebook and other social media platforms to take down COVID ‘misinformation,’ some of which turned out not to be misinformation at all. That was also bad, and indeed it dramatically backfired. But let’s come out and say it: censoring medical misinformation because you’re desperately trying to save lives during a global pandemic is a hundred times more forgivable than censoring comedians because they made fun of you. And no one can deny that the latter is the actual issue here, because Trump and his henchmen keep saying the quiet part out loud.
Anyway, I keep hoping that my next post will be about quantum complexity theory or AI alignment or Busy Beaver 6 or whatever. Whenever I feel backed into a corner, however, I will risk my career, and the Internet’s wrath, to blog my nutty, extreme, embarrassing, totally anodyne liberal beliefs that half or more of Americans actually share.
Quantum Information Supremacy
September 12th, 2025
I’m thrilled that our paper entitled Demonstrating an unconditional separation between quantum and classical information resources, based on a collaboration between UT Austin and Quantinuum, is finally up on the arXiv. I’m equally thrilled that my coauthor and former PhD student William Kretschmer — who led the theory for this project, and even wrote much of the code — is now my faculty colleague at UT Austin! My physics colleague Nick Hunter-Jones and my current PhD student Sabee Grewal made important contributions as well. I’d especially like to thank the team at Quantinuum for recognizing a unique opportunity to test and showcase their cutting-edge hardware, and collaborating with us wild-eyed theorists to make it happen. This is something that, crucially, would not have been feasible with the quantum computing hardware of only a couple years ago.
Here’s our abstract, which I think explains what we did clearly enough, although do read the paper for more:
A longstanding goal in quantum information science is to demonstrate quantum computations that cannot be feasibly reproduced on a classical computer. Such demonstrations mark major milestones: they showcase fine control over quantum systems and are prerequisites for useful quantum computation. To date, quantum advantage has been demonstrated, for example, through violations of Bell inequalities and sampling-based quantum supremacy experiments. However, both forms of advantage come with important caveats: Bell tests are not computationally difficult tasks, and the classical hardness of sampling experiments relies on unproven complexity-theoretic assumptions. Here we demonstrate an unconditional quantum advantage in information resources required for a computational task, realized on Quantinuum’s H1-1 trapped-ion quantum computer operating at a median two-qubit partial-entangler fidelity of 99.941(7)%. We construct a task for which the most space-efficient classical algorithm provably requires between 62 and 382 bits of memory, and solve it using only 12 qubits. Our result provides the most direct evidence yet that currently existing quantum processors can generate and manipulate entangled states of sufficient complexity to access the exponentiality of Hilbert space. This form of quantum advantage — which we call quantum information supremacy — represents a new benchmark in quantum computing, one that does not rely on unproven conjectures.
I’m very happy to field questions about this paper in the comments section.
Unrelated Announcement: As some of you might have seen, yesterday’s Wall Street Journal carried a piece by Dan Kagan-Kans on “The Rise of ‘Conspiracy Physics.'” I talked to the author for the piece, and he quoted this blog in the following passage:
This resentment of scientific authority figures is the major attraction of what might be called "conspiracy physics." Most fringe theories are too arcane for listeners to understand, but anyone can grasp the idea that academic physics is just one more corrupt and self-serving establishment. The German physicist Sabine Hossenfelder has attracted 1.72 million YouTube subscribers in part by attacking her colleagues: "Your problem is that you’re lying to the people who pay you," she declared. "Your problem is that you’re cowards without a shred of scientific integrity."
In this corner of the internet, the scientist Scott Aaronson has written, "Anyone perceived as the ‘mainstream establishment’ faces a near-insurmountable burden of proof, while anyone perceived as ‘renegade’ wins by default if they identify any hole whatsoever in mainstream understanding."