
A Better Question Than P Versus NP

Quantum Computing · Complexity Theory · P Versus NP · Harvest Now, Decrypt Later

The Wrong Question

In my last semester of college, I took Theory of Computation, where Dr. Jacques taught the most horrific conjecture I've encountered in computer science: P vs. NP. Consider:

If a solution to a problem can be verified quickly, does that mean it can also be found quickly?

If the answer is yes, then P = NP. This conjecture is so significant that it is one of the Millennium Prize Problems, which means there is a $1,000,000 prize for anyone who can resolve P versus NP. If P = NP, modern cryptography collapses. Entire classes of optimization problems that currently require exponential time become tractable. Mathematical proofs could be found automatically. The whole architecture of computational difficulty falls apart.

I have been studying computer science since I was 14, and the reason I loved it so much was that everything is debuggable. I spent ages 14 to 20 explaining to anyone who would listen that my fondness for computer science was rooted in the lack of mystery — that every problem has an answer, and that answer is findable with enough patience and the right lines of code.

So when I learned about P versus NP, I discovered that my entire intuition about computation might be wrong. Not wrong in a fixable way. Wrong in a way that might be unfixable by design. I spent an entire semester of college absolutely blindsided by it, filling notebooks trying to find a solution that classical computers probably just cannot produce.

Some of the problems I was fixating on already had a different story being written about them, not in theory of computation classrooms, but in physics labs. I didn't find that out in school. I took Theory of Computation in my last semester of college, and after graduating, I transitioned into a full-time role with TD SYNNEX on our IBM team. For those unfamiliar, IBM is a leader in quantum computing, so when I got real exposure to the IBM Quantum platform, that's when it became clear: the problems I'd spent so much mental energy on were open questions in classical complexity theory, and they were also being approached through a completely different computational model that I had never seriously engaged with.

Quantum computing doesn't resolve P versus NP. But it does have a realistic shot within our lifetime and current roadmaps at solving specific problems that sit at the intersection of classically intractable and quantum approachable. Problems I'd been circling for years without knowing there was another angle entirely.

To clarify, I am not discussing whether quantum will prove or disprove P=NP (it won't). What I am going to talk about is which problems quantum can actually solve, and why. To understand that, you have to start with physics.


Quantum Computing Starts With Physics

In the 1980s, physicist Richard Feynman noticed something inconvenient. If you try to simulate molecules, particles, or materials on a classical computer, the required memory grows exponentially with system size. A modest molecule might require more classical bits than exist in the observable universe. The universe itself, meanwhile, runs quantum systems constantly and without apparent effort.

Maybe the way to simulate natural phenomena is to build a computer that obeys quantum mechanics. That idea launched an entire field. Three physical phenomena make it work.

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical.” (Richard Feynman, 1981)


The Three Quantum Effects That Matter

Superposition

A classical bit is either 0 or 1. A qubit can exist in a superposition of both simultaneously:

|ψ⟩ = α|0⟩ + β|1⟩

where α and β are complex probability amplitudes. The qubit isn't secretly 0 or 1 while you're not looking — it is genuinely in both states at once, until measurement forces it to collapse into one. This is not a metaphor. It is experimentally verified, reproducible physics.
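To make the amplitudes concrete, here is a minimal sketch in plain NumPy (deliberately not a quantum SDK): an equal superposition, the measurement probabilities that follow from |α|² and |β|², and repeated simulated measurements that each collapse to a single outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition: alpha = beta = 1/sqrt(2), so |alpha|^2 + |beta|^2 = 1
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
state = np.array([alpha, beta], dtype=complex)

# Measurement probabilities are the squared magnitudes of the amplitudes
probs = np.abs(state) ** 2

# Simulate 10,000 measurements: each one yields a single classical bit
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(probs)            # [0.5 0.5]
print(outcomes.mean())  # close to 0.5
```

Each individual run gives you one bit, never the superposition itself, which is exactly the limitation the rest of this section is about.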

A register of n qubits can represent 2ⁿ states simultaneously, which means the number of states grows exponentially with every qubit you add. Sixteen qubits hold 65,536 states at once. Three hundred qubits hold more states than there are atoms in the observable universe. This is the part that gets oversimplified in every quantum explainer you have ever read: people hear "all states at once" and conclude that a quantum computer just tries every possible answer simultaneously and picks the right one. That would be extraordinary. It would also mean measurement is trivial, and it isn't.

When you measure a qubit, the superposition collapses. You get one result. The universe picks for you, probabilistically, and everything else disappears. So the raw fact of superposition doesn't buy you anything on its own: if you could just look at all the states, you would, and the problem would be solved. The reason quantum computers are interesting is more subtle and more strange than parallelism: it is about what you can do to the state before you measure it. That is where interference comes in, and interference is the actual engine of every quantum algorithm worth knowing about.

Entanglement

Two qubits can become entangled: their states are correlated in a way that has no classical equivalent. Measure one and the other's state is instantly determined, regardless of physical distance. Einstein famously called this "spooky action at a distance" and found it deeply objectionable. The experimental record has not been kind to his objection. Entanglement is real, and it allows quantum computers to encode relationships between variables in ways classical systems cannot replicate.
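The correlation is easy to see numerically. Here is a small NumPy sketch of the Bell state (|00⟩ + |11⟩)/√2: only the outcomes 00 and 11 have nonzero probability, so the two simulated measurements always agree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bell state (|00> + |11>)/sqrt(2), amplitudes ordered |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2  # nonzero only for |00> and |11>

# Sample joint measurements of both qubits
samples = rng.choice(4, size=10_000, p=probs)
first_bits = samples // 2  # measurement result of the first qubit
second_bits = samples % 2  # measurement result of the second qubit

# Perfect correlation: the two results always match
print(np.all(first_bits == second_bits))  # True
```

A classical simulation can of course reproduce the correlation statistics after the fact; what it cannot do is reproduce them without shared randomness prepared in advance, which is where the "no classical equivalent" claim bites.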

Interference

This is the actual engine of quantum algorithms, and it receives less attention than it deserves. Quantum states behave like waves, so their amplitudes can actually reinforce or cancel each other out. A well-designed quantum algorithm amplifies the amplitude of correct answers and suppresses incorrect ones through carefully constructed interference patterns. Quantum computation is less about parallelism and more about wave engineering. The distinction matters.
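The cleanest illustration of cancellation is applying a Hadamard gate twice. The first H puts |0⟩ into an equal superposition; the second H sends it back to |0⟩ with certainty, because the two paths into |1⟩ carry amplitudes +1/2 and −1/2 and cancel exactly.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1, 0], dtype=complex)         # |0>

after_one_H = H @ zero         # (|0> + |1>)/sqrt(2): a 50/50 superposition
after_two_H = H @ after_one_H  # back to |0> exactly

# Paths into |1>: +1/2 and -1/2, destructive interference.
# Paths into |0>: +1/2 and +1/2, constructive interference.
print(np.abs(after_one_H) ** 2)  # [0.5 0.5]
print(np.abs(after_two_H) ** 2)  # [1. 0.]
```

No classical probabilistic process can do this: mixing two 50/50 coins never gives you certainty back. Amplitudes can be negative; probabilities cannot.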


The Complexity Classes That Actually Matter

Computer scientists organize problems into complexity classes based on how resource requirements scale with input size. Four classes are worth knowing here.

P — problems solvable in polynomial time on a classical computer. Sorting, shortest path, multiplication. Efficient by definition.

NP — problems where a proposed solution can be verified quickly, but finding that solution may require exponential time. Factoring large integers, the traveling salesman problem, graph coloring. The class I spent an embarrassing amount of time trying to crack.

NP-complete — the hardest problems in NP. Solve any one in polynomial time and you solve them all. The field's consensus is that P ≠ NP and that this hardness is structural. No clever trick circumvents it. I was resistant to accepting that for longer than I should have been.

BQP — Bounded-Error Quantum Polynomial time. Problems a quantum computer can solve efficiently with a small probability of error. That error bound is less alarming than it sounds — it can be reduced to any desired threshold by running the algorithm repeatedly and taking a majority vote. BQP sits inside a known hierarchy: P ⊆ BPP ⊆ BQP ⊆ PSPACE. The critical point to highlight is that BQP is almost certainly not a superset of NP. Quantum computers do not crack NP-complete problems. The hardness there appears to be a deeper fact about computation than quantum mechanics can bypass. But BQP is still genuinely larger than P, and that is where the real story begins.

Venn diagram displaying the relationships of the above complexity classes. Source: https://www.researchgate.net/figure/NP-Complete-problems-are-outside-the-BQP-class-meaning-that-quantum-computers-can-not_fig1_371318355


Problems Quantum Computers Can Actually Solve

Quantum advantage shows up in specific kinds of structure: periodicity, algebraic symmetry, and simulation requirements. The canonical examples follow.

Factoring Large Numbers

Shor's algorithm, published in 1994, is the reason every government-funded quantum computing program exists. It factors an n-bit integer in polynomial time (O(n³) quantum) against the classical best, the general number field sieve, which runs in roughly exp(O(n^(1/3) (log n)^(2/3))) time, effectively exponential for large inputs. RSA encryption derives its security from precisely that classical difficulty.

The core insight is that factoring reduces to order-finding: given N and a random integer a, find the smallest r such that aʳ ≡ 1 (mod N). Once you have r, the prime factors of N follow from gcd(a^(r/2) ± 1, N) with high probability. Finding r classically requires searching exponentially many values. Shor's algorithm uses the Quantum Fourier Transform, a circuit that exploits interference to find the periodicity of f(x) = aˣ mod N in O(n²) gates, to do it efficiently. The QFT is the engine underneath every quantum speedup involving periodicity or algebraic structure. If large fault-tolerant quantum computers exist, most current public-key cryptography becomes vulnerable.
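The classical half of that reduction fits in a few lines. This sketch brute-forces the order r for the textbook case N = 15, a = 7 (the brute-force loop is exactly the exponentially expensive step the QFT replaces) and then recovers the factors from the gcd step.

```python
from math import gcd

def order(a, N):
    """Smallest r with a**r = 1 (mod N), found by brute force.
    This loop is the exponential step Shor's algorithm replaces
    with the Quantum Fourier Transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = order(a, N)                  # r = 4, since 7^4 = 2401 = 1 (mod 15)
assert r % 2 == 0                # Shor retries with a new 'a' if r is odd
f1 = gcd(pow(a, r // 2) - 1, N)  # gcd(48, 15) = 3
f2 = gcd(pow(a, r // 2) + 1, N)  # gcd(50, 15) = 5
print(r, f1, f2)                 # 4 3 5
```

Everything here except the `order` loop runs in polynomial time classically, which is why the order-finding step is the only part that needs a quantum computer.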

Harvest Now, Decrypt Later

That vulnerability is not purely a future concern. There is a strategy nation-state actors are believed to be running right now called harvest now, decrypt later. The approach is straightforward: intercept encrypted data today (e.g. classified communications, medical records, financial transactions, diplomatic cables) and store it. The data is unreadable at time of collection. The bet is that a cryptographically relevant quantum computer will exist within a decade or two, at which point the archive becomes readable retroactively. The breach precedes its own discovery by years.

Jeff Crume, IBM Distinguished Engineer and a serious voice in enterprise cybersecurity, has been making this argument consistently for years. His position is that data being exfiltrated today may still carry sensitivity in ten or twenty years, and that the cryptographic migrations required to address this take years to execute. You cannot wait for a cryptographically relevant quantum computer to exist before starting to move. Watch his video on this topic here.

This is why NIST finalized post-quantum cryptography standards in August 2024. Not because quantum computers can break RSA today — they cannot, not at any scale that matters. Because the window in which migration needs to happen is already open, and the data being collected right now is the data at risk.

Searching Unstructured Data

Grover's algorithm, 1996, provides a quadratic speedup for unstructured search: find a marked element among N items in O(√N) quantum queries versus O(N) classically. It works by iteratively applying an oracle that flips the phase of the correct answer, followed by a diffusion operator that amplifies the marked state. Each iteration rotates the quantum state vector slightly toward the target; after O(√N) rotations, you arrive with near-certainty. The geometry of the rotation also proves the bound is optimal, so no quantum algorithm can search unstructured space faster.
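The whole iteration can be sketched directly on the statevector, no gate model required. This NumPy toy (N and the marked index are arbitrary choices for illustration) applies the phase flip and the reflection-about-the-mean for ⌊(π/4)√N⌋ rounds and watches the marked amplitude grow.

```python
import numpy as np

N = 1024     # size of the unstructured search space
marked = 700 # index of the answer (arbitrary for this demo)

# Start in the uniform superposition over all N states
state = np.full(N, 1 / np.sqrt(N))

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # 25 for N = 1024
for _ in range(iterations):
    state[marked] *= -1               # oracle: phase-flip the marked state
    state = 2 * state.mean() - state  # diffusion: reflect about the mean

print(iterations, state[marked] ** 2)  # probability of the marked state ~ 1
```

Twenty-five iterations instead of roughly a thousand queries: that is the √N rotation geometry in action, and it is also why running past the optimal count starts rotating you away from the answer again.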

A quadratic speedup does not break NP-hardness. But it carries real practical consequences: it effectively halves the security strength of symmetric encryption, which is why AES-128 needs to become AES-256 in a post-quantum context.
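The arithmetic behind "halves the security strength" is just a square root of the keyspace:

```python
# Brute-forcing a 128-bit key classically takes ~2^128 queries.
# Grover cuts that to ~sqrt(2^128) = 2^64, so AES-128 offers only
# ~64 bits of effective security against a quantum attacker.
classical_queries = 2 ** 128
grover_queries = 2 ** 64
assert grover_queries * grover_queries == classical_queries

# AES-256 restores the margin: sqrt(2^256) = 2^128 Grover queries,
# which remains far beyond any conceivable attacker.
print(grover_queries)  # 18446744073709551616
```

Doubling the key length squares the search space, which exactly cancels Grover's square-root advantage.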

Simulating Quantum Systems

This was Feynman's original motivation, and it may ultimately be the first genuinely transformative industrial application. Classical computers require exponentially growing resources to simulate quantum systems; quantum computers do it natively. The applications span drug discovery, materials science, battery chemistry, and quantum chemistry. Calculations that are classically intractable for large molecules are natural problems for quantum hardware.

Quantum Phase Estimation is the core algorithm here. I do not have a deep enough understanding to describe it effectively, so you can read more here. What I can say is that QPE is the primitive underlying quantum chemistry, quantum linear systems, and large portions of quantum machine learning.

That milestone is getting closer: in March 2026, IBM published a reference architecture for quantum-centric supercomputing showing that QPE-based workflows can now be embedded directly into existing HPC infrastructure, with researchers from IBM, RIKEN, and the University of Chicago demonstrating quantum simulation converging to ground-state energies that leading classical methods could not produce, though on synthetic Hamiltonians rather than real-world molecular systems.

A Unifying Structure

Most quantum speedups can be understood as instances of the Hidden Subgroup Problem: given a group and a function constant on cosets of some hidden subgroup, find that subgroup. The Quantum Fourier Transform solves abelian instances efficiently. Factoring, discrete logarithm, and many other BQP algorithms are all instances of this structure. When a quantum speedup exists, there is usually periodicity or algebraic symmetry underneath it.
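The factoring case above is a concrete instance: f(x) = aˣ mod N is constant on the cosets of the hidden subgroup rℤ inside ℤ, so f(x) = f(x + r) for every x. A quick check with the same numbers used earlier:

```python
# f(x) = a^x mod N is constant on cosets of the hidden subgroup rZ in Z:
# f(x) = f(x + r) for all x, where r is the order from Shor's algorithm.
N, a, r = 15, 7, 4

def f(x):
    return pow(a, x, N)

assert all(f(x) == f(x + r) for x in range(50))
print([f(x) for x in range(8)])  # [1, 7, 4, 13, 1, 7, 4, 13] -- period 4
```

Finding that hidden period is exactly what the QFT does efficiently, which is why so many BQP algorithms collapse into the same template.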


Where the Hardware Actually Is

The numbers you see in quantum computing headlines are almost always the wrong number.

When IBM announces a 1,000-qubit processor, or Google announces a 70-qubit chip, they are describing physical qubits, which are the actual hardware units. Superconducting circuits, trapped ions, photons. These are noisy by nature. They decohere. They make errors. Two-qubit gate error rates on today's best machines run between 0.1 and 1 percent, which sounds small until you consider that a useful computation might require millions of gates.

The number that actually matters is logical qubits, which are error-corrected qubits built from many physical qubits working together to catch and fix each other's mistakes. The overhead is severe: a single reliable logical qubit requires somewhere between hundreds and thousands of physical qubits depending on the target error rate.

Which means IBM's 1,000-qubit Condor processor is, in error-corrected terms, approximately a one-logical-qubit machine for anything serious.

Every headline number needs that conversion applied before it means anything.

This is also why Google's 2019 quantum supremacy claim deserves a closer look. Sycamore, a 54-qubit processor (53 of them working), completed a specific sampling task in roughly 200 seconds, and Google estimated the same calculation would take a classical supercomputer 10,000 years. The headline wrote itself.

IBM ran the numbers differently. Classical algorithms, properly optimized, could replicate the result in approximately 2.5 days. Both things are true: the demonstration was a genuine engineering achievement, and the 10,000-year figure was not a meaningful measure of anything.

What that exchange exposed is that supremacy was never the right threshold to begin with. The more honest standard is quantum utility: the point where a quantum computer solves a problem that actually matters, faster and more cost-effectively than any classical alternative. Not a contrived sampling task. A real problem, better solved.

That bar has not yet been cleared. The field is building toward it, not past it.


What It Looks Like When We Get There

Quantum computers will not displace classical computers. No quantum laptops. The realistic near-term picture is quantum processors functioning as specialized co-processors inside hybrid systems, invoked for the specific sub-problems where quantum algorithms provide genuine advantage, while everything else continues to run on conventional hardware. Access will come through cloud APIs. The transition will look less like a revolution and more like a new category of accelerator becoming available to the problems that can use it.

The downstream effects, once meaningful quantum advantage exists, will be significant in specific domains. Cryptography is the most pressing, since RSA and elliptic curve cryptography become vulnerable to Shor's algorithm, post-quantum standards are finalized, and migration is already underway in critical infrastructure. Drug discovery and materials science stand to be substantially affected by quantum simulation of molecular interactions. Optimization problems in logistics and finance may benefit from quantum annealing and variational algorithms, though practical advantage at scale remains undemonstrated. Quantum machine learning carries theoretical speedups for linear algebra and sampling problems, but dequantization results have shown that several of these can be replicated classically with sufficiently clever data structures, so the picture there is genuinely unsettled.

The transition will require more than new hardware. Cryptographic infrastructure needs systematic migration. Scientists will need to learn to frame problems as quantum circuits. The entire software stack, from qubit control systems to high-level programming frameworks, is still being built.


What I Know Now That I Didn't Then

If I could go back to the version of myself filling notebooks trying to crack P = NP, I would show her a complexity diagram and point to BQP: the class of problems that may actually be solved in my lifetime.

P = NP remains one of the deepest open questions in mathematics. Quantum computing does not resolve it. The hardness of NP-complete problems appears to be a more fundamental fact about computation than quantum mechanics can circumvent.

But BQP is genuinely larger than P. Integer factorization lives there. Grover's algorithm remains one of the most elegant results I have encountered: pure interference applied to search, provably optimal, requiring almost no assumptions about problem structure.

The thing that took me too long to learn is that the right question was never whether quantum solves the hardest classical problems. The right question is what structure quantum computers exploit that classical computers cannot. The answer is interference, periodicity, and entanglement. The class of problems with that structure is BQP.

Everything else follows from there.