Computer Science / Review Essay

Vol. 7, No. 3 / October 2022

When Existence is Inefficient

Allyn Jackson

A mathematician writes down a proof of a conjecture. A scientist formulates a theory to fit data. An engineer designs a widget adhering to specifications. Discovering that proof, that theory, that design is often hard. But once they are found, we have good ways to verify they work.

Discovery seems harder than verification. But is it really? That question underlies one of the biggest open problems in computer science, that of P versus NP.

NP is the class of problems for which we have an efficient method to check a solution is correct. Finding the needle in the haystack is inefficient if we must exhaustively search every stalk. But it’s easy to recognize the needle once we find it.

P is the class of problems for which we have an efficient method that produces a solution or shows no solution exists. A magnet easily extracts the needle from the haystack and can also demonstrate that no needle lies hidden inside.

What P versus NP asks is whether there are problems that are intractable—problems for which there is no way to collapse an exhaustive search down to something much shorter. Are these problems truly hard? Or have we just not been clever enough to find efficient algorithms to solve them?

Half a century of effort has not yielded a definitive answer. But it has inspired a tremendous blossoming of theoretical computer science. The accumulated knowledge has led to the strong conviction that P and NP are truly different.

One piece of evidence for this conviction comes from a subclass of NP called TFNP (total function NP). The characteristic feature of a TFNP problem is that its very structure guarantees a solution exists.

A basic example of a TFNP problem is factoring integers. There is an efficient way to check whether an integer n is prime. If it is, then the solution is simply n. If not, the solution is the prime factorization of n, and there is an efficient way to check that: Just multiply it out. Factoring is therefore in NP. The problem always has a solution, because every integer has a prime factorization. Factoring is therefore also in TFNP. But it is not in P—not yet anyway—because no efficient way is known to factor integers.
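
To make the verification step concrete, here is a minimal Python sketch. The trial-division primality check is an illustrative stand-in chosen for brevity; it is not itself polynomial time in the number of digits, though genuinely polynomial-time tests exist.

```python
# Checking a claimed factorization of n: confirm each factor is prime,
# then multiply them out and compare with n.

def is_prime(p: int) -> bool:
    """Trial-division primality check (an illustrative stand-in)."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verify_factorization(n: int, factors: list[int]) -> bool:
    """True if `factors` are primes whose product is exactly n."""
    product = 1
    for f in factors:
        if not is_prime(f):
            return False
        product *= f
    return product == n

print(verify_factorization(561, [3, 11, 17]))   # True: 3 * 11 * 17 = 561
print(verify_factorization(561, [3, 187]))      # False: 187 = 11 * 17 is not prime
```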

For some TFNP problems, the guarantee of a solution comes in the form of a mathematical theorem that is based on a simple, common-sense combinatorial principle. These principles, whose use is ubiquitous in mathematics, seem to be a source of inherent hardness in those problems.

Polynomial versus Exponential

The set of possible solutions to an NP problem might be huge, but it is finite. A brute-force search will produce a correct solution or show no solution exists. But if the search takes longer than the age of the universe, the solution will remain out of reach.

The need to overcome brute-force search was understood by Kurt Gödel, who wrote about this in a 1956 letter to John von Neumann. At the time, von Neumann was fatally ill and it is not known whether he ever responded.1 Hints that John Nash also pondered the issue appear in a handwritten letter he sent to the National Security Agency in 1955.2

About ten years later, Jack Edmonds wrote a paper in which he presented his now-classic algorithm for a problem in graph theory known as the maximum matching problem. Because searching all possibilities is an obvious algorithm that would succeed, it was not apparent at the time why his algorithm was an improvement. “Edmonds felt the need to explain why his was a result at all,” Michael Sipser wrote.3

Edmonds explained that his algorithm runs in polynomial time. This means that if n is the size of the problem—in Edmonds’s algorithm, n could be the number of edges or the number of vertices in a graph—then the running time of the algorithm is a polynomial function of n. By contrast, the running time for a brute-force search is an exponential function of n. The difference is dramatic: for n = 100, an algorithm taking n³ steps finishes within a million steps, while one taking 2ⁿ steps needs more than 10³⁰.

Edmonds, as well as other early researchers like Alan Cobham, Leonid Levin, and Michael Rabin, understood that it’s not only the faster running time that gives polynomial-time algorithms their power. As Avi Wigderson put it in a recent interview:

If you have a problem where the only known algorithm is a brute-force solution exponential in n, and you manage to find a polynomial-time algorithm—even if the degree is n¹⁰⁰⁰—then you must have had a fundamental idea in understanding that problem. You have made a leap.4

P, which stands for “polynomial time,” consists of problems for which we have a polynomial-time algorithm that produces a solution or shows none exists. NP, for “nondeterministic polynomial time,” consists of problems for which a solution can be checked in polynomial time.

The P versus NP question then asks, does the ability to check a solution in polynomial time imply you can produce one in polynomial time? The same question applies to problems in TFNP. But because TFNP problems always have a solution, another question imposes itself. How hard is it to find a solution that’s guaranteed to be there? Put differently, does knowing that a solution exists imply you can produce one in polynomial time?

Single-handedly Capturing the Difficulty of NP

When I saw the computer, when I saw that huge room with blinking lights, when I realized I could learn this new language, it really impressed me. It’s a language where you have to explain everything. Imagine you are telling a story [to a person] but this person has never lived. You say, “Flowers.” “What is a flower?”5

That’s Christos Papadimitriou describing his first encounter with a computer around 1970, when he was in his early twenties and was a student at the National Technical University of Athens. At the time, researchers were hunting for polynomial-time algorithms for every problem they encountered. Despite their efforts, certain problems stubbornly resisted polynomial-time solution.

One such problem, hailing from logic, is Boolean satisfiability, or SAT. A SAT formula consists of some number of variables connected by the logical connectives “and,” “or,” and “not.” Each variable can take the value true or false. A satisfying assignment is a choice of values for the variables that results in a true statement. Some formulas have no satisfying assignment; x and (not x) is an example.

SAT is in NP because checking whether an assignment satisfies a formula, even one with 1,000 variables, requires only polynomial time. Finding a satisfying assignment for a 1,000-variable formula looks far more daunting. After all, there are 2¹⁰⁰⁰ possible assignments. No polynomial-time algorithm for SAT has ever been found.
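
A small sketch makes the asymmetry visible. The clause format is an assumption made for illustration: a formula is a list of clauses, each clause a list of signed integers, where the literal k means variable k and -k means its negation. Checking one assignment takes a single pass; the naive search may try all 2ⁿ of them.

```python
from itertools import product

def satisfies(clauses, assignment):
    """Polynomial-time check: every clause has at least one true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, num_vars):
    """Exponential-time search over all 2^n truth assignments."""
    for values in product([False, True], repeat=num_vars):
        assignment = {i + 1: v for i, v in enumerate(values)}
        if satisfies(clauses, assignment):
            return assignment
    return None  # the formula is unsatisfiable

# (x1 or x2) and (not x1 or x2): satisfiable, e.g., with x2 = True.
print(brute_force_sat([[1, 2], [-1, 2]], num_vars=2))
# x1 and (not x1): unsatisfiable.
print(brute_force_sat([[1], [-1]], num_vars=1))  # None
```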

In the early 1970s, Stephen Cook and Leonid Levin independently discovered something curious about SAT. If one had a polynomial-time algorithm for SAT, one could efficiently translate it into a polynomial-time algorithm for any other NP problem. Collapse SAT down to polynomial time, and all NP problems collapse with it.

SAT is termed NP-complete. Because all other NP problems can be reduced to it, SAT sits at the pinnacle of hardness of NP. As Scott Aaronson put it, NP-complete problems are those “that single-handedly capture the difficulty of every other NP problem.”6

Is there something special about SAT that gives it this power? Not really. A year after Cook’s paper appeared, Richard Karp published a list of twenty-one NP-complete problems. No artificial concoctions dreamed up to fit NP-completeness, these problems arose naturally across several mathematical areas.

Papadimitriou finished his PhD in theoretical computer science in 1976. By then, researchers in the field had found many more NP-complete problems. Today, thousands of NP-complete problems have been identified in biology, chemistry, economics, neuroscience, electrical engineering, and other fields.

Not one of them is known to have a polynomial-time algorithm. This is what supports computer scientists’ belief that NP-complete problems possess an inherent hardness.

And TFNP? Some TFNP problems also seem to possess an inherent hardness, but they are believed not to be NP-complete. The hardness of NP-complete problems arises from the possibility that a solution might not exist. TFNP problems by contrast are guaranteed to have a solution. Their hardness seems to have a different character.

A Marvelous Theorem

In 1911, the Dutch mathematician L. E. J. Brouwer published a paper that would leave a deep imprint on his field.7 His biographer and former student, Dirk van Dalen, described the paper as “a short but exhaustive course in the future of topology.”8 The paper’s apotheosis is the proof of the Brouwer fixed-point theorem: Every continuous mapping from a compact convex set to itself must have a fixed point.

One way to think about this is to imagine stirring a cup of coffee—stirring gently, so no droplets fly out. Brouwer’s theorem says that, when the coffee comes to rest, at least one point in the coffee will be exactly where it was before the stirring started. The theorem guarantees that fixed points exist. It says nothing about how to find them.

In the years following publication of his 1911 paper, Brouwer sometimes surprised listeners by casting doubt on his eponymous theorem. Van Dalen asked the obvious question: “How could one prove such a marvelous theorem and then renounce it?”9

The theorem was not wrong—at least not in the eyes of the vast majority of mathematicians, who quickly began using it to great effect. Brouwer was an intuitionist and therefore rejected certain tenets that nearly all mathematicians accept without question. If an object exists, he argued, we must find the object; and if we cannot find it, why believe it exists?

Like most mathematicians, computer scientists accept the validity of Brouwer’s original proof. But they also want to find the fixed points. The computational problem of finding them is in NP, because it is easy to check a solution: Just verify that it remains fixed under the mapping in question. And the problem is in TFNP because Brouwer’s theorem guarantees a solution exists.
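
In computational settings, one asks for an approximate fixed point: a point that the mapping moves by less than some tolerance. The sketch below uses a made-up continuous map of the unit square and an arbitrary tolerance; it illustrates that verifying a candidate is a single evaluation of the map, while the grid scan is the kind of exhaustive search that, in the worst case, no known algorithm fundamentally beats.

```python
import math

def f(x, y):
    """An arbitrary continuous self-map of the unit square (illustrative)."""
    return (0.5 + 0.4 * math.sin(y), 0.5 + 0.4 * math.cos(x))

def is_approx_fixed(x, y, eps=0.01):
    """Verification is cheap: one evaluation of the map."""
    fx, fy = f(x, y)
    return abs(fx - x) <= eps and abs(fy - y) <= eps

def grid_search(eps=0.01, steps=200):
    """Brute force: scan a grid of (steps+1)^2 points until the check passes."""
    for i in range(steps + 1):
        for j in range(steps + 1):
            x, y = i / steps, j / steps
            if is_approx_fixed(x, y, eps):
                return (x, y)
    return None  # with a grid fine enough for eps, this is never reached

print(grid_search())  # prints a grid point close to the fixed point
```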

Seventy years after Brouwer’s original proof, Papadimitriou had established himself as a leader in the new field of computational complexity theory, the study of the hardness of problems. When he encountered the problem of finding Brouwer fixed points, he wondered how hard it could be.

It seemed to be very hard.

A Core of Inefficiency

The fundamental theorem of algebra states that every nonconstant polynomial in one variable has a root in the complex numbers. In 1891, Karl Weierstrass proved the existence of the roots by actually finding them. Following his proof step by step, one can build a polynomial-time algorithm to calculate the roots.

What about a non-constructive existence theorem, like Brouwer’s fixed-point theorem?10 Step after step of the proof can be transformed into a polynomial-time algorithm, even mathematically difficult steps. Then comes a different kind of step, one that is mathematically easy because it rests on a natural, common-sense combinatorial argument. And this step seems to require exponential time. For Brouwer’s fixed points, the combinatorial step boils down to what Papadimitriou named the polynomial parity argument for directed graphs.

Imagine a house with finitely many rooms. Every room has exactly two doors. All the doors are painted black on one side and white on the other. When both doors are closed, each room has one black door and one white door. The argument concludes that if there is one door to the outside of the house, there must be another.

Here’s why. Initially all doors in the house are closed. Enter the house through an outside door and note the color of the other door in the room. If that door is black, move through the house observing the rule that you can use only black doors to enter rooms. Because each room has exactly one black door, there is only one way to enter any room, so following this rule means you never visit any room twice. Because the number of rooms is finite, sooner or later you will find a second door to the outside.

That second door exists. But you might need to visit every single room to find it—in other words, a brute-force search. This is the inefficient core of the Brouwer fixed-point theorem.
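
Here is a hedged sketch of that walk in Python. The door coloring is abstracted away: the walk simply leaves each room by the door it did not enter through, which is what the coloring rule enforces. The dictionary layout is invented for illustration.

```python
# Each room has exactly two doors; a door leads to another room or OUTSIDE.
OUTSIDE = None

doors = {
    "R1": (OUTSIDE, "R2"),
    "R2": ("R1", "R3"),
    "R3": ("R2", OUTSIDE),   # holds the guaranteed second outside door
    "R4": ("R5", "R5"),      # a cycle of rooms, never reached from outside
    "R5": ("R4", "R4"),
}

def find_second_exit(entry_room):
    """Walk room to room, always leaving by the other door, until outside."""
    previous, current = OUTSIDE, entry_room
    rooms_visited = 1
    while True:
        a, b = doors[current]
        nxt = b if a == previous else a   # the door we did not enter through
        if nxt is OUTSIDE:
            return current, rooms_visited
        previous, current = current, nxt
        rooms_visited += 1

print(find_second_exit("R1"))  # ('R3', 3): the walk passed through R1, R2, R3
```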

A New Paradigm for Complexity Classes

In 1994, Papadimitriou published a wildly inventive paper in which he envisioned a new paradigm for grouping TFNP problems into complexity classes.11 The idea is to structure the classes around the combinatorial principles on which the inefficient proofs of existence depend.

One of these classes he called PPAD, for polynomial parity argument for directed graphs, the argument captured by the house-and-doors algorithm. PPAD contains TFNP problems, like that of finding Brouwer fixed points, for which the proof guaranteeing a solution boils down to the PPAD combinatorial principle.

Papadimitriou identified other such principles and defined the related complexity classes: PPA (polynomial parity argument, which applies to undirected graphs), PPP (polynomial pigeonhole principle), and PLS (polynomial local search).

The most familiar of these is the pigeonhole principle. It says that if n letters are distributed into n – 1 mailboxes, at least one mailbox must contain two letters. The other principles share this simple, common-sense nature. In the existence proofs for the associated TFNP problems, these principles are what come into play in the inefficient steps. And exactly those steps seem to require exponential time.
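
As a small illustration of the pigeonhole principle as a search problem: when the letters-to-mailboxes mapping is written out explicitly, as in this sketch, a single linear scan finds a collision. The hardness captured by PPP arises when the mapping is presented succinctly, say as a circuit over exponentially many pigeons, so that scanning everything becomes the exponential brute force.

```python
def find_collision(mailbox_of: list[int]) -> tuple[int, int]:
    """Return two letters (indices) assigned to the same mailbox."""
    seen = {}  # mailbox -> first letter placed in it
    for letter, mailbox in enumerate(mailbox_of):
        if mailbox in seen:
            return seen[mailbox], letter
        seen[mailbox] = letter
    raise ValueError("needs more letters than mailboxes")

# Five letters dropped into four mailboxes (numbered 0-3): a collision must exist.
print(find_collision([2, 0, 3, 1, 3]))   # (2, 4): letters 2 and 4 share mailbox 3
```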

Optimization Problems

The starting point for Papadimitriou’s work was PLS, for polynomial local search, a class he had defined earlier with David Johnson and Mihalis Yannakakis.12 PLS contains problems for which locally optimal solutions are sought and includes many important applications.

The combinatorial principle underlying PLS is that every finite directed acyclic graph has a sink. An acyclic graph is one with no cycles, that is, no closed loops. Suppose you start at one vertex of an acyclic graph and begin walking through the graph, always following the direction of the edges. You will never visit the same vertex twice because the graph has no cycles. The graph is finite, so your journey must end somewhere. It must end at a sink—a vertex having only incoming edges. Enter inefficiency: To find the sink, you might need to visit every vertex.
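
A minimal sketch of the sink-finding walk, on a made-up adjacency list: each step is cheap, but nothing prevents the walk from passing through nearly every vertex before it ends.

```python
# A made-up directed acyclic graph as an adjacency list.
dag = {
    "a": ["b"],
    "b": ["c", "d"],
    "c": ["e"],
    "d": ["e"],
    "e": [],          # a sink: no outgoing edges
}

def find_sink(graph, start):
    """Follow outgoing edges until none remain; acyclicity ensures an end."""
    v = start
    while graph[v]:        # while v has at least one outgoing edge
        v = graph[v][0]    # step along any outgoing edge
    return v

print(find_sink(dag, "a"))   # 'e'
```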

PLS problems arise in many areas of science and engineering, for example in physics, when one seeks local energy minima, and in evolutionary biology, where one looks for local maxima in fitness landscapes. In a 2014 paper, Papadimitriou commented:

Whereas physicists and evolutionary biologists often speak about a process “getting stuck” at a local optimum, from the algorithmic point of view it turns out that, by all available evidence, getting stuck at a local optimum may be an exponentially difficult task.13

For PPA, the underlying combinatorial principle is a parity argument.14 Any finite graph has an even number of odd-degree nodes, where the degree of a node is the number of edges meeting the node. Suppose the nodes are people at a party and the edges connect the partygoers who have shaken hands. If Jim has shaken an odd number of hands, there must be another partygoer who also has shaken an odd number of hands. Jim can find that other partygoer, but he might have to ask every single person at the party. This is where the inefficiency enters.
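
A sketch of Jim’s search, on invented party data: computing everyone’s handshake count and scanning for another odd one is exactly the ask-every-person strategy. As with PPP, the interesting hardness appears only when the graph is exponentially large and given implicitly.

```python
from collections import Counter

# Invented party data: each pair records one handshake.
handshakes = [("Jim", "Ann"), ("Ann", "Bob"), ("Bob", "Jim"), ("Jim", "Eve")]

def another_odd_partygoer(edges, person):
    """Find someone other than `person` with an odd handshake count."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    assert degree[person] % 2 == 1, "person must have odd degree"
    for other, d in degree.items():   # the scan: ask everyone at the party
        if other != person and d % 2 == 1:
            return other              # parity guarantees this is reached

print(another_odd_partygoer(handshakes, "Jim"))   # 'Eve'
```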

Papadimitriou wondered whether there are other principles that underlie inefficient proofs of existence, beyond the ones he identified in his paper, and whether those principles could be used to create new classes of TFNP problems. In the quarter-century since his paper appeared, no other principles have been identified.

Among the results in Papadimitriou’s paper is a proof that the problem of finding Brouwer fixed points is PPAD-complete. Collapse the problem of finding fixed points down to polynomial time, and all of PPAD collapses with it. The problem stands at the pinnacle of hardness of PPAD.

Seeking Nash Equilibria

In game theory, a Nash equilibrium for a game is a set of strategies, one for each player, with the property that no player can improve their position by unilaterally changing strategy. John Nash was a mathematician, not an economist. It was his topological intuition that led to the insight that Brouwer’s fixed-point theorem yields a simple proof that all games have equilibria.

Proved in 1951, his result has become a cornerstone of modern economic theory. Like Brouwer’s fixed-point theorem, Nash’s theorem is non-constructive. It says only that equilibria exist. It does not say where or how to find them.

Following Nash, economists began to use Brouwer’s theorem to prove the existence of economic equilibria. The work of Kenneth Arrow and Gérard Debreu provides the most famous example.15 Arrow and Debreu established that, under fairly general conditions, market equilibria always exist. Economists naturally wanted to calculate those equilibria. Ingenious algorithms were devised that perform well on typical cases, but they all require exponential time for the worst cases. A polynomial-time algorithm has never been found.

Nash’s theorem depends on Brouwer’s, and the problem of finding Brouwer fixed points is in PPAD. Therefore the problem of finding Nash equilibria is in PPAD. Papadimitriou wondered: Could Nash, like Brouwer, also be PPAD-complete? That question stood unresolved for another two decades.

TFNP Problems are Probably Not NP-Complete

Could the problem of finding Nash equilibria be even harder than PPAD-complete? Could it be NP-complete? If it were—in fact if any TFNP problem were shown to be NP-complete—the consequences would be dramatic. The reason goes back to the fact that NP-complete problems derive their hardness from the possibility that a solution might not exist.

An NP problem has an efficient method for verifying a solution. However, it might lack an efficient method for showing no solution exists. Consider the problem of Boolean satisfiability. One can efficiently verify a satisfying assignment for a SAT formula. But verifying that a SAT formula has no satisfying assignment? That looks far more daunting, and indeed no efficient method is known.

The class co-NP consists of those problems for which one can efficiently verify non-existence of a solution. An example of a co-NP problem is deciding whether a number is prime: Verifying a number is not prime can be done efficiently by multiplying out its factors. The second biggest surprise in the theory of computing, after a proof that P equals NP, would be a proof that NP equals co-NP.

What if the problem of finding Nash equilibria were NP-complete? This would mean solutions to Nash would efficiently supply solutions to SAT: A SAT formula would be paired with a game such that the equilibria for that game could be efficiently translated into a satisfying assignment for the formula.

A SAT formula with no satisfying assignment could then be paired with a game whose equilibria efficiently verify that the formula is unsatisfiable. Then SAT would be in co-NP, and NP equals co-NP would immediately follow.

The belief that no TFNP problem can be NP-complete is nearly as strong as the belief that P differs from NP.

A Beautiful Symmetry

The rise of the internet introduced a new viewpoint in computer science. Computer scientists now had a computational artifact to study. Like the universe, or the brain, or a financial market, the internet has a structure that emerges through evolving dynamic processes driven by multiple agents. A new field was born, that of algorithmic game theory, which uses Nash equilibria as a framework for studying computational questions arising from the internet. This brought new urgency to the question of the hardness of finding Nash equilibria.

By 2004, Papadimitriou had been pondering the question for a long time. Then came what he called “a planet alignment.” A fellow Athenian, Constantinos Daskalakis, became Papadimitriou’s PhD student at UC Berkeley. Daskalakis supplied fresh ideas as well as the daring needed to jump into the problem with both feet. Paul Goldberg, who was visiting Berkeley from England, brought in the crucial ingredient of graphical games. The trio proved that finding a Nash equilibrium in a three-person game is PPAD-complete.16 Shortly thereafter, Xi Chen, Xiaotie Deng, and Shang-Hua Teng showed that the proof actually covers the case of two-person games as well.17

That Nash’s proof depends on the Brouwer fixed-point theorem brings about a beautiful symmetry: Finding Nash equilibrium points is exactly as hard as finding Brouwer fixed points. Moreover, showing that Nash is PPAD-complete has deep implications for the usefulness of Nash equilibria.

What is “cool” about Nash equilibria is their universality, Daskalakis once observed.18 Equilibria exist in every game. But finding them could be intractable, as the PPAD-completeness result suggests. “If equilibria are intractable, why would you expect that players, who are boundedly rational and have bounded computational resources in their brains, would actually arrive at an equilibrium?”

PPA versus PPAD

The hairy ball theorem says that you can’t comb a hairy ball without creating a cowlick. More precisely, any continuous vector field on an even-dimensional sphere must vanish somewhere. The computational problem associated to the hairy ball theorem is, given such a vector field, to find a point where the vector field vanishes. In 2019, Goldberg and Alexandros Hollender proved this computational problem is PPAD-complete.19

They also discussed another central result in topology, the Borsuk-Ulam theorem: For any continuous function from the two-sphere to the plane, there is a pair of antipodal points having the same value under the function. A conceptual picture: There are always two antipodal points on the earth with the same temperature and barometric pressure. Here the associated computational problem is to find those antipodal points.

The hairy ball theorem and the Borsuk-Ulam theorem look superficially similar. But, as Goldberg and Hollender point out, the former is in PPAD, whereas the latter is in PPA. Because the PPA principle is more general than the PPAD principle, PPAD is a subclass of PPA. It is believed, though not known, that the two are truly different. This belief corresponds, for example, to the belief that one needs the general parity argument to prove the Borsuk-Ulam theorem; the more specialized PPAD principle won’t suffice.

PPA arises in another recent result, about the problem of factoring integers. It is strongly believed that there is no polynomial-time algorithm for factoring. Resting on this belief is the security of the entire world’s financial system, which counts on the intractability of factoring to encrypt financial transactions.

In 2016, Emil Jeřábek proved a randomized result: roughly speaking, if one accepts a small probability that the algorithm fails to produce a factorization, then the problem of factoring is in PPA.20

These and related results make the comparison of PPA and PPAD all the more intriguing. But most intriguing of all is what the comparison could tell us about P versus NP. For if PPA really is different from PPAD, it would immediately follow that P is different from NP.

Acknowledgments: The author thanks Christos Papadimitriou and Paul Goldberg for helpful discussions.


  1. Michael Sipser, “The History and Status of the P versus NP Question,” STOC ’92: Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (July 1992): 603–18, doi:10.1145/129712.129771. 
  2. “National Cryptologic Museum Opens New Exhibit on Dr. John Nash,” NSA news release, January 27, 2012. 
  3. Sipser, “The History and Status of the P versus NP Question.” 
  4. Allyn Jackson, “Exploring the Theory of Computing,” Communications of the ACM, January 16, 2020. 
  5. “Christos Papadimitriou—Interview,” YouTube video, 21:35, uploaded February 28, 2019. 
  6. Scott Aaronson, “PHYS 771 Lecture 6: P, NP, and Friends,” course notes, ScottAaronson.com. 
  7. Luitzen E. J. Brouwer, “Über Abbildung von Mannigfaltigkeiten,” Mathematische Annalen 71, no. 1 (1911): 97–115, doi:10.1007/bf01456931. 
  8. Dirk van Dalen, L. E. J. Brouwer—Topologist, Intuitionist, Philosopher: How Mathematics Is Rooted in Life (London: Springer, 2012), 169. 
  9. Van Dalen, L. E. J. Brouwer, 503. 
  10. For Brouwer’s theorem, the proof most amenable to algorithmic translation is not his original proof but one that rests on a combinatorial result called Sperner’s lemma. 
  11. Christos Papadimitriou, “On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence,” Journal of Computer and System Sciences 48, no. 3 (1994): 498–532. An earlier version appeared in 31st Annual Symposium on Foundations of Computer Science (FOCS), St. Louis, MO, 1990. 
  12. David Johnson, Christos Papadimitriou, and Mihalis Yannakakis, “How Easy Is Local Search?” Journal of Computer and System Sciences 37, no. 1 (1988): 79–100, doi:10.1109/sfcs.1985.31. 
  13. Christos Papadimitriou, “Algorithms, Complexity, and the Sciences,” Proceedings of the National Academy of Sciences 111, no. 45 (November 11, 2014): 15,881–87, doi:10.1073/pnas.1416954111. 
  14. There is an infinite collection of classes based on PPA called PPA[k], where k is an integer greater than 2. These are based on the principle that in a bipartite graph, if a node has degree not divisible by k, there must be another such node. These classes seem to be different from one another, but their relative hardness is unknown. 
  15. Kenneth Arrow and Gérard Debreu, “Existence of an Equilibrium for a Competitive Economy,” Econometrica 22, no. 3 (1954): 265–90, doi:10.2307/1907353. 
  16. Constantinos Daskalakis, Paul Goldberg, and Christos Papadimitriou, “The Complexity of Computing a Nash Equilibrium,” SIAM Journal on Computing 39, no. 1 (2009): 195–259, doi:10.1137/070699652. 
  17. Xi Chen, Xiaotie Deng, and Shang-Hua Teng, “Settling the Complexity of 2-Player Nash-Equilibrium,” Journal of the ACM 56, no. 3 (2009): 1–57, doi:10.1145/1516512.1516516. 
  18. Allyn Jackson, “Seeking Equilibria in Economics, Computer Science,” Communications of the ACM, December 13, 2018. 
  19. Paul Goldberg and Alexandros Hollender, “The Hairy Ball Problem Is PPAD-Complete,” Journal of Computer and System Sciences 122 (2021): 34–62, doi:10.1016/j.jcss.2021.05.004. 
  20. Emil Jeřábek, “Integer Factoring and Modular Square Roots,” Journal of Computer and System Sciences 82, no. 2 (2016): 380–94, doi:10.1016/j.jcss.2015.08.001. 

Allyn Jackson is an American writer and editor based in Germany.




