Solving Complex Problems: A Look into Computational Complexity Theory
Category: Computer Science | Thursday, April 18, 2024, 14:57 UTC

Computational complexity theory is a subfield of computer science that studies the best approaches to solving difficult problems. Researchers have long debated whether some problems can only be solved through trial and error. Last November, two groups of researchers independently discovered algorithms for compression problems that are slightly faster than exhaustive search. The results highlight the ongoing quest to understand the limits of computational problem solving.
What’s the best way to solve hard problems? That’s the question at the heart of a subfield of computer science called computational complexity theory. It’s a hard question to answer, but flip it around and it becomes easier. The worst approach is almost always trial and error, which involves plugging in possible solutions until one works. But for some problems, it seems there simply are no alternatives — the worst approach is also the best one.
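To make the trial-and-error idea concrete, here is a minimal Python sketch of guess-and-check on a toy subset-sum instance. The problem and the numbers are chosen here purely for illustration and are not from the article; the point is only that the search plugs in every candidate solution until one passes the check.

```python
from itertools import combinations

def guess_and_check(numbers, target):
    """Pure trial and error: try every subset until one sums to the target.

    The number of candidates doubles with each additional element, so the
    running time grows exponentially -- which is why this is the "worst"
    approach the article describes.
    """
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:   # the "check" step
                return subset           # a guess that works
    return None                         # no solution exists

print(guess_and_check([3, 7, 12, 5], 17))  # prints (12, 5)
```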
Researchers have long wondered whether that’s ever really the case, said Rahul Ilango, a graduate student studying complexity theory at the Massachusetts Institute of Technology. "You could ask, ‘Are there problems for which guess-and-check is just optimal?’"
Complexity theorists have studied many computational problems, and even the hard ones often admit some kind of clever procedure, or algorithm, that makes finding a solution a little bit easier than pure trial and error. Among the few exceptions are so-called compression problems, where the goal is to find the shortest description of a data set.
But last November, two groups of researchers independently discovered another algorithm for compression problems — one that’s ever so slightly faster than checking all the possible answers. The new approach works by adapting an algorithm invented by cryptographers 25 years ago for attacking a different problem. There’s just one restriction: You need to tailor the algorithm to the size of your data set.
"They’re really beautiful and important results," said Eric Allender, a theoretical computer scientist at Rutgers University.
Defining Hardness
The new results are the latest to investigate a question first studied in the Soviet Union, well before the advent of complexity theory. "Before I was in grade school, people in Russia were formulating this," Allender said.
The specific computational problem that those Soviet researchers studied, called the minimum circuit size problem, is akin to one that designers of computer hardware face all the time. If you’re given complete specifications of how an electronic circuit should behave, can you find the simplest circuit that will do the job? Nobody knew how to solve this problem without "perebor," a Russian word roughly equivalent to "exhaustive search."

The minimum circuit size problem is an example of a compression problem. You can describe a circuit’s behavior with a long string of bits (0s and 1s) and then ask whether there’s a way to reproduce that same behavior using fewer bits. Checking all possible circuit layouts would take time that grows exponentially with the number of bits in the string.

This sort of exponential growth is the defining feature of a hard computational problem. But not all hard problems are equally hard: some have algorithms that are faster than exhaustive search, though their runtimes still grow exponentially. In modern terms, the perebor question is whether any such algorithms exist for compression problems.

In 1959, a prominent researcher named Sergey Yablonsky claimed to have proved that exhaustive search really was the only way to solve the minimum circuit size problem. But his proof left some gaps.
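For readers who want to see the exhaustive-search framing above spelled out, here is a hypothetical Python sketch of perebor for a toy version of the minimum circuit size problem. The circuit representation (straight-line programs over AND, OR and NAND gates reading any two earlier wires) and the gate-count size measure are simplifying assumptions made for this example; it is not the researchers' algorithm, only the kind of brute-force search the text describes.

```python
from itertools import product

def input_tables(n):
    """Truth table of each input variable over all 2**n assignments."""
    assignments = list(product([0, 1], repeat=n))
    return [tuple(a[i] for a in assignments) for i in range(n)]

def smallest_circuit(target, n, max_gates=4):
    """Brute-force the smallest circuit (fewest gates) whose output truth
    table equals `target`, a tuple of 2**n bits.

    Each gate applies AND, OR or NAND to two earlier wires (inputs or
    previous gates); the circuit's output is its last wire. The search
    tries 0 gates, then 1, then 2, ... so the first hit is minimal.
    """
    ops = [("AND",  lambda a, b: a & b),
           ("OR",   lambda a, b: a | b),
           ("NAND", lambda a, b: 1 - (a & b))]

    def extend(wires, gates_left):
        if wires[-1] == target:
            return []                                # output already matches
        if gates_left == 0:
            return None
        k = len(wires)
        for i in range(k):
            for j in range(k):
                for name, fn in ops:
                    new = tuple(fn(a, b) for a, b in zip(wires[i], wires[j]))
                    rest = extend(wires + [new], gates_left - 1)
                    if rest is not None:
                        return [(name, i, j)] + rest
        return None

    wires = input_tables(n)
    for size in range(max_gates + 1):
        gates = extend(wires, size)
        if gates is not None:
            return gates
    return None                                      # nothing small enough

# Two-input XOR has truth table (0, 1, 1, 0) over inputs (0,0),(0,1),(1,0),(1,1);
# the search reports a minimal three-gate circuit over this gate set.
print(smallest_circuit((0, 1, 1, 0), n=2))
```

Even with this tiny gate set, the number of candidate circuits grows exponentially with the allowed size, which is exactly the cost the perebor question asks whether any algorithm can avoid.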