The Limit of Algorithms: Alan Turing's Uncomputable Problem
Algorithms have become ubiquitous. They optimize our commutes, process payments and coordinate the flow of internet traffic. It seems that for every problem that can be articulated in precise mathematical terms, there’s an algorithm that can solve it, at least in principle.
But that’s not the case — some seemingly simple problems can never be solved algorithmically. The pioneering computer scientist Alan Turing proved the existence of such "uncomputable" problems nearly a century ago, in the same paper where he formulated the mathematical model of computation that launched modern computer science.
Turing proved this groundbreaking result using a counterintuitive strategy: He defined a problem that simply rejects every attempt to solve it.
"I ask you what you’re doing, and then I say, ‘No, I’m going to do something different,’" said Rahul Ilango, a graduate student at the Massachusetts Institute of Technology studying theoretical computer science.
Turing’s strategy was based on a mathematical technique called diagonalization that has a distinguished history. Here’s a simplified account of the logic behind his proof.
String Theory
Diagonalization stems from a clever trick for solving a mundane problem that involves strings of bits, each of which can be either 0 or 1. Given a list of such strings, all equally long, can you generate a new string that isn’t on the list?
The most straightforward strategy is to consider each possible string in turn. Suppose you have five strings, each five bits long. Start by scanning the list for 00000. If it’s not there, you can stop; if it is, you move on to 00001 and repeat the process. This is simple enough, but it’s slow for long lists of long strings.
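Here is a rough Python sketch of that brute-force search; the function name and the example list are just illustrative:

```python
from itertools import product

def brute_force_missing(strings):
    """Check every possible bit string of the given length, in order,
    until one is found that is not on the list."""
    n = len(strings[0])                    # all strings are equally long
    listed = set(strings)
    for bits in product("01", repeat=n):   # 00000, 00001, 00010, ...
        candidate = "".join(bits)
        if candidate not in listed:
            return candidate
    return None                            # only if the list contains every possible string

print(brute_force_missing(["00000", "11111", "01010", "10101", "00111"]))  # prints 00001
```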
Diagonalization is an alternate approach that builds up a missing string bit by bit. Start with the first bit of the first string on the list and invert it — that’ll be the first bit of your new string. Then invert the second bit of the second string and use that as the second bit of the new string, and repeat until you get to the end of the list. The bits you flip ensure that the new string differs from every string on the original list in at least one place. (They also form a diagonal line through the list of strings, giving the technique its name.)

Diagonalization only needs to examine one bit from each string on the list, so it’s often much faster than other methods. But its true power lies in how well it plays with infinity.
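In the finite case, the whole trick fits in a few lines. Here is a minimal Python sketch (the name and example list are illustrative); it assumes the list has no more strings than each string has bits, so the diagonal reaches every row:

```python
def diagonal_missing(strings):
    """Build a new string whose i-th bit is the flipped i-th bit of the
    i-th string, so it differs from every listed string in at least one place."""
    return "".join("1" if s[i] == "0" else "0" for i, s in enumerate(strings))

strings = ["00000", "11111", "01010", "10101", "00111"]
print(diagonal_missing(strings))  # prints 10110, which is not on the list
```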
"The strings can now be infinite; the list can be infinite — it still works," said Ryan Williams, a theoretical computer scientist at MIT.
The first person to harness this power was Georg Cantor, the founder of the mathematical subfield of set theory. In 1891, Cantor used diagonalization to prove that some infinities are larger than others. Forty-five years later, Turing adapted Cantor’s version of diagonalization to the theory of computation, giving it a distinctly contrarian flavor.
The Limitation Game
Turing wanted to prove the existence of mathematical problems that no algorithm can solve — that is, problems with well-defined inputs and outputs but no foolproof procedure for getting from input to output.
To do it, he set up a game with a paradoxical twist: He challenged an adversary to pinpoint the limits of algorithms. Assume, for the sake of argument, that there is an algorithm A that can solve any mathematical problem. The adversary’s task is to find a problem that A can’t solve.
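To see the shape of the argument, imagine that all algorithms could be listed out like the strings above, each one taking a number as input and answering "yes" or "no." Here is a purely conceptual Python sketch of the adversary’s diagonal move; the names and the toy list of algorithms are hypothetical stand-ins, not Turing’s formalism:

```python
def adversary_answer(i, algorithms):
    """The adversary's problem: on input i, the correct answer is defined to be
    the opposite of whatever the i-th listed algorithm outputs on input i."""
    return "no" if algorithms[i](i) == "yes" else "yes"

# Toy stand-ins for a listing of algorithms, each mapping a number to "yes" or "no".
# The real argument needs the full list of all algorithms; this finite list only
# illustrates the diagonal flip.
toy_algorithms = [
    lambda n: "yes",                          # always answers yes
    lambda n: "no",                           # always answers no
    lambda n: "yes" if n % 2 == 0 else "no",  # answers yes on even inputs
]

for i in range(len(toy_algorithms)):
    print(i, toy_algorithms[i](i), "vs", adversary_answer(i, toy_algorithms))
```

By construction, each listed algorithm disagrees with this problem on at least one input, the one sitting on the diagonal; that built-in disagreement is the contrarian twist Turing exploited.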