The Mathematical Beauty of Randomness

Category: Computer Science

tldr #

Randomness is a powerful tool in computer science, making it possible to answer clear-cut true-or-false questions far faster than any known deterministic method. A theorem of Pierre de Fermat about prime numbers can be adapted into a fast randomized test of whether a given number is prime or composite. Researchers have since found many other problems that randomness makes dramatically easier, and in the 1990s the "hardness versus randomness" theorem of Nisan and Wigderson showed that, under plausible assumptions, any fast randomized algorithm can in principle be converted into a deterministic one that is only slightly slower.


content #

Since the very first days of computer science — a field known for its methodical approach to problem-solving — randomness has played an important role. The first program to run on the world’s first general-purpose electronic computer used randomness to simulate nuclear processes. Similar approaches have since been used in astrophysics, climate science and economics. In all these cases, plugging in random numbers at certain steps in the algorithm helps researchers account for uncertainty about the many ways that complex processes can play out.

Randomness can be used to make predictions about the future.

But adding randomness into an algorithm can also help you calculate the correct answer to unambiguous true-or-false questions. "You just say ‘OK, let me give up, let me not try, let me just pick something at random,’" said Eric Blais, a computer scientist at the University of Waterloo. "For way too many problems, that ends up being a successful approach."

Let’s say you want to determine whether a given number is prime (divisible only by 1 and itself) or composite (also divisible by other integers). You could simply try to divide it by all possible factors, but for large numbers this "brute force" method and other factoring algorithms are excruciatingly slow. And if the number turns out to be composite, factoring algorithms tell you the values of its divisors — more information than you asked for. If you care only about a number’s "primality," is there a more efficient algorithm?
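To make the brute-force approach concrete, here is a minimal sketch in Python (the function name is my own, purely illustrative). Even with the standard shortcut of stopping at the square root, the number of divisions grows so quickly that the method is hopeless for numbers hundreds of digits long.

```python
def is_prime_trial_division(n):
    """Brute-force primality check: try every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # roughly sqrt(n) iterations in the worst case
        if n % d == 0:
            return False       # found a divisor, so n is composite
        d += 1
    return True                # no divisor found, so n is prime
```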

Random numbers are used in cryptography to protect messages.

There is if you use randomness. The basic idea goes back to a result from the 17th-century French mathematician Pierre de Fermat, known as his "little theorem." Fermat considered two integers — call them N and x. He proved that if N is a prime number, then x^N − x is always a multiple of N, regardless of the value of x. Equivalently, if x^N − x is not a multiple of N, then N can’t be a prime number. But the converse isn’t always true: if x^N − x is a multiple of N, then N is usually, but not always, prime.
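To see the theorem in action, note that x^N − x is a multiple of N exactly when x^N and x leave the same remainder on division by N, which is easy to check with modular exponentiation. The small Python sketch below (the helper name fermat_holds is my own, for illustration) shows the theorem holding for a prime, a composite being caught, and the converse failing:

```python
def fermat_holds(x, n):
    # True exactly when x**n - x is a multiple of n
    return pow(x, n, n) == x % n

print(fermat_holds(5, 7))    # True: 7 is prime, so 5**7 - 5 = 78120 is a multiple of 7
print(fermat_holds(2, 15))   # False: 2**15 - 2 = 32766 is not a multiple of 15, so 15 is composite
print(fermat_holds(4, 15))   # True even though 15 is composite: the converse can fail
```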

Randomness helps to reduce the computational complexity of certain tasks.

To turn Fermat’s little theorem into a primality test, just take the N that you’re interested in, choose x at random, and plug the two numbers into x^N − x. If the result is not a multiple of N, then you’re done: You know that N is definitely composite. If the result is a multiple of N, then N is probably prime. Now pick another random x and try again. In most cases, after a few dozen tries, you can conclude with near certainty that N is a prime number. "You do this a small number of times," Blais said, "and somehow now your probability of having an error is less than the probability of an asteroid hitting the Earth between now and when you look at the answer."
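Here is a minimal sketch of such a test in Python, an illustration of the idea rather than a production implementation (the function name and the choice of 40 trials are my own; real libraries use refinements such as the Miller-Rabin test):

```python
import random

def fermat_test(n, trials=40):
    """Randomized Fermat primality test.

    Returns False if n is definitely composite and True if n is probably prime.
    Each passing trial shrinks the chance of error, except for the rare
    Carmichael numbers, which the Miller-Rabin refinement handles.
    """
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        x = random.randrange(2, n - 1)   # pick a random x between 2 and n - 2
        if pow(x, n, n) != x:            # x**n - x is not a multiple of n
            return False                 # witness found: n is definitely composite
    return True                          # no witness found: n is probably prime

print(fermat_test(2**127 - 1))   # True: 2**127 - 1 is a known (Mersenne) prime
print(fermat_test(2**127 + 1))   # False with overwhelming probability: it is composite
```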

Randomness can show us if two objects have similar properties.

The first primality tests using randomized algorithms (based on refinements to Fermat’s little theorem) ushered in a new era. Problem after problem turned out to be far easier to solve with randomness than with nonrandom, or deterministic, algorithms. The key was to reframe each problem as one that could be quickly solved given an appropriate value for some number x, and then prove that just about any x would do. The solution works even though researchers have no idea how to determine whether any specific choice is a good one. Mathematicians have joked that this unusual challenge is akin to finding hay in a haystack.

Randomized algorithms are widely used in machine learning.

But these successes made researchers wonder why randomness should help at all; randomly chosen numbers look, at first glance, like an unlikely ingredient for solving precisely defined logical problems. A major answer arrived in the 1990s with the "hardness versus randomness" theorem of Noam Nisan and Avi Wigderson. It showed that if certain problems are as computationally hard as researchers believe, then any fast randomized algorithm, including those behind primality tests, graph-coloring tasks and network-flow problems, can be converted into a deterministic algorithm that is only modestly slower. In other words, randomness is not essential in principle; its power lies in how much simpler and faster it makes the algorithms researchers actually know how to design.

Randomness in computer models can sometimes reveal hidden patterns.
