Thanks to artificial intelligence technologies, computers today can engage in compelling conversations with people, compose songs, paint pictures, play chess and diagnose diseases, to name just a few examples of their prowess.
These successes could be taken to indicate that computation has no limits. To see whether that’s the case, it’s important to understand what makes a computer powerful.
There are two aspects to the power of a computer: the number of operations its hardware can perform per second and the efficiency of the algorithms it runs. Hardware speed is limited by the laws of physics. Algorithms – essentially sets of instructions – are written by humans and translated into a sequence of operations that computer hardware can perform. Even if a computer’s speed reached the physical limit, computational hurdles would still remain because of the limitations of algorithms.
These obstacles include problems that are impossible for computers to solve and problems that are theoretically solvable but, in practice, beyond the capabilities of even the most powerful computers imaginable today. Mathematicians and computer scientists try to determine whether a problem is solvable by trying it out on an imaginary machine.
An imaginary computing machine
The modern notion of an algorithm, formalized as the Turing machine, was introduced in 1936 by British mathematician Alan Turing. It is an imaginary device that mimics the way arithmetic calculations are performed with a pencil on paper. The Turing machine is the model on which all computers are based today.
To account for calculations that would require more paper if done by hand, the imaginary paper supply in a Turing machine is assumed to be unlimited. This amounts to an unlimited imaginary tape of squares, each of which is either blank or contains a symbol.
The machine is controlled by a finite set of rules and starts on an initial sequence of symbols on the tape. The operations it can perform are moving to a neighboring square, erasing a symbol and writing a symbol on a blank square. The machine computes by carrying out a sequence of these operations. When the machine finishes, or “halts”, the symbols remaining on the tape are the output or result.
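To make this concrete, here is a minimal sketch in Python of how such a machine can be simulated. The tape is unlimited in the sense that any square never written to is treated as blank, and the particular rule set shown, a machine that flips 0s and 1s, is purely an illustrative assumption rather than anything from Turing’s paper.

# A minimal Turing machine simulator: the tape is a dictionary of squares,
# and the rules map (state, symbol) to (new symbol, move, new state).

def run_turing_machine(rules, tape_input, start_state, halt_state):
    tape = {i: s for i, s in enumerate(tape_input)}  # unlimited tape; unwritten squares are blank
    head, state = 0, start_state
    while state != halt_state:
        symbol = tape.get(head, ' ')                 # read the current square (blank if untouched)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol                      # write (or erase) a symbol
        head += 1 if move == 'R' else -1             # move to a neighboring square
    return ''.join(tape[i] for i in sorted(tape)).strip()

# Illustrative rule set: scan right, replacing 0 with 1 and 1 with 0, and halt at the first blank.
flip_rules = {
    ('scan', '0'): ('1', 'R', 'scan'),
    ('scan', '1'): ('0', 'R', 'scan'),
    ('scan', ' '): (' ', 'R', 'halt'),
}

print(run_turing_machine(flip_rules, '10110', 'scan', 'halt'))  # prints 01001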
Computing is often about making decisions that have yes-or-no answers. By analogy, a medical test (the type of problem) checks whether a patient’s sample (an instance of the problem) has a certain indicator of disease (the yes-or-no answer). The instance, represented in digital form, is the initial sequence of symbols given to the Turing machine.
A problem is considered “solvable” if a Turing machine can be designed that halts for every instance, whether positive or negative, and correctly determines the answer for that instance.
Not all problems can be solved
Many problems can be solved using a Turing machine and can therefore be solved on a computer, while many others cannot. For example, the domino problem, a variant of the tiling problem formulated by Chinese-American mathematician Hao Wang in 1961, is not solvable.
The task is to use a set of dominoes to cover an entire grid and, following the rules of most domino games, to match the number of pips on the ends of adjoining dominoes. It turns out that there is no algorithm that can take any given set of dominoes and determine whether or not the set can completely cover the grid.
Staying reasonable
Many solvable problems can be handled by algorithms that halt in a reasonable amount of time. These “polynomial-time algorithms” are efficient algorithms, meaning it is practical to use computers to solve instances of these problems.
Thousands of other solvable problems are not known to have polynomial-time algorithms, despite continued intensive efforts to find such algorithms. These include the traveling salesman problem.
The traveling salesman problem asks whether a graph, meaning a set of points with some of the points directly connected to each other, has a path that starts from any point, passes through every other point exactly once, and returns to the point of origin. Imagine that a salesperson wants to find a route that passes exactly once through all the households in a neighborhood and returns to the starting point.
These problems, called NP-complete, were independently formulated and shown to exist in the early 1970s by two computer scientists, Canadian-American Stephen Cook and Ukrainian-American Leonid Levin. Cook, whose work came first, received the 1982 Turing Award, the highest honor in computing, for it.
The cost of knowing exactly
The best-known algorithms for NP-complete problems essentially search for a solution among all possible answers. The traveling salesman problem on a graph of a few hundred points would take years to run on a supercomputer. Such algorithms are inefficient, meaning there are no known mathematical shortcuts.
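The sketch below illustrates this kind of exhaustive search in Python for the traveling salesman problem described above: it simply tries every possible ordering of the points, so its running time grows factorially with the number of points. The small graph at the end is made up purely for illustration.

from itertools import permutations

# Brute-force check for a round trip that visits every point exactly once
# and returns to the start, using only direct connections.

def has_round_trip(points, edges):
    connected = {frozenset(e) for e in edges}
    start, *rest = points
    for order in permutations(rest):               # every possible visiting order
        route = [start, *order, start]             # end back at the starting point
        if all(frozenset(pair) in connected
               for pair in zip(route, route[1:])): # each leg must be a direct connection
            return True
    return False

# A small illustrative graph: four households with five direct connections.
points = ['A', 'B', 'C', 'D']
edges = [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'A'), ('A', 'C')]
print(has_round_trip(points, edges))  # True: for example A -> B -> C -> D -> A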
Practical algorithms that deal with these problems in the real world can only offer approximations, although the approximations are getting better. The question of whether there are efficient polynomial-time algorithms that can solve NP-complete problems is among the seven Millennium Prize Problems posed by the Clay Mathematics Institute at the turn of the 21st century, each carrying a prize of 1 million U.S. dollars.
Beyond Turing
Could there be a new form of computation beyond Turing’s framework? In 1982, the American physicist Richard Feynman, a Nobel laureate, proposed the idea of computation based on quantum mechanics.
In 1994, Peter Shor, an American applied mathematician, presented a quantum algorithm for factoring integers in polynomial time, a problem widely believed to be unsolvable by polynomial-time algorithms in the Turing framework. Factoring an integer means finding a smaller integer greater than 1 that divides it. For example, the integer 688,826,081 has a smaller factor, 25,253, because 688,826,081 = 25,253 x 27,277.
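For a sense of what factoring asks, here is a short Python sketch of trial division, the straightforward but slow classical approach. It is only meant to illustrate the problem; Shor’s quantum algorithm works very differently and is far more involved.

# Trial division: test divisors one by one until a factor is found.
# Slow for large numbers, which is exactly why factoring is hard classically.

def smallest_factor(n):
    d = 2
    while d * d <= n:          # a composite number has a factor no larger than its square root
        if n % d == 0:
            return d
        d += 1
    return n                   # n is prime: no smaller factor exists

n = 688_826_081
factor = smallest_factor(n)
print(factor, n // factor)     # 25253 27277, since 688,826,081 = 25,253 x 27,277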
A major cryptographic algorithm, RSA, widely used to secure network communications, is based on the computational difficulty of factoring large integers. Shor’s result suggests that quantum computing, if it were to become a reality, would change the cybersecurity landscape.
Can a full-fledged quantum computer be built to factor integers and solve other problems? Some scientists believe that may be the case. Several groups of scientists around the world are working to build one, and some have already built small-scale quantum computers.
Nevertheless, as with every new technology invented before it, quantum computing will almost certainly run into problems that impose new limits of their own.