The story so far: The appeal of quantum computers (QCs) lies in their ability to exploit quantum physics to solve problems too complex for computers bound by classical physics. The 2022 Nobel Prize in Physics was awarded for work that rigorously tested such quantum phenomena and paved the way for their applications in computer science – which speaks to the current importance of QCs. Several institutes, companies and governments have invested in developing quantum-computing systems, from software that solves various problems to the electromagnetics and materials science that go into expanding their hardware capabilities. In 2021 alone, the Government of India launched a national mission to research quantum technologies with an allocation of ₹8,000 crore; the army opened a quantum research facility in Madhya Pradesh; and the Ministry of Science and Technology jointly established another facility in Pune. Given this wide range of applications, understanding what QCs really are is crucial to seeing past the misinformation that surrounds them and forming expectations closer to reality.
How does a computer use physics?
A macroscopic object, like a ball, a chair, or a person, can only be in one place at a time; its location can be predicted accurately; and its effects on its surroundings cannot be transmitted faster than the speed of light. This is the classical picture of reality.
For example, you can watch a ball flying through the air and plot its trajectory according to Newton’s laws. You can predict exactly where the ball will be at any given moment. When the ball hits the ground, you see it after the time it takes for light to travel from the ball to your eyes.
Quantum physics describes reality at the subatomic level, where the objects are particles like electrons. At this scale, you cannot pin down an electron’s position. You can only know that it will be found somewhere in a given volume of space, with a probability attached to each point in that volume: say, 10% at point A and 5% at point B. If you examine the volume once, you may find the electron at point B. If you examine it repeatedly, you will find the electron at point B 5% of the time.
There are many interpretations of the laws of quantum physics. One is the “Copenhagen interpretation”, famously illustrated by a 1935 thought experiment of Erwin Schrödinger (who in fact devised it as a critique). There is a cat in a locked box with a vial of poison that may or may not have been released. There is no way to know whether the cat is alive or dead without opening the box. Until then, the cat is said to exist in a superposition of two states: alive and dead. Opening the box forces the superposition to collapse into a single state; which state it collapses into depends on the probability of each state.
Similarly, if you examine the volume, you force the superposition of the states of the electrons to collapse to one depending on the probability of each state. (Note: This is a simplified example to illustrate a concept.)
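The repeated-examination idea above can be sketched in plain Python. This is only an illustration of probabilistic measurement, not a simulation of real quantum mechanics; the probabilities are the figures from the text, and `measure` is a hypothetical helper invented for the sketch.

```python
import random

# Probabilities of finding the electron at each point, using the
# illustrative values from the text; the remainder covers the rest
# of the volume.
probabilities = {"A": 0.10, "B": 0.05, "elsewhere": 0.85}

def measure():
    """One 'examination' of the volume: the outcome collapses to a single point."""
    r = random.random()
    cumulative = 0.0
    for point, p in probabilities.items():
        cumulative += p
        if r < cumulative:
            return point
    return "elsewhere"

# Examine the volume repeatedly: point B turns up about 5% of the time.
random.seed(0)
trials = 100_000
hits_b = sum(measure() == "B" for _ in range(trials))
print(f"Found at B in {hits_b / trials:.1%} of measurements")
```

Any single run of `measure()` gives one definite outcome; only the long-run frequencies recover the 10% and 5% figures.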
The other phenomenon relevant to quantum computing is entanglement. When two particles are entangled and then separated by any distance (even more than 1,000 km), observing one particle and thereby collapsing its superposition will cause the superposition of the other particle to collapse instantaneously as well. This phenomenon seems to go against the notion that the speed of light is the ultimate speed limit of the universe: the superposition of the second particle collapses to a single state in less than the roughly three milliseconds light takes to travel 1,000 km (though no usable information is transmitted this way). (Note: the “many worlds” interpretation has gained favor over the Copenhagen interpretation. There is no “collapse” there, which automatically removes some of these puzzling problems.)
How would a computer use superposition?
The bit is the basic unit of a classical computer. Its value is 1 when a corresponding transistor is on and 0 when the transistor is off. The transistor can be in only one of two states at a time, on or off, so a bit can take only one of two values at a time, 0 or 1.
The qubit is the basic unit of a QC. It is typically a particle like an electron. (Google and IBM are known to use transmons, in which pairs of bound electrons oscillate between two superconductors to denote the two states.) Some information is encoded directly on the qubit: when an electron’s spin is pointing up, it means 1; when the spin is pointing down, it means 0.
But instead of being either 1 or 0, the information is encoded in a superposition: say, 45% 0 plus 55% 1. This is entirely different from the two separate states of 0 and 1; it is a third kind of state.
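In the standard formalism, such a state is described by two complex amplitudes whose squared magnitudes give the measurement probabilities. A minimal NumPy sketch of the 45%/55% state from the text:

```python
import numpy as np

# A qubit state is a pair of complex amplitudes (a, b) for the states 0 and 1.
# Measurement probabilities are |a|^2 and |b|^2, which must sum to 1.
state = np.array([np.sqrt(0.45), np.sqrt(0.55)])  # 45% chance of 0, 55% of 1

probs = np.abs(state) ** 2
print(f"P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")
```

Note that the amplitudes, not the probabilities themselves, are what a quantum computer manipulates; this is what lets amplitudes interfere and cancel in ways classical probabilities cannot.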
The qubits are entangled to ensure they work together. When one qubit is measured and its superposition collapses, the superpositions of some or all of the other qubits collapse too, depending on the calculation being performed. The final output of the computer is the state to which all the qubits have collapsed.
A qubit can encode two states; five qubits can encode 2^5 = 32 states. A computer with N qubits can hold a superposition over 2^N states at once, whereas a computer with N transistors can represent only one of those 2^N states at any given time. A qubit-based computer can therefore access more computational paths at once and tackle more complex problems.
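The exponential scaling is easy to check with a short sketch:

```python
# N qubits can be in a superposition over 2**N basis states; N classical
# bits represent exactly one of those 2**N states at any moment.
for n in (1, 5, 50):
    print(f"{n} qubit(s) span {2**n:,} basis states")
```

Already at 50 qubits the count exceeds a quadrillion basis states, which is why even small QCs are hard to simulate classically.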
How come we don’t use them?
Researchers have figured out the basics and used QCs to model the binding energy of hydrogen molecules and to simulate a wormhole model. But to solve most practical problems, like determining the structure of an undiscovered drug molecule, exploring space autonomously, or factoring large numbers, they face formidable challenges.
A practical QC needs at least 1,000 qubits. The largest quantum processor to date has 433 qubits. There is no theoretical limit on building larger processors; the barriers are technological.
Qubits exist in superposition only under certain conditions, including very low temperatures (~0.01 K), radiation shielding and protection from physical shock. Tap the table with your finger and the states of the qubits sitting on it could collapse. Material or electromagnetic defects in the circuitry between qubits can also “corrupt” their states and skew the end result. Researchers have yet to build QCs that fully eliminate these disturbances even in systems with a few dozen qubits.
Error correction is also tricky. The no-cloning theorem states that a qubit’s state cannot be copied perfectly, so engineers cannot simply duplicate a qubit’s state in a classical system to work around errors. One way out is to encode each “logical” qubit – the unit the computation actually uses – redundantly across a group of entangled physical qubits that together detect and correct errors. However, reliable error correction may require each logical qubit to be backed by thousands of physical qubits.
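Quantum error correction cannot be shown in a few lines, but its core idea, redundancy plus a majority vote, has a classical analogue: the three-bit repetition code. The sketch below is a classical illustration only (real quantum codes must detect errors without cloning or directly reading the protected state); the helper names are invented for the example.

```python
import random

def encode(bit):
    """Redundantly encode one logical bit across three physical bits."""
    return [bit, bit, bit]

def noisy(bits, flip_prob):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote: recovers the logical bit if at most one bit flipped."""
    return int(sum(bits) >= 2)

random.seed(1)
flip_prob = 0.05
trials = 20_000
raw_errors = sum(noisy([1], flip_prob)[0] != 1 for _ in range(trials))
coded_errors = sum(decode(noisy(encode(1), flip_prob)) != 1 for _ in range(trials))
print(f"unprotected error rate: {raw_errors / trials:.3f}")
print(f"encoded error rate:     {coded_errors / trials:.3f}")
```

This also illustrates the threshold idea in the next paragraph: majority voting helps only while the per-bit error rate stays low; push `flip_prob` above 0.5 and the redundancy makes things worse, not better.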
Researchers also have yet to build QCs that don’t amplify errors as more qubits are added. This challenge is linked to a fundamental problem: unless the error rate is kept below a certain threshold, adding qubits only increases the noise.
Practical QCs will require at least thousands of qubits operating on superconducting circuits that we have yet to build – not to mention other components such as firmware, circuit optimization, compilers and algorithms that actually exploit the power of quantum physics. Quantum supremacy on useful problems – a QC doing something of practical value that no classical computer can – is therefore likely decades away.
The billions invested in this technology today are based on speculative profits, while companies promising developers access to quantum circuits in the cloud often offer physical qubits with appreciable error rates.
The interested reader can build and simulate rudimentary quantum circuits with IBM’s “Quantum Composer” in the browser.
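As a taste of what such a composer does under the hood, here is a tiny NumPy statevector simulation of the two-gate circuit that produces a Bell state: a Hadamard gate followed by a CNOT. The matrices are the standard textbook gate definitions; this is a pen-and-paper-style sketch, not how a real quantum device works.

```python
import numpy as np

# Standard single- and two-qubit gate matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 2 if qubit 1 is 1

state = np.array([1, 0, 0, 0], dtype=float)    # both qubits start as 0
state = np.kron(H, I) @ state                  # Hadamard on the first qubit
state = CNOT @ state                           # entangle the two qubits

probs = np.abs(state) ** 2                     # outcomes 00, 01, 10, 11
for label, p in zip(["00", "01", "10", "11"], probs):
    print(f"P({label}) = {p:.2f}")
```

The result assigns probability 0.5 each to 00 and 11 and zero to 01 and 10 – the same entangled behaviour sampled earlier, produced here by two elementary gates.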