# Google's Quantum Processor May Achieve Quantum Supremacy in Months

While I said several months ago that we'd find a way to bring Moore's Law back, I didn't expect it to go down like this. In a new report in *Quanta Magazine* by Kevin Hartnett, Hartmut Neven, the director of Google's Quantum Artificial Intelligence Lab, reveals that the growth in power with each new improvement to Google's best quantum processor is unlike anything found in nature. It's growing not just at an exponential rate, as in Moore's Law, but at a *doubly exponential* rate, meaning we may be mere months away from the start of the practical quantum computing era.

## Google's Hartmut Neven is Telling Us to Get Ready

Hartnett's piece should be a major wake-up call for the world. As we've plodded along, assuming tomorrow would be more or less like today, something extraordinary appears to be taking place at Google's Quantum AI Lab in Santa Barbara, California. In December 2018, Neven and his team were running a calculation on the company's best quantum processor when they started to see something unbelievable.


"They were able to reproduce the [quantum processor's] computation using a regular laptop," Hartnett writes. "Then in January, they ran the same test on an improved version of the quantum chip. This time they had to use a powerful desktop computer to simulate the result. By February, there were no longer any classical computers in the building that could simulate their quantum counterparts. The researchers had to request time on Google's enormous server network to do that."

“Somewhere in February I had to make calls to say, ‘Hey, we need more quota,’” Neven told Hartnett. “We were running jobs comprised of a million processors.”

Google's top-performing quantum processor was doing something that has no obvious parallels in nature. "Doubly exponential growth," Hartnett writes, "is so singular that it’s hard to find examples of it in the real world. The rate of progress in quantum computing may be the first."

The unparalleled acceleration of quantum computing speeds that Neven first identified came to be called Neven's Law by Google researchers, a not-so-subtle reference to classical computing's Moore's Law--but with a difference. The two are of a kind, yet what is happening at Google is not simply the return of Moore's Law for the quantum era; Neven's Law suggests we may be about to plunge into an entirely alien world in only a few months.

## Why Moore's Law Continues to Matter Even After Its Demise

For the past decade, computer scientists and engineers have been anticipating the seemingly abrupt end of progress. Moore's Law, a rough guideline which holds that the number of transistors on a chip doubles about every two years, has been functionally dead for at least a couple of years now.

While it held, however, engineers were able to cram more and more transistors onto chips of every size, first empowering mainframes, then servers, then personal computers, and now mobile devices. Every couple of years, each new device wasn't just an improvement; revolutionary technological changes arrived as often as two or three times in a single decade.

The doubling of processing power with each generation of computer chips, every two years, is how we leapt--in less than 50 years--from punch-card computers calculating the flight paths of Apollo astronauts heading to the moon, to the birth and maturing of the Internet, blazing fast computers in our pockets, and neural networks that can run the entire civil service infrastructure of cities in China.

The technological leap humanity made with the silicon transistor was the single greatest innovation in human history. No other discovery or invention, not even fire, has transformed so much, so fast in our human experience--and we've known for at least a decade that this pace of change could not go on forever. As transistors are reduced to just seven nanometers long, engineers are fighting to keep an electric charge flowing in channels whose walls are only atoms thick.

Make the transistor any smaller, and the electric current that powers the processor's calculations and logic simply jumps the channel or leaks out of the component after atoms meant to contain the flow of electrons are disrupted over time.

As more transistors begin to fail and leak their electrons into other components, those too wear down faster and experience higher rates of error, inhibiting the performance of the processor as a whole until the whole thing becomes a useless, leaky sieve of electrons.

Since engineers cannot stabilize the components of the processor if they go any smaller, the silicon chip has reached its physical limit--bringing an end to Moore's Law and with it the expectation that two years from now computers will be twice as fast as they are today.

We don't like this at all, to say the least. We can see the technological potential peeking over the horizon; to come so close only to be restrained by physical laws is the kind of thing that drove us to innovate in the first place.

So what do you do if you can't make a faster computer at the atomic scale? Scientists and engineers inevitably took the next step and looked for an answer in something smaller than the atom: quantum mechanics.

## The Quantum World

The quantum world, however, is not at all like the classical world. Exotic subatomic particles behave in ways that are hard to accept. They seem to blow right through foundational laws of physics without missing a step, as in quantum entanglement, where paired particles appear correlated instantaneously even when they are on opposite sides of the universe.

Schrödinger himself, one of the principal discoverers of quantum mechanics, proposed his famous thought experiment about a cat in a box that is both alive and dead at the same time to demonstrate how absolutely absurd his theory was becoming. He couldn't believe that it was exactly as it appeared.

As maddening as it was, the unavoidable fact is that Schrödinger's cat is indeed both alive and dead at the same time and will remain so until an observer opens the box to check on it; that is the moment the universe has to decide, in purely random fashion, what the ultimate state of the cat actually is.

Not only has this superposition of Schrödinger's cat been demonstrated in practice; the superposition of particles is also where the power of a quantum computer comes from.

By operating on a particle in superposition--called a **quantum bit**, or **qubit**--a quantum computer can hold vastly more data in quantum memory with far fewer bits than a classical computer, and an operation on a **qubit** applies to **all possible values** that **qubit** takes on at once. When these **qubits** are entangled with other, interdependent **qubits**, the machine can perform vastly more complicated logic operations in significantly less time.

This potential for drastically improved processing speed over classical processors is what is driving so much of the hype around quantum computing right now. It's our way of keeping the current rate of progress going, no longer confined to the water's edge by the end of Moore's Law.

## How Quantum Computing is Guaranteed to Upend Our Technology

So how powerful is quantum computing, exactly? What does this speed translate into in real terms? For a while, the answer was nothing. It was treated as a ridiculous idea that no one really took seriously.

Proposed in various forms in academic papers since the 1970s, the idea popped up every now and again, but not only was such a system hard to imagine in practice; such a machine wouldn't have served any real purpose that could justify investing money to investigate it. Then, in 1994, mathematician Peter Shor published a paper that changed everything.

Shor created an algorithm that cracked open a brutally intractable math problem at the foundation of modern RSA cryptography: the prime factorization of integers. Factoring a several-hundred-digit integer into its primes is just not something a classical computer can do efficiently, no matter how many processors you throw at it; the necessary algorithms either aren't known or don't exist.

Even as modern computers became more powerful and were able to use raw processing power to crack earlier 256-bit, 512-bit, and even higher bit-count encryption keys, all one needed to do was double the key's bit-count, and the new scheme was exponentially stronger than the one that had just been cracked.

A classical computer doesn't get exponentially better at solving these problems as the numbers involved grow. This limitation, described by an algorithm's time complexity, eventually puts some problems beyond classical computers' capacity to solve in any realistic timeframe. Lengthening RSA encryption keys can very quickly add millions, billions, or even trillions of years to the time needed to crack a key with a classical computer.
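As a rough sketch of that scaling (the key sizes are illustrative, not figures from the article), a few lines of Python show how the brute-force keyspace explodes as a key's bit-count doubles:

```python
# Each added bit doubles the number of possible keys, so doubling a key's
# bit-count squares the search space a brute-force attack must cover.
def keyspace(bits):
    return 2 ** bits

for bits in (128, 256, 512):
    digits = len(str(keyspace(bits)))
    print(f"{bits}-bit key: roughly 10^{digits - 1} possible keys")

# Doubling the bit-count squares the keyspace:
assert keyspace(512) == keyspace(256) ** 2
```

This is why simply lengthening a key outpaced every advance in classical hardware: the defender's cost grows linearly while the attacker's search space grows exponentially.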

What Shor showed was that exploiting the superposition of qubits would let you solve the factorization problem significantly faster. It might still take a long time to break open the toughest RSA encryption, but a trillion-trillion-year problem becomes a 2-to-5-year problem with a quantum computer--and *only* with a quantum computer.

## If Neven's Law Bears Out, Quantum Computing Will Be Here in Under a Year

People finally took notice after Shor published his paper and realized this was something completely different from classical computing, and potentially orders of magnitude more powerful.

People started to see the potential, but in the 20+ years since Shor's algorithm first appeared, running that algorithm--and maybe a few other quantum algorithms published since--remains just about the only reason we'd ever need a quantum computer in the first place. We have been told that it will change everything, and we have waited as very, very little seems to happen in reality.

Even many computer science professionals, including Ph.D.s and industry veterans who know the science behind it all, have expressed skepticism that quantum computing will deliver its at-times unbelievable promise. That may be changing, however, after Neven went public in May about the incredible growth of Google's quantum processors at Google's Quantum Spring Symposium and introduced the world to the "Law" that bears his name.

He revealed that what he and the rest of Google's quantum computing team were looking at was the "doubly exponential" growth of quantum computing power relative to classical computing: "it looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world," he said. "That’s what we’re experiencing here.”

## What Does Doubly Exponential Growth Actually Mean?

According to Neven, there are two factors that combine to produce this incredible rate of growth Google is seeing in its quantum computer chips.

The first is simply the natural exponential advantage that quantum computing has over classical computing. Where a classical bit can only be in one state at any given time, 1 **or** 0, a qubit in superposition is both 1 **and** 0. This means a quantum register becomes exponentially more efficient at representing and processing data with each additional qubit. For any given number of qubits **n** in a quantum processor, they do the same work or hold the same amount of data as **2^n** classical bits.

**2 qubits** equal **4 bits**, **4 qubits** equal **16 bits**, **16 qubits** equal **65,536 bits**, and so on.
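That scaling can be checked with a couple of lines of Python (a sketch of the arithmetic only, not a simulation of actual qubits):

```python
# Tracking the full state of an n-qubit register classically requires
# 2**n values, which is why each added qubit doubles the equivalent
# classical capacity.
def equivalent_classical_bits(n_qubits):
    return 2 ** n_qubits

for n in (2, 4, 16):
    print(n, "qubits ->", equivalent_classical_bits(n), "classical bits")
# prints 4, 16, and 65536 for the three register sizes
```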

The second is more directly related to the improvements that Google is making to its quantum processors. According to Neven, Google is seeing their best quantum processors improve at an exponential rate, something that IBM has also seen with its **IBM Q System One**. Taken together, Neven says, you end up with a doubly exponential rate of growth of quantum computing relative to classical computing.

What does doubly exponential growth look like? The classic exponential growth function when dealing with bits is doubling, a function defined as **2^n** in binary systems. How do you double the doubling? Replace the **n** in the doubling function with another doubling function, giving **2^(2^n)**.

Since Moore's Law is a doubling function, we can represent Moore's Law like this, where **n** represents a two-year interval:

| **n** | **Classical computing power (2^n)** |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
| 4 | 16 |
| 5 | 32 |
| 6 | 64 |
| 7 | 128 |
| 8 | 256 |
| 9 | 512 |
| 10 | 1,024 |

So what does **Neven's Law** look like? It would look something like this, where **n** equals each new improvement to Google's quantum processor:

| **n** | **2^n** | **2^(2^n)** (quantum power relative to classical) |
|---|---|---|
| 1 | 2 | 4 |
| 2 | 4 | 16 |
| 3 | 8 | 256 |
| 4 | 16 | 65,536 |
| 5 | 32 | 4,294,967,296 |
| 6 | 64 | 18,446,744,073,709,551,616 |
| 7 | 128 | ≈ 3.40 × 10^38 |
| 8 | 256 | ≈ 1.16 × 10^77 |
| 9 | 512 | ≈ 1.34 × 10^154 |
| 10 | 1,024 | ≈ 1.80 × 10^308 |

Once the list goes above **6**, the numbers become so large and abstract that you lose all sense of the gulf between where Google is at one step and where it will be at the next.
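The table above can be reproduced with a short Python sketch (the doubly exponential function comes straight from the text; treating each **n** as one processor improvement is Neven's framing, not a measured benchmark):

```python
def nevens_law(n):
    """Quantum power relative to classical after n improvements: 2**(2**n)."""
    return 2 ** (2 ** n)

# Print the first six rows of the table; beyond n = 6 the values
# quickly dwarf anything a 64-bit register could hold.
for n in range(1, 7):
    print(f"n={n}: 2^{2 ** n} = {nevens_law(n):,}")
```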

In the case of Moore's Law, it started out in the **1960s** as doubling every year, before being revised in the mid-1970s to about every two years. According to Neven, Google is exponentially increasing the power of its processors on a *monthly to semi-monthly basis*. If **December 2018** is the **1** on this list, when Neven first began his calculations, then we are already somewhere between **5** and **7**.

By **December 2019**, only six months from now, the power of Google's quantum computing processor might be anywhere from **2^4096** times to **2^8192** times as powerful as it was at the start of the year. By Neven's telling, by February--only **three months** after they began their tests, so **3** on our list--there were **no longer any classical computers** in the building that could recreate the results of Google's quantum computer's calculations, which a laptop had been doing just **two months** earlier.

Neven said that as a result, Google is preparing to reach **quantum supremacy**--the point where quantum computers start to outperform supercomputers simulating quantum algorithms--in only a matter of **months**, not **years**: “We often say we think we will achieve it in 2019. The writing is on the wall.”

## Skepticism is Warranted, to a Point

It's important to stress that this growth in power is relative to the power of a classical computer, not an absolute measure, and that the starting point for quantum computing not that long ago would be comparable to the **UNIVAC** vacuum tube-era computers from the **1940s** and **1950s**.

Much of the core theoretical-computer science of quantum computing is still being written and debated, and there are those who have their doubts about whether "doubly exponential" growth relative to classical computing is truly happening.

After all, Moore's Law may be done for, but classical computing isn't dead; it continues to improve to this day and will keep doing so as new algorithms are developed that improve the efficiency of classical computers.

Still, others say that it isn't enough to just downplay or dispute the rapid progress claimed by Google for its quantum processors. IBM may be more modest in their predictions about quantum supremacy, but they're confident they can achieve it in about three years. Five years ago, many thought we wouldn't see a quantum computer until 2025 or even as late as 2030 and beyond.

Now, it's looking like we may even see the real deal by Christmas, and there's no reason to think that the power of quantum computers won't continue to increase even further once either Google or IBM or even someone else achieves true **quantum supremacy**.
