How to Make a Jupiter Brain – A Computer the Size of a Planet
How feasible is it to build a Jupiter brain, a computer the size of a planet? Just in the past few decades, the amount of computational power available to humanity has increased dramatically. Your smartphone is millions of times more powerful than the NASA computers used to send astronauts to the moon on the Apollo 11 mission in 1969. Computers have become integral to our lives, serving as the backbone of our communications, finances, education, art, health care, military, and entertainment. In fact, it would be hard to find an area of our lives that computers don't affect.
Now imagine that one day we make a computer that’s the size of an entire planet. And we’re not talking Earth, but larger, a megastructure the size of a gas giant like Jupiter. What would be the implications for humans to operate a computer that size, with an absolutely enormous, virtually limitless, amount of computing power? How would our lives change? One certainly begins to conjure up the transformational effects of having so much oomph, from energy generation to space travel and colonization to a fundamental change in the lifespan and abilities of future humans.
But while speculation of that sort can easily lead us into the fictional realm, what are the known facts about creating such an impressive computer? How hard would it be?
The limits of a Jupiter brain
Building a Jupiter brain would depend on specific factors that limit the power of a computer, as outlined by the Swedish computational neuroscientist and transhumanist Anders Sandberg in his seminal 1999 paper on the subject. His work, titled “The Physics of Informational Processing Superobjects: Daily Life Among the Jupiter Brains,” focused on what it would take to build such an enormous computer. As Sandberg writes in his paper, the “laws of physics impose constraints on the activities of intelligent beings regardless of their motivations, culture or technology.” More specifically, he argues, each civilization is also limited by the physics of information processing.
The specific physical constraints Sandberg identified for supersizing a computer are the following:
1. Processing and memory density
The elements that constitute a computer and its memory units, all the chips and circuits involved, have a finite size, which is limited by physics. This fact creates “an upper limit” on the processing and memory density of any computing system. In other words, you can’t make computer parts smaller than a certain size; below that limit, they stop functioning reliably.
2. Processing speed
The speed of information processing or memory retrieval is related to how fast electrical signals can travel through the computer, determined by the “natural timescales of physical processes,” writes Sandberg.
3. Communication delays
If we build a gigantic computer the size of a planet, it would experience delays in communication between its far-flung parts because signals cannot travel faster than light. In fact, the faster its processing speed, the longer those delays would feel “from an internal subjective point of view,” as Sandberg describes. To minimize delays, the distances within the system need to be as small as possible, or the system must avoid relying on communication over long distances.
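The scale of these delays is easy to estimate from the speed of light alone. Here is a minimal sketch in Python, using the ~18,000 km sphere Sandberg later proposes for his "Zeus" design; the 1 GHz local clock is our illustrative assumption, not a figure from the paper:

```python
# Rough one-way signal latency across a planet-sized computer.
C = 299_792_458            # speed of light in vacuum, m/s
diameter_m = 18_000e3      # ~18,000 km, the diameter Sandberg uses for "Zeus"

one_way_delay = diameter_m / C          # seconds for light to cross the sphere
clock_hz = 1e9                          # assumed 1 GHz local clock
cycles_lost = one_way_delay * clock_hz  # local clock ticks during one crossing

print(f"one-way delay: {one_way_delay * 1e3:.0f} ms")
print(f"clock ticks elapsed at 1 GHz: {cycles_lost:,.0f}")
```

Sixty milliseconds is an eternity for a processor: tens of millions of local clock ticks pass while a single message crosses the machine, which is why keeping communicating components close together matters so much.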
4. Energy supply
As you might imagine, an extremely large computing system would be a major power hog. Computation on such a scale would require tremendous amounts of energy and careful management of heat dissipation. In fact, looking for the heat emissions of large computing systems is one proposed way to scour the sky for advanced alien civilizations.
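A back-of-the-envelope calculation shows why energy and heat dominate the design. Landauer's principle puts a thermodynamic floor under conventional (irreversible) computing: erasing one bit at temperature T dissipates at least kT·ln 2 of heat. The sketch below applies that floor to a machine running 10⁴² operations per second, assuming, purely for illustration, room-temperature operation and one bit erased per operation:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # assumed room-temperature operation, K
OPS_PER_SEC = 1e42        # Jupiter-brain-scale throughput

# Landauer limit: minimum heat dissipated per erased bit.
energy_per_bit = K_B * T * math.log(2)       # ~2.9e-21 J
power_watts = energy_per_bit * OPS_PER_SEC   # assuming 1 bit erased per op

EARTH_SOLAR_INTERCEPT = 1.7e17  # W, approx. total sunlight hitting Earth
print(f"Landauer floor: {power_watts:.2e} W, "
      f"~{power_watts / EARTH_SOLAR_INTERCEPT:,.0f}x Earth's solar intake")
```

Even at the theoretical minimum, the waste heat dwarfs all the sunlight falling on Earth, which helps explain why the design discussed below leans on dedicated reactors, large radiators, and reversible computing to dodge the erasure cost entirely.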
Sandberg suggests some ways to deal with these challenges. Since the power and speed of individual processors face hard limits, the focus shifts to building parallel systems in which many disparate elements work in unison. He gives the example of the human brain, where “even fairly slow and inefficient elements can produce a very powerful computing system.”
The processing constraints and communication delays may have to be handled by making the computing system more concentrated and modular. Among other considerations, Sandberg also proposes giving “reversible computing” (a theoretical model of computation in which every step can be undone) a closer look, as it may be possible to perform this type of computation without dissipating additional energy. Because it is based on reversible physics and no bits are ever erased, it sidesteps the thermodynamic cost of destroying information. An example of a reversible operation would be copying a record along with its inverse, rather than overwriting it. Such machines could potentially be built using reversible circuits and reversible logic, as well as quantum computation, among several other approaches Sandberg proposes.
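To see what “no bits being erased” means in practice, consider a toy example. An ordinary AND gate destroys information (a 0 output could have come from three different inputs), but the Toffoli gate, a standard reversible-logic primitive, maps each 3-bit state to a unique 3-bit state and is its own inverse. A minimal sketch:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flip c only when both a and b are 1."""
    return a, b, c ^ (a & b)

states = list(product((0, 1), repeat=3))

# Bijective: all 8 possible inputs map to 8 distinct outputs,
# so no information is ever lost...
assert len({toffoli(*s) for s in states}) == 8

# ...and the gate undoes itself: applying it twice restores the input.
assert all(toffoli(*toffoli(*s)) == s for s in states)

# Contrast: AND is irreversible -- four inputs collapse onto two outputs.
assert len({a & b for a, b in product((0, 1), repeat=2)}) == 2
```

Because every step can be run backwards, a computer built entirely from gates like this never has to pay the thermodynamic price of erasing information.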
Technologies you would need
One of the fun parts of trying to design a Jupiter brain is figuring out the technology that would be necessary to accomplish this mammoth task. Beyond the army of self-replicating nanorobot swarms that would likely be needed to assemble such an immense computer, Sandberg sketches, in an appendix to his paper, a design for a Jupiter brain he calls “Zeus.”
Zeus would be a sphere 11,184 miles (18,000 kilometers) in diameter, weighing about 1.8 times the mass of Earth. This super-object would be made out of diamondoids, diamond-like carbon nanostructures. These would form a network of nodes around a central energy core, built from quantum dot circuits and molecular storage systems. Another way to organize the nodes and distribute information would be a cortex “with connections through the interior,” which Sandberg finds most “volume-efficient” and best for cooling.
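Those two numbers, 1.8 Earth masses packed into an 18,000 km sphere, can be sanity-checked in a few lines of Python: the implied mean density lands almost exactly on that of diamond (~3,510 kg/m³), consistent with a body built from diamondoids:

```python
import math

EARTH_MASS = 5.972e24       # kg
radius_m = 18_000e3 / 2     # Zeus: 18,000 km diameter

mass = 1.8 * EARTH_MASS
volume = (4 / 3) * math.pi * radius_m ** 3
density = mass / volume     # kg/m^3

print(f"mean density: {density:,.0f} kg/m^3 (diamond: ~3,510 kg/m^3)")
```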
Each node would be a processing element, a memory storage system, or both, meant to act with relative independence. Internal connections between the nodes would be optical, employing fiber optics/waveguides or utilizing “directional signals sent through vacuum.”
Around the sphere would be a concentric shield whose function would be to offer protection from radiation and dissipate heat into space via radiators. Zeus would be powered by nuclear fusion reactors dispersed on the outside of that shield. This would make a Jupiter brain particularly distinct from other hypothetical megastructures like a Dyson Sphere or a Matrioshka Brain that Type II civilizations on the Kardashev Scale could theoretically create to harness energy from stars.
Where would we get the supplies to make a Jupiter brain? Sandberg proposes gathering the carbon located in gas giant cores or through star lifting, any one of several hypothetical processes that would allow Type II civilizations to repurpose stellar matter.
If planet-size computers are not enough of a challenge, Sandberg also proposes some information-processing solutions that even he terms “exotica,” as they involve still-developing or purely theoretical technologies. Among these are quantum computers, which are not only quantitatively but “qualitatively more powerful than classical computers.” Sandberg also believes they allow for reversible computation and are the “natural choice” when it comes to computing systems on the nanoscale or the even smaller femtoscale.
Black holes could potentially be used as processing elements if they do not destroy information, a currently contested notion. If information is released from black holes via Hawking radiation, they could possibly be tapped as information processors, Sandberg conjectures.
A network of wormholes, theoretical tunnels connecting distant parts of the spacetime continuum, is another yet-to-be-proven structure that could prove “extremely useful” for information processing and communications.
Another philosophical nugget that would be at home in any discussion involving The Matrix also emerged from Sandberg’s paper: As a civilization grows and expands its information processes to the limits of physical laws and technology, it will at some point become “advantageous in terms of flexibility and efficiency for individual beings to exist as software rather than (biological) hardware."
Why is that so? Fewer of the increasingly scarce resources would be required to sustain such a being, which could evolve simply by modifying its own code. The limits of this virtual existence are set by the computing system it runs on. “As technology advances the being will be extended too,” writes Sandberg.
The Swedish philosopher and computational neuroscientist Nick Bostrom wrote a now-famous paper on the Simulation Hypothesis titled “Are You Living in a Computer Simulation?” In it, he estimates that all the brain activity of all the humans who ever lived would amount to somewhere between 10³³ and 10³⁶ operations. By comparison, a planet-sized computer like a Jupiter brain would be able to execute 10⁴² operations per second. It would be able to simulate all human brain activity ever, all the consciousnesses of all the people who ever lived, “by using less than one millionth of its processing power for one second,” writes Bostrom.
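Bostrom's “one millionth for one second” claim follows directly from those exponents, as a quick check confirms:

```python
TOTAL_HUMAN_OPS = 1e36       # Bostrom's upper estimate: all human brain activity ever
JUPITER_OPS_PER_SEC = 1e42   # Jupiter-brain throughput

# Performing 1e36 operations within one second requires 1e36 ops/s of
# capacity -- what fraction of the whole machine is that?
fraction = TOTAL_HUMAN_OPS / JUPITER_OPS_PER_SEC
print(f"fraction of capacity needed for one second: {fraction:.0e}")  # 1e-06
```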
Certainly, these technologies and their implications are highly speculative at this point, but visualizing the futuristic gadgetry is one step in making it real eventually, as has happened with other tech developments. If we can imagine it, well, perhaps we can build it.