15 Most Significant Milestones in the History of the Computer

While computers seem like the quintessential invention of the modern age, the history of the computer goes back to some of the earliest standing monuments of our species.
John Loeffler

When you think of a computer, you no doubt think of a screen and a keyboard, or a touchscreen tablet, or maybe a supercomputer taking up the entire floor of some major laboratory somewhere, but the idea of the computer in history goes back to some of the most ancient monuments crafted by human hands.

From Stonehenge to the IBM Q System One, the purpose of these things has remained the same at their core: to relieve the human mind of the tedious task of repetitive mental calculation. And ever since civilization first arrived on the scene, computers have come with it.


Not every major advance in computer technology was a machine, however. Just as important, if not more so, were several major innovations in human abstract reasoning: things like recording figures in wet clay to free up mental space for more advanced operations, or the realization that mathematical computations can be chained together to accomplish far more complicated computational tasks, so that the result is greater than the sum of the sums and differences of the parts. Without human reasoning, computers are little more than unproductive paperweights.

Stonehenge: the World's First Computer?

Stonehenge
Source: Stonehenge Stone Circle / Flickr

When you think about the world's first computer, it's doubtful that Stonehenge is the first thing that comes to mind, but you need to remember what a computer is. All a computer does is take an input and produce a predictable output based on a given condition or state. By that definition, Stonehenge absolutely qualifies as a computer.

An analysis of the orientation of the stones at Stonehenge and the astronomical alignments that would have been visible around the time of Stonehenge's construction reveals that the different stones line up and appear to track major celestial bodies that would have been known to the humans who built it. These include the major, visible celestial bodies that dominate the astrologies of the world, such as the sun, the moon, and the five visible planets, Mercury, Venus, Mars, Jupiter, and Saturn.

Our ancestors, as well as many modern humans, obsessively charted the course of celestial bodies that they believed had a direct effect on events on Earth and in their lives, and they planned their lives around them.

If a celestial body is an input and the season of year or a specific length of time is the state or condition of the 'computer', then the sun, the moon, and other bodies would line up and traverse the stones at Stonehenge in predictable ways. As a form of computation, these alignments would tell the humans of neolithic Wiltshire when it was time to plant crops or when to go to war. It might not be an Excel spreadsheet, but it's not that much different fundamentally.

There's Something about Sixty: Sumerian Cuneiform and Numerology

Sumerian Cuneiform
Source: Wikimedia Commons

The ancient Sumerians of Mesopotamia were almost certainly not the first people to develop a writing system for recording figures and data, but theirs is one of the oldest systems to survive to the present day, and it remains significant for its relative sophistication given its age.

'Written' by pressing a wedged stylus into a tablet of wet clay, Sumerian cuneiform allowed merchants and administrators to offload enormous amounts of data onto a physical storage device that could be referenced when necessary. This let humans work with and process larger sets of numbers and data--and make more complicated computations--than human memory could hold at any one time.

This allowed for much more complicated mathematics to develop, such as the sexagesimal (base 60) number system that we still use today to measure smaller units of time. The number sixty is also special in that it is highly divisible and is loaded with a whole lot of ancient numerological significance.

According to the Engineering and Technology History Wiki:

The product of 12 and 30 is 360, the number of degrees in a circle; did the Sumerians define the 360 degree circle? Probably, because dividing the Zodiac into 360 degrees means Jupiter traverses 30 degrees in a year and Saturn 12 degrees; thereby coupling the periods of the gods Jupiter and Saturn.

The Sun tracks through the Zodiac in one year. Jupiter would track 1/12 of the way in that time. Why not divide a year into 12ths, i.e., 12 months; then the Sun tracks the same distance in one month that Jupiter tracks in one year; thereby coupling the periods of Jupiter and the Sun. And since the Sun would then track 30 degrees along the Zodiac in a month, why not divide the month into about 30 days, the period of Saturn? Then the Sun tracks about 1 degree every day. Of course the Sumerians knew that a year is actually 365 days simply by watching the sun track through the Zodiac, so maybe they just added a 5 day Holiday (like the Egyptians).

A geometric argument may also have contributed to the development of base 60. The Pythagorean Theorem was well known in ancient Mesopotamia; i.e., the square of the longest side of a right triangle is equal to the sum of the squares of the two shorter sides. The most famous and useful right triangle is the 3-4-5 right triangle; also known to very ancient peoples. The product of those three numbers is, you guessed it, 60.
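
The arithmetic behind these coincidences is easy to check for yourself. Here is a minimal sketch (my own illustration, not drawn from the Wiki) of why 60 makes such a convenient base--it divides evenly far more ways than even larger round numbers--along with the two products mentioned above:

```python
# A quick check of the arithmetic above: 60 has more divisors than 100,
# which is what made it so convenient for splitting up time and angles.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(len(divisors(60)), divisors(60))  # 12 divisors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60
print(len(divisors(100)))               # only 9 divisors, despite being larger
print(12 * 30, 3 * 4 * 5)               # -> 360 and 60, as noted in the quote
```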

Why is the Sumerian mathematical system significant? By giving humanity a quantifiable way to chart the movement of the celestial bodies that governed their lives, the Sumerian system eliminated the need for standing stones and other physical landmarks. With their numbering system, the computations that once demanded the uncountable man-hours of labor needed to build Stonehenge could be done with simple math on a tablet, or even in one's head.

And thanks to cuneiform, they wouldn't need to remember how many days had passed since the solstice; they could simply write it down and return to it later when that information needed to be recalled.

The Antikythera Mechanism

Antikythera Mechanism
Source: Peulle / Wikimedia Commons 

Easily the most famous ancient computer of them all, the Antikythera Mechanism was discovered over a century ago in a 2,000-year-old shipwreck off the coast of the Greek island of Antikythera. Known from the start to be some form of advanced automaton, it wasn't until 1959 that Princeton historian Derek J. de Solla Price theorized that this mysterious device was used to--you guessed it--track the positions of the celestial bodies in the night sky.

Given that maritime navigation has historically relied on the position of the stars in the sky, if you find a funky, complicated device on an ancient ship, odds are pretty good it had something to do with the sky. It wasn't until half a century later, however, that imaging technology advanced enough for researchers to get a true understanding of just how intricate the Antikythera Mechanism actually was.

Antikythera Mechanism Gears
Source: Freeth, et al. / Nature

Yes, it tracked the celestial bodies in the night sky, but the precision with which it did so is so advanced that researchers still aren't sure how the Greeks were able to create it. As the main gear of the Antikythera Mechanism cycled through the calendar dates of the year, more than two dozen smaller gears turned to compute all sorts of astronomical data, such as the positions of the sun and moon against the zodiac and even whether a lunar eclipse was going to occur.

The Antikythera Mechanism is so advanced, in fact, that it would take a little more than a millennium and a half before such an advanced device was seen in Europe in the 1600s, and nothing else like it has ever been found dating to that era, making the mystery of the Antikythera Mechanism all the more intriguing.

The Roman Abacus and Chinese Suan Pan 

Roman and Chinese Abacus

While the Antikythera Mechanism was corroding away at the bottom of the Mediterranean, Europe and Asia were stuck doing their math on independently developed abacuses--the Roman abacus in the West and the Suan Pan in China. Don't let these simple computers fool you, though; the human minds who used them found them invaluable.

China constructed the Great Wall using a variety of tools, but the Suan Pan would have been in daily use by the engineers and planners who oversaw the wall's construction. Meanwhile, ancient Roman artillerymen used their abacuses to calculate the flight of stones hurled from catapults against the walls of enemy cities more than a thousand years before the math governing that flight was worked out by Newton and Leibniz. Don't knock the abacus.

The Pascaline Calculator

Pascal's Calculator Pascaline
Source: David Monniaux / Wikimedia Commons

When the renowned mathematician and inventor Blaise Pascal invented his mechanical calculator in 1642, he wasn't the first to have done so--that honor goes to Wilhelm Schickard, who invented his mechanical adder in 1623. While Schickard's work is recognized as the first mechanical calculator to perform arithmetic operations like adding and subtracting, it wasn't terribly sophisticated and had several issues that caused Schickard to abandon the effort altogether before his death.

Blaise Pascal, however, not only managed to succeed where Schickard struggled; his mechanical adder and subtractor--which could also perform multiplication and division through repeated additions and subtractions--was the forerunner of the computer as we understand it today.

Charles Babbage's Difference and Analytical Engines

Babbage Difference Engine
Source: geni / Wikimedia Commons

Mechanical adders proliferated throughout Europe in the 17th and 18th centuries, but Charles Babbage's Engines are widely considered the first mechanical computers as we understand them today, even though they were never fully built in his lifetime.

What made the Difference Engine, well, different from Pascal's Pascalines wasn't just its massive, steampunk-inspiring mechanical design. What made it remarkable was that it would automatically calculate mathematical tables based on input, operating much more like a modern computer than anything that came before it.

It was his Analytical Engine, however, that truly reached toward the modern computer age. Programmed with punched cards, the Analytical Engine could be configured to fit the needs of the user and was capable of evaluating polynomial expressions, something no simple adder could accomplish. And since geometric and trigonometric functions can be approximated in polynomial form, the Analytical Engine could carry out incredibly complicated computations automatically.

Ada Lovelace Writes the First Program

Ada Lovelace and Her Program

We can't talk about Babbage's Analytical Engine without talking about Ada Lovelace. Formally Ada King, Countess of Lovelace, she was the only legitimate child of Lord Byron, the Romantic-era poet, adventure-seeker, and ne'er-do-well who died of illness while fighting in the Greek War of Independence in the early 19th century.

Never knowing her father beyond his reputation--he died when Lovelace was only eight years old and had left the family when Lovelace was still an infant--Lovelace became acquainted with Charles Babbage and took an intense interest in his Engines when not many others did.

While translating into English an article about Babbage's Analytical Engine, written in French by the Italian mathematician and politician Luigi Menabrea, Lovelace added copious notes of her own explaining the machine's workings and its potential beyond simply calculating figures and tables.

An incredibly brilliant woman, Lovelace saw in the Analytical Engine what Babbage's contemporaries missed. To show the machine's potential, she wrote up a detailed algorithm that would generate the sequence of Bernoulli numbers on Babbage's Analytical Engine, if it were ever built. It is considered to be the first computer program ever written, even though it would take about a century before her contribution to the history of computer science was widely recognized.
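
Lovelace's actual program was laid out as a table of operations for the Engine's 'mill' and 'store,' not as code in any modern sense. Purely to give a flavor of the computation her program targeted, here is a short modern sketch--the standard Bernoulli-number recurrence, not her algorithm or her numbering convention:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n using the standard recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        total = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-1, m + 1) * total)
    return B

print([str(b) for b in bernoulli(6)])
# -> ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```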

Alan Turing's Universal Computing Machine

Alan Turing
Source: Wikimedia Commons

The theoretical foundation of the modern digital computer began as a mathematical thought experiment by Alan Turing while he was finishing his studies at Cambridge. Published in 1936, On Computable Numbers was an instant classic of theoretical mathematics for its brilliant treatment of a seemingly impossible problem--the Entscheidungsproblem, which, in short, asks whether there is a definite procedure that can decide every mathematical question that can be expressed symbolically.

To answer this question, Turing conceived of a hypothetical 'Universal Machine' that could compute any number that can be produced through mathematical operations like addition and subtraction, finding derivatives and integrals, using mathematical functions such as those in geometry and trigonometry, and the like. In theory, if a problem can be expressed symbolically, a Universal Machine should be able to compute a definite result.

What Turing discovered, however, was that various processes applied to these 'computable numbers' could define numbers that his Universal Machine could not compute--'uncomputable numbers.'

If his Universal Machine could carry out every possible mathematical and logical operation, even ones we do not yet know about, and still could not arrive at one of these uncomputable numbers--even if only one such number existed--then mathematics was undecidable; there were simply some things beyond the reach of mathematics to describe.

While this proof alone puts Turing in the upper tier of mathematical minds in human history, Turing quickly saw that his theoretical Universal Machine was much, much more than just a thought experiment.

Alan Turing conceived of his Universal Machine--which everyone immediately started calling a Turing machine, and so will we--as mirroring the way the human mind computes a number.


When you perform a mathematical operation in your mind, you start with an operand--a number, an algebraic term, whatever--and in your mind, you perform an operation by bringing in a second operand and produce a result. That result then replaces these two operands in your mind. So if you start with the number 4--the first operand--and you decide to add--the operation--the number 3--the second operand, you get the result, which is 7. This 7 replaces the 4, the 3, and the addition operation in your mind. You repeat this process as long as there is another operand and an operation to combine the two. Once you have only a single operand left, you're finished.
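
In code, that running replacement of operands looks something like the following minimal sketch (my own illustration of the process just described, not anything from Turing's paper):

```python
def evaluate(first_operand, steps):
    # The running result stands in for the 'state of mind' described above:
    # at each step it replaces the two operands and the operation.
    result = first_operand
    for operation, operand in steps:
        if operation == '+':
            result = result + operand
        elif operation == '-':
            result = result - operand
    return result

print(evaluate(4, [('+', 3)]))  # -> 7, as in the example above
```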

This is how math is done, on paper, in your head, wherever. What Turing was able to intuit, however, was that what is actually happening is that your mind--or the variable on the page, etc.--is changing its state with every operation, with the new state being the new operand produced by the operation you just performed.

Why this was such a monumental leap is that Turing's machine wasn't modeled on the mathematical mechanisms of earlier mechanical calculators; it was modeled on the way the human mind thinks. We're no longer talking about calculating tables of figures the way Babbage's Engines did--Turing's machine could represent anything that could be expressed symbolically and that was governed by a clearly defined rule.

For example, if your Turing machine's initial state is a circle and the machine reads in a triangle as the next symbol of input, the state must change to a square; if it reads in a square instead, it must change its state to a hexagon. These rules aren't just academic; it's how human beings make decisions.

In the real world, if your initial state in the morning is that you are about to leave the house, you look outside before you leave. If it's raining, you change your state to the one where you take an umbrella. If it's warm and sunny, you change your state instead to the one where you don't take your heavy coat.
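
Rules like these can be written down as a plain lookup from the current state and the incoming symbol to the next state. The sketch below is my own illustration, borrowing the symbols from the two examples above:

```python
# State-transition rules in the spirit of the examples above: the pair
# (current state, input symbol) determines the next state.
rules = {
    ('circle', 'triangle'): 'square',
    ('circle', 'square'): 'hexagon',
    ('about to leave', 'raining'): 'take umbrella',
    ('about to leave', 'warm and sunny'): 'skip the heavy coat',
}

def step(state, symbol):
    return rules[(state, symbol)]

print(step('circle', 'triangle'))         # -> 'square'
print(step('about to leave', 'raining'))  # -> 'take umbrella'
```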

This kind of decision-making process could be reproduced symbolically on a Turing machine, and it can't be overstated how revolutionary this leap was. Alan Turing invented a machine that could think. In theory, the modern digital computer was born.

John Von Neumann and the Stored-Program Concept

John Von Neumann
Source: Los Alamos National Laboratory / Wikimedia Commons

The accomplishments of John Von Neumann are too numerous to list. One of the greatest mathematicians in history, Von Neumann is probably most famous for his work on the Manhattan Project during the Second World War and for the more than 100 academic papers he published in his lifetime, in fields ranging from theoretical and applied mathematics to quantum mechanics to economics.

Von Neumann's major mark on the history of the computer would come shortly after the Second World War. Along with Turing and mathematician Claude Shannon, Von Neumann conceptualized the idea of a computer that did not need to be fed tapes of input to operate.

Known as the stored-program concept, they explored how the instructions carried out by a computer program could be retained by the computer, rather than simply fed into it every time the computer ran the program. If you imagine having to reinstall the operating system on your computer every time you wanted to use it, you can quickly see the problem with the first production digital computers that these men were trying to solve.

Though he wasn't alone in coming up with the idea, it would be Von Neumann who would lay the actual groundwork for the stored-program concept, which is currently the operational foundation of every modern computer in existence.

Having developed close ties to the American military during the Manhattan Project, Von Neumann was able to modify the US Army's rigid, hard-wired ENIAC computer into a stored-program machine. Afterward, he won approval to develop a new and improved computer at the Institute for Advanced Study, one of the first modern, binary arithmetic computer systems. Importantly, it implemented the stored-program concept with the innovative twist of using the same memory space for instructions as well as the data used by the program.

This allowed for the more sophisticated conditional instruction branching that is one of the major defining elements of software.
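
To make the idea concrete, here is a toy sketch (my own illustration, not Von Neumann's actual design or instruction set) of a stored-program machine: instructions and data live in the same memory, and a conditional jump lets the running program branch:

```python
# Toy stored-program machine: one memory array holds both instructions
# (as tuples) and data (as plain numbers). A conditional jump branches.
memory = [
    ("LOAD", 7),          # 0: load the value at address 7
    ("SUB", 8),           # 1: subtract the value at address 8
    ("JUMP_IF_ZERO", 5),  # 2: if the result is zero, jump to address 5
    ("STORE", 9),         # 3: otherwise store the result at address 9
    ("HALT", None),       # 4
    ("STORE", 10),        # 5: branch target for the zero case
    ("HALT", None),       # 6
    5,                    # 7: data
    5,                    # 8: data
    0,                    # 9: result slot (non-zero case)
    0,                    # 10: result slot (zero case)
]

accumulator, pc = 0, 0
while True:
    op, arg = memory[pc]          # fetch the next instruction from memory
    if op == "LOAD":
        accumulator = memory[arg]
    elif op == "SUB":
        accumulator -= memory[arg]
    elif op == "JUMP_IF_ZERO":
        if accumulator == 0:
            pc = arg
            continue
    elif op == "STORE":
        memory[arg] = accumulator
    elif op == "HALT":
        break
    pc += 1

print(memory[10])  # -> 0: the zero branch was taken, since 5 - 5 == 0
```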

UNIVAC: The First Major Commercial Computer

UNIVAC
Source: Lockheed Aircraft Corporation | US Army / Wikimedia Commons

While Turing and Von Neumann were laying the theoretical and operational foundations of the modern computer, the Eckert–Mauchly Computer Corporation (EMCC) started building machines that put these theories into rudimentary practice. Founded by the creators of the ENIAC, J. Presper Eckert and John Mauchly, EMCC built an early general-purpose electronic computer, the BINAC, for the Northrop Aircraft Company in 1949. The first commercial computer in the world to incorporate Von Neumann's stored-program paradigm, the BINAC soon fell by the wayside as Eckert and Mauchly began work on their most important machine, the UNIVAC.

With 1950 being a census year in the United States, the US Bureau of the Census funded much of the development of the UNIVAC to assist with the upcoming decennial count. Around the same time, EMCC's chairman and major source of funding, Harry L. Strauss, died in a plane crash in the fall of 1949. EMCC was sold to the Remington Rand company in 1950, and Remington Rand's name has been associated with the UNIVAC ever since.

While developed for the Census, the UNIVAC could be put to any general-purpose business or scientific use and was marketed as such by Remington Rand. In 1952, Remington Rand approached CBS News and offered to let them use the new UNIVAC I mainframe computer to count the early returns for the upcoming presidential election. Though skeptical, CBS News chief Sig Mickelson took Remington Rand up on the offer, even if just for the novelty of seeing this new-fangled machine try to out-think the human mathematicians CBS used to project election returns.

Around 8:30 PM on election night, a UNIVAC I mainframe computer in Philadelphia, connected to CBS studios in New York via teletype and relying on past election results and early return numbers, made a prediction. The UNIVAC I computed that the Republican candidate, General Dwight D. Eisenhower, Supreme Commander of Allied Forces in Europe during the Second World War, was going to bury the Democratic candidate, Illinois Governor Adlai Stevenson, by a 345-electoral-vote landslide.

The UNIVAC I was predicting Eisenhower pulling in 438 electoral college votes to Stevenson's 93 electoral college votes, a prediction that no one at CBS believed was possible. The most recent polls showed a tight race, if not an outright win for Stevenson, so Mickelson was convinced that the UNIVAC I prediction was junk and told the news team not to air the prediction.

Rather than broadcast the UNIVAC I's actual prediction, CBS aired a completely fabricated one, giving Eisenhower 8-to-7 odds in his favor of winning the presidency. The UNIVAC was actually predicting 100-to-1 odds that Eisenhower would receive at least 266 electoral college votes, the number needed to win the election. Even as new data came in, the UNIVAC I never wavered: Eisenhower's victory was all but guaranteed, and it would be overwhelming.

As the night wore on, returns came back that began to verify the UNIVAC I's assessment. By the late evening, the Eisenhower landslide was undeniable. The final electoral college vote had Eisenhower receiving 442 votes and Stevenson receiving only 89 votes. The UNIVAC I called the election hours earlier within a single percentage point, and the worst that could be said of it was that it was too generous to Stevenson.

CBS News correspondent Charles Collingwood, who had relayed the false UNIVAC I prediction to viewers, had to go back on the air and confess to audiences that the UNIVAC I had actually called the election correctly earlier in the evening and that CBS hadn't aired the prediction because they didn't believe it.

You couldn't buy this kind of advertising if you were Remington Rand. The stakes couldn't have been higher, and failure would have been disastrous, but the UNIVAC I proved itself before a national audience in real time, and did so in spectacular fashion. After 1952, no one could deny that these new computers were something entirely different from the fancy mechanical calculators people assumed they were--and that they were orders of magnitude more powerful.

The Transistor: Mankind's Greatest Invention

Transistor
A transistor etched into a silicon chip         Source: Richstracka / Wikimedia Commons

The election of 1952 aside, the UNIVAC wasn't without its problems. First, it took up an entire floor of most office buildings and used thousands of glass vacuum tubes to run a program. If a single tube blew out, the entire computer would grind to a halt until the tube was replaced. It also radiated heat like a furnace, making it all the more likely to blow out vacuum tubes seemingly at random.

Five years before the UNIVAC I made its national debut during the 1952 presidential election, William Shockley, John Bardeen, and Walter Brattain of American Telephone & Telegraph's Bell Laboratories (Bell Labs) constructed the first working transistor, marking possibly the most significant development in human technology since humanity learned to wield fire.

While Bardeen and Brattain are credited as co-inventors of the transistor, it was Shockley who had worked on the theoretical design of the transistor over the preceding decade. Annoyed at having to share credit with the engineers who more or less built the first transistor off the work he had already done, Shockley developed an improved transistor design and successfully built it himself. Since that transistor supplanted the one built by Bardeen and Brattain, we can fairly credit Shockley as being the creator of the transistors we use today.

These transistors were significantly smaller than the vacuum tubes used in the UNIVAC, used far less energy, and produced less heat as a result. Because of this, they didn't fail nearly as often as vacuum tubes did, so manufacturers ditched the tubes and went all-in on the transistor.

Beginning in 1958, Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor independently invented the integrated circuit, the crucial step that helped computers achieve a meteoric technological lift-off. By etching entire circuits of transistors onto a thin chip of silicon, engineers were able to make transistors progressively smaller, making each new generation of computer processor dramatically faster than the one that came before. This rate of progress--roughly a doubling of the number of transistors on a chip every two years, known as Moore's Law--held for the next fifty years and transformed human civilization in the process.
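
To get a feel for what that compounding means, here is the back-of-the-envelope arithmetic, assuming the commonly cited doubling of transistor counts roughly every two years:

```python
# Rough arithmetic behind the claim above: one doubling every two years,
# compounded over fifty years.
years = 50
doublings = years // 2
print(2 ** doublings)  # -> 33554432: a roughly 33-million-fold increase
```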

Grace Hopper Creates COBOL, a Programmer's Programming Language

Grace Hopper
Source: Smithsonian Institute, via Grantland

All of this new processing power was useless without a way to harness it. Assembly language--the human-readable form of the machine-level instructions read in by the CPU--is unwieldy, to say the least, and you can forget about programming directly in ones and zeros. Something more was needed to give engineers and programmers a more efficient and accessible means of programming these newly empowered computer systems.

Enter Grace Hopper. Entire books have been written about her and her work, and her various accomplishments in the field of computer science are worthy of articles in and of themselves. But one of her most important contributions to the history of the computer is the Common Business-Oriented Language, COBOL.

COBOL was the first high-level programming language developed with someone other than a mathematician in mind. According to Techopedia:

The traditional COBOL specification had a number of advantages over the other languages in that it encouraged straight-forward coding style. For example, no pointers, user defined types or user defined functions.

COBOL language programs are highly portable since they do not belong to a particular vendor. They can be used in a wide variety of hardware and software and supports most of the existing operating systems such as Windows, Linux, Unix etc. It is a self documented language. Any person with a good English grammar can read and understand a COBOL program. The self documenting nature of COBOL helps to maintain synchronization between program code and documentation. Thus easy maintainability is achieved with COBOL.

Hopper's development of COBOL has earned her the title of 'Queen of Code' in the field of computer science and engineering. COBOL drove a wedge between mathematics and computer programming, laying the groundwork for dedicated computer programmers who didn't need a doctorate in applied mathematics to write a for-loop or an if-else statement. Every major programming language currently in use owes a debt to Grace Hopper's COBOL, and COBOL code is still running on systems around the world, powering administrative systems, financial markets, and more.

The Apple II, the World's First Personal Computer

Apple II
The original Apple II personal computer      Source: Rama / Wikimedia Commons

When Steve Jobs and Steve Wozniak created the Apple II, there were two kinds of people who used computers: professionals in business, government, and academia senior enough to be trusted with the outrageously expensive mainframe systems that still filled entire rooms, and hobbyist engineers tinkering with microprocessors to see if they could make them draw a circle on a screen.

Jobs and Wozniak straddled the line between these two camps, and their creation of the Apple II computer was a watershed moment in the history of the computer. The Apple II, more than any other computer, brought computing to the consumer market and we as a society have never been the same.

The Internet Connects the World

The Internet
A map of all the network connections that make up the Internet.     Source: The Opte Project, via PRI.org

And then there was the Internet. The introduction of the Internet into our daily lives starting in the 1990s took the world and made it local in ways no other technology had before. The ability to communicate with someone anywhere in the world with an internet connection--often nearly instantaneously--has transformed business, education, and culture in radical ways.

On a global level, the cultural exchange made possible by the Internet has fostered a sense of solidarity and common humanity between diverse peoples and cultures that wouldn't have been possible before. It hasn't always gone smoothly, but the potential for the Internet to be the thread that binds humankind together across previously uncrossable divides grows more potent with each passing year.

The Quantum Computer

IBM Q System One
Source: IBM

A lot of digital ink has been spent writing about the potential of the quantum computer. Of all the major milestones in the history of the computer, quantum computing is the first one that we can see coming before it hits.

Granted, none of us knows exactly what is on the other side of quantum supremacy--the moment when quantum computers start to outperform the best classical computers at tasks like simulating quantum systems. But there are people alive today who came of age before the publication of On Computable Numbers and experienced the entire modern computer revolution from start to present, and they can testify to the radical transformation they've witnessed.

We know what this kind of transformational change can look like and we're only around the Analytical Engine stage of quantum computer development right now. The entire future of quantum computing is as unknowable as the Internet was to Charles Babbage and Ada Lovelace, but there's every reason to believe that human advancement will accelerate even more dramatically going forward.