**Algorithms** are used by all of us all the time, with or without our direct knowledge. They have applications in many different disciplines, from **math** and **physics** to, of course, **computing**.

It turns out **algorithms** have a long and illustrious history stretching back as far as ancient Mesopotamian times. But which of these complex calculation processes could be considered some of the most important?

These are 15 of the most likely candidates.


## A brief history of algorithms

Algorithms are pre-defined, self-contained sets of instructions designed to execute diverse functions, and they have been around for longer than you might expect. From ancient Babylon to the present day, algorithms have been an important feature of our society for millennia.

The very first examples were simple algorithms used by the ancients to track their grain and livestock, among other things. In their wake, and with the advent of a formalized numerical system, other technological and conceptual leaps were achieved, including the invention of the abacus, algebra, and the concept of variables.

Ancient Greek thinkers like Euclid, Archimedes, and Eratosthenes would use early algorithms to do things like determine the greatest common divisor of different numbers, approximate Pi, and calculate prime numbers.

Over time, such feats would give rise to symbols and rules involved in formulating evaluation systems.

The term “algorithm” itself is believed to have its origins with the 9th-century Persian astronomer and mathematician, Muhammad ibn Mūsā al-Khwārizmī. This man is widely viewed as the person who first introduced decimal positioning within the numerical system of the Western world.

His surname, Latinized as *Algorithmi*, would become synonymous with instructions for performing computations to execute tasks.

He is also credited with developing the first-ever system for solving linear and quadratic equations. The original, albeit rudimentary forms of algorithms, called *algorisms*, were regarded as rules for performing arithmetic calculations with Hindu-Arabic numerals.

The next big leap forward in the history of algorithms came in the 1800s with the work of the great George Boole. Many others pushed algorithms forward in the 19th and 20th centuries, including Giuseppe Peano and Ada Lovelace, to name but a few.

But algorithms would get a major upgrade with the work of Emil Post and Alan Turing in the 1930s that would ultimately give rise to the modern computer. Nothing would be the same again.

## What are some of the most important algorithms?

And so, without further ado, here are some examples of the most important algorithms of all time. The list includes ancient examples as well as some of the most groundbreaking computer science algorithms and programming algorithms in history.

The following list of algorithms is far from exhaustive and is in no particular order.

## 1. Babylonian algorithms are the oldest ever found

**Creator:** Unknown

**When it was created:** circa 1600 BC

**Its impact/implications on the world:** The world's first known algorithm

Although there is some evidence of early multiplication algorithms in Egypt (around **2000-1700 BC**), the oldest written algorithm is widely accepted to have been found on a set of Babylonian clay tablets that date to around **1800-1600 BC**.

Their true significance only came to light around **1972**, when computer scientist and mathematician Donald E. Knuth published the first English translations of various cuneiform mathematical tablets.

Here are some excerpts from his **1972** manuscript that explain these early algorithms:

"The calculations described in Babylonian tablets are not merely the solutions to specific individual problems; they are actually general procedures for solving a whole class of problems." - Pages 672 to 673 of "Ancient Babylonian Algorithms".

The tablets also appear to have been an early form of instruction manual:

"Note also the stereotyped ending, 'This is the procedure,' which is commonly found at the end of each section on a table. Thus the Babylonian procedures are genuine algorithms, and we can commend the Babylonians for developing a nice way to explain an algorithm by example as the algorithm itself was being defined...." - Pages 672 to 673 of "Ancient Babylonian Algorithms".

## 2. Euclid's algorithm is still in use today

**Creator:** Euclid

**When it was created:** 300 BC

**Its impact/implications on the world:** Euclid's algorithm is one of the earliest algorithms ever created and, with some alterations, is still used by computers today.

The Euclidean algorithm is a procedure used to find the greatest common divisor (GCD) of two positive integers. It was first described by Euclid in his manuscript *Elements*, written around **300 BC**.

It is a very efficient computation that is still used today by computers in some form or other.

Euclid's algorithm requires the successive division and calculation of remainders until the result is reached. It is best described by use of an example:

What is the GCD of 56 and 12?

Step 1 - Divide the larger by the smaller number:

56 / 12 = 4 (remainder 8)

Step 2 - Divide the divisor by the remainder from the previous step:

12 / 8 = 1 (remainder 4)

Step 3 - Repeat step 2 until there is no remainder (in this case, it's a simple three-step process):

8 / 4 = 2 (no remainder)

In this case, the GCD is 4.

This algorithm is widely used for reducing common fractions to their lowest terms and in advanced mathematics applications such as finding integer solutions to linear equations.
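As a minimal sketch, the remainder-based form of Euclid's algorithm fits in a few lines of Python:

```python
def gcd(a, b):
    # Repeatedly replace (a, b) with (b, a mod b); when the remainder
    # hits zero, the last non-zero value is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

print(gcd(56, 12))  # → 4
```

Note that the order of the arguments doesn't matter: if the smaller number comes first, the first iteration simply swaps them.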

## 3. The sieve of Eratosthenes is an ancient, simple algorithm

**Creator:** Eratosthenes

**When it was created:** 200 BC

**Its impact/implications on the world:** The sieve of Eratosthenes is widely accepted as one of the most ancient algorithms of all time. It allows you to find all the prime numbers in a table of given numbers (as many as you want to include).

To perform it, you list the numbers from 2 up to your chosen limit, keep 2, and cross out every larger number divisible by 2. You then repeat the process for the next number that has not been crossed out (3), and so on, until every composite (non-prime) number has been crossed out. The numbers that remain are the primes.
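The crossing-out procedure described above translates almost directly into Python:

```python
def sieve(limit):
    # Assume every number from 2..limit is prime, then cross out multiples.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # n survived, so it is prime; cross out its multiples starting
            # at n*n (smaller multiples were handled by smaller primes).
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```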

## 4. Boolean (binary) algebra was the foundation of the Information Age

**Creator:** George Boole

**When it was created:** 1847

**Its impact/implications on the world:** This algebra is widely recognized as the foundation of modern computer coding. It is still in use today, especially in computer circuitry.

You might be familiar with the term Boolean from mathematics, logic, and computer coding.

Boolean algebra is a branch of algebra in which a variable can only ever be true or false - so-called truth values (usually represented as binary 1 or 0). It was first set out by George Boole in his **1847** work *The Mathematical Analysis of Logic* and developed more fully in his **1854** book *An Investigation of the Laws of Thought*.

Boolean algebra is widely credited as being the foundation for the Information Age.
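A tiny demonstration using Python's built-in booleans: De Morgan's laws, two classic identities of Boolean algebra, can be verified exhaustively because each variable has only two possible values.

```python
# Exhaustively verify De Morgan's laws over every truth assignment:
#   not (a and b) == (not a) or (not b)
#   not (a or b)  == (not a) and (not b)
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
print("De Morgan's laws hold for all inputs")
```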

## 5. Ada Lovelace's algorithm was the first computer program

**Creator:** Ada Lovelace

**When it was created:** 1842

**Its impact/implications on the world:** This algorithm is widely recognized as the first computer program.

Ada Lovelace spent the best part of a year translating into English an account of one of Charles Babbage's lectures (which had been written up in French by the Italian engineer Luigi Menabrea). During this process, she dutifully added extensive explanatory notes of her own.

Her notes were labeled A - G, with the latter describing an algorithm for an “analytical engine” to compute Bernoulli numbers. Note G is now widely accepted as being the first recorded example of computer code - making her the first-ever computer programmer.

The engine was never built, so her algorithm was never tested during her lifetime. Her work was rediscovered in **1953**, when her notes were republished.
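For a sense of what Note G computed: Bernoulli numbers can be generated with a standard recurrence. This is an illustrative modern sketch (using the convention in which B₁ = -1/2), not a transcription of Lovelace's actual program.

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    # B_0 = 1; thereafter B_n = -1/(n+1) * sum_{j=0}^{n-1} C(n+1, j) * B_j.
    # Fraction keeps the arithmetic exact.
    B = [Fraction(1)]
    for n in range(1, m + 1):
        s = sum(comb(n + 1, j) * B[j] for j in range(n))
        B.append(-s / (n + 1))
    return B[m]

print(bernoulli(2), bernoulli(4))  # → 1/6 -1/30
```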

## 6. Fast Fourier Transform Breaks Down Signals Into Frequencies

**Creator:** Carl Gauss, Joseph Fourier, James Cooley, and John Tukey

**When it was created:** 1802, 1822, and 1965

**Its impact/implications on the world:** This algorithm is used to break down a signal into the frequencies that compose it - much like a musical chord can be expressed as the frequencies, or pitches, of each note within it.

The fast Fourier transform (FFT) algorithm traces its origins to Carl Gauss, who first devised it to calculate the trajectories of asteroids. The formula was later expanded on by Joseph Fourier in **1822**.

But the more modern and widely used form of the algorithm was created and published by James Cooley and John Tukey in **1965**. Their version is arguably the most far-reaching algorithm in applied mathematics, and it revolutionized digital signal processing.

"FFT relies on a divide-and-conquer strategy to reduce an ostensibly O(N2) chore to an O(N log N) frolic. But unlike Quicksort, the implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms." - Barry A. Cipra.

## 7. Google's ranking algorithm (PageRank) could be the most widely used algorithm

**Creator:** Larry Page (mainly) and Sergey Brin

**When it was created:** 1996

**Its impact/implications on the world:** PageRank is, arguably, the most used algorithm in the world today. It is, of course, the foundation of the ranking of pages on Google's search engine.

Interestingly, PageRank is named after Larry Page rather than being a literal reference to ranking web pages. It is not the only algorithm that Google uses nowadays to order its search results, but it is the oldest and best known of them.

PageRank was devised by Larry Page and Sergey Brin while they were studying at Stanford University as part of their research project on a new kind of search engine.

The main premise was to rank pages based on their relative importance or popularity. Generally speaking, pages that appear higher in the hierarchy have more back-links or links to them.

The PageRank algorithm is given by the following formula:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

where:

- PR(A) is the PageRank of page A,
- PR(Ti) is the PageRank of pages Ti which link to page A,
- C(Ti) is the number of outbound links on page Ti and;
- d is a damping factor which can be set between 0 and 1.

We can see that this algorithm doesn't rank websites as a whole but ranks each page individually. It can also be likened to a “pyramid scheme” in which a particular page is ranked recursively depending on which other pages link to it.
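The formula above can be applied iteratively until the scores settle. A small sketch in Python, using a hypothetical three-page link graph (the graph itself is made up for illustration):

```python
def pagerank(links, d=0.85, iterations=50):
    # links maps each page to the list of pages it links out to.
    pages = list(links)
    pr = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # PR(A) = (1 - d) + d * sum over pages T linking to A of PR(T)/C(T)
        pr = {p: (1 - d) + d * sum(pr[q] / len(links[q])
                                   for q in pages if p in links[q])
              for p in pages}
    return pr

# Hypothetical graph: A links to B and C, B links to C, C links back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print(ranks)  # C scores highest: it has the most back-links
```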

PageRank has fallen out of favor in recent years but is still used as part of a general suite of other algorithms at Google.

## 8. Monte Carlo method (Metropolis Algorithm) was used at Los Alamos

**Creator:** John von Neumann, Stan Ulam, and Nick Metropolis

**When it was created:** 1946

**Its impact/implications on the world:** Its name was a Los Alamos code word for the stochastic simulations used in the effort to build better atomic bombs after the Second World War.

The Monte Carlo method is defined as follows: "Monte Carlo is the art of approximating an expectation by the sample mean of a function of simulated random variables."

This important algorithm is used to approximate solutions to numerical problems of unmanageable complexity by mimicking a random process. It relies on repeated random sampling to obtain a result - in effect using randomness to solve problems that might be deterministic in principle.

The algorithm is widely used in physical and mathematical problems and is most useful when it is difficult or impossible to use other approaches.

Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
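A classic toy illustration of the method (not the Los Alamos code): estimating π by sampling random points in the unit square and counting how many land inside the quarter circle.

```python
import random

def estimate_pi(samples, seed=0):
    # The fraction of random points (x, y) in the unit square satisfying
    # x² + y² ≤ 1 approximates the quarter-circle area, π/4.
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # ≈ 3.14, improving as the sample count grows
```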

## 9. The simplex method for linear programming was widely adopted by industry

**Creator:** George Dantzig

**When it was created:** 1947

**Its impact/implications on the world:** The simplex method of linear programming is one of the most successful algorithms of all time. It is widely used in industry and in any other situation where economic survival rests on the ability to maximize efficiency within a budget and/or other constraints.

Devised by George Dantzig in 1947, this algorithm has become one of the most widespread in history, despite the fact that most real-world problems are very rarely linear in nature.

It does, however, offer a simple and elegant way of deriving solutions to such problems. Some have noted that it can suffer exponential worst-case running time, but it is otherwise highly efficient in practice - it usually takes 2*m* (where *m* is the number of equality constraints) to 3*m* iterations to complete.

It works by using a systematic strategy to generate and test candidate vertex solutions of a linear program. At each iteration, the algorithm chooses the entering variable that makes the biggest improvement toward the optimal solution.

That variable then replaces the basic variable that most tightly constrains it, thereby moving the simplex method to an adjacent vertex of the solution set and toward the final solution.
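The tableau form of the method can be sketched in Python for problems of the form "maximize c·x subject to Ax ≤ b, x ≥ 0, with b ≥ 0". This is an illustration with no degeneracy or cycling safeguards, not production code:

```python
def simplex(c, A, b):
    # Maximize c·x subject to A·x <= b, x >= 0 (b must be non-negative).
    m, n = len(A), len(c)
    # Tableau: each constraint row gets a slack variable; last row is -c.
    T = [A[i][:] + [1.0 if k == i else 0.0 for k in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-v for v in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))      # slack variables start in the basis
    while True:
        # Entering variable: most negative reduced cost in the objective row.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                      # optimal: no improving direction left
        # Leaving variable: minimum-ratio test keeps the solution feasible.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        row = min(ratios)[1]
        basis[row] = col
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = T[i][-1]
    return x, T[-1][-1]

# Maximize x + y subject to x + 2y <= 4 and 3x + y <= 6:
# the optimum is x = 1.6, y = 1.2 with objective value 2.8.
print(simplex([1, 1], [[1, 2], [3, 1]], [4, 6]))
```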

## 10. Krylov subspace iteration methods are still used today

**Creator:** Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos

**When it was created:** 1950

**Its impact/implications on the world:** Krylov subspace iteration is one of the ten most important classes of numerical methods in the world.

The Krylov subspace iteration methods are a set of algorithms that were developed at the Institute for Numerical Analysis at the National Bureau of Standards in the 1950s. They address the deceptively simple task of solving equations of the form Ax = b.

Of course, it isn't quite as simple as that: A is an enormous n × n matrix, so the algebraic answer x = A⁻¹b isn't exactly child's play to calculate.

"Iterative methods—such as solving equations of the form Kx_{i} + 1 = Kx_{i} + b – Axi with a simpler matrix K that’s ideally “close” to A—lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, Krylov subspaces are spanned by powers of a matrix applied to an initial “remainder” vector r_{0} = b – Ax_{0}." - Barry A. Cipra.

"Lanczos found a nifty way to generate an orthogonal basis for such a subspace when the matrix is symmetric. Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method, for systems that are both symmetric and positive definite."

These algorithms have been continually tweaked over the past 50 years. Their current iterations for non-symmetric systems include the generalized minimal residual method (GMRES) and the biconjugate gradient stabilized method (BiCGSTAB).
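A bare-bones version of Hestenes and Stiefel's conjugate gradient method, sketched in pure Python for a small symmetric positive-definite system:

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    # Solve A·x = b for symmetric positive-definite A by searching along
    # successively conjugate directions built from the residuals.
    n = len(b)
    x = [0.0] * n
    r = list(b)                 # residual b - A·x for the initial x = 0
    p = list(r)                 # first search direction
    rs_old = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Solve the classic 2x2 example: 4x + y = 1, x + 3y = 2.
print(conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
```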

## 11. Kalman Filter is great for predicting the future, sort of

**Creator:** Rudolf E. Kálmán

**When it was created:** 1958-1961

**Its impact/implications on the world:** The Kalman filter is a general and powerful tool for combining information in the presence of uncertainty.

Kalman Filtering, aka linear quadratic estimation (LQE), is an algorithm that uses a series of measurements, observed over time and including statistical noise, to produce an estimate of unknown variables via a joint probability distribution.

In other words, it helps you make an educated guess about what a system is likely going to do next, within reason of course. Kalman filters are great for situations where systems are constantly changing.

Another great thing about this algorithm is that it is light on “memory”. Only the result of the previous step is needed to progress, which makes it very fast and ideally suited for real-time problem-solving.
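A one-dimensional sketch of that light-on-memory loop: estimating a constant value from noisy measurements, carrying only the current estimate and its variance between steps. The noise parameters q and r here are illustrative assumptions, not values from any particular application.

```python
import random

def kalman_1d(measurements, q=1e-5, r=0.25):
    x, p = 0.0, 1.0            # initial estimate and its variance
    for z in measurements:
        p += q                 # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain: how much to trust the measurement
        x += k * (z - x)       # update the estimate toward the measurement
        p *= 1 - k             # the updated estimate is more certain
    return x

# Noisy readings of a true value of 10.0 (simulated Gaussian noise).
rng = random.Random(1)
readings = [10.0 + rng.gauss(0, 0.5) for _ in range(200)]
print(kalman_1d(readings))  # converges close to 10.0
```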

## 12. QR algorithms for computing eigenvalues have proved incredibly useful

**Creator:** John G. F. Francis and, independently, Vera N. Kublanovskaya

**When it was created:** Late 1950s

**Its impact/implications on the world:** The QR algorithm greatly simplifies the calculation of eigenvalues (the most important numbers associated with matrices). After its development, calculating these pesky numbers became a routine task rather than a formidable and labor-intensive process.

The QR algorithm, aka the eigenvalue algorithm, is important in numerical linear algebra. In addition to enabling the swift calculation of eigenvalues, it also aids in the computation of the eigenvectors of a given matrix.

Its basic operation is QR decomposition: writing a matrix as the product of an orthogonal matrix and an upper triangular matrix, multiplying the factors in the reverse order, and iterating.
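In its simplest, unshifted form, the iteration A → RQ can be sketched with a classical Gram-Schmidt QR factorization. This converges nicely for a small symmetric example; real implementations use shifts and Householder reflections instead.

```python
import math

def qr_decompose(A):
    # Classical Gram-Schmidt: orthonormalize the columns of A into Q,
    # recording the projection coefficients in the upper-triangular R.
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = math.sqrt(sum(vi * vi for vi in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, iterations=50):
    n = len(A)
    for _ in range(iterations):
        Q, R = qr_decompose(A)
        # Reverse the factors: A_next = R·Q has the same eigenvalues as A,
        # and for symmetric A it converges toward a diagonal matrix.
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(A[i][i] for i in range(n))

print(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # eigenvalues ≈ [1, 3]
```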

## 13. The formulation of the Fortran optimizing compiler could be the most important event in programming history

**Creator:** John Backus (and his IBM team)

**When it was created:** 1957

**Its impact/implications on the world:** Quite possibly the most important algorithm/event in computer programming.

Fortran was developed by John Backus and his team at IBM in the 1950s. It enabled scientists, and other users, to tell a computer what they wanted it to do without getting “bogged down” in the minutiae of machine code. The Fortran optimizing compiler was modest by modern-day standards, at 23,500 assembly-language instructions, but "the early compiler was nonetheless capable of surprisingly sophisticated computations." - Barry A. Cipra.

Backus would recognize its significance to the world later, in 1998, when he recalled the history of Fortran I, II, and III for the *IEEE Annals of the History of Computing*. The compiler, Backus said, “produced code of such efficiency that its output would startle the programmers who studied it.”

## 14. Quicksort is great at helping sort things

**Creator:** Tony Hoare of Elliott Brothers, Limited, London

**When it was created:** 1962

**Its impact/implications on the world:** It provided a means of quickly and efficiently sorting lists alphabetically and numerically.

Sorting a set of items alphabetically or numerically had always been a laborious and tedious task. In **1962**, Tony Hoare managed to produce an algorithm that performs this task quickly.

His Quicksort algorithm uses a recursive “divide and conquer” strategy to rapidly reach a solution. It would prove to be two to three times quicker than its main competitors, merge sort and heapsort.

It works by choosing one element to be the “pivot”. All others are then sorted into "bigger" and "smaller" piles of elements relative to the pivot.

This process is then repeated in each pile.
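The pivot-and-partition idea above can be sketched in Python. This simple out-of-place version builds new "smaller" and "bigger" piles; Hoare's original partitions the array in place.

```python
def quicksort(items):
    # Divide and conquer: pick a pivot, split the rest into "smaller" and
    # "bigger" piles, then sort each pile the same way.
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    bigger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(bigger)

print(quicksort([5, 3, 8, 1, 2]))  # → [1, 2, 3, 5, 8]
```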

"Although it’s possible to get stuck doing all N(N – 1)/2 comparisons (especially if you use as your pivot the first item on a list that’s already sorted!), Quicksort runs on average with O(N log N) efficiency." - Barry A. Cipra.

The simplicity and elegance of this algorithm won it great praise in its day and made it the poster child of computational complexity in the late 1990s.

## 15. JPEG and other data compression algorithms are incredibly useful

**Creator:** Joint Photographic Experts Group, IBM, Mitsubishi Electric, AT&T, Canon Inc., ITU-T Study Group 16

**When it was created:** 1992

**Its impact/implications on the world:** Data compression algorithms, like JPEG, MP3, zip, and MPEG-2, are widely used the world over. Most have become the *de facto* standard for their particular application. They have made computer systems cheaper and more efficient over time.

It is difficult to single out one particular data compression algorithm, as their value or importance depends on the application. They have become so widespread that they are used whenever you download a file to your computer, play music or video games, stream videos, use a database, or engage with the cloud.
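As a quick illustration of lossless compression in practice, Python's standard zlib module (which implements DEFLATE, the algorithm behind zip and gzip) shrinks repetitive data dramatically and restores it exactly:

```python
import zlib

data = b"algorithms are everywhere " * 100   # highly repetitive input
compressed = zlib.compress(data, level=9)    # level 9 = best compression
restored = zlib.decompress(compressed)

assert restored == data                      # lossless: a perfect round trip
print(f"{len(data)} bytes -> {len(compressed)} bytes")
```

Lossy formats like JPEG and MP3 go further by discarding detail the eye or ear is unlikely to miss, which is why their savings can be far larger.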