Generative Adversarial Networks, or GANs, have been discussed increasingly in the past few years. Go back 10 years, and you won’t find any trace of the subject. So, what brought Generative Adversarial Networks to the forefront, and why should you care? Let’s discuss.
Whenever there’s a discussion about Generative Adversarial Networks or GANs, it is always in the context of AI, machine learning or deep learning. While this topic is quite vast, this article is meant to help you understand it in simplified terms.
Let’s start with the term itself - Generative Adversarial Networks.
Let the programs compete with each other!
GANs are fundamentally an approach to generative modeling using deep learning methods. The word “Generative” in the term points to a GAN’s ability to create something of its own.
How can a program have the creativity to make something of its own? We give it the power of machine learning, letting it learn from past data.
So, if you feed a GAN a ton of images, it can create a unique image of its own. The same is true for any other kind of data.
Given this definition, we run into a problem: there is no filter to check the generator’s output for authenticity. The generator can create anything related to its reference data set without knowing whether it would be acceptable to us.
To solve this problem, GANs come with a discriminative network that checks the generated data against the true data. This is the Adversarial part of a Generative Adversarial Network. We are essentially pitting the generative network and the discriminative network against each other, making them adversaries.
The discriminative network, or discriminator, keeps the generator’s output in check. The generator’s task, in turn, is to fool the discriminator into thinking that the generated values are actually real and not computer-generated.
This is the basic concept of GANs.
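The interplay between the two networks can be sketched with a toy example. Below is a minimal, purely illustrative GAN written in plain NumPy: the “generator” is a linear map of random noise with two parameters, and the “discriminator” is a logistic regression. All names, hyperparameters, and the 1-D target distribution are invented for this sketch; real GANs use deep neural networks and a framework such as PyTorch or TensorFlow.

```python
# Toy GAN on 1-D data, NumPy only. The real data is drawn from N(4, 1);
# the generator learns to map noise z ~ N(0, 1) through a*z + b so that
# its samples become hard to tell apart from the real ones.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0      # generator parameters: fake sample = a*z + b
w, c = 0.0, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, batch, steps = 0.05, 64, 2000
real_mu = 4.0        # mean of the "true data" distribution

for _ in range(steps):
    x_real = real_mu + rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    grad_w = -np.mean((1 - s_r) * x_real) + np.mean(s_f * x_fake)
    grad_c = -np.mean(1 - s_r) + np.mean(s_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator
    s_f = sigmoid(w * x_fake + c)
    grad_a = -np.mean((1 - s_f) * w * z)
    grad_b = -np.mean((1 - s_f) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.standard_normal(1000) + b
print("generated mean:", samples.mean())  # drifts toward the real mean of 4
```

As training proceeds, the generator’s samples shift toward the real distribution precisely because the discriminator keeps penalizing the difference; neither network improves in isolation.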
GANs are explained in more detail in the paper by Ian Goodfellow and other researchers at the University of Montreal, aptly titled Generative Adversarial Networks.
In the paper, they clearly state that the whole purpose of the generative network is to push the discriminative network into making a mistake. And the discriminative network only makes a mistake when it cannot differentiate between machine-generated data and training data.
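The paper formalizes this tug-of-war as a two-player minimax game over a value function V(D, G), where D is the discriminator, G the generator, p_data the real-data distribution, and p_z the noise prior:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator maximizes V by assigning high probability to real samples and low probability to generated ones; the generator minimizes V by making D(G(z)) as close to 1 as it can.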
A popular way to start training a GAN is with the MNIST database (Modified National Institute of Standards and Technology database).
The database consists of a training set of 60,000 examples and a test set of 10,000 examples, all images of handwritten digits.
It is a great starting point for anyone looking for resources to train networks, and it is one of the data sets Goodfellow and his team used to train their model.
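In practice, MNIST images arrive as 28×28 grayscale pixels with values 0–255, and a GAN pipeline typically flattens and rescales them before training. The sketch below shows that preprocessing step; the random array is only a stand-in for the real data set, which you would normally load with a library such as torchvision.

```python
# Preparing MNIST-shaped data for a GAN. A tanh-output generator produces
# values in [-1, 1], so the real images are rescaled to the same range.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the 60,000-image MNIST training set (uint8, values 0-255)
images = rng.integers(0, 256, size=(60000, 28, 28), dtype=np.uint8)

def preprocess(batch):
    """Flatten 28x28 images into 784-vectors scaled to [-1, 1]."""
    flat = batch.reshape(batch.shape[0], -1).astype(np.float32)
    return flat / 127.5 - 1.0

x = preprocess(images[:64])   # one training mini-batch
print(x.shape)                # (64, 784)
```

The discriminator then sees these 784-dimensional vectors from both the real set and the generator, exactly the two-player setup described above.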
The best analogy for a GAN is a two-player game where each player is trying their hardest to beat the other.
Where are we now in GAN development?
You might still remember the wave of news that came in late December 2018 about realistic-looking images generated by an AI. Well, that was a GAN!
If you review those pictures, it’s easy to see why this was such a big story. The images were indistinguishable from real photographs of human faces.
The company behind the project was NVIDIA, the popular computer graphics hardware and software developer. The researchers published a paper on the development and their results, titled A Style-Based Generator Architecture for Generative Adversarial Networks.
The researchers at NVIDIA spent eight weeks training the networks on eight of their Tesla graphics cards.
This certainly raises questions about whether we can trust pictorial data anymore. And if we can create human faces from reference images, it won’t be long before we can create perfect faces, putting models out of work!
Many raise concerns regarding the use of pictorial data in the judiciary system if the software can alter images so effectively. This is certainly something to think about!
Back to the subject of how far GAN development has come: nothing paints a clearer picture of the technology than the experiment from NVIDIA. We can now create realistic-looking faces that are not just believable but highly customizable.
Research is still underway to make GANs more powerful at creating realistic data while being less power-hungry.
Applications of GANs
GANs can be used in a variety of applications, mostly image related, but that is surely going to change. Currently, GANs are used in:
Generating new content (imagery): GANs can be used to create lifelike images from a set of source images. The purpose of such a system is mainly to explore the capabilities of GANs.
Some argue that this technology can be used to determine the looks of a baby from the photos of its parents.
Aging or de-aging: With a robust set of sample images, GANs can successfully age or de-age human faces. The recent popularity of an app called FaceApp shows how such technology is very popular among the masses.
If you are wondering about the technology behind FaceApp, it’s GANs.
Colorizing black and white photos: When a GAN is trained well enough, it can colorize photos and do it remarkably well. This technology can indeed bring life to old photos and give us a glimpse of that time in color.
Resolution enhancement: If you have tried enhancing a low-resolution picture, the result is always a blurry mess with blown-out pixels. A GAN, however, fills in the missing detail and produces high-quality enlargements even when the source resolution is low.
The world has seen many examples of GANs at work, and the ongoing research in this direction points to many more unexpected applications of GANs in the future.
The technology is revolutionary, and we can expect GANs to show up on our devices in more ways than one. However, before this technology matures, there are serious discussions needed on the ethical use of such powerful deep learning methods.