Will AI-Art Supplant Humans as the Artists of the Future?
AI is developing rapidly all around the world, and its rise is likely to bring the kind of tectonic change to many professions last seen during the creative destruction of the Industrial Revolution.
But could the sacrosanct sphere of art, long jealously guarded by human artists, now be under threat? Should artists be worried about the rising tide of AI-art?
What is “AI-created art”?
AI-created art, as the name suggests, consists of images that resemble human-created artwork but are generated almost entirely algorithmically. Although this might sound a little implausible, here is an example.

Spooky, isn't it? And it was created by reams of code rather than a human mind.
The above 'painting', "Portrait of Edmond de Belamy", hit the news in 2018 when Christie's announced their intention to sell the piece at auction.
Incredibly, it smashed through its estimate and finally sold for an eye-watering $432,500.
It was 'created' by a generative adversarial network (two competing algorithms that are left to their own devices in a combative zero-sum game). The AI was developed by a Paris-based art collective called Obvious.
The result was a portrait of a rather blurry figure, later printed on canvas and set within a gilded wood frame. The 'creator' AI used 15,000 existing human-created portraits from various periods in art history as reference material.
Interestingly, the AI's work differs greatly from what we have come to expect from flesh-and-blood artists. For example, the subject's face is poorly defined, it sits off-center, and large parts of the canvas are left empty.
The AI even signed the piece with a rather esoteric formula (in fact, its own loss function) as a signature. This would-be Rembrandt has, apparently, chosen a 'name' for itself.
AI-Art is still in its infancy
AI-art first caught the public's attention in 2015 with the announcement of Google's pattern-finding software, DeepDream: a computer vision program created by Google engineer Alexander Mordvintsev.
DeepDream uses a convolutional neural network to find and amplify patterns in images, producing a kind of algorithmic, synthetic pareidolia. Pareidolia is the psychological phenomenon in which your mind perceives a familiar pattern in an image or sound where none actually exists.
Humans are highly attuned to their surroundings and tend to recognize or discern some form of pattern within them, even when none is really there. Common examples include seeing animals or other objects in cloud formations (or anything, really), or hearing a hidden message in a piece of music played in reverse.
Here is a prime example: do you see the elephant in the following rock formation?

By simulating this effect in code, Google's DeepDream could, in theory, be a very potent piece of programming indeed. Well, perhaps.
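To make the idea a little more concrete, here is a minimal sketch of the core DeepDream trick: running an image through a pretrained convolutional network and then nudging the pixels, via gradient ascent, so that whatever patterns a chosen layer already "sees" become stronger. This is not Google's original implementation; it assumes a recent PyTorch/torchvision install, and the model, layer index, step size and file names are purely illustrative.

```python
# Minimal DeepDream-style sketch (illustrative, not Google's original code).
# Assumes a recent PyTorch and torchvision; layer choice and step size are arbitrary.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def deep_dream(image_path, layer_index=20, steps=30, lr=0.05):
    img = Image.open(image_path).convert("RGB")
    x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0)
    x.requires_grad_(True)

    for _ in range(steps):
        # Forward pass up to the chosen layer.
        activations = x
        for i, layer in enumerate(model):
            activations = layer(activations)
            if i == layer_index:
                break
        # Gradient ascent: change the input so the layer's patterns get stronger.
        loss = activations.norm()
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
            x.grad.zero_()
            x.clamp_(0, 1)

    return T.ToPILImage()(x.squeeze(0).detach())

# dreamed = deep_dream("clouds.jpg")   # "clouds.jpg" is a placeholder input image
# dreamed.save("clouds_dreamed.png")
```

The only "creative" decision in this sketch is which layer's activations to amplify; everything else is plain gradient ascent on the input pixels.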
In 2014, Generative Adversarial Networks (GANs) entered the fray after a paper by Ian Goodfellow introduced them. He theorized that they would be the next step in the evolution of neural networks.
GANs, in case you are unaware, are deep neural network architectures made up of two nets, each pitted against the other in a literal and figurative tug-of-war.
In the case of the "Portrait of Edmond de Belamy" mentioned earlier, the humans behind it used two competing nets called the "Generator" and the "Discriminator". The first, as the name suggests, generated new images from the training data fed to it.
The latter then attempted to find differences between human-made images and those produced by the "Generator". The aim was for the "Generator" to fool the "Discriminator" into thinking its images were genuine rather than synthetic.
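As a rough illustration of that tug-of-war, here is a minimal sketch of a single GAN training step, assuming PyTorch. The tiny fully-connected networks, image size and hyperparameters are purely illustrative and bear no relation to the model Obvious actually used for the portrait.

```python
# Minimal GAN training-step sketch (illustrative only).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),        # outputs a fake "image" vector
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Discriminator: learn to tell real images from the generator's fakes.
    noise = torch.randn(batch, latent_dim)
    fakes = generator(noise)
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes.detach()), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Generator: try to fool the discriminator into labelling fakes as real.
    g_loss = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a random stand-in for a batch of real (flattened) images.
# d, g = train_step(torch.rand(32, img_dim) * 2 - 1)
```

Each call to train_step pulls the two nets in opposite directions: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.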
Their potential is huge, because they can learn to mimic almost any distribution of data.
And their results speak for themselves. When applied to artwork, GANs can be 'trained' to produce completely new, and often dramatic, images.
This is in stark contrast to the rather bland creations offered by DeepDream.

Is this the end of human-created art?
AI-art has caused a real stir in the art world, with a fair few fearing for its future. The work of organizations like Obvious has even inspired a name for this new genre: "GAN-ism".
But many other artists don't seem to be fazed in the slightest. Many who utilize AI themselves, like Mario Klingemann, believe this is nothing more than 'a storm in a teacup'.
“They create instant gratification even if you have no deeper knowledge of how they work and how to control them, they currently attract charlatans and attention seekers who ride on that novelty wave,” Klingemann says.
Whilst he, and others, appreciate that these models can create images that look fresh and are characteristic of a new genre, they see them as nothing more than a new form of artistic tool.
After all, every image created to date has involved a not insignificant amount of human input. The AIs are not spontaneously creating these pieces entirely under their own direction.
“The work isn’t interesting, or original,” says Robbie Barrat, a young artist who works with AI, of Obvious' recent work.
“They try to make it sound like they ‘invented’ or ‘wrote’ the algorithm that produced the works," he continued. In fact, Barrat explains, they used a pre-existing model that generates low-resolution outputs, which were enhanced before release.
"People have been working with low-resolution GANs like this since 2015,” says Barrat.
Who, if anyone, should get paid when AI-generated artwork is sold?
The honest answer at present is, it depends. It all comes down to whoever, in the end, is deemed the copyright owner by law in a particular country.
You might well remember the famous "selfies" taken a few years ago by an Indonesian crested macaque called "Naruto". The incident occurred when British wildlife photographer David Slater left his camera unattended for a few moments and the playful primate found it.
Slater then published the photographs and, unsurprisingly, they went viral. But People for the Ethical Treatment of Animals (PETA) then decided to sue him for "copyright infringement" on behalf of "Naruto".
Whilst an initial settlement of 25% of Slater's revenue from the selfies was agreed, thankfully common sense finally prevailed. On appeal, the case was dismissed on the grounds that copyright can only be owned by human beings, not animals.
But can the same be said of AI? Logically speaking, yes. But what about AI-generated artworks? After all, they are not purely programmatic; they tend to involve human input.
Shouldn't the human contributors own the copyright if their AI 'colleague' is automatically ineligible for claims?
The U.S. Copyright Office appears to be ahead of the game in this regard.
It states that it “will refuse to register a claim if it determines that a human being did not create the work.” It also goes on to state that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”
Case closed, then: the human artists own the copyright.
But wait. In the example of "Naruto", since his photographs can't be claimed by anyone, they must be in the public domain. Yet there are further arguments that any art created by animals that happen to reside on government-owned reserves or private property is automatically the property of whoever owns the land.
This is similar in practice to work-for-hire contracts in the U.S., under which an author cedes copyright ownership of their work to their employer.
Any copyright in AI-art could theoretically be claimed in a similar fashion by the AI's creators and/or users, thus removing the works from the public domain.
In other countries, like the UK, the issue is somewhat simpler. UK law grants copyright to the person who makes the arrangements necessary for the creation of computer-generated works, but it does stipulate the need for some "exertions" on the part of a human.
So the issue seems to be as clear as mud. This question will likely dog legal types for some years to come.
But as AIs become more and more unpredictable, legislators had better clarify the issue quickly, before AI-lawmakers step in to help out their buddies.