OpenAI's Algorithm Can Make These Dots Collaborate to Complete a Task

OpenAI's quest for safe artificial intelligence has produced an impressive deep learning algorithm with a wide range of potential applications.
Donovan Alexander

Artificial intelligence is part of humanity's future, but to get there, society needs to pursue AI responsibly. Though an age of superintelligent AI could prove beneficial to humanity, there seems to be an equal chance that AI could be highly destructive.

Billionaire and Tesla CEO Elon Musk has made his opinions on the future of artificial intelligence quite clear, stating in an interview, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

Taking the initiative, Elon Musk and Y Combinator president Sam Altman founded OpenAI, a non-profit whose mission is to build safe artificial general intelligence (AGI) and ensure AGI's benefits are as widely and evenly distributed as possible.

Recently in the news for expanding its team in pursuit of a safe AI future, OpenAI has now showcased an algorithm that could potentially lay the groundwork for artificial general intelligence.

Cooperate, Compete, Communicate

OpenAI has developed an impressive new algorithm dubbed MADDPG (multi-agent deep deterministic policy gradient) for centralized learning in multiagent environments, allowing agents to learn to collaborate and compete with each other.

What is a multiagent environment, you ask? In short, it is a computerized system composed of multiple interacting intelligent agents.

Multiagent environments are excellent for testing complex AI in a closed system for two reasons: first, the difficulty of the environment is determined by the skill of your competitors, and second, no matter how smart an agent is, there is always pressure to get smarter.
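
To make the idea concrete, here is a minimal, hypothetical sketch of what a multiagent environment looks like in code. The class, method names, and reward are purely illustrative and are not OpenAI's actual implementation; the point is simply that every agent acts in the same shared world, and each agent's reward depends on what all the others do.

```python
import random

class ToyMultiAgentEnv:
    """Toy multiagent environment: each agent observes the shared world,
    picks an action, and receives its own reward every step."""

    def __init__(self, n_agents):
        self.n_agents = n_agents
        self.positions = [random.random() for _ in range(n_agents)]

    def reset(self):
        self.positions = [random.random() for _ in range(self.n_agents)]
        return self._observations()

    def _observations(self):
        # Every agent observes the positions of all agents (a shared world).
        return [list(self.positions) for _ in range(self.n_agents)]

    def step(self, actions):
        # Apply each agent's action (-1, 0, or +1) to its own position.
        for i, a in enumerate(actions):
            self.positions[i] += 0.1 * a
        # Reward each agent for staying close to the group's mean position,
        # so an agent's payoff depends on what every other agent did.
        mean = sum(self.positions) / self.n_agents
        rewards = [-abs(p - mean) for p in self.positions]
        return self._observations(), rewards

env = ToyMultiAgentEnv(n_agents=4)
obs = env.reset()
for _ in range(10):
    actions = [random.choice([-1, 0, 1]) for _ in range(env.n_agents)]
    obs, rewards = env.step(actions)
```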

In the experiment, OpenAI used MADDPG to train four red agents to chase two green agents. Like a game of tag, or a highly intelligent game of cat and mouse, the four red dots (chasers) pursued the two green dots (runners).

Source: OpenAI

During the test, the algorithm enabled the red agents to collaborate with each other to chase down a single green agent, earning a higher reward.

The green agents, however, learned to split up to maximize their chances of survival.
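
The article does not spell out the exact rewards OpenAI used, but a hedged, illustrative sketch of a tag-style reward structure shows why this behavior emerges: chasers share a positive reward whenever any of them tags a runner, while only the tagged runner is penalized, so chasers gain by converging on one target and runners gain by scattering.

```python
import math

def chase_rewards(chaser_positions, runner_positions, tag_radius=0.1):
    """Illustrative per-step rewards for a tag-style pursuit game.
    Positions are (x, y) tuples; returns (chaser_rewards, runner_rewards)."""
    chaser_rewards = [0.0] * len(chaser_positions)
    runner_rewards = [0.0] * len(runner_positions)
    for j, runner in enumerate(runner_positions):
        for chaser in chaser_positions:
            if math.dist(chaser, runner) < tag_radius:
                # Every chaser is rewarded for a tag, so cooperating to
                # corner a single runner pays off for the whole team...
                chaser_rewards = [r + 10.0 for r in chaser_rewards]
                # ...while only the tagged runner is penalized, so runners
                # do better by splitting up and spreading the risk.
                runner_rewards[j] -= 10.0
                break
    return chaser_rewards, runner_rewards

# Example: four chasers converging on one of two runners.
chasers = [(0.0, 0.0), (0.05, 0.0), (0.0, 0.05), (0.05, 0.05)]
runners = [(0.02, 0.02), (0.9, 0.9)]
print(chase_rewards(chasers, runners))
```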

OpenAI describes the experiment in its blog post: "We treat each agent in our simulation as an 'actor', and each actor gets advice from a 'critic' that helps the actor decide what actions to reinforce during training. To make it feasible to train multiple agents that can act in a globally-coordinated way, we enhance our critics so they can access the observations and actions of all the agents."
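
In code, the core idea is that each actor's policy network sees only its own observation, while its critic, used only during training, sees every agent's observation and action. Below is a minimal PyTorch sketch of that split; the layer sizes, dimensions, and names are illustrative assumptions rather than OpenAI's released implementation.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 6   # illustrative sizes (e.g., 4 chasers + 2 runners)

class Actor(nn.Module):
    """Decentralized policy: maps one agent's own observation to its action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Centralized critic: scores one agent's behavior given the observations
    and actions of *all* agents, which is what keeps training coordinated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, all_obs, all_acts):
        # all_obs: (batch, N_AGENTS * OBS_DIM); all_acts: (batch, N_AGENTS * ACT_DIM)
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

# Each agent gets its own actor and critic. At execution time only the actor
# (and its local observation) is needed, so the learned policies stay decentralized.
actors = [Actor() for _ in range(N_AGENTS)]
critics = [CentralizedCritic() for _ in range(N_AGENTS)]

batch = 32
all_obs = torch.randn(batch, N_AGENTS * OBS_DIM)
all_acts = torch.cat(
    [actor(all_obs[:, i * OBS_DIM:(i + 1) * OBS_DIM]) for i, actor in enumerate(actors)],
    dim=-1,
)
q_values = critics[0](all_obs, all_acts)   # value of agent 0's behavior this step
```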

Building on the algorithm, OpenAI intends to expand its research with different variations of MADDPG, with each new project having the agents compete or collaborate to complete a task. Be sure to check out OpenAI's experiment above.
