A new AI system could substantially improve traffic flow
Have you ever crawled out of one traffic jam only to be stopped at yet another red light? Is there a feeling more irritating than being held up in traffic?
Now Aston University researchers have engineered a new artificial intelligence system that could put an end to long queues at traffic lights, according to a statement released by the institution on Tuesday.
Deep reinforcement learning
The system owes its efficiency to deep reinforcement learning: it adjusts its approach when it is performing poorly and keeps reinforcing what works when it makes progress.
“We have set this up as a traffic control game. The program gets a ‘reward’ when it gets a car through a junction. Every time a car has to wait or there’s a jam, there’s a negative reward. There’s actually no input from us; we simply control the reward system,” said Dr. Maria Chli, a reader in Computer Science at Aston University.
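To make the quote concrete, here is a minimal sketch of the kind of per-step reward signal Dr. Chli describes: a positive reward for each vehicle that clears the junction and a penalty for vehicles left waiting or jammed. The function name, arguments, and weights are illustrative assumptions, not the Aston team's actual code.

```python
# Illustrative reward sketch in the spirit of the quote above.
# All names and penalty weights are assumptions, not the researchers' implementation.

def step_reward(cars_cleared: int, cars_waiting: int, jam_detected: bool,
                wait_penalty: float = 0.1, jam_penalty: float = 1.0) -> float:
    """Return the reward signal for one simulation step."""
    reward = float(cars_cleared)           # positive reward for every car through the junction
    reward -= wait_penalty * cars_waiting  # negative reward for every car still queued
    if jam_detected:
        reward -= jam_penalty              # extra penalty when the junction is jammed
    return reward
```

In a setup like this, the researchers only shape the reward; the agent itself discovers which signal phasings maximize it.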
The researchers report that the new system significantly outperformed conventional methods of tackling traffic, crediting its training on Traffic 3D, a state-of-the-art photo-realistic traffic simulator.
Adaptable to real-world settings
The program was trained in this simulator on different traffic and weather scenarios and was therefore able to adapt quickly to real traffic intersections, making it effective in many real-world settings.
“The reason we have based this program on learned behaviors is so that it can understand situations it hasn’t explicitly experienced before. We’ve tested this with a physical obstacle that is causing congestion, rather than traffic light phasing, and the system still did well. As long as there is a causal link, the computer will ultimately figure out what that link is. It’s an intensely powerful system,” concluded Dr. George Vogiatzis, senior lecturer in Computer Science at Aston University.
The study is available through Aston University's Library Services.
Study abstract:
Ineffective traffic signal control is one of the major causes of congestion in urban road networks. Dynamically changing traffic conditions and live traffic state estimation are fundamental challenges that limit the ability of the existing signal infrastructure in rendering individualized signal control in real-time. We use deep reinforcement learning (DRL) to address these challenges. Due to economic and safety constraints associated with training such agents in the real world, a practical approach is to do so in simulation before deployment. Domain randomization is an effective technique for bridging the reality gap and ensuring effective transfer of simulation-trained agents to the real world. In this paper, we develop a fully-autonomous, vision-based DRL agent that achieves adaptive signal control in the face of complex, imprecise, and dynamic traffic environments. Our agent uses live visual data (i.e. a stream of real-time RGB footage) from an intersection to extensively perceive and subsequently act upon the traffic environment. Employing domain randomization, we examine our agent’s generalization capabilities under varying traffic conditions in both the simulation and the real-world environments. In a diverse validation set independent of training data, our traffic control agent reliably adapted to novel traffic situations and demonstrated a positive transfer to previously unseen real intersections despite being trained entirely in simulation.
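The abstract's mention of domain randomization refers to varying the simulator's settings between training episodes so the agent does not overfit to one rendering of an intersection. The sketch below shows the general idea; the parameter names, ranges, and weather categories are assumptions for illustration, not the configuration used with Traffic 3D.

```python
# Minimal sketch of domain randomization for sim-to-real transfer.
# Each training episode samples fresh simulator settings (traffic density,
# weather, lighting). All values and field names here are illustrative assumptions.

import random
from dataclasses import dataclass

@dataclass
class EpisodeConfig:
    traffic_density: float   # vehicles spawned per minute
    weather: str             # rendering condition for the simulated camera feed
    sun_angle_deg: float     # lighting direction, affecting shadows and glare

def randomize_domain() -> EpisodeConfig:
    """Sample a fresh simulator configuration for the next training episode."""
    return EpisodeConfig(
        traffic_density=random.uniform(5.0, 60.0),
        weather=random.choice(["clear", "rain", "fog", "overcast", "night"]),
        sun_angle_deg=random.uniform(0.0, 180.0),
    )

# Training loop outline (simulator API is hypothetical):
# for episode in range(num_episodes):
#     config = randomize_domain()
#     env.reset(config)                 # re-render the intersection with new settings
#     ...run the DRL agent and update its policy from the reward signal...
```

Randomizing the visual and traffic conditions during training is what lets a simulation-trained agent cope with real camera footage it has never seen.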