DARPA Begins Working on Cameras That Mimic the Human Brain
While you have been fretting about how your camera phone does not take good pictures, the Defense Advanced Research Projects Agency (DARPA) has announced the research teams that will develop cameras that process images like the human brain. The program aims to improve military applications such as autonomous vehicles, robotics, and search and tracking.
Camera technology has advanced tremendously over the years. Not only have we managed to take pictures on far-away planets and beam them back to Earth, but we have also packed more imaging power into cameras that keep getting smaller. Yet the human eye remains the benchmark we are trying to beat.
The cameras we use today process ever more data and consume more power, yet they lack the 'intelligence' to track objects when backgrounds change or become cluttered. Humans handle this well because the brain has optimized the task, processing signals only when something changes. DARPA intends to replicate this approach in silicon through its Fast Event-based Neuromorphic Camera and Electronics (FENCE) program. The program aims to develop neuromorphic, or brain-like, cameras that capture events by transmitting information only about the pixels that have changed. This drastically reduces the amount of data these cameras must process and their latency (the delay in transmitting that data), while also cutting their power consumption.
“The goal is to develop a ‘smart’ sensor that can intelligently reduce the amount of information that is transmitted from the camera, narrowing down the data for consideration to only the most relevant pixels,” said Dr. Whitney Mason, program manager of the FENCE program.
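The core idea, comparing successive readings and reporting only the pixels that changed, can be illustrated with a short sketch. This is a hypothetical simulation for intuition, not DARPA's or any vendor's actual sensor logic; the function name, threshold, and event format are all assumptions.

```python
import numpy as np

def frame_to_events(prev, curr, threshold=10):
    """Compare two grayscale frames and emit an event for each pixel
    whose intensity changed by more than `threshold` (hypothetical
    model of an event-based sensor, not FENCE's implementation).

    Returns a list of (row, col, polarity) tuples, where polarity is
    +1 for a brightening pixel and -1 for a dimming one.
    """
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    rows, cols = np.where(np.abs(diff) > threshold)
    return [(r, c, 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

# A static background with a single change between frames:
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # the only pixel that changed

events = frame_to_events(prev, curr)
print(events)  # only 1 of the 16 pixels produces an event
```

Because nothing is transmitted for unchanged pixels, a mostly static scene yields almost no output, which is where the data, latency, and power savings described above come from.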
To achieve this, the program will develop an event-based infrared (IR) focal plane array (FPA) to capture and transmit signals. A new class of digital signal processing and learning algorithms will be needed to support intelligent sensors capable of handling dynamic scenes.
A read-out integrated circuit (ROIC) and an integrated processing layer will be used to identify relevant spatial and temporal signals. Since the ROIC and processing layer handle little data with low latency, the sensor developed in the program is expected to operate on less than 1.5 watts of power.
DARPA has announced that the research teams developing the ROIC and processing layer will come from Raytheon, BAE Systems, and Northrop Grumman.