The monochromatic black-and-green that defined night vision for decades is quickly receding into the past.
The U.S. military already issues night-vision goggles that outline people and other objects in bright white, and researchers across the world are racing to develop even more advanced ways of seeing in the dark. A new proof-of-principle study offers intriguing hints about how the next generation of such technology might work.
In a paper published Wednesday in the academic journal PLOS ONE, researchers demonstrate that a deep learning algorithm can build a full-color reconstruction of a scene using only infrared images the human eye can't see.
These findings suggest an exciting new future for night-vision technology.
Human eyes face many limitations
It seems like humans can see every color, but our eyes can only detect a narrow slice of the electromagnetic spectrum. The light waves we can see range from roughly 400 nanometers (which the brain registers as violet) to roughly 700 nanometers (perceived as red). If someone were in a room with no windows and a bright bulb casting light at a wavelength of 800 nanometers, they would experience total darkness.
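A minimal sketch of that boundary, using the rough 400-to-700-nanometer limits quoted above (the cutoffs are approximations, not hard biological constants, and the function name is invented for illustration):

```python
# Approximate limits of human vision from the text above.
VISIBLE_MIN_NM = 400  # roughly violet
VISIBLE_MAX_NM = 700  # roughly red

def human_can_see(wavelength_nm: float) -> bool:
    """Return True if the wavelength falls inside the rough visible band."""
    return VISIBLE_MIN_NM <= wavelength_nm <= VISIBLE_MAX_NM

# The windowless room lit only by an 800 nm bulb:
print(human_can_see(800))  # False: infrared reads as total darkness
print(human_can_see(550))  # True: green light is visible
```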
A mosquito or a pit viper, on the other hand, could see just fine. (So could a cyborg mouse.) A human could also see a version of the scene if they were looking through an infrared camera. That's because it's not a technical challenge to take photos in infrared light. The challenge is rendering those images in visible light so a human viewer can make sense of what they're seeing. For example, thermal imaging uses a technique called pseudocolor to make an infrared image visible. While the resulting image contains multiple colors, it's really a souped-up black-and-white picture where the colors don't correspond to what the scene would look like if seen in visible light.
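The pseudocolor idea can be sketched in a few lines: a single-channel infrared intensity image is mapped onto an arbitrary color ramp. The blue-to-red ramp below is an illustrative assumption (real thermal cameras ship many palettes); the key point is that the output colors encode intensity, not the scene's true visible colors.

```python
import numpy as np

def pseudocolor(ir: np.ndarray) -> np.ndarray:
    """Map a grayscale image with values in [0, 1] to RGB via a blue-to-red ramp."""
    ir = np.clip(ir, 0.0, 1.0)
    r = ir                 # hotter (brighter) areas trend red
    g = np.zeros_like(ir)  # this toy ramp uses no green at all
    b = 1.0 - ir           # colder (darker) areas trend blue
    return np.stack([r, g, b], axis=-1)

# A tiny 2x2 "thermal" image, from cold (0.0) to hot (1.0).
ir = np.array([[0.0, 0.5],
               [0.75, 1.0]])
rgb = pseudocolor(ir)
print(rgb.shape)  # (2, 2, 3)
```

Whatever palette is chosen, the mapping is one intensity in, one color out, which is why the result is still, in effect, a dressed-up black-and-white picture.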
New technology could render the infrared light perfectly visible
The researchers behind the new study are doing something far more sophisticated with infrared images. They started by printing images of color palettes and faces. Then they created a dataset by taking photos of those printed images using a monochromatic camera that can be set to capture very specific wavelengths. They photographed the faces under monochromatic light sources at various wavelengths across the visible and near-infrared spectra.
With these digital files in hand, they built on decades of research in computer science to develop and test a deep learning algorithm that could begin with infrared images of a scene and infer what that scene would look like in the visible spectrum. And it worked! Under these admittedly ideal conditions, the researchers found that one of their algorithms — using deep U-Net-based architectures — was able to transform a set of three infrared images into a full-color photo that very closely resembled a normal photo of the same scene.
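The framing above — three infrared bands in, visible RGB out — is a regression problem. The paper uses deep U-Net architectures on whole images; as a drastically simplified stand-in, the sketch below learns a per-pixel linear map from three infrared intensities to RGB with least squares. All the data is synthetic and the "true" mixing matrix is an assumption chosen purely to illustrate the setup, not anything from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend ground truth for illustration only: visible RGB is a fixed
# linear mix of the three infrared band intensities.
true_mix = np.array([[0.8, 0.1, 0.1],
                     [0.2, 0.6, 0.2],
                     [0.1, 0.3, 0.6]])

ir = rng.random((1000, 3))   # 1000 pixels, each with 3 infrared readings
rgb = ir @ true_mix.T        # the corresponding "visible" colors

# "Training": recover the mixing from data with least squares. A deep
# network plays this role in the paper, with capacity for nonlinear
# and spatial structure that a linear map cannot capture.
learned, *_ = np.linalg.lstsq(ir, rgb, rcond=None)

predicted = ir @ learned
print(np.allclose(predicted, rgb, atol=1e-6))  # True: the map is recovered
```

Because the synthetic relationship here really is linear, least squares recovers it exactly; the hard part the researchers tackled is that real scenes are not so obliging, which is where the deep network comes in.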
We probably won't see this technology in night-vision goggles anytime soon, but this proof-of-concept shows that full-color night vision is on the horizon.