Bioengineers develop bionic 3D cameras with bug eyes and bat sonar

This 3D camera can spot hidden objects.
Loukia Papadopoulos
The new camera can see just like bugs' eyes.


  • The new camera can decipher the size and shape of objects hidden around corners or behind other items.
  • It does this by combining bat-style echolocation with insect-style compound vision.
  • It has many applications in autonomous driving.

UCLA bioengineers have engineered a new class of bionic 3D camera systems that can successfully imitate flies’ multiview vision and bats’ natural sonar sensing, according to a statement by the institution.

The camera can spot hidden objects

The new camera is powered by computational image processing and can decipher the size and shape of objects hidden around corners or behind other items.

The new device uses a form of echolocation, or sonar, present in bats. Bats produce high-frequency squeaks that bounce off their surroundings and are picked back up by their ears. They then evaluate the differences in how long it takes for the echo to reach them and the intensity of the sound to determine where things are, what’s in the way, and the proximity of potential prey.
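The ranging principle bats rely on can be sketched in a few lines: a pulse travels out, bounces off a surface, and returns, so the round-trip time gives the distance once it is halved. This is an illustrative sketch of that principle, not code from the study.

```python
# Illustrative sketch (not the paper's method): estimating distance from an
# echo's round-trip time, the principle behind bat echolocation and sonar.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(round_trip_s: float, speed: float = SPEED_OF_SOUND) -> float:
    """Distance to a reflector: the pulse travels out and back, so halve it."""
    return speed * round_trip_s / 2.0

# A squeak whose echo returns after 10 ms places the object about 1.7 m away.
print(round(echo_distance(0.010), 3))  # → 1.715
```

Comparing delay and intensity across two ears then lets a bat judge direction as well as range, which is the extra step the camera's sensor array mimics.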

Meanwhile, insects have geometric-shaped compound eyes, in which each “eye” is composed of hundreds to tens of thousands of individual units for sight. This means they can see the same thing from multiple lines of sight.

The researchers designed a high-performance 3D camera system that leverages these advantages found in bats and insects while at the same time addressing any inherent shortcomings.

Bat and insect inspired 3D imaging.

“While the idea itself has been tried, seeing across a range of distances and around occlusions has been a major hurdle,” said study leader Liang Gao, an associate professor of bioengineering at the UCLA Samueli School of Engineering.

“To address that, we developed a novel computational imaging framework, which for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small array of sensors.”

The new framework is called “Compact Light-field Photography,” or CLIP, and the study shows it can be used to “see” hidden objects. It works alongside a form of LiDAR, or “Light Detection and Ranging,” in which a laser scans the surroundings to build a 3D map of the area.
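LiDAR applies the same round-trip-time idea as echolocation, but with laser light: each return is timed, converted to a range, and combined with the scanner's pointing direction to place a point in 3D. The snippet below is a hedged, simplified illustration of that conversion, not the CLIP pipeline.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_point(round_trip_s: float, azimuth_rad: float, elevation_rad: float):
    """Convert one LiDAR return into a 3D point (x, y, z) in metres.

    Range comes from the laser pulse's round-trip time (halved, since the
    light travels out and back); direction comes from the scanner's
    azimuth/elevation angles at the instant of the return.
    """
    r = C * round_trip_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A return timed at ~66.7 ns straight ahead lies 10 m away along the x-axis.
x, y, z = lidar_point(10 * 2 / C, 0.0, 0.0)
print(round(x, 2))  # → 10.0
```

Sweeping the beam over many angles and stacking the resulting points is what produces the 3D map of the scene.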


Seven LiDAR cameras and CLIP

The new device uses seven LiDAR cameras combined with CLIP to take a lower-resolution image of the scene, process what the individual cameras see, and reconstruct the combined scene as a high-resolution 3D image.

“If you’re covering one eye and looking at your laptop computer, and there’s a coffee mug just slightly hidden behind it, you might not see it because the laptop blocks the view,” explained Gao, who is also a member of the California NanoSystems Institute.

“But if you use both eyes, you’ll notice you’ll get a better view of the object. That’s sort of what’s happening here, but now imagine seeing the mug with an insect’s compound eye. Now multiple views of it are possible.”
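Gao's two-eyes intuition can be made concrete with a toy 2D geometry check: an obstacle that blocks one camera's line of sight to a target usually does not block a camera offset to the side. This is only an illustration of why a multi-view array beats occlusion, not the CLIP reconstruction algorithm; all coordinates are made up.

```python
# Toy 2D occlusion demo: which cameras in a small array can see a target
# point hidden behind a wall segment (the "laptop" in Gao's example)?

def seg_intersect(p1, p2, q1, q2):
    """True if open segment p1-p2 strictly crosses segment q1-q2
    (standard orientation / cross-product test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

target = (0.0, 5.0)                               # the "coffee mug"
occluder = ((-0.5, 2.5), (0.5, 2.5))              # the "laptop", a wall segment
cameras = [(-2.0, 0.0), (0.0, 0.0), (2.0, 0.0)]   # a small camera array

visible = [not seg_intersect(cam, target, *occluder) for cam in cameras]
print(visible)  # → [True, False, True]: the offset views still see the target
```

The central camera is blocked, but both offset cameras keep an unobstructed sightline; fusing such views is what lets the array recover objects a single lens would miss.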

The technology could find use in autonomous vehicles or medical imaging tools. The research has been published in Nature Communications.


Study abstract:

Inspired by natural living systems, modern cameras can attain three-dimensional vision via multi-view geometry like compound eyes in flies, or time-of-flight sensing like echolocation in bats. However, high-speed, accurate three-dimensional sensing capable of scaling over an extensive distance range and coping well with severe occlusions remains challenging. Here, we report compact light field photography for acquiring large-scale light fields with simple optics and a small number of sensors in arbitrary formats ranging from two-dimensional area to single-point detectors, culminating in a dense multi-view measurement with orders of magnitude lower dataload. We demonstrated compact light field photography for efficient multi-view acquisition of time-of-flight signals to enable snapshot three-dimensional imaging with an extended depth range and through severe scene occlusions. Moreover, we show how compact light field photography can exploit curved and disconnected surfaces for real-time non-line-of-sight 3D vision. Compact light field photography will broadly benefit high-speed 3D imaging and open up new avenues in various disciplines.