Engineers Develop a Method to Encode Two Different Holograms in One Surface
Holograms have been part of pop culture for over 50 years, from science fiction novels to Princess Leia's famous plea to Obi-Wan Kenobi in Star Wars. In theory, a hologram shows you the same three-dimensional object no matter what angle you view it from. But a team of researchers at Caltech has developed a way to bounce incoming light off a material at different angles to create two different holograms.
"Each post can do double duty. This is how we're able to have more than one image encoded in the same surface with no loss of resolution," said Andrei Faraon (BS '04), senior author of a paper on the new material published by Physical Review X on December 7.
"Previous attempts to encode two images on a single surface meant arranging pixels for one image side by side with pixels for another image. This is the first time that we're aware of that all of the pixels on a surface have been available for each image," Faraon added.
The effect is more than the cheap trick behind the lenticular prints that have been popular for decades; the researchers had to craft it at the nanoscale. They developed a metamaterial from silicon oxide and aluminum, formed into tiny posts just a few hundred nanometers tall. (For scale, the average human hair is about 100,000 nanometers wide.)

The result was a proof-of-concept surface that, when hit by a laser straight on (at 0 degrees), projects a hologram of the Caltech logo. When the laser strikes the surface at 30 degrees, the projection changes to the logo of the Department of Energy's Light-Material Interactions in Energy Conversion Energy Frontier Research Center. (Faraon serves as a principal investigator for that center.)

To get there, the researchers tuned the shape of each post so that it reflects light differently depending on the angle of the incoming beam, letting a single post contribute a pixel to both images. The process was labor-intensive, according to the researchers.
"We created a library of nanoposts with information about how each shape reflects light at different angles. Based on that, we assembled the two images simultaneously, pixel by pixel," said Seyedeh Mahsa Kamali, the first author of the Physical Review X paper.
The Caltech team also determined that it is theoretically possible to encode three or more images into a single surface, though there is a limit to how many images one surface can hold.
Faraon and his fellow engineers hope the research could improve virtual reality and augmented reality, further expanding what headsets can do for their users.
"We're still a long way from seeing this on the market, but it is an important demonstration of what is possible," Faraon said in a press statement.
Kamali agreed, saying, "We are still exploring just how far this technology can go."
The full study can be found in the journal Physical Review X.
Via: Caltech