University of Pennsylvania researchers have developed a new optical chip that can process nearly 2 billion images per second. The device consists of a neural network that processes data as light, without the memory components and other circuitry that slow down conventional computer chips.
The research was published in the journal Nature.
The new chip is based on a neural network, a system fashioned after how the brain processes information. These networks are made up of nodes that connect like neurons, and they even "learn" much as organic brains do: by being trained on sets of data, for tasks such as recognizing objects in photos or recognizing speech, they get better at those tasks over time.
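The idea of a node "learning" from labelled examples can be illustrated with a toy, single-neuron classifier in plain Python. This is purely illustrative of the general concept and is unrelated to how the photonic chip's optical neurons are actually implemented:

```python
# Toy single-neuron classifier: labelled 2D points, label 1 when x-coordinate is 1.
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

def predict(x):
    # Fire (output 1) when the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# "Training": repeated passes over the data, nudging the weights after each error.
for _ in range(20):
    for x, label in data:
        err = label - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # → [0, 0, 1, 1], matching the labels
```

After a few passes the neuron classifies every training point correctly; real networks stack many such nodes in layers and train them on far larger data sets.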
The new chip, as noted above, handles information in the form of light rather than electrical signals. Its "neurons" are optical wires arranged in multiple layers, each layer specializing in a different type of classification.
In experiments, the scientists created a chip with a surface area of roughly 0.014 square inches (9.3 mm²) and used it to classify a sequence of handwritten characters resembling letters. After being trained on relevant data sets, the chip classified images with 93.8 percent accuracy for sets containing two types of characters, and 89.8 percent accuracy for sets containing four types.
Most notably, the chip classified each character in under 0.57 nanoseconds, allowing it to process roughly 1.75 billion images every second. The team says that this speed comes from the chip's ability to process information as light, which gives it several advantages over existing computer chips.
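The two figures are consistent with each other: a per-image classification time of 0.57 nanoseconds implies a throughput of about 1.75 billion images per second, as a quick back-of-the-envelope calculation shows:

```python
# Throughput implied by the reported 0.57 ns per-image classification time.
classification_time_s = 0.57e-9          # 570 picoseconds per image
images_per_second = 1 / classification_time_s
print(f"{images_per_second:.2e}")        # → 1.75e+09 images per second
```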
“Our chip processes information through what we call ‘computation-by-propagation,’ meaning that, unlike clock-based systems, computations occur as light propagates through the chip,” said Firooz Aflatouni, lead author of the study. “We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology.”
Another benefit is that the data being processed never needs to be stored: the chip saves time by not transmitting data to memory, and saves space by not requiring a memory component at all. According to the researchers, not storing the data is also more secure, because it prevents any potential leaks.
The team's next steps will be to scale up the device and modify the technology to process different types of data.
“What’s really interesting about this technology is that it can do so much more than classify images,” said Aflatouni. “We already know how to convert many data types into the electrical domain – images, audio, speech, and many other data types. Now, we can convert different data types into the optical domain and have them processed almost instantaneously using this technology.”
Study Abstract:
"Deep neural networks with applications from computer vision to medical diagnosis are commonly implemented using clock-based processors, in which computation speed is mainly limited by the clock frequency and the memory access time. In the optical domain, despite advances in photonic computation, the lack of scalable on-chip optical non-linearity and the loss of photonic devices limit the scalability of optical deep networks. Here we report an integrated end-to-end photonic deep neural network (PDNN) that performs sub-nanosecond image classification through direct processing of the optical waves impinging on the on-chip pixel array as they propagate through layers of neurons. In each neuron, linear computation is performed optically and the non-linear activation function is realized opto-electronically, allowing a classification time of under 570 ps, which is comparable with a single clock cycle of state-of-the-art digital platforms. A uniformly distributed supply light provides the same per-neuron optical output range, allowing scalability to large-scale PDNNs. Two-class and four-class classification of handwritten letters with accuracies higher than 93.8% and 89.8%, respectively, is demonstrated. Direct, clock-less processing of optical data eliminates analog-to-digital conversion and the requirement for a large memory module, allowing faster and more energy-efficient neural networks for the next generations of deep learning systems."