Researchers Develop Neural Network with Remarkable Mind-Reading Capabilities
A host of web-based features has popped up over the past decade that seem to anticipate our search preferences, suggest which friends we might want to add to our circles, or even predict which products we may want to buy. These tech-driven innovations seem to know what’s on our minds and adjust in subtle (or sometimes not so subtle) ways. A team of Japanese researchers at Kyoto University, however, has taken the idea a few steps further, achieving remarkable results.
They developed a new technique they have termed “deep image reconstruction,” which can decode a far more sophisticated set of images than earlier methods that reconstructed images from pixels or simple shapes. In this approach, a deep generator network (DGN) “is optionally combined with the [deep neural network] DNN to produce natural-looking images, in which optimization is performed at the input space of the DGN.” Details from the study, titled “Deep image reconstruction from human brain activity,” were shared at the end of December and are awaiting peer review.
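To make that loop concrete, here is a minimal sketch in PyTorch of optimizing a generator’s latent input so that the generated image’s DNN features match a set of target features. The tiny feature extractor and generator below are random stand-ins, not the pretrained models used in the study; only the shape of the procedure is meant to mirror the paper’s description.

```python
# Sketch of "deep image reconstruction": optimize the input (latent vector) of
# a generator so that a feature extractor's response to the generated image
# matches target features. Both networks are toy random stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in DNN feature extractor (the study used a pretrained image DNN).
feature_net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32),
)

# Stand-in deep generator network (DGN): latent vector -> image.
generator = nn.Sequential(
    nn.Linear(16, 3 * 32 * 32), nn.Tanh(),
    nn.Unflatten(1, (3, 32, 32)),
)

# Both networks stay fixed; only the latent input is optimized.
for p in list(feature_net.parameters()) + list(generator.parameters()):
    p.requires_grad_(False)

# Pretend these are DNN features decoded from a participant's brain activity.
target_features = feature_net(torch.rand(1, 3, 32, 32))

# "Optimization is performed at the input space of the DGN": z is the variable.
z = torch.zeros(1, 16, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image = generator(z)  # the DGN acts as a natural-image prior
    loss = nn.functional.mse_loss(feature_net(image), target_features)
    loss.backward()
    optimizer.step()

reconstruction = generator(z).detach()  # the reconstructed image
print(f"final feature-matching loss: {loss.item():.4f}")
```

In the actual experiments, the target features were not computed from an image at all; they were decoded from fMRI recordings, which is what makes the generated picture a reconstruction of what the participant saw.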

The research was conducted over a 10-month period and began with the team creating three categories of images for three participants to view for differing lengths of time:
• Artificial Geometric Shapes
• Natural Images, including people or animals
• Letters of the Alphabet
The data collected from their brain activity, recorded both during and after the viewing sessions, was decoded via a neural network. And just like that, interpretations of their thoughts could be generated (imagine a kind of neural vending machine).
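The decoding step amounts to learning a mapping from measured brain signals to the DNN’s feature values. The sketch below illustrates the idea on synthetic data; the use of ridge regression and every number in it are assumptions for illustration, since the article does not specify the exact model.

```python
# Illustrative decoding step: fit a linear map from simulated fMRI voxel
# patterns to DNN feature values. Ridge regression and all data are stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 500, 1000, 32

# Synthetic "brain activity" and the DNN features it (noisily) encodes.
voxels = rng.normal(size=(n_trials, n_voxels))
true_weights = rng.normal(size=(n_voxels, n_features))
features = voxels @ true_weights + rng.normal(scale=5.0, size=(n_trials, n_features))

X_train, X_test, y_train, y_test = train_test_split(voxels, features, random_state=0)

decoder = Ridge(alpha=100.0).fit(X_train, y_train)  # voxels -> features
print(f"held-out R^2: {decoder.score(X_test, y_test):.3f}")
```

Once such a decoder is trained, its predicted features can serve as the targets for the reconstruction loop sketched earlier.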

Kyoto University Graduate School of Informatics Professor Yukiyasu Kamitani, who was part of the team, explains how their work builds on previous research: “We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person's brain activity. Our previous method was to assume that an image consists of pixels or simple shapes. But it's known that our brain processes visual information hierarchically extracting different levels of features or components of different complexities.”
Kamitani also notes why artificial networks are a natural fit for modeling this process: “These neural networks or AI models can be used as a proxy for the hierarchical structure of the human brain.”
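That hierarchy is straightforward to demonstrate in code: a convolutional network’s early layers respond to simple local patterns, while deeper layers build more complex representations. The three-layer network below is a toy illustration, not the study’s model; it only shows what reading out features at several levels of the hierarchy looks like.

```python
# Toy illustration of hierarchical feature extraction: collect a small CNN's
# responses at increasing depths, from simple to complex representations.
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),    # early: edge-like
    nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU()),   # middle: textures
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),  # late: object parts
])

image = torch.rand(1, 3, 64, 64)  # a stand-in input image
x, hierarchy = image, []
for layer in layers:
    x = layer(x)
    hierarchy.append(x)  # keep the features at every level

for depth, feats in enumerate(hierarchy, start=1):
    print(f"level {depth}: feature map shape {tuple(feats.shape)}")
```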
Kamitani is quick to acknowledge that the technique needs further development, but once perfected it could revolutionize visualization technology and brain-machine interfaces, with applications ranging from product placement to psychiatric care, where visualizing patients' hallucinations could improve treatment.
NASA "are simply the best in the world at modeling these materials, hands down," SMART Tire co-founder Brian Yennie tells IE.