Stanford assistant professor Gordon Wetzstein and Julie Chang, a graduate student and first author of the paper Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification, published in Nature Scientific Reports, have married two types of computers into one, creating an optical-electronic hybrid computer aimed at image analysis. The prototype camera's first layer is an optical computer, which does not require power-intensive mathematical computation; the second layer is a conventional electronic computer.
The optical computer physically preprocesses the image data, filtering it in multiple ways that an electronic computer would otherwise have to perform mathematically. Because the filtering happens naturally as light passes through the custom optics, this layer operates with zero input power. The hybrid model therefore saves much of the time and power that would otherwise be consumed by image computation.
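To make the division of labor concrete, here is a minimal sketch (not the authors' code) of how such a hybrid model can be expressed in PyTorch: the first convolutional layer is frozen, standing in for the optical front end that filters the scene as light passes through it, and everything after it is the trainable electronic part. The layer sizes and kernel shapes are hypothetical, chosen only for illustration.

```python
import torch
import torch.nn as nn

class HybridOpticalElectronicNet(nn.Module):
    """Illustrative hybrid model: fixed 'optical' filtering + trainable electronic classifier."""

    def __init__(self, num_classes=10):
        super().__init__()
        # "Optical" layer: a fixed bank of convolution kernels. In the real prototype
        # these filters are implemented by optimized diffractive optics, so they cost
        # no electrical power and no digital multiply-adds at inference time.
        self.optical_layer = nn.Conv2d(3, 16, kernel_size=7, padding=3, bias=False)
        for p in self.optical_layer.parameters():
            p.requires_grad = False  # the optics are fixed once fabricated

        # Electronic back end: a small trainable CNN that finishes the classification.
        self.electronic = nn.Sequential(
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        x = self.optical_layer(x)   # performed "by light" in the prototype
        return self.electronic(x)   # performed by the digital processor

model = HybridOpticalElectronicNet()
logits = model(torch.randn(1, 3, 224, 224))  # dummy image batch
print(logits.shape)  # torch.Size([1, 10])
```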
Chang said, “We’ve outsourced some of the math of artificial intelligence into the optics.”
This results in fewer calculations, which in turn means fewer calls to memory and far less time to complete the process. Having the optics handle these preprocessing steps gives the digital computer a head start on the remaining analysis.
Wetzstein said, “Millions of calculations are circumvented and it all happens at the speed of light. Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles.”
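As a rough illustration of where those savings come from, the back-of-the-envelope tally below counts the multiply-accumulate operations a digital processor would need for a single first-layer convolution; in the hybrid design that work is done by the optics instead. The image size, channel counts, and kernel size here are hypothetical and not taken from the paper.

```python
# Hypothetical first-layer convolution a digital chip would otherwise compute per image.
H, W = 224, 224          # image height and width (illustrative)
C_in, C_out = 3, 16      # input channels, number of optical filters (illustrative)
K = 7                    # kernel size (illustrative)

macs = H * W * C_in * C_out * K * K
print(f"{macs:,} multiply-accumulates skipped per image")  # ~118 million
```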
The prototype rivals existing electronic-only processors in speed and accuracy, but with substantial savings in computational cost, which translate directly into time. The current prototype is laid out on a lab bench and is far from handheld, though the researchers say the system could one day be made small enough to hold in the hand.
Wetzstein, Chang, and the other researchers at the Stanford Computational Imaging Lab are now working on ways to make the optical component do even more of the preprocessing. The result would be a smaller, faster AI camera system that could replace the trunk-sized computers currently used in cars and drones.
Notably, the system successfully identified objects in both simulations and real-world experiments.
For more information, see the official Stanford news website and the research paper.