Researchers At MIT Propose A New Method That Uses Optics To Accelerate Machine Learning Computations On Smart Speakers And Other Low-Power Connected Devices

Researchers have developed a new technique for performing machine-learning computations directly on smart home gadgets. Their method offloads the memory-intensive steps of running a model to a central server, which can sit hundreds of miles from the device, where the model's components are encoded onto light waves rather than being stored on the gadget itself.

Fiber optics carries those waves to the connected device, allowing massive amounts of data to be sent over a network at incredible speeds. The receiver then uses a simple optical device to rapidly perform computations with the model components delivered by those light waves.

Compared with earlier methods, this approach improves energy efficiency more than a hundredfold. It could also improve security, since user data would not need to be routed to a central location for processing.

By using a fraction of the energy that today's power-hungry processors require, this technique could allow a self-driving car to make decisions in real time. It could also be used to classify images quickly on a spacecraft millions of kilometers from Earth, analyze live video transmitted over cellular networks, or enable latency-free conversation between a user and their smart home device.

The study was published today in the journal Science. The system's data pipe is enormous: the equivalent of a full feature-length movie enters it over the network every few milliseconds or so, and the hardware can process information at that same rate.

Reducing the workload

Machine learning relies on neural networks, which are layers of interconnected nodes, or neurons, to recognize patterns in datasets and perform tasks such as speech recognition and image classification. But these models can contain billions of weight parameters, numerical values that transform the input data as it passes through the layers. All of those weights must be kept in memory, and transforming the data requires billions of algebraic operations, which consumes a great deal of power.
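To make that scale concrete, here is a minimal NumPy sketch (the layer sizes are made up for illustration and are not taken from the paper) showing that running a network means fetching every stored weight from memory and using it in a multiply-accumulate operation:

```python
# Minimal sketch (not the paper's code): why inference is memory- and
# compute-hungry. Layer sizes below are hypothetical, for illustration only.
import numpy as np

layer_sizes = [4096, 4096, 4096, 1000]          # hypothetical network
weights = [np.random.randn(m, n).astype(np.float32)
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

n_params = sum(w.size for w in weights)
print(f"weights to fetch from memory: {n_params:,} "
      f"({n_params * 4 / 1e6:.1f} MB at 32 bits each)")

def forward(x):
    # Every layer is a matrix-vector product: each stored weight must be
    # read from memory and multiplied once per inference.
    for w in weights:
        x = np.maximum(x @ w, 0.0)               # multiply-accumulate + ReLU
    return x

y = forward(np.random.randn(layer_sizes[0]).astype(np.float32))
```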

To address this, the researchers built a device that streams this data from memory to the parts of the computer that actually carry out the computation.

They created Netcast, a neural network architecture in which the weights are stored on a central server connected to a novel piece of hardware called a smart transceiver. This smart transceiver, a thumb-sized transmitter and receiver built with silicon photonics, can retrieve trillions of weights from memory every second.

The retrieved weights arrive as electrical signals and are imprinted onto light waves. The transceiver converts the data, encoded as bits (1s and 0s), by switching lasers on and off: a laser is turned on for a 1 and off for a 0. It combines these light waves and periodically streams them over a fiber-optic network, so a client device never has to contact the server to fetch them.
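As a rough illustration of that encoding step, the sketch below (a hypothetical helper, not the researchers' code; the bit width and mapping are assumptions) quantizes a weight into bits and maps each bit to a laser on/off state, the way on-off keying would:

```python
# Conceptual sketch of on-off keying: each bit of a quantized weight becomes
# one laser symbol (on = 1, off = 0). The bit width and framing here are
# assumptions for illustration, not the paper's exact format.
import numpy as np

def weight_to_laser_symbols(weight: float, bits: int = 8) -> list[int]:
    """Quantize a weight in [-1, 1] to `bits` bits and return on/off states."""
    level = int(round((weight + 1.0) / 2.0 * (2**bits - 1)))   # map to [0, 2^bits - 1]
    return [(level >> i) & 1 for i in reversed(range(bits))]    # MSB first

weights = np.array([0.42, -0.17, 0.93])
stream = [s for w in weights for s in weight_to_laser_symbols(float(w))]
print(stream)   # e.g. [1, 0, 1, 1, ...]  -> one laser on/off state per symbol
```

In Netcast, the server streams such symbols over fiber continuously, so the client simply listens rather than issuing requests.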

Optics is well suited to carrying this data because light offers many ways to encode it; for instance, data can be placed on different colors, or wavelengths, of light, which enables far higher data throughput and bandwidth than electronic links.
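A back-of-the-envelope calculation shows why carrying data on many colors at once matters; the channel count, symbol rate, and bit width below are illustrative assumptions, not figures reported by the researchers:

```python
# Back-of-the-envelope throughput of a wavelength-multiplexed link.
# All numbers are illustrative assumptions.
wavelength_channels = 100        # distinct "colors" carrying data in parallel
symbols_per_second = 10e9        # on/off symbols per channel per second
bits_per_weight = 8

weights_per_second = wavelength_channels * symbols_per_second / bits_per_weight
print(f"{weights_per_second:.2e} weights delivered per second")   # ~1.25e+11
```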

Trillions per second

MIT researchers have created a device that uses light waves to carry out these calculations at the speed of modern computers. The Netcast client consumes extremely little power yet can perform trillions of multiplications per second. It is powered by a simple optical component known as a Mach-Zehnder modulator, which puts the light waves to work once they have reached the client device, combining the streamed weights with the device's own input data.
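The following toy model (an idealized abstraction in plain NumPy, not a simulation of the actual photonic hardware) captures the essential idea: the incoming light carries a weight, the modulator scales it by the local input, and a detector summing the result accumulates a dot product:

```python
# Idealized model of analog optical multiply-accumulate. Real Mach-Zehnder
# modulators have a sinusoidal transfer function and noise; this sketch keeps
# only the essential idea: multiply in the optical domain, sum at the detector.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0, 1, size=64)     # encoded in the incoming light power
inputs  = rng.uniform(0, 1, size=64)     # applied locally by the modulator

optical_power = weights * inputs         # modulator scales each weight pulse
dot_product   = optical_power.sum()      # photodetector integrates over time

print(np.allclose(dot_product, weights @ inputs))   # True: a dot product, computed optically
```

Because the multiplication happens as the light passes through the modulator, the client's electronics are left with little to do beyond driving the modulator and reading out the detector.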

In tests, Netcast ran machine learning quickly and accurately (98.7% for image classification and 98.8% for digit recognition), even with the weights conveyed over an 86-kilometer fiber. The researchers intend to keep iterating on the technology to improve performance further. They also want to shrink the receiver so it can fit on a smart device such as a cell phone.

This article is written as a research summary by Marktechpost staff, based on the research paper 'Delocalized Photonic Deep Learning on the Internet's Edge'. All credit for this research goes to the researchers on this project. Check out the paper and reference article.


Ashish Kumar is a consulting intern at MarktechPost. He is currently pursuing his BTech at the Indian Institute of Technology (IIT), Kanpur. He is passionate about exploring new advancements in technology and their real-life applications.


