Researchers at the University of Pennsylvania Propose a New Computing Architecture Ideal for Artificial Intelligence (AI)

Conventional computing architectures severely constrain what artificial intelligence can achieve. In the traditional model, memory storage and computation occur in separate parts of the machine, so data must be shuttled from where it is stored to a CPU or GPU for processing. The most significant drawback of this design is that the movement takes time, throttling even the most powerful processors available: when compute throughput outpaces memory transfer, lag is unavoidable. These delays become a serious problem at the massive data volumes required by machine learning and AI applications.
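To see why the transfer cost dominates, consider a rough back-of-the-envelope model. This is only a sketch: the throughput and bandwidth figures below are hypothetical round numbers chosen for illustration, not measurements from the paper.

```python
# Rough model of the "memory wall" (illustrative numbers only; these are
# assumptions for the sketch, not figures from the research).

FLOPS_PEAK = 100e12   # hypothetical accelerator: 100 TFLOP/s
BANDWIDTH = 1e12      # hypothetical memory bandwidth: 1 TB/s

def matvec_times(n, bytes_per_value=4):
    """Time to multiply an n x n weight matrix by a vector."""
    flops = 2 * n * n                      # one multiply + one add per weight
    bytes_moved = n * n * bytes_per_value  # every weight crosses the bus once
    t_compute = flops / FLOPS_PEAK
    t_memory = bytes_moved / BANDWIDTH
    return t_compute, t_memory

t_c, t_m = matvec_times(n=16_384)
print(f"compute: {t_c*1e6:.1f} us, memory transfer: {t_m*1e6:.1f} us")
# Memory transfer takes roughly 200x longer than the arithmetic here: the
# processor idles waiting for weights, which is exactly the delay that
# compute-in-memory designs aim to remove.
```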

Researchers have turned to hardware innovation to deliver the necessary gains in speed, agility, and energy efficiency as AI software grows more sophisticated and the sensor-heavy Internet of Things produces ever-larger datasets. A team of researchers from the University of Pennsylvania’s School of Engineering and Applied Science, in collaboration with researchers from Sandia National Laboratories and Brookhaven National Laboratory, has created a new computing architecture based on compute-in-memory (CIM) that is ideal for AI. In CIM systems, processing and storage take place in the same place, which eliminates transfer time and reduces energy consumption. The team’s CIM design stands out for containing no transistors at all, an approach tailored to the way Big Data applications have transformed modern computing.

Transistors limit the speed at which data can be accessed, even in a compute-in-memory architecture. Because they require extensive wiring in a chip’s overall circuitry, they consume more time, space, and energy than is ideal for AI applications. The team’s transistor-free design is distinctive in that it is simple, fast, and energy-efficient. The researchers emphasize that the advance goes beyond circuit-level design: their earlier materials science work on a semiconductor known as scandium-alloyed aluminum nitride (AlScN) is the foundation of the new architecture. AlScN is capable of ferroelectric switching, which makes it faster and more energy-efficient than competing nonvolatile memory components. Just as important, the material can be deposited at temperatures low enough to be compatible with silicon foundries, which allows the architecture to remain space-efficient, a crucial property for compact chip designs.

Other studies have successfully applied compute-in-memory architectures to boost performance for AI applications, but current methods cannot resolve the trade-off between performance and flexibility. Memristor crossbar arrays, for example, achieve high performance by mimicking the structure of the human brain to accelerate neural networks. Yet functioning AI requires several important categories of data operations, and neural network operations, which use layers of algorithms to process data and recognize patterns, are only one of them. Compared with alternative compute-in-memory architectures, the team’s ferrodiode approach offers ground-breaking versatility: it performs all three data operations essential to successful AI applications with comparable proficiency and higher accuracy. It provides matrix multiplication acceleration, the core of neural network computing; parallel search, which enables precise data filtering and analysis; and on-chip storage, the ability to hold the large volumes of data that deep learning demands. A conceptual sketch of these three roles follows.
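The NumPy sketch below mimics a generic crossbar to show how a single stored array can serve all three roles. To be clear, this is a conceptual simulation: the class name, the conductance matrix, and the distance-based match rule are my own illustrative assumptions, not the team’s ferrodiode device model.

```python
import numpy as np

# Conceptual sketch of a compute-in-memory crossbar. NOT the team's device
# model; it only illustrates the three operations the article names:
# on-chip storage, matrix multiplication, and parallel search.

class CrossbarSketch:
    def __init__(self, weights):
        # On-chip storage: the weights live in the array itself (here, as
        # nonvolatile "conductances"), so no weight traffic to a CPU/GPU.
        self.G = np.asarray(weights, dtype=float)  # rows x columns

    def matmul(self, v):
        # Matrix multiplication: applying input "voltages" v to the rows
        # sums "currents" along each column in one step, the analog
        # multiply-accumulate that accelerates neural network layers.
        return self.G.T @ v

    def parallel_search(self, query):
        # Parallel search: every stored row is compared against the query
        # at once; return the indices of the closest matches.
        dists = np.abs(self.G - query).sum(axis=1)
        return np.flatnonzero(dists == dists.min())

xbar = CrossbarSketch([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(xbar.matmul(np.array([0.5, 2.0, 1.0])))      # -> [1.5 3. ]
print(xbar.parallel_search(np.array([1.0, 1.0])))  # -> [2]
```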

In conventional architectures, running pattern recognition and search within the same AI application requires separate portions of the chip, so the available area is quickly exhausted. With the team’s ferrodiode, however, a user can perform both functions in the same portion of the chip simply by altering the applied voltage during programming. When the chip was tested on a simulated machine learning task, it matched the accuracy of AI software running on a standard CPU. This result makes the research especially significant, since it demonstrates how a memory technology can yield chips that integrate multiple AI data applications in a way that genuinely challenges conventional computing.
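To give a flavor of that kind of accuracy comparison, here is a toy experiment contrasting exact floating-point inference with coarsely quantized weights, mimicking the discrete conductance levels an in-memory device would store. The data, the classifier, and the 16-level quantization scheme are entirely hypothetical and are not the paper’s benchmark.

```python
import numpy as np

# Toy accuracy comparison (my own setup, not the paper's benchmark):
# the same tiny linear classifier evaluated with exact float weights and
# with weights quantized to a few discrete "conductance" levels.

rng = np.random.default_rng(0)
n, d = 1000, 16
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)

# Least-squares "training" of a linear classifier on +/-1 targets.
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

def accuracy(weights):
    return ((X @ weights > 0).astype(int) == y).mean()

# Quantize weights to 16 levels, mimicking discrete conductance states.
levels = 16
scale = np.abs(w).max()
w_q = np.round(w / scale * (levels // 2)) / (levels // 2) * scale

print(f"float weights:     {accuracy(w):.3f}")
print(f"quantized weights: {accuracy(w_q):.3f}")
# With enough levels, the quantized version tracks the float baseline
# closely, echoing the accuracy parity the article describes.
```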

The team’s design strategy reflects the fact that AI is neither hardware nor software alone but an essential combination of the two. The researchers stress that all modern AI computing runs as software on silicon hardware whose architecture was designed decades ago. They predict that redesigning hardware for AI will be the next major advance in semiconductors and microelectronics, and that future research will center on the co-design of hardware and software. The team’s objective was to build hardware that improves software performance, and with the new architecture they aimed to ensure that the technology is both accurate and fast.

This article is a research summary written by Marktechpost Staff based on the research paper 'Reconfigurable Compute-In-Memory on Field-Programmable Ferroelectric Diodes'. All credit for this research goes to the researchers on this project. Check out the paper and reference article.



Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.


