New Method Forestalls Data Preprocessing Bottlenecks


The Korea Advanced Institute of Science and Technology (KAIST) announced on Jan. 10 that a research team led by professor Jung Myung-su at its School of Electrical Engineering has developed the world’s first technique for holistically accelerating graph neural network inference near storage, along with an SSD-based accelerator capable of graph AI inference.

At present, graph-based neural network machine learning is conducted on general-purpose machine learning accelerators such as GPUs. This approach causes serious data preprocessing bottlenecks and memory shortages. With the technique developed by the team, however, every step of inference is accelerated directly next to the storage holding the graph data, forestalling the bottleneck.
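The bottleneck described above can be illustrated with a toy example. The sketch below is a generic single-layer graph convolution in NumPy, not the KAIST framework: in a GPU pipeline, the preprocessing step (normalizing the graph's adjacency structure) and the transfer of graph data from storage to device memory typically dominate end-to-end latency, which is the cost that near-storage acceleration avoids.

```python
# Illustrative sketch (a generic GCN layer, NOT the KAIST system).
import numpy as np

def preprocess(adj):
    # Preprocessing: add self-loops and symmetrically normalize the
    # adjacency matrix, D^-1/2 (A + I) D^-1/2, as in a standard GCN.
    # On a GPU pipeline, this step plus moving the graph out of storage
    # is the bottleneck the article describes.
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def gcn_layer(a_norm, features, weights):
    # Inference: aggregate neighbor features, apply weights and ReLU.
    return np.maximum(a_norm @ features @ weights, 0.0)

# Tiny 3-node path graph: 0-1, 1-2.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3)            # one-hot node features
w = np.full((3, 2), 0.5)     # toy weight matrix
out = gcn_layer(preprocess(adj), feats, w)
print(out.shape)  # (3, 2)
```

In real workloads the adjacency structure is far too large to build densely; the point of the sketch is only that preprocessing and data movement sit in front of every inference, so performing both next to the SSD removes them from the critical path.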

The team built a prototype computational storage device and applied a software framework to it along with the hardware it developed at the register-transfer logic (RTL) level. In a test against an Nvidia GPU acceleration system (RTX 3090), the team confirmed that the new technique delivers a seven-fold improvement in speed and a 33-fold reduction in energy consumption compared to existing Nvidia GPU-based graph machine learning acceleration.

“We also confirmed that the preprocessing bottleneck shrinks further as the graph grows, with maximum improvements of 201-fold and 453-fold, respectively,” the professor said, adding, “The new machine learning model is capable of showing data correlations and can be utilized in various fields, including large-scale recommendation, traffic prediction and new drug development.” Details of the research are scheduled to be presented at USENIX FAST 2022 next month.

Copyright © BusinessKorea. Prohibited from unauthorized reproduction and redistribution