AI Overvaluation Concerns Dismissed

The author is an analyst at NH Investment & Securities. He can be reached at hwdoh@nhqv.com. -- Ed.


At AAAI 2020, authorities on machine learning dismissed concerns about AI overvaluation while pointing out the technical limitations of GPUs. To address the shortcomings of GPUs, industry leaders highlighted the potential of NPUs featuring large-scale built-in memory.

AI overvaluation concerns dismissed

At the Association for the Advancement of Artificial Intelligence (AAAI) conference held last week, world leaders in machine learning, including Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, discussed the challenges facing AI. Regarding the recently emerging theory that AI technologies face inherent limitations, LeCun said that deep learning's limits mainly relate to supervised learning, in which both input and output data are provided by researchers. He went on to forecast that unsupervised learning (relatively free from the control of engineers) may be the game changer that kicks off the next AI revolution.
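To make the distinction concrete, the toy sketch below (our own illustration, not code presented at the conference; the PyTorch layers and random data are hypothetical) contrasts a supervised objective, where the researcher supplies both inputs and labels, with an unsupervised autoencoder objective, where the target is simply the input itself.

# Illustrative sketch: supervised vs. unsupervised objectives (hypothetical toy model).
import torch
import torch.nn as nn

x = torch.randn(64, 10)          # unlabeled input data
y = torch.randn(64, 1)           # labels, only available in the supervised case

# Supervised: both input and output are supplied by the researcher.
supervised_model = nn.Linear(10, 1)
supervised_loss = nn.MSELoss()(supervised_model(x), y)

# Unsupervised (autoencoder): the target is the input itself, so no labels are needed.
autoencoder = nn.Sequential(nn.Linear(10, 3), nn.ReLU(), nn.Linear(3, 10))
unsupervised_loss = nn.MSELoss()(autoencoder(x), x)

print(supervised_loss.item(), unsupervised_loss.item())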

Meanwhile, Geoffrey Hinton argued that convolutional neural networks (CNNs), the current go-to algorithms for deep learning in image recognition, face limitations stemming from their failure to account for spatial hierarchies between simple and complex objects. To solve this problem, he proposed using a ‘capsule network’ algorithm capable of freely representing complex objects via dynamic routing.
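As a rough illustration of the idea, the minimal sketch below assumes the routing-by-agreement procedure described in Sabour, Frosst, and Hinton's 2017 capsule network paper (the shapes and iteration count are our own): lower-level capsules "vote" for higher-level capsules, and coupling coefficients are iteratively strengthened for votes that agree with the resulting output.

# Illustrative sketch of dynamic routing between capsules (assumed procedure, toy sizes).
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Non-linearity that preserves a vector's orientation but bounds its length below 1.
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: prediction vectors, shape (num_input_capsules, num_output_capsules, dim).
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                                # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum of votes
        v = squash(s)                                          # output capsules
        b += (u_hat * v[None, :, :]).sum(axis=-1)              # reward agreeing votes
    return v

votes = np.random.randn(6, 2, 8)     # 6 input capsules voting for 2 output capsules
print(dynamic_routing(votes).shape)  # (2, 8)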

At the conference, attention was also given to the limitations of GPUs, the mainstay computing devices of the machine learning world. Because GPUs have only limited internal memory available for updating neural network weights, weights must constantly be written to and retrieved from external DRAM, a significant disadvantage. To address this issue, it was suggested that NPUs featuring large-scale built-in memory could prove more efficient.
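A back-of-the-envelope sketch of this bottleneck, using our own hypothetical numbers rather than figures cited at the conference: when a model's weights exceed the on-chip memory budget, every training step pays off-chip DRAM traffic just to read the weights and write the updated values back.

# Illustrative calculation with hypothetical model and hardware figures.
params = 350e6                 # hypothetical model size (number of weights)
bytes_per_param = 4            # FP32
on_chip_mb = 40                # hypothetical on-chip SRAM budget, in MB
dram_bw_gb_s = 900             # hypothetical external DRAM bandwidth, in GB/s

weight_bytes = params * bytes_per_param
fits_on_chip = weight_bytes <= on_chip_mb * 1e6

# Per step: read weights + write updated weights (gradients and activations ignored here).
traffic_bytes = 0 if fits_on_chip else 2 * weight_bytes
print(f"Weights: {weight_bytes / 1e6:.0f} MB, fit on chip: {fits_on_chip}")
print(f"Off-chip traffic per step: {traffic_bytes / 1e9:.2f} GB "
      f"(~{traffic_bytes / (dram_bw_gb_s * 1e9) * 1e3:.2f} ms at {dram_bw_gb_s} GB/s)")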

AI chip development trend negative for GPU market

Speakers at AAAI pointed out that the dominance of Nvidia's GPUs and Google's TPUs in the AI semiconductor market is a concern. However, it was noted that several semiconductor development projects (including inference-focused chips) with the potential to address this issue are currently underway at a range of industry players.

We note that Facebook, working with Yann LeCun, is delving into AI chip design, alongside domestic and international startups such as Groq, GreenWaves, Eta Compute, Esperanto Tech, Xnor, Picovoice, and Furiosa AI. In our view, this trend is favorable both for small startups developing computational chips to replace GPUs and for firms manufacturing the chip design tools essential to the development process; related plays include Cadence and Synopsys. However, we view the trend as a negative for Nvidia, which has captured market attention for its potential to monopolize machine learning technology.

Copyright © BusinessKorea. Unauthorized reproduction and redistribution prohibited.