Abstract:
With the exponential increase in the amount of data collected each day, the fields of artificial intelligence and machine learning continue to progress rapidly with respect to algorithms, models, applications and hardware. In particular, deep neural networks have revolutionized the field by providing unprecedented human-like performance in solving many real-world problems such as image and speech recognition. There is also significant research aimed at unravelling the principles of computation in large biological neural networks and, in particular, in biologically plausible spiking neural networks. Research efforts are likewise directed towards developing energy-efficient computing systems for machine learning and AI, and new system architectures and computational models, from tensor processing units to in-memory computing, are being explored. Reducing energy consumption requires careful design choices from many perspectives, including the choice of model, approximations of the model for reduced storage and memory access, the choice of precision for different layers of a network, and computing with beyond-CMOS devices such as memristive devices. This full-day tutorial will provide a detailed overview of new developments related to energy-efficient machine learning in CMOS and beyond-CMOS technologies. Specific topics include: (a) Low-Energy Machine Learning, (b) From Matrix to Tensor: Algorithm and Hardware Co-Design for Energy-Efficient Deep Learning, (c) Bringing AI to the Edge - A Shannon-inspired Approach, (d) In-Memory Computing using Memristive Devices and Applications in Machine Learning, and (e) Algorithms and System Design for Brain-Inspired Spiking Neural Networks (SNNs).