- Memory Chips that Compute Will Accelerate AI - Samsung could double the performance of neural nets with processing-in-memory.
- Real-Time Evolution and Deployment of Neuromorphic Computing at the Edge - Neuromorphic systems can provide real-time edge control with very low power dissipation.
- Frontier Supercomputer to Usher in Exascale Computing - Oak Ridge National Lab may be the first to reach 10^18 operations per second.
- The Femtojoule Promise of Analog AI - Analog neural networks based on non-volatile memory arrays promise dramatic reductions in power for many AI applications.
- Supercomputers Flex Their AI Muscles - AI benchmarks recently run on the latest supercomputers worldwide show exascale performance.
- Explainable AI and ML: Guest Editors' Introduction, M. Raunak and R. Kuhn, IEEE Computer, October 2021 - Overview of a special issue on how machine learning and training may be better understood; several of the articles are open-access.
- The Great AI Reckoning: Special Report, IEEE Spectrum, October 2021 - This special report includes eight featured articles on the past, present, and future of artificial intelligence.
- Next-Gen Chips Will Be Powered From Below - Redesigning the power delivery lines in state-of-the-art silicon chips will increase energy efficiency, permitting Moore's Law to continue a bit longer.
- Cerebras' Tech Trains "Brain-Scale" AIs - A dedicated AI training system built around a wafer-scale chip further redesigns the memory architecture to increase the scale and speed of machine learning.
- Can Software Performance Engineering Save Us From the End of Moore's Law? - Although custom optimization of software for modern platforms can be difficult, the performance improvements can be quite substantial.