Important Update
Network Computing to Stop Publishing
On Sep. 30, 2025, Network Computing will stop publishing. Thank you to all our readers for being with us on this journey.

NVIDIA Unveils a Barrage of AI Products and Capabilities at Computex 2025
NVIDIA, the AI industry's biggest thought leader, has rolled out a bevy of products to support the upcoming AI wave.

When NVIDIA founder and CEO Jensen Huang takes the stage for a keynote at a major computer industry event, there's little doubt that he'll announce several innovations and enhancements from his industry-leading GPU company. That's just what he did this week to kick off Computex 2025 in Taipei, Taiwan. Anyone who has attended a major event with Huang keynoting is accustomed to him unveiling a slew of innovations to advance AI.

Huang opened the conference by describing how AI is revolutionizing the world, then explained how NVIDIA is enabling that revolution. His passion for the benefits AI can deliver is evident in the new products NVIDIA and its partners are rapidly developing.

"AI is now infrastructure," Huang said. "And this infrastructure, just like the internet, just like electricity, needs factories. These factories are essentially what we build today." He added that these factories are "not the data centers of the past," but factories where "you apply energy to it, and it produces something incredibly valuable."

Most of the news focused on products to build bigger, faster and more scalable AI factories.

Introducing NVLink Fusion

One of the biggest challenges in scaling AI is keeping data flowing between GPUs and systems. Traditional networks can't move data reliably or fast enough to keep up with the connectivity demands. During his keynote, Huang described the challenges of scaling AI and why it is, at its core, a networking problem.

"The way you scale is not just to make the chips faster," he said. "There's only a limit to how fast you can make chips and how big you can make chips. In the case of [NVIDIA] Blackwell, we even connected two chips together to make it possible." NVIDIA NVLink Fusion aims to resolve these limitations, he said.
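A back-of-the-envelope sketch helps show why interconnect speed, not just chip speed, becomes the scaling bottleneck. The model size and precision below are illustrative assumptions, not NVIDIA figures:

```python
# Back-of-the-envelope: time to move a model's weights across a link.
# The model size and precision are illustrative assumptions, not NVIDIA specs.

def transfer_seconds(params_billions: float, bytes_per_param: float,
                     link_gbps: float) -> float:
    """Seconds to push a model's weights over a link of link_gbps gigabits/s."""
    total_bits = params_billions * 1e9 * bytes_per_param * 8
    return total_bits / (link_gbps * 1e9)

# A hypothetical 70-billion-parameter model at FP16 (2 bytes/param) is
# 140 GB of weights -- roughly 11 s at 100 Gbps, under 2 s at 800 Gbps.
print(transfer_seconds(70, 2, 100))  # ~11.2 seconds
print(transfer_seconds(70, 2, 800))  # ~1.4 seconds
```

Faster GPUs only widen this gap: the quicker each accelerator finishes its share of the work, the more the time spent waiting on the network dominates.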
NVLink connects a rack of servers over one backbone and enables customers and partners to build their own custom rack-scale designs. The ability for system designers to use third-party CPUs and accelerators with NVIDIA products creates new possibilities in how enterprises deploy AI infrastructure. According to Huang, NVLink creates "an easy path to scale out AI factories to millions of GPUs, using any ASIC, NVIDIA's rack-scale systems and the NVIDIA end-to-end networking platform."

It delivers up to 800 Gbps of throughput and features the following:

- NVIDIA ConnectX-8 SuperNICs
- NVIDIA Spectrum-X Ethernet
- NVIDIA Quantum-X800 InfiniBand switches

Powered by Blackwell

Computing power is the fuel of AI innovation, and the engine driving NVIDIA's AI ecosystem is its Blackwell architecture. Huang said Blackwell delivers a single architecture from cloud AI to enterprise AI, and from personal AI to edge AI.

Among the products powered by Blackwell is DGX Spark, described by Huang as being "for anybody who would like to have their own AI supercomputer." DGX Spark is a smaller, more versatile version of the company's DGX-1, which debuted in 2016. It will be available from several computer manufacturers, including Dell, HP, ASUS, Gigabyte, MSI and Lenovo. Equipped with NVIDIA's GB10 Grace Blackwell Superchip, DGX Spark delivers up to 1 petaflop of AI compute and 128 GB of unified memory.

"This is going to be your own personal DGX supercomputer," Huang said. "This computer is the most performance you can possibly get out of a wall socket."

Designed for the most demanding AI workloads, DGX Station is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip, which delivers up to 20 petaflops of AI performance and 784 GB of unified system memory. Huang said that's "enough capacity and performance to run a 1 trillion parameter AI model."
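The trillion-parameter claim is easy to sanity-check with simple arithmetic. The precision choices below are assumptions for illustration; NVIDIA did not specify them:

```python
# Sanity check: can 784 GB of unified memory hold a 1-trillion-parameter model?
# Precision choices (FP4, FP8) are assumptions for illustration.

def model_size_gb(params: float, bytes_per_param: float) -> float:
    """Weight footprint in GB (1 GB = 1e9 bytes), weights only."""
    return params * bytes_per_param / 1e9

ONE_TRILLION = 1e12
print(model_size_gb(ONE_TRILLION, 0.5))  # FP4 (0.5 bytes/param): 500.0 GB
print(model_size_gb(ONE_TRILLION, 1.0))  # FP8 (1 byte/param): 1000.0 GB
```

At 4-bit precision the weights alone would occupy about 500 GB, leaving headroom in 784 GB for activations and KV cache, which is consistent with the claim; at 8-bit precision the weights alone would already exceed the machine's memory.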
New Servers and Data Platform

NVIDIA also announced the new RTX PRO line of enterprise and Omniverse servers for agentic AI. Part of NVIDIA's new Enterprise AI Factory design, the RTX PRO servers are "a foundation for partners to build and operate on-premises AI factories," according to a company press release. The servers are available now.

Because the modern AI compute platform is different, it requires a different type of storage platform. Huang said several NVIDIA partners are "building intelligent storage infrastructure" with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the company's AI Data Platform reference design.

Accelerating Development of Humanoid Robots

Robotics is another AI focus area for NVIDIA. In his keynote, Huang introduced Isaac GR00T N1.5, the first update to the company's "open, generalized, fully customizable foundation model for humanoid reasoning and skills." He also unveiled the Isaac GR00T-Dreams blueprint for generating synthetic motion data -- known as neural trajectories -- that physical AI developers can use to train a robot's new behaviors, including how to adapt to changing environments.

Huang used his high-profile keynote to showcase how NVIDIA keeps a heavy foot on the technology acceleration pedal. Even for a company as forward-looking as NVIDIA, it's unwise to let up, because the rest of the marketplace is always trying to out-innovate it.

About the Author

Zeus Kerravala is the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions. Kerravala is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual press coverage metrics.