
DDN Gooses AI Storage Pipelines with Infinia 2.0

AI’s insatiable demand for data has exposed a growing problem: storage infrastructure isn’t keeping up. From training foundation models to running real-time inference, AI workloads require high-throughput, low-latency access to vast amounts of data spread across cloud, edge, and on-prem environments. Traditional storage systems have often struggled under the weight of these demands, creating bottlenecks that can drastically delay innovation in the AI space.
Today, DDN unveiled Infinia 2.0, a significant update to its AI-focused, software-defined data storage platform designed to eliminate the inefficiencies in AI storage and data management. The company says Infinia 2.0 acts as a unified, intelligent data layer that dynamically optimizes AI workflows.
“Infinia 2.0 is not just an upgrade—it’s a paradigm shift in AI data management,” DDN CEO Alex Bouzari says, emphasizing how Infinia builds on the company’s deep-rooted expertise in HPC storage to power the next generation of AI-driven data services.
As AI adoption grows, the challenges of scale, speed, and efficiency become more apparent. LLMs, generative AI applications, and inference systems require not only massive datasets but also the ability to access and process them faster than ever. Traditional storage solutions struggle with performance bottlenecks, making it difficult to feed GPUs data quickly enough and limiting overall training efficiency. At the same time, organizations must navigate data fragmented across multiple locations, from structured databases to unstructured video and sensor data. Moving data between these environments drives up operational costs and introduces latency that slows AI applications.
DDN claims Infinia 2.0 solves these challenges by integrating real-time AI data pipelines, dynamic metadata-driven automation, and multi-cloud unification, all optimized specifically for AI workloads. Rather than forcing enterprises to work with disconnected data lakes, Infinia 2.0 introduces a Data Ocean, a unified global view that eliminates redundant copies and enables organizations to process and analyze their data wherever it resides. This is meant to reduce storage sprawl and to allow AI models to search and retrieve relevant data more efficiently using an advanced metadata tagging system. With virtually unlimited metadata capabilities, AI applications can associate vast amounts of metadata with each object, making search and retrieval operations dramatically faster.
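The article doesn't document Infinia's API, but its comparison to AWS S3 later in the piece implies S3-compatible object semantics. As a rough sketch of the metadata-tagging pattern described here, the following uses boto3 to attach rich metadata to an object at write time; the endpoint, bucket, and field names are illustrative assumptions, and the metadata-indexed search that would make querying on these fields fast is Infinia's claimed value-add, not standard S3 behavior.

```python
# A minimal sketch, not DDN's documented API: the article's AWS S3
# comparison suggests S3-compatible object semantics, so boto3 is used.
# Endpoint, bucket, keys, and metadata fields are all hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://infinia.example.com",  # placeholder endpoint
)

# Attach rich, AI-relevant metadata at write time so pipelines can
# filter objects later without re-reading their contents.
with open("000042.jpg", "rb") as f:
    s3.put_object(
        Bucket="training-data",
        Key="frames/cam01/000042.jpg",
        Body=f,
        Metadata={
            "modality": "video-frame",
            "label": "pedestrian",
            "capture-site": "edge-node-7",
        },
    )

# Standard S3 can only read metadata back one object at a time;
# Infinia's claimed differentiator is indexing this metadata so a
# query such as "all pedestrian frames from edge-node-7" resolves
# without a full bucket scan.
head = s3.head_object(Bucket="training-data", Key="frames/cam01/000042.jpg")
print(head["Metadata"])
```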
Infinia 2.0 integrates with frameworks like TensorFlow and PyTorch, which the company says eliminates the need for complex format conversions, allowing AI execution engines to interact with data directly to significantly speed up processing times. The platform is also designed for extreme scalability, supporting deployments that range from a few terabytes to exabytes of storage, making it flexible enough to meet the needs of both startups and enterprise-scale AI operations.
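To illustrate what letting "AI execution engines interact with data directly" might look like in practice, here is a minimal PyTorch sketch that streams objects straight from an S3-compatible bucket into tensors with no local staging or format-conversion step. This is a generic pattern under the same assumptions as above, not DDN's actual integration; the endpoint, bucket, and byte-to-tensor decode step are placeholders.

```python
# A generic sketch of framework-direct data access, assuming an
# S3-compatible endpoint; not DDN's actual PyTorch integration.
import boto3
import torch
from torch.utils.data import IterableDataset, DataLoader

class ObjectStoreDataset(IterableDataset):
    """Streams raw objects from a bucket and yields them as tensors,
    skipping any local staging or format-conversion step."""

    def __init__(self, endpoint, bucket, prefix):
        self.s3 = boto3.client("s3", endpoint_url=endpoint)
        self.bucket, self.prefix = bucket, prefix

    def __iter__(self):
        paginator = self.s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=self.bucket, Prefix=self.prefix):
            for obj in page.get("Contents", []):
                body = self.s3.get_object(
                    Bucket=self.bucket, Key=obj["Key"]
                )["Body"].read()
                # The decode step depends on the data format; raw bytes here.
                yield torch.frombuffer(bytearray(body), dtype=torch.uint8)

loader = DataLoader(
    ObjectStoreDataset("https://infinia.example.com", "training-data", "frames/"),
    batch_size=None,  # objects vary in size; batch downstream if needed
)
```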
Performance is another area where Infinia 2.0 could be a breakthrough. The platform boasts 100x faster metadata processing, reducing lookup times from over ten milliseconds to under one millisecond. AI pipelines execute 25x faster, while the system can handle up to 600,000 object lists per second, surpassing the limits of even AWS S3. DDN asserts that these capabilities let AI-driven organizations train, refine, and deploy their models with minimal lag and maximum efficiency.
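For context on those figures: sub-millisecond lookups imply that a single client thread could issue on the order of a thousand metadata operations per second, so the 600,000 lists-per-second number presumably aggregates many concurrent clients. A crude, vendor-neutral way to sanity-check single-client list latency against any S3-compatible endpoint (placeholder endpoint and bucket again) is a timed loop like this:

```python
# Crude latency check for list operations against any S3-compatible
# endpoint; endpoint and bucket are placeholders, and this measures a
# single client thread, not aggregate cluster throughput.
import time
import boto3

s3 = boto3.client("s3", endpoint_url="https://infinia.example.com")

N = 1000
start = time.perf_counter()
for _ in range(N):
    s3.list_objects_v2(Bucket="training-data", Prefix="frames/", MaxKeys=1)
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} lists/sec, {1000 * elapsed / N:.2f} ms mean latency")
```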
During a virtual launch event today called Beyond Artificial, DDN's claims were reinforced by endorsements from industry leaders. Nvidia CEO Jensen Huang highlighted Infinia's potential to redefine AI data management, emphasizing how metadata-driven architectures like Infinia's transform raw data into actionable intelligence. Enterprise computing leader Lenovo also praised the platform, underscoring its ability to merge on-prem and cloud data for more efficient AI deployment.
Supermicro, another DDN partner, also endorses Infinia: “At Supermicro, we are proud to partner with DDN to transform how organizations leverage data to drive business success,” said Charles Liang, founder, president, and CEO at Supermicro. “By combining Supermicro’s high-performance, energy-efficient hardware with DDN’s revolutionary Infinia platform, we empower customers to accelerate AI workloads, maximize operational efficiency, and reduce costs. Infinia’s seamless data unification across cloud, edge, and on-prem environments enables businesses to make faster, data-driven decisions and achieve measurable outcomes, aligning perfectly with our commitment to delivering optimized, sustainable infrastructure solutions.”
At the Beyond Artificial event, Bouzari and Huang sat down for a fireside chat to reflect on how an idea born from a 2017 meeting with Nvidia evolved into the Infinia platform.
DDN had been asked to help build a reference architecture for AI computing, but Bouzari saw a much bigger opportunity. If Huang’s vision for AI was going to materialize, the world would need a fundamentally new data architecture, one that could scale AI workloads, eliminate latency, and transform raw information into actionable intelligence.
Infinia is more than just storage, Bouzari says; it fuels AI systems the way energy fuels a brain. According to Huang, that distinction is critical.
“One of the most important things people forget is the importance of data that is necessary during application, not just during training,” Huang notes. “You want to train on a vast amount of data for pretraining, but during use, the AI has to access information, and AI would like to access information, not in raw data form, but in informational flow.”
This shift from traditional storage to AI-native data intelligence has profound implications, the CEOs say. Instead of treating storage as a passive repository, DDN and Nvidia are turning it into an active layer of intelligence, enabling AI to retrieve insights instantly.
“This is the reason why the reframing of storage of objects and raw data into data intelligence is this new opportunity for DDN, providing data intelligence for all of the world’s enterprises as AIs run on top of this fabric of information,” Huang says, calling it “an extraordinary reframing of computing and storage.”
Reframing certainly seems necessary: as AI continues to evolve, the infrastructure supporting it must evolve as well. DDN's Infinia 2.0 could represent a major shift in how enterprises approach AI storage, not as a passive archive but as an active intelligence layer that fuels AI systems in real time. By eliminating traditional bottlenecks, unifying distributed data, and integrating seamlessly with AI frameworks, Infinia 2.0 aims to reshape how AI applications access, process, and act on information.
With endorsements from industry leaders like Nvidia, Supermicro, and Lenovo, and with its latest funding round of $300 million at a $5 billion valuation, DDN is positioning itself as a key player in the AI landscape. Whether Infinia 2.0 delivers on its ambitious promises remains to be seen, but one thing is clear: AI's next frontier isn't just about models and compute but about rethinking data itself. And with this launch, DDN is making the case that the future of AI hinges on new paradigms for data management.
Learn more about the technical aspects of Infinia 2.0 at this link, or watch a replay of Beyond Artificial here.
Related Items:
Feeding the Virtuous Cycle of Discovery: HPC, Big Data, and AI Acceleration
The AI Data Cycle: Understanding the Optimal Storage Mix for AI Workloads at Scale
DDN Cranks the Data Throughput with AI400X2 Turbo
Editor’s note: This article first appeared on AIWire.