September 23, 2020

Filling Persistent Gaps in the ‘Big Memory’ Era

Organizations that need to process large amounts of data in real time will be the big winners of the emerging “big memory” era, says Charles Fan, the CEO and co-founder of MemVerge, which today unveiled new software designed to lower the barrier of entry to big memory systems.

Emerging use cases, particularly around streaming analytics and machine learning, are giving organizations new data processing requirements. But these use cases are also uncovering new challenges that must be overcome.

It really comes down to size and time: As the data volumes get bigger and faster, the windows of opportunity to act on that data are getting smaller, which is forcing organizations to cram more data into DRAM.

According to IDC, about one-quarter of the world’s data will need to be dealt with in real time. “That means latencies that are in microseconds, or even nanoseconds,” Fan said at last week’s HPC + AI on Wall Street event. “We are seeing more and more such cases where you have both large as well as fast data at the same time.”

As organizations reach the limits of what existing DRAM can do in the face of big and fast data, they face some unpleasant consequences. “With some of these new applications, with real time big data analytics, or in AI and ML, when you actually reach a state where the data cannot be fully placed in memory, performance falls off a cliff and becomes much slower.”

There’s a name for the problem when data is greater than memory. “It’s called the DGM problem,” Fan said. “Performance becomes 100 to 1,000 times slower.”

One way to address the DGM problem is by adding more RAM. But just adding more RAM isn’t always possible, either because the server’s DIMM slots are maxed out or because the cost of RAM outweighs the benefits.

But thanks to the introduction of persistent memory technologies like Intel Optane, organizations now have a new technology to add to the mix. With Optane, organizations can now load up on persistent memory, or PMEM, that is about half the cost of traditional DRAM, and almost as fast. Plus, it’s persistent, which bolsters data recovery in the event of an outage.

“This is the new kid on the block, persistent memory that can co-exist with DRAM,” Fan said. “It doesn’t replace DRAM, but it really extends the capacity of DRAM and lowers the cost of the overall infrastructure.”

The challenge with PMEM is that it usually requires changes to the application, which was designed to read and write data in DRAM and expects DRAM semantics. The risk and cost of opening up the application can be tough for an organization to justify.

That’s where MemVerge’s new Memory Machine comes in. The software, which became generally available today, makes PMEM look like DRAM, eliminating the need for organizations to modify their applications to get the benefits of PMEM.
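To make the underlying idea concrete: on Linux, PMEM is commonly exposed through a DAX-mounted filesystem, and memory-mapping a file there gives an application direct load/store access to the persistent media. The sketch below illustrates that general mechanism only, not MemVerge's implementation; the file path is an ordinary temp file standing in for a real PMEM mount (an assumption for portability).

```python
# Minimal sketch of memory-mapped access to a file, the same pattern
# used for PMEM on a DAX-mounted filesystem. A temp file stands in
# for a real PMEM path (e.g. a hypothetical /mnt/pmem mount).
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
size = 4096

with open(path, "wb") as f:
    f.truncate(size)                   # size the backing file

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), size)  # map it into the address space
    buf[:5] = b"state"                 # ordinary memory stores...
    buf.flush()                        # ...made durable with a flush
    buf.close()

with open(path, "rb") as f:
    print(f.read(5))                   # -> b'state'
```

On real persistent memory, the stores above would survive a power loss once flushed, which is what lets software present PMEM to an unmodified application as if it were (very large) DRAM.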

“We developed a big memory software that does exactly that: Deliver you the performance of DRAM, the compatibility of DRAM, while giving you the capacity of PMEM, giving you the lower cost of PMEM, as well as the persistence and the data services that can be developed on top of it,” Fan said.

MemVerge’s software can support up to 4.5 TB of combined DRAM and PMEM per socket, or a maximum of 9 TB on a two-socket server, Fan said. It supports today’s standard memory buses as well as emerging ones, such as DDRT, he said.

MemVerge is delivering two versions of its Memory Machine. The standard version provides the transparent application compatibility with Optane PMEM modules described above. An advanced version additionally leverages the PMEM to provide application checkpointing for faster recovery following an outage.

The application checkpointing technology in the advanced version, dubbed ZeroIO, allows customers to instantly capture the entire application state in PMEM, and recover this snapshot at any time in the future, Fan said. It can even be used to copy and clone entire databases, he added.

“You no longer need to move this application image to a persistent storage system, which is gigabytes [in size] and takes many minutes,” Fan said. “Now you can do that instantly, within a second, with a big memory system.”
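The general idea behind this style of checkpointing can be sketched in a few lines: keep a point-in-time copy of application state in memory so that recovery is a fast in-memory restore rather than a reload from storage. This is a conceptual illustration only, assuming nothing about ZeroIO's internals; the `Checkpointer` class and its methods are hypothetical names.

```python
# Conceptual sketch of in-memory checkpointing: snapshots are held as
# in-process copies here, whereas a big-memory system would place them
# in PMEM so they survive a restart.
import copy


class Checkpointer:
    def __init__(self):
        self._snapshots = {}

    def capture(self, name, state):
        # Record a point-in-time copy of the application state.
        self._snapshots[name] = copy.deepcopy(state)

    def restore(self, name):
        # Recovery is an in-memory copy, not a reload from storage.
        return copy.deepcopy(self._snapshots[name])


state = {"orders": [1, 2, 3], "cursor": 3}
cp = Checkpointer()
cp.capture("t0", state)
state["orders"].append(4)   # state diverges after the checkpoint
state = cp.restore("t0")    # roll back to the captured snapshot
print(state["orders"])      # -> [1, 2, 3]
```

Production systems avoid the full deep copy by using copy-on-write techniques, but the recovery-time benefit is the same: restoring state costs memory operations, not storage I/O.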

MemVerge has the backing of key players in the space, including Intel. Alper Ilkbahar, the vice president and general manager of the Memory and Storage Product Group at Intel, said: “It is exciting to see this offering from MemVerge that will help enable enterprises to take advantage of the speed of traditional memory with the capacity that persistent memory delivers, without requiring application code modifications.”

MemVerge also today announced a partnership with Penguin Computing, a provider of high-performance Linux systems for AI and HPC customers. It also announced a couple of customers that are using its Big Memory solutions, including MemX, a members-only, technology-driven stock exchange, as well as Intesa Sanpaolo, an Italian bank.

Fan said MemVerge is the first mover in this new space of big memory software, and that other players will join the field. “We are going to see a complete revolution of the infrastructure for computing over the next 10 years,” he said, “and that’s going to make a lot of things that are impossible today become possible tomorrow.”

Related Items:

Future of Fintech on Display at HPC + AI Wall Street (HPCwire)

Persistent Memory Can Change the Way Enterprises Navigate Advanced Analytics

Now and Then: The Evolution of In-Memory Computing

 
