Persistent Memory Programming Part 2: The NVM Programming Model
Intel’s Andy Rudoff describes persistent memory and delves into why there has been so much activity around it in the industry lately. Andy describes how persistent memory is connected to a computer platform, how it performs, and some of the challenges it presents for programmers. Andy is a Non-volatile Memory Software Architect and a member of the SNIA (Storage Networking Industry Association) Non-volatile Memory Programming Technical Work Group.
Hi, I'm Andy Rudoff from Intel. This video will describe the SNIA NVM programming model for persistent memory. This is the way operating systems expose persistent memory to applications. Don't forget to watch the rest of the persistent memory programming playlist, where we'll be building on this model.
Let's take a look at the basic concepts around persistent memory programming. As a programmer, you're probably familiar with the storage software stack, shown here at a very high level. The basic blocks that make up the stack haven't changed much over many decades of use. Applications use standard file APIs to open files on a file system, and the file system does block I/O as necessary through a driver or set of drivers. All accesses to the storage happen in blocks, typically over an interconnect like PCIe. I haven't mentioned specific operating systems so far, because at this high level, they're all very similar.
If you're a Windows or Linux programmer, you may recognize these basic APIs, now more than 30 years old, that deal with opening files and reading and writing ranges of bytes. Perhaps you've never used these calls because you're used to programming with libraries that provide more convenient APIs, but those libraries eventually call these basic APIs internally. Both Windows and Linux also support memory-mapped files, a feature which has been around for a long time but is not as commonly used. For persistent memory, these APIs for memory mapping files are very useful. In fact, they're at the heart of the persistent memory programming model published by SNIA, the Storage Networking Industry Association.
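As a rough illustration, here is a minimal sketch of those basic file APIs in C on Linux (Windows has analogous calls such as CreateFile, ReadFile, and WriteFile). The file name data.bin is just a placeholder, not anything from the video:

/* Minimal sketch of the decades-old file APIs: open a file,
 * write a range of bytes, and read it back. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char msg[] = "hello";
    /* write a range of bytes at offset 0 */
    if (pwrite(fd, msg, sizeof(msg), 0) != (ssize_t)sizeof(msg)) {
        perror("pwrite");
        return 1;
    }

    char buf[sizeof(msg)];
    /* read the same range back */
    if (pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pread");
        return 1;
    }

    printf("read back: %s\n", buf);
    close(fd);
    return 0;
}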
Memory mapping a file is only allowed after the file has been opened, so the permission checks have already happened by the time an application calls CreateFileMapping and MapViewOfFile on Windows or mmap on Linux. Once those calls are made, the file appears in the address space of the application, allowing load/store access to the file contents. An important aspect of memory-mapped files is that changes made by store instructions are not guaranteed to be persistent until they are flushed to storage. On Windows, this is done using FlushViewOfFile and FlushFileBuffers. On Linux, we use either msync or fsync. Memory mapping a file makes the file appear as if it's laid out in the virtual memory of the application. The application just uses pointers to do loads and stores to data structures in the storage.
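Here is a minimal sketch of that flow using the Linux calls named above (open, mmap, a store, then msync); the file name and the 4 KB size are placeholder choices, and the Windows path would use CreateFileMapping, MapViewOfFile, and FlushViewOfFile instead:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) < 0) { perror("ftruncate"); return 1; }

    /* permission checks already happened at open(); mmap puts the
     * file contents into the address space for load/store access */
    char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    /* a store instruction modifies the file contents... */
    strcpy(addr, "hello, mapped file");

    /* ...but the change is not guaranteed persistent until flushed */
    if (msync(addr, len, MS_SYNC) < 0) { perror("msync"); return 1; }

    munmap(addr, len);
    close(fd);
    return 0;
}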
From the application's point of view, this is byte-addressable storage. But what's actually happening with traditional storage is paging. Paging is where the operating system pauses the program to do I/O to storage. Storage can only do I/O in blocks, so the first time a program accesses a byte in a file, a full block, usually 4 KB, is read from storage. And when the process flushes a change to persistence, again it must wait while the operating system writes the full block out. This is where the power of the memory-mapped file API really benefits persistent memory programming.
The standard file APIs are the same, but with persistent memory, a persistent memory-aware file system is used to set up the memory-mapped file. The result is direct load/store access to the persistent media instead of the paged access you get with traditional storage. See how the traditional stack converts everything into block accesses, while the persistent memory programming model allows byte-level access to the non-volatile media plugged into the memory bus, shown here by the common industry term NVDIMM.
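As one example of how an application might request such a direct mapping on Linux, the sketch below assumes a file on a file system mounted with the dax option and a reasonably recent kernel and glibc; the MAP_SYNC and MAP_SHARED_VALIDATE flags and the /mnt/pmem/data path are my assumptions, not something shown in the video:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    int fd = open("/mnt/pmem/data", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) < 0) { perror("ftruncate"); return 1; }

    /* MAP_SYNC asks for a direct mapping to the media (no page cache);
     * MAP_SHARED_VALIDATE makes the call fail, rather than silently
     * ignore the flag, if the file system cannot provide that */
    char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap with MAP_SYNC"); return 1; }

    /* loads and stores now go directly to the persistent media */
    strcpy(addr, "stored directly on the NVDIMM");

    /* stores still sit in CPU caches until flushed; msync is the
     * portable way to request that flush here */
    if (msync(addr, len, MS_SYNC) < 0) { perror("msync"); return 1; }

    munmap(addr, len);
    close(fd);
    return 0;
}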
You can see that once the mappings are set up, the application has direct access provided by the MMU's virtual-to-physical mappings. The ability to configure these direct mappings to persistent memory is a feature known as DAX, which is short for direct access. Support for this feature is what differentiates a normal file system from a persistent memory-aware file system. DAX is supported today by both Windows and Linux.
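For completeness, here is a sketch using libpmem from the Persistent Memory Development Kit (PMDK), which wraps the mapping and flushing details shown above. PMDK is not introduced in this video, so treat the path, size, and flags here as illustrative assumptions; link with -lpmem:

#include <stdio.h>
#include <string.h>
#include <libpmem.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* create (if needed) and memory-map a 4 KB file, ideally on a
     * DAX-mounted, persistent memory-aware file system */
    char *addr = pmem_map_file("/mnt/pmem/data", 4096,
                               PMEM_FILE_CREATE, 0644,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(addr, "hello, persistent memory");

    /* if the mapping is real persistent memory, flush directly from
     * the CPU caches; otherwise fall back to msync-style flushing */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}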
Now you understand how persistent memory is exposed to applications by the operating system. Watch the rest of this playlist to see what happens next after the application has access to large ranges of persistent memory. Thanks for watching this video and the persistent memory programming playlist. Remember to visit the links in the description below. And don't forget to like this video and subscribe.