Tobi had a few questions about memory-mapped files, and it is quite an interesting topic.
A memory-mapped file is a feature of all modern operating systems, and it requires coordination between the memory manager and the I/O subsystem. Basically, you can tell the OS that some file is the backing store for a certain portion of the process's memory. In order to understand that, we first have to understand virtual memory.
Here is your physical memory (I'm using a 32-bit address space for ease of explanation):
Now, you have two processes that are running. Each of them gets its own 4 GB address space (actually, only 2 GB is available to the process in 32 bits, but that is good enough for our purposes). What happens when both of those processes obtain a pointer to 0x0232194?
Well, what actually happens is that what looks like a pointer to physical memory is actually a virtual pointer, which the CPU and the OS work together to map to physical memory. It is obvious from the image that there is a problem here: what happens if two processes use 4 GB each? There isn't enough physical memory to handle that. This is the point where the page file gets involved. So far, this is pretty basic OS 101 stuff.
The next part is where it gets interesting. The OS already knows how to evict pages from memory and store them on the file system, because it needs to do that for the page file. The next step is to make use of that mechanism for more than just the page file, so you can map any file into your memory space. Once that is done, you can access the part of the memory you have mapped and the OS will load the relevant parts of the file into memory on demand. Again, pretty basic stuff so far.
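Here is a minimal sketch of what that looks like in practice, using Python's `mmap` module (the file path is made up for the example):

```python
import mmap
import os
import tempfile

# Create a one-page file to act as the backing store (hypothetical path).
fd, path = tempfile.mkstemp(suffix=".bin")
os.close(fd)
with open(path, "wb") as f:
    f.write(b"\x00" * mmap.PAGESIZE)

with open(path, "r+b") as f:
    # Map the entire file into our address space.
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[0:5] = b"hello"   # a plain memory write...
        mm.flush()           # ...which the OS persists to the file

with open(path, "rb") as f:
    print(f.read(5))         # b'hello'
```

Note that the write is just a slice assignment; there is no `write()` call anywhere, the OS does the actual I/O behind the scenes.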
You can read about this more in this article.
The reason you want to do this sort of thing is that it gives you the ability to work with a file as if it were memory, and you don't have to worry about all that pesky file I/O stuff. The OS will take care of that for you.
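For example, once a file is mapped you can slice it, search it, and decode structures out of it as if it were one big byte array, with no `read()` or `seek()` calls anywhere (a sketch; the file contents here are invented):

```python
import mmap
import os
import struct
import tempfile

# Build a sample file: a 4-byte magic, a little-endian uint32, some payload.
fd, path = tempfile.mkstemp(suffix=".bin")
os.close(fd)
with open(path, "wb") as f:
    f.write(b"HDR!" + struct.pack("<I", 42) + b"...payload...")

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        magic = mm[0:4]                             # slice it like bytes
        (value,) = struct.unpack_from("<I", mm, 4)  # decode in place
        pos = mm.find(b"payload")                   # search the whole file
```

The file I/O is implicit: touching a slice of `mm` faults the corresponding pages in as needed.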
And now, to Tobi's questions:
What are the most important advantages and disadvantages?
Probably the most important advantage is that you don't have to do manual file I/O. That can drastically reduce the amount of work you have to do, and it can also give you a pretty significant perf boost. Because the I/O is actually being managed by the operating system, you get the benefit of all the optimization experience that went into it: the OS gives you page buffering, caching, read-ahead, etc. It also makes it drastically easier to do parallel I/O safely, since you can read/write from "memory" concurrently without having to deal with a complex API.
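The parallel I/O point is worth a quick illustration: with a mapping, each thread just reads its own slice directly, and there is no shared file cursor to `seek()` around or lock (a sketch with made-up sample data):

```python
import mmap
import os
import tempfile
import threading

# 4 KB of sample data to sum over.
fd, path = tempfile.mkstemp(suffix=".bin")
os.close(fd)
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 16)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        results = [0, 0]

        def checksum(i, start, end):
            # Plain memory reads; no shared file position to coordinate.
            results[i] = sum(mm[start:end])

        mid = len(mm) // 2
        t1 = threading.Thread(target=checksum, args=(0, 0, mid))
        t2 = threading.Thread(target=checksum, args=(1, mid, len(mm)))
        t1.start(); t2.start()
        t1.join(); t2.join()
        total = results[0] + results[1]
```

Contrast this with plain file handles, where concurrent readers either need one handle each or must serialize access to a shared seek position.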
As for the disadvantages, you need to be aware that things like the base address will change whenever you re-open the file, and that data structures that are good in memory might not result in good performance if they are stored on disk as-is.
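The base-address issue means that any structure you persist through a mapping should use file-relative offsets rather than absolute pointers; the mapping base may differ on every open, but an offset stays valid. A small sketch of the idea (file layout invented for illustration):

```python
import mmap
import os
import struct
import tempfile

fd, path = tempfile.mkstemp(suffix=".bin")
os.close(fd)

# Write a file whose header stores the *offset* of a record, not a pointer.
with open(path, "wb") as f:
    f.write(struct.pack("<Q", 16))   # header: record lives at offset 16
    f.write(b"\x00" * 8)             # padding up to offset 16
    f.write(b"record-data")          # the record itself

# Re-open and re-map: the mapping base may differ, the offset still works.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        (off,) = struct.unpack_from("<Q", mm, 0)
        record = mm[off:off + 11]
```

Had the header stored the virtual address of the record from the first mapping, following it after a re-open would read garbage (or crash).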
Read more: Ayende @ Rahien