.. _cleancache:

==========
Cleancache
==========

Motivation
==========

Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
many workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory), and other implementations are in development.

:ref:`FAQs <faq>` are included below.

Implementation Overview
=======================

A cleancache "backend" that provides transcendent memory registers itself
with the kernel's cleancache "frontend" by calling cleancache_register_ops,
passing a pointer to a cleancache_ops structure with its function pointers
set appropriately.  The functions provided must conform to the following
semantics:

Most importantly, cleancache is "ephemeral".  Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.

At mount time, a cleancache-enabled filesystem calls "init_fs" to obtain a
pool id which, if positive, must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.  (The combination
of a pool id, a file key, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel memory.
An "invalidate_page" will ensure the page is no longer present in cleancache;
an "invalidate_inode" will invalidate all pages associated with the specified
file; and, when a filesystem is unmounted, an "invalidate_fs" will invalidate
all pages in all files specified by the given pool id and also surrender
the pool id.

An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared using a 128-bit UUID as a key.  On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify the
same UUID will receive the same pool id, thus allowing the pages to
be shared.  Note that any security requirements must be imposed outside
of the kernel (e.g. by "tools" that control cleancache).  Alternatively,
a cleancache implementation can simply disable init_shared_fs by always
returning a negative value.
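As a concrete illustration, a minimal backend skeleton might look roughly
like the following.  The ``demo_*`` names are hypothetical, and the exact
prototypes (declared in ``include/linux/cleancache.h``) have varied across
kernel versions, so treat this as a sketch of the required semantics rather
than a drop-in implementation::

  #include <linux/cleancache.h>

  static int demo_init_fs(size_t pagesize)
  {
          /* Create a pool; return a pool id, or a negative value
           * on failure. */
          return 0;
  }

  static int demo_init_shared_fs(char *uuid, size_t pagesize)
  {
          /* This backend declines shared pools (see above). */
          return -1;
  }

  static int demo_get_page(int pool_id, struct cleancache_filekey key,
                           pgoff_t index, struct page *page)
  {
          /* Fill @page from backend storage; return 0 on a hit,
           * a negative value on a miss. */
          return -1;
  }

  static void demo_put_page(int pool_id, struct cleancache_filekey key,
                            pgoff_t index, struct page *page)
  {
          /* Copy the clean page into backend storage, keyed by the
           * handle (pool_id, key, index).  Because cleancache is
           * ephemeral, the backend may discard it at any time. */
  }

  static void demo_invalidate_page(int pool_id,
                                   struct cleancache_filekey key,
                                   pgoff_t index)
  {
          /* Ensure the page for this handle is no longer present. */
  }

  static void demo_invalidate_inode(int pool_id,
                                    struct cleancache_filekey key)
  {
          /* Drop all pages associated with this file. */
  }

  static void demo_invalidate_fs(int pool_id)
  {
          /* Drop all pages in the pool and surrender the pool id. */
  }

  static struct cleancache_ops demo_ops = {
          .init_fs           = demo_init_fs,
          .init_shared_fs    = demo_init_shared_fs,
          .get_page          = demo_get_page,
          .put_page          = demo_put_page,
          .invalidate_page   = demo_invalidate_page,
          .invalidate_inode  = demo_invalidate_inode,
          .invalidate_fs     = demo_invalidate_fs,
  };

  /* Called once when the backend initializes: */
  cleancache_register_ops(&demo_ops);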
If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache).  On a shared pool, the page
is NOT invalidated on a successful get_page so that it remains accessible to
other sharers.  The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate.  Callers must
lock the page to ensure serial behavior.
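To make the kernel-side responsibilities concrete, the fragments below
sketch where the three classes of hooks conceptually sit.  The
``cleancache_*`` calls are the frontend entry points described in this
document; ``myfs_read_from_disk`` is a hypothetical stand-in for the
filesystem's real block I/O, and the real call sites (reclaim, readpage,
and truncate paths) are of course more involved::

  /* Reclaim path: the PFRA is about to evict a clean page-cache page. */
  cleancache_put_page(page);              /* backend may discard it later */

  /* Read path: before issuing block I/O for an absent page. */
  if (cleancache_get_page(page) == 0)
          SetPageUptodate(page);          /* hit -- disk read avoided */
  else
          myfs_read_from_disk(page);      /* miss -- fall back to I/O */

  /* Truncate/removal path: keep cleancache coherent with the FS. */
  cleancache_invalidate_page(page->mapping, page);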
Cleancache Performance Metrics
==============================

If properly configured, monitoring of cleancache is done via debugfs in
the ``/sys/kernel/debug/cleancache`` directory.  The effectiveness of
cleancache can be measured (across all filesystems) with:

``succ_gets``
  number of gets that were successful

``failed_gets``
  number of gets that failed

``puts``
  number of puts attempted (all "succeed")

``invalidates``
  number of invalidates attempted

A backend implementation may provide additional metrics.

.. _faq:

FAQ
===

* Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
addressable by the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provide interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for
write-balancing for some RAM-like devices).  Evicted page-cache pages
(and swap pages) are a great use for this kind of
slower-than-RAM-but-much-faster-than-disk transcendent memory, and the
cleancache (and frontswap) "page-object-oriented" specification provides
a nice way to read and write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM, and efforts to
do it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).  Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM not only to be "time-shared" between multiple
virtual machines, but also to be compressed and deduplicated to
optimize RAM utilization.  And when guest OSes are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be
saved and reclaimed if overall host system memory conditions allow.

And the identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.  And
the proposed "RAMster" driver shares RAM across multiple physical
systems.

* Why does cleancache have its sticky fingers so deep inside the
  filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line,
and the minimal set is placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.

Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, and so don't require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive.  So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem.  The only reason some filesystems are not yet
supported by cleancache is that they haven't been tested.  The
existing set should be sufficient to validate the concept, the
opt-in approach means that untested filesystems are not affected,
and the hooks in the existing filesystems should make it very easy
to add more filesystems in the future.

The total impact of the hooks on existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).
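For reference, the hook pattern described above looks roughly like this
(simplified; the real wrappers live in ``include/linux/cleancache.h`` and
differ in detail across kernel versions)::

  #ifdef CONFIG_CLEANCACHE
  static inline int cleancache_get_page(struct page *page)
  {
          int ret = -1;

          /* cleancache_enabled is the global flag discussed in the FAQ
           * below; the pool-id test is the compare-struct-element-to-
           * negative for filesystems that never opted in. */
          if (cleancache_enabled &&
              page->mapping->host->i_sb->cleancache_poolid >= 0)
                  ret = __cleancache_get_page(page);
          return ret;
  }
  #else
  /* CONFIG_CLEANCACHE off: the hook compiles into nothingness. */
  static inline int cleancache_get_page(struct page *page)
  {
          return -1;
  }
  #endif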
* Why not make cleancache asynchronous and batched so it can more
  easily interface with real devices with DMA instead of copying each
  individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
a great many race conditions and potential coherency issues
are avoided.  While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

* Why is non-shared cleancache "exclusive"?  And where is the
  page "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls.  If you want inclusive,
the page can be "put" immediately following the "get".  If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.

* What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads.  Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.

* How do I add cleancache support for filesystem X? (Boaz Harrash)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time (see the sketch after the list
below).  Unusual, misbehaving, or poorly layered filesystems must
either add additional hooks and/or undergo extensive additional
testing... or should just not enable the optional cleancache.

Some points for a filesystem to consider:

  - The FS should be block-device-based (e.g. a ram-based FS such
    as tmpfs should not enable cleancache)
  - To ensure coherency/correctness, the FS must ensure that all
    file removal or truncation operations either go through VFS or
    add hooks to do the equivalent cleancache "invalidate" operations
  - To ensure coherency/correctness, either inode numbers must
    be unique across the lifetime of the on-disk file OR the
    FS must provide an "encode_fh" function.
  - The FS must call the VFS superblock alloc and deactivate routines
    or add hooks to do the equivalent cleancache calls done there.
  - To maximize performance, all pages fetched from the FS should
    go through the do_mpage_readpage routine or the FS should add
    hooks to do the equivalent (cf. btrfs)
  - Currently, the FS blocksize must be the same as PAGESIZE.  This
    is not an architectural restriction, but no backends currently
    support anything different.
  - A clustered FS should invoke the "init_shared_fs" cleancache
    hook to get best performance for some backends.
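The opt-in itself, as promised above, is a single call from the
filesystem's mount path.  ``myfs_fill_super`` is a hypothetical fill_super
routine; ``cleancache_init_fs`` is the frontend entry point described
earlier (a clustered FS would invoke the shared variant instead)::

  #include <linux/cleancache.h>

  static int myfs_fill_super(struct super_block *sb, void *data, int silent)
  {
          /* ... normal superblock setup ... */

          /* Opt in to cleancache: asks any registered backend for a
           * pool id and records it in the superblock.  If no backend
           * is registered or init_fs fails, the hooks remain inert. */
          cleancache_init_fs(sb);

          return 0;
  }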
* Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache were to use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But, this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the
inode unused list, and only invalidates the data page if the file
gets removed/truncated.  So if cleancache used the inode kva,
there would be potential coherency issues if/when the inode
kva is reused for a different file.  Alternately, if cleancache
invalidated the pages when the inode kva was freed, much of the value
of cleancache would be lost because the cache of pages in cleancache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.

* Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.

* Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM.  This remains to be tested,
especially in an overcommitted system.

* Does cleancache work in userspace?  It sounds useful for
  memory-hungry caches like web browsers.  (Jamie Lokier)

No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).

Last updated: Dan Magenheimer, April 13 2011