The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan. This mechanism is based on a patch
originally posted by Larry Woodman of Red Hat to address several scalability
problems with page reclaim in Linux.
A large x86_64 platform with 128GB of main memory will have over 32 million
4k pages in a single node. When a large fraction of these pages are not
evictable for any reason [see below], vmscan will spend a lot of time scanning
the LRU lists looking for the small fraction of pages that are evictable.
This can result in a situation where all CPUs are spending 100% of their time
in vmscan for hours or days on end, with the system completely unresponsive.
The unevictable list addresses the following classes of unevictable pages:

 (1) Those owned by ramfs.

 (2) Those mapped into SHM_LOCK'd shared memory regions.

 (3) Those mapped into VM_LOCKED [mlock()ed] VMAs.
The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.
The Unevictable LRU infrastructure maintains unevictable pages on an additional
LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug. The Linux kernel
     can only migrate pages that it can successfully isolate from the LRU
     lists. If we were to maintain pages elsewhere than on an LRU-like list,
     where they can be detected by isolate_lru_page(), we would prevent their
     migration, unless we reworked migration code to find the unevictable pages
     itself.
The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages. This differentiation is only important while the pages are,
in fact, evictable.
The unevictable list is one of the per-node lru lists (LRU_UNEVICTABLE, the
unevictable lru_list enum element). The memory controller tracks the movement
of pages to and from the unevictable list.
A memory control group under memory pressure will
not attempt to reclaim pages on the unevictable list. This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have a
     chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory. This can cause
     the control group to thrash or to OOM-kill tasks.
For facilities such as ramfs none of the pages attached to the address space
may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using the wrapper functions mapping_set_unevictable() and
mapping_clear_unevictable().
Note that SHM_LOCK is not required to page in the locked pages if they're
swapped out; the application must touch the pages manually if it wants to
ensure they're in memory.
When an application locks an already resident region, the kernel makes no
special effort to push any pages in the SHM_LOCK'd area to the unevictable
list. Instead, vmscan will do this if and when it encounters the pages during
a scan. On an unlock via shmctl(SHM_UNLOCK), the kernel scans
the pages in the region and "rescues" them from the unevictable list if no other
condition is keeping them unevictable. If a locked region is destroyed,
the pages are also "rescued" from the unevictable list in the process of
freeing them.
page_evictable() also checks for mlocked pages by testing an additional page
flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
faulted into, or found in, a VM_LOCKED VMA.
If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again and been "rescued" back onto the evictable lists.

In any case, vmscan checks for unevictable
pages in all of the shrink_{active|inactive|page}_list() functions and will
"cull" such pages that it encounters: that is, it diverts those pages to the
unevictable list for the node being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked. Such pages will make it all the way to
shrink_page_list(), where they will be detected and culled.

Because culling is an unusual
event and movement of pages onto the unevictable list should be rare, these
extra evictability checks should not add significant overhead to reclaim.
The unevictable list infrastructure is based on a patch
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
Nick posted his patch as an alternative to a patch posted by Christoph Lameter
to achieve the same objective: hiding mlocked pages from vmscan.

In Nick's patch, he used the page's lru list pointers to form a separate list
of mlocked pages. This, however,
prevented the management of the pages on an LRU list, and thus mlocked pages
were not migratable, as isolate_lru_page() could not detect them.
Nick resolved this by putting mlocked pages back on the lru list before
attempting to isolate them for migration.
mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages. When such a page has been "noticed" by the memory management subsystem,
the page is marked with the PG_mlocked flag and moved off
the LRU. Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmap()ing a region with the
     MAP_LOCKED flag;

 (3) mmap()ing a region in a task that has called mlockall() with the
     MCL_FUTURE flag; and

 (4) in the fault path, if mlocked pages are "culled" in the fault path,
     or in vmscan, when attempting to reclaim a page in a VM_LOCKED VMA.

mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmap()ed
     file; or

 (4) before a page is COW'd in a VM_LOCKED VMA.
mlock_fixup() will call
populate_vma_page_range() to fault in the pages via get_user_pages() and to
mark the pages as mlocked via mlock_vma_page().

If the VMA is not one that the kernel can fault pages into - a VM_IO or
VM_PFNMAP VMA, say -
get_user_pages() will be unable to fault in the pages. That's okay. If pages
in such a VMA end up on an LRU list anyway, vmscan will
detect and cull such pages.
1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely. The pages behind
   these mappings cannot be reclaimed and so need not be
   mlocked. In any case, most of the pages have no struct page in which to
   set the PG_mlocked flag.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory; we
   neither need nor want to mlock() these pages. However, to preserve the
   prior behavior of mlock() - before the unevictable/mlock changes -
   mlock_fixup() will still
   allocate the huge pages and populate the ptes.

3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
   such as the VDSO page, relay channel pages, etc. These pages are
   inherently unevictable and are not managed on the LRU lists.
While get_user_pages() worked for
faulting in and mlocking pages, get_user_pages() was unreliable for visiting
these pages for munlocking. Because we don't want to leave pages mlocked,
munlock instead walks the page tables directly,
fetching the pages - all of which should be resident as a result of previous
mlocking.

For munlock(), populate_vma_page_range() unlocks individual pages by calling
munlock_vma_page(), which clears the PG_mlocked flag on the
mlocked pages. Note, however, that at this point we haven't checked whether
the page is mapped by other VM_LOCKED VMAs.
Page migration must be able to handle the migration
of mlocked pages and other unevictable pages. This involves simply moving the
PG_mlocked and PG_unevictable state from the old page to the new page.

Code that must not touch pages under migration
can skip these pages by testing the page mapping under page lock.

To complete page migration, we place the new and old pages back onto the LRU
after dropping the page lock. The "unneeded" page - old page on success, new
page on failure - is freed when the reference held by the migration
process is released. To ensure that we don't strand pages on the unevictable
list, we use the
putback_lru_page() function to add migrated pages back to the LRU.
We handle this by keeping PTE-mapped huge pages on normal LRU lists: the
PMD on the border of a VM_LOCKED VMA will be split into a PTE table.
Pages in such an
area will still have properties of the locked area - aka. pages will not get
swapped out - but major page faults to fault memory in might still happen.

Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages and
populate the page table.

Under the unevictable/mlock infrastructure,
populate_vma_page_range() returns the number of pages NOT mlocked; all of the
callers are prepared to handle this. The end result of a successful
mmap(MAP_LOCKED) is a VMA marked VM_LOCKED
and pages allocated into that region.
When unmapping, we only
munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
Before the unevictable/mlock changes, mlocking did not mark the pages in any
way, so unmapping them required no processing.

Under the unevictable/mlock infrastructure, only VMAs that
actually contain mlocked pages will be passed to munlock_vma_pages_all().
Otherwise such pages would not be rescued until vmscan encounters them, as
described in the section "vmscan's handling of unevictable pages". To handle
this situation, the reverse map scan checks for VM_LOCKED VMAs. Separate
functions handle anonymous and mapped file and KSM pages, as these types of
pages have different reverse map lookup mechanisms, with different locking.

Pages can also become unlocked via unmapping,
holepunching, and truncation of file pages and their anonymous COWed pages.
shrink_active_list() culls any obviously unevictable pages - i.e.
!page_evictable(page) - and diverts them to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto the
active/inactive lru lists. Note that these pages do not have PageUnevictable
set - otherwise they would be on the unevictable list and shrink_active_list()
would never see them.

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages. shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region. This happens
     when an application accesses the pages for the first time after
     SHM_LOCK'ing the segment.

 (3) mlocked pages that could not be isolated from the LRU and moved to the
     unevictable list in mlock_vma_page().

shrink_inactive_list() also diverts any unevictable pages that it finds on the
inactive lists to the appropriate node's unevictable list.

shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
after shrink_active_list() had moved them to the inactive list, or pages mapped
into VM_LOCKED VMAs that have not yet been marked PG_mlocked.

shrink_page_list() again culls obviously unevictable pages that it could
encounter, for similar reasons.