/linux/Documentation/core-api/ |
  gfp_mask-from-fs-io.rst
      15   memory reclaim calling back into the FS or IO paths and blocking on
      25   of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
      26   reclaim issues.
      44   any critical section with respect to the reclaim is started - e.g.
      45   lock shared with the reclaim context or when a transaction context
      46   nesting would be possible via reclaim. The restore function should be
      48   explanation what is the reclaim context for easier maintenance.
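The fragments above come from the part of gfp_mask-from-fs-io.rst that documents the scoped GFP_NOFS/GFP_NOIO ("save/restore") API. As a rough sketch of the pattern those lines refer to, the example below brackets a reclaim-critical section with memalloc_nofs_save()/memalloc_nofs_restore() from <linux/sched/mm.h>; my_fs_critical_section() and its allocation are hypothetical stand-ins, not code from that document.

    #include <linux/sched/mm.h>
    #include <linux/slab.h>

    /* Hypothetical FS path that must not recurse into FS reclaim. */
    static int my_fs_critical_section(size_t len)
    {
            unsigned int nofs_flags;
            void *buf;

            /* All allocations in this scope implicitly behave as GFP_NOFS. */
            nofs_flags = memalloc_nofs_save();

            buf = kmalloc(len, GFP_KERNEL); /* treated as GFP_NOFS here */
            if (!buf) {
                    memalloc_nofs_restore(nofs_flags);
                    return -ENOMEM;
            }

            /*
             * ... work that reclaim must not re-enter, e.g. under a lock
             * shared with the reclaim context ...
             */

            kfree(buf);
            memalloc_nofs_restore(nofs_flags);
            return 0;
    }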
|
  memory-allocation.rst
      43   direct reclaim may be triggered under memory pressure; the calling
      46   handler, use ``GFP_NOWAIT``. This flag prevents direct reclaim and
      74   prevent recursion deadlocks caused by direct memory reclaim calling
      87   GFP flags and reclaim behavior
      89   Memory allocations may trigger direct or background reclaim and it is
      95   doesn't kick the background reclaim. Should be used carefully because it
      97   reclaim.
      101  context but can wake kswapd to reclaim memory if the zone is below
      111  * ``GFP_KERNEL`` - both background and direct reclaim are allowed and the
      119  reclaim (one round of reclaim in this implementation). The OOM killer
      [all …]
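The memory-allocation.rst lines quoted here describe how the GFP flags differ in the reclaim they permit. Below is a minimal sketch of that choice with hypothetical helper names; the flag semantics (GFP_KERNEL allows direct reclaim, GFP_NOWAIT forbids it, GFP_NOFS avoids recursing into filesystem reclaim) follow the quoted text.

    #include <linux/gfp.h>
    #include <linux/slab.h>

    /* Process context: background and direct reclaim are both allowed. */
    static void *alloc_in_process_context(size_t len)
    {
            return kmalloc(len, GFP_KERNEL);
    }

    /*
     * Atomic context (e.g. under a spinlock or in an interrupt handler):
     * no direct reclaim, so the allocation may fail and must be handled.
     */
    static void *alloc_in_atomic_context(size_t len)
    {
            return kmalloc(len, GFP_NOWAIT);
    }

    /* Filesystem path: avoid recursing into FS reclaim. */
    static void *alloc_in_fs_path(size_t len)
    {
            return kmalloc(len, GFP_NOFS);
    }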
|
  workqueue.rst
      143  on code paths that handle memory reclaim are required to be queued on
      190  All wq which might be used in the memory reclaim paths **MUST**
      326  items which are used during memory reclaim. Each wq with
      329  reclaim, they should be queued to separate wq each with
      340  which are not involved in memory reclaim and don't need to be
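The workqueue.rst hits concern the rule that work items used during memory reclaim must be queued on a workqueue created with WQ_MEM_RECLAIM, which gives it a rescuer thread. A minimal sketch of such a workqueue follows; my_writeback_wq, my_writeback_work and my_writeback_fn() are invented names for the example.

    #include <linux/init.h>
    #include <linux/workqueue.h>

    /* Hypothetical work item sitting on a writeback/reclaim path. */
    static void my_writeback_fn(struct work_struct *work)
    {
            /* ... make forward progress freeing memory ... */
    }
    static DECLARE_WORK(my_writeback_work, my_writeback_fn);

    static struct workqueue_struct *my_writeback_wq;

    static int __init my_init(void)
    {
            /*
             * WQ_MEM_RECLAIM guarantees a rescuer thread, so the work item
             * can still run when worker creation is blocked by memory
             * pressure -- required for anything used during reclaim.
             */
            my_writeback_wq = alloc_workqueue("my_writeback", WQ_MEM_RECLAIM, 0);
            if (!my_writeback_wq)
                    return -ENOMEM;

            queue_work(my_writeback_wq, &my_writeback_work);
            return 0;
    }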
|
/linux/Documentation/ABI/testing/ |
  sysfs-kernel-mm-numa
      9    Description: Enable/disable demoting pages during reclaim
      11   Page migration during reclaim is intended for systems
      16   Allowing page migration during reclaim enables these
|
/linux/Documentation/admin-guide/device-mapper/ |
  dm-zoned.rst
      27   internally for storing metadata and performing reclaim operations.
      108  situation, a reclaim process regularly scans used conventional zones and
      109  tries to reclaim the least recently used zones by copying the valid
      128  (for both incoming BIO processing and reclaim process) and all dirty
      184  Normally the reclaim process will be started once there are less than 50
      185  percent free random zones. In order to start the reclaim process manually
      191  dmsetup message /dev/dm-X 0 reclaim
      193  will start the reclaim process and random zones will be moved to sequential
|
/linux/Documentation/admin-guide/sysctl/ |
  vm.rst
      486  A percentage of the total pages in each zone. On Zone reclaim
      490  systems that rarely perform global reclaim.
      494  Note that slab reclaim is triggered in a per zone / node fashion.
      504  This is a percentage of the total pages in each zone. Zone reclaim will
      905  This percentage value controls the tendency of the kernel to reclaim
      914  causes the kernel to prefer to reclaim dentries and inodes.
      953  A high rate of threads entering direct reclaim (allocstall) or kswapd
      971  1  Zone reclaim on
      972  2  Zone reclaim writes dirty pages out
      973  4  Zone reclaim swaps pages
      [all …]
|
/linux/mm/damon/ |
  Makefile
      7    obj-$(CONFIG_DAMON_RECLAIM) += reclaim.o
|
  Kconfig
      77   bool "Build DAMON-based reclaim (DAMON_RECLAIM)"
      81   that not accessed for a long time (cold) using DAMON and reclaim
|
/linux/drivers/usb/host/ |
  oxu210hp-hcd.c
      2094  next = qh->reclaim;            in end_unlink_async()
      2095  oxu->reclaim = next;           in end_unlink_async()
      2097  qh->reclaim = NULL;            in end_unlink_async()
      2116  oxu->reclaim = NULL;           in end_unlink_async()
      2139  && !oxu->reclaim) {            in start_unlink_async()
      2832  && oxu->reclaim                in unlink_async()
      2837  last->reclaim;                 in unlink_async()
      2841  last->reclaim = qh;            in unlink_async()
      2988  if (oxu->reclaim) {            in oxu_watchdog()
      3036  oxu->reclaim = NULL;           in oxu_hcd_init()
      [all …]
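The oxu210hp-hcd.c hits all manipulate the driver's ->reclaim pointers: queue heads waiting to be unlinked from the async schedule are chained through qh->reclaim, with oxu->reclaim as the list head. A much-simplified, hypothetical sketch of that chaining (the real driver interleaves it with hardware handshaking and watchdog handling):

    #include <stddef.h>

    /* Simplified stand-ins for the driver's queue-head and HCD state. */
    struct qh {
            struct qh *reclaim;     /* next QH pending unlink, NULL if last */
            /* ... hardware queue head state ... */
    };

    struct hcd_state {
            struct qh *reclaim;     /* head of the pending-unlink chain */
    };

    /* Append qh to the end of the pending-unlink chain. */
    static void queue_unlink(struct hcd_state *hcd, struct qh *qh)
    {
            qh->reclaim = NULL;
            if (!hcd->reclaim) {
                    hcd->reclaim = qh;
            } else {
                    struct qh *last = hcd->reclaim;

                    while (last->reclaim)
                            last = last->reclaim;
                    last->reclaim = qh;
            }
    }

    /* Pop the head once the controller has finished unlinking it. */
    static struct qh *finish_unlink(struct hcd_state *hcd)
    {
            struct qh *qh = hcd->reclaim;

            if (qh) {
                    hcd->reclaim = qh->reclaim;
                    qh->reclaim = NULL;
            }
            return qh;
    }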
|
/linux/Documentation/admin-guide/mm/damon/ |
  index.rst
      16   reclaim
|
/linux/drivers/md/ |
  dm-zoned-target.c
      423   dmz_reclaim_bio_acc(zone->dev->reclaim);                      in dmz_handle_bio()
      931   ret = dmz_ctr_reclaim(dmz->metadata, &dmz->dev[i].reclaim, i); in dmz_ctr()
      973   dmz_dtr_reclaim(dmz->dev[i].reclaim);                         in dmz_dtr()
      1046  dmz_suspend_reclaim(dmz->dev[i].reclaim);                     in dmz_suspend()
      1060  dmz_resume_reclaim(dmz->dev[i].reclaim);                      in dmz_resume()
      1138  dmz_schedule_reclaim(dmz->dev[i].reclaim);                    in dmz_message()
|
  dm-zoned.h
      57    struct dmz_reclaim *reclaim;               member
      292   dmz_reclaim_bio_acc(zone->dev->reclaim);   in dmz_deactivate_zone()
|
/linux/Documentation/accounting/ |
  taskstats-struct.rst
      38   6) Extended delay accounting fields for memory reclaim
      191  6) Extended delay accounting fields for memory reclaim::
      193  /* Delay waiting for memory reclaim */
|
  delay-accounting.rst
      15   d) memory reclaim
      48   delay seen for cpu, sync block I/O, swapin, memory reclaim etc.
|
/linux/drivers/media/mmc/siano/ |
  smssdio.c
      303  goto reclaim;    in smssdio_probe()
      307  reclaim:         in smssdio_probe()
|
/linux/Documentation/admin-guide/cgroup-v1/ |
  memory.rst
      82   memory.force_empty   trigger forced page reclaim
      181  More details can be found in the reclaim section of this document.
      266  to reclaim memory from the cgroup so as to make space for the new
      267  pages that the cgroup has touched. If the reclaim is unsuccessful,
      271  The reclaim algorithm has not been modified for cgroups, except that
      321  to trigger slab reclaim when those limits are reached.
      369  In the current implementation, memory reclaim will NOT be
      598  Please note that unlike during the global reclaim, limit reclaim
      601  if there are no file pages to reclaim.
      669  6.1 Hierarchical accounting and reclaim
      [all …]
|
/linux/arch/x86/kernel/cpu/sgx/ |
  sgx.h
      85   struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
|
/linux/Documentation/admin-guide/mm/ |
  concepts.rst
      182  repurposing them is called (surprise!) `reclaim`. Linux can reclaim
      193  will trigger `direct reclaim`. In this case allocation is stalled
      211  Like reclaim, the compaction may happen asynchronously in the ``kcompactd``
      218  kernel will be unable to reclaim enough memory to continue to operate. In
|
  transhuge.rst
      125  allocation failure and directly reclaim pages and compact
      132  to reclaim pages and wake kcompactd to compact memory so that
      137  will enter direct reclaim and compaction like ``always``, but
      139  other regions will wake kswapd in the background to reclaim
      144  will enter direct reclaim like ``always`` but only for regions
|
/linux/fs/lockd/ |
  clntproc.c
      278  if (host->h_reclaiming && !argp->reclaim)   in nlmclnt_call()
      308  if (argp->reclaim) {                        in nlmclnt_call()
      314  if (!argp->reclaim) {                       in nlmclnt_call()
      632  req->a_args.reclaim = 1;                    in nlmclnt_reclaim()
|
/linux/include/linux/lockd/ |
  xdr.h
      66   u32 reclaim;    member
|
/linux/Documentation/vm/ |
  unevictable-lru.rst
      32   reclaim in Linux. The problems have been observed at customer sites on large
      104  not attempt to reclaim pages on the unevictable list. This has a couple of
      107  (1) Because the pages are "hidden" from reclaim on the unevictable list, the
      108  reclaim process can be more efficient, dealing only with pages that have a
      265  reclaim a page in a VM_LOCKED VMA via try_to_unmap()
      327  it later if and when it attempts to reclaim the page.
      402  This is fine, because we'll catch it later if and if vmscan tries to reclaim
      530  try_to_unmap() is always called, by either vmscan for reclaim or for page
      537  When trying to reclaim, if try_to_unmap_one() finds the page in a VM_LOCKED
      544  munlock or munmap system calls, mm teardown (munlock_vma_pages_all), reclaim,
|
  z3fold.rst
      29   depend on MMU enabled and provides more predictable reclaim behavior
|
/linux/net/ipv4/ |
  tcp_metrics.c
      153  bool reclaim = false;        in tcpm_new() local
      163  reclaim = true;              in tcpm_new()
      171  if (unlikely(reclaim)) {     in tcpm_new()
      192  if (likely(!reclaim)) {      in tcpm_new()
|
/linux/drivers/net/ethernet/chelsio/cxgb3/ |
  sge.c
      328   unsigned int reclaim = q->processed - q->cleaned;   in reclaim_completed_tx() local
      330   reclaim = min(chunk, reclaim);                      in reclaim_completed_tx()
      331   if (reclaim) {                                      in reclaim_completed_tx()
      332   free_tx_desc(adapter, q, reclaim);                  in reclaim_completed_tx()
      333   q->cleaned += reclaim;                              in reclaim_completed_tx()
      334   q->in_use -= reclaim;                               in reclaim_completed_tx()
      1454  unsigned int reclaim = q->processed - q->cleaned;   in reclaim_completed_tx_imm() local
      1456  q->in_use -= reclaim;                               in reclaim_completed_tx_imm()
      1457  q->cleaned += reclaim;                              in reclaim_completed_tx_imm()
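The sge.c hits show the usual completed-Tx-descriptor bookkeeping: the gap between what the hardware has processed and what has already been cleaned is released (capped per call), and the cleaned/in_use counters are adjusted. Below is a self-contained restatement of that arithmetic; struct tx_queue and release_descriptors() are simplified placeholders, not the driver's real types.

    /* Simplified stand-ins for the driver's Tx queue state. */
    struct tx_queue {
            unsigned int processed;  /* descriptors completed by hardware */
            unsigned int cleaned;    /* descriptors already reclaimed */
            unsigned int in_use;     /* descriptors currently outstanding */
    };

    /* Placeholder for a free_tx_desc()-style helper. */
    static void release_descriptors(struct tx_queue *q, unsigned int n)
    {
            /* ... unmap DMA buffers and free the skbs for n descriptors ... */
            (void)q;
            (void)n;
    }

    /*
     * Reclaim up to 'chunk' completed descriptors, mirroring the
     * processed/cleaned/in_use arithmetic visible in the hits above.
     */
    static void reclaim_completed(struct tx_queue *q, unsigned int chunk)
    {
            unsigned int reclaim = q->processed - q->cleaned;

            if (reclaim > chunk)
                    reclaim = chunk;
            if (reclaim) {
                    release_descriptors(q, reclaim);
                    q->cleaned += reclaim;
                    q->in_use -= reclaim;
            }
    }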
|