/linux/Documentation/admin-guide/mm/hugetlbpage.rst:
    32: and surplus huge pages in the pool of huge pages of default size.
    57: huge page from the pool of huge pages at fault time.
    82: pages in the kernel's huge page pool. "Persistent" huge pages will be
    95: Once a number of huge pages have been pre-allocated to the kernel huge page
    105: Some platforms support multiple huge page sizes. To allocate huge pages
    122: specific huge page size. Valid huge page sizes are architecture
    178: huge page pool to 20, allocating or freeing huge pages, as required.
    211: persistent huge page pool is exhausted. As these surplus huge pages become
    228: of the in-use huge pages to surplus huge pages. This will occur even if
    262: 1GB and 2MB huge pages sizes. A 1GB huge page can be split into 512
    [all …]
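
The hugetlbpage.rst matches describe a persistent pool of pre-allocated huge
pages that mappings draw from at fault time. A minimal user-space sketch of
that flow, assuming a 2MB default huge page size and a pool populated
beforehand (e.g. echo 20 > /proc/sys/vm/nr_hugepages):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 2UL * 1024 * 1024;      /* assumes 2MB default size */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap");              /* pool empty or no hugetlb */
                    return 1;
            }
            ((char *)p)[0] = 1;                  /* fault in one huge page */
            munmap(p, len);
            return 0;
    }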

/linux/Documentation/admin-guide/mm/transhuge.rst:
    13: using huge pages for the backing of virtual memory with huge pages
    53: collapses sequences of basic pages into huge pages.
    242: ``huge=``. It can have following values:
    248: Do not allocate huge pages;
    260: ``huge=never`` will not attempt to break up huge pages at all, just stop more
    353: is incremented if kernel fails to split huge
    365: munmap() on part of huge page. It doesn't split huge page, only
    369: is incremented every time a huge zero page is
    385: for the huge page.
    398: freed a huge page for use.
    [all …]
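
These matches cover the tmpfs ``huge=`` mount option and the THP event
counters. For anonymous memory, the usual way to opt a range into THP is
madvise(MADV_HUGEPAGE); a small sketch (the hint is advisory and only
honoured when transparent_hugepage is set to "always" or "madvise"):

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 8UL * 1024 * 1024;
            void *p = NULL;

            /* Align to 2MB so a whole huge pmd can actually be used. */
            if (posix_memalign(&p, 2UL * 1024 * 1024, len))
                    return 1;
            madvise(p, len, MADV_HUGEPAGE);      /* advisory only */

            /* Touch the range; khugepaged may collapse it later. */
            for (size_t i = 0; i < len; i += 4096)
                    ((char *)p)[i] = 1;
            free(p);
            return 0;
    }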

/linux/Documentation/admin-guide/mm/concepts.rst:
    81: `huge`. Usage of huge pages significantly reduces pressure on TLB,
    85: memory with the huge pages. The first one is `HugeTLB filesystem`, or
    88: the memory and mapped using huge pages. The hugetlbfs is described at
    91: Another, more recent, mechanism that enables use of the huge pages is
    94: the system memory should and can be mapped by the huge pages, THP
    204: buffer for DMA, or when THP allocates a huge page. Memory `compaction`
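
concepts.rst names the two mechanisms: hugetlbfs and THP. A sketch of the
hugetlbfs side, assuming a mount at /dev/hugepages (the mount path and the
file name "demo" are illustrative, not fixed by the kernel):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 2UL * 1024 * 1024;
            int fd = open("/dev/hugepages/demo", O_CREAT | O_RDWR, 0600);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Mappings of hugetlbfs files are backed by huge pages. */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            if (p != MAP_FAILED)
                    munmap(p, len);
            close(fd);
            unlink("/dev/hugepages/demo");
            return 0;
    }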

/linux/Documentation/vm/hugetlbfs_reserv.rst:
    11: preallocated for application use. These huge pages are instantiated in a
    13: to be used. If no huge page exists at page fault time, the task is sent
    21: 'reserve' huge pages at mmap() time to ensure that huge pages would be
    37: huge pages are only available to the task which reserved them.
    38: Therefore, the number of huge pages generally available is computed
    52: There is one reserve map for each huge page mapping in the system.
    77: The PagePrivate page flag is used to indicate that a huge page
    78: reservation must be restored when the huge page is freed. More
    79: details will be discussed in the "Freeing huge pages" section.
    313: huge pages. If they can not be reserved, the mount fails.
    [all …]
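
hugetlbfs_reserv.rst documents the reservation contract: huge pages are
reserved at mmap() time so that a later fault does not SIGBUS. A sketch
contrasting the two behaviours, where the 2GB size is an arbitrary value
chosen to exceed a typical pool:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 2048UL * 1024 * 1024;   /* likely > pool size */
            int flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB;

            /* Default: the reservation is taken now, so this mmap()
             * fails outright if the pool cannot cover the mapping. */
            void *a = mmap(NULL, len, PROT_READ | PROT_WRITE, flags, -1, 0);
            printf("reserving mmap: %s\n", a == MAP_FAILED ? "failed" : "ok");

            /* MAP_NORESERVE skips the reservation; the mmap() may
             * succeed, but touching more pages than the pool holds
             * raises SIGBUS at fault time instead. */
            void *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           flags | MAP_NORESERVE, -1, 0);
            printf("noreserve mmap: %s\n", b == MAP_FAILED ? "failed" : "ok");
            return 0;
    }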

/linux/Documentation/vm/transhuge.rst:
    15: knowledge fall back to breaking huge pmd mapping into table of ptes and,
    43: is complete, so they won't ever notice the fact the page is huge. But
    59: Code walking pagetables but unaware about huge pmds can simply call
    94: To make pagetable walks huge pmd aware, all you need to do is to call
    96: mmap_lock in read (or write) mode to be sure a huge pmd cannot be
    102: page table lock will prevent the huge pmd being converted into a
    106: before. Otherwise, you can proceed to process the huge pmd and the
    109: Refcounts and transparent huge pages
    124: (stored in first tail page). For file huge pages, we also increment
    151: requests to split pinned huge pages: it expects page count to be equal to
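
These matches outline the locking pattern for huge-pmd-aware page table
walks. A kernel-internal sketch of that pattern (not a standalone program;
it assumes mmap_lock is already held in read or write mode, as the document
requires):

    static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                             unsigned long addr)
    {
            /* Returns the page table lock if *pmd is a stable huge
             * pmd, or NULL if it is not (or was split under us). */
            spinlock_t *ptl = pmd_trans_huge_lock(pmd, vma);

            if (ptl) {
                    /* ... process the whole huge pmd as one unit ... */
                    spin_unlock(ptl);
                    return;
            }
            /* Fall back to a regular pte-level walk of this range. */
    }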

/linux/Documentation/vm/arch_pgtable_helpers.rst:
    139: | pmd_set_huge | Creates a PMD huge mapping |
    141: | pmd_clear_huge | Clears a PMD huge mapping |
    195: | pud_set_huge | Creates a PUD huge mapping |
    197: | pud_clear_huge | Clears a PUD huge mapping |
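
For reference, the four helpers in those table rows are declared with
roughly this contract, returning nonzero on success; an architecture
without huge-vmap support can simply decline. Approximate declarations
(see include/linux/pgtable.h in the tree for the authoritative versions):

    int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
    int pmd_clear_huge(pmd_t *pmd);
    int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
    int pud_clear_huge(pud_t *pud);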

/linux/tools/testing/selftests/vm/charge_reserved_hugetlb.sh:
    48: if [[ -e /mnt/huge ]]; then
    49: rm -rf /mnt/huge/*
    50: umount /mnt/huge || echo error
    51: rmdir /mnt/huge
    256: if [[ -e /mnt/huge ]]; then
    257: rm -rf /mnt/huge/*
    258: umount /mnt/huge
    259: rmdir /mnt/huge
    286: mkdir -p /mnt/huge
    287: mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
    [all …]
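
The selftest does its setup with mount(8); the equivalent mount(2) call
from C, as a sketch with a hard-coded 2MB page size in place of the
script's ${MB} variable (requires CAP_SYS_ADMIN):

    #include <stdio.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    int main(void)
    {
            mkdir("/mnt/huge", 0755);
            if (mount("none", "/mnt/huge", "hugetlbfs", 0,
                      "pagesize=2M,size=256M") != 0) {
                    perror("mount");
                    return 1;
            }
            return 0;
    }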

/linux/arch/powerpc/include/asm/nohash/32/pgtable.h:
    235: static int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge) in number_of_cells_per_pte() argument
    237: if (!huge) in number_of_cells_per_pte()
    248: unsigned long clr, unsigned long set, int huge) in pte_update() argument
    256: num = number_of_cells_per_pte(pmd, new, huge); in pte_update()
    277: unsigned long clr, unsigned long set, int huge) in pte_update() argument
    327: int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
    329: pte_update(vma->vm_mm, address, ptep, 0, set, huge); in __ptep_set_access_flags()

/linux/arch/powerpc/include/asm/nohash/32/pte-8xx.h:
    140: unsigned long clr, unsigned long set, int huge);
    153: int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
    155: pte_update(vma->vm_mm, address, ptep, clr, set, huge); in __ptep_set_access_flags()

/linux/arch/powerpc/include/asm/book3s/64/hash.h:
    147: pte_t *ptep, unsigned long pte, int huge);
    154: int huge) in hash__pte_update() argument
    172: if (!huge) in hash__pte_update()
    177: hpte_need_flush(mm, addr, ptep, old, huge); in hash__pte_update()

/linux/arch/powerpc/include/asm/book3s/64/radix.h:
    170: int huge) in radix__pte_update() argument
    175: if (!huge) in radix__pte_update()

/linux/Documentation/core-api/pin_user_pages.rst:
    58: For huge pages (and in fact, any compound page of more than 2 pages), the
    65: huge pages, because each tail page adds a refcount to the head page. And in
    67: page overflows were seen in some huge page stress tests.
    69: This also means that huge pages and compound pages (of order > 1) do not suffer
    241: acquired since the system was powered on. For huge pages, the head page is
    242: pinned once for each page (head page and each tail page) within the huge page.
    243: This follows the same sort of behavior that get_user_pages() uses for huge
    244: pages: the head page is refcounted once for each tail or head page in the huge
    245: page, when get_user_pages() is applied to a huge page.
    249: PAGE_SIZE granularity, even if the original pin was applied to a huge page.
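
pin_user_pages.rst is about FOLL_PIN accounting; the usage pattern it
documents looks roughly like this kernel-internal sketch (signatures of
the gup family vary between kernel versions, and pin_user_buffer is a
hypothetical caller):

    /* 'uaddr' is a user virtual address supplied by the caller. */
    static int pin_user_buffer(unsigned long uaddr)
    {
            struct page *pages[16];
            long n;

            /* FOLL_PIN counterpart of get_user_pages_fast(); for a huge
             * page, the head page is pinned once per constituent page. */
            n = pin_user_pages_fast(uaddr, 16, FOLL_WRITE | FOLL_LONGTERM,
                                    pages);
            if (n <= 0)
                    return -EFAULT;

            /* ... program DMA to/from the pinned pages ... */

            unpin_user_pages(pages, n);          /* drop the FOLL_PIN refs */
            return 0;
    }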

/linux/Documentation/admin-guide/hw-vuln/multihit.rst:
    81: * - KVM: Mitigation: Split huge pages
    111: In order to mitigate the vulnerability, KVM initially marks all huge pages
    125: The KVM hypervisor mitigation mechanism for marking huge pages as
    134: non-executable huge pages in Linux kernel KVM module. All huge

/linux/arch/alpha/lib/ev6-clear_user.S:
    86: subq $1, 16, $4 # .. .. .. E : If < 16, we can not use the huge loop
    87: and $16, 0x3f, $2 # .. .. E .. : Forward work for huge loop
    88: subq $2, 0x40, $3 # .. E .. .. : bias counter (huge loop)

/linux/mm/shmem.c:
    116: int huge; member
    528: switch (huge) { in shmem_format_huge()
    1595: huge = false; in shmem_alloc_and_acct_page()
    1601: if (huge) in shmem_alloc_and_acct_page()
    3514: sbinfo->huge = ctx->huge; in shmem_reconfigure()
    3582: if (sbinfo->huge) in shmem_show_options()
    3648: sbinfo->huge = ctx->huge; in shmem_fill_super()
    3943: int huge; in shmem_enabled_store() local
    3953: if (huge == -EINVAL) in shmem_enabled_store()
    3956: huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY) in shmem_enabled_store()
    [all …]

/linux/mm/memory-failure.c:
    2087: bool huge = PageHuge(page); in __soft_offline_page() local
    2132: bool release = !huge;
    2134: if (!page_handle_poison(page, huge, release)) in __soft_offline_page()
    2141: pfn, msg_page[huge], ret, &page->flags); in __soft_offline_page()
    2147: pfn, msg_page[huge], page_count(page), &page->flags); in __soft_offline_page()

/linux/drivers/misc/lkdtm/bugs.c:
    274: volatile unsigned int huge = INT_MAX - 2; variable
    281: value = huge; in lkdtm_OVERFLOW_SIGNED()
    296: value = huge; in lkdtm_OVERFLOW_UNSIGNED()

/linux/arch/powerpc/mm/book3s64/hash_tlb.c:
    41: pte_t *ptep, unsigned long pte, int huge) in hpte_need_flush() argument
    61: if (huge) { in hpte_need_flush()

/linux/arch/parisc/mm/init.c:
    398: bool huge = false; in map_pages() local
    408: huge = true; in map_pages()
    413: huge = true; in map_pages()
    419: if (huge) in map_pages()

/linux/Documentation/features/vm/huge-vmap/arch-support.txt:
    2: # Feature name: huge-vmap

/linux/arch/powerpc/include/asm/nohash/64/pgtable.h:
    190: int huge) in pte_update() argument
    196: if (!huge) in pte_update()

/linux/fs/netfs/Kconfig:
    8: segmentation, local caching and transparent huge page support.

/linux/Documentation/vm/damon/design.rst:
    49: Only small parts in the super-huge virtual address space of the processes are
    54: cases. That said, too huge unmapped areas inside the monitoring target should
    63: exceptionally huge in usual address spaces, excluding these will be sufficient

/linux/Documentation/riscv/vm-layout.rst:
    42: …0000004000000000 | +256 GB | ffffffbfffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…

/linux/include/linux/shmem_fs.h:
    37: unsigned char huge; /* Whether to try for hugepages */ member