/linux/Documentation/vm/

arch_pgtable_helpers.rst
    17: PTE Page Table Helpers
    21: | pte_same     | Tests whether both PTE entries are the same |
    23: | pte_bad      | Tests a non-table mapped PTE                |
    25: | pte_present  | Tests a valid mapped PTE                    |
    27: | pte_young    | Tests a young PTE                           |
    29: | pte_dirty    | Tests a dirty PTE                           |
    31: | pte_write    | Tests a writable PTE                        |
    33: | pte_special  | Tests a special PTE                         |
    35: | pte_protnone | Tests a PROT_NONE PTE                       |
    37: | pte_devmap   | Tests a ZONE_DEVICE mapped PTE              |
    [all …]

split_page_table_lock.rst
    13: access to the table. At the moment we use split lock for PTE and PMD
    19: maps pte and takes PTE table lock, returns pointer to the taken
    22: unlocks and unmaps PTE table;
    24: allocates PTE table if needed and take the lock, returns pointer
    27: returns pointer to PTE table lock;
    33: Split page table lock for PTE tables is enabled compile-time if
    37: Split page table lock for PMD tables is enabled, if it's enabled for PTE
    57: There's no need in special enabling of PTE split page table lock: everything
    59: must be called on PTE table allocation / freeing.
    97: The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in

remap_file_pages.rst
    18: PTE for this purpose. PTE flags are scarce resource especially on some CPU

transhuge.rst
    120: - map/unmap of the pages with PTE entry increment/decrement ->_mapcount
    143: File pages get PG_double_map set on the first map of the page with PTE and

hmm.rst
    344: of copying a page of zeros. Valid PTE entries to system memory or
    347: the LRU), unmapped from the process, and a special migration PTE is
    348: inserted in place of the original PTE.
    411: Some devices have features such as atomic PTE bits that can be used to implement

highmem.rst
    141: advantage is that PAE has more PTE bits and can provide advanced features

/linux/tools/testing/selftests/vm/

mremap_test.c
    52:  #define PTE page_size  (macro)
    293: test_cases[3] = MAKE_TEST(PTE, PTE, PTE * 2,  (in main())
    298: test_cases[4] = MAKE_TEST(_1MB, PTE, _2MB, NON_OVERLAPPING, EXPECT_SUCCESS,  (in main())
    304: test_cases[6] = MAKE_TEST(PMD, PTE, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS,  (in main())
    312: test_cases[9] = MAKE_TEST(PUD, PTE, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,  (in main())

/linux/arch/sparc/include/asm/

pgalloc_64.h
    68: #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(MM, PMD, PTE)  (argument)
    69: #define pmd_populate(MM, PMD, PTE) pmd_set(MM, PMD, PTE)  (argument)

/linux/Documentation/admin-guide/mm/

soft-dirty.rst
    7:  The soft-dirty is a bit on a PTE which helps to track which pages a task
    20: 64-bit qword is the soft-dirty one. If set, the respective PTE was
    27: the soft-dirty bit on the respective PTE.
    33: bits on the PTE.
    38: the same place. When unmap is called, the kernel internally clears PTE values

idle_page_tracking.rst
    111: more page flag is introduced, the Young flag. When the PTE Accessed bit is
    113: is set on the page. The reclaimer treats the Young flag as an extra PTE

pagemap.rst
    133: a PTE. To make sure the flag is up-to-date one has to read
    208: However, values stored only on the normal PTE get lost irretrievably when the

/linux/Documentation/translations/zh_CN/arm64/

hugetlbpage.rst
    40: - CONT PTE PMD CONT PMD PUD

/linux/Documentation/translations/zh_TW/arm64/

hugetlbpage.rst
    43: - CONT PTE PMD CONT PMD PUD

/linux/arch/microblaze/include/asm/

mmu.h
    33: } PTE;  (typedef)

/linux/Documentation/admin-guide/hw-vuln/

l1tf.rst
    47:  table entry (PTE) has the Present bit cleared or other reserved bits set,
    48:  then speculative execution ignores the invalid PTE and loads the referenced
    50:  by the address bits in the PTE was still present and accessible.
    72:  PTE which is marked non present. This allows a malicious user space
    75:  encoded in the address bits of the PTE, thus making attacks more
    78:  The Linux kernel contains a mitigation for this attack vector, PTE
    92:  PTE inversion mitigation for L1TF, to attack physical host memory.
    132: 'Mitigation: PTE Inversion' The host protection is active
    136: information is appended to the 'Mitigation: PTE Inversion' part:
    582: - PTE inversion to protect against malicious user space. This is done

/linux/Documentation/virt/kvm/

locking.rst
    200: kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
    201: by clearing the RWX bits in the PTE and storing the original R & X bits in more
    204: atomically restore the PTE to a Present state. The W bit is not saved when the
    205: PTE is marked for access tracking and during restoration to the Present state,

/linux/Documentation/x86/

intel-iommu.rst
    106: DMAR:[fault reason 05] PTE Write access is not set
    108: DMAR:[fault reason 05] PTE Write access is not set

/linux/Documentation/vm/damon/

design.rst
    75: PTE Accessed-bit Based Access Check
    78: Both of the implementations for physical and virtual address spaces use PTE
    80: finding the relevant PTE Accessed bit(s) from the address. While the

faq.rst
    39: Nonetheless, DAMON provides vma/rmap tracking and PTE Accessed bit check based

/linux/arch/xtensa/

Kconfig.debug
    8: This check can spot missing TLB invalidation/wrong PTE permissions/

/linux/Documentation/arm64/

hugetlbpage.rst
    38: - CONT PTE PMD CONT PMD PUD

/linux/arch/arm/mm/

proc-macros.S
    110: #error PTE shared bit mismatch
    115: #error Invalid Linux PTE bit settings

/linux/arch/nds32/kernel/

ex-entry.S
    93: .long do_page_fault !PTE not present

/linux/arch/sparc/kernel/

sun4v_tlb_miss.S
    82:  mov %g3, %o2 ! PTE
    125: mov %g3, %o2 ! PTE

/linux/Documentation/powerpc/

papr_hcalls.rst
    178: an active PTE entry to the SCM block being bound.
    188: HCALL can fail if the Guest has an active PTE entry to the SCM block being