Searched refs:split_huge_page (Results 1 – 9 of 9) sorted by relevance
70  calling split_huge_page(page). This is what the Linux VM does before
71  it tries to swapout the hugepage for example. split_huge_page() can fail
92  split_huge_page() or split_huge_pmd() has a cost.
146 split_huge_page internally has to distribute the refcounts in the head
150 additional pins (i.e. from get_user_pages). split_huge_page() fails any
152 the sum of mapcount of all sub-pages plus one (split_huge_page caller must
155 split_huge_page uses migration entries to stabilize page->_refcount and
451 We cannot just split the page on partial mlock() as split_huge_page() can
190 static inline int split_huge_page(struct page *page)  in split_huge_page() function
386 static inline int split_huge_page(struct page *page)  in split_huge_page() function
357 err = split_huge_page(page);  in madvise_cold_or_pageout_pte_range()
425 if (split_huge_page(page)) {  in madvise_cold_or_pageout_pte_range()
636 if (split_huge_page(page)) {  in madvise_free_pte_range()
1544 split_huge_page(page);  in madvise_free_huge_pmd()
2841 if (!split_huge_page(page))  in deferred_split_scan()
2896 if (!split_huge_page(page))  in split_huge_pages_all()
2987 if (!split_huge_page(page))  in split_huge_pages_pid()
3045 if (!split_huge_page(fpage))  in split_huge_pages_in_file()
1224 if (split_huge_page(page))  in try_to_merge_one_page()
2188 split_huge_page(page);  in cmp_and_merge_page()
630 ret = split_huge_page(page);  in shmem_unused_huge_shrink()
906 return split_huge_page(page) >= 0;  in shmem_punch_compound()
1353 if (split_huge_page(page) < 0)  in shmem_writepage()
1439 if (unlikely(split_huge_page(page))) { in try_to_split_thp_page()
2281 ret = split_huge_page(page); in migrate_vma_collect_pmd()
Completed in 31 milliseconds
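The documentation excerpts in the first result note that split_huge_page() can fail, e.g. when the page holds additional pins from get_user_pages(), and nearly every call site above checks its return value. A minimal, non-buildable kernel-context sketch of that common pattern (the helper name try_split() and the fallback comment are illustrative assumptions, not taken from these results):

```
/* Sketch only: requires kernel headers/context; not a standalone program. */
static int try_split(struct page *page)
{
	int ret;

	lock_page(page);		/* caller must hold the page lock */
	ret = split_huge_page(page);	/* 0 on success, nonzero if the split fails
					 * (e.g. extra pins keep the THP intact) */
	unlock_page(page);

	return ret;			/* callers typically fall back, e.g. skip
					 * the page, when the split fails */
}
```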