/xen/docs/misc/

xenpaging.txt
      5  guest memory and its filesystems!
      9  xenpaging writes memory pages of a given guest to a file and moves the
     10  pages back to the pool of available memory. Once the guests wants to
     11  access the paged-out memory, the page is read from disk and placed into
     12  memory. This allows the sum of all running guests to use more memory
     33  Once xenpaging runs it needs a memory target, which is the memory
     37  xenstore-write /local/domain/<dom_id>/memory/target-tot_pages $((1024*512))
     39  Now xenpaging tries to page-out as many pages to keep the overall memory
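For illustration, a minimal C sketch that sets the same paging target through libxenstore's client API instead of the xenstore-write command quoted at line 37; the hard-coded domid of 1 and the error handling are assumptions made for the example, not part of xenpaging itself.

    #include <stdio.h>
    #include <string.h>
    #include <xenstore.h>

    int main(void)
    {
        struct xs_handle *xsh = xs_open(0);   /* connect to xenstored */
        char path[64], val[32];

        if ( !xsh )
            return 1;

        /* Same key and value as the xenstore-write example: 512*1024 pages. */
        snprintf(path, sizeof(path), "/local/domain/%d/memory/target-tot_pages", 1);
        snprintf(val, sizeof(val), "%d", 1024 * 512);

        if ( !xs_write(xsh, XBT_NULL, path, val, strlen(val)) )
            fprintf(stderr, "failed to set paging target\n");

        xs_close(xsh);
        return 0;
    }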
|
libxl_memory.txt
      1  /* === Domain memory breakdown: HVM guests ==================================
     38  === Domain memory breakdown: PV guests ==================================
|
vtpm-platforms.txt
     30  memory=8
     39  memory=8
     46  memory=8
     54  memory=1024
     61  memory=1024
     93  permitted access to IO memory at 0xfed42; this IO memory is accessible to the
|
pvh.pandoc
     12  The domain builder must load the kernel into the guest memory space and
     16  * `ebx`: contains the physical memory address where the loader has placed
     70  the position of the RSDP in memory (if present) can be fetched from the start
    111  the host. This is done because the hardware domain memory map is already a
    112  modified copy of the host memory map, so the same MTRR setup should work.
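Lines 16 and 70 refer to the start-of-day block whose physical address the loader passes in `ebx`. Below is a rough sketch of how a guest might pick the RSDP address out of that block, assuming the public hvm_start_info layout and the XEN_HVM_START_MAGIC_VALUE constant are reachable through the header named here; the include path and the find_rsdp helper are made up for the example.

    #include <stdint.h>
    #include <xen/arch-x86/hvm/start_info.h>   /* assumed install path */

    static uint64_t find_rsdp(uint32_t ebx_from_loader)
    {
        /* Assumes the block is reachable through an identity mapping here. */
        const struct hvm_start_info *si =
            (const struct hvm_start_info *)(uintptr_t)ebx_from_loader;

        if ( si->magic != XEN_HVM_START_MAGIC_VALUE )
            return 0;

        return si->rsdp_paddr;   /* 0 when no RSDP was provided */
    }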
|
xen-error-handling.txt
     70  Example usage: Most commonly for out-of-memory errors during
     72  enough memory to boot Xen, but if the failure does occur then the
     73  context of the failed memory allocation itself is not very
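A brief illustrative sketch of the boot-time out-of-memory pattern the excerpt describes, using the common hypervisor idiom of an allocation followed by panic(); the function name, table name and size are invented for the example.

    #include <xen/init.h>
    #include <xen/lib.h>
    #include <xen/xmalloc.h>

    static unsigned long *__init setup_example_table(unsigned int nr_frames)
    {
        unsigned long *frames = xmalloc_array(unsigned long, nr_frames);

        /* Nothing sensible can be done this early, so halt with a clear message. */
        if ( !frames )
            panic("Not enough memory for example frame table\n");

        return frames;
    }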
|
/xen/xen/include/

xlat.lst
     94  ! add_to_physmap memory.h
     95  ! add_to_physmap_batch memory.h
     96  ! foreign_memory_map memory.h
     97  ! memory_exchange memory.h
     98  ! memory_map memory.h
     99  ! memory_reservation memory.h
    100  ! mem_access_op memory.h
    102  ! pod_target memory.h
    103  ! remove_from_physmap memory.h
    105  ? vmemrange memory.h
    [all …]
|
/xen/docs/man/

xl.conf.5.pod
     61  memory assigned to domain 0 in order to free memory for new domains.
     64  domain 0 memory.
     71  of memory given to domain 0 by default.
    163  guarantee that there is memory available for the guest.
    165  quickly and the amount of free memory (which C<xl info> can show) is
    167  the amount of memory (see 'memory' in xl.conf(5)) is set, which is then
    169  The free memory in C<xl info> is the combination of the hypervisor's
    170  free heap memory minus the outstanding claims value.
    183  attempted as normal and may fail due to memory exhaustion.
    187  Free memory as reported by the hypervisor is used for
    [all …]
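Lines 169-170 describe the quantity xl works with as the hypervisor's free heap minus the outstanding claims. The sketch below shows one way to read both numbers with libxenctrl's xc_physinfo(); the fixed 4 KiB page size is an assumption rather than something queried, and the arithmetic only mirrors the description above, not xl's actual implementation.

    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        xc_physinfo_t info = { 0 };

        if ( !xch )
            return 1;

        if ( xc_physinfo(xch, &info) )
        {
            xc_interface_close(xch);
            return 1;
        }

        printf("free heap:   %lu KiB\n", (unsigned long)info.free_pages * 4);
        printf("outstanding: %lu KiB\n", (unsigned long)info.outstanding_pages * 4);
        printf("usable:      %lu KiB\n",
               (unsigned long)(info.free_pages - info.outstanding_pages) * 4);

        xc_interface_close(xch);
        return 0;
    }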
|
xl-numa-placement.7.pod
     11  NUMA (which stands for Non-Uniform Memory Access) means that the memory
     13  distance between that CPU and that memory. In fact, most of the NUMA
     14  systems are built in such a way that each processor has its local memory,
     16  data from and on remote memory (that is, memory local to some other processor)
     19  the memory directly attached to the set of cores.
     22  running memory-intensive workloads on a shared host. In fact, the cost
     23  of accessing non node-local memory locations is very high, and the
     49  created, as the most of its memory is allocated at that time and can
    144  affects NUMA placement and memory accesses as, in this case, the
    219  the candidate with with the greatest amount of free memory is
    [all …]
|
/xen/tools/examples/

xlexample.pvlinux
     25  # Initial memory allocation (MB)
     26  memory = 128
     28  # Maximum memory (MB)
     29  # If this is greater than `memory' then the slack will start ballooned
|
xlexample.hvm
     24  # Initial memory allocation (MB)
     25  memory = 128
     27  # Maximum memory (MB)
     28  # If this is greater than `memory' then the slack will start ballooned
|
xlexample.pvhlinux
     28  # Initial memory allocation (MB)
     29  memory = 512
|
/xen/xen/arch/x86/boot/

mem.S
     44  # e801h memory size call
     53  testw %dx, %dx                 # memory in AX/BX rather than
     60  addl %ecx, %edx                # add in lower memory
     61  movl %edx,bootsym(highmem_kb)  # store extended memory size
|
/xen/docs/misc/arm/device-tree/

guest.txt
     11  memory where the grant table should be mapped to, using an
     12  HYPERVISOR_memory_op hypercall. The memory region is large enough to map
     29  xen,uefi-mmap-start | 64-bit | Guest physical address of the UEFI memory
     32  xen,uefi-mmap-size | 32-bit | Size in bytes of the UEFI memory map
     36  | | memory map.
|
/xen/tools/xenstore/

talloc_guide.txt
     13  The new talloc is a hierarchical, reference counted memory pool system
     64  memory of the given type.
     99  then the memory is not actually released, but instead the most
    125  around 48 bytes of memory on intel x86 platforms).
    169  pieces of memory. A common use for destructors is to clean up
    213  memory without releasing the name. All of the memory is released when
    308  memory for a longer time.
    350  for the top level memory context, but only if
    364  for the top level memory context, but only if
    533  to reduce the noise in memory leak reports.
    [all …]
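The guide's central idea is the hierarchy: freeing a context releases everything allocated beneath it (lines 13 and 213). Here is a small self-contained sketch using the calls the guide documents; the connection struct, its peer field and the "dom7" string are made-up example data.

    #include <stdio.h>
    #include "talloc.h"

    struct connection {
        char *peer;
    };

    int main(void)
    {
        /* Top-level context: everything below hangs off this pointer. */
        void *ctx = talloc_init("example top level");

        struct connection *conn = talloc(ctx, struct connection);
        conn->peer = talloc_strdup(conn, "dom7");   /* child of conn */

        printf("peer = %s\n", conn->peer);

        /* Freeing the top-level context also frees conn and conn->peer. */
        talloc_free(ctx);
        return 0;
    }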
|
/xen/tools/helpers/

init-xenstore-domain.c
     26  static int memory;                                        variable
     68  int limit_kb = (maxmem ? : (memory + 1)) * 1024;          in build()
    183  rv = xc_dom_mem_init(dom, memory);                        in build()
    311  if ( maxmem < memory )                                    in parse_maxmem()
    361  memory = strtol(optarg, NULL, 10);                        in main()
    384  if ( optind != argc || !kernel || !memory )               in main()
    433  snprintf(buf, 16, "%d", memory * 1024);                   in main()
|
/xen/tools/tests/mce-test/cases/ucna_llc/guest/

cases.sh
     49  m) memory=$OPTARG;;
     61  create_hvm_guest $image -u $vcpus -m $memory
|
/xen/tools/tests/mce-test/cases/srao_llc/guest/

cases.sh
     49  m) memory=$OPTARG;;
     61  create_hvm_guest $image -u $vcpus -m $memory
|
/xen/tools/tests/mce-test/cases/srao_mem/guest/

cases.sh
     49  m) memory=$OPTARG;;
     61  create_hvm_guest $image -u $vcpus -m $memory
|
/xen/tools/libacpi/

build.c
    219  struct acpi_20_srat_memory *memory;                        in construct_srat() local
    225  sizeof(*memory) * config->numa.nr_vmemranges;              in construct_srat()
    252  memory = (struct acpi_20_srat_memory *)processor;          in construct_srat()
    255  memory->type = ACPI_MEMORY_AFFINITY;                       in construct_srat()
    256  memory->length = sizeof(*memory);                          in construct_srat()
    257  memory->domain = config->numa.vmemrange[i].nid;            in construct_srat()
    258  memory->flags = ACPI_MEM_AFFIN_ENABLED;                    in construct_srat()
    259  memory->base_address = config->numa.vmemrange[i].start;    in construct_srat()
    260  memory->mem_length = config->numa.vmemrange[i].end -       in construct_srat()
    262  memory++;                                                   in construct_srat()
    [all …]
|
/xen/tools/ocaml/test/

list_domains.ml
     12  and memory = dominfo.Xenlight.Dominfo.current_memkb
     14  printf "Dom %d: %c%c%c%c%c %LdKB\n" id running blocked paused shutdown dying memory
|
/xen/tools/hotplug/Linux/init.d/

sysconfig.xencommons.in
     68  # xenstore domain memory size in MiB.
     75  # Maximum xenstore domain memory size. Can be specified as:
     77  # - fraction of host memory, e.g. 1/100
|
/xen/tools/tests/mce-test/lib/

xen-mceinj-tool.sh
     70  m ) memory=$OPTARG;;
     87  [ -z $memory ] || sed -i "/^memory/s/^.*$/memory = $memory/" $config
|
/xen/docs/features/

dom0less.pandoc
     30  ### Loading binaries into memory ###
     67  memory = <0 0x20000>;
     83  Where memory is the memory of the VM in KBs, cpus is the number of
     85  ramdisk are in memory.
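As a worked reading of lines 67 and 83: the two cells of memory = <0 0x20000> form a single 64-bit value in the usual device tree encoding (high word first), and since the property is expressed in KBs this example gives the VM 0x20000 = 131072 KiB, i.e. 128 MiB of RAM.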
|
/xen/tools/flask/policy/modules/

nomigrate.te
      2  # once built, dom0 cannot read their memory.
|
prot_domU.te
      3  # map memory belonging to those domains.
|