1=head1 NAME
2
3xl - Xen management tool, based on libxenlight
4
5=head1 SYNOPSIS
6
7B<xl> I<subcommand> [I<args>]
8
9=head1 DESCRIPTION
10
11The B<xl> program is the new tool for managing Xen guest
12domains. The program can be used to create, pause, and shutdown
13domains. It can also be used to list current domains, enable or pin
14VCPUs, and attach or detach virtual block devices.
15
16The basic structure of every B<xl> command is almost always:
17
18=over 2
19
20B<xl> I<subcommand> [I<OPTIONS>] I<domain-id>
21
22=back
23
24Where I<subcommand> is one of the subcommands listed below, I<domain-id>
25is the numeric domain id, or the domain name (which will be internally
26translated to domain id), and I<OPTIONS> are subcommand specific
27options.  There are a few exceptions to this rule in the cases where
28the subcommand in question acts on all domains, the entire machine,
29or directly on the Xen hypervisor.  Those exceptions will be clear for
30each of those subcommands.
31
32=head1 NOTES
33
34=over 4
35
36=item start the script B</etc/init.d/xencommons> at boot time
37
38Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make
39sure you start the script B</etc/init.d/xencommons> at boot time to
40initialize all the daemons needed by B<xl>.
41
42=item setup a B<xenbr0> bridge in dom0
43
44In the most common network configuration, you need to setup a bridge in dom0
45named B<xenbr0> in order to have a working network in the guest domains.
46Please refer to the documentation of your Linux distribution to know how to
47setup the bridge.
48
49=item B<autoballoon>
50
51If you specify the amount of memory dom0 has, passing B<dom0_mem> to
52Xen, it is highly recommended to disable B<autoballoon>. Edit
53B</etc/xen/xl.conf> and set it to 0.
54
55=item run xl as B<root>
56
57Most B<xl> commands require root privileges to run due to the
58communications channels used to talk to the hypervisor.  Running as
59non root will return an error.
60
61=back
62
63=head1 GLOBAL OPTIONS
64
65Some global options are always available:
66
67=over 4
68
69=item B<-v>
70
71Verbose.
72
73=item B<-N>
74
75Dry run: do not actually execute the command.
76
77=item B<-f>
78
79Force execution: xl will refuse to run some commands if it detects that xend is
80also running, this option will force the execution of those commands, even
81though it is unsafe.
82
83=item B<-t>
84
85Always use carriage-return-based overwriting for displaying progress
86messages without scrolling the screen.  Without -t, this is done only
87if stderr is a tty.
88
89=back
90
91=head1 DOMAIN SUBCOMMANDS
92
93The following subcommands manipulate domains directly.  As stated
94previously, most commands take I<domain-id> as the first parameter.
95
96=over 4
97
98=item B<button-press> I<domain-id> I<button>
99
100I<This command is deprecated. Please use C<xl trigger> instead.>
101
102Indicate an ACPI button press to the domain, where I<button> can be 'power' or
103'sleep'. This command is only available for HVM domains.
104
105=item B<create> [I<configfile>] [I<OPTIONS>]
106
107The create subcommand takes a config file as its first argument: see
108L<xl.cfg(5)> for full details of the file format and possible options.
109If I<configfile> is missing B<xl> creates the domain assuming the default
110values for every option.
111
112I<configfile> has to be an absolute path to a file.
113
114Create will return B<as soon> as the domain is started.  This B<does
115not> mean the guest OS in the domain has actually booted, or is
116available for input.
117
118If the I<-F> option is specified, create will start the domain and not
119return until its death.
120
121B<OPTIONS>
122
123=over 4
124
125=item B<-q>, B<--quiet>
126
127No console output.
128
129=item B<-f=FILE>, B<--defconfig=FILE>
130
131Use the given configuration file.
132
133=item B<-p>
134
135Leave the domain paused after it is created.
136
137=item B<-F>
138
139Run in foreground until death of the domain.
140
141=item B<-V>, B<--vncviewer>
142
143Attach to domain's VNC server, forking a vncviewer process.
144
145=item B<-A>, B<--vncviewer-autopass>
146
147Pass the VNC password to vncviewer via stdin.
148
149=item B<-c>
150
151Attach console to the domain as soon as it has started.  This is
152useful for determining issues with crashing domains and just as a
153general convenience since you often want to watch the
154domain boot.
155
156=item B<key=value>
157
158It is possible to pass I<key=value> pairs on the command line to provide
159options as if they were written in the configuration file; these override
160whatever is in the I<configfile>.
161
162NB: Many config options require characters such as quotes or brackets
163which are interpreted by the shell (and often discarded) before being
164passed to xl, resulting in xl being unable to parse the value
165correctly.  A simple work-around is to put all extra options within a
166single set of quotes, separated by semicolons.  (See below for an example.)
167
168=back
169
170B<EXAMPLES>
171
172=over 4
173
174=item I<with config file>
175
176  xl create DebianLenny
177
178This creates a domain with the file /etc/xen/DebianLenny, and returns as
179soon as it is run.
180
181=item I<with extra parameters>
182
183  xl create hvm.cfg 'cpus="0-3"; pci=["01:05.1","01:05.2"]'
184
185This creates a domain with the file hvm.cfg, but additionally pins it to
186cpus 0-3, and passes through two PCI devices.
187
188=back
189
190=item B<config-update> I<domain-id> [I<configfile>] [I<OPTIONS>]
191
192Update the saved configuration for a running domain. This has no
193immediate effect but will be applied when the guest is next
194restarted. This command is useful to ensure that runtime modifications
195made to the guest will be preserved when the guest is restarted.
196
197Since Xen 4.5 xl has improved capabilities to handle dynamic domain
198configuration changes and will preserve any changes made at runtime
199when necessary. Therefore it should not normally be necessary to use
200this command any more.
201
202I<configfile> has to be an absolute path to a file.
203
204B<OPTIONS>
205
206=over 4
207
208=item B<-f=FILE>, B<--defconfig=FILE>
209
210Use the given configuration file.
211
212=item B<key=value>
213
214It is possible to pass I<key=value> pairs on the command line to
215provide options as if they were written in the configuration file;
216these override whatever is in the I<configfile>.  Please see the note
217under I<create> on handling special characters when passing
218I<key=value> pairs on the command line.
219
220=back
221
222=item B<console> [I<OPTIONS>] I<domain-id>
223
224Attach to the console of a domain specified by I<domain-id>.  If you've set up
225your domains to have a traditional login console this will look much like a
226normal text login screen.
227
228Use the key combination Ctrl+] to detach from the domain console.
229
230B<OPTIONS>
231
232=over 4
233
234=item I<-t [pv|serial]>
235
236Connect to a PV console or connect to an emulated serial console.
237PV consoles are the only consoles available for PV domains while HVM
238domains can have both. If this option is not specified it defaults to
239emulated serial for HVM guests and PV console for PV guests.
240
241=item I<-n NUM>
242
243Connect to console number I<NUM>. Console numbers start from 0.
244
245=back
246
247=item B<destroy> [I<OPTIONS>] I<domain-id>
248
249Immediately terminate the domain specified by I<domain-id>.  This doesn't give
250the domain OS any chance to react, and is the equivalent of ripping the power
251cord out on a physical machine.  In most cases you will want to use the
252B<shutdown> command instead.
253
254B<OPTIONS>
255
256=over 4
257
258=item I<-f>
259
260Allow domain 0 to be destroyed.  Because a domain cannot destroy itself, this
261is only possible when using a disaggregated toolstack, and is most useful when
262using a hardware domain separated from domain 0.
263
264=back
265
266=item B<domid> I<domain-name>
267
268Converts a domain name to a domain id.
269
270=item B<domname> I<domain-id>
271
272Converts a domain id to a domain name.
273
274=item B<rename> I<domain-id> I<new-name>
275
276Change the domain name of a domain specified by I<domain-id> to I<new-name>.
277
278=item B<dump-core> I<domain-id> [I<filename>]
279
280Dumps the virtual machine's memory for the specified domain to the
281I<filename> specified, without pausing the domain.  The dump file will
282be written to a distribution specific directory for dump files, for example:
283@XEN_DUMP_DIR@/dump.
284
285=item B<help> [I<--long>]
286
287Displays the short help message (i.e. common commands) by default.
288
289If the I<--long> option is specified, it displays the complete set of B<xl>
290subcommands, grouped by function.
291
292=item B<list> [I<OPTIONS>] [I<domain-id> ...]
293
294Displays information about one or more domains.  If no domains are
295specified it displays information about all domains.
296
297
298B<OPTIONS>
299
300=over 4
301
302=item B<-l>, B<--long>
303
304The output for B<xl list> is not the table view shown below, but
305instead presents the data as a JSON data structure.
306
307=item B<-Z>, B<--context>
308
309Also displays the security labels.
310
311=item B<-v>, B<--verbose>
312
313Also displays the domain UUIDs, the shutdown reason and security labels.
314
315=item B<-c>, B<--cpupool>
316
317Also displays the cpupool the domain belongs to.
318
319=item B<-n>, B<--numa>
320
321Also displays the domain NUMA node affinity.
322
323=back
324
325B<EXAMPLE>
326
327An example format for the list is as follows:
328
329    Name                                        ID   Mem VCPUs      State   Time(s)
330    Domain-0                                     0   750     4     r-----   11794.3
331    win                                          1  1019     1     r-----       0.3
332    linux                                        2  2048     2     r-----    5624.2
333
334Name is the name of the domain.  ID the numeric domain id.  Mem is the
335desired amount of memory to allocate to the domain (although it may
336not be the currently allocated amount).  VCPUs is the number of
337virtual CPUs allocated to the domain.  State is the run state (see
338below).  Time is the total run time of the domain as accounted for by
339Xen.
340
341B<STATES>
342
343The State field lists 6 states for a Xen domain, and which ones the
344current domain is in.
345
346=over 4
347
348=item B<r - running>
349
350The domain is currently running on a CPU.
351
352=item B<b - blocked>
353
354The domain is blocked, and not running or runnable.  This can be because the
355domain is waiting on IO (a traditional wait state) or has
356gone to sleep because there was nothing else for it to do.
357
358=item B<p - paused>
359
360The domain has been paused, usually occurring through the administrator
361running B<xl pause>.  When in a paused state the domain will still
362consume allocated resources (like memory), but will not be eligible for
363scheduling by the Xen hypervisor.
364
365=item B<s - shutdown>
366
367The guest OS has shut down (SCHEDOP_shutdown has been called) but the
368domain is not dying yet.
369
370=item B<c - crashed>
371
372The domain has crashed, which is always a violent ending.  Usually
373this state only occurs if the domain has been configured not to
374restart on a crash.  See L<xl.cfg(5)> for more info.
375
376=item B<d - dying>
377
378The domain is in the process of dying, but hasn't completely shut down or
379crashed.
380
381=back
382
383B<NOTES>
384
385=over 4
386
387The Time column is deceptive.  Virtual IO (network and block devices)
388used by the domains requires coordination by Domain0, which means that
389Domain0 is actually charged for much of the time that a DomainU is
390doing IO.  Use of this time value to determine relative utilizations
391by domains is thus very unreliable, as a high IO workload may show as
392less utilized than a high CPU workload.  Consider yourself warned.
393
394=back
395
396=item B<mem-set> I<domain-id> I<mem>
397
398Set the target for the domain's balloon driver.
399
400The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
401MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).
402
403This must be less than the initial B<maxmem> parameter in the domain's
404configuration.
405
406Note that this operation requests the guest operating system's balloon
407driver to reach the target amount of memory.  The guest may fail to
408reach that amount of memory for any number of reasons, including:
409
410=over 4
411
412=item
413
414The guest doesn't have a balloon driver installed
415
416=item
417
418The guest's balloon driver is buggy
419
420=item
421
422The guest's balloon driver cannot create free guest memory due to
423guest memory pressure
424
425=item
426
427The guest's balloon driver cannot allocate memory from Xen because of
428hypervisor memory pressure
429
430=item
431
432The guest administrator has disabled the balloon driver
433
434=back
435
436B<Warning:> There is no good way to know in advance how small of a
437mem-set will make a domain unstable and cause it to crash.  Be very
438careful when using this command on running domains.
439
440=item B<mem-max> I<domain-id> I<mem>
441
442Specify the limit Xen will place on the amount of memory a guest may
443allocate.
444
445The default unit is kiB.  Add 't' for TiB, 'g' for GiB, 'm' for
446MiB, 'k' for kiB, and 'b' for bytes (e.g., `2048m` for 2048 MiB).
447
448NB that users normally shouldn't need this command; B<xl mem-set> will
449set this as appropriate automatically.
450
451I<mem> can't be set lower than the current memory target for
452I<domain-id>.  It is allowed to be higher than the configured maximum
453memory size of the domain (B<maxmem> parameter in the domain's
454configuration). Note however that the initial B<maxmem> value is still
455used as an upper limit for B<xl mem-set>.  Also note that calling B<xl
456mem-set> will reset this value.
457
458The domain will not receive any signal regarding the changed memory
459limit.
460
461=item B<migrate> [I<OPTIONS>] I<domain-id> I<host>
462
463Migrate a domain to another host machine. By default B<xl> relies on ssh as a
464transport mechanism between the two hosts.
465
466B<OPTIONS>
467
468=over 4
469
470=item B<-s> I<sshcommand>
471
472Use <sshcommand> instead of ssh.  String will be passed to sh. If empty, run
473<host> instead of ssh <host> xl migrate-receive [-d -e].
474
475=item B<-e>
476
477On the new <host>, do not wait in the background for the death of the
478domain. See the corresponding option of the I<create> subcommand.
479
480=item B<-C> I<config>
481
482Send the specified <config> file instead of the file used on creation of the
483domain.
484
485=item B<--debug>
486
487Display huge (!) amount of debug information during the migration process.
488
489=item B<-p>
490
491Leave the domain on the receive side paused after migration.
492
493=item B<-D>
494
495Preserve the B<domain-id> in the domain coniguration that is transferred
496such that it will be identical on the destination host, unless that
497configuration is overridden using the B<-C> option. Note that it is not
498possible to use this option for a 'localhost' migration.
499
500=back
501
502=item B<remus> [I<OPTIONS>] I<domain-id> I<host>
503
504Enable Remus HA or COLO HA for domain. By default B<xl> relies on ssh as a
505transport mechanism between the two hosts.
506
507B<NOTES>
508
509=over 4
510
511Remus support in xl is still in experimental (proof-of-concept) phase.
512Disk replication support is limited to DRBD disks.
513
514COLO support in xl is still in experimental (proof-of-concept)
515phase. All options are subject to change in the future.
516
517=back
518
519COLO disk configuration looks like:
520
521  disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']
522
523The supported options are:
524
525=over 4
526
527=item B<colo-host>   : Secondary host's ip address.
528
529=item B<colo-port>   : Secondary host's port, we will run a nbd server on the
530secondary host, and the nbd server will listen on this port.
531
532=item B<colo-export> : Nbd server's disk export name of the secondary host.
533
534=item B<active-disk> : Secondary's guest write will be buffered to this disk,
535and it's used by the secondary.
536
537=item B<hidden-disk> : Primary's modified contents will be buffered in this
538disk, and it's used by the secondary.
539
540=back
541
542COLO network configuration looks like:
543
544  vif = [ '...,forwarddev=xxx,...']
545
546The supported options are:
547
548=over 4
549
550=item B<forwarddev> : Forward devices for the primary and the secondary, they
551are directly connected.
552
553
554=back
555
556B<OPTIONS>
557
558=over 4
559
560=item B<-i> I<MS>
561
562Checkpoint domain memory every MS milliseconds (default 200ms).
563
564=item B<-u>
565
566Disable memory checkpoint compression.
567
568=item B<-s> I<sshcommand>
569
570Use <sshcommand> instead of ssh.  String will be passed to sh.
571If empty, run <host> instead of ssh <host> xl migrate-receive -r [-e].
572
573=item B<-e>
574
575On the new <host>, do not wait in the background for the death of the domain.
576See the corresponding option of the I<create> subcommand.
577
578=item B<-N> I<netbufscript>
579
580Use <netbufscript> to setup network buffering instead of the
581default script (/etc/xen/scripts/remus-netbuf-setup).
582
583=item B<-F>
584
585Run Remus in unsafe mode. Use this option with caution as failover may
586not work as intended.
587
588=item B<-b>
589
590Replicate memory checkpoints to /dev/null (blackhole).
591Generally useful for debugging. Requires enabling unsafe mode.
592
593=item B<-n>
594
595Disable network output buffering. Requires enabling unsafe mode.
596
597=item B<-d>
598
599Disable disk replication. Requires enabling unsafe mode.
600
601=item B<-c>
602
603Enable COLO HA. This conflicts with B<-i> and B<-b>, and memory
604checkpoint compression must be disabled.
605
606=item B<-p>
607
608Use userspace COLO Proxy. This option must be used in conjunction
609with B<-c>.
610
611=back
612
613=item B<pause> I<domain-id>
614
615Pause a domain.  When in a paused state the domain will still consume
616allocated resources (such as memory), but will not be eligible for
617scheduling by the Xen hypervisor.
618
619=item B<reboot> [I<OPTIONS>] I<domain-id>
620
621Reboot a domain.  This acts just as if the domain had the B<reboot>
622command run from the console.  The command returns as soon as it has
623executed the reboot action, which may be significantly earlier than when the
624domain actually reboots.
625
626For HVM domains this requires PV drivers to be installed in your guest
627OS. If PV drivers are not present but you have configured the guest OS
628to behave appropriately you may be able to use the I<-F> option to
629trigger a reset button press.
630
631The behavior of what happens to a domain when it reboots is set by the
632B<on_reboot> parameter of the domain configuration file when the
633domain was created.
634
635B<OPTIONS>
636
637=over 4
638
639=item B<-F>
640
641If the guest does not support PV reboot control then fallback to
642sending an ACPI power event (equivalent to the I<reset> option to
643I<trigger>).
644
645You should ensure that the guest is configured to behave as expected
646in response to this event.
647
648=back
649
650=item B<restore> [I<OPTIONS>] [I<configfile>] I<checkpointfile>
651
652Build a domain from an B<xl save> state file.  See B<save> for more info.
653
654B<OPTIONS>
655
656=over 4
657
658=item B<-p>
659
660Do not unpause the domain after restoring it.
661
662=item B<-e>
663
664Do not wait in the background for the death of the domain on the new host.
665See the corresponding option of the I<create> subcommand.
666
667=item B<-d>
668
669Enable debug messages.
670
671=item B<-V>, B<--vncviewer>
672
673Attach to the domain's VNC server, forking a vncviewer process.
674
675=item B<-A>, B<--vncviewer-autopass>
676
677Pass the VNC password to vncviewer via stdin.
678
679
680
681=back
682
683=item B<save> [I<OPTIONS>] I<domain-id> I<checkpointfile> [I<configfile>]
684
685Saves a running domain to a state file so that it can be restored
686later.  Once saved, the domain will no longer be running on the
687system, unless the -c or -p options are used.
688B<xl restore> restores from this checkpoint file.
689Passing a config file argument allows the user to manually select the VM config
690file used to create the domain.
691
692=over 4
693
694=item B<-c>
695
696Leave the domain running after creating the snapshot.
697
698=item B<-p>
699
700Leave the domain paused after creating the snapshot.
701
702=item B<-D>
703
704Preserve the B<domain-id> in the domain coniguration that is embedded in
705the state file such that it will be identical when the domain is restored,
706unless that configuration is overridden. (See the B<restore> operation
707above).
708
709=back
710
711=item B<sharing> [I<domain-id>]
712
713Display the number of shared pages for a specified domain. If no domain is
714specified it displays information about all domains.
715
716=item B<shutdown> [I<OPTIONS>] I<-a|domain-id>
717
718Gracefully shuts down a domain.  This coordinates with the domain OS
719to perform graceful shutdown, so there is no guarantee that it will
720succeed, and may take a variable length of time depending on what
721services must be shut down in the domain.
722
723For HVM domains this requires PV drivers to be installed in your guest
724OS. If PV drivers are not present but you have configured the guest OS
725to behave appropriately you may be able to use the I<-F> option to
726trigger a power button press.
727
728The command returns immediately after signaling the domain unless the
729B<-w> flag is used.
730
731The behavior of what happens to a domain when it reboots is set by the
732B<on_shutdown> parameter of the domain configuration file when the
733domain was created.
734
735B<OPTIONS>
736
737=over 4
738
739=item B<-a>, B<--all>
740
741Shutdown all guest domains.  Often used when doing a complete shutdown
742of a Xen system.
743
744=item B<-w>, B<--wait>
745
746Wait for the domain to complete shutdown before returning.  If given once,
747the wait is for domain shutdown or domain death.  If given multiple times,
748the wait is for domain death only.
749
750=item B<-F>
751
752If the guest does not support PV shutdown control then fallback to
753sending an ACPI power event (equivalent to the I<power> option to
754I<trigger>).
755
756You should ensure that the guest is configured to behave as expected
757in response to this event.
758
759=back
760
761=item B<sysrq> I<domain-id> I<letter>
762
763Send a <Magic System Request> to the domain, each type of request is
764represented by a different letter.
765It can be used to send SysRq requests to Linux guests, see sysrq.txt in
766your Linux Kernel sources for more information.
767It requires PV drivers to be installed in your guest OS.
768
769=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep|s3resume> [I<VCPU>]
770
771Send a trigger to a domain, where the trigger can be: nmi, reset, init, power
772or sleep.  Optionally a specific vcpu number can be passed as an argument.
773This command is only available for HVM domains.
774
775=item B<unpause> I<domain-id>
776
777Moves a domain out of the paused state.  This will allow a previously
778paused domain to now be eligible for scheduling by the Xen hypervisor.
779
780=item B<vcpu-set> I<domain-id> I<vcpu-count>
781
782Enables the I<vcpu-count> virtual CPUs for the domain in question.
783Like mem-set, this command can only allocate up to the maximum virtual
784CPU count configured at boot for the domain.
785
786If the I<vcpu-count> is smaller than the current number of active
787VCPUs, the highest number VCPUs will be hotplug removed.  This may be
788important for pinning purposes.
789
790Attempting to set the VCPUs to a number larger than the initially
791configured VCPU count is an error.  Trying to set VCPUs to < 1 will be
792quietly ignored.
793
794Some guests may need to actually bring the newly added CPU online
795after B<vcpu-set>, go to B<SEE ALSO> section for information.
796
797=item B<vcpu-list> [I<domain-id>]
798
799Lists VCPU information for a specific domain.  If no domain is
800specified, VCPU information for all domains will be provided.
801
802=item B<vcpu-pin> [I<-f|--force>] I<domain-id> I<vcpu> I<cpus hard> I<cpus soft>
803
804Set hard and soft affinity for a I<vcpu> of <domain-id>. Normally VCPUs
805can float between available CPUs whenever Xen deems a different run state
806is appropriate.
807
808Hard affinity can be used to restrict this, by ensuring certain VCPUs
809can only run on certain physical CPUs. Soft affinity specifies a I<preferred>
810set of CPUs. Soft affinity needs special support in the scheduler, which is
811only provided in credit1.
812
813The keyword B<all> can be used to apply the hard and soft affinity masks to
814all the VCPUs in the domain. The symbol '-' can be used to leave either
815hard or soft affinity alone.
816
817For example:
818
819 xl vcpu-pin 0 3 - 6-9
820
821will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and 9,
822leaving its hard affinity untouched. On the other hand:
823
824 xl vcpu-pin 0 3 3,4 6-9
825
826will set both hard and soft affinity, the former to pCPUs 3 and 4, the
827latter to pCPUs 6,7,8, and 9.
828
829Specifying I<-f> or I<--force> will remove a temporary pinning done by the
830operating system (normally this should be done by the operating system).
831In case a temporary pinning is active for a vcpu the affinity of this vcpu
832can't be changed without this option.
833
834=item B<vm-list>
835
836Prints information about guests. This list excludes information about
837service or auxiliary domains such as dom0 and stubdoms.
838
839B<EXAMPLE>
840
841An example format for the list is as follows:
842
843    UUID                                  ID    name
844    59e1cf6c-6ab9-4879-90e7-adc8d1c63bf5  2    win
845    50bc8f75-81d0-4d53-b2e6-95cb44e2682e  3    linux
846
847=item B<vncviewer> [I<OPTIONS>] I<domain-id>
848
849Attach to the domain's VNC server, forking a vncviewer process.
850
851B<OPTIONS>
852
853=over 4
854
855=item I<--autopass>
856
857Pass the VNC password to vncviewer via stdin.
858
859=back
860
861=back
862
863=head1 XEN HOST SUBCOMMANDS
864
865=over 4
866
867=item B<debug-keys> I<keys>
868
869Send debug I<keys> to Xen. It is the same as pressing the Xen
870"conswitch" (Ctrl-A by default) three times and then pressing "keys".
871
872=item B<set-parameters> I<params>
873
874Set hypervisor parameters as specified in I<params>. This allows for some
875boot parameters of the hypervisor to be modified in the running systems.
876
877=item B<dmesg> [I<OPTIONS>]
878
879Reads the Xen message buffer, similar to dmesg on a Linux system.  The
880buffer contains informational, warning, and error messages created
881during Xen's boot process.  If you are having problems with Xen, this
882is one of the first places to look as part of problem determination.
883
884B<OPTIONS>
885
886=over 4
887
888=item B<-c>, B<--clear>
889
890Clears Xen's message buffer.
891
892=back
893
894=item B<info> [I<OPTIONS>]
895
896Print information about the Xen host in I<name : value> format.  When
897reporting a Xen bug, please provide this information as part of the
898bug report. See I<https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project> on how to
899report Xen bugs.
900
901Sample output looks as follows:
902
903 host                   : scarlett
904 release                : 3.1.0-rc4+
905 version                : #1001 SMP Wed Oct 19 11:09:54 UTC 2011
906 machine                : x86_64
907 nr_cpus                : 4
908 nr_nodes               : 1
909 cores_per_socket       : 4
910 threads_per_core       : 1
911 cpu_mhz                : 2266
912 hw_caps                : bfebfbff:28100800:00000000:00003b40:009ce3bd:00000000:00000001:00000000
913 virt_caps              : hvm hvm_directio
914 total_memory           : 6141
915 free_memory            : 4274
916 free_cpus              : 0
917 outstanding_claims     : 0
918 xen_major              : 4
919 xen_minor              : 2
920 xen_extra              : -unstable
921 xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
922 xen_scheduler          : credit
923 xen_pagesize           : 4096
924 platform_params        : virt_start=0xffff800000000000
925 xen_changeset          : Wed Nov 02 17:09:09 2011 +0000 24066:54a5e994a241
926 xen_commandline        : com1=115200,8n1 guest_loglvl=all dom0_mem=750M console=com1
927 cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
928 cc_compile_by          : sstabellini
929 cc_compile_domain      : uk.xensource.com
930 cc_compile_date        : Tue Nov  8 12:03:05 UTC 2011
931 xend_config_format     : 4
932
933
934B<FIELDS>
935
936Not all fields will be explained here, but some of the less obvious
937ones deserve explanation:
938
939=over 4
940
941=item B<hw_caps>
942
943A vector showing what hardware capabilities are supported by your
944processor.  This is equivalent to, though more cryptic, the flags
945field in /proc/cpuinfo on a normal Linux machine: they both derive from
946the feature bits returned by the cpuid command on x86 platforms.
947
948=item B<free_memory>
949
950Available memory (in MB) not allocated to Xen, or any other domains, or
951claimed for domains.
952
953=item B<outstanding_claims>
954
955When a claim call is done (see L<xl.conf(5)>) a reservation for a specific
956amount of pages is set and also a global value is incremented. This
957global value (outstanding_claims) is then reduced as the domain's memory
958is populated and eventually reaches zero. Most of the time the value will
959be zero, but if you are launching multiple guests, and B<claim_mode> is
960enabled, this value can increase/decrease. Note that the value also
961affects the B<free_memory> - as it will reflect the free memory
962in the hypervisor minus the outstanding pages claimed for guests.
963See xl I<info> B<claims> parameter for detailed listing.
964
965=item B<xen_caps>
966
967The Xen version and architecture.  Architecture values can be one of:
968x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64.
969
970=item B<xen_changeset>
971
972The Xen mercurial changeset id.  Very useful for determining exactly
973what version of code your Xen system was built from.
974
975=back
976
977B<OPTIONS>
978
979=over 4
980
981=item B<-n>, B<--numa>
982
983List host NUMA topology information
984
985=back
986
987=item B<top>
988
989Executes the B<xentop(1)> command, which provides real time monitoring of
990domains.  Xentop has a curses interface, and is reasonably self explanatory.
991
992=item B<uptime>
993
994Prints the current uptime of the domains running.
995
996=item B<claims>
997
998Prints information about outstanding claims by the guests. This provides
999the outstanding claims and currently populated memory count for the guests.
1000These values added up reflect the global outstanding claim value, which
1001is provided via the I<info> argument, B<outstanding_claims> value.
1002The B<Mem> column has the cumulative value of outstanding claims and
1003the total amount of memory that has been right now allocated to the guest.
1004
1005B<EXAMPLE>
1006
1007An example format for the list is as follows:
1008
1009 Name                                        ID   Mem VCPUs      State   Time(s)  Claimed
1010 Domain-0                                     0  2047     4     r-----      19.7     0
1011 OL5                                          2  2048     1     --p---       0.0   847
1012 OL6                                          3  1024     4     r-----       5.9     0
1013 Windows_XP                                   4  2047     1     --p---       0.0  1989
1014
1015In which it can be seen that the OL5 guest still has 847MB of claimed
1016memory (out of the total 2048MB where 1191MB has been allocated to
1017the guest).
1018
1019=back
1020
1021=head1 SCHEDULER SUBCOMMANDS
1022
1023Xen ships with a number of domain schedulers, which can be set at boot
1024time with the B<sched=> parameter on the Xen command line.  By
1025default B<credit> is used for scheduling.
1026
1027=over 4
1028
1029=item B<sched-credit> [I<OPTIONS>]
1030
1031Set or get credit (aka credit1) scheduler parameters.  The credit scheduler is
1032a proportional fair share CPU scheduler built from the ground up to be
1033work conserving on SMP hosts.
1034
1035Each domain (including Domain0) is assigned a weight and a cap.
1036
1037B<OPTIONS>
1038
1039=over 4
1040
1041=item B<-d DOMAIN>, B<--domain=DOMAIN>
1042
1043Specify domain for which scheduler parameters are to be modified or retrieved.
1044Mandatory for modifying scheduler parameters.
1045
1046=item B<-w WEIGHT>, B<--weight=WEIGHT>
1047
1048A domain with a weight of 512 will get twice as much CPU as a domain
1049with a weight of 256 on a contended host. Legal weights range from 1
1050to 65535 and the default is 256.
1051
1052=item B<-c CAP>, B<--cap=CAP>
1053
1054The cap optionally fixes the maximum amount of CPU a domain will be
1055able to consume, even if the host system has idle CPU cycles. The cap
1056is expressed in percentage of one physical CPU: 100 is 1 physical CPU,
105750 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is
1058no upper cap.
1059
1060NB: Many systems have features that will scale down the computing
1061power of a cpu that is not 100% utilized.  This can be in the
1062operating system, but can also sometimes be below the operating system
1063in the BIOS.  If you set a cap such that individual cores are running
1064at less than 100%, this may have an impact on the performance of your
1065workload over and above the impact of the cap. For example, if your
1066processor runs at 2GHz, and you cap a vm at 50%, the power management
1067system may also reduce the clock speed to 1GHz; the effect will be
1068that your VM gets 25% of the available power (50% of 1GHz) rather than
106950% (50% of 2GHz).  If you are not getting the performance you expect,
1070look at performance and cpufreq options in your operating system and
1071your BIOS.
1072
1073=item B<-p CPUPOOL>, B<--cpupool=CPUPOOL>
1074
1075Restrict output to domains in the specified cpupool.
1076
1077=item B<-s>, B<--schedparam>
1078
1079Specify to list or set pool-wide scheduler parameters.
1080
1081=item B<-t TSLICE>, B<--tslice_ms=TSLICE>
1082
1083Timeslice tells the scheduler how long to allow VMs to run before
1084pre-empting.  The default is 30ms.  Valid ranges are 1ms to 1000ms.
1085The length of the timeslice (in ms) must be higher than the length of
1086the ratelimit (see below).
1087
1088=item B<-r RLIMIT>, B<--ratelimit_us=RLIMIT>
1089
1090Ratelimit attempts to limit the number of schedules per second.  It
1091sets a minimum amount of time (in microseconds) a VM must run before
1092we will allow a higher-priority VM to pre-empt it.  The default value
1093is 1000 microseconds (1ms).  Valid range is 100 to 500000 (500ms).
1094The ratelimit length must be lower than the timeslice length.
1095
1096=item B<-m DELAY>, B<--migration_delay_us=DELAY>
1097
1098Migration delay specifies for how long a vCPU, after it stopped running should
1099be considered "cache-hot". Basically, if less than DELAY us passed since when
1100the vCPU was executing on a CPU, it is likely that most of the vCPU's working
1101set is still in the CPU's cache, and therefore the vCPU is not migrated.
1102
1103Default is 0. Maximum is 100 ms. This can be effective at preventing vCPUs
1104to bounce among CPUs too quickly, but, at the same time, the scheduler stops
1105being fully work-conserving.
1106
1107=back
1108
1109B<COMBINATION>
1110
1111The following is the effect of combining the above options:
1112
1113=over 4
1114
1115=item B<E<lt>nothingE<gt>>             : List all domain params and sched params from all pools
1116
1117=item B<-d [domid]>            : List domain params for domain [domid]
1118
1119=item B<-d [domid] [params]>   : Set domain params for domain [domid]
1120
1121=item B<-p [pool]>             : list all domains and sched params for [pool]
1122
1123=item B<-s>                    : List sched params for poolid 0
1124
1125=item B<-s [params]>           : Set sched params for poolid 0
1126
1127=item B<-p [pool] -s>          : List sched params for [pool]
1128
1129=item B<-p [pool] -s [params]> : Set sched params for [pool]
1130
1131=item B<-p [pool] -d>...       : Illegal
1132
1133=back
1134
1135=item B<sched-credit2> [I<OPTIONS>]
1136
1137Set or get credit2 scheduler parameters.  The credit2 scheduler is a
1138proportional fair share CPU scheduler built from the ground up to be
1139work conserving on SMP hosts.
1140
1141Each domain (including Domain0) is assigned a weight.
1142
1143B<OPTIONS>
1144
1145=over 4
1146
1147=item B<-d DOMAIN>, B<--domain=DOMAIN>
1148
1149Specify domain for which scheduler parameters are to be modified or retrieved.
1150Mandatory for modifying scheduler parameters.
1151
1152=item B<-w WEIGHT>, B<--weight=WEIGHT>
1153
1154A domain with a weight of 512 will get twice as much CPU as a domain
1155with a weight of 256 on a contended host. Legal weights range from 1
1156to 65535 and the default is 256.
1157
1158=item B<-p CPUPOOL>, B<--cpupool=CPUPOOL>
1159
1160Restrict output to domains in the specified cpupool.
1161
1162=item B<-s>, B<--schedparam>
1163
1164Specify to list or set pool-wide scheduler parameters.
1165
1166=item B<-r RLIMIT>, B<--ratelimit_us=RLIMIT>
1167
1168Attempts to limit the rate of context switching. It is basically the same
1169as B<--ratelimit_us> in B<sched-credit>
1170
1171=back
1172
1173=item B<sched-rtds> [I<OPTIONS>]
1174
1175Set or get rtds (Real Time Deferrable Server) scheduler parameters.
1176This rt scheduler applies Preemptive Global Earliest Deadline First
1177real-time scheduling algorithm to schedule VCPUs in the system.
1178Each VCPU has a dedicated period, budget and extratime.
1179While scheduled, a VCPU burns its budget.
1180A VCPU has its budget replenished at the beginning of each period;
1181Unused budget is discarded at the end of each period.
1182A VCPU with extratime set gets extra time from the unreserved system resource.
1183
1184B<OPTIONS>
1185
1186=over 4
1187
1188=item B<-d DOMAIN>, B<--domain=DOMAIN>
1189
1190Specify domain for which scheduler parameters are to be modified or retrieved.
1191Mandatory for modifying scheduler parameters.
1192
1193=item B<-v VCPUID/all>, B<--vcpuid=VCPUID/all>
1194
1195Specify vcpu for which scheduler parameters are to be modified or retrieved.
1196
1197=item B<-p PERIOD>, B<--period=PERIOD>
1198
1199Period of time, in microseconds, over which to replenish the budget.
1200
1201=item B<-b BUDGET>, B<--budget=BUDGET>
1202
1203Amount of time, in microseconds, that the VCPU will be allowed
1204to run every period.
1205
1206=item B<-e Extratime>, B<--extratime=Extratime>
1207
1208Binary flag to decide if the VCPU will be allowed to get extra time from
1209the unreserved system resource.
1210
1211=item B<-c CPUPOOL>, B<--cpupool=CPUPOOL>
1212
1213Restrict output to domains in the specified cpupool.
1214
1215=back
1216
1217B<EXAMPLE>
1218
1219=over 4
1220
12211) Use B<-v all> to see the budget and period of all the VCPUs of
1222all the domains:
1223
1224    xl sched-rtds -v all
1225    Cpupool Pool-0: sched=RTDS
1226    Name                        ID VCPU    Period    Budget  Extratime
1227    Domain-0                     0    0     10000      4000        yes
1228    vm1                          2    0       300       150        yes
1229    vm1                          2    1       400       200        yes
1230    vm1                          2    2     10000      4000        yes
1231    vm1                          2    3      1000       500        yes
1232    vm2                          4    0     10000      4000        yes
1233    vm2                          4    1     10000      4000        yes
1234
1235Without any arguments, it will output the default scheduling
1236parameters for each domain:
1237
1238    xl sched-rtds
1239    Cpupool Pool-0: sched=RTDS
1240    Name                        ID    Period    Budget  Extratime
1241    Domain-0                     0     10000      4000        yes
1242    vm1                          2     10000      4000        yes
1243    vm2                          4     10000      4000        yes
1244
1245
12462) Use, for instance, B<-d vm1, -v all> to see the budget and
1247period of all VCPUs of a specific domain (B<vm1>):
1248
1249    xl sched-rtds -d vm1 -v all
1250    Name                        ID VCPU    Period    Budget  Extratime
1251    vm1                          2    0       300       150        yes
1252    vm1                          2    1       400       200        yes
1253    vm1                          2    2     10000      4000        yes
1254    vm1                          2    3      1000       500        yes
1255
1256To see the parameters of a subset of the VCPUs of a domain, use:
1257
1258    xl sched-rtds -d vm1 -v 0 -v 3
1259    Name                        ID VCPU    Period    Budget  Extratime
1260    vm1                          2    0       300       150        yes
1261    vm1                          2    3      1000       500        yes
1262
1263If no B<-v> is specified, the default scheduling parameters for the
1264domain are shown:
1265
1266    xl sched-rtds -d vm1
1267    Name                        ID    Period    Budget  Extratime
1268    vm1                          2     10000      4000        yes
1269
1270
12713) Users can set the budget and period of multiple VCPUs of a
1272specific domain with only one command,
1273e.g., "xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -e 1 -v 3 -p 300 -b 150 -e 0".
1274
1275To change the parameters of all the VCPUs of a domain, use B<-v all>,
1276e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250 -e 1".
1277
1278=back
1279
1280=back
1281
1282=head1 CPUPOOLS COMMANDS
1283
1284Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is
1285assigned at most to one cpu-pool. Domains are each restricted to a single
1286cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has
1287its own scheduler.
1288Physical cpus and domains can be moved from one cpu-pool to another only by an
1289explicit command.
1290Cpu-pools can be specified either by name or by id.
1291
1292=over 4
1293
1294=item B<cpupool-create> [I<OPTIONS>] [I<configfile>] [I<variable=value> ...]
1295
1296Create a cpu pool based an config from a I<configfile> or command-line
1297parameters.  Variable settings from the I<configfile> may be altered
1298by specifying new or additional assignments on the command line.
1299
1300See the L<xlcpupool.cfg(5)> manpage for more information.
1301
1302B<OPTIONS>
1303
1304=over 4
1305
1306=item B<-f=FILE>, B<--defconfig=FILE>
1307
1308Use the given configuration file.
1309
1310=back
1311
1312=item B<cpupool-list> [I<OPTIONS>] [I<cpu-pool>]
1313
1314List CPU pools on the host.
1315
1316B<OPTIONS>
1317
1318=over 4
1319
1320=item B<-c>, B<--cpus>
1321
1322If this option is specified, B<xl> prints a list of CPUs used by I<cpu-pool>.
1323
1324=back
1325
1326=item B<cpupool-destroy> I<cpu-pool>
1327
1328Deactivates a cpu pool.
1329This is possible only if no domain is active in the cpu-pool.
1330
1331=item B<cpupool-rename> I<cpu-pool> <newname>
1332
1333Renames a cpu-pool to I<newname>.
1334
1335=item B<cpupool-cpu-add> I<cpu-pool> I<cpus|node:nodes>
1336
1337Adds one or more CPUs or NUMA nodes to I<cpu-pool>. CPUs and NUMA
1338nodes can be specified as single CPU/node IDs or as ranges.
1339
1340For example:
1341
1342 (a) xl cpupool-cpu-add mypool 4
1343 (b) xl cpupool-cpu-add mypool 1,5,10-16,^13
1344 (c) xl cpupool-cpu-add mypool node:0,nodes:2-3,^10-12,8
1345
1346means adding CPU 4 to mypool, in (a); adding CPUs 1,5,10,11,12,14,15
1347and 16, in (b); and adding all the CPUs of NUMA nodes 0, 2 and 3,
1348plus CPU 8, but keeping out CPUs 10,11,12, in (c).
1349
1350All the specified CPUs that can be added to the cpupool will be added
1351to it. If some CPU can't (e.g., because they're already part of another
1352cpupool), an error is reported about each one of them.
1353
1354=item B<cpupool-cpu-remove> I<cpus|node:nodes>
1355
1356Removes one or more CPUs or NUMA nodes from I<cpu-pool>. CPUs and NUMA
1357nodes can be specified as single CPU/node IDs or as ranges, using the
1358exact same syntax as in B<cpupool-cpu-add> above.
1359
1360=item B<cpupool-migrate> I<domain-id> I<cpu-pool>
1361
1362Moves a domain specified by domain-id or domain-name into a cpu-pool.
1363Domain-0 can't be moved to another cpu-pool.
1364
1365=item B<cpupool-numa-split>
1366
1367Splits up the machine into one cpu-pool per numa node.
1368
1369=back
1370
1371=head1 VIRTUAL DEVICE COMMANDS
1372
1373Most virtual devices can be added and removed while guests are
1374running, assuming that the necessary support exists in the guest OS.  The
1375effect to the guest OS is much the same as any hotplug event.
1376
1377=head2 BLOCK DEVICES
1378
1379=over 4
1380
1381=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ...
1382
1383Create a new virtual block device and attach it to the specified domain.
1384A disc specification is in the same format used for the B<disk> variable in
1385the domain config file. See L<xl-disk-configuration(5)>. This will trigger a
1386hotplug event for the guest.
1387
1388Note that only PV block devices are supported by block-attach.
1389Requests to attach emulated devices (eg, vdev=hdc) will result in only
1390the PV view being available to the guest.
1391
1392=item B<block-detach> I<domain-id> I<devid> [I<OPTIONS>]
1393
1394Detach a domain's virtual block device. I<devid> may be the symbolic
1395name or the numeric device id given to the device by domain 0.  You
1396will need to run B<xl block-list> to determine that number.
1397
1398Detaching the device requires the cooperation of the domain.  If the
1399domain fails to release the device (perhaps because the domain is hung
1400or is still using the device), the detach will fail.
1401
1402B<OPTIONS>
1403
1404=over 4
1405
1406=item B<--force>
1407
1408If this parameter is specified the device will be forcefully detached, which
1409may cause IO errors in the domain.
1410
1411=back
1412
1413
1414
1415=item B<block-list> I<domain-id>
1416
1417List virtual block devices for a domain.
1418
1419=item B<cd-insert> I<domain-id> I<virtualdevice> I<target>
1420
1421Insert a cdrom into a guest domain's existing virtual cd drive. The
1422virtual drive must already exist but can be empty. How the device should be
1423presented to the guest domain is specified by the I<virtualdevice> parameter;
1424for example "hdc". Parameter I<target> is the target path in the backend domain
1425(usually domain 0) to be exported; can be a block device or a file etc.
1426See B<target> in L<xl-disk-configuration(5)>.
1427
1428Only works with HVM domains.
1429
1430
1431=item B<cd-eject> I<domain-id> I<virtualdevice>
1432
1433Eject a cdrom from a guest domain's virtual cd drive, specified by
1434I<virtualdevice>. Only works with HVM domains.
1435
1436=back
1437
1438=head2 NETWORK DEVICES
1439
1440=over 4
1441
1442=item B<network-attach> I<domain-id> I<network-device>
1443
1444Creates a new network device in the domain specified by I<domain-id>.
1445I<network-device> describes the device to attach, using the same format as the
1446B<vif> string in the domain config file. See L<xl.cfg(5)> and
1447L<xl-network-configuration(5)>
1448for more information.
1449
1450Note that only attaching PV network interfaces is supported.
1451
1452=item B<network-detach> I<domain-id> I<devid|mac>
1453
1454Removes the network device from the domain specified by I<domain-id>.
1455I<devid> is the virtual interface device number within the domain
1456(i.e. the 3 in vif22.3). Alternatively, the I<mac> address can be used to
1457select the virtual interface to detach.
1458
1459=item B<network-list> I<domain-id>
1460
1461List virtual network interfaces for a domain.
1462
1463=back
1464
1465=head2 CHANNEL DEVICES
1466
1467=over 4
1468
1469=item B<channel-list> I<domain-id>
1470
1471List virtual channel interfaces for a domain.
1472
1473=back
1474
1475=head2 VIRTUAL TRUSTED PLATFORM MODULE (vTPM) DEVICES
1476
1477=over 4
1478
1479=item B<vtpm-attach> I<domain-id> I<vtpm-device>
1480
1481Creates a new vtpm (virtual Trusted Platform Module) device in the domain
1482specified by I<domain-id>. I<vtpm-device> describes the device to attach,
1483using the same format as the B<vtpm> string in the domain config file.
1484See L<xl.cfg(5)> for more information.
1485
1486=item B<vtpm-detach> I<domain-id> I<devid|uuid>
1487
1488Removes the vtpm device from the domain specified by I<domain-id>.
1489I<devid> is the numeric device id given to the virtual Trusted
1490Platform Module device. You will need to run B<xl vtpm-list> to determine that
1491number. Alternatively, the I<uuid> of the vtpm can be used to
1492select the virtual device to detach.
1493
1494=item B<vtpm-list> I<domain-id>
1495
1496List virtual Trusted Platform Modules for a domain.
1497
1498=back
1499
1500=head2 VDISPL DEVICES
1501
1502=over 4
1503
1504=item B<vdispl-attach> I<domain-id> I<vdispl-device>
1505
1506Creates a new vdispl device in the domain specified by I<domain-id>.
1507I<vdispl-device> describes the device to attach, using the same format as the
1508B<vdispl> string in the domain config file. See L<xl.cfg(5)> for
1509more information.
1510
1511B<NOTES>
1512
1513=over 4
1514
1515As in I<vdispl-device> string semicolon is used then put quotes or escaping
1516when using from the shell.
1517
1518B<EXAMPLE>
1519
1520=over 4
1521
1522xl vdispl-attach DomU connectors='id0:1920x1080;id1:800x600;id2:640x480'
1523
1524or
1525
1526xl vdispl-attach DomU connectors=id0:1920x1080\;id1:800x600\;id2:640x480
1527
1528=back
1529
1530=back
1531
1532=item B<vdispl-detach> I<domain-id> I<dev-id>
1533
1534Removes the vdispl device specified by I<dev-id> from the domain specified by I<domain-id>.
1535
1536=item B<vdispl-list> I<domain-id>
1537
1538List virtual displays for a domain.
1539
1540=back
1541
1542=head2 VSND DEVICES
1543
1544=over 4
1545
1546=item B<vsnd-attach> I<domain-id> I<vsnd-item> I<vsnd-item> ...
1547
1548Creates a new vsnd device in the domain specified by I<domain-id>.
1549I<vsnd-item>'s describe the vsnd device to attach, using the same format as the
1550B<VSND_ITEM_SPEC> string in the domain config file. See L<xl.cfg(5)> for
1551more information.
1552
1553B<EXAMPLE>
1554
1555=over 4
1556
1557xl vsnd-attach DomU 'CARD, short-name=Main, sample-formats=s16_le;s8;u32_be'
1558'PCM, name=Main' 'STREAM, id=0, type=p' 'STREAM, id=1, type=c, channels-max=2'
1559
1560=back
1561
1562=item B<vsnd-detach> I<domain-id> I<dev-id>
1563
1564Removes the vsnd device specified by I<dev-id> from the domain specified by I<domain-id>.
1565
1566=item B<vsnd-list> I<domain-id>
1567
1568List vsnd devices for a domain.
1569
1570=back
1571
1572=head2 KEYBOARD DEVICES
1573
1574=over 4
1575
1576=item B<vkb-attach> I<domain-id> I<vkb-device>
1577
1578Creates a new keyboard device in the domain specified by I<domain-id>.
1579I<vkb-device> describes the device to attach, using the same format as the
1580B<VKB_SPEC_STRING> string in the domain config file. See L<xl.cfg(5)>
1581for more informations.
1582
1583=item B<vkb-detach> I<domain-id> I<devid>
1584
1585Removes the keyboard device from the domain specified by I<domain-id>.
1586I<devid> is the virtual interface device number within the domain
1587
1588=item B<vkb-list> I<domain-id>
1589
1590List virtual network interfaces for a domain.
1591
1592=back
1593
1594=head1 PCI PASS-THROUGH
1595
1596=over 4
1597
1598=item B<pci-assignable-list>
1599
1600List all the assignable PCI devices.
1601These are devices in the system which are configured to be
1602available for passthrough and are bound to a suitable PCI
1603backend driver in domain 0 rather than a real driver.
1604
1605=item B<pci-assignable-add> I<BDF>
1606
1607Make the device at PCI Bus/Device/Function BDF assignable to guests.
1608This will bind the device to the pciback driver and assign it to the
1609"quarantine domain".  If it is already bound to a driver, it will
1610first be unbound, and the original driver stored so that it can be
1611re-bound to the same driver later if desired.  If the device is
1612already bound, it will assign it to the quarantine domain and return
1613success.
1614
1615CAUTION: This will make the device unusable by Domain 0 until it is
1616returned with pci-assignable-remove.  Care should therefore be taken
1617not to do this on a device critical to domain 0's operation, such as
1618storage controllers, network interfaces, or GPUs that are currently
1619being used.
1620
1621=item B<pci-assignable-remove> [I<-r>] I<BDF>
1622
1623Make the device at PCI Bus/Device/Function BDF not assignable to
1624guests.  This will at least unbind the device from pciback, and
1625re-assign it from the "quarantine domain" back to domain 0.  If the -r
1626option is specified, it will also attempt to re-bind the device to its
1627original driver, making it usable by Domain 0 again.  If the device is
1628not bound to pciback, it will return success.
1629
1630Note that this functionality will work even for devices which were not
1631made assignable by B<pci-assignable-add>.  This can be used to allow
1632dom0 to access devices which were automatically quarantined by Xen
1633after domain destruction as a result of Xen's B<iommu=quarantine>
1634command-line default.
1635
1636As always, this should only be done if you trust the guest, or are
1637confident that the particular device you're re-assigning to dom0 will
1638cancel all in-flight DMA on FLR.
1639
1640=item B<pci-attach> I<domain-id> I<BDF>
1641
1642Hot-plug a new pass-through pci device to the specified domain.
1643B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
1644
1645=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
1646
1647Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
1648Bus/Device/Function of the physical device to be removed from the guest domain.
1649
1650B<OPTIONS>
1651
1652=over 4
1653
1654=item B<-f>
1655
1656If this parameter is specified, B<xl> is going to forcefully remove the device
1657even without guest domain's collaboration.
1658
1659=back
1660
1661=item B<pci-list> I<domain-id>
1662
1663List pass-through pci devices for a domain.
1664
1665=back
1666
1667=head1 USB PASS-THROUGH
1668
1669=over 4
1670
1671=item B<usbctrl-attach> I<domain-id> I<usbctrl-device>
1672
1673Create a new USB controller in the domain specified by I<domain-id>,
1674I<usbctrl-device> describes the device to attach, using form
1675C<KEY=VALUE KEY=VALUE ...> where B<KEY=VALUE> has the same
1676meaning as the B<usbctrl> description in the domain config file.
1677See L<xl.cfg(5)> for more information.
1678
1679=item B<usbctrl-detach> I<domain-id> I<devid>
1680
1681Destroy a USB controller from the specified domain.
1682B<devid> is devid of the USB controller.
1683
1684=item B<usbdev-attach> I<domain-id> I<usbdev-device>
1685
1686Hot-plug a new pass-through USB device to the domain specified by
1687I<domain-id>, I<usbdev-device> describes the device to attach, using
1688form C<KEY=VALUE KEY=VALUE ...> where B<KEY=VALUE> has the same
1689meaning as the B<usbdev> description in the domain config file.
1690See L<xl.cfg(5)> for more information.
1691
1692=item B<usbdev-detach> I<domain-id> I<controller=devid> I<port=number>
1693
1694Hot-unplug a previously assigned USB device from a domain.
1695B<controller=devid> and B<port=number> is USB controller:port in the guest
1696domain the USB device is attached to.
1697
1698=item B<usb-list> I<domain-id>
1699
1700List pass-through usb devices for a domain.
1701
1702=back
1703
1704=head1 DEVICE-MODEL CONTROL
1705
1706=over 4
1707
1708=item B<qemu-monitor-command> I<domain-id> I<command>
1709
1710Issue a monitor command to the device model of the domain specified by
1711I<domain-id>. I<command> can be any valid command qemu understands. This
1712can be e.g. used to add non-standard devices or devices with non-standard
1713parameters to a domain. The output of the command is printed to stdout.
1714
1715B<Warning:> This qemu monitor access is provided for convenience when
1716debugging, troubleshooting, and experimenting.  Its use is not
1717supported by the Xen Project.
1718
1719Specifically, not all information displayed by the qemu monitor will
1720necessarily be accurate or complete, because in a Xen system qemu
1721does not have a complete view of the guest.
1722
1723Furthermore, modifying the guest's setup via the qemu monitor may
1724conflict with the Xen toolstack's assumptions.  Resulting problems
1725may include, but are not limited to: guest crashes; toolstack error
1726messages; inability to migrate the guest; and security
1727vulnerabilities which are not covered by the Xen Project security
1728response policy.
1729
1730B<EXAMPLE>
1731
1732Obtain information of USB devices connected as such via the device model
1733(only!) to a domain:
1734
1735 xl qemu-monitor-command vm1 'info usb'
1736  Device 0.2, Port 5, Speed 480 Mb/s, Product Mass Storage
1737
1738=back
1739
1740=head1 FLASK
1741
1742B<FLASK> is a security framework that defines a mandatory access control policy
1743providing fine-grained controls over Xen domains, allowing the policy writer
1744to define what interactions between domains, devices, and the hypervisor are
1745permitted. Some example of what you can do using XSM/FLASK:
1746 - Prevent two domains from communicating via event channels or grants
1747 - Control which domains can use device passthrough (and which devices)
1748 - Restrict or audit operations performed by privileged domains
1749 - Prevent a privileged domain from arbitrarily mapping pages from other
1750   domains.
1751
1752You can find more details on how to use FLASK and an example security
1753policy here: L<https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1754
1755=over 4
1756
1757=item B<getenforce>
1758
1759Determine if the FLASK security module is loaded and enforcing its policy.
1760
1761=item B<setenforce> I<1|0|Enforcing|Permissive>
1762
1763Enable or disable enforcing of the FLASK access controls. The default is
1764permissive, but this can be changed to enforcing by specifying "flask=enforcing"
1765or "flask=late" on the hypervisor's command line.
1766
1767=item B<loadpolicy> I<policy-file>
1768
1769Load FLASK policy from the given policy file. The initial policy is provided to
1770the hypervisor as a multiboot module; this command allows runtime updates to the
1771policy. Loading new security policy will reset runtime changes to device labels.
1772
1773=back
1774
1775=head1 PLATFORM SHARED RESOURCE MONITORING/CONTROL
1776
1777Intel Haswell and later server platforms offer shared resource monitoring
1778and control technologies. The availability of these technologies and the
1779hardware capabilities can be shown with B<psr-hwinfo>.
1780
1781See L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html> for more
1782information.
1783
1784=over 4
1785
1786=item B<psr-hwinfo> [I<OPTIONS>]
1787
1788Show Platform Shared Resource (PSR) hardware information.
1789
1790B<OPTIONS>
1791
1792=over 4
1793
1794=item B<-m>, B<--cmt>
1795
1796Show Cache Monitoring Technology (CMT) hardware information.
1797
1798=item B<-a>, B<--cat>
1799
1800Show Cache Allocation Technology (CAT) hardware information.
1801
1802=back
1803
1804=back
1805
1806=head2 CACHE MONITORING TECHNOLOGY
1807
1808Intel Haswell and later server platforms offer monitoring capability in each
1809logical processor to measure specific platform shared resource metric, for
1810example, L3 cache occupancy. In the Xen implementation, the monitoring
1811granularity is domain level. To monitor a specific domain, just attach the
1812domain id with the monitoring service. When the domain doesn't need to be
1813monitored any more, detach the domain id from the monitoring service.
1814
1815Intel Broadwell and later server platforms also offer total/local memory
1816bandwidth monitoring. Xen supports per-domain monitoring for these two
1817additional monitoring types. Both memory bandwidth monitoring and L3 cache
1818occupancy monitoring share the same set of underlying monitoring service. Once
1819a domain is attached to the monitoring service, monitoring data can be shown
1820for any of these monitoring types.
1821
1822There is no cache monitoring and memory bandwidth monitoring on L2 cache so
1823far.
1824
1825=over 4
1826
1827=item B<psr-cmt-attach> I<domain-id>
1828
1829attach: Attach the platform shared resource monitoring service to a domain.
1830
1831=item B<psr-cmt-detach> I<domain-id>
1832
1833detach: Detach the platform shared resource monitoring service from a domain.
1834
1835=item B<psr-cmt-show> I<psr-monitor-type> [I<domain-id>]
1836
1837Show monitoring data for a certain domain or all domains. Current supported
1838monitor types are:
1839 - "cache-occupancy": showing the L3 cache occupancy(KB).
1840 - "total-mem-bandwidth": showing the total memory bandwidth(KB/s).
1841 - "local-mem-bandwidth": showing the local memory bandwidth(KB/s).
1842
1843=back
1844
1845=head2 CACHE ALLOCATION TECHNOLOGY
1846
1847Intel Broadwell and later server platforms offer capabilities to configure and
1848make use of the Cache Allocation Technology (CAT) mechanisms, which enable more
1849cache resources (i.e. L3/L2 cache) to be made available for high priority
1850applications. In the Xen implementation, CAT is used to control cache allocation
1851on VM basis. To enforce cache on a specific domain, just set capacity bitmasks
1852(CBM) for the domain.
1853
1854Intel Broadwell and later server platforms also offer Code/Data Prioritization
1855(CDP) for cache allocations, which support specifying code or data cache for
1856applications. CDP is used on a per VM basis in the Xen implementation. To
1857specify code or data CBM for the domain, CDP feature must be enabled and CBM
1858type options need to be specified when setting CBM, and the type options (code
1859and data) are mutually exclusive. There is no CDP support on L2 so far.
1860
1861=over 4
1862
1863=item B<psr-cat-set> [I<OPTIONS>] I<domain-id> I<cbm>
1864
1865Set cache capacity bitmasks(CBM) for a domain. For how to specify I<cbm>
1866please refer to L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1867
1868B<OPTIONS>
1869
1870=over 4
1871
1872=item B<-s SOCKET>, B<--socket=SOCKET>
1873
1874Specify the socket to process, otherwise all sockets are processed.
1875
1876=item B<-l LEVEL>, B<--level=LEVEL>
1877
1878Specify the cache level to process, otherwise the last level cache (L3) is
1879processed.
1880
1881=item B<-c>, B<--code>
1882
1883Set code CBM when CDP is enabled.
1884
1885=item B<-d>, B<--data>
1886
1887Set data CBM when CDP is enabled.
1888
1889=back
1890
1891=item B<psr-cat-show> [I<OPTIONS>] [I<domain-id>]
1892
1893Show CAT settings for a certain domain or all domains.
1894
1895B<OPTIONS>
1896
1897=over 4
1898
1899=item B<-l LEVEL>, B<--level=LEVEL>
1900
1901Specify the cache level to process, otherwise the last level cache (L3) is
1902processed.
1903
1904=back
1905
1906=back
1907
1908=head2 Memory Bandwidth Allocation
1909
1910Intel Skylake and later server platforms offer capabilities to configure and
1911make use of the Memory Bandwidth Allocation (MBA) mechanisms, which provides
1912OS/VMMs the ability to slow misbehaving apps/VMs by using a credit-based
1913throttling mechanism. In the Xen implementation, MBA is used to control memory
1914bandwidth on VM basis. To enforce bandwidth on a specific domain, just set
1915throttling value (THRTL) for the domain.
1916
1917=over 4
1918
1919=item B<psr-mba-set> [I<OPTIONS>] I<domain-id> I<thrtl>
1920
1921Set throttling value (THRTL) for a domain. For how to specify I<thrtl>
1922please refer to L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>.
1923
1924B<OPTIONS>
1925
1926=over 4
1927
1928=item B<-s SOCKET>, B<--socket=SOCKET>
1929
1930Specify the socket to process, otherwise all sockets are processed.
1931
1932=back
1933
1934=item B<psr-mba-show> [I<domain-id>]
1935
1936Show MBA settings for a certain domain or all domains. For linear mode, it
1937shows the decimal value. For non-linear mode, it shows hexadecimal value.
1938
1939=back
1940
1941=head1 IGNORED FOR COMPATIBILITY WITH XM
1942
1943xl is mostly command-line compatible with the old xm utility used with
1944the old Python xend.  For compatibility, the following options are
1945ignored:
1946
1947=over 4
1948
1949=item B<xl migrate --live>
1950
1951=back
1952
1953=head1 SEE ALSO
1954
1955The following man pages:
1956
1957L<xl.cfg(5)>, L<xlcpupool.cfg(5)>, L<xentop(1)>, L<xl-disk-configuration(5)>
1958L<xl-network-configuration(5)>
1959
1960And the following documents on the xenproject.org website:
1961
1962L<https://xenbits.xenproject.org/docs/unstable/misc/xsm-flask.txt>
1963L<https://xenbits.xenproject.org/docs/unstable/misc/xl-psr.html>
1964
1965For systems that don't automatically bring the CPU online:
1966
1967L<https://wiki.xenproject.org/wiki/Paravirt_Linux_CPU_Hotplug>
1968
1969=head1 BUGS
1970
1971Send bugs to xen-devel@lists.xenproject.org, see
1972https://wiki.xenproject.org/wiki/Reporting_Bugs_against_Xen_Project on how to send bug reports.
1973