From nobody Mon May 6 03:32:01 2024
  vcpus
    The id attribute specifies the vCPU id as used by libvirt
-   in other places such as vcpu pinning, scheduler information and NUMA
-   assignment. Note that the vcpu ID as seen in the guest may differ from
-   libvirt ID in certain cases. Valid IDs are from 0 to the maximum vcpu
+   in other places such as vCPU pinning, scheduler information and NUMA
+   assignment. Note that the vCPU ID as seen in the guest may differ from
+   libvirt ID in certain cases. Valid IDs are from 0 to the maximum vCPU
    count as set by the vcpu element minus 1.

    The enabled attribute allows to control the state of the
-   vcpu. Valid values are yes and no.
+   vCPU. Valid values are yes and no.

-   hotpluggable controls whether given vcpu can be hotplugged
-   and hotunplugged in cases when the cpu is enabled at boot. Note that
-   all disabled vcpus must be hotpluggable. Valid values are
+   hotpluggable controls whether given vCPU can be hotplugged
+   and hotunplugged in cases when the CPU is enabled at boot. Note that
+   all disabled vCPUs must be hotpluggable. Valid values are
    yes and no.

-   order allows to specify the order to add the online vcpus.
-   For hypervisors/platforms that require to insert multiple vcpus at once
-   the order may be duplicated across all vcpus that need to be
-   enabled at once. Specifying order is not necessary, vcpus are then
+   order allows to specify the order to add the online vCPUs.
+   For hypervisors/platforms that require to insert multiple vCPUs at once
+   the order may be duplicated across all vCPUs that need to be
+   enabled at once. Specifying order is not necessary, vCPUs are then
    added in an arbitrary order. If order info is used, it must be used for
-   all online vcpus. Hypervisors may clear or update ordering information
+   all online vCPUs. Hypervisors may clear or update ordering information
    during certain operations to assure valid configuration.

-   Note that hypervisors may create hotpluggable vcpus differently from
-   boot vcpus thus special initialization may be necessary.
+   Note that hypervisors may create hotpluggable vCPUs differently from
+   boot vCPUs thus special initialization may be necessary.

-   Hypervisors may require that vcpus enabled on boot which are not
+   Hypervisors may require that vCPUs enabled on boot which are not
    hotpluggable are clustered at the beginning starting with ID 0. It may
-   be also required that vcpu 0 is always present and non-hotpluggable.
+   be also required that vCPU 0 is always present and non-hotpluggable.

-   Note that providing state for individual cpus may be necessary to enable
+   Note that providing state for individual CPUs may be necessary to enable
    support of addressable vCPU hotplug and this feature may not be
    supported by all hypervisors.

-   For QEMU the following conditions are required. Vcpu 0 needs to be
-   enabled and non-hotpluggable. On PPC64 along with it vcpus that are in
-   the same core need to be enabled as well. All non-hotpluggable cpus
-   present at boot need to be grouped after vcpu 0.
+   For QEMU the following conditions are required. vCPU 0 needs to be
+   enabled and non-hotpluggable. On PPC64 along with it vCPUs that are in
+   the same core need to be enabled as well. All non-hotpluggable CPUs
+   present at boot need to be grouped after vCPU 0.
    Since 2.2.0 (QEMU only)
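The constraints above can be illustrated with a sketch of the domain XML (vCPU counts and order values purely illustrative): vCPU 0 is enabled and non-hotpluggable, the remaining vCPUs are hotpluggable, and the boot-time vCPUs are grouped first.

```xml
<domain type='qemu'>
  <!-- 3 vCPUs maximum, 2 enabled at boot -->
  <vcpu current='2'>3</vcpu>
  <vcpus>
    <!-- vCPU 0: always present, not hotpluggable (QEMU requirement) -->
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <!-- disabled vCPUs must be hotpluggable -->
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
  </vcpus>
</domain>
```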
  vcpupin
    The vcpupin element specifies which of host's
-   physical CPUs the domain VCPU will be pinned to. If this is omitted,
+   physical CPUs the domain vCPU will be pinned to. If this is omitted,
    and attribute cpuset of element vcpu is
    not specified, the vCPU is pinned to all the physical CPUs by default.
    It contains two required attributes, the attribute vcpu
-   specifies vcpu id, and the attribute cpuset is same as
+   specifies vCPU id, and the attribute cpuset is same as
    attribute cpuset of element vcpu.
    (NB: Only qemu driver support)
    Since 0.9.0
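A minimal sketch of the element described above, with illustrative host CPU numbers:

```xml
<cputune>
  <!-- pin vCPU 0 to host CPUs 1,3,4 (2 excluded), vCPU 1 to CPUs 0-1 -->
  <vcpupin vcpu='0' cpuset='1-4,^2'/>
  <vcpupin vcpu='1' cpuset='0,1'/>
</cputune>
```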
@@ -786,7 +786,7 @@
  emulatorpin
    The emulatorpin element specifies which of host
-   physical CPUs the "emulator", a subset of a domain not including vcpu
+   physical CPUs the "emulator", a subset of a domain not including vCPU
    or iothreads will be pinned to. If this is omitted, and attribute
    cpuset of element vcpu is not specified,
    "emulator" is pinned to all the physical CPUs by default. It contains
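For instance (host CPU numbers illustrative), the emulator threads can be confined separately from the vCPUs:

```xml
<cputune>
  <!-- keep the emulator off host CPU 0, which the vCPUs use -->
  <emulatorpin cpuset='1-3'/>
</cputune>
```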
@@ -820,7 +820,7 @@
  period
    The period element specifies the enforcement
-   interval(unit: microseconds). Within period, each vcpu of
+   interval(unit: microseconds). Within period, each vCPU of
    the domain will not be allowed to consume more than quota
    worth of runtime. The value should be in range [1000, 1000000]. A period
    with value 0 means no value.
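As an illustration of the period/quota pair (values chosen within the documented ranges), capping each vCPU at half a physical CPU:

```xml
<cputune>
  <!-- 1 second enforcement interval -->
  <period>1000000</period>
  <!-- each vCPU may consume at most 0.5 s of CPU time per period -->
  <quota>500000</quota>
</cputune>
```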
@@ -835,7 +835,7 @@
    vCPU threads, which means that it is not bandwidth controlled. The value
    should be in range [1000, 18446744073709551] or less than 0. A quota
    with value 0 means no value. You can use this feature to ensure that all
-   vcpus run at the same speed.
+   vCPUs run at the same speed.
    Only QEMU driver support since 0.9.4, LXC since 0.9.10
    The emulator_period element specifies the enforcement
    interval(unit: microseconds). Within emulator_period, emulator
-   threads(those excluding vcpus) of the domain will not be allowed to consume
+   threads(those excluding vCPUs) of the domain will not be allowed to consume
    more than emulator_quota worth of runtime. The value should be
    in range [1000, 1000000]. A period with value 0 means no value.
    Only QEMU driver support since 0.10.0
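Analogously, a sketch capping the emulator threads (values illustrative):

```xml
<cputune>
  <emulator_period>1000000</emulator_period>
  <!-- emulator threads limited to a quarter of a CPU per period -->
  <emulator_quota>250000</emulator_quota>
</cputune>
```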
@@ -873,9 +873,9 @@
    The emulator_quota element specifies the maximum
    allowed bandwidth(unit: microseconds) for domain's emulator threads(those
-   excluding vcpus). A domain with emulator_quota as any negative
+   excluding vCPUs). A domain with emulator_quota as any negative
    value indicates that the domain has infinite bandwidth for emulator threads
-   (those excluding vcpus), which means that it is not bandwidth controlled.
+   (those excluding vCPUs), which means that it is not bandwidth controlled.
    The value should be in range [1000, 18446744073709551] or less than 0. A
    quota with value 0 means no value.
    Only QEMU driver support since 0.10.0
@@ -2131,13 +2131,13 @@
    QEMU, the user-configurable extended TSEG feature was unavailable up
    to and including pc-q35-2.9. Starting with
    pc-q35-2.10 the feature is available, with default size
-   16 MiB. That should suffice for up to roughly 272 VCPUs, 5 GiB guest
+   16 MiB. That should suffice for up to roughly 272 vCPUs, 5 GiB guest
    RAM in total, no hotplug memory range, and 32 GiB of 64-bit PCI MMIO
-   aperture. Or for 48 VCPUs, with 1TB of guest RAM, no hotplug DIMM
+   aperture. Or for 48 vCPUs, with 1TB of guest RAM, no hotplug DIMM
    range, and 32GB of 64-bit PCI MMIO aperture. The values may also vary
    based on the loader the VM is using.
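The extended TSEG size discussed here is set through the smm feature element; a sketch (size illustrative):

```xml
<features>
  <smm state='on'>
    <!-- enlarge the extended TSEG beyond the 16 MiB default -->
    <tseg unit='MiB'>48</tseg>
  </smm>
</features>
```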
-   Additional size might be needed for significantly higher VCPU counts
+   Additional size might be needed for significantly higher vCPU counts
    or increased address space (that can be memory, maxMemory, 64-bit PCI
    MMIO aperture size; roughly 8 MiB of TSEG per 1 TiB of address space)
    which can also be rounded up.
@@ -2147,7 +2147,7 @@
    documentation of the guest OS or loader (if there is any), or test
    this by trial-and-error changing the value until the VM boots
    successfully. Yet another guiding value for users might be the fact
-   that 48 MiB should be enough for pretty large guests (240 VCPUs and
+   that 48 MiB should be enough for pretty large guests (240 vCPUs and
    4TB guest RAM), but it is on purpose not set as default as 48 MiB of
    unavailable RAM might be too much for small guests (e.g. with 512 MiB
    of RAM).
@@ -2425,7 +2425,7 @@
    cpu_cycles                 perf.cpu_cycles
    stalled_cycles_frontend    perf.stalled_cycles_frontend
    stalled_cycles_backend     perf.stalled_cycles_backend
    ref_cpu_cycles             perf.ref_cpu_cycles
    cpu_clock                  perf.cpu_clock
    cpu_migrations             perf.cpu_migrations
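The perf events listed above are enabled per-domain through the perf element; a sketch:

```xml
<perf>
  <event name='cpu_cycles' enabled='yes'/>
  <event name='cpu_clock' enabled='yes'/>
  <event name='cpu_migrations' enabled='no'/>
</perf>
```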