-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2024-31145 / XSA-460
version 2
error handling in x86 IOMMU identity mapping
UPDATES IN VERSION 2
====================
Wording updated. Public release.
ISSUE DESCRIPTION
=================
Certain PCI devices in a system might be assigned Reserved Memory
Regions (specified via Reserved Memory Region Reporting, "RMRR") for
Intel VT-d or Unity Mapping ranges for AMD-Vi. These are typically used
for platform tasks such as legacy USB emulation.
Since the precise purpose of these regions is unknown, once a device
associated with such a region is active, the mappings of these regions
need to remain continuously accessible by the device. In the logic
establishing these mappings, error handling was flawed, potentially
resulting in such mappings remaining in place when they should have
been removed again. The affected guests would then gain access to
memory regions which they aren't supposed to have access to.
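For illustration, the flawed pre-patch flow in iommu_identity_mapping()
looked roughly as follows (a condensed sketch based on the lines removed
by the patch attached below; declarations and the rest of the function
are elided, so this is not the verbatim Xen source):

    /* Condensed sketch of the pre-patch logic, not the verbatim source. */
    while ( base_pfn < end_pfn )
    {
        int err = set_identity_p2m_entry(d, base_pfn, p2ma, flag);

        if ( err )
            return err;    /* pages mapped so far remain mapped, untracked */
        base_pfn++;
    }

    map = xmalloc(struct identity_map);
    if ( !map )
        return -ENOMEM;    /* the whole region remains mapped, untracked */

    /* Only now is the region recorded for later unmapping. */
    map->base = base;
    map->end = end;
    map->access = p2ma;
    map->count = 1;
    list_add_tail(&map->list, &hd->arch.identity_maps);

The patch reorders this so that the tracking entry is inserted into
hd->arch.identity_maps before any mapping is attempted, allowing a
later unmapping request to find and tear down whatever was established.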
IMPACT
======
The precise impact is system specific. Denial of Service (DoS)
affecting the entire host or individual guests, privilege escalation,
and information leaks cannot be ruled out.
VULNERABLE SYSTEMS
==================
Only x86 systems passing PCI devices with RMRR/Unity regions through to
guests are potentially affected.
For PCI devices listed in a vm.cfg file, the error handling causes `xl
create` to abort and tear down the domain; this path is therefore
believed to be safe.
If attaching a PCI device using `xl pci-attach` fails, the command
returns nonzero, but the domain is not torn down. VMs which continue
to run after `xl pci-attach` has failed expose the vulnerability.
For x86 Intel hardware, Xen versions 4.0 and later are affected.
For all x86 hardware, Xen versions having the XSA-378 fixes applied /
backported are affected.
MITIGATION
==========
Assigning devices using the vm.cfg file for attachment at boot avoids
the vulnerability.
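For example (illustrative only; the BDF below is a placeholder, not
taken from this advisory), the device can be listed in the guest
configuration so that it is assigned at domain creation:

    # Illustrative guest config fragment; replace the placeholder BDF
    # with the real device address.
    pci = [ '0000:03:00.0' ]

With this in place, a failure while establishing the RMRR/Unity
mappings causes `xl create` to abort and tear the domain down, as
described above.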
CREDITS
=======
This issue was discovered by Teddy Astie of Vates and diagnosed as a
security issue by Jan Beulich of SUSE.
RESOLUTION
==========
Applying the attached patch resolves this issue.
Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball. Downstreams are encouraged to update to the
tip of the respective stable branch before applying these patches.
xsa460.patch xen-unstable - Xen 4.16.x
$ sha256sum xsa460*
f4ca598f71e9ef6b9bc50803df2996b92d2e69afd8e36d9544823d7e56ec1819 xsa460.patch
$
DEPLOYMENT DURING EMBARGO
=========================
Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.
But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).
Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.
(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable. This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)
For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAma8sCIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZiSUIAMFWxhjNzhsuUGbrUVsO6oDIs7gOcVEsC3BlcsIp
LqetutOWHwR8B9jHeOjewZjgL/q1031qX+nCCcU/ilZtA7cAiVhPNrh4PSD/D9S5
RqUG3oSsFjSTtGwVl2JlqlHoE90tXOqLBhZFCJixQzaW3kbCfhDZdmufj8TQYBCQ
N3ioNAGwvmSeV8QPh8l3P7TRRsMwr0OTWQYtj7r4QuW+dDPJaKzbCpmWVaCPVeI2
uKUxwwIxSE9J9L1mUR34HIJR/clCFNqlcpc/MmQVz0qprBOh4jNDunN+JNDY1VXR
3P+N50ZnHCK5w1z+vjeVvZRyp9JDt2LDUj6XJ6G9IdvN1xA=
=vNzh
-----END PGP SIGNATURE-----
From: Teddy Astie <teddy.astie@vates.tech>
Subject: x86/IOMMU: move tracking in iommu_identity_mapping()
If for some reason xmalloc() fails after having mapped the reserved
regions, an error is reported, but the regions remain mapped in the P2M.
Similarly if an error occurs during set_identity_p2m_entry() (except on
the first call), the partial mappings of the region would be retained
without being tracked anywhere, and hence without there being a way to
remove them again from the domain's P2M.
Move the setting up of the list entry ahead of trying to map the region.
In cases other than the first mapping failing, keep a record of the
full region, such that a subsequent unmapping request can properly
tear the mappings down again.
To compensate for the potentially excess unmapping requests, don't log a
warning from p2m_remove_identity_entry() when there really was nothing
mapped at a given GFN.
This is XSA-460 / CVE-2024-31145.
Fixes: 2201b67b9128 ("VT-d: improve RMRR region handling")
Fixes: c0e19d7c6c42 ("IOMMU: generalize VT-d's tracking of mapped RMRR regions")
Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1267,9 +1267,11 @@ int p2m_remove_identity_entry(struct dom
else
{
gfn_unlock(p2m, gfn, 0);
- printk(XENLOG_G_WARNING
- "non-identity map d%d:%lx not cleared (mapped to %lx)\n",
- d->domain_id, gfn_l, mfn_x(mfn));
+ if ( (p2mt != p2m_invalid && p2mt != p2m_mmio_dm) ||
+ a != p2m_access_n || !mfn_eq(mfn, INVALID_MFN) )
+ printk(XENLOG_G_WARNING
+ "non-identity map %pd:%lx not cleared (mapped to %lx)\n",
+ d, gfn_l, mfn_x(mfn));
ret = 0;
}
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,24 +267,36 @@ int iommu_identity_mapping(struct domain
if ( p2ma == p2m_access_x )
return -ENOENT;
- while ( base_pfn < end_pfn )
- {
- int err = set_identity_p2m_entry(d, base_pfn, p2ma, flag);
-
- if ( err )
- return err;
- base_pfn++;
- }
-
map = xmalloc(struct identity_map);
if ( !map )
return -ENOMEM;
+
map->base = base;
map->end = end;
map->access = p2ma;
map->count = 1;
+
+ /*
+ * Insert into list ahead of mapping, so the range can be found when
+ * trying to clean up.
+ */
list_add_tail(&map->list, &hd->arch.identity_maps);
+ for ( ; base_pfn < end_pfn; ++base_pfn )
+ {
+ int err = set_identity_p2m_entry(d, base_pfn, p2ma, flag);
+
+ if ( !err )
+ continue;
+
+ if ( (map->base >> PAGE_SHIFT_4K) == base_pfn )
+ {
+ list_del(&map->list);
+ xfree(map);
+ }
+ return err;
+ }
+
return 0;
}