-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

     Xen Security Advisory CVE-2022-42333,CVE-2022-42334 / XSA-428
                               version 3

             x86/HVM pinned cache attributes mis-handling

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

To allow cachability control for HVM guests with passed through devices,
an interface exists to explicitly override defaults which would
otherwise be put in place.  While not exposed to the affected guests
themselves, the interface specifically exists for domains controlling
such guests.  This interface may therefore be used by not fully
privileged entities, e.g. qemu running deprivileged in Dom0 or qemu
running in a so-called stub domain.  With this exposure it is an issue
that
 - the number of such controlled regions was unbounded
   (CVE-2022-42333),
 - installation and removal of such regions was not properly serialized
   (CVE-2022-42334).
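
As an illustration of the exposure, the interface is reachable from a
device model via libxendevicemodel.  A minimal sketch of a single
registration follows (a hypothetical standalone caller; error
reporting assumes the library's usual -1/errno convention, and the UC
type is passed as its raw value 0 rather than by name):

#include <stdint.h>
#include <stdio.h>
#include <xendevicemodel.h>

int main(void)
{
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    domid_t domid = 1; /* placeholder: the HVM guest being controlled */

    if ( !dmod )
        return 1;

    /* Pin guest frames 0x100-0x1ff as UC (type 0).  Prior to the fix
     * for CVE-2022-42333, nothing bounded how many such ranges a
     * device model could register. */
    if ( xendevicemodel_pin_memory_cacheattr(dmod, domid,
                                             0x100, 0x1ff, 0) )
        perror("xendevicemodel_pin_memory_cacheattr");

    xendevicemodel_close(dmod);
    return 0;
}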

IMPACT
======

Entities controlling HVM guests can run the host out of resources or
stall execution of a physical CPU for effectively unbounded periods of
time, resulting in a Denial of Service (DoS) affecting the entire host.
Crashes, information leaks, or elevation of privilege cannot be ruled
out.

VULNERABLE SYSTEMS
==================

Xen versions 4.11 through 4.17 are vulnerable.  Older versions contain
the same functionality, but it is exposed there only via an interface
which is subject to XSA-77's constraints.

Only x86 systems are potentially vulnerable.  Arm systems are not
vulnerable.

Only entities controlling HVM guests can leverage the vulnerability.
These are device models running in either a stub domain or de-privileged
in Dom0.

MITIGATION
==========

Running only PV or PVH guests will avoid the vulnerability.

(Switching from a device model stub domain or a de-privileged device
model to a fully privileged Dom0 device model does NOT mitigate this
vulnerability.  Rather, it simply recategorises the vulnerability as
exploitable only by hostile management code acting "as designed", and
thus merely reclassifies these issues as "not a bug".  The security of
a Xen system
using stub domains is still better than with a qemu-dm running as a Dom0
process.  Users and vendors of stub qemu dm systems should not change
their configuration to use a Dom0 qemu process.)

CREDITS
=======

Aspects of this issue were discovered by Andrew Cooper of XenServer and
Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa428-?.patch           xen-unstable
xsa428-4.17-?.patch      Xen 4.17.x
xsa428-4.16-?.patch      Xen 4.16.x - 4.14.x

$ sha256sum xsa428*
a7bd8d4c1e8579aeda47564efdc960cac92472387ba57d7f7a6d5d79470ebd6f  xsa428.meta
85a421d9123a56894124bed54731b8b6f2e86ad4e286871dee86efff519f4c68  xsa428-1.patch
3b691ca228592539a751ce5af69f31e09d9c477218d53af0602ac5f39f1e74d7  xsa428-2.patch
da60e01a17f9073c83098d187c07bad3a868a6b7f97dbc538cb5ea5698c51b39  xsa428-4.16-1.patch
27718a7a86fd57624cd8500df83eb42ff3499670bc807c6555686c25e7f7b01a  xsa428-4.16-2.patch
da60e01a17f9073c83098d187c07bad3a868a6b7f97dbc538cb5ea5698c51b39  xsa428-4.17-1.patch
20d3b66da8fe06d7e92992218e519f4f9746791d4ba5610d84a335f38a824fcb  xsa428-4.17-2.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmQZlkwMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZevEH/R0hCjoC/n2AJSr2dOU97c4bZmjeB5mTnWrOtOMA
AZnP68nvEzQ7OYfI4ihl+wgtKUvyVXLOWaBH9lKL8CySxrCX1r3BILMGhtDKViV4
opnKOoy0Ejg3H68x5McPhdr+PkvXWTzoNqbkUYMbNTw7ktB4Ze0mbsmKoXDUiLru
QZZ0XxtL4jc+d8GUM0k3Msy0p3lLYvIob8k6DWg7RdWxiIOxL43pKNvShgh7ZehN
P0S/PknVLpoPKzKFzMWrzakhZYYsOWoNM9U7C0zEozX4qrnsyQp3o3mvW/8MrPA+
5BKsIjSYxdleUzLSNks7Xn0nG+ki6kOrwPjFGGOGAwoR8aE=
=ILYn
-----END PGP SIGNATURE-----

{
  "XSA": 428,
  "SupportedVersions": [
    "master",
    "4.17",
    "4.16",
    "4.15",
    "4.14"
  ],
  "Trees": [
    "xen"
  ],
  "Recipes": {
    "4.14": {
      "Recipes": {
        "xen": {
          "StableRef": "c267abfaf2d8176371eda037f9b9152458e0656d",
          "Prereqs": [
            427
          ],
          "Patches": [
            "xsa428-4.16-?.patch"
          ]
        }
      }
    },
    "4.15": {
      "Recipes": {
        "xen": {
          "StableRef": "fa875574b73618daf3bc70e6ff4d342493fa11d9",
          "Prereqs": [
            427
          ],
          "Patches": [
            "xsa428-4.16-?.patch"
          ]
        }
      }
    },
    "4.16": {
      "Recipes": {
        "xen": {
          "StableRef": "84dfe7a56f04a7412fa4869b3e756c49e1cfbe75",
          "Prereqs": [
            427
          ],
          "Patches": [
            "xsa428-4.16-?.patch"
          ]
        }
      }
    },
    "4.17": {
      "Recipes": {
        "xen": {
          "StableRef": "ec5b058d2a6436a2e180315522fcf1645a8153b4",
          "Prereqs": [
            427
          ],
          "Patches": [
            "xsa428-4.17-?.patch"
          ]
        }
      }
    },
    "master": {
      "Recipes": {
        "xen": {
          "StableRef": "31270f11a96ebb875cd70661e2df9e5c6edd7564",
          "Prereqs": [
            427
          ],
          "Patches": [
            "xsa428-?.patch"
          ]
        }
      }
    }
  }
}

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/HVM: bound number of pinned cache attribute regions

This is exposed via DMOP, i.e. to potentially not fully privileged
device models. With that we may not permit registration of an (almost)
unbounded amount of such regions.

This is CVE-2022-42333 / part of XSA-428.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Fixes: 642123c5123f ("x86/hvm: provide XEN_DMOP_pin_memory_cacheattr")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -588,6 +588,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                                  uint64_t gfn_end, uint32_t type)
 {
     struct hvm_mem_pinned_cacheattr_range *range;
+    unsigned int nr = 0;
     int rc = 1;
 
     if ( !is_hvm_domain(d) )
@@ -659,11 +660,15 @@ int hvm_set_mem_pinned_cacheattr(struct
             rc = -EBUSY;
             break;
         }
+        ++nr;
     }
     rcu_read_unlock(&pinned_cacheattr_rcu_lock);
     if ( rc <= 0 )
         return rc;
 
+    if ( nr >= 64 /* The limit is arbitrary. */ )
+        return -ENOSPC;
+
     range = xzalloc(struct hvm_mem_pinned_cacheattr_range);
     if ( range == NULL )
         return -ENOMEM;
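
The caller-visible effect of this bound can be sketched from the
device model side: once 64 ranges are pinned, further registrations
are refused.  A hypothetical probe (again assuming the library's
-1/errno convention; the gfn choices are arbitrary):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <xendevicemodel.h>

/* Pin non-overlapping single-frame UC ranges until Xen refuses; with
 * the patch applied, the attempt beyond the 64th range is expected to
 * fail with ENOSPC. */
static unsigned int count_pinnable_ranges(xendevicemodel_handle *dmod,
                                          domid_t domid)
{
    unsigned int i;

    for ( i = 0; ; i++ )
    {
        uint64_t gfn = 0x1000 + i * 2; /* arbitrary, non-overlapping */

        if ( xendevicemodel_pin_memory_cacheattr(dmod, domid,
                                                 gfn, gfn, 0) )
        {
            if ( errno != ENOSPC )
                perror("xendevicemodel_pin_memory_cacheattr");
            break;
        }
    }

    return i;
}
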
From: Jan Beulich <jbeulich@suse.com>
Subject: x86/HVM: serialize pinned cache attribute list manipulation

While the RCU variants of list insertion and removal allow lockless list
traversal (with RCU just read-locked), insertions and removals still
need serializing amongst themselves. To keep things simple, use the
domain lock for this purpose.

This is CVE-2022-42334 / part of XSA-428.

Fixes: 642123c5123f ("x86/hvm: provide XEN_DMOP_pin_memory_cacheattr")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -587,7 +587,7 @@ static void cf_check free_pinned_cacheat
 int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
                                  uint64_t gfn_end, uint32_t type)
 {
-    struct hvm_mem_pinned_cacheattr_range *range;
+    struct hvm_mem_pinned_cacheattr_range *range, *newr;
     unsigned int nr = 0;
     int rc = 1;
 
@@ -601,14 +601,15 @@ int hvm_set_mem_pinned_cacheattr(struct
     {
     case XEN_DOMCTL_DELETE_MEM_CACHEATTR:
         /* Remove the requested range. */
-        rcu_read_lock(&pinned_cacheattr_rcu_lock);
-        list_for_each_entry_rcu ( range,
-                                  &d->arch.hvm.pinned_cacheattr_ranges,
-                                  list )
+        domain_lock(d);
+        list_for_each_entry ( range,
+                              &d->arch.hvm.pinned_cacheattr_ranges,
+                              list )
             if ( range->start == gfn_start && range->end == gfn_end )
             {
-                rcu_read_unlock(&pinned_cacheattr_rcu_lock);
                 list_del_rcu(&range->list);
+                domain_unlock(d);
+
                 type = range->type;
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
                 p2m_memory_type_changed(d);
@@ -629,7 +630,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                 }
                 return 0;
             }
-        rcu_read_unlock(&pinned_cacheattr_rcu_lock);
+        domain_unlock(d);
         return -ENOENT;
 
     case X86_MT_UCM:
@@ -644,7 +645,10 @@ int hvm_set_mem_pinned_cacheattr(struct
         return -EINVAL;
     }
 
-    rcu_read_lock(&pinned_cacheattr_rcu_lock);
+    newr = xzalloc(struct hvm_mem_pinned_cacheattr_range);
+
+    domain_lock(d);
+
     list_for_each_entry_rcu ( range,
                               &d->arch.hvm.pinned_cacheattr_ranges,
                               list )
@@ -662,27 +666,34 @@ int hvm_set_mem_pinned_cacheattr(struct
         }
         ++nr;
     }
-    rcu_read_unlock(&pinned_cacheattr_rcu_lock);
+
     if ( rc <= 0 )
-        return rc;
+        /* nothing */;
+    else if ( nr >= 64 /* The limit is arbitrary. */ )
+        rc = -ENOSPC;
+    else if ( !newr )
+        rc = -ENOMEM;
+    else
+    {
+        newr->start = gfn_start;
+        newr->end = gfn_end;
+        newr->type = type;
 
-    if ( nr >= 64 /* The limit is arbitrary. */ )
-        return -ENOSPC;
+        list_add_rcu(&newr->list, &d->arch.hvm.pinned_cacheattr_ranges);
+
+        newr = NULL;
+        rc = 0;
+    }
 
-    range = xzalloc(struct hvm_mem_pinned_cacheattr_range);
-    if ( range == NULL )
-        return -ENOMEM;
+    domain_unlock(d);
 
-    range->start = gfn_start;
-    range->end = gfn_end;
-    range->type = type;
+    xfree(newr);
 
-    list_add_rcu(&range->list, &d->arch.hvm.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
     if ( type != X86_MT_WB )
         flush_all(FLUSH_CACHE);
 
-    return 0;
+    return rc;
 }
 
 static int cf_check hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
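
The locking pattern this patch adopts can be illustrated outside Xen
with a userspace analogue (a sketch using liburcu rather than Xen's
internal API; a plain pthread mutex stands in for the domain lock;
link with -lurcu, and reader threads must call rcu_register_thread()):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <urcu.h>          /* rcu_read_lock() etc., call_rcu() */
#include <urcu/rculist.h>  /* cds_list_*_rcu() */

struct range {
    uint64_t start, end;
    struct cds_list_head list;
    struct rcu_head rcu;
};

static CDS_LIST_HEAD(ranges);
static pthread_mutex_t ranges_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reader: lockless traversal inside an RCU read-side critical
 * section; never blocked by writers. */
static bool range_contains(uint64_t gfn)
{
    struct range *r;
    bool found = false;

    rcu_read_lock();
    cds_list_for_each_entry_rcu(r, &ranges, list)
        if ( gfn >= r->start && gfn <= r->end )
        {
            found = true;
            break;
        }
    rcu_read_unlock();

    return found;
}

static void free_range(struct rcu_head *head)
{
    free(caa_container_of(head, struct range, rcu));
}

/* Writers: insertion and removal serialize on the mutex (the role the
 * domain lock plays in the patch), while still using the RCU list
 * primitives so that concurrent readers stay safe. */
static int insert_range(uint64_t start, uint64_t end)
{
    struct range *r = calloc(1, sizeof(*r));

    if ( !r )
        return -1;
    r->start = start;
    r->end = end;

    pthread_mutex_lock(&ranges_lock);
    cds_list_add_rcu(&r->list, &ranges);
    pthread_mutex_unlock(&ranges_lock);

    return 0;
}

static int remove_range(uint64_t start, uint64_t end)
{
    struct range *r;

    pthread_mutex_lock(&ranges_lock);
    cds_list_for_each_entry(r, &ranges, list)
        if ( r->start == start && r->end == end )
        {
            cds_list_del_rcu(&r->list);
            pthread_mutex_unlock(&ranges_lock);
            /* Defer the free until current readers are done. */
            call_rcu(&r->rcu, free_range);
            return 0;
        }
    pthread_mutex_unlock(&ranges_lock);

    return -1;
}
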
From: Jan Beulich <jbeulich@suse.com>
Subject: x86/HVM: bound number of pinned cache attribute regions

This is exposed via DMOP, i.e. to potentially not fully privileged
device models. With that we may not permit registration of an (almost)
unbounded amount of such regions.

This is CVE-2022-42333 / part of XSA-428.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Fixes: 642123c5123f ("x86/hvm: provide XEN_DMOP_pin_memory_cacheattr")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -595,6 +595,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                                  uint64_t gfn_end, uint32_t type)
 {
     struct hvm_mem_pinned_cacheattr_range *range;
+    unsigned int nr = 0;
     int rc = 1;
 
     if ( !is_hvm_domain(d) )
@@ -666,11 +667,15 @@ int hvm_set_mem_pinned_cacheattr(struct
             rc = -EBUSY;
             break;
         }
+        ++nr;
     }
     rcu_read_unlock(&pinned_cacheattr_rcu_lock);
     if ( rc <= 0 )
         return rc;
 
+    if ( nr >= 64 /* The limit is arbitrary. */ )
+        return -ENOSPC;
+
     range = xzalloc(struct hvm_mem_pinned_cacheattr_range);
     if ( range == NULL )
         return -ENOMEM;
From: Jan Beulich <jbeulich@suse.com>
Subject: x86/HVM: serialize pinned cache attribute list manipulation

While the RCU variants of list insertion and removal allow lockless list
traversal (with RCU just read-locked), insertions and removals still
need serializing amongst themselves. To keep things simple, use the
domain lock for this purpose.

This is CVE-2022-42334 / part of XSA-428.

Fixes: 642123c5123f ("x86/hvm: provide XEN_DMOP_pin_memory_cacheattr")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -594,7 +594,7 @@ static void free_pinned_cacheattr_entry(
 int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
                                  uint64_t gfn_end, uint32_t type)
 {
-    struct hvm_mem_pinned_cacheattr_range *range;
+    struct hvm_mem_pinned_cacheattr_range *range, *newr;
     unsigned int nr = 0;
     int rc = 1;
 
@@ -608,14 +608,15 @@ int hvm_set_mem_pinned_cacheattr(struct
     {
     case XEN_DOMCTL_DELETE_MEM_CACHEATTR:
         /* Remove the requested range. */
-        rcu_read_lock(&pinned_cacheattr_rcu_lock);
-        list_for_each_entry_rcu ( range,
-                                  &d->arch.hvm.pinned_cacheattr_ranges,
-                                  list )
+        domain_lock(d);
+        list_for_each_entry ( range,
+                              &d->arch.hvm.pinned_cacheattr_ranges,
+                              list )
             if ( range->start == gfn_start && range->end == gfn_end )
             {
-                rcu_read_unlock(&pinned_cacheattr_rcu_lock);
                 list_del_rcu(&range->list);
+                domain_unlock(d);
+
                 type = range->type;
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
                 p2m_memory_type_changed(d);
@@ -636,7 +637,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                 }
                 return 0;
             }
-        rcu_read_unlock(&pinned_cacheattr_rcu_lock);
+        domain_unlock(d);
         return -ENOENT;
 
     case PAT_TYPE_UC_MINUS:
@@ -651,7 +652,10 @@ int hvm_set_mem_pinned_cacheattr(struct
         return -EINVAL;
     }
 
-    rcu_read_lock(&pinned_cacheattr_rcu_lock);
+    newr = xzalloc(struct hvm_mem_pinned_cacheattr_range);
+
+    domain_lock(d);
+
     list_for_each_entry_rcu ( range,
                               &d->arch.hvm.pinned_cacheattr_ranges,
                               list )
@@ -669,27 +673,34 @@ int hvm_set_mem_pinned_cacheattr(struct
         }
         ++nr;
     }
-    rcu_read_unlock(&pinned_cacheattr_rcu_lock);
+
     if ( rc <= 0 )
-        return rc;
+        /* nothing */;
+    else if ( nr >= 64 /* The limit is arbitrary. */ )
+        rc = -ENOSPC;
+    else if ( !newr )
+        rc = -ENOMEM;
+    else
+    {
+        newr->start = gfn_start;
+        newr->end = gfn_end;
+        newr->type = type;
 
-    if ( nr >= 64 /* The limit is arbitrary. */ )
-        return -ENOSPC;
+        list_add_rcu(&newr->list, &d->arch.hvm.pinned_cacheattr_ranges);
+
+        newr = NULL;
+        rc = 0;
+    }
 
-    range = xzalloc(struct hvm_mem_pinned_cacheattr_range);
-    if ( range == NULL )
-        return -ENOMEM;
+    domain_unlock(d);
 
-    range->start = gfn_start;
-    range->end = gfn_end;
-    range->type = type;
+    xfree(newr);
 
-    list_add_rcu(&range->list, &d->arch.hvm.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
     if ( type != PAT_TYPE_WRBACK )
         flush_all(FLUSH_CACHE);
 
-    return 0;
+    return rc;
 }
 
 static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
From: Jan Beulich <jbeulich@suse.com>
Subject: x86/HVM: bound number of pinned cache attribute regions

This is exposed via DMOP, i.e. to potentially not fully privileged
device models. With that we may not permit registration of an (almost)
unbounded amount of such regions.

This is CVE-2022-42333 / part of XSA-428.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Fixes: 642123c5123f ("x86/hvm: provide XEN_DMOP_pin_memory_cacheattr")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -595,6 +595,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                                  uint64_t gfn_end, uint32_t type)
 {
     struct hvm_mem_pinned_cacheattr_range *range;
+    unsigned int nr = 0;
     int rc = 1;
 
     if ( !is_hvm_domain(d) )
@@ -666,11 +667,15 @@ int hvm_set_mem_pinned_cacheattr(struct
             rc = -EBUSY;
             break;
         }
+        ++nr;
     }
     rcu_read_unlock(&pinned_cacheattr_rcu_lock);
     if ( rc <= 0 )
         return rc;
 
+    if ( nr >= 64 /* The limit is arbitrary. */ )
+        return -ENOSPC;
+
     range = xzalloc(struct hvm_mem_pinned_cacheattr_range);
     if ( range == NULL )
         return -ENOMEM;
From: Jan Beulich <jbeulich@suse.com>
Subject: x86/HVM: serialize pinned cache attribute list manipulation

While the RCU variants of list insertion and removal allow lockless list
traversal (with RCU just read-locked), insertions and removals still
need serializing amongst themselves. To keep things simple, use the
domain lock for this purpose.

This is CVE-2022-42334 / part of XSA-428.

Fixes: 642123c5123f ("x86/hvm: provide XEN_DMOP_pin_memory_cacheattr")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -594,7 +594,7 @@ static void cf_check free_pinned_cacheat
 int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
                                  uint64_t gfn_end, uint32_t type)
 {
-    struct hvm_mem_pinned_cacheattr_range *range;
+    struct hvm_mem_pinned_cacheattr_range *range, *newr;
     unsigned int nr = 0;
     int rc = 1;
 
@@ -608,14 +608,15 @@ int hvm_set_mem_pinned_cacheattr(struct
     {
     case XEN_DOMCTL_DELETE_MEM_CACHEATTR:
         /* Remove the requested range. */
-        rcu_read_lock(&pinned_cacheattr_rcu_lock);
-        list_for_each_entry_rcu ( range,
-                                  &d->arch.hvm.pinned_cacheattr_ranges,
-                                  list )
+        domain_lock(d);
+        list_for_each_entry ( range,
+                              &d->arch.hvm.pinned_cacheattr_ranges,
+                              list )
             if ( range->start == gfn_start && range->end == gfn_end )
             {
-                rcu_read_unlock(&pinned_cacheattr_rcu_lock);
                 list_del_rcu(&range->list);
+                domain_unlock(d);
+
                 type = range->type;
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
                 p2m_memory_type_changed(d);
@@ -636,7 +637,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                 }
                 return 0;
             }
-        rcu_read_unlock(&pinned_cacheattr_rcu_lock);
+        domain_unlock(d);
         return -ENOENT;
 
     case PAT_TYPE_UC_MINUS:
@@ -651,7 +652,10 @@ int hvm_set_mem_pinned_cacheattr(struct
         return -EINVAL;
     }
 
-    rcu_read_lock(&pinned_cacheattr_rcu_lock);
+    newr = xzalloc(struct hvm_mem_pinned_cacheattr_range);
+
+    domain_lock(d);
+
     list_for_each_entry_rcu ( range,
                               &d->arch.hvm.pinned_cacheattr_ranges,
                               list )
@@ -669,27 +673,34 @@ int hvm_set_mem_pinned_cacheattr(struct
         }
         ++nr;
     }
-    rcu_read_unlock(&pinned_cacheattr_rcu_lock);
+
     if ( rc <= 0 )
-        return rc;
+        /* nothing */;
+    else if ( nr >= 64 /* The limit is arbitrary. */ )
+        rc = -ENOSPC;
+    else if ( !newr )
+        rc = -ENOMEM;
+    else
+    {
+        newr->start = gfn_start;
+        newr->end = gfn_end;
+        newr->type = type;
 
-    if ( nr >= 64 /* The limit is arbitrary. */ )
-        return -ENOSPC;
+        list_add_rcu(&newr->list, &d->arch.hvm.pinned_cacheattr_ranges);
+
+        newr = NULL;
+        rc = 0;
+    }
 
-    range = xzalloc(struct hvm_mem_pinned_cacheattr_range);
-    if ( range == NULL )
-        return -ENOMEM;
+    domain_unlock(d);
 
-    range->start = gfn_start;
-    range->end = gfn_end;
-    range->type = type;
+    xfree(newr);
 
-    list_add_rcu(&range->list, &d->arch.hvm.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
     if ( type != PAT_TYPE_WRBACK )
         flush_all(FLUSH_CACHE);
 
-    return 0;
+    return rc;
 }
 
 static int cf_check hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)