From: Magnus Kulke <magnuskulke@linux.microsoft.com>
To: qemu-devel@nongnu.org
Cc: Cameron Esfahani, Phil Dennis-Jordan, Roman Bolshakov, Thomas Huth,
    Zhao Liu, Wei Liu, Paolo Bonzini, Richard Henderson,
    Philippe Mathieu-Daudé, Markus Armbruster, Cornelia Huck,
    Magnus Kulke, Marc-André Lureau, "Michael S. Tsirkin",
    Daniel P. Berrangé, Alex Bennée
Tsirkin" , =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= , =?UTF-8?q?Alex=20Benn=C3=A9e?= Subject: [PATCH v2 26/27] accel/mshv: Workaround for overlappig mem mappings Date: Tue, 1 Jul 2025 19:28:33 +0200 Message-Id: <20250701172834.44849-27-magnuskulke@linux.microsoft.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250701172834.44849-1-magnuskulke@linux.microsoft.com> References: <20250701172834.44849-1-magnuskulke@linux.microsoft.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=13.77.154.182; envelope-from=magnuskulke@linux.microsoft.com; helo=linux.microsoft.com X-Spam_score_int: -19 X-Spam_score: -2.0 X-Spam_bar: -- X-Spam_report: (-2.0 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @linux.microsoft.com) X-ZM-MESSAGEID: 1751391630288116600 Content-Type: text/plain; charset="utf-8" QEMU maps certain regions into the guest multiple times, as seen in the trace below. Currently the MSHV kernel driver will reject those mappings. To workaround this, a record is kept (a static global list of "slots", inspired by what the HVF accelerator has implemented). An overlapping region is not registered at the hypervisor, and marked as mapped=3Dfalse. If there is an UNMAPPED_GPA exit, we can look for a slot that is unmapped and would cover the GPA. In this case we map out the conflicting slot and map in the requested region. 
Signed-off-by: Magnus Kulke <magnuskulke@linux.microsoft.com>
---
 accel/mshv/mem.c            | 264 ++++++++++++++++++++++++++++++++----
 accel/mshv/trace-events     |   7 +-
 include/system/mshv.h       |  16 ++-
 target/i386/mshv/mshv-cpu.c |  43 ++++++
 4 files changed, 295 insertions(+), 35 deletions(-)

diff --git a/accel/mshv/mem.c b/accel/mshv/mem.c
index 6d7a726898..0ffe379601 100644
--- a/accel/mshv/mem.c
+++ b/accel/mshv/mem.c
@@ -20,44 +20,167 @@
 #include
 #include "trace.h"
 
+MshvMemorySlot mem_slots[MSHV_MAX_MEM_SLOTS];
+
+static MshvMemorySlot *find_free_mem_slot(void)
+{
+    for (int i = 0; i < MSHV_MAX_MEM_SLOTS; i++) {
+        if (mem_slots[i].memory_size == 0) {
+            return &mem_slots[i];
+        }
+    }
+
+    return NULL;
+}
+
+/* Find _currently mapped_ memory slot, that is overlapping in userspace */
+static MshvMemorySlot *find_overlap_mem_slot(const MshvMemorySlot *slot)
+{
+    MshvMemorySlot *other;
+    bool overlaps;
+    uint64_t start_1 = slot->userspace_addr, start_2;
+    size_t len_1 = slot->memory_size, len_2;
+
+    for (int i = 0; i < MSHV_MAX_MEM_SLOTS; i++) {
+        other = &mem_slots[i];
+
+        if (other == slot) {
+            continue;
+        }
+
+        start_2 = other->userspace_addr;
+        len_2 = other->memory_size;
+
+        overlaps = ranges_overlap(start_1, len_1, start_2, len_2);
+        if (other->mapped && overlaps) {
+            return other;
+        }
+    }
+
+    return NULL;
+}
+
 static int set_guest_memory(int vm_fd, const mshv_user_mem_region *region)
 {
     int ret;
 
     ret = ioctl(vm_fd, MSHV_SET_GUEST_MEMORY, region);
     if (ret < 0) {
-        error_report("failed to set guest memory");
-        return -errno;
+        error_report("failed to set guest memory: %s", strerror(errno));
+        return -1;
     }
 
     return 0;
 }
 
-static int map_or_unmap(int vm_fd, const MshvMemoryRegion *mr, bool map)
+static int map_or_unmap(int vm_fd, const MshvMemorySlot *slot, bool map)
 {
     struct mshv_user_mem_region region = {0};
 
-    region.guest_pfn = mr->guest_phys_addr >> MSHV_PAGE_SHIFT;
-    region.size = mr->memory_size;
-    region.userspace_addr = mr->userspace_addr;
+    region.guest_pfn = slot->guest_phys_addr >> MSHV_PAGE_SHIFT;
+    region.size = slot->memory_size;
+    region.userspace_addr = slot->userspace_addr;
 
     if (!map) {
         region.flags |= (1 << MSHV_SET_MEM_BIT_UNMAP);
-        trace_mshv_unmap_memory(mr->userspace_addr, mr->guest_phys_addr,
-                                mr->memory_size);
+        trace_mshv_unmap_memory(slot->userspace_addr, slot->guest_phys_addr,
+                                slot->memory_size);
         return set_guest_memory(vm_fd, &region);
     }
 
     region.flags = BIT(MSHV_SET_MEM_BIT_EXECUTABLE);
-    if (!mr->readonly) {
+    if (!slot->readonly) {
         region.flags |= BIT(MSHV_SET_MEM_BIT_WRITABLE);
     }
 
-    trace_mshv_map_memory(mr->userspace_addr, mr->guest_phys_addr,
-                          mr->memory_size);
+    trace_mshv_map_memory(slot->userspace_addr, slot->guest_phys_addr,
+                          slot->memory_size);
     return set_guest_memory(vm_fd, &region);
 }
 
+static MshvMemorySlot *find_mem_slot_by_region(uint64_t gpa, uint64_t size,
+                                               uint64_t userspace_addr)
+{
+    MshvMemorySlot *slot;
+
+    for (int i = 0; i < MSHV_MAX_MEM_SLOTS; i++) {
+        slot = &mem_slots[i];
+
+        if (slot->guest_phys_addr == gpa &&
+            slot->userspace_addr == userspace_addr &&
+            slot->memory_size == size) {
+            trace_mshv_found_slot(slot->userspace_addr, slot->guest_phys_addr,
+                                  slot->memory_size);
+            return slot;
+        }
+    }
+
+    return NULL;
+}
+
+static MshvMemorySlot* find_mem_slot_by_gpa(uint64_t gpa)
+{
+    uint64_t gpa_offset;
+    MshvMemorySlot *slot;
+
+    trace_mshv_find_slot_by_gpa(gpa);
+
+    for (int i = 0; i < MSHV_MAX_MEM_SLOTS; i++) {
+        slot = &mem_slots[i];
+
+        gpa_offset = gpa - slot->guest_phys_addr;
+        if (slot->guest_phys_addr <= gpa && gpa_offset < slot->memory_size) {
+            trace_mshv_found_slot(slot->userspace_addr, slot->guest_phys_addr,
+                                  slot->memory_size);
+            return slot;
+        }
+    }
+
+    return NULL;
+}
+
+MshvRemapResult mshv_remap_overlap_region(int vm_fd, uint64_t gpa)
+{
+    MshvMemorySlot *gpa_slot, *overlap_slot;
+    int ret;
+
+    /* return early if no slot is found */
+    gpa_slot = find_mem_slot_by_gpa(gpa);
+    if (gpa_slot == NULL) {
+        return MshvRemapNoMapping;
+    }
+
+    overlap_slot = find_overlap_mem_slot(gpa_slot);
+    if (overlap_slot == NULL) {
+        return MshvRemapNoOverlap;
+    }
+
+    /* unmap overlapping slot */
+    ret = map_or_unmap(vm_fd, overlap_slot, false);
+    if (ret < 0) {
+        error_report("failed to unmap overlap region");
+        abort();
+    }
+    overlap_slot->mapped = false;
+    warn_report("mapped out userspace_addr=0x%016lx gpa=0x%010lx size=0x%lx",
+                overlap_slot->userspace_addr,
+                overlap_slot->guest_phys_addr,
+                overlap_slot->memory_size);
+
+    /* map region for gpa */
+    ret = map_or_unmap(vm_fd, gpa_slot, true);
+    if (ret < 0) {
+        error_report("failed to map new region");
+        abort();
+    }
+    gpa_slot->mapped = true;
+    warn_report("mapped in userspace_addr=0x%016lx gpa=0x%010lx size=0x%lx",
+                gpa_slot->userspace_addr, gpa_slot->guest_phys_addr,
+                gpa_slot->memory_size);
+
+    return MshvRemapOk;
+}
+
 static int handle_unmapped_mmio_region_read(uint64_t gpa, uint64_t size,
                                             uint8_t *data)
 {
@@ -123,20 +246,106 @@ int mshv_guest_mem_write(uint64_t gpa, const uint8_t *data, uintptr_t size,
     return -1;
 }
 
-static int set_memory(const MshvMemoryRegion *mshv_mr, bool add)
+static void clear_slot(MshvMemorySlot *slot)
 {
-    int ret = 0;
+    assert(slot);
+
+    *slot = (MshvMemorySlot) { 0 };
+}
+
+static int tracked_unmap(int vm_fd, uint64_t gpa, uint64_t size,
+                         uint64_t userspace_addr)
+{
+    int ret;
+    MshvMemorySlot *slot;
+
+    slot = find_mem_slot_by_region(gpa, size, userspace_addr);
+    if (!slot) {
+        trace_mshv_skip_unset_mem(userspace_addr, gpa, size);
+        /* no work to do */
+        return 0;
+    }
+
+    if (!slot->mapped) {
+        /* remove slot, no need to unmap */
+        clear_slot(slot);
+        return 0;
+    }
+
+    ret = map_or_unmap(vm_fd, slot, false);
+    if (ret < 0) {
+        error_report("failed to unmap memory region");
+        return ret;
+    }
+    clear_slot(slot);
+
+    return 0;
+}
 
-    if (!mshv_mr) {
-        error_report("Invalid mshv_mr");
+static int tracked_map(int vm_fd, uint64_t gpa, uint64_t size, bool readonly,
+                       uint64_t userspace_addr)
+{
+    MshvMemorySlot *slot, *overlap_slot;
+    int ret;
+
+    slot = find_mem_slot_by_region(gpa, size, userspace_addr);
+    if (slot) {
+        error_report("memory region already mapped at gpa=0x%lx, "
+                     "userspace_addr=0x%lx, size=0x%lx",
+                     slot->guest_phys_addr, slot->userspace_addr,
+                     slot->memory_size);
+        return -1;
+    }
+
+    slot = find_free_mem_slot();
+    if (!slot) {
+        error_report("no free memory slot available");
+        return -1;
+    }
+
+    slot->guest_phys_addr = gpa;
+    slot->userspace_addr = userspace_addr;
+    slot->memory_size = size;
+    slot->readonly = readonly;
+
+    overlap_slot = find_overlap_mem_slot(slot);
+    if (overlap_slot) {
+        trace_mshv_remap_attempt(slot->userspace_addr,
+                                 slot->guest_phys_addr,
+                                 slot->memory_size);
+        warn_report("attempt to map region [0x%lx-0x%lx], while "
+                    "[0x%lx-0x%lx] is already mapped in the guest",
+                    userspace_addr, userspace_addr + size - 1,
+                    overlap_slot->userspace_addr,
+                    overlap_slot->userspace_addr +
+                    overlap_slot->memory_size - 1);
+
+        /* do not register mem slot in hv, but record for later swap-in */
+        slot->mapped = false;
+
+        return 0;
+    }
+
+    ret = map_or_unmap(vm_fd, slot, true);
+    if (ret < 0) {
+        error_report("failed to map memory region");
         return -1;
     }
+    slot->mapped = true;
 
-    trace_mshv_set_memory(add, mshv_mr->guest_phys_addr,
-                          mshv_mr->memory_size,
-                          mshv_mr->userspace_addr, mshv_mr->readonly,
-                          ret);
-    return map_or_unmap(mshv_state->vm, mshv_mr, add);
+    return 0;
+}
+
+static int set_memory(uint64_t gpa, uint64_t size, bool readonly,
+                      uint64_t userspace_addr, bool add)
+{
+    int vm_fd = mshv_state->vm;
+
+    if (add) {
+        return tracked_map(vm_fd, gpa, size, readonly, userspace_addr);
+    }
+
+    return tracked_unmap(vm_fd, gpa, size, userspace_addr);
 }
 
 /*
@@ -172,9 +381,10 @@ void mshv_set_phys_mem(MshvMemoryListener *mml, MemoryRegionSection *section,
     bool writable = !area->readonly && !area->rom_device;
     hwaddr start_addr, mr_offset, size;
     void *ram;
-    MshvMemoryRegion mshv_mr = {0};
 
-    trace_mshv_set_phys_mem(add, section->mr->name);
+    size = align_section(section, &start_addr);
+
+    trace_mshv_set_phys_mem(add, section->mr->name, start_addr);
 
     /* If the memory device is a writable non-ram area, we do not
      * want to map it into the guest memory. If it is not a ROM device,
@@ -188,7 +398,6 @@ void mshv_set_phys_mem(MshvMemoryListener *mml, MemoryRegionSection *section,
         }
     }
 
-    size = align_section(section, &start_addr);
     if (!size) {
         return;
    }
@@ -198,14 +407,9 @@ void mshv_set_phys_mem(MshvMemoryListener *mml, MemoryRegionSection *section,
 
     ram = memory_region_get_ram_ptr(area) + mr_offset;
 
-    mshv_mr.guest_phys_addr = start_addr;
-    mshv_mr.memory_size = size;
-    mshv_mr.readonly = !writable;
-    mshv_mr.userspace_addr = (uint64_t)ram;
-
-    ret = set_memory(&mshv_mr, add);
+    ret = set_memory(start_addr, size, !writable, (uint64_t)ram, add);
     if (ret < 0) {
-        error_report("Failed to set memory region");
+        error_report("failed to set memory region");
         abort();
     }
 }
diff --git a/accel/mshv/trace-events b/accel/mshv/trace-events
index bade57e22c..efd9dd7b3c 100644
--- a/accel/mshv/trace-events
+++ b/accel/mshv/trace-events
@@ -20,5 +20,10 @@ mshv_mem_write(uint64_t addr, size_t size) "\tgpa=%lx size=%lu"
 mshv_mem_read(uint64_t addr, size_t size) "\tgpa=%lx size=%lu"
 mshv_map_memory(uint64_t userspace_addr, uint64_t gpa, uint64_t size) "\tu_a=%lx gpa=%010lx size=%08lx"
 mshv_unmap_memory(uint64_t userspace_addr, uint64_t gpa, uint64_t size) "\tu_a=%lx gpa=%010lx size=%08lx"
-mshv_set_phys_mem(bool add, const char *name) "\tadd=%d name=%s"
+mshv_set_phys_mem(bool add, const char *name, uint64_t gpa) "\tadd=%d name=%s gpa=%lx"
+
+mshv_found_slot(uint64_t userspace_addr, uint64_t gpa, uint64_t size) "\tu_a=%lx gpa=%010lx size=%08lx"
+mshv_skip_unset_mem(uint64_t userspace_addr, uint64_t gpa, uint64_t size) "\tu_a=%lx gpa=%010lx size=%08lx"
+mshv_remap_attempt(uint64_t userspace_addr, uint64_t gpa, uint64_t size) "\tu_a=%lx gpa=%010lx size=%08lx"
+mshv_find_slot_by_gpa(uint64_t gpa) "\tgpa=%010lx"
 mshv_handle_mmio(uint64_t gva, uint64_t gpa, uint64_t size, uint8_t access_type) "\tgva=%lx gpa=%010lx size=%lx access_type=%d"
diff --git a/include/system/mshv.h b/include/system/mshv.h
index 27d7e3dff3..124da05885 100644
--- a/include/system/mshv.h
+++ b/include/system/mshv.h
@@ -38,6 +38,8 @@ typedef struct hyperv_message hv_message;
 
 #define MSHV_MSR_ENTRIES_COUNT 64
 
+#define MSHV_MAX_MEM_SLOTS 32
+
 #ifdef CONFIG_MSHV_IS_POSSIBLE
 extern bool mshv_allowed;
 #define mshv_enabled() (mshv_allowed)
@@ -102,6 +104,12 @@ typedef enum MshvVmExit {
     MshvVmExitHlt = 3,
 } MshvVmExit;
 
+typedef enum MshvRemapResult {
+    MshvRemapOk = 0,
+    MshvRemapNoMapping = 1,
+    MshvRemapNoOverlap = 2,
+} MshvRemapResult;
+
 void mshv_init_mmio_emu(void);
 int mshv_create_vcpu(int vm_fd, uint8_t vp_index, int *cpu_fd);
 void mshv_remove_vcpu(int vm_fd, int cpu_fd);
@@ -143,15 +151,15 @@ typedef struct MshvMsrEntries {
 int mshv_configure_msr(int cpu_fd, const MshvMsrEntry *msrs, size_t n_msrs);
 
 /* memory */
-typedef struct MshvMemoryRegion {
+typedef struct MshvMemorySlot {
     uint64_t guest_phys_addr;
     uint64_t memory_size;
     uint64_t userspace_addr;
     bool readonly;
-} MshvMemoryRegion;
+    bool mapped;
+} MshvMemorySlot;
 
-int mshv_add_mem(int vm_fd, const MshvMemoryRegion *mr);
-int mshv_remove_mem(int vm_fd, const MshvMemoryRegion *mr);
+MshvRemapResult mshv_remap_overlap_region(int vm_fd, uint64_t gpa);
 int mshv_guest_mem_read(uint64_t gpa, uint8_t *data, uintptr_t size,
                         bool is_secure_mode, bool instruction_fetch);
 int mshv_guest_mem_write(uint64_t gpa, const uint8_t *data, uintptr_t size,
diff --git a/target/i386/mshv/mshv-cpu.c b/target/i386/mshv/mshv-cpu.c
index 41a3398ec8..083f161274 100644
--- a/target/i386/mshv/mshv-cpu.c
+++ b/target/i386/mshv/mshv-cpu.c
@@ -1073,6 +1073,43 @@ static int handle_mmio(CPUState *cpu, const struct hyperv_message *msg,
     return 0;
 }
 
+static int handle_unmapped_mem(int vm_fd, CPUState *cpu,
+                               const struct hyperv_message *msg,
+                               MshvVmExit *exit_reason)
+{
+    struct hv_x64_memory_intercept_message info = { 0 };
+    uint64_t gpa;
+    int ret;
+    enum MshvRemapResult remap_result;
+
+    ret = set_memory_info(msg, &info);
+    if (ret < 0) {
+        error_report("failed to convert message to memory info");
+        return -1;
+    }
+
+    gpa = info.guest_physical_address;
+
+    /* attempt to remap the region, in case of overlapping userspace mappings */
+    remap_result = mshv_remap_overlap_region(vm_fd, gpa);
+    *exit_reason = MshvVmExitIgnore;
+
+    switch (remap_result) {
+    case MshvRemapNoMapping:
+        /* if we didn't find a mapping, it is probably mmio */
+        return handle_mmio(cpu, msg, exit_reason);
+    case MshvRemapOk:
+        break;
+    case MshvRemapNoOverlap:
+        /* This should not happen, but we are forgiving it */
+        warn_report("found no overlap for unmapped region");
+        *exit_reason = MshvVmExitSpecial;
+        break;
+    }
+
+    return 0;
+}
+
 static int set_ioport_info(const struct hyperv_message *msg,
                            hv_x64_io_port_intercept_message *info)
 {
@@ -1449,6 +1486,12 @@ int mshv_run_vcpu(int vm_fd, CPUState *cpu, hv_message *msg, MshvVmExit *exit)
     case HVMSG_UNRECOVERABLE_EXCEPTION:
         return MshvVmExitShutdown;
     case HVMSG_UNMAPPED_GPA:
+        ret = handle_unmapped_mem(vm_fd, cpu, msg, &exit_reason);
+        if (ret < 0) {
+            error_report("failed to handle unmapped memory");
+            return -1;
+        }
+        return exit_reason;
     case HVMSG_GPA_INTERCEPT:
         ret = handle_mmio(cpu, msg, &exit_reason);
         if (ret < 0) {
-- 
2.34.1