From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 01/10] memory: Introduce RamDiscardMgr for RAM memory regions
Date: Tue, 8 Dec 2020 17:39:41 +0100
Message-Id: <20201208163950.29617-2-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>

We have some special RAM memory regions (managed by virtio-mem), whereby
the guest agreed to only use selected memory ranges. "Unused" parts are
discarded so they won't consume memory - we logically unplug these memory
ranges. Before the VM may use such logically unplugged memory again,
coordination with the hypervisor is required.

This results in "sparse" mmaps/RAMBlocks/memory regions, whereby only
coordinated parts are valid to be used/accessed by the VM.

In most cases, we don't care about that - e.g., in KVM, we simply have a
single KVM memory slot. However, in the case of vfio, registering the whole
region with the kernel results in all pages getting pinned, and therefore
an unexpectedly high memory consumption - discarding of RAM in that context
is broken.

Let's introduce a way to coordinate discarding/populating memory within a
RAM memory region with such special consumers of RAM memory regions: they
can register as listeners and get updates on memory getting discarded and
populated. Using this machinery, vfio will be able to map only the
currently populated parts, so that discarded parts don't get pinned and
don't consume memory.

A RamDiscardMgr has to be set for a memory region before it gets mapped,
and cannot change while the memory region is mapped.

Note: At some point, we might want to let RAMBlock users (esp. vfio used
for nvme://) consume this interface as well. We'll need RAMBlock notifier
calls when a RAMBlock gets mapped/unmapped (via the corresponding memory
region), so we can properly register a listener there as well.

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr.
David Alan Gilbert Cc: Igor Mammedov Cc: Pankaj Gupta Cc: Peter Xu Cc: Auger Eric Cc: Wei Yang Cc: teawater Cc: Marek Kedzierski Signed-off-by: David Hildenbrand Reviewed-by: Pankaj Gupta --- include/exec/memory.h | 231 ++++++++++++++++++++++++++++++++++++++++++ softmmu/memory.c | 22 ++++ 2 files changed, 253 insertions(+) diff --git a/include/exec/memory.h b/include/exec/memory.h index 0f3e6bcd5e..30d4fcd2c0 100644 --- a/include/exec/memory.h +++ b/include/exec/memory.h @@ -42,6 +42,12 @@ typedef struct IOMMUMemoryRegionClass IOMMUMemoryRegionC= lass; DECLARE_OBJ_CHECKERS(IOMMUMemoryRegion, IOMMUMemoryRegionClass, IOMMU_MEMORY_REGION, TYPE_IOMMU_MEMORY_REGION) =20 +#define TYPE_RAM_DISCARD_MGR "qemu:ram-discard-mgr" +typedef struct RamDiscardMgrClass RamDiscardMgrClass; +typedef struct RamDiscardMgr RamDiscardMgr; +DECLARE_OBJ_CHECKERS(RamDiscardMgr, RamDiscardMgrClass, RAM_DISCARD_MGR, + TYPE_RAM_DISCARD_MGR); + #ifdef CONFIG_FUZZ void fuzz_dma_read_cb(size_t addr, size_t len, @@ -116,6 +122,66 @@ struct IOMMUNotifier { }; typedef struct IOMMUNotifier IOMMUNotifier; =20 +struct RamDiscardListener; +typedef int (*NotifyRamPopulate)(struct RamDiscardListener *rdl, + const MemoryRegion *mr, ram_addr_t offset, + ram_addr_t size); +typedef void (*NotifyRamDiscard)(struct RamDiscardListener *rdl, + const MemoryRegion *mr, ram_addr_t offset, + ram_addr_t size); +typedef void (*NotifyRamDiscardAll)(struct RamDiscardListener *rdl, + const MemoryRegion *mr); + +typedef struct RamDiscardListener { + /* + * @notify_populate: + * + * Notification that previously discarded memory is about to get popul= ated. + * Listeners are able to object. If any listener objects, already + * successfully notified listeners are notified about a discard again. + * + * @rdl: the #RamDiscardListener getting notified + * @mr: the relevant #MemoryRegion + * @offset: offset into the #MemoryRegion, aligned to minimum granular= ity of + * the #RamDiscardMgr + * @size: the size, aligned to minimum granularity of the #RamDiscardM= gr + * + * Returns 0 on success. If the notification is rejected by the listen= er, + * an error is returned. + */ + NotifyRamPopulate notify_populate; + + /* + * @notify_discard: + * + * Notification that previously populated memory was discarded success= fully + * and listeners should drop all references to such memory and prevent + * new population (e.g., unmap). + * + * @rdl: the #RamDiscardListener getting notified + * @mr: the relevant #MemoryRegion + * @offset: offset into the #MemoryRegion, aligned to minimum granular= ity of + * the #RamDiscardMgr + * @size: the size, aligned to minimum granularity of the #RamDiscardM= gr + */ + NotifyRamDiscard notify_discard; + + /* + * @notify_discard_all: + * + * Notification that all previously populated memory was discarded + * successfully. + * + * Note: this callback is optional. If not set, individual notify_popu= late() + * notifications are triggered. 
+ * + * @rdl: the #RamDiscardListener getting notified + * @mr: the relevant #MemoryRegion + */ + NotifyRamDiscardAll notify_discard_all; + QLIST_ENTRY(RamDiscardListener) next; +} RamDiscardListener; + /* RAM is pre-allocated and passed into qemu_ram_alloc_from_ptr */ #define RAM_PREALLOC (1 << 0) =20 @@ -151,6 +217,16 @@ static inline void iommu_notifier_init(IOMMUNotifier *= n, IOMMUNotify fn, n->iommu_idx =3D iommu_idx; } =20 +static inline void ram_discard_listener_init(RamDiscardListener *rdl, + NotifyRamPopulate populate_fn, + NotifyRamDiscard discard_fn, + NotifyRamDiscardAll discard_a= ll_fn) +{ + rdl->notify_populate =3D populate_fn; + rdl->notify_discard =3D discard_fn; + rdl->notify_discard_all =3D discard_all_fn; +} + /* * Memory region callbacks */ @@ -425,6 +501,126 @@ struct IOMMUMemoryRegionClass { Error **errp); }; =20 +/* + * RamDiscardMgrClass: + * + * A #RamDiscardMgr coordinates which parts of specific RAM #MemoryRegion + * regions are currently populated to be used/accessed by the VM, notifying + * after parts were discarded (freeing up memory) and before parts will be + * populated (consuming memory), to be used/acessed by the VM. + * + * A #RamDiscardMgr can only be set for a RAM #MemoryRegion while the + * #MemoryRegion isn't mapped yet; it cannot change while the #MemoryRegio= n is + * mapped. + * + * The #RamDiscardMgr is intended to be used by technologies that are + * incompatible with discarding of RAM (e.g., VFIO, which may pin all + * memory inside a #MemoryRegion), and require proper coordination to only + * map the currently populated parts, to hinder parts that are expected to + * remain discarded from silently getting populated and consuming memory. + * Technologies that support discarding of RAM don't have to bother and can + * simply map the whole #MemoryRegion. + * + * An example #RamDiscardMgr is virtio-mem, which logically (un)plugs + * memory within an assigned RAM #MemoryRegion, coordinated with the VM. + * Logically unplugging memory consists of discarding RAM. The VM agreed t= o not + * access unplugged (discarded) memory - especially via DMA. virtio-mem wi= ll + * properly coordinate with listeners before memory is plugged (populated), + * and after memory is unplugged (discarded). + * + * Listeners are called in multiples of the minimum granularity and change= s are + * aligned to the minimum granularity within the #MemoryRegion. Listeners = have + * to prepare for memory becomming discarded in a different granularity th= an it + * was populated and the other way around. + */ +struct RamDiscardMgrClass { + /* private */ + InterfaceClass parent_class; + + /* public */ + + /** + * @get_min_granularity: + * + * Get the minimum granularity in which listeners will get notified + * about changes within the #MemoryRegion via the #RamDiscardMgr. + * + * @rdm: the #RamDiscardMgr + * @mr: the #MemoryRegion + * + * Returns the minimum granularity. + */ + uint64_t (*get_min_granularity)(const RamDiscardMgr *rdm, + const MemoryRegion *mr); + + /** + * @is_populated: + * + * Check whether the given range within the #MemoryRegion is completely + * populated (i.e., no parts are currently discarded). There are no + * alignment requirements for the range. + * + * @rdm: the #RamDiscardMgr + * @mr: the #MemoryRegion + * @offset: offset into the #MemoryRegion + * @size: size in the #MemoryRegion + * + * Returns whether the given range is completely populated. 
+ */ + bool (*is_populated)(const RamDiscardMgr *rdm, const MemoryRegion *mr, + ram_addr_t start, ram_addr_t offset); + + /** + * @register_listener: + * + * Register a #RamDiscardListener for a #MemoryRegion via the + * #RamDiscardMgr and immediately notify the #RamDiscardListener about= all + * populated parts within the #MemoryRegion via the #RamDiscardMgr. + * + * In case any notification fails, no further notifications are trigge= red + * and an error is logged. + * + * @rdm: the #RamDiscardMgr + * @mr: the #MemoryRegion + * @rdl: the #RamDiscardListener + */ + void (*register_listener)(RamDiscardMgr *rdm, const MemoryRegion *mr, + RamDiscardListener *rdl); + + /** + * @unregister_listener: + * + * Unregister a previously registered #RamDiscardListener for a + * #MemoryRegion via the #RamDiscardMgr after notifying the + * #RamDiscardListener about all populated parts becoming unpopulated + * within the #MemoryRegion via the #RamDiscardMgr. + * + * @rdm: the #RamDiscardMgr + * @mr: the #MemoryRegion + * @rdl: the #RamDiscardListener + */ + void (*unregister_listener)(RamDiscardMgr *rdm, const MemoryRegion *mr, + RamDiscardListener *rdl); + + /** + * @replay_populated: + * + * Notify the #RamDiscardListener about all populated parts within the + * #MemoryRegion via the #RamDiscardMgr. + * + * In case any notification fails, no further notifications are trigge= red. + * The listener is not required to be registered. + * + * @rdm: the #RamDiscardMgr + * @mr: the #MemoryRegion + * @rdl: the #RamDiscardListener + * + * Returns 0 on success, or a negative error if any notification faile= d. + */ + int (*replay_populated)(const RamDiscardMgr *rdm, const MemoryRegion *= mr, + RamDiscardListener *rdl); +}; + typedef struct CoalescedMemoryRange CoalescedMemoryRange; typedef struct MemoryRegionIoeventfd MemoryRegionIoeventfd; =20 @@ -471,6 +667,7 @@ struct MemoryRegion { const char *name; unsigned ioeventfd_nb; MemoryRegionIoeventfd *ioeventfds; + RamDiscardMgr *rdm; /* Only for RAM */ }; =20 struct IOMMUMemoryRegion { @@ -1965,6 +2162,40 @@ bool memory_region_present(MemoryRegion *container, = hwaddr addr); */ bool memory_region_is_mapped(MemoryRegion *mr); =20 +/** + * memory_region_get_ram_discard_mgr: get the #RamDiscardMgr for a + * #MemoryRegion + * + * The #RamDiscardMgr cannot change while a memory region is mapped. + * + * @mr: the #MemoryRegion + */ +RamDiscardMgr *memory_region_get_ram_discard_mgr(MemoryRegion *mr); + +/** + * memory_region_has_ram_discard_mgr: check whether a #MemoryRegion has a + * #RamDiscardMgr assigned + * + * @mr: the #MemoryRegion + */ +static inline bool memory_region_has_ram_discard_mgr(MemoryRegion *mr) +{ + return !!memory_region_get_ram_discard_mgr(mr); +} + +/** + * memory_region_set_ram_discard_mgr: set the #RamDiscardMgr for a + * #MemoryRegion + * + * This function must not be called for a mapped #MemoryRegion, a #MemoryR= egion + * that does not cover RAM, or a #MemoryRegion that already has a + * #RamDiscardMgr assigned. + * + * @mr: the #MemoryRegion + * @urn: #RamDiscardMgr to set + */ +void memory_region_set_ram_discard_mgr(MemoryRegion *mr, RamDiscardMgr *rd= m); + /** * memory_region_find: translate an address/size relative to a * MemoryRegion into a #MemoryRegionSection. 
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 11ca94d037..2f1fefb806 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -2020,6 +2020,21 @@ int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr)
     return imrc->num_indexes(iommu_mr);
 }
 
+RamDiscardMgr *memory_region_get_ram_discard_mgr(MemoryRegion *mr)
+{
+    if (!memory_region_is_mapped(mr) || !memory_region_is_ram(mr)) {
+        return NULL;
+    }
+    return mr->rdm;
+}
+
+void memory_region_set_ram_discard_mgr(MemoryRegion *mr, RamDiscardMgr *rdm)
+{
+    g_assert(memory_region_is_ram(mr) && !memory_region_is_mapped(mr));
+    g_assert(!rdm || !mr->rdm);
+    mr->rdm = rdm;
+}
+
 void memory_region_set_log(MemoryRegion *mr, bool log, unsigned client)
 {
     uint8_t mask = 1 << client;
@@ -3301,10 +3316,17 @@ static const TypeInfo iommu_memory_region_info = {
     .abstract = true,
 };
 
+static const TypeInfo ram_discard_mgr_info = {
+    .parent = TYPE_INTERFACE,
+    .name = TYPE_RAM_DISCARD_MGR,
+    .class_size = sizeof(RamDiscardMgrClass),
+};
+
 static void memory_register_types(void)
 {
     type_register_static(&memory_region_info);
     type_register_static(&iommu_memory_region_info);
+    type_register_static(&ram_discard_mgr_info);
 }
 
 type_init(memory_register_types)
-- 
2.28.0
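To illustrate how a special consumer of RAM memory regions is expected to use
the interface introduced above, a rough sketch follows. The Example* names are
made up for illustration and are not part of this series; only the
RamDiscardMgr/RamDiscardListener API from this patch is assumed.

typedef struct ExampleConsumer {
    RamDiscardListener rdl;
    uint64_t granularity;
} ExampleConsumer;

static int example_notify_populate(RamDiscardListener *rdl,
                                   const MemoryRegion *mr,
                                   ram_addr_t offset, ram_addr_t size)
{
    /* e.g., create a mapping for [offset, offset + size) ... */
    return 0; /* returning an error rejects the population */
}

static void example_notify_discard(RamDiscardListener *rdl,
                                   const MemoryRegion *mr,
                                   ram_addr_t offset, ram_addr_t size)
{
    /* e.g., tear down the mapping for [offset, offset + size) ... */
}

static void example_register(ExampleConsumer *c, MemoryRegion *mr)
{
    RamDiscardMgr *rdm = memory_region_get_ram_discard_mgr(mr);
    RamDiscardMgrClass *rdmc;

    if (!rdm) {
        /* No coordination required; the whole region can be mapped. */
        return;
    }
    rdmc = RAM_DISCARD_MGR_GET_CLASS(rdm);
    c->granularity = rdmc->get_min_granularity(rdm, mr);

    /* notify_discard_all is optional and left NULL here. */
    ram_discard_listener_init(&c->rdl, example_notify_populate,
                              example_notify_discard, NULL);
    /* Registering replays all currently populated parts via notify_populate(). */
    rdmc->register_listener(rdm, mr, &c->rdl);
}
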
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 02/10] virtio-mem: Factor out traversing unplugged ranges
Date: Tue, 8 Dec 2020 17:39:42 +0100
Message-Id: <20201208163950.29617-3-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>

Let's factor out the core logic, to be reused soon.

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr.
David Alan Gilbert Cc: Igor Mammedov Cc: Pankaj Gupta Cc: Peter Xu Cc: Auger Eric Cc: Wei Yang Cc: teawater Cc: Marek Kedzierski Signed-off-by: David Hildenbrand Reviewed-by: Pankaj Gupta --- hw/virtio/virtio-mem.c | 86 ++++++++++++++++++++++++------------------ 1 file changed, 49 insertions(+), 37 deletions(-) diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index 655824ff81..471e464171 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -145,6 +145,33 @@ static bool virtio_mem_is_busy(void) return migration_in_incoming_postcopy() || !migration_is_idle(); } =20 +typedef int (*virtio_mem_range_cb)(const VirtIOMEM *vmem, void *arg, + uint64_t offset, uint64_t size); + +static int virtio_mem_for_each_unplugged_range(const VirtIOMEM *vmem, void= *arg, + virtio_mem_range_cb cb) +{ + unsigned long first_zero_bit, last_zero_bit; + uint64_t offset, size; + int ret =3D 0; + + first_zero_bit =3D find_first_zero_bit(vmem->bitmap, vmem->bitmap_size= ); + while (first_zero_bit < vmem->bitmap_size) { + offset =3D first_zero_bit * vmem->block_size; + last_zero_bit =3D find_next_bit(vmem->bitmap, vmem->bitmap_size, + first_zero_bit + 1) - 1; + size =3D (last_zero_bit - first_zero_bit + 1) * vmem->block_size; + + ret =3D cb(vmem, arg, offset, size); + if (ret) { + break; + } + first_zero_bit =3D find_next_zero_bit(vmem->bitmap, vmem->bitmap_s= ize, + last_zero_bit + 2); + } + return ret; +} + static bool virtio_mem_test_bitmap(VirtIOMEM *vmem, uint64_t start_gpa, uint64_t size, bool plugged) { @@ -594,33 +621,27 @@ static void virtio_mem_device_unrealize(DeviceState *= dev) ram_block_discard_require(false); } =20 -static int virtio_mem_restore_unplugged(VirtIOMEM *vmem) +static int virtio_mem_discard_range_cb(const VirtIOMEM *vmem, void *arg, + uint64_t offset, uint64_t size) { RAMBlock *rb =3D vmem->memdev->mr.ram_block; - unsigned long first_zero_bit, last_zero_bit; - uint64_t offset, length; int ret; =20 - /* Find consecutive unplugged blocks and discard the consecutive range= . */ - first_zero_bit =3D find_first_zero_bit(vmem->bitmap, vmem->bitmap_size= ); - while (first_zero_bit < vmem->bitmap_size) { - offset =3D first_zero_bit * vmem->block_size; - last_zero_bit =3D find_next_bit(vmem->bitmap, vmem->bitmap_size, - first_zero_bit + 1) - 1; - length =3D (last_zero_bit - first_zero_bit + 1) * vmem->block_size; - - ret =3D ram_block_discard_range(rb, offset, length); - if (ret) { - error_report("Unexpected error discarding RAM: %s", - strerror(-ret)); - return -EINVAL; - } - first_zero_bit =3D find_next_zero_bit(vmem->bitmap, vmem->bitmap_s= ize, - last_zero_bit + 2); + ret =3D ram_block_discard_range(rb, offset, size); + if (ret) { + error_report("Unexpected error discarding RAM: %s", strerror(-ret)= ); + return -EINVAL; } return 0; } =20 +static int virtio_mem_restore_unplugged(VirtIOMEM *vmem) +{ + /* Make sure all memory is really discarded after migration. 
 */
+    return virtio_mem_for_each_unplugged_range(vmem, NULL,
+                                               virtio_mem_discard_range_cb);
+}
+
 static int virtio_mem_post_load(void *opaque, int version_id)
 {
     if (migration_in_incoming_postcopy()) {
@@ -872,28 +893,19 @@ static void virtio_mem_set_block_size(Object *obj, Visitor *v, const char *name,
     vmem->block_size = value;
 }
 
-static void virtio_mem_precopy_exclude_unplugged(VirtIOMEM *vmem)
+static int virtio_mem_precopy_exclude_range_cb(const VirtIOMEM *vmem, void *arg,
+                                               uint64_t offset, uint64_t size)
 {
     void * const host = qemu_ram_get_host_addr(vmem->memdev->mr.ram_block);
-    unsigned long first_zero_bit, last_zero_bit;
-    uint64_t offset, length;
 
-    /*
-     * Find consecutive unplugged blocks and exclude them from migration.
-     *
-     * Note: Blocks cannot get (un)plugged during precopy, no locking needed.
-     */
-    first_zero_bit = find_first_zero_bit(vmem->bitmap, vmem->bitmap_size);
-    while (first_zero_bit < vmem->bitmap_size) {
-        offset = first_zero_bit * vmem->block_size;
-        last_zero_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
-                                      first_zero_bit + 1) - 1;
-        length = (last_zero_bit - first_zero_bit + 1) * vmem->block_size;
+    qemu_guest_free_page_hint(host + offset, size);
+    return 0;
+}
 
-        qemu_guest_free_page_hint(host + offset, length);
-        first_zero_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
-                                            last_zero_bit + 2);
-    }
+static void virtio_mem_precopy_exclude_unplugged(VirtIOMEM *vmem)
+{
+    virtio_mem_for_each_unplugged_range(vmem, NULL,
+                                        virtio_mem_precopy_exclude_range_cb);
 }
 
 static int virtio_mem_precopy_notify(NotifierWithReturn *n, void *data)
-- 
2.28.0
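With the traversal factored out, a new user only has to supply a
virtio_mem_range_cb. As a rough illustration (the example_* names are made up
and not part of this series), a callback that merely sums up the unplugged
size could look like:

static int example_count_unplugged_cb(const VirtIOMEM *vmem, void *arg,
                                      uint64_t offset, uint64_t size)
{
    uint64_t *total = arg;

    *total += size;
    return 0; /* a non-zero return value stops the iteration early */
}

static uint64_t example_total_unplugged(const VirtIOMEM *vmem)
{
    uint64_t total = 0;

    virtio_mem_for_each_unplugged_range(vmem, &total,
                                        example_count_unplugged_cb);
    return total;
}
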
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 03/10] virtio-mem: Implement RamDiscardMgr interface
Date: Tue, 8 Dec 2020 17:39:43 +0100
Message-Id: <20201208163950.29617-4-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>
Cc: Pankaj Gupta, Wei Yang, David Hildenbrand, "Michael S. Tsirkin", "Dr.
David Alan Gilbert" , Peter Xu , Marek Kedzierski , Auger Eric , Alex Williamson , teawater , Jonathan Cameron , Paolo Bonzini , Igor Mammedov Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" Let's properly notify when (un)plugging blocks, after discarding memory and before allowing the guest to consume memory. Handle errors from notifiers gracefully (e.g., no remaining VFIO mappings) when plugging, rolling back the change and telling the guest that the VM is busy. One special case to take care of is replaying all notifications after restoring the vmstate. The device starts out with all memory discarded, so after loading the vmstate, we have to notify about all plugged blocks. Cc: Paolo Bonzini Cc: "Michael S. Tsirkin" Cc: Alex Williamson Cc: Dr. David Alan Gilbert Cc: Igor Mammedov Cc: Pankaj Gupta Cc: Peter Xu Cc: Auger Eric Cc: Wei Yang Cc: teawater Cc: Marek Kedzierski Signed-off-by: David Hildenbrand --- hw/virtio/virtio-mem.c | 258 ++++++++++++++++++++++++++++++++- include/hw/virtio/virtio-mem.h | 3 + 2 files changed, 258 insertions(+), 3 deletions(-) diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index 471e464171..6200813bb8 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -172,7 +172,110 @@ static int virtio_mem_for_each_unplugged_range(const = VirtIOMEM *vmem, void *arg, return ret; } =20 -static bool virtio_mem_test_bitmap(VirtIOMEM *vmem, uint64_t start_gpa, +static int virtio_mem_for_each_plugged_range(const VirtIOMEM *vmem, void *= arg, + virtio_mem_range_cb cb) +{ + unsigned long first_bit, last_bit; + uint64_t offset, size; + int ret =3D 0; + + first_bit =3D find_first_bit(vmem->bitmap, vmem->bitmap_size); + while (first_bit < vmem->bitmap_size) { + offset =3D first_bit * vmem->block_size; + last_bit =3D find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, + first_bit + 1) - 1; + size =3D (last_bit - first_bit + 1) * vmem->block_size; + + ret =3D cb(vmem, arg, offset, size); + if (ret) { + break; + } + first_bit =3D find_next_bit(vmem->bitmap, vmem->bitmap_size, + last_bit + 2); + } + return ret; +} + +static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset, + uint64_t size) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &vmem->rdl_list, next) { + rdl->notify_discard(rdl, &vmem->memdev->mr, offset, size); + } +} + +static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset, + uint64_t size) +{ + RamDiscardListener *rdl, *rdl2; + int ret =3D 0, ret2; + + QLIST_FOREACH(rdl, &vmem->rdl_list, next) { + ret =3D rdl->notify_populate(rdl, &vmem->memdev->mr, offset, size); + if (ret) { + break; + } + } + + if (ret) { + /* Could be a mapping attempt resulted in memory getting populated= . */ + ret2 =3D ram_block_discard_range(vmem->memdev->mr.ram_block, offse= t, + size); + if (ret2) { + error_report("Unexpected error discarding RAM: %s", + strerror(-ret2)); + } + + /* Notify all already-notified listeners. 
*/ + QLIST_FOREACH(rdl2, &vmem->rdl_list, next) { + if (rdl2 =3D=3D rdl) { + break; + } + rdl2->notify_discard(rdl2, &vmem->memdev->mr, offset, size); + } + } + return ret; +} + +static int virtio_mem_notify_discard_range_cb(const VirtIOMEM *vmem, void = *arg, + uint64_t offset, uint64_t si= ze) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &vmem->rdl_list, next) { + if (!rdl->notify_discard_all) { + rdl->notify_discard(rdl, &vmem->memdev->mr, offset, size); + } + } + return 0; +} + +static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem) +{ + bool individual_calls =3D false; + RamDiscardListener *rdl; + + if (!vmem->size) { + return; + } + + QLIST_FOREACH(rdl, &vmem->rdl_list, next) { + if (rdl->notify_discard_all) { + rdl->notify_discard_all(rdl, &vmem->memdev->mr); + } else { + individual_calls =3D true; + } + } + + if (individual_calls) { + virtio_mem_for_each_unplugged_range(vmem, NULL, + virtio_mem_notify_discard_rang= e_cb); + } +} + +static bool virtio_mem_test_bitmap(const VirtIOMEM *vmem, uint64_t start_g= pa, uint64_t size, bool plugged) { const unsigned long first_bit =3D (start_gpa - vmem->addr) / vmem->blo= ck_size; @@ -225,7 +328,8 @@ static void virtio_mem_send_response_simple(VirtIOMEM *= vmem, virtio_mem_send_response(vmem, elem, &resp); } =20 -static bool virtio_mem_valid_range(VirtIOMEM *vmem, uint64_t gpa, uint64_t= size) +static bool virtio_mem_valid_range(const VirtIOMEM *vmem, uint64_t gpa, + uint64_t size) { if (!QEMU_IS_ALIGNED(gpa, vmem->block_size)) { return false; @@ -259,6 +363,9 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, = uint64_t start_gpa, strerror(-ret)); return -EBUSY; } + virtio_mem_notify_unplug(vmem, offset, size); + } else if (virtio_mem_notify_plug(vmem, offset, size)) { + return -EBUSY; } virtio_mem_set_bitmap(vmem, start_gpa, size, plug); return 0; @@ -356,6 +463,8 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem) error_report("Unexpected error discarding RAM: %s", strerror(-ret)= ); return -EBUSY; } + virtio_mem_notify_unplug_all(vmem); + bitmap_clear(vmem->bitmap, 0, vmem->bitmap_size); if (vmem->size) { vmem->size =3D 0; @@ -604,6 +713,12 @@ static void virtio_mem_device_realize(DeviceState *dev= , Error **errp) vmstate_register_ram(&vmem->memdev->mr, DEVICE(vmem)); qemu_register_reset(virtio_mem_system_reset, vmem); precopy_add_notifier(&vmem->precopy_notifier); + + /* + * Set ourselves as RamDiscardMgr before the plug handler maps the mem= ory + * region and exposes it via an address space. + */ + memory_region_set_ram_discard_mgr(&vmem->memdev->mr, RAM_DISCARD_MGR(v= mem)); } =20 static void virtio_mem_device_unrealize(DeviceState *dev) @@ -611,6 +726,11 @@ static void virtio_mem_device_unrealize(DeviceState *d= ev) VirtIODevice *vdev =3D VIRTIO_DEVICE(dev); VirtIOMEM *vmem =3D VIRTIO_MEM(dev); =20 + /* + * The unplug handler unmapped the memory region, it cannot be + * found via an address space anymore. Unset ourselves. 
+ */ + memory_region_set_ram_discard_mgr(&vmem->memdev->mr, NULL); precopy_remove_notifier(&vmem->precopy_notifier); qemu_unregister_reset(virtio_mem_system_reset, vmem); vmstate_unregister_ram(&vmem->memdev->mr, DEVICE(vmem)); @@ -642,13 +762,41 @@ static int virtio_mem_restore_unplugged(VirtIOMEM *vm= em) virtio_mem_discard_range_cb= ); } =20 +static int virtio_mem_post_load_replay_cb(const VirtIOMEM *vmem, void *arg, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl; + int ret =3D 0; + + QLIST_FOREACH(rdl, &vmem->rdl_list, next) { + ret =3D rdl->notify_populate(rdl, &vmem->memdev->mr, offset, size); + if (ret) { + break; + } + } + return ret; +} + static int virtio_mem_post_load(void *opaque, int version_id) { + VirtIOMEM *vmem =3D VIRTIO_MEM(opaque); + int ret; + + /* + * We started out with all memory discarded and our memory region is m= apped + * into an address space. Replay, now that we updated the bitmap. + */ + ret =3D virtio_mem_for_each_plugged_range(vmem, NULL, + virtio_mem_post_load_replay_cb= ); + if (ret) { + return ret; + } + if (migration_in_incoming_postcopy()) { return 0; } =20 - return virtio_mem_restore_unplugged(VIRTIO_MEM(opaque)); + return virtio_mem_restore_unplugged(vmem); } =20 typedef struct VirtIOMEMMigSanityChecks { @@ -933,6 +1081,7 @@ static void virtio_mem_instance_init(Object *obj) =20 notifier_list_init(&vmem->size_change_notifiers); vmem->precopy_notifier.notify =3D virtio_mem_precopy_notify; + QLIST_INIT(&vmem->rdl_list); =20 object_property_add(obj, VIRTIO_MEM_SIZE_PROP, "size", virtio_mem_get_= size, NULL, NULL, NULL); @@ -952,11 +1101,104 @@ static Property virtio_mem_properties[] =3D { DEFINE_PROP_END_OF_LIST(), }; =20 +static uint64_t virtio_mem_rdm_get_min_granularity(const RamDiscardMgr *rd= m, + const MemoryRegion *mr) +{ + const VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + + g_assert(mr =3D=3D &vmem->memdev->mr); + return vmem->block_size; +} + +static bool virtio_mem_rdm_is_populated(const RamDiscardMgr *rdm, + const MemoryRegion *mr, + ram_addr_t offset, ram_addr_t size) +{ + const VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + uint64_t start_gpa =3D QEMU_ALIGN_DOWN(vmem->addr + offset, vmem->bloc= k_size); + uint64_t end_gpa =3D QEMU_ALIGN_UP(vmem->addr + offset + size, + vmem->block_size); + + g_assert(mr =3D=3D &vmem->memdev->mr); + if (!virtio_mem_valid_range(vmem, start_gpa, end_gpa - start_gpa)) { + return false; + } + + return virtio_mem_test_bitmap(vmem, start_gpa, end_gpa - start_gpa, tr= ue); +} + +static int virtio_mem_notify_populate_range_single_cb(const VirtIOMEM *vme= m, + void *arg, + uint64_t offset, + uint64_t size) +{ + RamDiscardListener *rdl =3D arg; + + return rdl->notify_populate(rdl, &vmem->memdev->mr, offset, size); +} + +static int virtio_mem_notify_discard_range_single_cb(const VirtIOMEM *vmem, + void *arg, + uint64_t offset, + uint64_t size) +{ + RamDiscardListener *rdl =3D arg; + + rdl->notify_discard(rdl, &vmem->memdev->mr, offset, size); + return 0; +} + +static void virtio_mem_rdm_register_listener(RamDiscardMgr *rdm, + const MemoryRegion *mr, + RamDiscardListener *rdl) +{ + VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + int ret; + + g_assert(mr =3D=3D &vmem->memdev->mr); + QLIST_INSERT_HEAD(&vmem->rdl_list, rdl, next); + ret =3D virtio_mem_for_each_plugged_range(vmem, rdl, + virtio_mem_notify_populate_range_singl= e_cb); + if (ret) { + error_report("%s: Replaying plugged ranges failed: %s", __func__, + strerror(-ret)); + } +} + +static void virtio_mem_rdm_unregister_listener(RamDiscardMgr *rdm, + const MemoryRegion 
*mr, + RamDiscardListener *rdl) +{ + VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + + g_assert(mr =3D=3D &vmem->memdev->mr); + if (rdl->notify_discard_all) { + rdl->notify_discard_all(rdl, &vmem->memdev->mr); + } else { + virtio_mem_for_each_plugged_range(vmem, rdl, + virtio_mem_notify_discard_range_singl= e_cb); + + } + QLIST_REMOVE(rdl, next); +} + +static int virtio_mem_rdm_replay_populated(const RamDiscardMgr *rdm, + const MemoryRegion *mr, + RamDiscardListener *rdl) +{ + const VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + + g_assert(mr =3D=3D &vmem->memdev->mr); + return virtio_mem_for_each_plugged_range(vmem, rdl, + virtio_mem_notify_populate_range_singl= e_cb); +} + static void virtio_mem_class_init(ObjectClass *klass, void *data) { DeviceClass *dc =3D DEVICE_CLASS(klass); VirtioDeviceClass *vdc =3D VIRTIO_DEVICE_CLASS(klass); VirtIOMEMClass *vmc =3D VIRTIO_MEM_CLASS(klass); + RamDiscardMgrClass *rdmc =3D RAM_DISCARD_MGR_CLASS(klass); =20 device_class_set_props(dc, virtio_mem_properties); dc->vmsd =3D &vmstate_virtio_mem; @@ -972,6 +1214,12 @@ static void virtio_mem_class_init(ObjectClass *klass,= void *data) vmc->get_memory_region =3D virtio_mem_get_memory_region; vmc->add_size_change_notifier =3D virtio_mem_add_size_change_notifier; vmc->remove_size_change_notifier =3D virtio_mem_remove_size_change_not= ifier; + + rdmc->get_min_granularity =3D virtio_mem_rdm_get_min_granularity; + rdmc->is_populated =3D virtio_mem_rdm_is_populated; + rdmc->register_listener =3D virtio_mem_rdm_register_listener; + rdmc->unregister_listener =3D virtio_mem_rdm_unregister_listener; + rdmc->replay_populated =3D virtio_mem_rdm_replay_populated; } =20 static const TypeInfo virtio_mem_info =3D { @@ -981,6 +1229,10 @@ static const TypeInfo virtio_mem_info =3D { .instance_init =3D virtio_mem_instance_init, .class_init =3D virtio_mem_class_init, .class_size =3D sizeof(VirtIOMEMClass), + .interfaces =3D (InterfaceInfo[]) { + { TYPE_RAM_DISCARD_MGR }, + { } + }, }; =20 static void virtio_register_types(void) diff --git a/include/hw/virtio/virtio-mem.h b/include/hw/virtio/virtio-mem.h index 4eeb82d5dd..9a6e348fa2 100644 --- a/include/hw/virtio/virtio-mem.h +++ b/include/hw/virtio/virtio-mem.h @@ -67,6 +67,9 @@ struct VirtIOMEM { =20 /* don't migrate unplugged memory */ NotifierWithReturn precopy_notifier; + + /* listeners to notify on plug/unplug activity. 
 */
+    QLIST_HEAD(, RamDiscardListener) rdl_list;
 };
 
 struct VirtIOMEMClass {
-- 
2.28.0
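The is_populated() implementation above widens queries to the device block
size. A small worked example of the alignment performed by
virtio_mem_rdm_is_populated(), with made-up values (MiB as in qemu/units.h):

/* Made-up values, for illustration only. */
const uint64_t block_size = 2 * MiB;        /* vmem->block_size */
const uint64_t region_addr = 0x140000000;   /* vmem->addr */
const ram_addr_t offset = 0x100000;         /* 1 MiB into the region */
const ram_addr_t size = 0x100000;           /* 1 MiB query */

/* As in virtio_mem_rdm_is_populated(): */
uint64_t start_gpa = QEMU_ALIGN_DOWN(region_addr + offset, block_size);
                                            /* -> 0x140000000 */
uint64_t end_gpa = QEMU_ALIGN_UP(region_addr + offset + size, block_size);
                                            /* -> 0x140200000 */
/*
 * The 1 MiB query is widened to the single 2 MiB block containing it; the
 * range only counts as populated if that whole block is plugged.
 */
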
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 04/10] vfio: Query and store the maximum number of DMA mappings
Date: Tue, 8 Dec 2020 17:39:44 +0100
Message-Id: <20201208163950.29617-5-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>

Let's query the maximum number of DMA mappings by querying the available
mappings when creating the container. In addition, count the number of DMA
mappings and warn when we would exceed the limit. This is a preparation for
RamDiscardMgr, which might create quite a few DMA mappings over time, and we
at least want to warn early that the QEMU setup might be problematic.

Use "reserved" terminology, so we can use this to reserve mappings before
they are actually created.

Note: don't reserve vIOMMU DMA mappings - using the vIOMMU region size
divided by the mapping page size might be a bad indication of what will
happen in practice - we might end up warning all the time.

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr. David Alan Gilbert
Cc: Igor Mammedov
Cc: Pankaj Gupta
Cc: Peter Xu
Cc: Auger Eric
Cc: Wei Yang
Cc: teawater
Cc: Marek Kedzierski
Signed-off-by: David Hildenbrand
---
 hw/vfio/common.c              | 34 ++++++++++++++++++++++++++++++++++
 include/hw/vfio/vfio-common.h |  2 ++
 2 files changed, 36 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 6ff1daa763..5ad88d476f 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -288,6 +288,26 @@ const MemoryRegionOps vfio_region_ops = {
     },
 };
 
+static void vfio_container_dma_reserve(VFIOContainer *container,
+                                       unsigned long dma_mappings)
+{
+    bool warned = container->dma_reserved > container->dma_max;
+
+    container->dma_reserved += dma_mappings;
+    if (!warned && container->dma_max &&
+        container->dma_reserved > container->dma_max) {
+        warn_report("%s: possibly running out of DMA mappings. 
" + " Maximum number of DMA mappings: %d", __func__, + container->dma_max); + } +} + +static void vfio_container_dma_unreserve(VFIOContainer *container, + unsigned long dma_mappings) +{ + container->dma_reserved -=3D dma_mappings; +} + /* * Device state interfaces */ @@ -835,6 +855,9 @@ static void vfio_listener_region_add(MemoryListener *li= stener, } } =20 + /* We'll need one DMA mapping. */ + vfio_container_dma_reserve(container, 1); + ret =3D vfio_dma_map(container, iova, int128_get64(llsize), vaddr, section->readonly); if (ret) { @@ -879,6 +902,7 @@ static void vfio_listener_region_del(MemoryListener *li= stener, MemoryRegionSection *section) { VFIOContainer *container =3D container_of(listener, VFIOContainer, lis= tener); + bool unreserve_on_unmap =3D true; hwaddr iova, end; Int128 llend, llsize; int ret; @@ -919,6 +943,7 @@ static void vfio_listener_region_del(MemoryListener *li= stener, * based IOMMU where a big unmap flattens a large range of IO-PTEs. * That may not be true for all IOMMU types. */ + unreserve_on_unmap =3D false; } =20 iova =3D TARGET_PAGE_ALIGN(section->offset_within_address_space); @@ -970,6 +995,11 @@ static void vfio_listener_region_del(MemoryListener *l= istener, "0x%"HWADDR_PRIx") =3D %d (%m)", container, iova, int128_get64(llsize), ret); } + + /* We previously reserved one DMA mapping. */ + if (unreserve_on_unmap) { + vfio_container_dma_unreserve(container, 1); + } } =20 memory_region_unref(section->mr); @@ -1735,6 +1765,7 @@ static int vfio_connect_container(VFIOGroup *group, A= ddressSpace *as, container->fd =3D fd; container->error =3D NULL; container->dirty_pages_supported =3D false; + container->dma_max =3D 0; QLIST_INIT(&container->giommu_list); QLIST_INIT(&container->hostwin_list); =20 @@ -1765,7 +1796,10 @@ static int vfio_connect_container(VFIOGroup *group, = AddressSpace *as, vfio_host_win_add(container, 0, (hwaddr)-1, info->iova_pgsizes); container->pgsizes =3D info->iova_pgsizes; =20 + /* The default in the kernel ("dma_entry_limit") is 65535. 
 */
+    container->dma_max = 65535;
     if (!ret) {
+        vfio_get_info_dma_avail(info, &container->dma_max);
         vfio_get_iommu_info_migration(container, info);
     }
     g_free(info);
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 6141162d7a..fed0e85f66 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -88,6 +88,8 @@ typedef struct VFIOContainer {
     uint64_t dirty_pgsizes;
     uint64_t max_dirty_bitmap_size;
     unsigned long pgsizes;
+    unsigned int dma_max;
+    unsigned long dma_reserved;
    QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
    QLIST_HEAD(, VFIOHostDMAWindow) hostwin_list;
    QLIST_HEAD(, VFIOGroup) group_list;
-- 
2.28.0
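To give a feel for when the new warning can trigger once RAM is mapped at
virtio-mem block granularity (as done in the next patch), a back-of-the-
envelope sketch with made-up sizes (GiB/MiB as in qemu/units.h); the 65535
default corresponds to the kernel's dma_entry_limit mentioned above:

/* Made-up configuration, for illustration only. */
const uint64_t region_size = 128 * GiB;  /* hypothetical virtio-mem memdev size */
const uint64_t block_size = 2 * MiB;     /* hypothetical virtio-mem block-size */

/* Worst case: every block is plugged and mapped individually. */
const uint64_t mappings = region_size / block_size;   /* 65536 */

/*
 * 65536 already exceeds the default dma_entry_limit of 65535, so
 * vfio_container_dma_reserve() would warn for such a setup; a larger
 * block-size reduces the worst-case number of mappings accordingly.
 */
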
From nobody Tue May 21 15:25:47 2024
From: David Hildenbrand
To: qemu-devel@nongnu.org
Subject: [PATCH v2 05/10] vfio: Support for RamDiscardMgr in the !vIOMMU case
Date: Tue, 8 Dec 2020 17:39:45 +0100
Message-Id: <20201208163950.29617-6-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>

Implement support for RamDiscardMgr, to prepare for virtio-mem support.
Instead of mapping the whole memory section, we only map "populated" parts
and update the mapping when notified about discarding/population of memory
via the RamDiscardListener. Similarly, when syncing the dirty bitmaps, sync
only the actually mapped (populated) parts by replaying via the notifier.

Small mapping granularity is problematic for vfio, because we might run out
of mappings. Indicate virtio-mem as one of the problematic parts when
warning in vfio_container_dma_reserve(), to at least make users aware that
there is such a limitation.

Using virtio-mem with vfio is still blocked via
ram_block_discard_disable()/ram_block_discard_require() after this patch.

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr. David Alan Gilbert
Cc: Igor Mammedov
Cc: Pankaj Gupta
Cc: Peter Xu
Cc: Auger Eric
Cc: Wei Yang
Cc: teawater
Cc: Marek Kedzierski
Signed-off-by: David Hildenbrand
---
 hw/vfio/common.c              | 213 +++++++++++++++++++++++++++++++++-
 include/hw/vfio/vfio-common.h |  13 +++
 2 files changed, 225 insertions(+), 1 deletion(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 5ad88d476f..b1582be1e8 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -296,7 +296,8 @@ static void vfio_container_dma_reserve(VFIOContainer *container,
     container->dma_reserved += dma_mappings;
     if (!warned && container->dma_max &&
         container->dma_reserved > container->dma_max) {
-        warn_report("%s: possibly running out of DMA mappings. "
+        warn_report("%s: possibly running out of DMA mappings. E.g., try"
+                    " increasing the 'block-size' of virtio-mem devices."
                     " Maximum number of DMA mappings: %d", __func__,
                     container->dma_max);
     }
@@ -674,6 +675,146 @@ out:
     rcu_read_unlock();
 }
 
+static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
+                                            const MemoryRegion *mr,
+                                            ram_addr_t offset, ram_addr_t size)
+{
+    VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
+                                                listener);
+    const hwaddr mr_start = MAX(offset, vrdl->offset_within_region);
+    const hwaddr mr_end = MIN(offset + size,
+                              vrdl->offset_within_region + vrdl->size);
+    const hwaddr iova = mr_start - vrdl->offset_within_region +
+                        vrdl->offset_within_address_space;
+    int ret;
+
+    if (mr_start >= mr_end) {
+        return;
+    }
+
+    /* Unmap with a single call. */
+    ret = vfio_dma_unmap(vrdl->container, iova, mr_end - mr_start, NULL);
+    if (ret) {
+        error_report("%s: vfio_dma_unmap() failed: %s", __func__,
+                     strerror(-ret));
+    }
+}
+
+static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
+                                            const MemoryRegion *mr,
+                                            ram_addr_t offset, ram_addr_t size)
+{
+    VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
+                                                listener);
+    const hwaddr mr_end = MIN(offset + size,
+                              vrdl->offset_within_region + vrdl->size);
+    hwaddr mr_start = MAX(offset, vrdl->offset_within_region);
+    hwaddr mr_next, iova;
+    void *vaddr;
+    int ret;
+
+    /*
+     * Map in (aligned within memory region) minimum granularity, so we can
+     * unmap in minimum granularity later.
+     */
+    for (; mr_start < mr_end; mr_start = mr_next) {
+        mr_next = QEMU_ALIGN_UP(mr_start + 1, vrdl->granularity);
+        mr_next = MIN(mr_next, mr_end);
+
+        iova = mr_start - vrdl->offset_within_region +
+               vrdl->offset_within_address_space;
+        vaddr = memory_region_get_ram_ptr(vrdl->mr) + mr_start;
+
+        ret = vfio_dma_map(vrdl->container, iova, mr_next - mr_start,
+                           vaddr, mr->readonly);
+        if (ret) {
+            /* Rollback */
+            vfio_ram_discard_notify_discard(rdl, mr, offset, size);
+            return ret;
+        }
+    }
+    return 0;
+}
+
+static void vfio_ram_discard_notify_discard_all(RamDiscardListener *rdl,
+                                                const MemoryRegion *mr)
+{
+    VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
+                                                listener);
+    int ret;
+
+    /* Unmap with a single call.
*/ + ret =3D vfio_dma_unmap(vrdl->container, vrdl->offset_within_address_sp= ace, + vrdl->size, NULL); + if (ret) { + error_report("%s: vfio_dma_unmap() failed: %s", __func__, + strerror(-ret)); + } +} + +static void vfio_register_ram_discard_notifier(VFIOContainer *container, + MemoryRegionSection *sectio= n) +{ + RamDiscardMgr *rdm =3D memory_region_get_ram_discard_mgr(section->mr); + RamDiscardMgrClass *rdmc =3D RAM_DISCARD_MGR_GET_CLASS(rdm); + VFIORamDiscardListener *vrdl; + + vrdl =3D g_new0(VFIORamDiscardListener, 1); + vrdl->container =3D container; + vrdl->mr =3D section->mr; + vrdl->offset_within_region =3D section->offset_within_region; + vrdl->offset_within_address_space =3D section->offset_within_address_s= pace; + vrdl->size =3D int128_get64(section->size); + vrdl->granularity =3D rdmc->get_min_granularity(rdm, section->mr); + vrdl->dma_max =3D vrdl->size / vrdl->granularity; + if (!QEMU_IS_ALIGNED(vrdl->size, vrdl->granularity) || + !QEMU_IS_ALIGNED(vrdl->offset_within_region, vrdl->granularity)) { + vrdl->dma_max++; + } + + /* Ignore some corner cases not relevant in practice. */ + g_assert(QEMU_IS_ALIGNED(vrdl->offset_within_region, TARGET_PAGE_SIZE)= ); + g_assert(QEMU_IS_ALIGNED(vrdl->offset_within_address_space, + TARGET_PAGE_SIZE)); + g_assert(QEMU_IS_ALIGNED(vrdl->size, TARGET_PAGE_SIZE)); + + /* We could consume quite some mappings later. */ + vfio_container_dma_reserve(container, vrdl->dma_max); + + ram_discard_listener_init(&vrdl->listener, + vfio_ram_discard_notify_populate, + vfio_ram_discard_notify_discard, + vfio_ram_discard_notify_discard_all); + rdmc->register_listener(rdm, section->mr, &vrdl->listener); + QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next); +} + +static void vfio_unregister_ram_discard_listener(VFIOContainer *container, + MemoryRegionSection *sect= ion) +{ + RamDiscardMgr *rdm =3D memory_region_get_ram_discard_mgr(section->mr); + RamDiscardMgrClass *rdmc =3D RAM_DISCARD_MGR_GET_CLASS(rdm); + VFIORamDiscardListener *vrdl =3D NULL; + + QLIST_FOREACH(vrdl, &container->vrdl_list, next) { + if (vrdl->mr =3D=3D section->mr && + vrdl->offset_within_region =3D=3D section->offset_within_regio= n) { + break; + } + } + + if (!vrdl) { + hw_error("vfio: Trying to unregister missing RAM discard listener"= ); + } + + rdmc->unregister_listener(rdm, section->mr, &vrdl->listener); + QLIST_REMOVE(vrdl, next); + + vfio_container_dma_unreserve(container, vrdl->dma_max); + + g_free(vrdl); +} + static void vfio_listener_region_add(MemoryListener *listener, MemoryRegionSection *section) { @@ -834,6 +975,16 @@ static void vfio_listener_region_add(MemoryListener *l= istener, =20 /* Here we assume that memory_region_is_ram(section->mr)=3D=3Dtrue */ =20 + /* + * For RAM memory regions with a RamDiscardMgr, we only want to + * register the actually "used" parts - and update the mapping whenever + * we're notified about changes. + */ + if (memory_region_has_ram_discard_mgr(section->mr)) { + vfio_register_ram_discard_notifier(container, section); + return; + } + vaddr =3D memory_region_get_ram_ptr(section->mr) + section->offset_within_region + (iova - section->offset_within_address_space); @@ -975,6 +1126,10 @@ static void vfio_listener_region_del(MemoryListener *= listener, =20 pgmask =3D (1ULL << ctz64(hostwin->iova_pgsizes)) - 1; try_unmap =3D !((iova & pgmask) || (int128_get64(llsize) & pgmask)= ); + } else if (memory_region_has_ram_discard_mgr(section->mr)) { + vfio_unregister_ram_discard_listener(container, section); + /* Unregistering will trigger an unmap. 
*/ + try_unmap =3D false; } =20 if (try_unmap) { @@ -1107,6 +1262,59 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifie= r *n, IOMMUTLBEntry *iotlb) rcu_read_unlock(); } =20 +static int vfio_ram_discard_notify_dirty_bitmap(RamDiscardListener *rdl, + const MemoryRegion *mr, + ram_addr_t offset, + ram_addr_t size) +{ + VFIORamDiscardListener *vrdl =3D container_of(rdl, VFIORamDiscardListe= ner, + listener); + const hwaddr mr_start =3D MAX(offset, vrdl->offset_within_region); + const hwaddr mr_end =3D MIN(offset + size, + vrdl->offset_within_region + vrdl->size); + const hwaddr iova =3D mr_start - vrdl->offset_within_region + + vrdl->offset_within_address_space; + ram_addr_t ram_addr; + int ret; + + if (mr_start >=3D mr_end) { + return 0; + } + + /* + * Sync the whole mapped region (spanning multiple individual mappings) + * in one go. + */ + ram_addr =3D memory_region_get_ram_addr(vrdl->mr) + mr_start; + ret =3D vfio_get_dirty_bitmap(vrdl->container, iova, mr_end - mr_start, + ram_addr); + return ret; +} + +static int vfio_sync_ram_discard_listener_dirty_bitmap(VFIOContainer *cont= ainer, + MemoryRegionSection *se= ction) +{ + RamDiscardMgr *rdm =3D memory_region_get_ram_discard_mgr(section->mr); + RamDiscardMgrClass *rdmc =3D RAM_DISCARD_MGR_GET_CLASS(rdm); + VFIORamDiscardListener tmp_vrdl, *vrdl =3D NULL; + + QLIST_FOREACH(vrdl, &container->vrdl_list, next) { + if (vrdl->mr =3D=3D section->mr && + vrdl->offset_within_region =3D=3D section->offset_within_regio= n) { + break; + } + } + + if (!vrdl) { + hw_error("vfio: Trying to sync missing RAM discard listener"); + } + + tmp_vrdl =3D *vrdl; + ram_discard_listener_init(&tmp_vrdl.listener, + vfio_ram_discard_notify_dirty_bitmap, NULL, = NULL); + return rdmc->replay_populated(rdm, section->mr, &tmp_vrdl.listener); +} + static int vfio_sync_dirty_bitmap(VFIOContainer *container, MemoryRegionSection *section) { @@ -1138,6 +1346,8 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *cont= ainer, } } return 0; + } else if (memory_region_has_ram_discard_mgr(section->mr)) { + return vfio_sync_ram_discard_listener_dirty_bitmap(container, sect= ion); } =20 ram_addr =3D memory_region_get_ram_addr(section->mr) + @@ -1768,6 +1978,7 @@ static int vfio_connect_container(VFIOGroup *group, A= ddressSpace *as, container->dma_max =3D 0; QLIST_INIT(&container->giommu_list); QLIST_INIT(&container->hostwin_list); + QLIST_INIT(&container->vrdl_list); =20 ret =3D vfio_init_container(container, group->fd, errp); if (ret) { diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h index fed0e85f66..fba5a14c8b 100644 --- a/include/hw/vfio/vfio-common.h +++ b/include/hw/vfio/vfio-common.h @@ -93,6 +93,7 @@ typedef struct VFIOContainer { QLIST_HEAD(, VFIOGuestIOMMU) giommu_list; QLIST_HEAD(, VFIOHostDMAWindow) hostwin_list; QLIST_HEAD(, VFIOGroup) group_list; + QLIST_HEAD(, VFIORamDiscardListener) vrdl_list; QLIST_ENTRY(VFIOContainer) next; } VFIOContainer; =20 @@ -104,6 +105,18 @@ typedef struct VFIOGuestIOMMU { QLIST_ENTRY(VFIOGuestIOMMU) giommu_next; } VFIOGuestIOMMU; =20 +typedef struct VFIORamDiscardListener { + VFIOContainer *container; + MemoryRegion *mr; + hwaddr offset_within_region; + hwaddr offset_within_address_space; + hwaddr size; + uint64_t granularity; + unsigned long dma_max; + RamDiscardListener listener; + QLIST_ENTRY(VFIORamDiscardListener) next; +} VFIORamDiscardListener; + typedef struct VFIOHostDMAWindow { hwaddr min_iova; hwaddr max_iova; --=20 2.28.0 From nobody Tue May 21 15:25:47 2024 Delivered-To: importer@patchew.org 
From: David Hildenbrand
To: qemu-devel@nongnu.org
Subject: [PATCH v2 06/10] vfio: Support for RamDiscardMgr in the vIOMMU case
Date: Tue, 8 Dec 2020 17:39:46 +0100
Message-Id: <20201208163950.29617-7-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>

vIOMMU support works already with RamDiscardMgr as long as guests only map
populated memory. Both populated and discarded memory are mapped into
&address_space_memory, where vfio_get_xlat_addr() will find that memory and
create the vfio mapping.

Sane guests will never map discarded memory (e.g., unplugged memory blocks
in virtio-mem) into an IOMMU - or keep it mapped into an IOMMU while memory
is getting discarded. However, there are two cases where a malicious guest
could trigger pinning of more memory than intended.

One case is easy to handle: the guest trying to map discarded memory into
an IOMMU.

The other case is harder to handle: the guest keeping memory mapped in the
IOMMU while it is getting discarded. We would have to walk over all
mappings when discarding memory and identify if any mapping would be a
violation. Let's keep it simple for now and print a warning, indicating
that setting RLIMIT_MEMLOCK can mitigate such attacks.

We have to take care of incoming migration: at the point the IOMMUs get
restored and start creating mappings in vfio, RamDiscardMgr implementations
might not be back up and running yet. Let's rely on the runstate. An
alternative would be using vmstate priorities - but current handling is
cleaner and more obvious.

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr.
David Alan Gilbert Cc: Igor Mammedov Cc: Pankaj Gupta Cc: Peter Xu Cc: Auger Eric Cc: Wei Yang Cc: teawater Cc: Marek Kedzierski Signed-off-by: David Hildenbrand --- hw/vfio/common.c | 35 +++++++++++++++++++++++++++++++++++ hw/virtio/virtio-mem.c | 1 + include/migration/vmstate.h | 1 + 3 files changed, 37 insertions(+) diff --git a/hw/vfio/common.c b/hw/vfio/common.c index b1582be1e8..57c83a2f14 100644 --- a/hw/vfio/common.c +++ b/hw/vfio/common.c @@ -36,6 +36,7 @@ #include "qemu/range.h" #include "sysemu/kvm.h" #include "sysemu/reset.h" +#include "sysemu/runstate.h" #include "trace.h" #include "qapi/error.h" #include "migration/migration.h" @@ -595,6 +596,40 @@ static bool vfio_get_xlat_addr(IOMMUTLBEntry *iotlb, v= oid **vaddr, error_report("iommu map to non memory area %"HWADDR_PRIx"", xlat); return false; + } else if (memory_region_has_ram_discard_mgr(mr)) { + RamDiscardMgr *rdm =3D memory_region_get_ram_discard_mgr(mr); + RamDiscardMgrClass *rdmc =3D RAM_DISCARD_MGR_GET_CLASS(rdm); + + /* + * Malicious VMs can map memory into the IOMMU, which is expected + * to remain discarded. vfio will pin all pages, populating memory. + * Disallow that. vmstate priorities make sure any RamDiscardMgr w= ere + * already restored before IOMMUs are restored. + */ + if (!rdmc->is_populated(rdm, mr, xlat, len)) { + error_report("iommu map to discarded memory (e.g., unplugged v= ia" + " virtio-mem): %"HWADDR_PRIx"", + iotlb->translated_addr); + return false; + } + + /* + * Malicious VMs might trigger discarding of IOMMU-mapped memory. = The + * pages will remain pinned inside vfio until unmapped, resulting = in a + * higher memory consumption than expected. If memory would get + * populated again later, there would be an inconsistency between = pages + * pinned by vfio and pages seen by QEMU. This is the case until + * unmapped from the IOMMU (e.g., during device reset). + * + * With malicious guests, we really only care about pinning more m= emory + * than expected. RLIMIT_MEMLOCK set for the user/process can neve= r be + * exceeded and can be used to mitigate this problem. + */ + warn_report_once("Using vfio with vIOMMUs and coordinated discardi= ng of" + " RAM (e.g., virtio-mem) works, however, maliciou= s" + " guests can trigger pinning of more memory than" + " intended via an IOMMU. 
It's possible to mitigate "
+                         " by setting/adjusting RLIMIT_MEMLOCK.");
     }
 
     /*
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 6200813bb8..f419a758f3 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -871,6 +871,7 @@ static const VMStateDescription vmstate_virtio_mem_device = {
     .name = "virtio-mem-device",
     .minimum_version_id = 1,
     .version_id = 1,
+    .priority = MIG_PRI_VIRTIO_MEM,
     .post_load = virtio_mem_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_WITH_TMP(VirtIOMEM, VirtIOMEMMigSanityChecks,
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 4d71dc8fba..5b0e930144 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -153,6 +153,7 @@ typedef enum {
     MIG_PRI_DEFAULT = 0,
     MIG_PRI_IOMMU,              /* Must happen before PCI devices */
     MIG_PRI_PCI_BUS,            /* Must happen before IOMMU */
+    MIG_PRI_VIRTIO_MEM,         /* Must happen before IOMMU */
     MIG_PRI_GICV3_ITS,          /* Must happen before PCI devices */
     MIG_PRI_GICV3,              /* Must happen before the ITS */
     MIG_PRI_MAX,
--
2.28.0
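The rdmc->is_populated() call above is what stops a malicious guest from
getting discarded ranges pinned: the RamDiscardMgr implementation knows which
blocks are currently plugged. As a rough sketch of the semantics only -
simplified, and not how virtio-mem actually implements it - such a check
boils down to a bitmap walk over the covered blocks, assuming a hypothetical
plugged-block bitmap and QEMU's test_bit() from qemu/bitops.h:

/*
 * Simplified, illustrative sketch of is_populated() semantics, assuming
 * one bit per plugged block of size block_size.
 */
static bool example_is_populated(const unsigned long *plugged_bitmap,
                                 uint64_t block_size, uint64_t offset,
                                 uint64_t size)
{
    uint64_t first_bit = offset / block_size;
    uint64_t last_bit = (offset + size - 1) / block_size;

    for (uint64_t bit = first_bit; bit <= last_bit; bit++) {
        if (!test_bit(bit, plugged_bitmap)) {
            return false; /* part of the range was discarded/unplugged */
        }
    }
    return true;
}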
From nobody Tue May 21 15:25:47 2024
From: David Hildenbrand
To: qemu-devel@nongnu.org
Subject: [PATCH v2 07/10] softmmu/physmem: Don't use atomic operations in ram_block_discard_(disable|require)
Date: Tue, 8 Dec 2020 17:39:47 +0100
Message-Id: <20201208163950.29617-8-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>

We have users in migration context that don't hold the BQL (when finishing
migration). To prepare for further changes, use a dedicated mutex instead
of atomic operations. Keep using qatomic_read ("READ_ONCE") for the
functions that only extract the current state (e.g., used by
virtio-balloon); locking isn't necessary there.

While at it, split up the counter into two variables to make it easier
to understand.

Suggested-by: Peter Xu
Reviewed-by: Peter Xu
Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr. David Alan Gilbert
Cc: Igor Mammedov
Cc: Pankaj Gupta
Cc: Peter Xu
Cc: Auger Eric
Cc: Wei Yang
Cc: teawater
Cc: Marek Kedzierski
Signed-off-by: David Hildenbrand
Reviewed-by: Pankaj Gupta
---
 softmmu/physmem.c | 70 ++++++++++++++++++++++++++---------------------
 1 file changed, 39 insertions(+), 31 deletions(-)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 3027747c03..448e4e8c86 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -3650,56 +3650,64 @@ void mtree_print_dispatch(AddressSpaceDispatch *d, MemoryRegion *root)
     }
 }
 
-/*
- * If positive, discarding RAM is disabled. If negative, discarding RAM is
- * required to work and cannot be disabled.
- */
-static int ram_block_discard_disabled;
+static unsigned int ram_block_discard_requirers;
+static unsigned int ram_block_discard_disablers;
+static QemuMutex ram_block_discard_disable_mutex;
+
+static void ram_block_discard_disable_mutex_lock(void)
+{
+    static gsize initialized;
+
+    if (g_once_init_enter(&initialized)) {
+        qemu_mutex_init(&ram_block_discard_disable_mutex);
+        g_once_init_leave(&initialized, 1);
+    }
+    qemu_mutex_lock(&ram_block_discard_disable_mutex);
+}
+
+static void ram_block_discard_disable_mutex_unlock(void)
+{
+    qemu_mutex_unlock(&ram_block_discard_disable_mutex);
+}
 
 int ram_block_discard_disable(bool state)
 {
-    int old;
+    int ret = 0;
 
+    ram_block_discard_disable_mutex_lock();
     if (!state) {
-        qatomic_dec(&ram_block_discard_disabled);
-        return 0;
+        ram_block_discard_disablers--;
+    } else if (!ram_block_discard_requirers) {
+        ram_block_discard_disablers++;
+    } else {
+        ret = -EBUSY;
     }
-
-    do {
-        old = qatomic_read(&ram_block_discard_disabled);
-        if (old < 0) {
-            return -EBUSY;
-        }
-    } while (qatomic_cmpxchg(&ram_block_discard_disabled,
-                             old, old + 1) != old);
-    return 0;
+    ram_block_discard_disable_mutex_unlock();
+    return ret;
 }
 
 int ram_block_discard_require(bool state)
 {
-    int old;
+    int ret = 0;
 
+    ram_block_discard_disable_mutex_lock();
     if (!state) {
-        qatomic_inc(&ram_block_discard_disabled);
-        return 0;
+        ram_block_discard_requirers--;
+    } else if (!ram_block_discard_disablers) {
+        ram_block_discard_requirers++;
+    } else {
+        ret = -EBUSY;
     }
-
-    do {
-        old = qatomic_read(&ram_block_discard_disabled);
-        if (old > 0) {
-            return -EBUSY;
-        }
-    } while (qatomic_cmpxchg(&ram_block_discard_disabled,
-                             old, old - 1) != old);
-    return 0;
+    ram_block_discard_disable_mutex_unlock();
+    return ret;
 }
 
 bool ram_block_discard_is_disabled(void)
 {
-    return qatomic_read(&ram_block_discard_disabled) > 0;
+    return qatomic_read(&ram_block_discard_disablers);
 }
 
 bool ram_block_discard_is_required(void)
 {
-    return qatomic_read(&ram_block_discard_disabled) < 0;
+    return qatomic_read(&ram_block_discard_requirers);
 }
--
2.28.0
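The cmpxchg loop has to go because, with two separate counters, "check the
opposite side, then increment my side" can no longer be expressed as a single
atomic operation on one word; the mutex makes that sequence atomic as a unit,
while the read-only predicates keep using qatomic_read(). From a caller's
point of view the contract is unchanged. A sketch of typical usage - the
caller function below is illustrative only and not part of this series:

/* Illustrative caller, e.g. a device that must not see RAM discarded. */
static int example_inhibit_discards(Error **errp)
{
    if (ram_block_discard_disable(true)) {
        error_setg(errp, "Discarding of RAM is required by another user");
        return -EBUSY;
    }

    /* ... safely pin/map guest RAM here ... */

    ram_block_discard_disable(false);
    return 0;
}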
From nobody Tue May 21 15:25:47 2024
From: David Hildenbrand
To: qemu-devel@nongnu.org
Subject: [PATCH v2 08/10] softmmu/physmem: Extend ram_block_discard_(require|disable) by two discard types
Date: Tue, 8 Dec 2020 17:39:48 +0100
Message-Id: <20201208163950.29617-9-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>
We want to separate the two cases whereby we discard RAM:
- uncoordinated: e.g., virtio-balloon
- coordinated: e.g., virtio-mem, coordinated via the RamDiscardMgr

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr. David Alan Gilbert
Cc: Igor Mammedov
Cc: Pankaj Gupta
Cc: Peter Xu
Cc: Auger Eric
Cc: Wei Yang
Cc: teawater
Cc: Marek Kedzierski
Signed-off-by: David Hildenbrand
Reviewed-by: Pankaj Gupta
---
 include/exec/memory.h | 18 +++++++++++++--
 softmmu/physmem.c     | 54 ++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 64 insertions(+), 8 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index 30d4fcd2c0..3f0942aa5b 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -2804,6 +2804,12 @@ static inline MemOp devend_memop(enum device_endian end)
  */
 int ram_block_discard_disable(bool state);
 
+/*
+ * See ram_block_discard_disable(): only disable uncoordinated discards,
+ * keeping coordinated discards (via the RamDiscardMgr) enabled.
+ */
+int ram_block_uncoordinated_discard_disable(bool state);
+
 /*
  * Inhibit technologies that disable discarding of pages in RAM blocks.
  *
@@ -2813,12 +2819,20 @@ int ram_block_discard_disable(bool state);
 int ram_block_discard_require(bool state);
 
 /*
- * Test if discarding of memory in ram blocks is disabled.
+ * See ram_block_discard_require(): only inhibit technologies that disable
+ * uncoordinated discarding of pages in RAM blocks, allowing co-existence with
+ * technologies that only inhibit uncoordinated discards (via the
+ * RamDiscardMgr).
+ */
+int ram_block_coordinated_discard_require(bool state);
+
+/*
+ * Test if any discarding of memory in ram blocks is disabled.
  */
 bool ram_block_discard_is_disabled(void);
 
 /*
- * Test if discarding of memory in ram blocks is required to work reliably.
+ * Test if any discarding of memory in ram blocks is required to work reliably.
  */
 bool ram_block_discard_is_required(void);
 
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 448e4e8c86..7a4f3db1b4 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -3650,8 +3650,14 @@ void mtree_print_dispatch(AddressSpaceDispatch *d, MemoryRegion *root)
     }
 }
 
+/* Require any discards to work. */
 static unsigned int ram_block_discard_requirers;
+/* Require only coordinated discards to work. */
+static unsigned int ram_block_coordinated_discard_requirers;
+/* Disable any discards. */
 static unsigned int ram_block_discard_disablers;
+/* Disable only uncoordinated discards. */
+static unsigned int ram_block_uncoordinated_discard_disablers;
 static QemuMutex ram_block_discard_disable_mutex;
 
 static void ram_block_discard_disable_mutex_lock(void)
@@ -3677,10 +3683,27 @@ int ram_block_discard_disable(bool state)
     ram_block_discard_disable_mutex_lock();
     if (!state) {
         ram_block_discard_disablers--;
-    } else if (!ram_block_discard_requirers) {
-        ram_block_discard_disablers++;
+    } else if (ram_block_discard_requirers ||
+               ram_block_coordinated_discard_requirers) {
+        ret = -EBUSY;
     } else {
+        ram_block_discard_disablers++;
+    }
+    ram_block_discard_disable_mutex_unlock();
+    return ret;
+}
+
+int ram_block_uncoordinated_discard_disable(bool state)
+{
+    int ret = 0;
+
+    ram_block_discard_disable_mutex_lock();
+    if (!state) {
+        ram_block_uncoordinated_discard_disablers--;
+    } else if (ram_block_discard_requirers) {
         ret = -EBUSY;
+    } else {
+        ram_block_uncoordinated_discard_disablers++;
     }
     ram_block_discard_disable_mutex_unlock();
     return ret;
@@ -3693,10 +3716,27 @@ int ram_block_discard_require(bool state)
     ram_block_discard_disable_mutex_lock();
     if (!state) {
         ram_block_discard_requirers--;
-    } else if (!ram_block_discard_disablers) {
-        ram_block_discard_requirers++;
+    } else if (ram_block_discard_disablers ||
+               ram_block_uncoordinated_discard_disablers) {
+        ret = -EBUSY;
     } else {
+        ram_block_discard_requirers++;
+    }
+    ram_block_discard_disable_mutex_unlock();
+    return ret;
+}
+
+int ram_block_coordinated_discard_require(bool state)
+{
+    int ret = 0;
+
+    ram_block_discard_disable_mutex_lock();
+    if (!state) {
+        ram_block_coordinated_discard_requirers--;
+    } else if (ram_block_discard_disablers) {
         ret = -EBUSY;
+    } else {
+        ram_block_coordinated_discard_requirers++;
     }
     ram_block_discard_disable_mutex_unlock();
     return ret;
@@ -3704,10 +3744,12 @@ int ram_block_discard_require(bool state)
 
 bool ram_block_discard_is_disabled(void)
 {
-    return qatomic_read(&ram_block_discard_disablers);
+    return qatomic_read(&ram_block_discard_disablers) ||
+           qatomic_read(&ram_block_uncoordinated_discard_disablers);
 }
 
 bool ram_block_discard_is_required(void)
 {
-    return qatomic_read(&ram_block_discard_requirers);
+    return qatomic_read(&ram_block_discard_requirers) ||
+           qatomic_read(&ram_block_coordinated_discard_requirers);
 }
--
2.28.0
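With the two discard types, the counters pair up as follows:
ram_block_discard_disable() still conflicts with any requirer, and
ram_block_discard_require() with any disabler, while the narrower variants
only conflict with their "full" counterpart. The sketch below (illustrative
only, the wrapper function does not exist in the series) spells out who may
now coexist, matching the checks in the code above:

/*
 * Illustration of the compatibility rules implemented above:
 *
 *   ram_block_discard_disable(true)               -EBUSY if any requirer exists
 *   ram_block_uncoordinated_discard_disable(true) -EBUSY if a full requirer exists
 *   ram_block_discard_require(true)               -EBUSY if any disabler exists
 *   ram_block_coordinated_discard_require(true)   -EBUSY if a full disabler exists
 *
 * Consequently, a coordinated requirer (virtio-mem, patch 9) and an
 * uncoordinated disabler (vfio, patch 10) can be active at the same time.
 */
static int example_coexistence(void)
{
    int ret = ram_block_coordinated_discard_require(true);  /* virtio-mem */
    if (ret) {
        return ret;
    }
    ret = ram_block_uncoordinated_discard_disable(true);     /* vfio */
    if (ret) {
        ram_block_coordinated_discard_require(false);
        return ret;
    }
    return 0;
}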
From nobody Tue May 21 15:25:47 2024
From: David Hildenbrand
To: qemu-devel@nongnu.org
Subject: [PATCH v2 09/10] virtio-mem: Require only coordinated discards
Date: Tue, 8 Dec 2020 17:39:49 +0100
Message-Id: <20201208163950.29617-10-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>
We implement the RamDiscardMgr interface and only require coordinated
discarding of RAM to work.

Reviewed-by: Dr. David Alan Gilbert
Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr. David Alan Gilbert
Cc: Igor Mammedov
Cc: Pankaj Gupta
Cc: Peter Xu
Cc: Auger Eric
Cc: Wei Yang
Cc: teawater
Cc: Marek Kedzierski
Signed-off-by: David Hildenbrand
Reviewed-by: Pankaj Gupta
---
 hw/virtio/virtio-mem.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index f419a758f3..99d0712195 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -687,7 +687,7 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    if (ram_block_discard_require(true)) {
+    if (ram_block_coordinated_discard_require(true)) {
         error_setg(errp, "Discarding RAM is disabled");
         return;
     }
@@ -695,7 +695,7 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
     ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
     if (ret) {
         error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
-        ram_block_discard_require(false);
+        ram_block_coordinated_discard_require(false);
         return;
     }
 
@@ -738,7 +738,7 @@ static void virtio_mem_device_unrealize(DeviceState *dev)
     virtio_del_queue(vdev, 0);
     virtio_cleanup(vdev);
     g_free(vmem->bitmap);
-    ram_block_discard_require(false);
+    ram_block_coordinated_discard_require(false);
 }
 
 static int virtio_mem_discard_range_cb(const VirtIOMEM *vmem, void *arg,
--
2.28.0
From nobody Tue May 21 15:25:47 2024
From: David Hildenbrand
To: qemu-devel@nongnu.org
Subject: [PATCH v2 10/10] vfio: Disable only uncoordinated discards
Date: Tue, 8 Dec 2020 17:39:50 +0100
Message-Id: <20201208163950.29617-11-david@redhat.com>
In-Reply-To: <20201208163950.29617-1-david@redhat.com>
References: <20201208163950.29617-1-david@redhat.com>
We support coordinated discarding of RAM using the RamDiscardMgr. Let's
unlock support for coordinated discards, keeping uncoordinated discards
(e.g., via virtio-balloon) disabled.

This unlocks virtio-mem + vfio. Note that vfio used via "nvme://" by the
block layer has to be implemented/unlocked separately. For now, virtio-mem
only supports x86-64 - spapr IOMMUs are not tested/affected.

Note: The block size of a virtio-mem device has to be set to a sane size,
depending on the maximum hotplug size, so we don't run out of vfio
mappings. The default virtio-mem block size is usually in the range of a
couple of MBs. The maximum number of mappings is 64k, shared with other
users. Assume you want to hotplug 256 GiB using virtio-mem - the block size
would have to be set to at least 8 MiB (resulting in 32768 separate
mappings).

Cc: Paolo Bonzini
Cc: "Michael S. Tsirkin"
Cc: Alex Williamson
Cc: Dr. David Alan Gilbert
Cc: Igor Mammedov
Cc: Pankaj Gupta
Cc: Peter Xu
Cc: Auger Eric
Cc: Wei Yang
Cc: teawater
Cc: Marek Kedzierski
Signed-off-by: David Hildenbrand
Reviewed-by: Pankaj Gupta
---
 hw/vfio/common.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 57c83a2f14..3ce5e26bab 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -1974,8 +1974,10 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
      * new memory, it will not yet set ram_block_discard_set_required() and
      * therefore, neither stops us here or deals with the sudden memory
      * consumption of inflated memory.
+     *
+     * We do support discarding of memory coordinated via the RamDiscardMgr.
      */
-    ret = ram_block_discard_disable(true);
+    ret = ram_block_uncoordinated_discard_disable(true);
     if (ret) {
         error_setg_errno(errp, -ret, "Cannot set discarding of RAM broken");
         return ret;
@@ -2155,7 +2157,7 @@ close_fd_exit:
     close(fd);
 
 put_space_exit:
-    ram_block_discard_disable(false);
+    ram_block_uncoordinated_discard_disable(false);
     vfio_put_address_space(space);
 
     return ret;
@@ -2277,7 +2279,7 @@ void vfio_put_group(VFIOGroup *group)
     }
 
     if (!group->ram_block_discard_allowed) {
-        ram_block_discard_disable(false);
+        ram_block_uncoordinated_discard_disable(false);
     }
     vfio_kvm_device_del_group(group);
     vfio_disconnect_container(group);
@@ -2331,7 +2333,7 @@ int vfio_get_device(VFIOGroup *group, const char *name,
 
     if (!group->ram_block_discard_allowed) {
         group->ram_block_discard_allowed = true;
-        ram_block_discard_disable(false);
+        ram_block_uncoordinated_discard_disable(false);
     }
 }
 
--
2.28.0
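To make the block-size guidance above concrete: the minimum sane block size
is roughly the maximum virtio-mem size divided by the number of DMA mappings
one is willing to spend on it. A hypothetical helper - not part of this
series, assuming QEMU's DIV_ROUND_UP() and pow2ceil() from the usual headers -
that mirrors the arithmetic from the description:

/*
 * Hypothetical helper, not part of this series: pick a virtio-mem block
 * size so that a region of region_size bytes consumes at most max_mappings
 * vfio DMA mappings.
 */
static uint64_t example_min_block_size(uint64_t region_size,
                                       uint64_t max_mappings)
{
    uint64_t block_size = DIV_ROUND_UP(region_size, max_mappings);

    /* e.g., 256 GiB with 32768 mappings -> 8 MiB blocks */
    return pow2ceil(block_size);
}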