From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Thomas Huth, Cornelia Huck, Alex Williamson,
    Halil Pasic, Christian Borntraeger, qemu-s390x@nongnu.org,
    qemu-ppc@nongnu.org, pbonzini@redhat.com, Igor Mammedov,
    Richard Henderson, David Gibson
Date: Thu, 28 Mar 2019 12:34:57 +0100
Message-Id: <20190328113458.8405-2-david@redhat.com>
In-Reply-To: <20190328113458.8405-1-david@redhat.com>
References: <20190328113458.8405-1-david@redhat.com>
Subject: [Qemu-devel] [PATCH v2 1/2] s390x/kvm: Configure page size after memory has actually been initialized

Right now we configure the page size quite early, when initializing KVM.
This is long before system memory is actually allocated via
memory_region_allocate_system_memory(), and therefore before any memory
backends are marked as mapped. Instead, let's configure the maximum page
size after initializing memory in s390_memory_init().
cap_hpage_1m is still properly configured before creating any CPUs, and
therefore before configuring the CPU model and eventually enabling CMMA.

This is not a fix but rather a preparation for the future, when initial
memory might reside on memory backends (not the case for s390x right now).
We will soon replace qemu_getrampagesize() with a function that always
returns the maximum page size (not the minimum page size, which has only
worked by pure luck so far, as there are no memory backends).

Acked-by: Igor Mammedov
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Gibson
---
 hw/s390x/s390-virtio-ccw.c | 12 ++++++++++++
 target/s390x/cpu.c         |  7 +++++++
 target/s390x/cpu.h         |  1 +
 target/s390x/kvm-stub.c    |  4 ++++
 target/s390x/kvm.c         | 35 ++++++++++++++---------------------
 target/s390x/kvm_s390x.h   |  1 +
 6 files changed, 39 insertions(+), 21 deletions(-)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index d11069b860..3be5679657 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -15,6 +15,7 @@
 #include "cpu.h"
 #include "hw/boards.h"
 #include "exec/address-spaces.h"
+#include "exec/ram_addr.h"
 #include "hw/s390x/s390-virtio-hcall.h"
 #include "hw/s390x/sclp.h"
 #include "hw/s390x/s390_flic.h"
@@ -163,6 +164,7 @@ static void s390_memory_init(ram_addr_t mem_size)
     MemoryRegion *sysmem = get_system_memory();
     ram_addr_t chunk, offset = 0;
     unsigned int number = 0;
+    Error *local_err = NULL;
     gchar *name;

     /* allocate RAM for core */
@@ -182,6 +184,15 @@ static void s390_memory_init(ram_addr_t mem_size)
     }
     g_free(name);

+    /*
+     * Configure the maximum page size. As no memory devices were created
+     * yet, this is the page size of initial memory only.
+     */
+    s390_set_max_pagesize(qemu_getrampagesize(), &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        exit(EXIT_FAILURE);
+    }
     /* Initialize storage key device */
     s390_skeys_init();
     /* Initialize storage attributes device */
@@ -253,6 +264,7 @@ static void ccw_init(MachineState *machine)
     DeviceState *dev;

     s390_sclp_init();
+    /* init memory + setup max page size. Required for the CPU model */
     s390_memory_init(machine->ram_size);

     /* init CPUs (incl. CPU model) early so s390_has_feature() works */
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 698dd9cb82..b58ef0a8ef 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -399,6 +399,13 @@ int s390_set_memory_limit(uint64_t new_limit, uint64_t *hw_limit)
     return 0;
 }

+void s390_set_max_pagesize(uint64_t pagesize, Error **errp)
+{
+    if (kvm_enabled()) {
+        kvm_s390_set_max_pagesize(pagesize, errp);
+    }
+}
+
 void s390_cmma_reset(void)
 {
     if (kvm_enabled()) {
diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
index cb6d77053a..c14be2b5ba 100644
--- a/target/s390x/cpu.h
+++ b/target/s390x/cpu.h
@@ -734,6 +734,7 @@ static inline void s390_do_cpu_load_normal(CPUState *cs, run_on_cpu_data arg)
 /* cpu.c */
 void s390_crypto_reset(void);
 int s390_set_memory_limit(uint64_t new_limit, uint64_t *hw_limit);
+void s390_set_max_pagesize(uint64_t pagesize, Error **errp);
 void s390_cmma_reset(void);
 void s390_enable_css_support(S390CPU *cpu);
 int s390_assign_subch_ioeventfd(EventNotifier *notifier, uint32_t sch_id,
diff --git a/target/s390x/kvm-stub.c b/target/s390x/kvm-stub.c
index bf7795e47a..22b4514ca6 100644
--- a/target/s390x/kvm-stub.c
+++ b/target/s390x/kvm-stub.c
@@ -93,6 +93,10 @@ int kvm_s390_set_mem_limit(uint64_t new_limit, uint64_t *hw_limit)
     return 0;
 }

+void kvm_s390_set_max_pagesize(uint64_t pagesize, Error **errp)
+{
+}
+
 void kvm_s390_crypto_reset(void)
 {
 }
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index 19530fb94e..bee73dc1a4 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -283,44 +283,37 @@ void kvm_s390_crypto_reset(void)
     }
 }

-static int kvm_s390_configure_mempath_backing(KVMState *s)
+void kvm_s390_set_max_pagesize(uint64_t pagesize, Error **errp)
 {
-    size_t path_psize = qemu_getrampagesize();
-
-    if (path_psize == 4 * KiB) {
-        return 0;
+    if (pagesize == 4 * KiB) {
+        return;
     }

     if (!hpage_1m_allowed()) {
-        error_report("This QEMU machine does not support huge page "
-                     "mappings");
-        return -EINVAL;
+        error_setg(errp, "This QEMU machine does not support huge page "
+                   "mappings");
+        return;
     }

-    if (path_psize != 1 * MiB) {
-        error_report("Memory backing with 2G pages was specified, "
-                     "but KVM does not support this memory backing");
-        return -EINVAL;
+    if (pagesize != 1 * MiB) {
+        error_setg(errp, "Memory backing with 2G pages was specified, "
+                   "but KVM does not support this memory backing");
+        return;
     }

-    if (kvm_vm_enable_cap(s, KVM_CAP_S390_HPAGE_1M, 0)) {
-        error_report("Memory backing with 1M pages was specified, "
-                     "but KVM does not support this memory backing");
-        return -EINVAL;
+    if (kvm_vm_enable_cap(kvm_state, KVM_CAP_S390_HPAGE_1M, 0)) {
+        error_setg(errp, "Memory backing with 1M pages was specified, "
+                   "but KVM does not support this memory backing");
+        return;
     }

     cap_hpage_1m = 1;
-    return 0;
 }

 int kvm_arch_init(MachineState *ms, KVMState *s)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);

-    if (kvm_s390_configure_mempath_backing(s)) {
-        return -EINVAL;
-    }
-
     mc->default_cpu_type = S390_CPU_TYPE_NAME("host");
     cap_sync_regs = kvm_check_extension(s, KVM_CAP_SYNC_REGS);
     cap_async_pf = kvm_check_extension(s, KVM_CAP_ASYNC_PF);
diff --git a/target/s390x/kvm_s390x.h b/target/s390x/kvm_s390x.h
index 6e52287da3..caf985955b 100644
--- a/target/s390x/kvm_s390x.h
+++ b/target/s390x/kvm_s390x.h
@@ -36,6 +36,7 @@ int kvm_s390_cmma_active(void);
 void kvm_s390_cmma_reset(void);
 void kvm_s390_reset_vcpu(S390CPU *cpu);
 int kvm_s390_set_mem_limit(uint64_t new_limit, uint64_t *hw_limit);
+void kvm_s390_set_max_pagesize(uint64_t pagesize, Error **errp);
 void kvm_s390_crypto_reset(void);
 void kvm_s390_restart_interrupt(S390CPU *cpu);
 void kvm_s390_stop_interrupt(S390CPU *cpu);
-- 
2.17.2


From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Thomas Huth, Cornelia Huck, Alex Williamson,
    Halil Pasic, Christian Borntraeger, qemu-s390x@nongnu.org,
    qemu-ppc@nongnu.org, pbonzini@redhat.com, Igor Mammedov,
    Richard Henderson, David Gibson
Date: Thu, 28 Mar 2019 12:34:58 +0100
Message-Id: <20190328113458.8405-3-david@redhat.com>
In-Reply-To: <20190328113458.8405-1-david@redhat.com>
References: <20190328113458.8405-1-david@redhat.com>
Subject: [Qemu-devel] [PATCH v2 2/2] exec: Introduce qemu_getmaxrampagesize() and rename qemu_getrampagesize()

Rename qemu_getrampagesize() to qemu_getminrampagesize(). While at it,
properly rename find_max_supported_pagesize() to find_min_pagesize().
s390x is actually interested in the maximum RAM page size, so introduce
and use qemu_getmaxrampagesize().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 exec.c                     | 39 ++++++++++++++++++++++++++++++++++----
 hw/ppc/spapr_caps.c        |  4 ++--
 hw/s390x/s390-virtio-ccw.c |  2 +-
 hw/vfio/spapr.c            |  2 +-
 target/ppc/kvm.c           |  2 +-
 5 files changed, 40 insertions(+), 9 deletions(-)

diff --git a/exec.c b/exec.c
index 6ab62f4eee..cf74f5f284 100644
--- a/exec.c
+++ b/exec.c
@@ -1687,7 +1687,7 @@ void ram_block_dump(Monitor *mon)
  * when we actually open and map them. Iterate over the file
  * descriptors instead, and use qemu_fd_getpagesize().
  */
-static int find_max_supported_pagesize(Object *obj, void *opaque)
+static int find_min_pagesize(Object *obj, void *opaque)
 {
     long *hpsize_min = opaque;

@@ -1703,7 +1703,23 @@ static int find_max_supported_pagesize(Object *obj, void *opaque)
     return 0;
 }

-long qemu_getrampagesize(void)
+static int find_max_pagesize(Object *obj, void *opaque)
+{
+    long *hpsize_max = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_MEMORY_BACKEND)) {
+        HostMemoryBackend *backend = MEMORY_BACKEND(obj);
+        long hpsize = host_memory_backend_pagesize(backend);
+
+        if (host_memory_backend_is_mapped(backend) && (hpsize > *hpsize_max)) {
+            *hpsize_max = hpsize;
+        }
+    }
+
+    return 0;
+}
+
+long qemu_getminrampagesize(void)
 {
     long hpsize = LONG_MAX;
     long mainrampagesize;
@@ -1723,7 +1739,7 @@ long qemu_getrampagesize(void)
      */
     memdev_root = object_resolve_path("/objects", NULL);
     if (memdev_root) {
-        object_child_foreach(memdev_root, find_max_supported_pagesize, &hpsize);
+        object_child_foreach(memdev_root, find_min_pagesize, &hpsize);
     }
     if (hpsize == LONG_MAX) {
         /* No additional memory regions found ==> Report main RAM page size */
@@ -1746,8 +1762,23 @@ long qemu_getrampagesize(void)

     return hpsize;
 }
+
+long qemu_getmaxrampagesize(void)
+{
+    long pagesize = qemu_mempath_getpagesize(mem_path);
+    Object *memdev_root = object_resolve_path("/objects", NULL);
+
+    if (memdev_root) {
+        object_child_foreach(memdev_root, find_max_pagesize, &pagesize);
+    }
+    return pagesize;
+}
 #else
-long qemu_getrampagesize(void)
+long qemu_getminrampagesize(void)
+{
+    return getpagesize();
+}
+long qemu_getmaxrampagesize(void)
 {
     return getpagesize();
 }
diff --git a/hw/ppc/spapr_caps.c b/hw/ppc/spapr_caps.c
index edc5ed0e0c..3177dc2390 100644
--- a/hw/ppc/spapr_caps.c
+++ b/hw/ppc/spapr_caps.c
@@ -347,7 +347,7 @@ static void cap_hpt_maxpagesize_apply(SpaprMachineState *spapr,
         warn_report("Many guests require at least 64kiB hpt-max-page-size");
     }

-    spapr_check_pagesize(spapr, qemu_getrampagesize(), errp);
+    spapr_check_pagesize(spapr, qemu_getminrampagesize(), errp);
 }

 static bool spapr_pagesize_cb(void *opaque, uint32_t seg_pshift,
@@ -609,7 +609,7 @@ static SpaprCapabilities default_caps_with_cpu(SpaprMachineState *spapr,
     uint8_t mps;

     if (kvmppc_hpt_needs_host_contiguous_pages()) {
-        mps = ctz64(qemu_getrampagesize());
+        mps = ctz64(qemu_getminrampagesize());
     } else {
         mps = 34; /* allow everything up to 16GiB, i.e. everything */
     }
diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index 3be5679657..143ac974ca 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -188,7 +188,7 @@ static void s390_memory_init(ram_addr_t mem_size)
      * Configure the maximum page size. As no memory devices were created
      * yet, this is the page size of initial memory only.
      */
-    s390_set_max_pagesize(qemu_getrampagesize(), &local_err);
+    s390_set_max_pagesize(qemu_getmaxrampagesize(), &local_err);
     if (local_err) {
         error_report_err(local_err);
         exit(EXIT_FAILURE);
diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
index 57fe758e54..30d409a46f 100644
--- a/hw/vfio/spapr.c
+++ b/hw/vfio/spapr.c
@@ -148,7 +148,7 @@ int vfio_spapr_create_window(VFIOContainer *container,
     uint64_t pagesize = memory_region_iommu_get_min_page_size(iommu_mr);
     unsigned entries, bits_total, bits_per_level, max_levels;
     struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
-    long rampagesize = qemu_getrampagesize();
+    long rampagesize = qemu_getminrampagesize();

     /*
      * The host might not support the guest supported IOMMU page size,
diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 2427c8ee13..90f65240d0 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -2136,7 +2136,7 @@ uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
     /* Find the largest hardware supported page size that's less than
      * or equal to the (logical) backing page size of guest RAM */
     kvm_get_smmu_info(&info, &error_fatal);
-    rampagesize = qemu_getrampagesize();
+    rampagesize = qemu_getminrampagesize();
     best_page_shift = 0;

     for (i = 0; i < KVM_PPC_PAGE_SIZES_MAX_SZ; i++) {
-- 
2.17.2
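
For readers following the commit messages rather than the diffs: the
distinction between the minimum and the maximum backing page size only
matters once more than one backing region exists. The following is a
minimal standalone C sketch of the decision that s390_set_max_pagesize()
has to make, not QEMU code: the page-size values are toy numbers and
enable_hpage_1m() is a made-up stand-in for the real
kvm_vm_enable_cap(..., KVM_CAP_S390_HPAGE_1M, 0) call.

    #include <stddef.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define KiB 1024UL
    #define MiB (1024UL * KiB)

    /* Hypothetical stand-in for enabling KVM's 1M huge page capability. */
    static bool enable_hpage_1m(void)
    {
        return true;
    }

    int main(void)
    {
        /* Toy backing page sizes: 4k main RAM plus one 1M hugetlb backend. */
        unsigned long backing[] = { 4 * KiB, 1 * MiB };
        unsigned long min = backing[0], max = backing[0];

        for (size_t i = 1; i < sizeof(backing) / sizeof(backing[0]); i++) {
            if (backing[i] < min) {
                min = backing[i];
            }
            if (backing[i] > max) {
                max = backing[i];
            }
        }

        /*
         * With initial memory only, min == max, which is why the old code
         * that looked at the minimum happened to do the right thing.
         */
        printf("min backing page size: %lu, max: %lu\n", min, max);

        /* The capability must be keyed off the maximum backing page size. */
        if (max == 4 * KiB) {
            printf("no huge pages used, nothing to enable\n");
        } else if (max == 1 * MiB && enable_hpage_1m()) {
            printf("1M huge page support enabled\n");
        } else {
            printf("backing page size %lu not supported by KVM\n", max);
        }
        return 0;
    }

As soon as a 1M backend is mapped alongside 4k main RAM, the minimum (4k)
and the maximum (1M) diverge, and only the maximum tells KVM whether huge
page support must be enabled; this is the situation the series prepares for.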