From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, Anthony PERARD, Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné, Jun Nakajima, Kevin Tian
Subject: [PATCH] xen/x86: Make XEN_DOMCTL_get_vcpu_msrs more configurable
Date: Mon, 24 Oct 2022 12:58:54 -0400
Message-Id: <854cdedcdd2bfff08ea45a3c13367c610d710aaf.1666630317.git.tamas.lengyel@intel.com>

Currently XEN_DOMCTL_get_vcpu_msrs is only capable of gathering a handful
of predetermined vCPU MSRs. In our use-case, gathering the vPMU MSRs by an
external privileged tool is necessary, thus we extend the domctl to allow
querying for any guest MSR. To remain compatible with the existing setup,
if no specific MSR is requested via the domctl, the default list is
returned.
Signed-off-by: Tamas K Lengyel
---
 tools/include/xenctrl.h              |  4 +++
 tools/libs/ctrl/xc_domain.c          | 35 ++++++++++++++++++++++++++
 tools/libs/guest/xg_sr_save_x86_pv.c |  2 ++
 xen/arch/x86/cpu/vpmu.c              | 10 ++++++++
 xen/arch/x86/cpu/vpmu_amd.c          |  7 ++++++
 xen/arch/x86/cpu/vpmu_intel.c        | 37 ++++++++++++++++++++++++++++
 xen/arch/x86/domctl.c                | 35 +++++++++++++++++---------
 xen/arch/x86/include/asm/vpmu.h      |  2 ++
 8 files changed, 120 insertions(+), 12 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 0c8b4c3aa7..04244213bf 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -872,6 +872,10 @@ int xc_vcpu_getinfo(xc_interface *xch,
                     uint32_t vcpu,
                     xc_vcpuinfo_t *info);
 
+typedef struct xen_domctl_vcpu_msr xc_vcpumsr_t;
+int xc_vcpu_get_msrs(xc_interface *xch, uint32_t domid, uint32_t vcpu,
+                     uint32_t count, xc_vcpumsr_t *msrs);
+
 long long xc_domain_get_cpu_usage(xc_interface *xch,
                                   uint32_t domid,
                                   int vcpu);
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 14c0420c35..d3a7e1fea6 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -2201,6 +2201,41 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = domid;
     return do_domctl(xch, &domctl);
 }
+
+int xc_vcpu_get_msrs(xc_interface *xch, uint32_t domid, uint32_t vcpu,
+                     uint32_t count, xc_vcpumsr_t *msrs)
+{
+    int rc;
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_get_vcpu_msrs;
+    domctl.domain = domid;
+    domctl.u.vcpu_msrs.vcpu = vcpu;
+    domctl.u.vcpu_msrs.msr_count = count;
+
+    if ( !msrs )
+    {
+        if ( (rc = xc_domctl(xch, &domctl)) < 0 )
+            return rc;
+
+        return domctl.u.vcpu_msrs.msr_count;
+    }
+    else
+    {
+        DECLARE_HYPERCALL_BOUNCE(msrs, count * sizeof(xc_vcpumsr_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+        if ( xc_hypercall_bounce_pre(xch, msrs) )
+            return -1;
+
+        set_xen_guest_handle(domctl.u.vcpu_msrs.msrs, msrs);
+
+        rc = do_domctl(xch, &domctl);
+
+        xc_hypercall_bounce_post(xch, msrs);
+
+        return rc;
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..7ac313bf3f 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -719,6 +719,8 @@ static int write_one_vcpu_msrs(struct xc_sr_context *ctx, uint32_t id)
         goto err;
     }
 
+    memset(buffer, 0, buffersz);
+
     set_xen_guest_handle(domctl.u.vcpu_msrs.msrs, buffer);
     if ( xc_domctl(xch, &domctl) < 0 )
     {
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 64cdbfc48c..438dfbe196 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -651,6 +651,16 @@ void vpmu_dump(struct vcpu *v)
     alternative_vcall(vpmu_ops.arch_vpmu_dump, v);
 }
 
+int vpmu_get_msr(struct vcpu *v, unsigned int msr, uint64_t *val)
+{
+    ASSERT(v != current);
+
+    if ( !vpmu_is_set(vcpu_vpmu(v), VPMU_CONTEXT_ALLOCATED) )
+        return -EOPNOTSUPP;
+
+    return alternative_call(vpmu_ops.get_msr, v, msr, val);
+}
+
 long do_xenpmu_op(
     unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
 {
diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 58794a16f0..75bd68e541 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -518,6 +518,12 @@ static int cf_check svm_vpmu_initialise(struct vcpu *v)
     return 0;
 }
 
+static int cf_check amd_get_msr(struct vcpu *v, unsigned int msr, uint64_t *val)
+{
+    /* TODO in case an external tool needs access to these MSRs */
+    return -ENOSYS;
+}
+
 #ifdef CONFIG_MEM_SHARING
 static int cf_check amd_allocate_context(struct vcpu *v)
 {
@@ -535,6 +541,7 @@ static const struct arch_vpmu_ops __initconst_cf_clobber amd_vpmu_ops = {
     .arch_vpmu_save = amd_vpmu_save,
     .arch_vpmu_load = amd_vpmu_load,
     .arch_vpmu_dump = amd_vpmu_dump,
+    .get_msr = amd_get_msr,
 
 #ifdef CONFIG_MEM_SHARING
     .allocate_context = amd_allocate_context,
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index b91d818be0..b4b6ecfb15 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -898,6 +898,42 @@ static int cf_check vmx_vpmu_initialise(struct vcpu *v)
     return 0;
 }
 
+static int cf_check core2_vpmu_get_msr(struct vcpu *v, unsigned int msr,
+                                       uint64_t *val)
+{
+    int type, index, ret = 0;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return -EINVAL;
+
+    vcpu_pause(v);
+
+    if ( msr == MSR_CORE_PERF_GLOBAL_OVF_CTRL )
+        *val = core2_vpmu_cxt->global_ovf_ctrl;
+    else if ( msr == MSR_CORE_PERF_GLOBAL_STATUS )
+        *val = core2_vpmu_cxt->global_status;
+    else if ( msr == MSR_CORE_PERF_GLOBAL_CTRL )
+        *val = core2_vpmu_cxt->global_ctrl;
+    else if ( msr >= MSR_CORE_PERF_FIXED_CTR0 &&
+              msr < MSR_CORE_PERF_FIXED_CTR0 + fixed_pmc_cnt )
+        *val = fixed_counters[msr - MSR_CORE_PERF_FIXED_CTR0];
+    else if ( msr >= MSR_P6_PERFCTR(0) && msr < MSR_P6_PERFCTR(arch_pmc_cnt) )
+        *val = xen_pmu_cntr_pair[msr - MSR_P6_PERFCTR(0)].counter;
+    else if ( msr >= MSR_P6_EVNTSEL(0) && msr < MSR_P6_EVNTSEL(arch_pmc_cnt) )
+        *val = xen_pmu_cntr_pair[msr - MSR_P6_EVNTSEL(0)].control;
+    else
+        ret = -EINVAL;
+
+    vcpu_unpause(v);
+
+    return ret;
+}
+
 static const struct arch_vpmu_ops __initconst_cf_clobber core2_vpmu_ops = {
     .initialise = vmx_vpmu_initialise,
     .do_wrmsr = core2_vpmu_do_wrmsr,
@@ -907,6 +943,7 @@ static const struct arch_vpmu_ops __initconst_cf_clobber core2_vpmu_ops = {
     .arch_vpmu_save = core2_vpmu_save,
     .arch_vpmu_load = core2_vpmu_load,
     .arch_vpmu_dump = core2_vpmu_dump,
+    .get_msr = core2_vpmu_get_msr,
 
 #ifdef CONFIG_MEM_SHARING
     .allocate_context = core2_vpmu_alloc_resource,
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index e9bfbc57a7..c481aa8575 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1104,8 +1104,7 @@ long arch_do_domctl(
             break;
 
         ret = -EINVAL;
-        if ( (v == curr) || /* no vcpu_pause() */
-             !is_pv_domain(d) )
+        if ( v == curr )
             break;
 
         /* Count maximum number of optional msrs. */
@@ -1127,36 +1126,48 @@ long arch_do_domctl(
 
             vcpu_pause(v);
 
-            for ( j = 0; j < ARRAY_SIZE(msrs_to_send); ++j )
+            for ( j = 0; j < ARRAY_SIZE(msrs_to_send) && i < vmsrs->msr_count; ++j )
             {
                 uint64_t val;
-                int rc = guest_rdmsr(v, msrs_to_send[j], &val);
+                int rc;
+
+                if ( copy_from_guest_offset(&msr, vmsrs->msrs, i, 1) )
+                {
+                    ret = -EFAULT;
+                    break;
+                }
+
+                msr.index = msr.index ?: msrs_to_send[j];
+
+                rc = guest_rdmsr(v, msr.index, &val);
 
                 /*
                  * It is the programmers responsibility to ensure that
-                 * msrs_to_send[] contain generally-read/write MSRs.
+                 * the msr requested contain generally-read/write MSRs.
                  * X86EMUL_EXCEPTION here implies a missing feature, and
                  * that the guest doesn't have access to the MSR.
                  */
                 if ( rc == X86EMUL_EXCEPTION )
                     continue;
+                if ( rc == X86EMUL_UNHANDLEABLE )
+                    ret = vpmu_get_msr(v, msr.index, &val);
+                else
+                    ret = (rc == X86EMUL_OKAY) ? 0 : -ENXIO;
 
-                if ( rc != X86EMUL_OKAY )
+                if ( ret )
                 {
                     ASSERT_UNREACHABLE();
-                    ret = -ENXIO;
                     break;
                 }
 
                 if ( !val )
                     continue; /* Skip empty MSRs. */
 
-                if ( i < vmsrs->msr_count && !ret )
+                msr.value = val;
+                if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
                 {
-                    msr.index = msrs_to_send[j];
-                    msr.value = val;
-                    if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
-                        ret = -EFAULT;
+                    ret = -EFAULT;
+                    break;
                 }
                 ++i;
             }
diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h
index 05e1fbfccf..2fcf570b25 100644
--- a/xen/arch/x86/include/asm/vpmu.h
+++ b/xen/arch/x86/include/asm/vpmu.h
@@ -47,6 +47,7 @@ struct arch_vpmu_ops {
     int (*arch_vpmu_save)(struct vcpu *v, bool_t to_guest);
     int (*arch_vpmu_load)(struct vcpu *v, bool_t from_guest);
     void (*arch_vpmu_dump)(const struct vcpu *);
+    int (*get_msr)(struct vcpu *v, unsigned int msr, uint64_t *val);
 
 #ifdef CONFIG_MEM_SHARING
     int (*allocate_context)(struct vcpu *v);
@@ -117,6 +118,7 @@ void vpmu_save(struct vcpu *v);
 void cf_check vpmu_save_force(void *arg);
 int vpmu_load(struct vcpu *v, bool_t from_guest);
 void vpmu_dump(struct vcpu *v);
+int vpmu_get_msr(struct vcpu *v, unsigned int msr, uint64_t *val);
 
 static inline int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
-- 
2.34.1