From: Xenia Ragiadakou
CC: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH v2 1/3] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
Date: Mon, 24 Apr 2023 11:20:36 +0300
Message-ID: <20230424082038.541122-2-xenia.ragiadakou@amd.com>
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>

This change aims to render the control interface of MSR intercepts identical
between the SVM and VMX code, so that the MSR intercepts can be controlled
from common code through an hvm_funcs callback.
Create two new functions:
- svm_set_msr_intercept(), which enables interception of read/write accesses
  to the given MSR by setting the corresponding read/write bits in the MSRPM,
  based on the flags
- svm_clear_msr_intercept(), which disables interception of read/write
  accesses to the given MSR by clearing the corresponding read/write bits in
  the MSRPM, based on the flags

More specifically:
- if the flag is MSR_R, the functions {set,clear} the MSRPM bit that controls
  read access to the MSR
- if the flag is MSR_W, the functions {set,clear} the MSRPM bit that controls
  write access to the MSR
- if the flag is MSR_RW, the functions {set,clear} both MSRPM bits

Place the definitions of the flags in asm/hvm/hvm.h, because they are
intended to be used by the VMX code as well.

Remove svm_intercept_msr() and the MSR_INTERCEPT_* definitions, and use the
new functions and flags instead.

The macros svm_{en,dis}able_intercept_for_msr() are retained for now, but
will eventually be open-coded by a follow-up patch, because only one of them
is actually used and because the meaning of "enabling/disabling" MSR
intercepts is not consistent throughout the code (for instance, the hvm_func
enable_msr_interception() sets only the write MSRPM bit, not both).

In the meantime, take the opportunity to remove excess parentheses.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
---
Changes in v2:
  - restore BUG_ON(), reported by Jan
  - coding style fixes, reported by Jan
  - remove excess parentheses from macros, suggested by Jan
  - change from int to unsigned int the type of param flags, reported by Jan
  - change from uint32_t to unsigned int the type of param msr, reported by Jan

 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/hvm/svm/svm.c              | 74 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/hvm.h      |  4 ++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 15 ++---
 4 files changed, 64 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 18266b9521..da8e906972 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,8 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+        svm_clear_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_W);
+        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -168,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 59a6e88dff..3ee0805ff3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -277,23 +277,33 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
 {
-    unsigned long *msr_bit;
-    const struct domain *d = v->domain;
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
-    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
     BUG_ON(msr_bit == NULL);
+
     msr &= 0x1fff;
 
-    if ( flags & MSR_INTERCEPT_READ )
+    if ( flags & MSR_R )
         __set_bit(msr * 2, msr_bit);
-    else if ( !monitored_msr(d, msr) )
-        __clear_bit(msr * 2, msr_bit);
-
-    if ( flags & MSR_INTERCEPT_WRITE )
+    if ( flags & MSR_W )
         __set_bit(msr * 2 + 1, msr_bit);
-    else if ( !monitored_msr(d, msr) )
+}
+
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags)
+{
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    BUG_ON(msr_bit == NULL);
+
+    if ( monitored_msr(v->domain, msr) )
+        return;
+
+    if ( flags & MSR_R )
+        __clear_bit(msr * 2, msr_bit);
+    if ( flags & MSR_W )
         __clear_bit(msr * 2 + 1, msr_bit);
 }
 
@@ -302,7 +312,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
+    {
+        svm_set_msr_intercept(v, msr, MSR_W);
+        svm_clear_msr_intercept(v, msr, MSR_R);
+    }
 }
 
 static void svm_save_dr(struct vcpu *v)
@@ -319,10 +332,10 @@ static void svm_save_dr(struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -350,10 +363,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -584,22 +597,29 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
     /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_SPEC_CTRL,
-                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibrs )
+        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
     /*
      * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
     * setting and avoid having to perform a rdmsr on vmexit to get the guest
     * setting even if VIRT_SSBD is offered to Xen itself.
     */
-    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
-                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
-                      !cpu_has_amd_ssbd ?
-                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
+        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
+    }
+    else
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
 
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_PRED_CMD,
-                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibpb )
+        svm_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 04cbd4ff24..5740a64281 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -250,6 +250,10 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#define MSR_R  BIT(0, U)
+#define MSR_W  BIT(1, U)
+#define MSR_RW (MSR_W | MSR_R)
+
 extern const struct hvm_function_table *start_svm(void);
 extern const struct hvm_function_table *start_vmx(void);
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index a1a8a7fd25..94deb0a236 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -603,13 +603,14 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-#define MSR_INTERCEPT_NONE    0
-#define MSR_INTERCEPT_READ    1
-#define MSR_INTERCEPT_WRITE   2
-#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
-#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
-#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           unsigned int flags);
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags);
+#define svm_disable_intercept_for_msr(v, msr) \
+    svm_clear_msr_intercept(v, msr, MSR_RW)
+#define svm_enable_intercept_for_msr(v, msr) \
+    svm_set_msr_intercept(v, msr, MSR_RW)
 
 /*
  * VMCB accessor functions.
-- 
2.34.1
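[Editorial note, not part of the patch.] The MSRPM manipulation that svm_{set,clear}_msr_intercept() perform can be illustrated with a minimal standalone sketch, assuming the AMD MSR permission map convention used by the code above: within its 2 KiB block, each MSR owns a pair of bits, the even bit gating reads and the odd bit gating writes, with a set bit meaning "intercept". All names below (SKETCH_MSR_*, msrpm_update) are illustrative only and are not the hypervisor's own helpers.

/*
 * Standalone model of the MSRPM bit pair handled by
 * svm_{set,clear}_msr_intercept(); illustrative names only.
 */
#include <stdbool.h>
#include <stdint.h>

#define SKETCH_MSR_R  (1u << 0)
#define SKETCH_MSR_W  (1u << 1)
#define SKETCH_MSR_RW (SKETCH_MSR_R | SKETCH_MSR_W)

static void msrpm_update(uint8_t *block, unsigned int offset,
                         unsigned int flags, bool intercept)
{
    unsigned int read_bit  = offset * 2;     /* even bit gates reads  */
    unsigned int write_bit = offset * 2 + 1; /* odd bit gates writes  */

    if ( flags & SKETCH_MSR_R )
    {
        if ( intercept )
            block[read_bit / 8] |= 1u << (read_bit % 8);
        else
            block[read_bit / 8] &= ~(1u << (read_bit % 8));
    }

    if ( flags & SKETCH_MSR_W )
    {
        if ( intercept )
            block[write_bit / 8] |= 1u << (write_bit % 8);
        else
            block[write_bit / 8] &= ~(1u << (write_bit % 8));
    }
}

Under that model, svm_set_msr_intercept(v, msr, MSR_RW) corresponds to msrpm_update(block, offset, SKETCH_MSR_RW, true) applied to the 2 KiB block that svm_msrbit() selects for the MSR.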
From: Xenia Ragiadakou
CC: Xenia Ragiadakou, Jun Nakajima, Kevin Tian, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH v2 2/3] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
Date: Mon, 24 Apr 2023 11:20:37 +0300
Message-ID: <20230424082038.541122-3-xenia.ragiadakou@amd.com>
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>

Replace enum vmx_msr_intercept_type with the MSR access flags defined in
hvm.h, so that the functions {svm,vmx}_{set,clear}_msr_intercept() share
the same prototype.

No functional change intended.
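[Editorial note, not part of the patch.] A small illustration of what the shared prototype buys: once both the SVM and the VMX helpers take (struct vcpu *, unsigned int msr, unsigned int flags), a single function-pointer type can describe either family, which is what the hvm_funcs hooks in the next patch rely on. The typedef and helper below are hypothetical and not part of this series.

struct vcpu;

/* Hypothetical: one signature now fits both the svm_* and vmx_* helpers. */
typedef void msr_intercept_fn(struct vcpu *v, unsigned int msr,
                              unsigned int flags);

/* Generic code can then loop over MSRs without caring about the vendor. */
static void clear_intercept_list(msr_intercept_fn *clear_intercept,
                                 struct vcpu *v, const unsigned int *msrs,
                                 unsigned int nr, unsigned int flags)
{
    for ( unsigned int i = 0; i < nr; i++ )
        clear_intercept(v, msrs[i], flags); /* e.g. flags == MSR_RW */
}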
Signed-off-by: Xenia Ragiadakou
---
Changes in v2:
  - change from int to unsigned int the type of param type, reported by Jan

 xen/arch/x86/cpu/vpmu_intel.c           | 24 +++++++-------
 xen/arch/x86/hvm/vmx/vmcs.c             | 36 ++++++++++----------
 xen/arch/x86/hvm/vmx/vmx.c              | 44 ++++++++++++-------------
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 12 ++-----
 4 files changed, 54 insertions(+), 62 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 35e350578b..395830e803 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -219,22 +219,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -242,21 +242,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index b209563625..e7b67313a2 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -892,7 +892,7 @@ static void vmx_set_host_env(struct vcpu *v)
 }
 
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type)
+                             unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -906,17 +906,17 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
            clear_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
            clear_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
            clear_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
            clear_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -924,7 +924,7 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 }
 
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type)
+                           unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
@@ -934,17 +934,17 @@ void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
            set_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
            set_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
            set_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
            set_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -1151,17 +1151,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm.vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, MSR_RW);
         if ( paging_mode_hap(d) && (!is_iommu_enabled(d) || iommu_snoop) )
-            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 096c69251d..8a873147a5 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -791,7 +791,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
      */
     if ( cp->feat.ibrsb )
     {
-        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
         if ( rc )
@@ -799,7 +799,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     }
     else
     {
-        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
         if ( rc && rc != -ESRCH )
@@ -809,20 +809,20 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
 
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
-        vmx_clear_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 
     /* MSR_FLUSH_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.l1d_flush )
-        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
 
     if ( cp->feat.pks )
-        vmx_clear_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PKRS, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PKRS, MSR_RW);
 
  out:
     vmx_vmcs_exit(v);
@@ -1418,7 +1418,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1429,7 +1429,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !is_iommu_enabled(v->domain) || iommu_snoop )
-                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
     }
@@ -1883,9 +1883,9 @@ static void cf_check vmx_update_guest_efer(struct vcpu *v)
      * into hardware, clear the read intercept to avoid unnecessary VMExits.
      */
     if ( guest_efer == v->arch.hvm.guest_efer )
-        vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_EFER, MSR_R);
     else
-        vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_EFER, MSR_R);
 }
 
 static void nvmx_enqueue_n2_exceptions(struct vcpu *v,
@@ -2312,7 +2312,7 @@ static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
+        vmx_set_msr_intercept(v, msr, MSR_W);
 }
 
 static void cf_check vmx_vcpu_update_eptp(struct vcpu *v)
@@ -3479,17 +3479,17 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
        {
            for ( msr = MSR_X2APIC_FIRST;
                  msr <= MSR_X2APIC_LAST; msr++ )
-               vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
 
-           vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
-           vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
-           vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
+               vmx_clear_msr_intercept(v, msr, MSR_R);
+
+           vmx_set_msr_intercept(v, MSR_X2APIC_PPR, MSR_R);
+           vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, MSR_R);
+           vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, MSR_R);
        }
        if ( cpu_has_vmx_virtual_intr_delivery )
        {
-           vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
-           vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
-           vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
+           vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, MSR_W);
+           vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, MSR_W);
+           vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, MSR_W);
        }
    }
    else
@@ -3500,7 +3500,7 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
                  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
            for ( msr = MSR_X2APIC_FIRST;
                  msr <= MSR_X2APIC_LAST; msr++ )
-               vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
+               vmx_set_msr_intercept(v, msr, MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3636,7 +3636,7 @@ static int cf_check vmx_msr_write_intercept(
                     return X86EMUL_OKAY;
                 }
 
-                vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
+                vmx_clear_msr_intercept(v, lbr->base + i, MSR_RW);
             }
         }
 
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 51641caa9f..af6a95b5d9 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -633,18 +633,10 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-
-/* MSR intercept bitmap infrastructure. */
-enum vmx_msr_intercept_type {
-    VMX_MSR_R = 1,
-    VMX_MSR_W = 2,
-    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
-};
-
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type);
+                             unsigned int type);
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type);
+                           unsigned int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.34.1
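[Editorial note, not part of the patch.] As an aside on the VMX code touched above: vmx_{set,clear}_msr_intercept() pick one of two bitmap halves depending on the MSR index, "low" for 0x00000000-0x00001fff and "high" for 0xc0000000-0xc0001fff (masked down with 0x1fff), asserting on anything else. The following is a standalone sketch of that range selection only, with illustrative names that do not exist in the Xen tree.

enum msr_bitmap_half { MSR_BITMAP_LOW, MSR_BITMAP_HIGH, MSR_BITMAP_NONE };

/* Editorial sketch of the range check in vmx_{set,clear}_msr_intercept(). */
static enum msr_bitmap_half msr_bitmap_half(unsigned int msr,
                                            unsigned int *index)
{
    if ( msr <= 0x1fff )
    {
        *index = msr;          /* bit index into read_low/write_low */
        return MSR_BITMAP_LOW;
    }

    if ( msr >= 0xc0000000 && msr <= 0xc0001fff )
    {
        *index = msr & 0x1fff; /* bit index into read_high/write_high */
        return MSR_BITMAP_HIGH;
    }

    return MSR_BITMAP_NONE;    /* out of range: the real code asserts */
}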
From: Xenia Ragiadakou
CC: Xenia Ragiadakou, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH v2 3/3] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
Date: Mon, 24 Apr 2023 11:20:38 +0300
Message-ID: <20230424082038.541122-4-xenia.ragiadakou@amd.com>
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
Add hvm_funcs hooks for {set,clear}_msr_intercept(), so that the MSR
intercepts can be controlled from common vPMU code.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
---
Changes in v2:
  - change the parameter types to unsigned int

 xen/arch/x86/cpu/vpmu_amd.c             | 10 ++++-----
 xen/arch/x86/cpu/vpmu_intel.c           | 24 ++++++++--------
 xen/arch/x86/hvm/svm/svm.c              |  7 +++---
 xen/arch/x86/hvm/vmx/vmcs.c             |  8 +++----
 xen/arch/x86/hvm/vmx/vmx.c              |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      | 30 +++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h |  8 +++----
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h |  8 +++----
 8 files changed, 65 insertions(+), 32 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index da8e906972..77dee08588 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,9 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_clear_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_W);
-        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
+        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
+        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -169,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_set_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
+        hvm_set_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 395830e803..ed32d4d754 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -219,22 +219,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -242,21 +242,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 3ee0805ff3..cbd8eff270 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -277,7 +277,8 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
+void cf_check svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
@@ -291,8 +292,8 @@ void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
         __set_bit(msr * 2 + 1, msr_bit);
 }
 
-void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int flags)
+void cf_check svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e7b67313a2..c051bcb91b 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -891,8 +891,8 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int type)
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -923,8 +923,8 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
     ASSERT(!"MSR out of range for interception\n");
 }
 
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           unsigned int type)
+void cf_check vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 8a873147a5..6a33e92b0a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2742,6 +2742,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_vlapic_mode = vmx_vlapic_msr_changed,
     .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
+    .set_msr_intercept = vmx_set_msr_intercept,
+    .clear_msr_intercept = vmx_clear_msr_intercept,
     .enable_msr_interception = vmx_enable_msr_interception,
     .altp2m_vcpu_update_p2m = vmx_vcpu_update_eptp,
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 5740a64281..96ff235614 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -213,6 +213,10 @@ struct hvm_function_table {
                                 paddr_t *L1_gpa, unsigned int *page_order,
                                 uint8_t *p2m_acc, struct npfec npfec);
 
+    void (*set_msr_intercept)(struct vcpu *v, unsigned int msr,
+                              unsigned int flags);
+    void (*clear_msr_intercept)(struct vcpu *v, unsigned int msr,
+                                unsigned int flags);
     void (*enable_msr_interception)(struct domain *d, uint32_t msr);
 
     /* Alternate p2m */
@@ -647,6 +651,20 @@ static inline int nhvm_hap_walk_L1_p2m(
         v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                         unsigned int flags)
+{
+    if ( hvm_funcs.set_msr_intercept )
+        alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags);
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                           unsigned int flags)
+{
+    if ( hvm_funcs.clear_msr_intercept )
+        alternative_vcall(hvm_funcs.clear_msr_intercept, v, msr, flags);
+}
+
 static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
 {
     alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
@@ -905,6 +923,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     ASSERT_UNREACHABLE();
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                         unsigned int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                           unsigned int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index 94deb0a236..5e84b4f4c1 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -603,10 +603,10 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-void svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           unsigned int flags);
-void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int flags);
+void cf_check svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int flags);
+void cf_check svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int flags);
 #define svm_disable_intercept_for_msr(v, msr) \
     svm_clear_msr_intercept(v, msr, MSR_RW)
 #define svm_enable_intercept_for_msr(v, msr) \
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index af6a95b5d9..7f7d785977 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -633,10 +633,10 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int type);
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           unsigned int type);
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int type);
+void cf_check vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.34.1
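[Editorial note, not part of the patch.] The shape of the indirection patch 3 introduces can be summarised in a standalone sketch: a per-vendor function table carries optional {set,clear}_msr_intercept hooks, and common code goes through wrappers that tolerate a vendor which does not provide them. The names below are illustrative only, and the real wrappers in hvm.h dispatch through alternative_vcall() rather than a plain indirect call.

struct vcpu;

/* Hypothetical per-vendor hook table mirroring the hvm_funcs additions. */
struct hook_table {
    void (*set_msr_intercept)(struct vcpu *v, unsigned int msr,
                              unsigned int flags);
    void (*clear_msr_intercept)(struct vcpu *v, unsigned int msr,
                                unsigned int flags);
};

static struct hook_table hooks;  /* filled in by SVM or VMX setup code */

static inline void hook_set_msr_intercept(struct vcpu *v, unsigned int msr,
                                          unsigned int flags)
{
    if ( hooks.set_msr_intercept )
        hooks.set_msr_intercept(v, msr, flags);
}

static inline void hook_clear_msr_intercept(struct vcpu *v, unsigned int msr,
                                            unsigned int flags)
{
    if ( hooks.clear_msr_intercept )
        hooks.clear_msr_intercept(v, msr, flags);
}

With the hooks wired up this way, common vPMU code such as core2_vpmu_set_msr_bitmap() only ever sees the wrapper and no longer needs to know whether it is running on SVM or VMX.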