From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 for-4.14] x86/vmx: use P2M_ALLOC in vmx_load_pdptrs instead of P2M_UNSHARE
Date: Thu, 18 Jun 2020 07:39:04 -0700
Cc: Kevin Tian, Tamas K Lengyel, Jun Nakajima, Wei Liu, Paul Durrant, Andrew Cooper, Jan Beulich, Roger Pau Monné

While forking VMs running a small RTOS system (Zephyr), a Xen crash has been observed due to a mm-lock order violation while
copying the HVM CPU context from the parent. This issue has been identified as being due to hap_update_paging_modes first getting a lock on the gfn using get_gfn. That call also creates a shared entry in the fork's memory map for the cr3 gfn. The function later calls hap_update_cr3 while holding the paging_lock, which results in the lock-order violation in vmx_load_pdptrs when it tries to unshare the above entry as it grabs the page with the P2M_UNSHARE flag set.

Since vmx_load_pdptrs only reads from the page, its use of P2M_UNSHARE was unnecessary to begin with. P2M_ALLOC is the appropriate flag to ensure the p2m is properly populated.

Note that the lock-order violation is avoided because, before the paging_lock is taken, a lookup is performed with P2M_ALLOC that forks the page, so the second lookup in vmx_load_pdptrs succeeds without having to perform the fork. We keep P2M_ALLOC in vmx_load_pdptrs because there are code paths leading up to it which don't take the paging_lock and have no previous lookup. Currently no other code path leads there with the paging_lock taken, so no further adjustments are necessary.

Signed-off-by: Tamas K Lengyel
Reviewed-by: Kevin Tian
Reviewed-by: Roger Pau Monné
---
v3: expand commit message to explain why there is no lock-order violation
---
 xen/arch/x86/hvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ab19d9424e..cc6d4ece22 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1325,7 +1325,7 @@ static void vmx_load_pdptrs(struct vcpu *v)
     if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
         goto crash;
 
-    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_ALLOC);
    if ( !page )
    {
        /* Ideally you don't want to crash but rather go into a wait 
-- 
2.25.1