From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Tim Deegan, George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monné, Ian Jackson
Subject: [PATCH][4.15] x86/shadow: suppress "fast fault path" optimization
without reserved bits
Date: Thu, 25 Feb 2021 14:03:29 +0100

When none of the physical address bits in PTEs are reserved, we can't
create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
this case, which is most easily achieved by never creating any magic
entries.

To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
such hardware.

While at it, also avoid using an MMIO magic entry when that would
truncate the incoming GFN.

Requested-by: Andrew Cooper
Signed-off-by: Jan Beulich
Acked-by: Tim Deegan
---
I wonder if subsequently we couldn't arrange for SMEP/SMAP faults to be
utilized instead, on capable hardware (which might well be all hardware
with such a large physical address width).

I further wonder whether SH_L1E_MMIO_GFN_MASK couldn't / shouldn't be
widened. I don't see a reason why it would need confining to the low 32
bits of the PTE - using the full space up to bit 50 ought to be fine
(i.e. just one address bit left set in the magic mask), and we wouldn't
even need that many to encode a 40-bit GFN (i.e. the extra guarding
added here wouldn't then be needed in the first place).

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -499,7 +499,8 @@ _sh_propagate(struct vcpu *v,
     {
         /* Guest l1e maps emulated MMIO space */
         *sp = sh_l1e_mmio(target_gfn, gflags);
-        d->arch.paging.shadow.has_fast_mmio_entries = true;
+        if ( sh_l1e_is_magic(*sp) )
+            d->arch.paging.shadow.has_fast_mmio_entries = true;
         goto done;
     }

--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -281,7 +281,8 @@ shadow_put_page_from_l1e(shadow_l1e_t sl
  * pagetables.
  *
  * This is only feasible for PAE and 64bit Xen: 32-bit non-PAE PTEs don't
- * have reserved bits that we can use for this.
+ * have reserved bits that we can use for this. And even there it can only
+ * be used if the processor doesn't use all 52 address bits.
  */

 #define SH_L1E_MAGIC 0xffffffff00000001ULL
@@ -291,14 +292,24 @@ static inline bool sh_l1e_is_magic(shado
 }

 /* Guest not present: a single magic value */
-static inline shadow_l1e_t sh_l1e_gnp(void)
+static inline shadow_l1e_t sh_l1e_gnp_raw(void)
 {
     return (shadow_l1e_t){ -1ULL };
 }

+static inline shadow_l1e_t sh_l1e_gnp(void)
+{
+    /*
+     * On systems with no reserved physical address bits we can't engage the
+     * fast fault path.
+     */
+    return paddr_bits < PADDR_BITS ? sh_l1e_gnp_raw()
+                                   : shadow_l1e_empty();
+}
+
 static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
 {
-    return sl1e.l1 == sh_l1e_gnp().l1;
+    return sl1e.l1 == sh_l1e_gnp_raw().l1;
 }

 /*
@@ -313,9 +324,14 @@ static inline bool sh_l1e_is_gnp(shadow_

 static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
 {
-    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC
-                             | MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK)
-                             | (gflags & (_PAGE_USER|_PAGE_RW))) };
+    unsigned long gfn_val = MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK);
+
+    if ( paddr_bits >= PADDR_BITS ||
+         gfn_x(gfn) != MASK_EXTR(gfn_val, SH_L1E_MMIO_GFN_MASK) )
+        return shadow_l1e_empty();
+
+    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC | gfn_val |
+                             (gflags & (_PAGE_USER | _PAGE_RW))) };
 }

 static inline bool sh_l1e_is_mmio(shadow_l1e_t sl1e)