From: Marek Marczykowski-Górecki
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH 1/2] x86/mm: add API for marking only part of a MMIO page read only
Date: Mon, 27 Mar 2023 12:09:15 +0200

In some cases, only a few registers on a page need to be write-protected.
Examples include the USB3 console (64 bytes worth of registers) or MSI-X's
PBA table (which doesn't need to span the whole table either). The current
API allows only marking whole pages read-only, which sometimes covers other
registers that the guest may need to write to.

Currently, when a guest tries to write to an MMIO page on mmio_ro_ranges,
it is either crashed immediately on the EPT violation (if HVM), or it gets
a #PF (if PV). In the Linux PV case, if the access came from userspace
(e.g. via /dev/mem), the kernel will try to fix it up by updating the page
tables (which Xen will again force to read-only) and will hit that #PF
again, looping endlessly.
Both behaviors are undesirable if the guest could actually be allowed the
write.

Introduce an API that allows marking part of a page read-only. Since
sub-page permissions are not a thing in page tables, do this via emulation
(or simply the page fault handler, in the PV case) that handles the writes
that are supposed to be allowed. Those writes require the page to be mapped
into Xen, so the subpage_mmio_ro_add() function takes the fixmap index of
the page. The page needs to be added to mmio_ro_ranges first anyway.

Sub-page ranges are stored using a rangeset for each added page, and those
pages are kept on a plain list (as there aren't supposed to be many pages
needing this precise r/o control).

The mechanism this API plugs into is slightly different for PV and HVM. For
both paths, it's plugged into mmio_ro_emulated_write(). For PV, that is
already called for a #PF on a read-only MMIO page. For HVM, however, an EPT
violation on a p2m_mmio_direct page results in a direct domain_crash(). To
reach mmio_ro_emulated_write(), change how write violations for
p2m_mmio_direct are handled - specifically, treat them similarly to
p2m_ioreq_server. This makes the relevant ioreq handler be called, which
finally ends up calling mmio_ro_emulated_write().

Both of those paths need the MFN the guest tried to write to (to check
which part of the page is supposed to be read-only, and where the page is
mapped for writes). This information currently isn't available directly in
mmio_ro_emulated_write(), but in both cases it is already resolved
somewhere higher in the call tree. Pass it down to mmio_ro_emulated_write()
via the new mmio_ro_emulate_ctxt.mfn field.

Signed-off-by: Marek Marczykowski-Górecki
---
Shadow mode is not tested, but I don't expect it to work differently from
HAP in areas related to this patch.
The locking used should make it safe to use similarly to mmio_ro_ranges,
but frankly the only user (introduced in the next patch) could go without
locking at all, as subpage_mmio_ro_add() is called only before any domain
is constructed and subpage_mmio_ro_remove() is never called.
---
 xen/arch/x86/hvm/emulate.c      |   2 +-
 xen/arch/x86/hvm/hvm.c          |   3 +-
 xen/arch/x86/include/asm/mm.h   |  22 ++++-
 xen/arch/x86/mm.c               | 181 +++++++++++++++++++++++++++++++++-
 xen/arch/x86/pv/ro-page-fault.c |   1 +-
 5 files changed, 207 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 95364deb1996..311102724dea 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2733,7 +2733,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
         .write = mmio_ro_emulated_write,
         .validate = hvmemul_validate,
     };
-    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla };
+    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla, .mfn = _mfn(mfn) };
     struct hvm_emulate_ctxt ctxt;
     const struct x86_emulate_ops *ops;
     unsigned int seg, bdf;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d326fa1c0136..f1c928e3e4ee 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1942,7 +1942,8 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( (p2mt == p2m_mmio_dm) ||
          (npfec.write_access &&
-          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
+          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server) ||
+           p2mt == p2m_mmio_direct)) )
     {
         if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index db29e3e2059f..91937d556bac 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -522,9 +522,31 @@ extern struct rangeset *mmio_ro_ranges;
 void memguard_guard_stack(void *p);
 void memguard_unguard_stack(void *p);

+/*
+ * Add more precise r/o marking for a MMIO page. The byte range specified
+ * here will still be R/O, but the rest of the page (not marked as R/O via
+ * another call) will have writes passed through. The write passthrough
+ * requires the caller to provide a fixmap entry.
+ * Since multiple callers can mark different areas of the same page, they
+ * might provide different fixmap entries (although that's very unlikely in
+ * practice). Only the one provided by the first caller will be used. The
+ * return value indicates whether this fixmap entry will be used, or a
+ * different one provided earlier (in which case the caller might decide to
+ * release it).
+ *
+ * Return values:
+ *  - negative: error
+ *  - 0: success, fixmap entry is claimed
+ *  - 1: success, fixmap entry set earlier will be used
+ */
+int subpage_mmio_ro_add(mfn_t mfn, unsigned long offset_s,
+                        unsigned long offset_e, int fixmap_idx);
+int subpage_mmio_ro_remove(mfn_t mfn, unsigned long offset_s,
+                           unsigned long offset_e);
+
 struct mmio_ro_emulate_ctxt {
     unsigned long cr2;
     unsigned int seg, bdf;
+    mfn_t mfn;
 };

 int cf_check mmio_ro_emulated_write(
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 0fe14faa5fa7..b50bdee40b6b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -165,6 +165,19 @@ bool __read_mostly machine_to_phys_mapping_valid;

 struct rangeset *__read_mostly mmio_ro_ranges;

+/* Handling sub-page read-only MMIO regions */
+struct subpage_ro_range {
+    struct list_head list;
+    mfn_t mfn;
+    int fixmap_idx;
+    struct rangeset *ro_bytes;
+    struct rcu_head rcu;
+};
+
+static LIST_HEAD(subpage_ro_ranges);
+static DEFINE_RCU_READ_LOCK(subpage_ro_rcu);
+static DEFINE_SPINLOCK(subpage_ro_lock);
+
 static uint32_t base_disallow_mask;
 /* Global bit is allowed to be set on L1 PTEs. Intended for user mappings. */
 #define L1_DISALLOW_MASK ((base_disallow_mask | _PAGE_GNTTAB) & ~_PAGE_GLOBAL)
@@ -4893,6 +4906,172 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return 0;
 }

+int subpage_mmio_ro_add(
+    mfn_t mfn,
+    unsigned long offset_s,
+    unsigned long offset_e,
+    int fixmap_idx)
+{
+    struct subpage_ro_range *entry = NULL, *iter;
+    int rc;
+
+    ASSERT(rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)));
+    ASSERT(offset_s < PAGE_SIZE);
+    ASSERT(offset_e < PAGE_SIZE);
+
+    spin_lock(&subpage_ro_lock);
+
+    list_for_each_entry( iter, &subpage_ro_ranges, list )
+    {
+        if ( mfn_eq(iter->mfn, mfn) )
+        {
+            entry = iter;
+            break;
+        }
+    }
+    if ( !entry )
+    {
+        /* iter == NULL marks a newly allocated entry */
+        iter = NULL;
+        entry = xmalloc(struct subpage_ro_range);
+        rc = -ENOMEM;
+        if ( !entry )
+            goto err_unlock;
+        entry->mfn = mfn;
+        entry->fixmap_idx = fixmap_idx;
+        entry->ro_bytes = rangeset_new(NULL, "subpage r/o mmio",
+                                       RANGESETF_prettyprint_hex);
+        rc = -ENOMEM;
+        if ( !entry->ro_bytes )
+            goto err_unlock;
+    }
+
+    rc = rangeset_add_range(entry->ro_bytes, offset_s, offset_e);
+    if ( rc < 0 )
+        goto err_unlock;
+
+    if ( !iter )
+        list_add_rcu(&entry->list, &subpage_ro_ranges);
+
+    spin_unlock(&subpage_ro_lock);
+
+    if ( !iter || entry->fixmap_idx == fixmap_idx )
+        return 0;
+    else
+        return 1;
+
+ err_unlock:
+    spin_unlock(&subpage_ro_lock);
+    if ( !iter )
+    {
+        if ( entry )
+        {
+            if ( entry->ro_bytes )
+                rangeset_destroy(entry->ro_bytes);
+            xfree(entry);
+        }
+    }
+    return rc;
+}
+
+static void subpage_mmio_ro_free(struct rcu_head *rcu)
+{
+    struct subpage_ro_range *entry =
+        container_of(rcu, struct subpage_ro_range, rcu);
+
+    rangeset_destroy(entry->ro_bytes);
+    xfree(entry);
+}
+
+int subpage_mmio_ro_remove(
+    mfn_t mfn,
+    unsigned long offset_s,
+    unsigned long offset_e)
+{
+    struct subpage_ro_range *entry = NULL, *iter;
+    int rc;
+
+    ASSERT(offset_s < PAGE_SIZE);
+    ASSERT(offset_e < PAGE_SIZE);
+
+    spin_lock(&subpage_ro_lock);
+
+    list_for_each_entry_rcu( iter, &subpage_ro_ranges, list )
+    {
+        if ( mfn_eq(iter->mfn, mfn) )
+        {
+            entry = iter;
+            break;
+        }
+    }
+    rc = -ENOENT;
+    if ( !entry )
+        goto out_unlock;
+
+    rc = rangeset_remove_range(entry->ro_bytes, offset_s, offset_e);
+    if ( rc < 0 )
+        goto out_unlock;
+
+    rc = 0;
+
+    if ( !rangeset_is_empty(entry->ro_bytes) )
+        goto out_unlock;
+
+    list_del_rcu(&entry->list);
+    call_rcu(&entry->rcu, subpage_mmio_ro_free);
+
+ out_unlock:
+    spin_unlock(&subpage_ro_lock);
+    return rc;
+}
+
+static void subpage_mmio_write_emulate(
+    mfn_t mfn,
+    unsigned long offset,
+    void *data,
+    unsigned int len)
+{
+    struct subpage_ro_range *entry;
+    void __iomem *addr;
+
+    rcu_read_lock(&subpage_ro_rcu);
+
+    list_for_each_entry_rcu( entry, &subpage_ro_ranges, list )
+    {
+        if ( mfn_eq(entry->mfn, mfn) )
+        {
+            if ( rangeset_overlaps_range(entry->ro_bytes, offset,
+                                         offset + len - 1) )
+                goto out_unlock;
+
+            addr = fix_to_virt(entry->fixmap_idx) + offset;
+            switch ( len )
+            {
+            case 1:
+                writeb(*(uint8_t *)data, addr);
+                break;
+            case 2:
+                writew(*(uint16_t *)data, addr);
+                break;
+            case 4:
+                writel(*(uint32_t *)data, addr);
+                break;
+            case 8:
+                writeq(*(uint64_t *)data, addr);
+                break;
+            default:
+                /* mmio_ro_emulated_write() already validated the size */
+                ASSERT_UNREACHABLE();
+            }
+            goto out_unlock;
+        }
+    }
+    gdprintk(XENLOG_WARNING,
+             "ignoring write to R/O MMIO mfn %" PRI_mfn " offset %lx len %u\n",
+             mfn_x(mfn), offset, len);
+
+ out_unlock:
+    rcu_read_unlock(&subpage_ro_rcu);
+}
+
 int cf_check mmio_ro_emulated_write(
     enum x86_segment seg,
     unsigned long offset,
@@ -4911,6 +5090,8 @@ int cf_check mmio_ro_emulated_write(
         return X86EMUL_UNHANDLEABLE;
     }

+    subpage_mmio_write_emulate(mmio_ro_ctxt->mfn, offset & (PAGE_SIZE - 1), p_data, bytes);
+
     return X86EMUL_OKAY;
 }

diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 5963f5ee2d51..91caa2c8f520 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -342,6 +342,7 @@ static int mmio_ro_do_page_fault(struct x86_emulate_ctxt *ctxt,
         return X86EMUL_UNHANDLEABLE;
     }

+    mmio_ro_ctxt.mfn = mfn;
     ctxt->data = &mmio_ro_ctxt;
     if ( pci_ro_mmcfg_decode(mfn_x(mfn), &mmio_ro_ctxt.seg, &mmio_ro_ctxt.bdf) )
         return x86_emulate(ctxt, &mmcfg_intercept_ops);
-- 
git-series 0.9.1
From: Marek Marczykowski-Górecki
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH 2/2] drivers/char: Use sub-page ro API to make just xhci dbc cap RO
Date: Mon, 27 Mar 2023 12:09:16 +0200

... not the whole page, which may contain other registers too. In fact, on
Tiger Lake and newer (at least), this page does contain other registers
that Linux tries to use. And with share=yes, a domU would use them too.

Without this patch, PV dom0 would fail to initialize the controller, while
HVM would be killed on an EPT violation.
Signed-off-by: Marek Marczykowski-Górecki
---
 xen/drivers/char/xhci-dbc.c | 38 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/xhci-dbc.c b/xen/drivers/char/xhci-dbc.c
index 60b781f87202..df2524b0ca18 100644
--- a/xen/drivers/char/xhci-dbc.c
+++ b/xen/drivers/char/xhci-dbc.c
@@ -1226,9 +1226,43 @@ static void __init cf_check dbc_uart_init_postirq(struct serial_port *port)
                             uart->dbc.xhc_dbc_offset),
                     PFN_UP((uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
                            uart->dbc.xhc_dbc_offset +
-                           sizeof(*uart->dbc.dbc_reg)) - 1) )
-        printk(XENLOG_INFO
+                           sizeof(*uart->dbc.dbc_reg)) - 1) ) {
+        printk(XENLOG_WARNING
                "Error while adding MMIO range of device to mmio_ro_ranges\n");
+    }
+    else
+    {
+        unsigned long dbc_regs_start = (uart->dbc.bar_val &
+            PCI_BASE_ADDRESS_MEM_MASK) + uart->dbc.xhc_dbc_offset;
+        unsigned long dbc_regs_end = dbc_regs_start + sizeof(*uart->dbc.dbc_reg);
+
+        /* This being smaller than a page simplifies conditions below */
+        BUILD_BUG_ON(sizeof(*uart->dbc.dbc_reg) >= PAGE_SIZE - 1);
+        if ( dbc_regs_start & (PAGE_SIZE - 1) ||
+             PFN_DOWN(dbc_regs_start) == PFN_DOWN(dbc_regs_end) )
+        {
+            if ( subpage_mmio_ro_add(
+                     _mfn(PFN_DOWN(dbc_regs_start)),
+                     dbc_regs_start & (PAGE_SIZE - 1),
+                     PFN_DOWN(dbc_regs_start) == PFN_DOWN(dbc_regs_end)
+                         ? dbc_regs_end & (PAGE_SIZE - 1)
+                         : PAGE_SIZE - 1,
+                     FIX_XHCI_END) )
+                printk(XENLOG_WARNING
+                       "Error while adding MMIO range of device to subpage_mmio_ro\n");
+        }
+        if ( dbc_regs_end & (PAGE_SIZE - 1) &&
+             PFN_DOWN(dbc_regs_start) != PFN_DOWN(dbc_regs_end) )
+        {
+            if ( subpage_mmio_ro_add(
+                     _mfn(PFN_DOWN(dbc_regs_end)),
+                     0,
+                     dbc_regs_end & (PAGE_SIZE - 1),
+                     FIX_XHCI_END + PFN_DOWN(sizeof(*uart->dbc.dbc_reg))) )
+                printk(XENLOG_WARNING
+                       "Error while adding MMIO range of device to subpage_mmio_ro\n");
+        }
+    }
 #endif
 }

-- 
git-series 0.9.1