From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Juergen Gross, Andrew Cooper, Ross Lagerwall, Konrad Rzeszutek Wilk
Date: Tue, 5 Nov 2019 19:49:09 +0000
Message-ID: <20191105194909.32234-1-andrew.cooper3@citrix.com>
In-Reply-To: <20191105194317.16232-3-andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v1.5] x86/livepatch: Prevent patching with active waitqueues

The safety of livepatching depends on every stack having been unwound, but
there is one corner case where this is not true.  The Sharing/Paging/Monitor
infrastructure may use waitqueues, which copy the stack frame sideways and
longjmp() to a different vcpu.

This case is rare, and can be worked around by pausing the offending
domain(s), waiting for their rings to drain, then performing a livepatch.

If there is an active waitqueue, fail the livepatch attempt with -EBUSY,
which is preferable to the fireworks which occur from trying to unwind the
old stack frame at a later point.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Konrad Rzeszutek Wilk
CC: Ross Lagerwall
CC: Juergen Gross

This fix wants backporting, and is long overdue for posting upstream.

v1.5:
 * Send out a non-stale patch this time.
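For readers unfamiliar with the mechanism: a minimal standalone sketch of
the park/resume control flow, in plain ISO C with setjmp()/longjmp().  This
is not Xen's actual implementation (xen/common/wait.c additionally
memcpy()s the stack frame aside so it can be resumed on a different vcpu);
it only illustrates why a parked frame resumes into whatever code was live
when it was saved:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf parked;

    int main(void)
    {
        if ( setjmp(parked) == 0 )
        {
            /* A guest request needs external help: park this frame. */
            puts("frame parked on waitqueue");

            /* Much later, wake_up() re-enters the saved context ... */
            longjmp(parked, 1);
        }

        /*
         * ... and execution continues at the instruction after setjmp().
         * A livepatch applied while the frame was parked would leave this
         * resume pointing into the stale, pre-patch function body.
         */
        puts("frame resumed");
        return 0;
    }

Because a frame can stay parked indefinitely, there is no bound on how long
the old code would need to remain callable - hence refusing the patch
outright.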
---
 xen/arch/arm/livepatch.c    |  5 +++++
 xen/arch/x86/livepatch.c    | 40 ++++++++++++++++++++++++++++++++++++++++
 xen/common/livepatch.c      |  8 ++++++++
 xen/include/xen/livepatch.h |  1 +
 4 files changed, 54 insertions(+)

diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 00c5e2bc45..915e9d926a 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -18,6 +18,11 @@
 
 void *vmap_of_xen_text;
 
+int arch_livepatch_safety_check(void)
+{
+    return 0;
+}
+
 int arch_livepatch_quiesce(void)
 {
     mfn_t text_mfn;
diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
index c82cf53b9e..2749cbc5cf 100644
--- a/xen/arch/x86/livepatch.c
+++ b/xen/arch/x86/livepatch.c
@@ -10,10 +10,50 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
 
+static bool has_active_waitqueue(const struct vm_event_domain *ved)
+{
+    /* ved may be xzalloc()'d without INIT_LIST_HEAD() yet. */
+    return (ved && !list_head_is_null(&ved->wq.list) &&
+            !list_empty(&ved->wq.list));
+}
+
+/*
+ * x86's implementation of waitqueue violates the livepatching safety principle
+ * of having unwound every CPU's stack before modifying live content.
+ *
+ * Search through every domain and check that no vCPUs have an active
+ * waitqueue.
+ */
+int arch_livepatch_safety_check(void)
+{
+    struct domain *d;
+
+    for_each_domain ( d )
+    {
+#ifdef CONFIG_MEM_SHARING
+        if ( has_active_waitqueue(d->vm_event_share) )
+            goto fail;
+#endif
+#ifdef CONFIG_MEM_PAGING
+        if ( has_active_waitqueue(d->vm_event_paging) )
+            goto fail;
+#endif
+        if ( has_active_waitqueue(d->vm_event_monitor) )
+            goto fail;
+    }
+
+    return 0;
+
+ fail:
+    printk(XENLOG_ERR LIVEPATCH "%pd found with active waitqueue\n", d);
+    return -EBUSY;
+}
+
 int arch_livepatch_quiesce(void)
 {
     /* Disable WP to allow changes to read-only pages. */
diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
index 962647616a..8386e611f2 100644
--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -1060,6 +1060,14 @@ static int apply_payload(struct payload *data)
     unsigned int i;
     int rc;
 
+    rc = arch_livepatch_safety_check();
+    if ( rc )
+    {
+        printk(XENLOG_ERR LIVEPATCH "%s: Safety checks failed: %d\n",
+               data->name, rc);
+        return rc;
+    }
+
     printk(XENLOG_INFO LIVEPATCH "%s: Applying %u functions\n",
            data->name, data->nfuncs);
 
diff --git a/xen/include/xen/livepatch.h b/xen/include/xen/livepatch.h
index 1b1817ca0d..69ede75d20 100644
--- a/xen/include/xen/livepatch.h
+++ b/xen/include/xen/livepatch.h
@@ -104,6 +104,7 @@ static inline int livepatch_verify_distance(const struct livepatch_func *func)
  * These functions are called around the critical region patching live code,
  * for an architecture to take make appropriate global state adjustments.
  */
+int arch_livepatch_safety_check(void);
 int arch_livepatch_quiesce(void);
 void arch_livepatch_revive(void);
 
-- 
2.11.0
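A side note on has_active_waitqueue(): the xzalloc() comment is
load-bearing.  A zeroed struct list_head is distinguishable from an
initialised-but-empty one, and list_empty() alone would misclassify the
former.  A self-contained sketch of the distinction, with local stand-ins
mimicking the list helpers (the semantics assumed for list_head_is_null()
are NULL next/prev pointers):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct list_head {
        struct list_head *next, *prev;
    };

    /* Local stand-ins for the list API used by has_active_waitqueue(). */
    static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
    static bool list_head_is_null(const struct list_head *h)
    {
        /* Assumed semantics: a zeroed, never-initialised list head. */
        return !h->next && !h->prev;
    }
    static bool list_empty(const struct list_head *h) { return h->next == h; }

    int main(void)
    {
        struct list_head zeroed, inited;

        memset(&zeroed, 0, sizeof(zeroed)); /* as if from xzalloc() */
        INIT_LIST_HEAD(&inited);

        /* Without the null check, list_empty(&zeroed) returns false
         * (NULL != &zeroed), which would misreport an active waitqueue. */
        printf("zeroed: null=%d\n", list_head_is_null(&zeroed));
        printf("inited: null=%d empty=%d\n",
               list_head_is_null(&inited), list_empty(&inited));
        return 0;
    }

Skipping the list_head_is_null() check would make a domain whose vm_event
structure had merely been allocated, never initialised, spuriously fail the
safety check with -EBUSY.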