From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich
Subject: [PATCH] x86/dpci: do not leak pending interrupts on CPU offline
Date: Thu, 3 Oct 2024 16:20:36 +0200
Message-ID: <20241003142036.43287-1-roger.pau@citrix.com>

The current dpci logic relies on a softirq being executed as a side effect
of the cpu_notifier_call_chain() call in the code path that offlines the
target CPU.  However, the call to cpu_notifier_call_chain() won't trigger
any softirq processing, and even if it did, such processing should be done
after all interrupts have been migrated off the current CPU, otherwise new
pending dpci interrupts could still appear.

The ASSERT in the CPU notifier callback is currently fairly easy to trigger
by offlining a CPU from a PVH dom0.

Solve this by instead moving any dpci interrupts still pending processing
off the CPU once it is dead.  This might introduce more latency than
attempting to drain the list before the CPU is put offline, but it's less
complex, and CPU online/offline is not a common action.  Any extra latency
introduced should be tolerable.

Fixes: f6dd295381f4 ('dpci: replace tasklet with softirq')
Signed-off-by: Roger Pau Monné
Acked-by: Andrew Cooper
---
 xen/drivers/passthrough/x86/hvm.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index d3627e4af71b..f5faff7a499a 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -1105,23 +1105,27 @@ static int cf_check cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
     unsigned int cpu = (unsigned long)hcpu;
+    unsigned long flags;
 
     switch ( action )
     {
     case CPU_UP_PREPARE:
         INIT_LIST_HEAD(&per_cpu(dpci_list, cpu));
         break;
+
     case CPU_UP_CANCELED:
-    case CPU_DEAD:
-        /*
-         * On CPU_DYING this callback is called (on the CPU that is dying)
-         * with an possible HVM_DPIC_SOFTIRQ pending - at which point we can
-         * clear out any outstanding domains (by the virtue of the idle loop
-         * calling the softirq later). In CPU_DEAD case the CPU is deaf and
-         * there are no pending softirqs for us to handle so we can chill.
-         */
         ASSERT(list_empty(&per_cpu(dpci_list, cpu)));
         break;
+
+    case CPU_DEAD:
+        if ( list_empty(&per_cpu(dpci_list, cpu)) )
+            break;
+        /* Take whatever dpci interrupts are pending on the dead CPU. */
+        local_irq_save(flags);
+        list_splice_init(&per_cpu(dpci_list, cpu), &this_cpu(dpci_list));
+        local_irq_restore(flags);
+        raise_softirq(HVM_DPCI_SOFTIRQ);
+        break;
     }
 
     return NOTIFY_DONE;
-- 
2.46.0
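
Editor's note: the fix above follows a common pattern: when a CPU dies,
whatever work is still queued on its per-CPU list is spliced onto the list
of the CPU running the notifier, and the softirq is raised there so nothing
is lost.  Below is a minimal, self-contained sketch of that pattern in
plain C, for illustration only.  It uses a toy singly-linked list and
hypothetical names (struct item, pending_list, drain_dead_cpu) rather than
Xen's per_cpu/list_splice_init/raise_softirq machinery, and it omits the
interrupt-disabled section around the splice.

/* Toy model of "splice a dead CPU's pending list onto the local CPU's". */
#include <stdio.h>

#define NR_CPUS 4

struct item {
    int irq;
    struct item *next;
};

/* One pending list per CPU (toy stand-in for per_cpu(dpci_list, cpu)). */
static struct item *pending_list[NR_CPUS];

/* Queue an item at the head of a CPU's pending list. */
static void queue_pending(unsigned int cpu, struct item *it)
{
    it->next = pending_list[cpu];
    pending_list[cpu] = it;
}

/*
 * Move everything still pending on a dead CPU onto the current CPU's
 * list, so it is eventually processed there instead of being leaked.
 */
static void drain_dead_cpu(unsigned int dead, unsigned int current_cpu)
{
    struct item *it = pending_list[dead];

    if ( !it )
        return;                      /* nothing pending, nothing to do */

    /* Find the tail of the dead CPU's list ... */
    while ( it->next )
        it = it->next;

    /* ... and splice the whole list onto the head of the local one. */
    it->next = pending_list[current_cpu];
    pending_list[current_cpu] = pending_list[dead];
    pending_list[dead] = NULL;       /* dead CPU's list is now empty */
}

int main(void)
{
    struct item a = { .irq = 10 }, b = { .irq = 11 }, c = { .irq = 12 };

    queue_pending(2, &a);            /* two interrupts pending on CPU 2 */
    queue_pending(2, &b);
    queue_pending(0, &c);            /* one already pending locally on CPU 0 */

    drain_dead_cpu(2, 0);            /* CPU 2 goes offline; CPU 0 takes over */

    for ( struct item *it = pending_list[0]; it; it = it->next )
        printf("pending on CPU0: irq %d\n", it->irq);

    return 0;
}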