From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org,
    David Hildenbrand, Andrew Morton, "Michael S. Tsirkin",
    John Hubbard, Oscar Salvador, Michal Hocko, Jason Wang, Xuan Zhuo
Subject: [PATCH v1 5/5] virtio-mem: check if the config changed before (fake) offlining memory
Date: Tue, 27 Jun 2023 13:22:20 +0200
Message-Id: <20230627112220.229240-6-david@redhat.com>
In-Reply-To: <20230627112220.229240-1-david@redhat.com>
References: <20230627112220.229240-1-david@redhat.com>

If we repeatedly fail to (fake) offline memory, we won't be sending any
unplug requests to the device. However, we only check whether the config
changed when sending such (un)plug requests. So we could end up trying to
offline memory for a long time, even though the config changed already and
we're not supposed to unplug memory anymore.

Let's optimize for that case, identified while testing the
offline_and_remove_memory() timeout and simulating it repeatedly running
into the timeout.
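For context, vm->config_changed is set from the virtio config-change
callback and is what makes the driver's workqueue re-read the requested
size. Roughly, that path looks like the following simplified sketch (see
virtio_mem_config_changed() in virtio_mem.c for the actual code; details
may differ from upstream):

static void virtio_mem_config_changed(struct virtio_device *vdev)
{
	struct virtio_mem *vm = vdev->priv;

	/* Remember that the config changed ... */
	atomic_set(&vm->config_changed, 1);
	/* ... and kick the workqueue so the new requested size is processed. */
	virtio_mem_retry(vm);
}

With this patch, long-running (fake) offlining loops also observe that flag
and bail out with -EAGAIN, instead of only noticing the change once the
next unplug request is sent to the device.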
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/virtio_mem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 7468b4a907e3..247fb3e0ce61 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1922,6 +1922,10 @@ static int virtio_mem_sbm_unplug_sb_online(struct virtio_mem *vm,
 	unsigned long start_pfn;
 	int rc;
 
+	/* Stop fake offlining attempts if the config changed. */
+	if (atomic_read(&vm->config_changed))
+		return -EAGAIN;
+
 	start_pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
 			     sb_id * vm->sbm.sb_size);
 
@@ -2233,6 +2237,10 @@ static int virtio_mem_bbm_unplug_request(struct virtio_mem *vm, uint64_t diff)
 	virtio_mem_bbm_for_each_bb_rev(vm, bb_id, VIRTIO_MEM_BBM_BB_ADDED) {
 		cond_resched();
 
+		/* Stop (fake) offlining attempts if the config changed. */
+		if (atomic_read(&vm->config_changed))
+			return -EAGAIN;
+
 		/*
 		 * As we're holding no locks, these checks are racy,
 		 * but we don't care.
-- 
2.40.1