From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Philippe Mathieu-Daudé, Stefan Hajnoczi, Vladimir Sementsov-Ogievskiy,
 Cleber Rosa, Xie Changlong, Paul Durrant, Ari Sundholm, Jason Wang,
 Eric Blake, John Snow, Eduardo Habkost, Wen Congyang, Alberto Garcia,
 Anthony Perard, "Michael S. Tsirkin", Stefano Stabellini,
 qemu-block@nongnu.org, Juan Quintela, Paolo Bonzini, Kevin Wolf,
 Coiby Xu, Fabiano Rosas, Hanna Reitz, Zhang Chen, Daniel P. Berrangé,
 Pavel Dovgalyuk, Peter Xu, Emanuele Giuseppe Esposito, Fam Zheng,
 Leonardo Bras, David Hildenbrand, Li Zhijian,
 xen-devel@lists.xenproject.org
Subject: [PATCH v2 01/14] virtio-scsi: replace AioContext lock with tmf_bh_lock
Date: Tue, 5 Dec 2023 13:19:58 -0500
Message-ID: <20231205182011.1976568-2-stefanha@redhat.com>
In-Reply-To: <20231205182011.1976568-1-stefanha@redhat.com>
References: <20231205182011.1976568-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Protect the Task Management Function BH state with a lock. The TMF BH
runs in the main loop thread. An IOThread might process a TMF at the
same time as the TMF BH is running. Therefore tmf_bh_list and tmf_bh
must be protected by a lock.

Run TMF request completion in the IOThread using aio_wait_bh_oneshot().
This avoids the need for additional locking to protect the virtqueue
and SCSI layer state.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
A standalone sketch illustrating the locking pattern used in this patch
follows after the patch footer.

 include/hw/virtio/virtio-scsi.h |  3 +-
 hw/scsi/virtio-scsi.c           | 62 ++++++++++++++++++++++-----------
 2 files changed, 43 insertions(+), 22 deletions(-)

diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 779568ab5d..da8cb928d9 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -85,8 +85,9 @@ struct VirtIOSCSI {
 
     /*
      * TMFs deferred to main loop BH. These fields are protected by
-     * virtio_scsi_acquire().
+     * tmf_bh_lock.
     */
+    QemuMutex tmf_bh_lock;
     QEMUBH *tmf_bh;
     QTAILQ_HEAD(, VirtIOSCSIReq) tmf_bh_list;
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 9c751bf296..4f8d35facc 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -123,6 +123,30 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
     virtio_scsi_free_req(req);
 }
 
+static void virtio_scsi_complete_req_bh(void *opaque)
+{
+    VirtIOSCSIReq *req = opaque;
+
+    virtio_scsi_complete_req(req);
+}
+
+/*
+ * Called from virtio_scsi_do_one_tmf_bh() in main loop thread. The main loop
+ * thread cannot touch the virtqueue since that could race with an IOThread.
+ */
+static void virtio_scsi_complete_req_from_main_loop(VirtIOSCSIReq *req)
+{
+    VirtIOSCSI *s = req->dev;
+
+    if (!s->ctx || s->ctx == qemu_get_aio_context()) {
+        /* No need to schedule a BH when there is no IOThread */
+        virtio_scsi_complete_req(req);
+    } else {
+        /* Run request completion in the IOThread */
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_complete_req_bh, req);
+    }
+}
+
 static void virtio_scsi_bad_req(VirtIOSCSIReq *req)
 {
     virtio_error(VIRTIO_DEVICE(req->dev), "wrong size for virtio-scsi headers");
@@ -338,10 +362,7 @@ static void virtio_scsi_do_one_tmf_bh(VirtIOSCSIReq *req)
 
 out:
     object_unref(OBJECT(d));
-
-    virtio_scsi_acquire(s);
-    virtio_scsi_complete_req(req);
-    virtio_scsi_release(s);
+    virtio_scsi_complete_req_from_main_loop(req);
 }
 
 /* Some TMFs must be processed from the main loop thread */
@@ -354,18 +375,16 @@ static void virtio_scsi_do_tmf_bh(void *opaque)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
+            QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
+            QTAILQ_INSERT_TAIL(&reqs, req, next);
+        }
 
-    QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
-        QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
-        QTAILQ_INSERT_TAIL(&reqs, req, next);
+        qemu_bh_delete(s->tmf_bh);
+        s->tmf_bh = NULL;
     }
 
-    qemu_bh_delete(s->tmf_bh);
-    s->tmf_bh = NULL;
-
-    virtio_scsi_release(s);
-
     QTAILQ_FOREACH_SAFE(req, &reqs, next, tmp) {
         QTAILQ_REMOVE(&reqs, req, next);
         virtio_scsi_do_one_tmf_bh(req);
@@ -379,8 +398,7 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
-
+    /* Called after ioeventfd has been stopped, so tmf_bh_lock is not needed */
     if (s->tmf_bh) {
         qemu_bh_delete(s->tmf_bh);
         s->tmf_bh = NULL;
@@ -393,19 +411,19 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
         req->resp.tmf.response = VIRTIO_SCSI_S_TARGET_FAILURE;
         virtio_scsi_complete_req(req);
     }
-
-    virtio_scsi_release(s);
 }
 
 static void virtio_scsi_defer_tmf_to_bh(VirtIOSCSIReq *req)
 {
     VirtIOSCSI *s = req->dev;
 
-    QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
 
-    if (!s->tmf_bh) {
-        s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
-        qemu_bh_schedule(s->tmf_bh);
+        if (!s->tmf_bh) {
+            s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
+            qemu_bh_schedule(s->tmf_bh);
+        }
     }
 }
 
@@ -1235,6 +1253,7 @@ static void virtio_scsi_device_realize(DeviceState *dev, Error **errp)
     Error *err = NULL;
 
     QTAILQ_INIT(&s->tmf_bh_list);
+    qemu_mutex_init(&s->tmf_bh_lock);
 
     virtio_scsi_common_realize(dev,
                                virtio_scsi_handle_ctrl,
@@ -1277,6 +1296,7 @@ static void virtio_scsi_device_unrealize(DeviceState *dev)
 
     qbus_set_hotplug_handler(BUS(&s->bus), NULL);
     virtio_scsi_common_unrealize(dev);
+    qemu_mutex_destroy(&s->tmf_bh_lock);
 }
 
 static Property virtio_scsi_properties[] = {
-- 
2.43.0
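
The sketch promised above the diffstat: the core of this patch is the
"defer under a small lock, steal the whole list, process with the lock
dropped" pattern seen in virtio_scsi_defer_tmf_to_bh() and
virtio_scsi_do_tmf_bh(). Below is a minimal standalone approximation
using POSIX threads instead of QEMU's QemuMutex/BH machinery. All names
in it (demo_lock, deferred_list, defer_item, drain_and_process,
iothread_stand_in) are invented for this illustration and do not exist
in QEMU.

/*
 * Illustration only, not QEMU code. Compile with: cc -pthread demo.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct item {
    int id;
    struct item *next;
};

/* Plays the role of tmf_bh_lock: guards only the deferred list */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *deferred_list; /* plays the role of tmf_bh_list */

/* Any thread may defer work, cf. virtio_scsi_defer_tmf_to_bh() */
static void defer_item(int id)
{
    struct item *it = malloc(sizeof(*it));

    it->id = id;
    pthread_mutex_lock(&demo_lock);
    it->next = deferred_list;
    deferred_list = it;
    pthread_mutex_unlock(&demo_lock);
}

/*
 * The "main loop" side, cf. virtio_scsi_do_tmf_bh(): steal the whole
 * list while holding the lock, then process it with the lock dropped
 * so processing never blocks threads that are still deferring work.
 */
static void drain_and_process(void)
{
    struct item *local;

    pthread_mutex_lock(&demo_lock);
    local = deferred_list;
    deferred_list = NULL;
    pthread_mutex_unlock(&demo_lock);

    while (local) {
        struct item *next = local->next;

        printf("processing deferred item %d\n", local->id);
        free(local);
        local = next;
    }
}

/* Stands in for an IOThread submitting TMF requests */
static void *iothread_stand_in(void *opaque)
{
    for (int i = 0; i < 3; i++) {
        defer_item(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t thread;

    pthread_create(&thread, NULL, iothread_stand_in, NULL);
    pthread_join(thread, NULL);
    drain_and_process();
    return 0;
}

The property worth noting: demo_lock, like tmf_bh_lock, protects only
the list manipulation. Per-item processing happens after the whole list
has been stolen, so slow work never runs with the lock held and never
blocks threads that are deferring new items.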