From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Hanna Reitz, Paul Durrant, Paolo Bonzini, Alberto Garcia,
 Emanuele Giuseppe Esposito, John Snow, Kevin Wolf, Eric Blake,
 Wen Congyang, xen-devel@lists.xenproject.org, Coiby Xu, Stefan Hajnoczi,
 Eduardo Habkost, Xie Changlong, Ari Sundholm, Li Zhijian, Cleber Rosa,
 Juan Quintela, "Michael S. Tsirkin", Daniel P. Berrangé, Jason Wang,
 Vladimir Sementsov-Ogievskiy, Zhang Chen, Peter Xu, Anthony Perard,
 Stefano Stabellini, Leonardo Bras, Pavel Dovgalyuk, Fam Zheng,
 Fabiano Rosas
Subject: [PATCH 01/12] virtio-scsi: replace AioContext lock with tmf_bh_lock
Date: Wed, 29 Nov 2023 14:55:42 -0500
Message-ID: <20231129195553.942921-2-stefanha@redhat.com>
In-Reply-To: <20231129195553.942921-1-stefanha@redhat.com>
References: <20231129195553.942921-1-stefanha@redhat.com>

Protect the Task Management Function BH state with a lock. The TMF BH
runs in the main loop thread. An IOThread might process a TMF at the
same time as the TMF BH is running. Therefore tmf_bh_list and tmf_bh
must be protected by a lock.

Run TMF request completion in the IOThread using aio_wait_bh_oneshot().
This avoids more locking to protect the virtqueue and SCSI layer state.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Reviewed-by: Kevin Wolf
---
 include/hw/virtio/virtio-scsi.h |  3 +-
 hw/scsi/virtio-scsi.c           | 62 ++++++++++++++++++++++-----------
 2 files changed, 43 insertions(+), 22 deletions(-)

diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 779568ab5d..da8cb928d9 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -85,8 +85,9 @@ struct VirtIOSCSI {
 
     /*
      * TMFs deferred to main loop BH. These fields are protected by
-     * virtio_scsi_acquire().
+     * tmf_bh_lock.
      */
+    QemuMutex tmf_bh_lock;
     QEMUBH *tmf_bh;
     QTAILQ_HEAD(, VirtIOSCSIReq) tmf_bh_list;
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 9c751bf296..4f8d35facc 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -123,6 +123,30 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
     virtio_scsi_free_req(req);
 }
 
+static void virtio_scsi_complete_req_bh(void *opaque)
+{
+    VirtIOSCSIReq *req = opaque;
+
+    virtio_scsi_complete_req(req);
+}
+
+/*
+ * Called from virtio_scsi_do_one_tmf_bh() in main loop thread. The main loop
+ * thread cannot touch the virtqueue since that could race with an IOThread.
+ */
+static void virtio_scsi_complete_req_from_main_loop(VirtIOSCSIReq *req)
+{
+    VirtIOSCSI *s = req->dev;
+
+    if (!s->ctx || s->ctx == qemu_get_aio_context()) {
+        /* No need to schedule a BH when there is no IOThread */
+        virtio_scsi_complete_req(req);
+    } else {
+        /* Run request completion in the IOThread */
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_complete_req_bh, req);
+    }
+}
+
 static void virtio_scsi_bad_req(VirtIOSCSIReq *req)
 {
     virtio_error(VIRTIO_DEVICE(req->dev), "wrong size for virtio-scsi headers");
@@ -338,10 +362,7 @@ static void virtio_scsi_do_one_tmf_bh(VirtIOSCSIReq *req)
 
 out:
     object_unref(OBJECT(d));
-
-    virtio_scsi_acquire(s);
-    virtio_scsi_complete_req(req);
-    virtio_scsi_release(s);
+    virtio_scsi_complete_req_from_main_loop(req);
 }
 
 /* Some TMFs must be processed from the main loop thread */
@@ -354,18 +375,16 @@ static void virtio_scsi_do_tmf_bh(void *opaque)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
+            QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
+            QTAILQ_INSERT_TAIL(&reqs, req, next);
+        }
 
-    QTAILQ_FOREACH_SAFE(req, &s->tmf_bh_list, next, tmp) {
-        QTAILQ_REMOVE(&s->tmf_bh_list, req, next);
-        QTAILQ_INSERT_TAIL(&reqs, req, next);
+        qemu_bh_delete(s->tmf_bh);
+        s->tmf_bh = NULL;
     }
 
-    qemu_bh_delete(s->tmf_bh);
-    s->tmf_bh = NULL;
-
-    virtio_scsi_release(s);
-
     QTAILQ_FOREACH_SAFE(req, &reqs, next, tmp) {
         QTAILQ_REMOVE(&reqs, req, next);
         virtio_scsi_do_one_tmf_bh(req);
@@ -379,8 +398,7 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
 
     GLOBAL_STATE_CODE();
 
-    virtio_scsi_acquire(s);
-
+    /* Called after ioeventfd has been stopped, so tmf_bh_lock is not needed */
     if (s->tmf_bh) {
         qemu_bh_delete(s->tmf_bh);
         s->tmf_bh = NULL;
@@ -393,19 +411,19 @@ static void virtio_scsi_reset_tmf_bh(VirtIOSCSI *s)
         req->resp.tmf.response = VIRTIO_SCSI_S_TARGET_FAILURE;
         virtio_scsi_complete_req(req);
     }
-
-    virtio_scsi_release(s);
 }
 
 static void virtio_scsi_defer_tmf_to_bh(VirtIOSCSIReq *req)
 {
     VirtIOSCSI *s = req->dev;
 
-    QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
+    WITH_QEMU_LOCK_GUARD(&s->tmf_bh_lock) {
+        QTAILQ_INSERT_TAIL(&s->tmf_bh_list, req, next);
 
-    if (!s->tmf_bh) {
-        s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
-        qemu_bh_schedule(s->tmf_bh);
+        if (!s->tmf_bh) {
+            s->tmf_bh = qemu_bh_new(virtio_scsi_do_tmf_bh, s);
+            qemu_bh_schedule(s->tmf_bh);
+        }
     }
 }
 
@@ -1235,6 +1253,7 @@ static void virtio_scsi_device_realize(DeviceState *dev, Error **errp)
     Error *err = NULL;
 
     QTAILQ_INIT(&s->tmf_bh_list);
+    qemu_mutex_init(&s->tmf_bh_lock);
 
     virtio_scsi_common_realize(dev,
                                virtio_scsi_handle_ctrl,
@@ -1277,6 +1296,7 @@ static void virtio_scsi_device_unrealize(DeviceState *dev)
 
     qbus_set_hotplug_handler(BUS(&s->bus), NULL);
     virtio_scsi_common_unrealize(dev);
+    qemu_mutex_destroy(&s->tmf_bh_lock);
 }
 
 static Property virtio_scsi_properties[] = {
-- 
2.42.0
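
For readers who want to see the drain-then-process pattern in isolation, here is a
minimal standalone sketch. It uses plain pthreads rather than QEMU's QemuMutex and
WITH_QEMU_LOCK_GUARD helpers, and the names (tmf_req, defer_tmf,
process_deferred_tmfs) are illustrative, not QEMU identifiers. Producers append
deferred requests under a small dedicated lock, and the consumer moves the whole
list out while holding the lock, then completes the requests with the lock
released, mirroring the structure virtio_scsi_defer_tmf_to_bh() and
virtio_scsi_do_tmf_bh() take after this patch.

/* Standalone sketch of the locking pattern; build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct tmf_req {
    int id;
    struct tmf_req *next;
};

static struct tmf_req *tmf_list;                  /* protected by tmf_lock */
static pthread_mutex_t tmf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Producer side, analogous to virtio_scsi_defer_tmf_to_bh() */
static void defer_tmf(int id)
{
    struct tmf_req *req = malloc(sizeof(*req));
    req->id = id;

    pthread_mutex_lock(&tmf_lock);
    req->next = tmf_list;          /* push onto the shared deferred list */
    tmf_list = req;
    pthread_mutex_unlock(&tmf_lock);
}

/* Consumer side, analogous to virtio_scsi_do_tmf_bh() */
static void process_deferred_tmfs(void)
{
    struct tmf_req *drained;

    /* Take the whole list in one short critical section */
    pthread_mutex_lock(&tmf_lock);
    drained = tmf_list;
    tmf_list = NULL;
    pthread_mutex_unlock(&tmf_lock);

    /* Complete requests without holding the lock, as the patch does */
    while (drained) {
        struct tmf_req *req = drained;
        drained = req->next;
        printf("completing TMF %d\n", req->id);
        free(req);
    }
}

int main(void)
{
    defer_tmf(1);
    defer_tmf(2);
    process_deferred_tmfs();
    return 0;
}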