From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com
Cc: marcandre.lureau@redhat.com, stefanha@redhat.com, dgilbert@redhat.com,
    vgoyal@redhat.com, mst@redhat.com
Subject: [PATCH 5/6] libvhost-user: Add support to start/stop/flush slave channel
Date: Mon, 25 Jan 2021 13:01:14 -0500
Message-Id: <20210125180115.22936-6-vgoyal@redhat.com>
In-Reply-To: <20210125180115.22936-1-vgoyal@redhat.com>
References: <20210125180115.22936-1-vgoyal@redhat.com>

This patch adds support for starting, stopping and flushing the slave
channel.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
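[Editor's note, illustrative only and not part of the patch to be applied:
the quiescing scheme used by vu_finish_stop_slave() below is easier to see
in a standalone sketch. Requesters check an "open" flag under a mutex
before using the channel, and stop flips that flag under the same mutex,
so once stop holds the lock no request is in flight and no new one can
start. Every name here (channel_mutex, channel_open, send_request,
stop_channel) is invented for illustration.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t channel_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool channel_open = true;

    /* Like vu_message_slave_send_receive(): fail fast once stopped. */
    static bool send_request(int req)
    {
        pthread_mutex_lock(&channel_mutex);
        if (!channel_open) {
            pthread_mutex_unlock(&channel_mutex);
            return false;         /* channel stopped; caller sees an error */
        }
        printf("request %d sent\n", req); /* stand-in for the slave_fd I/O */
        pthread_mutex_unlock(&channel_mutex);
        return true;
    }

    /* Like vu_finish_stop_slave(): quiesce, then notify while locked. */
    static void stop_channel(void)
    {
        pthread_mutex_lock(&channel_mutex); /* waits out in-flight requests */
        channel_open = false;               /* fences out new requests */
        printf("stop-complete notification sent\n");
        pthread_mutex_unlock(&channel_mutex);
    }

    int main(void)
    {
        send_request(1);            /* succeeds */
        stop_channel();
        if (!send_request(2)) {     /* cleanly rejected after stop */
            printf("request rejected after stop\n");
        }
        return 0;
    }
]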
 subprojects/libvhost-user/libvhost-user.c | 103 ++++++++++++++++++++--
 subprojects/libvhost-user/libvhost-user.h |   8 +-
 2 files changed, 105 insertions(+), 6 deletions(-)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 7a56c56dc8..b4c795c63e 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -140,6 +140,8 @@ vu_request_to_string(unsigned int req)
         REQ(VHOST_USER_GET_MAX_MEM_SLOTS),
         REQ(VHOST_USER_ADD_MEM_REG),
         REQ(VHOST_USER_REM_MEM_REG),
+        REQ(VHOST_USER_START_SLAVE_CHANNEL),
+        REQ(VHOST_USER_STOP_SLAVE_CHANNEL),
         REQ(VHOST_USER_MAX),
     };
 #undef REQ
@@ -437,11 +439,11 @@ out:
     return result;
 }
 
-/* Returns true on success, false otherwise */
+/* slave_mutex must be held; it is unlocked upon return. */
 static bool
-vu_message_slave_send_receive(VuDev *dev, VhostUserMsg *vmsg, uint64_t *payload)
+vu_message_slave_send_receive_locked(VuDev *dev, VhostUserMsg *vmsg,
+                                     uint64_t *payload)
 {
-    pthread_mutex_lock(&dev->slave_mutex);
     if (!vu_message_write(dev, dev->slave_fd, vmsg)) {
         pthread_mutex_unlock(&dev->slave_mutex);
         return false;
@@ -456,6 +458,46 @@ vu_message_slave_send_receive(VuDev *dev, VhostUserMsg *vmsg, uint64_t *payload)
     return vu_process_message_reply(dev, vmsg, payload);
 }
 
+/* Returns true on success, false otherwise */
+static bool
+vu_message_slave_send_receive(VuDev *dev, VhostUserMsg *vmsg,
+                              uint64_t *payload)
+{
+    pthread_mutex_lock(&dev->slave_mutex);
+    if (!dev->slave_channel_open) {
+        pthread_mutex_unlock(&dev->slave_mutex);
+        return false;
+    }
+    return vu_message_slave_send_receive_locked(dev, vmsg, payload);
+}
+
+static bool
+vu_finish_stop_slave(VuDev *dev)
+{
+    bool res;
+    uint64_t payload = 0;
+    VhostUserMsg vmsg = {
+        .request = VHOST_USER_SLAVE_STOP_CHANNEL_COMPLETE,
+        .flags = VHOST_USER_VERSION | VHOST_USER_NEED_REPLY_MASK,
+        .size = sizeof(vmsg.payload.u64),
+        .payload.u64 = 0,
+    };
+
+    /*
+     * Taking slave_mutex ensures that no other caller is currently in
+     * the middle of sending or receiving a message on slave_fd, and
+     * setting slave_channel_open to false ensures that any new caller
+     * gets an error back instead of sending a message. It is therefore
+     * now safe to send the stop-finished message to the master.
+     */
+    pthread_mutex_lock(&dev->slave_mutex);
+    dev->slave_channel_open = false;
+    /* This also drops slave_mutex */
+    res = vu_message_slave_send_receive_locked(dev, &vmsg, &payload);
+    res = res && (payload == 0);
+    return res;
+}
+
 /* Kick the log_call_fd if required. */
 static void
 vu_log_kick(VuDev *dev)
@@ -1529,6 +1571,35 @@ vu_set_slave_req_fd(VuDev *dev, VhostUserMsg *vmsg)
     return false;
 }
 
+static bool
+vu_slave_channel_start(VuDev *dev, VhostUserMsg *vmsg)
+{
+    pthread_mutex_lock(&dev->slave_mutex);
+    dev->slave_channel_open = true;
+    pthread_mutex_unlock(&dev->slave_mutex);
+    /* Caller (vu_dispatch()) will send a reply */
+    return false;
+}
+
+static bool
+vu_slave_channel_stop(VuDev *dev, VhostUserMsg *vmsg, bool *reply_sent,
+                      bool *reply_status)
+{
+    vmsg_set_reply_u64(vmsg, 0);
+    *reply_sent = true;
+    *reply_status = false;
+    if (!vu_send_reply(dev, dev->sock, vmsg)) {
+        return false;
+    }
+
+    if (!vu_finish_stop_slave(dev)) {
+        return false;
+    }
+
+    *reply_status = true;
+    return false;
+}
+
 static bool
 vu_get_config(VuDev *dev, VhostUserMsg *vmsg)
 {
@@ -1823,7 +1894,8 @@ static bool vu_handle_get_max_memslots(VuDev *dev, VhostUserMsg *vmsg)
 }
 
 static bool
-vu_process_message(VuDev *dev, VhostUserMsg *vmsg)
+vu_process_message(VuDev *dev, VhostUserMsg *vmsg, bool *reply_sent,
+                   bool *reply_status)
 {
     int do_reply = 0;
 
@@ -1843,6 +1915,14 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg)
         DPRINT("\n");
     }
 
+    if (reply_sent) {
+        *reply_sent = false;
+    }
+
+    if (reply_status) {
+        *reply_status = false;
+    }
+
     if (dev->iface->process_msg &&
         dev->iface->process_msg(dev, vmsg, &do_reply)) {
         return do_reply;
@@ -1912,6 +1992,10 @@ vu_process_message(VuDev *dev, VhostUserMsg *vmsg)
         return vu_add_mem_reg(dev, vmsg);
     case VHOST_USER_REM_MEM_REG:
         return vu_rem_mem_reg(dev, vmsg);
+    case VHOST_USER_START_SLAVE_CHANNEL:
+        return vu_slave_channel_start(dev, vmsg);
+    case VHOST_USER_STOP_SLAVE_CHANNEL:
+        return vu_slave_channel_stop(dev, vmsg, reply_sent, reply_status);
     default:
         vmsg_close_fds(vmsg);
         vu_panic(dev, "Unhandled request: %d", vmsg->request);
@@ -1926,6 +2010,7 @@ vu_dispatch(VuDev *dev)
     VhostUserMsg vmsg = { 0, };
     int reply_requested;
     bool need_reply, success = false;
+    bool reply_sent = false, reply_status = false;
 
     if (!dev->read_msg(dev, dev->sock, &vmsg)) {
         goto end;
@@ -1933,7 +2018,14 @@ vu_dispatch(VuDev *dev)
 
     need_reply = vmsg.flags & VHOST_USER_NEED_REPLY_MASK;
 
-    reply_requested = vu_process_message(dev, &vmsg);
+    reply_requested = vu_process_message(dev, &vmsg, &reply_sent,
+                                         &reply_status);
+    /* The reply, if needed, has already been sent. */
+    if (reply_sent) {
+        success = reply_status;
+        goto end;
+    }
+
     if (!reply_requested && need_reply) {
         vmsg_set_reply_u64(&vmsg, 0);
         reply_requested = 1;
@@ -2051,6 +2143,7 @@ vu_init(VuDev *dev,
     dev->log_call_fd = -1;
     pthread_mutex_init(&dev->slave_mutex, NULL);
     dev->slave_fd = -1;
+    dev->slave_channel_open = false;
     dev->max_queues = max_queues;
 
     dev->vq = malloc(max_queues * sizeof(dev->vq[0]));
diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
index ee75d4931f..1d0ef54f69 100644
--- a/subprojects/libvhost-user/libvhost-user.h
+++ b/subprojects/libvhost-user/libvhost-user.h
@@ -64,6 +64,7 @@ enum VhostUserProtocolFeature {
     VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
     VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14,
     VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15,
+    VHOST_USER_PROTOCOL_F_SLAVE_CH_START_STOP = 16,
 
     VHOST_USER_PROTOCOL_F_MAX
 };
@@ -109,6 +110,8 @@ typedef enum VhostUserRequest {
     VHOST_USER_GET_MAX_MEM_SLOTS = 36,
     VHOST_USER_ADD_MEM_REG = 37,
     VHOST_USER_REM_MEM_REG = 38,
+    VHOST_USER_START_SLAVE_CHANNEL = 39,
+    VHOST_USER_STOP_SLAVE_CHANNEL = 40,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -123,6 +126,7 @@ typedef enum VhostUserSlaveRequest {
     VHOST_USER_SLAVE_FS_UNMAP = 7,
     VHOST_USER_SLAVE_FS_SYNC = 8,
     VHOST_USER_SLAVE_FS_IO = 9,
+    VHOST_USER_SLAVE_STOP_CHANNEL_COMPLETE = 10,
     VHOST_USER_SLAVE_MAX
 } VhostUserSlaveRequest;
 
@@ -405,9 +409,11 @@ struct VuDev {
     VuVirtq *vq;
     VuDevInflightInfo inflight_info;
     int log_call_fd;
-    /* Must be held while using slave_fd */
+    /* Must be held while using slave_fd, slave_channel_open */
    pthread_mutex_t slave_mutex;
     int slave_fd;
+    /* If not set, do not send more requests on slave fd. */
+    bool slave_channel_open;
     uint64_t log_size;
     uint8_t *log_table;
     uint64_t features;
-- 
2.25.4
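[Editor's postscript, illustrative only: end to end, the stop handshake
added by this patch proceeds as follows.

    master                                 slave (libvhost-user)
      | --- VHOST_USER_STOP_SLAVE_CHANNEL --->|
      | <-- reply, u64 status 0 ------------- |  vu_slave_channel_stop()
      |                                       |  vu_finish_stop_slave():
      |                                       |    locks slave_mutex,
      |                                       |    slave_channel_open = false
      | <-- VHOST_USER_SLAVE_STOP_CHANNEL_COMPLETE (NEED_REPLY), on slave fd
      | --- ack, u64 0 ---------------------->|

A master should issue the new start/stop requests only after negotiating
the protocol feature added above. A minimal check, with a hypothetical
helper name and the feature bit copied from the header diff, might be:

    #include <stdbool.h>
    #include <stdint.h>

    /* Bit 16, per VHOST_USER_PROTOCOL_F_SLAVE_CH_START_STOP above. */
    #define VHOST_USER_PROTOCOL_F_SLAVE_CH_START_STOP 16

    static bool slave_ch_start_stop_negotiated(uint64_t protocol_features)
    {
        return protocol_features &
               (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_CH_START_STOP);
    }
]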