To: qemu-devel@nongnu.org
Cc: eduardo@habkost.net, marcel.apfelbaum@gmail.com, philmd@linaro.org,
    wangyanan55@huawei.com, zhao1.liu@intel.com, mst@redhat.com,
    sgarzare@redhat.com, jasowang@redhat.com, leiyang@redhat.com,
    si-wei.liu@oracle.com, eperezma@redhat.com, boris.ostrovsky@oracle.com,
    armbru@redhat.com, jonah.palmer@oracle.com
Subject: [RFC v2 14/14] virtio-net, vhost-net: early migration support for vhost-net
Date: Fri, 20 Mar 2026 14:20:15 +0000
Message-ID: <20260320142015.3856652-15-jonah.palmer@oracle.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260320142015.3856652-1-jonah.palmer@oracle.com>
References: <20260320142015.3856652-1-jonah.palmer@oracle.com>
MIME-Version: 1.0
From: Jonah Palmer <jonah.palmer@oracle.com>

This patch implements early migration support for virtio-net devices
using a TAP backend accelerated by vhost-net. More specifically, we
initiate the vhost startup routine during the early-migration phase but
guard against binding TAP backends at that point.
The device must not actually be started until the source VM has been
paused. For vhost-net, the remaining stop-and-copy work is to apply the
final vring bases and bind the TAP backends. This is handled via the
post_load callback of the virtio-net vhost subsection
(vmstate_virtio_net_vhost).

When a mid-migration delta is detected and we fall back to a full
virtio-net reload, any early-started vhost instance is explicitly
stopped before the restart so that notifier and backend state is
handled safely.

Failures while starting vhost-net during early post-load, and failures
during stop-and-copy quickstart finalization, are treated as non-fatal
for migration. In those cases the destination continues the migration
and falls back to the userspace virtio-net datapath. After switchover,
the normal vhost start path may retry once the status is set; if that
retry also fails, the device continues running on userspace virtio-net.

By moving most of the post-load startup work out of the stop-and-copy
phase, we further minimize the guest-visible downtime incurred by
migrating a virtio-net device that uses vhost-net.

A future improvement should handle deltas more gracefully by updating
only what changed mid-migration instead of relying on a full
vhost/virtio-net restart.

Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
---
 hw/net/vhost_net.c             | 183 +++++++++++++++++++++++++++++++++
 hw/net/virtio-net.c            | 127 ++++++++++++++++++++++-
 include/hw/virtio/virtio-net.h |   2 +
 include/net/vhost_net.h        |   9 ++
 4 files changed, 319 insertions(+), 2 deletions(-)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index a8ee18a912..f11f30b4f0 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -353,6 +353,13 @@ static int vhost_net_start_one(struct vhost_net *net,
             /* Queue might not be ready for start */
             continue;
         }
+        if (dev->migration && dev->migration->early_load) {
+            /*
+             * Queue isn't ready to start as we're in the middle of an
+             * early migration. Set the backend later when we're ready.
+             */
+            continue;
+        }
         r = vhost_net_set_backend(&net->dev, &file);
         if (r < 0) {
             r = -errno;
@@ -695,3 +702,179 @@ err_start:
 
     return r;
 }
+
+/*
+ * Helper function for vhost_net_post_load_migration_quickstart:
+ *
+ * Sets vring bases for all vhost virtqueues.
+ */
+int vhost_net_set_all_vring_bases(struct VirtIONet *n, VirtIODevice *vdev,
+                                  NetClientState *ncs, int queue_pairs,
+                                  int cvq, int nvhosts)
+{
+    NetClientState *peer;
+    struct vhost_net *vnet;
+    struct vhost_dev *hdev;
+    int queue_idx;
+    int i, j, r;
+
+    for (i = 0; i < nvhosts; i++) {
+        peer = qemu_get_peer(ncs, i < queue_pairs ? i : n->max_queue_pairs);
+        vnet = get_vhost_net(peer);
+        if (!vnet) {
+            continue;
+        }
+        hdev = &vnet->dev;
+
+        for (j = 0; j < hdev->nvqs; ++j) {
+            queue_idx = hdev->vq_index + j;
+            struct vhost_vring_state state = {
+                .index = hdev->vhost_ops->vhost_get_vq_index(hdev, queue_idx),
+                .num = virtio_queue_get_last_avail_idx(vdev, queue_idx),
+            };
+
+            r = hdev->vhost_ops->vhost_set_vring_base(hdev, &state);
+            if (r) {
+                error_report("vhost_set_vring_base failed (vq %d)", queue_idx);
+                goto fail;
+            }
+        }
+    }
+    return 0;
+
+fail:
+    vhost_net_stop_one(vnet, vdev);
+
+    while (--i >= 0) {
+        peer = qemu_get_peer(ncs, i < queue_pairs ? i : n->max_queue_pairs);
+        vhost_net_stop_one(get_vhost_net(peer), vdev);
+    }
+    return r;
+}
+
+/*
+ * Helper function for vhost_net_post_load_migration_quickstart:
+ *
+ * Binds TAP backends to all vhost-net virtqueues. All vring bases must be set
+ * before attempting to start any backends.
+ */
+int vhost_net_start_all_backends(struct VirtIONet *n, VirtIODevice *vdev,
+                                 NetClientState *ncs, int queue_pairs, int cvq,
+                                 int nvhosts)
+{
+    NetClientState *peer;
+    struct vhost_dev *hdev;
+    struct vhost_vring_file file = { };
+    struct vhost_net *vnet;
+    int i, r;
+
+    for (i = 0; i < nvhosts; i++) {
+        peer = qemu_get_peer(ncs, i < queue_pairs ? i : n->max_queue_pairs);
+        vnet = get_vhost_net(peer);
+        if (!vnet) {
+            continue;
+        }
+        hdev = &vnet->dev;
+
+        qemu_set_fd_handler(vnet->backend, NULL, NULL, NULL);
+        file.fd = vnet->backend;
+
+        for (file.index = 0; file.index < hdev->nvqs; ++file.index) {
+            if (!virtio_queue_enabled(vdev, hdev->vq_index + file.index)) {
+                /* Queue might not be ready to start */
+                continue;
+            }
+
+            r = vhost_net_set_backend(hdev, &file);
+            if (r < 0) {
+                r = -errno;
+                goto fail;
+            }
+        }
+    }
+    return 0;
+
+fail:
+    file.fd = -1;
+    while (file.index-- > 0) {
+        if (!virtio_queue_enabled(vdev, hdev->vq_index + file.index)) {
+            continue;
+        }
+        int ret = vhost_net_set_backend(hdev, &file);
+        assert(ret >= 0);
+    }
+    if (vnet->nc->info->poll) {
+        vnet->nc->info->poll(vnet->nc, true);
+    }
+    vhost_dev_stop(hdev, vdev, false);
+
+    while (--i >= 0) {
+        peer = qemu_get_peer(ncs, i < queue_pairs ? i : n->max_queue_pairs);
+        vhost_net_stop_one(get_vhost_net(peer), vdev);
+    }
+    return r;
+}
+
+/*
+ * Quickstart path for a virtio-net device using vhost acceleration:
+ *
+ * Used during migration of a virtio-net device that opted-in to early
+ * migration.
+ *
+ * The goal of this function is to perform any remaining startup work that
+ * can only be done during the stop-and-copy phase, once the source has been
+ * stopped.
+ *
+ * Note: By the time this function is called, the device has essentially been
+ * fully configured, albeit with a few last-minute configurations to be made.
+ * This means our error handling must completely unwind the device with
+ * full-stop semantics.
+ */
+int vhost_net_post_load_migration_quickstart(struct VirtIONet *n)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(n);
+    NetClientState *ncs = qemu_get_queue(n->nic);
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
+    VirtioBusState *vbus = VIRTIO_BUS(qbus);
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(vbus);
+
+    int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+    int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
+              n->max_ncs - n->max_queue_pairs : 0;
+    int nvhosts = queue_pairs + cvq;
+    int total_notifiers = queue_pairs * 2 + cvq;
+    NetClientState *peer = qemu_get_peer(ncs, 0);
+
+    int r, e;
+
+    /* First peer must exist for the realized virtio-net device */
+    assert(peer);
+
+    /* Apply final vring bases for all vhosts */
+    r = vhost_net_set_all_vring_bases(n, vdev, ncs, queue_pairs, cvq, nvhosts);
+    if (r < 0) {
+        goto fail;
+    }
+
+    /* Bind backends (TAP devices only) */
+    if (peer->info->type == NET_CLIENT_DRIVER_TAP) {
+        r = vhost_net_start_all_backends(n, vdev, ncs, queue_pairs, cvq, nvhosts);
+        if (r < 0) {
+            goto fail;
+        }
+    }
+    return 0;
+
+fail:
+    e = k->set_guest_notifiers(qbus->parent, total_notifiers, false);
+    if (e < 0) {
+        fprintf(stderr, "vhost guest notifier cleanup failed: %d\n", e);
+        fflush(stderr);
+    }
+    vhost_net_disable_notifiers(vdev, ncs, queue_pairs, cvq);
+
+    error_report("unable to start vhost net: %d: "
+                 "falling back on userspace virtio", -r);
+    n->vhost_started = 0;
+    return r;
+}
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 483a43be4f..950137c568 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3864,6 +3864,38 @@ static bool failover_hide_primary_device(DeviceListener *listener,
     return qatomic_read(&n->failover_primary_hidden);
 }
 
+static int virtio_net_vhost_early_start(VirtIONet *n, VirtIODevice *vdev)
+{
+    NetClientState *ncs = qemu_get_queue(n->nic);
+    int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+    int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
+              n->max_ncs - n->max_queue_pairs : 0;
+    int r;
+
+    /* Return early if there's no vhost backend */
+    if (!ncs || !ncs->peer || !get_vhost_net(ncs->peer)) {
+        return 0;
+    }
+
+    if (virtio_has_feature(vdev->guest_features, VIRTIO_NET_F_MTU)) {
+        r = vhost_net_set_mtu(get_vhost_net(ncs->peer), n->net_conf.mtu);
+        if (r < 0) {
+            error_report("%u bytes MTU not supported by the backend",
+                         n->net_conf.mtu);
+            return r;
+        }
+    }
+
+    n->vhost_started = 1;
+    r = vhost_net_start(vdev, n->nic->ncs, queue_pairs, cvq);
+    if (r < 0) {
+        error_report("unable to start vhost net: %d: "
+                     "falling back on userspace virtio", -r);
+        n->vhost_started = 0;
+    }
+    return r;
+}
+
 enum VirtIONetRxFlags {
     VNET_RX_F_PROMISC = 1u << 0,
     VNET_RX_F_ALLMULTI = 1u << 1,
@@ -3892,6 +3924,9 @@ static int virtio_net_early_pre_save(void *opaque)
     VirtIONetMigration *vnet_mig = n->migration;
     size_t vlans_size = (size_t)(MAX_VLAN >> 3);
 
+    /* Reset source-side delta decision for this migration iteration. */
+    n->migration->reloaded = false;
+
     vdev_mig->status_early = vdev->status;
     vnet_mig->status_early = n->status;
 
@@ -3989,6 +4024,14 @@ static int virtio_net_early_post_load(void *opaque, int version_id)
     VirtIONet *n = opaque;
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
 
+    /*
+     * Start the vhost backend if one is present. Note that while
+     * vdev->migration->early_load is true, not all vhost startup operations
+     * are performed. For example, we defer setting the backends (vhost-net w/
+     * TAP) until the stop-and-copy phase (see vmstate_virtio_net_vhost).
+     */
+    virtio_net_vhost_early_start(n, vdev);
+
     vdev->migration->early_load = false;
     return 0;
 }
@@ -4007,6 +4050,49 @@ static const VMStateDescription vmstate_virtio_net_early = {
     },
 };
 
+static int virtio_net_vhost_post_load(void *opaque, int version_id)
+{
+    VirtIONet *n = opaque;
+    int r;
+
+    if (!n->vhost_started) {
+        return 0;
+    }
+
+    /* Finalize vhost startup */
+    r = vhost_net_post_load_migration_quickstart(n);
+    if (r < 0) {
+        error_report("virtio-net vhost post-load quickstart failed: %d", r);
+    }
+    return 0;
+}
+
+static bool virtio_net_vhost_needed(void *opaque)
+{
+    VirtIONet *n = opaque;
+    NetClientState *nc = qemu_get_queue(n->nic);
+
+    if (!nc || !nc->peer || !get_vhost_net(nc->peer)) {
+        return false;
+    }
+
+    /* Skip vhost quickstart section when a full virtio-net reload is needed. */
+    return !n->migration->reloaded;
+}
+
+static const VMStateDescription vmstate_virtio_net_vhost = {
+    .name = "virtio-net-vhost",
+    .minimum_version_id = 1,
+    .version_id = 1,
+    /* Set prio low to run after vmstate_virtio_net */
+    .priority = MIG_PRI_LOW,
+    .needed = virtio_net_vhost_needed,
+    .fields = (const VMStateField[]) {
+        VMSTATE_END_OF_LIST()
+    },
+    .post_load = virtio_net_vhost_post_load,
+};
+
 static void virtio_net_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -4201,9 +4287,10 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
             vdev->migration = g_new0(VirtIODevMigration, 1);
             vdev->migration->early_load = false;
             n->migration = g_new0(VirtIONetMigration, 1);
-            vmstate_register_any(VMSTATE_IF(n), &vmstate_virtio_net_early, n);
             virtio_delta_vmsd_register(vdev);
+            vmstate_register_any(VMSTATE_IF(n), &vmstate_virtio_net_vhost,
+                                 n);
         }
     }
 }
@@ -4271,6 +4358,7 @@ static void virtio_net_device_unrealize(DeviceState *dev)
 
         vmstate_unregister(VMSTATE_IF(n), &vmstate_virtio_net_early, n);
         virtio_delta_vmsd_unregister(vdev);
+        vmstate_unregister(VMSTATE_IF(n), &vmstate_virtio_net_vhost, n);
     }
 }
 
@@ -4336,6 +4424,37 @@ static int virtio_net_pre_save(void *opaque)
     return 0;
 }
 
+static int virtio_net_pre_load(void *opaque)
+{
+    VirtIONet *n = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(n);
+
+    /*
+     * If we're migrating with a vhost device and performed an early
+     * save/load, then reaching here means that something changed and
+     * we need to reload all of the virtio-net device's state.
+     */
+    if (n->early_mig) {
+        /*
+         * Unwind vhost-net before full reload path re-runs startup. This keeps
+         * notifier/backend state handling safe.
+         */
+        if (n->vhost_started) {
+            NetClientState *nc = qemu_get_queue(n->nic);
+            int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+            int cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
+                      n->max_ncs - n->max_queue_pairs : 0;
+
+            if (nc && nc->peer && get_vhost_net(nc->peer)) {
+                vhost_net_stop(vdev, n->nic->ncs, queue_pairs, cvq);
+            }
+
+            n->vhost_started = 0;
+        }
+    }
+    return 0;
+}
+
 static bool primary_unplug_pending(void *opaque)
 {
     DeviceState *dev = opaque;
@@ -4466,12 +4585,15 @@ static bool virtio_net_needed(void *opaque)
 {
     VirtIONet *n = opaque;
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
+    bool delta;
 
     if (!n->early_mig) {
         return true;
     }
 
-    return virtio_net_has_delta(n, vdev);
+    delta = virtio_net_has_delta(n, vdev);
+    n->migration->reloaded = delta;
+    return delta;
 }
 
 static const VMStateDescription vmstate_virtio_net = {
@@ -4484,6 +4606,7 @@ static const VMStateDescription vmstate_virtio_net = {
         VMSTATE_END_OF_LIST()
     },
     .pre_save = virtio_net_pre_save,
+    .pre_load = virtio_net_pre_load,
     .dev_unplug_pending = dev_unplug_pending,
 };
 
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index dbbacc83bb..d1d7c0b742 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -169,6 +169,7 @@ typedef struct VirtIONetQueue {
 
 /**
  * struct VirtIONetMigration - VirtIONet migration structure
+ * @reloaded: Flag to indicate the state has been reloaded.
  * @status_early: VirtIONet status snapshot.
  * @mac_early: MAC address early migration snapshot.
  * @mtable_in_use_early: In-use MAC table entries.
@@ -191,6 +192,7 @@ typedef struct VirtIONetQueue {
  * @rss_indirections_table_early: RSS indirections table.
  */
 typedef struct VirtIONetMigration {
+    bool reloaded;
     uint16_t status_early;
     uint8_t mac_early[ETH_ALEN];
     uint32_t mtable_in_use_early;
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 0225207491..a8a1c1005b 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -4,6 +4,7 @@
 #include "net/net.h"
 #include "hw/virtio/virtio-features.h"
 #include "hw/virtio/vhost-backend.h"
+#include "hw/virtio/virtio-net.h"
 
 struct vhost_net;
 typedef struct vhost_net VHostNetState;
@@ -88,4 +89,12 @@ int vhost_net_virtqueue_restart(VirtIODevice *vdev, NetClientState *nc,
                                 int vq_index);
 
 void vhost_net_save_acked_features(NetClientState *nc);
+
+int vhost_net_set_all_vring_bases(struct VirtIONet *n, VirtIODevice *vdev,
+                                  NetClientState *ncs, int queue_pairs,
+                                  int cvq, int nvhosts);
+int vhost_net_start_all_backends(struct VirtIONet *n, VirtIODevice *vdev,
+                                 NetClientState *ncs, int queue_pairs,
+                                 int cvq, int nvhosts);
+int vhost_net_post_load_migration_quickstart(struct VirtIONet *n);
 #endif
-- 
2.51.0