From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 1/7] vringh: fix a typo in comments for vringh_kiov
Date: Thu, 2 Feb 2023 18:09:28 +0900
Message-Id: <20230202090934.549556-2-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

This is probably a simple copy error from struct vring_iov.
Fixes: f87d0fbb5798 ("vringh: host-side implementation of virtio rings.")
Signed-off-by: Shunsuke Mie
---
 include/linux/vringh.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 212892cf9822..1991a02c6431 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -92,7 +92,7 @@ struct vringh_iov {
 };
 
 /**
- * struct vringh_iov - kvec mangler.
+ * struct vringh_kiov - kvec mangler.
  *
  * Mangles kvec in place, and restores it.
  * Remaining data is iov + i, of used - i elements.
-- 
2.25.1

From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 2/7] tools/virtio: enable to build with retpoline
Date: Thu, 2 Feb 2023 18:09:29 +0900
Message-Id: <20230202090934.549556-3-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

Add build options to bring the userspace test build closer to a real
Linux kernel build. This allows for testing that is closer to reality.

Signed-off-by: Shunsuke Mie
---
 tools/virtio/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
index 1b25cc7c64bb..7b7139d97d74 100644
--- a/tools/virtio/Makefile
+++ b/tools/virtio/Makefile
@@ -4,7 +4,7 @@ test: virtio_test vringh_test
 virtio_test: virtio_ring.o virtio_test.o
 vringh_test: vringh_test.o vringh.o virtio_ring.o
 
-CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
+CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h -mfunction-return=thunk -fcf-protection=none -mindirect-branch-register
 CFLAGS += -pthread
 LDFLAGS += -pthread
 vpath %.c ../../drivers/virtio ../../drivers/vhost
-- 
2.25.1
From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 3/7] vringh: remove vringh_iov and unite to vringh_kiov
Date: Thu, 2 Feb 2023 18:09:30 +0900
Message-Id: <20230202090934.549556-4-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

struct vringh_iov is defined to hold userland addresses. However, to
reuse the common helper __vringh_iov(), a struct vringh_iov is
ultimately converted to a struct vringh_kiov with a simple cast, guarded
by compile-time checks that make sure the cast is valid. To simplify the
code, this patch removes struct vringh_iov and unifies the APIs around
struct vringh_kiov.

Signed-off-by: Shunsuke Mie
---
 drivers/vhost/vringh.c | 32 ++++++------------------------
 include/linux/vringh.h | 45 ++++--------------------------------------
 2 files changed, 10 insertions(+), 67 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index 33eb941fcf15..bcdbde1d484e 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -691,8 +691,8 @@ EXPORT_SYMBOL(vringh_init_user);
  * calling vringh_iov_cleanup() to release the memory, even on error!
  */
 int vringh_getdesc_user(struct vringh *vrh,
-			struct vringh_iov *riov,
-			struct vringh_iov *wiov,
+			struct vringh_kiov *riov,
+			struct vringh_kiov *wiov,
 			bool (*getrange)(struct vringh *vrh,
 					 u64 addr, struct vringh_range *r),
 			u16 *head)
@@ -708,26 +708,6 @@ int vringh_getdesc_user(struct vringh *vrh,
 	if (err == vrh->vring.num)
 		return 0;
 
-	/* We need the layouts to be the identical for this to work */
-	BUILD_BUG_ON(sizeof(struct vringh_kiov) != sizeof(struct vringh_iov));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, iov) !=
-		     offsetof(struct vringh_iov, iov));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, i) !=
-		     offsetof(struct vringh_iov, i));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, used) !=
-		     offsetof(struct vringh_iov, used));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, max_num) !=
-		     offsetof(struct vringh_iov, max_num));
-	BUILD_BUG_ON(sizeof(struct iovec) != sizeof(struct kvec));
-	BUILD_BUG_ON(offsetof(struct iovec, iov_base) !=
-		     offsetof(struct kvec, iov_base));
-	BUILD_BUG_ON(offsetof(struct iovec, iov_len) !=
-		     offsetof(struct kvec, iov_len));
-	BUILD_BUG_ON(sizeof(((struct iovec *)NULL)->iov_base)
-		     != sizeof(((struct kvec *)NULL)->iov_base));
-	BUILD_BUG_ON(sizeof(((struct iovec *)NULL)->iov_len)
-		     != sizeof(((struct kvec *)NULL)->iov_len));
-
 	*head = err;
 	err = __vringh_iov(vrh, *head, (struct vringh_kiov *)riov,
 			   (struct vringh_kiov *)wiov,
@@ -740,14 +720,14 @@ int vringh_getdesc_user(struct vringh *vrh,
 EXPORT_SYMBOL(vringh_getdesc_user);
 
 /**
- * vringh_iov_pull_user - copy bytes from vring_iov.
+ * vringh_iov_pull_user - copy bytes from vring_kiov.
  * @riov: the riov as passed to vringh_getdesc_user() (updated as we consume)
  * @dst: the place to copy.
  * @len: the maximum length to copy.
  *
  * Returns the bytes copied <= len or a negative errno.
  */
-ssize_t vringh_iov_pull_user(struct vringh_iov *riov, void *dst, size_t len)
+ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t len)
 {
 	return vringh_iov_xfer(NULL, (struct vringh_kiov *)riov,
 			       dst, len, xfer_from_user);
@@ -755,14 +735,14 @@ ssize_t vringh_iov_pull_user(struct vringh_iov *riov, void *dst, size_t len)
 EXPORT_SYMBOL(vringh_iov_pull_user);
 
 /**
- * vringh_iov_push_user - copy bytes into vring_iov.
+ * vringh_iov_push_user - copy bytes into vring_kiov.
  * @wiov: the wiov as passed to vringh_getdesc_user() (updated as we consume)
  * @src: the place to copy from.
  * @len: the maximum length to copy.
  *
  * Returns the bytes copied <= len or a negative errno.
  */
-ssize_t vringh_iov_push_user(struct vringh_iov *wiov,
+ssize_t vringh_iov_push_user(struct vringh_kiov *wiov,
 			     const void *src, size_t len)
 {
 	return vringh_iov_xfer(NULL, (struct vringh_kiov *)wiov,
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 1991a02c6431..733d948e8123 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -79,18 +79,6 @@ struct vringh_range {
 	u64 offset;
 };
 
-/**
- * struct vringh_iov - iovec mangler.
- *
- * Mangles iovec in place, and restores it.
- * Remaining data is iov + i, of used - i elements.
- */
-struct vringh_iov {
-	struct iovec *iov;
-	size_t consumed; /* Within iov[i] */
-	unsigned i, used, max_num;
-};
-
 /**
  * struct vringh_kiov - kvec mangler.
  *
@@ -113,44 +101,19 @@ int vringh_init_user(struct vringh *vrh, u64 features,
 		     vring_avail_t __user *avail,
 		     vring_used_t __user *used);
 
-static inline void vringh_iov_init(struct vringh_iov *iov,
-				   struct iovec *iovec, unsigned num)
-{
-	iov->used = iov->i = 0;
-	iov->consumed = 0;
-	iov->max_num = num;
-	iov->iov = iovec;
-}
-
-static inline void vringh_iov_reset(struct vringh_iov *iov)
-{
-	iov->iov[iov->i].iov_len += iov->consumed;
-	iov->iov[iov->i].iov_base -= iov->consumed;
-	iov->consumed = 0;
-	iov->i = 0;
-}
-
-static inline void vringh_iov_cleanup(struct vringh_iov *iov)
-{
-	if (iov->max_num & VRINGH_IOV_ALLOCATED)
-		kfree(iov->iov);
-	iov->max_num = iov->used = iov->i = iov->consumed = 0;
-	iov->iov = NULL;
-}
-
 /* Convert a descriptor into iovecs. */
 int vringh_getdesc_user(struct vringh *vrh,
-			struct vringh_iov *riov,
-			struct vringh_iov *wiov,
+			struct vringh_kiov *riov,
+			struct vringh_kiov *wiov,
 			bool (*getrange)(struct vringh *vrh,
 					 u64 addr, struct vringh_range *r),
 			u16 *head);
 
 /* Copy bytes from readable vsg, consuming it (and incrementing wiov->i). */
-ssize_t vringh_iov_pull_user(struct vringh_iov *riov, void *dst, size_t len);
+ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t len);
 
 /* Copy bytes into writable vsg, consuming it (and incrementing wiov->i). */
-ssize_t vringh_iov_push_user(struct vringh_iov *wiov,
+ssize_t vringh_iov_push_user(struct vringh_kiov *wiov,
 			     const void *src, size_t len);
 
 /* Mark a descriptor as used.
  */
-- 
2.25.1

From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 4/7] tools/virtio: convert to new vringh user APIs
Date: Thu, 2 Feb 2023 18:09:31 +0900
Message-Id: <20230202090934.549556-5-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

struct vringh_iov is being removed, so convert vringh_test to the new
vringh user APIs. This changes it to use struct vringh_kiov instead of
struct vringh_iov.
Signed-off-by: Shunsuke Mie
---
 tools/virtio/vringh_test.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
index 98ff808d6f0c..6c9533b8a2ca 100644
--- a/tools/virtio/vringh_test.c
+++ b/tools/virtio/vringh_test.c
@@ -193,8 +193,8 @@ static int parallel_test(u64 features,
 		errx(1, "Could not set affinity to cpu %u", first_cpu);
 
 	while (xfers < NUM_XFERS) {
-		struct iovec host_riov[2], host_wiov[2];
-		struct vringh_iov riov, wiov;
+		struct kvec host_riov[2], host_wiov[2];
+		struct vringh_kiov riov, wiov;
 		u16 head, written;
 
 		if (fast_vringh) {
@@ -216,10 +216,10 @@ static int parallel_test(u64 features,
 			written = 0;
 			goto complete;
 		} else {
-			vringh_iov_init(&riov,
+			vringh_kiov_init(&riov,
 					host_riov,
 					ARRAY_SIZE(host_riov));
-			vringh_iov_init(&wiov,
+			vringh_kiov_init(&wiov,
 					host_wiov,
 					ARRAY_SIZE(host_wiov));
 
@@ -442,8 +442,8 @@ int main(int argc, char *argv[])
 	struct virtqueue *vq;
 	struct vringh vrh;
 	struct scatterlist guest_sg[RINGSIZE], *sgs[2];
-	struct iovec host_riov[2], host_wiov[2];
-	struct vringh_iov riov, wiov;
+	struct kvec host_riov[2], host_wiov[2];
+	struct vringh_kiov riov, wiov;
 	struct vring_used_elem used[RINGSIZE];
 	char buf[28];
 	u16 head;
@@ -517,8 +517,8 @@ int main(int argc, char *argv[])
 	__kmalloc_fake = NULL;
 
 	/* Host retreives it. */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
 	if (err != 1)
@@ -586,8 +586,8 @@ int main(int argc, char *argv[])
 	__kmalloc_fake = NULL;
 
 	/* Host picks it up (allocates new iov). */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
 	if (err != 1)
@@ -613,8 +613,8 @@ int main(int argc, char *argv[])
 		assert(err < 3 || buf[2] == (char)(i + 2));
 	}
 	assert(riov.i == riov.used);
-	vringh_iov_cleanup(&riov);
-	vringh_iov_cleanup(&wiov);
+	vringh_kiov_cleanup(&riov);
+	vringh_kiov_cleanup(&wiov);
 
 	/* Complete using multi interface, just because we can. */
 	used[0].id = head;
@@ -638,8 +638,8 @@ int main(int argc, char *argv[])
 	}
 
 	/* Now get many, and consume them all at once. */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	for (i = 0; i < RINGSIZE; i++) {
 		err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
@@ -723,8 +723,8 @@ int main(int argc, char *argv[])
 	d[5].flags = 0;
 
 	/* Host picks it up (allocates new iov). */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
 	if (err != 1)
@@ -744,7 +744,7 @@ int main(int argc, char *argv[])
 	/* Data should be linear. */
 	for (i = 0; i < err; i++)
 		assert(buf[i] == i);
-	vringh_iov_cleanup(&riov);
+	vringh_kiov_cleanup(&riov);
 }
 
 /* Don't leak memory...
  */
-- 
2.25.1

From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 5/7] vringh: unify the APIs for all accessors
Date: Thu, 2 Feb 2023 18:09:32 +0900
Message-Id: <20230202090934.549556-6-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

Each of the vringh memory accessors (user, kern and iotlb) has its own
interface that calls into common code, but some of that code is
duplicated, which hurts extensibility. Introduce a struct vringh_ops
and provide a common API for all accessors. This makes it easy to
extend vringh with a new memory accessor and simplifies the callers.

The performance impact of this change is small. It was measured with
perf; the results are as follows:

- original code
$ perf stat --repeat 20 -- nice -n -20 ./vringh_test_retp_origin \
    --parallel --eventidx --fast-vringh
Using CPUS 0 and 3
Guest: notified 0, pinged 98040
Host: notified 98040, pinged 0
...
 Performance counter stats for 'nice -n -20 ./vringh_test_retp_origin --parallel --eventidx --fast-vringh' (20 runs):

          6,228.33 msec task-clock          #    1.004 CPUs utilized   ( +- 0.05% )
           196,110      context-switches   #   31.616 K/sec           ( +- 0.00% )
                 6      cpu-migrations     #    0.967 /sec            ( +- 2.39% )
               205      page-faults        #   33.049 /sec            ( +- 0.46% )
    14,218,527,987      cycles             #    2.292 GHz             ( +- 0.05% )
    10,342,897,254      instructions       #    0.73  insn per cycle  ( +- 0.02% )
     2,310,572,989      branches           #  372.500 M/sec           ( +- 0.03% )
       178,273,068      branch-misses      #    7.72% of all branches ( +- 0.04% )

           6.20406 +- 0.00308 seconds time elapsed  ( +- 0.05% )

- changed code
$ perf stat --repeat 20 -- nice -n -20 ./vringh_test_retp_patched \
    --parallel --eventidx --fast-vringh
Using CPUS 0 and 3
Guest: notified 0, pinged 98040
Host: notified 98040, pinged 0
...
 Performance counter stats for 'nice -n -20 ./vringh_test_retp_patched --parallel --eventidx --fast-vringh' (20 runs):

          6,103.94 msec task-clock          #    1.001 CPUs utilized   ( +- 0.03% )
           196,125      context-switches   #   32.165 K/sec           ( +- 0.00% )
                 7      cpu-migrations     #    1.148 /sec            ( +- 1.56% )
               196      page-faults        #   32.144 /sec            ( +- 0.41% )
    13,933,055,778      cycles             #    2.285 GHz             ( +- 0.03% )
    10,309,004,718      instructions       #    0.74  insn per cycle  ( +- 0.03% )
     2,368,447,519      branches           #  388.425 M/sec           ( +- 0.04% )
       211,364,886      branch-misses      #    8.94% of all branches ( +- 0.05% )

           6.09888 +- 0.00155 seconds time elapsed  ( +- 0.03% )

With the changed code, branches and branch-misses increase, but
page-faults decrease. As a result, the elapsed time is shorter than
with the original code.
Signed-off-by: Shunsuke Mie
---
 drivers/vhost/vringh.c | 667 +++++++++++------------------------------
 include/linux/vringh.h | 100 +++---
 2 files changed, 225 insertions(+), 542 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index bcdbde1d484e..46fb315483ed 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -35,15 +35,12 @@ static __printf(1,2) __cold void vringh_bad(const char *fmt, ...)
 }
 
 /* Returns vring->num if empty, -ve on error. */
-static inline int __vringh_get_head(const struct vringh *vrh,
-				    int (*getu16)(const struct vringh *vrh,
-						  u16 *val, const __virtio16 *p),
-				    u16 *last_avail_idx)
+static inline int __vringh_get_head(const struct vringh *vrh, u16 *last_avail_idx)
 {
 	u16 avail_idx, i, head;
 	int err;
 
-	err = getu16(vrh, &avail_idx, &vrh->vring.avail->idx);
+	err = vrh->ops.getu16(vrh, &avail_idx, &vrh->vring.avail->idx);
 	if (err) {
 		vringh_bad("Failed to access avail idx at %p",
 			   &vrh->vring.avail->idx);
@@ -58,7 +55,7 @@ static inline int __vringh_get_head(const struct vringh *vrh,
 
 	i = *last_avail_idx & (vrh->vring.num - 1);
 
-	err = getu16(vrh, &head, &vrh->vring.avail->ring[i]);
+	err = vrh->ops.getu16(vrh, &head, &vrh->vring.avail->ring[i]);
 	if (err) {
 		vringh_bad("Failed to read head: idx %d address %p",
 			   *last_avail_idx, &vrh->vring.avail->ring[i]);
@@ -131,12 +128,10 @@ static inline ssize_t vringh_iov_xfer(struct vringh *vrh,
 
 /* May reduce *len if range is shorter. */
 static inline bool range_check(struct vringh *vrh, u64 addr, size_t *len,
-			       struct vringh_range *range,
-			       bool (*getrange)(struct vringh *,
-						u64, struct vringh_range *))
+			       struct vringh_range *range)
 {
 	if (addr < range->start || addr > range->end_incl) {
-		if (!getrange(vrh, addr, range))
+		if (!vrh->ops.getrange(vrh, addr, range))
 			return false;
 	}
 	BUG_ON(addr < range->start || addr > range->end_incl);
@@ -165,9 +160,7 @@ static inline bool range_check(struct vringh *vrh, u64 addr, size_t *len,
 }
 
 static inline bool no_range_check(struct vringh *vrh, u64 addr, size_t *len,
-				  struct vringh_range *range,
-				  bool (*getrange)(struct vringh *,
-						   u64, struct vringh_range *))
+				  struct vringh_range *range)
 {
 	return true;
 }
@@ -244,17 +237,7 @@ static u16 __cold return_from_indirect(const struct vringh *vrh, int *up_next,
 }
 
 static int slow_copy(struct vringh *vrh, void *dst, const void *src,
-		     bool (*rcheck)(struct vringh *vrh, u64 addr, size_t *len,
-				    struct vringh_range *range,
-				    bool (*getrange)(struct vringh *vrh,
-						     u64,
-						     struct vringh_range *)),
-		     bool (*getrange)(struct vringh *vrh,
-				      u64 addr,
-				      struct vringh_range *r),
-		     struct vringh_range *range,
-		     int (*copy)(const struct vringh *vrh,
-				 void *dst, const void *src, size_t len))
+		     struct vringh_range *range)
 {
 	size_t part, len = sizeof(struct vring_desc);
 
@@ -265,10 +248,10 @@ static int slow_copy(struct vringh *vrh, void *dst, const void *src,
 		part = len;
 		addr = (u64)(unsigned long)src - range->offset;
 
-		if (!rcheck(vrh, addr, &part, range, getrange))
+		if (!vrh->ops.range_check(vrh, addr, &part, range))
 			return -EINVAL;
 
-		err = copy(vrh, dst, src, part);
+		err = vrh->ops.copydesc(vrh, dst, src, part);
 		if (err)
 			return err;
 
@@ -279,18 +262,35 @@ static int slow_copy(struct vringh *vrh, void *dst, const void *src,
 	return 0;
 }
 
+static int __vringh_init(struct vringh *vrh, u64 features, unsigned int num,
+			 bool weak_barriers, gfp_t gfp, struct vring_desc *desc,
+			 struct vring_avail *avail, struct vring_used *used)
+{
+	/* Sane power of 2 please! */
+	if (!num || num > 0xffff || (num & (num - 1))) {
+		vringh_bad("Bad ring size %u", num);
+		return -EINVAL;
+	}
+
+	vrh->little_endian = (features & (1ULL << VIRTIO_F_VERSION_1));
+	vrh->event_indices = (features & (1 << VIRTIO_RING_F_EVENT_IDX));
+	vrh->weak_barriers = weak_barriers;
+	vrh->completed = 0;
+	vrh->last_avail_idx = 0;
+	vrh->last_used_idx = 0;
+	vrh->vring.num = num;
+	vrh->vring.desc = desc;
+	vrh->vring.avail = avail;
+	vrh->vring.used = used;
+	vrh->desc_gfp = gfp;
+
+	return 0;
+}
+
 static inline int
 __vringh_iov(struct vringh *vrh, u16 i,
 	     struct vringh_kiov *riov,
-	     struct vringh_kiov *wiov,
-	     bool (*rcheck)(struct vringh *vrh, u64 addr, size_t *len,
-			    struct vringh_range *range,
-			    bool (*getrange)(struct vringh *, u64,
-					     struct vringh_range *)),
-	     bool (*getrange)(struct vringh *, u64, struct vringh_range *),
-	     gfp_t gfp,
-	     int (*copy)(const struct vringh *vrh,
-			 void *dst, const void *src, size_t len))
+	     struct vringh_kiov *wiov, gfp_t gfp)
 {
 	int err, count = 0, indirect_count = 0, up_next, desc_max;
 	struct vring_desc desc, *descs;
@@ -317,10 +317,9 @@ __vringh_iov(struct vringh *vrh, u16 i,
 		size_t len;
 
 		if (unlikely(slow))
-			err = slow_copy(vrh, &desc, &descs[i], rcheck, getrange,
-					&slowrange, copy);
+			err = slow_copy(vrh, &desc, &descs[i], &slowrange);
 		else
-			err = copy(vrh, &desc, &descs[i], sizeof(desc));
+			err = vrh->ops.copydesc(vrh, &desc, &descs[i], sizeof(desc));
 		if (unlikely(err))
 			goto fail;
 
@@ -330,7 +329,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
 
 		/* Make sure it's OK, and get offset. */
 		len = vringh32_to_cpu(vrh, desc.len);
-		if (!rcheck(vrh, a, &len, &range, getrange)) {
+		if (!vrh->ops.range_check(vrh, a, &len, &range)) {
 			err = -EINVAL;
 			goto fail;
 		}
@@ -382,8 +381,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
 again:
 	/* Make sure it's OK, and get offset.
*/ len =3D vringh32_to_cpu(vrh, desc.len); - if (!rcheck(vrh, vringh64_to_cpu(vrh, desc.addr), &len, &range, - getrange)) { + if (!vrh->ops.range_check(vrh, vringh64_to_cpu(vrh, desc.addr), &len, &r= ange)) { err =3D -EINVAL; goto fail; } @@ -436,13 +434,7 @@ __vringh_iov(struct vringh *vrh, u16 i, =20 static inline int __vringh_complete(struct vringh *vrh, const struct vring_used_elem *used, - unsigned int num_used, - int (*putu16)(const struct vringh *vrh, - __virtio16 *p, u16 val), - int (*putused)(const struct vringh *vrh, - struct vring_used_elem *dst, - const struct vring_used_elem - *src, unsigned num)) + unsigned int num_used) { struct vring_used *used_ring; int err; @@ -456,12 +448,12 @@ static inline int __vringh_complete(struct vringh *vr= h, /* Compiler knows num_used =3D=3D 1 sometimes, hence extra check */ if (num_used > 1 && unlikely(off + num_used >=3D vrh->vring.num)) { u16 part =3D vrh->vring.num - off; - err =3D putused(vrh, &used_ring->ring[off], used, part); + err =3D vrh->ops.putused(vrh, &used_ring->ring[off], used, part); if (!err) - err =3D putused(vrh, &used_ring->ring[0], used + part, + err =3D vrh->ops.putused(vrh, &used_ring->ring[0], used + part, num_used - part); } else - err =3D putused(vrh, &used_ring->ring[off], used, num_used); + err =3D vrh->ops.putused(vrh, &used_ring->ring[off], used, num_used); =20 if (err) { vringh_bad("Failed to write %u used entries %u at %p", @@ -472,7 +464,7 @@ static inline int __vringh_complete(struct vringh *vrh, /* Make sure buffer is written before we update index. 
*/ virtio_wmb(vrh->weak_barriers); =20 - err =3D putu16(vrh, &vrh->vring.used->idx, used_idx + num_used); + err =3D vrh->ops.putu16(vrh, &vrh->vring.used->idx, used_idx + num_used); if (err) { vringh_bad("Failed to update used index at %p", &vrh->vring.used->idx); @@ -483,11 +475,13 @@ static inline int __vringh_complete(struct vringh *vr= h, return 0; } =20 - -static inline int __vringh_need_notify(struct vringh *vrh, - int (*getu16)(const struct vringh *vrh, - u16 *val, - const __virtio16 *p)) +/** + * vringh_need_notify - must we tell the other side about used buffers? + * @vrh: the vring we've called vringh_complete() on. + * + * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. + */ +int vringh_need_notify(struct vringh *vrh) { bool notify; u16 used_event; @@ -501,7 +495,7 @@ static inline int __vringh_need_notify(struct vringh *v= rh, /* Old-style, without event indices. */ if (!vrh->event_indices) { u16 flags; - err =3D getu16(vrh, &flags, &vrh->vring.avail->flags); + err =3D vrh->ops.getu16(vrh, &flags, &vrh->vring.avail->flags); if (err) { vringh_bad("Failed to get flags at %p", &vrh->vring.avail->flags); @@ -511,7 +505,7 @@ static inline int __vringh_need_notify(struct vringh *v= rh, } =20 /* Modern: we know when other side wants to know. */ - err =3D getu16(vrh, &used_event, &vring_used_event(&vrh->vring)); + err =3D vrh->ops.getu16(vrh, &used_event, &vring_used_event(&vrh->vring)); if (err) { vringh_bad("Failed to get used event idx at %p", &vring_used_event(&vrh->vring)); @@ -530,24 +524,28 @@ static inline int __vringh_need_notify(struct vringh = *vrh, vrh->completed =3D 0; return notify; } +EXPORT_SYMBOL(vringh_need_notify); =20 -static inline bool __vringh_notify_enable(struct vringh *vrh, - int (*getu16)(const struct vringh *vrh, - u16 *val, const __virtio16 *p), - int (*putu16)(const struct vringh *vrh, - __virtio16 *p, u16 val)) +/** + * vringh_notify_enable - we want to know if something changes. + * @vrh: the vring. 
+ * + * This always enables notifications, but returns false if there are + * now more buffers available in the vring. + */ +bool vringh_notify_enable(struct vringh *vrh) { u16 avail; =20 if (!vrh->event_indices) { /* Old-school; update flags. */ - if (putu16(vrh, &vrh->vring.used->flags, 0) !=3D 0) { + if (vrh->ops.putu16(vrh, &vrh->vring.used->flags, 0) !=3D 0) { vringh_bad("Clearing used flags %p", &vrh->vring.used->flags); return true; } } else { - if (putu16(vrh, &vring_avail_event(&vrh->vring), + if (vrh->ops.putu16(vrh, &vring_avail_event(&vrh->vring), vrh->last_avail_idx) !=3D 0) { vringh_bad("Updating avail event index %p", &vring_avail_event(&vrh->vring)); @@ -559,7 +557,7 @@ static inline bool __vringh_notify_enable(struct vringh= *vrh, * sure it's written, then check again. */ virtio_mb(vrh->weak_barriers); =20 - if (getu16(vrh, &avail, &vrh->vring.avail->idx) !=3D 0) { + if (vrh->ops.getu16(vrh, &avail, &vrh->vring.avail->idx) !=3D 0) { vringh_bad("Failed to check avail idx at %p", &vrh->vring.avail->idx); return true; @@ -570,20 +568,27 @@ static inline bool __vringh_notify_enable(struct vrin= gh *vrh, * notification anyway). */ return avail =3D=3D vrh->last_avail_idx; } +EXPORT_SYMBOL(vringh_notify_enable); =20 -static inline void __vringh_notify_disable(struct vringh *vrh, - int (*putu16)(const struct vringh *vrh, - __virtio16 *p, u16 val)) +/** + * vringh_notify_disable - don't tell us if something changes. + * @vrh: the vring. + * + * This is our normal running state: we disable and then only enable when + * we're going to sleep. + */ +void vringh_notify_disable(struct vringh *vrh) { if (!vrh->event_indices) { /* Old-school; update flags. 
*/ - if (putu16(vrh, &vrh->vring.used->flags, + if (vrh->ops.putu16(vrh, &vrh->vring.used->flags, VRING_USED_F_NO_NOTIFY)) { vringh_bad("Setting used flags %p", &vrh->vring.used->flags); } } } +EXPORT_SYMBOL(vringh_notify_disable); =20 /* Userspace access helpers: in this case, addresses are really userspace.= */ static inline int getu16_user(const struct vringh *vrh, u16 *val, const __= virtio16 *p) @@ -630,6 +635,16 @@ static inline int xfer_to_user(const struct vringh *vr= h, -EFAULT : 0; } =20 +static struct vringh_ops user_vringh_ops =3D { + .getu16 =3D getu16_user, + .putu16 =3D putu16_user, + .xfer_from =3D xfer_from_user, + .xfer_to =3D xfer_to_user, + .putused =3D putused_user, + .copydesc =3D copydesc_user, + .range_check =3D range_check, +}; + /** * vringh_init_user - initialize a vringh for a userspace vring. * @vrh: the vringh to initialize. @@ -639,6 +654,7 @@ static inline int xfer_to_user(const struct vringh *vrh, * @desc: the userpace descriptor pointer. * @avail: the userpace avail pointer. * @used: the userpace used pointer. + * @getrange: a function that return a range that vring can access. * * Returns an error if num is invalid: you should check pointers * yourself! @@ -647,36 +663,32 @@ int vringh_init_user(struct vringh *vrh, u64 features, unsigned int num, bool weak_barriers, vring_desc_t __user *desc, vring_avail_t __user *avail, - vring_used_t __user *used) + vring_used_t __user *used, + bool (*getrange)(struct vringh *vrh, u64 addr, struct vringh_range *r)) { - /* Sane power of 2 please! 
*/ - if (!num || num > 0xffff || (num & (num - 1))) { - vringh_bad("Bad ring size %u", num); - return -EINVAL; - } + int err; + + err =3D __vringh_init(vrh, features, num, weak_barriers, GFP_KERNEL, + (__force struct vring_desc *)desc, + (__force struct vring_avail *)avail, + (__force struct vring_used *)used); + if (err) + return err; + + memcpy(&vrh->ops, &user_vringh_ops, sizeof(user_vringh_ops)); + vrh->ops.getrange =3D getrange; =20 - vrh->little_endian =3D (features & (1ULL << VIRTIO_F_VERSION_1)); - vrh->event_indices =3D (features & (1 << VIRTIO_RING_F_EVENT_IDX)); - vrh->weak_barriers =3D weak_barriers; - vrh->completed =3D 0; - vrh->last_avail_idx =3D 0; - vrh->last_used_idx =3D 0; - vrh->vring.num =3D num; - /* vring expects kernel addresses, but only used via accessors. */ - vrh->vring.desc =3D (__force struct vring_desc *)desc; - vrh->vring.avail =3D (__force struct vring_avail *)avail; - vrh->vring.used =3D (__force struct vring_used *)used; return 0; } EXPORT_SYMBOL(vringh_init_user); =20 /** - * vringh_getdesc_user - get next available descriptor from userspace ring. - * @vrh: the userspace vring. + * vringh_getdesc - get next available descriptor from ring. + * @vrh: the vringh to get desc. * @riov: where to put the readable descriptors (or NULL) * @wiov: where to put the writable descriptors (or NULL) * @getrange: function to call to check ranges. - * @head: head index we received, for passing to vringh_complete_user(). + * @head: head index we received, for passing to vringh_complete(). * * Returns 0 if there was no descriptor, 1 if there was, or -errno. * @@ -690,17 +702,15 @@ EXPORT_SYMBOL(vringh_init_user); * When you don't have to use riov and wiov anymore, you should clean up t= hem * calling vringh_iov_cleanup() to release the memory, even on error! 
*/ -int vringh_getdesc_user(struct vringh *vrh, +int vringh_getdesc(struct vringh *vrh, struct vringh_kiov *riov, struct vringh_kiov *wiov, - bool (*getrange)(struct vringh *vrh, - u64 addr, struct vringh_range *r), u16 *head) { int err; =20 *head =3D vrh->vring.num; - err =3D __vringh_get_head(vrh, getu16_user, &vrh->last_avail_idx); + err =3D __vringh_get_head(vrh, &vrh->last_avail_idx); if (err < 0) return err; =20 @@ -709,137 +719,100 @@ int vringh_getdesc_user(struct vringh *vrh, return 0; =20 *head =3D err; - err =3D __vringh_iov(vrh, *head, (struct vringh_kiov *)riov, - (struct vringh_kiov *)wiov, - range_check, getrange, GFP_KERNEL, copydesc_user); + err =3D __vringh_iov(vrh, *head, riov, wiov, GFP_KERNEL); if (err) return err; =20 return 1; } -EXPORT_SYMBOL(vringh_getdesc_user); +EXPORT_SYMBOL(vringh_getdesc); =20 /** - * vringh_iov_pull_user - copy bytes from vring_kiov. - * @riov: the riov as passed to vringh_getdesc_user() (updated as we consu= me) + * vringh_iov_pull - copy bytes from vring_kiov. + * @vrh: the vringh to load data. + * @riov: the riov as passed to vringh_getdesc() (updated as we consume) * @dst: the place to copy. * @len: the maximum length to copy. * * Returns the bytes copied <=3D len or a negative errno. */ -ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t l= en) +ssize_t vringh_iov_pull(struct vringh *vrh, struct vringh_kiov *riov, void= *dst, size_t len) { return vringh_iov_xfer(NULL, (struct vringh_kiov *)riov, - dst, len, xfer_from_user); + dst, len, vrh->ops.xfer_from); } -EXPORT_SYMBOL(vringh_iov_pull_user); +EXPORT_SYMBOL(vringh_iov_pull); =20 /** - * vringh_iov_push_user - copy bytes into vring_kiov. - * @wiov: the wiov as passed to vringh_getdesc_user() (updated as we consu= me) + * vringh_iov_push - copy bytes into vring_kiov. + * @vrh: the vringh to store data. + * @wiov: the wiov as passed to vringh_getdesc() (updated as we consume) * @src: the place to copy from. 
* @len: the maximum length to copy. * * Returns the bytes copied <=3D len or a negative errno. */ -ssize_t vringh_iov_push_user(struct vringh_kiov *wiov, +ssize_t vringh_iov_push(struct vringh *vrh, struct vringh_kiov *wiov, const void *src, size_t len) { return vringh_iov_xfer(NULL, (struct vringh_kiov *)wiov, - (void *)src, len, xfer_to_user); + (void *)src, len, vrh->ops.xfer_to); } -EXPORT_SYMBOL(vringh_iov_push_user); +EXPORT_SYMBOL(vringh_iov_push); =20 /** - * vringh_abandon_user - we've decided not to handle the descriptor(s). + * vringh_abandon - we've decided not to handle the descriptor(s). * @vrh: the vring. * @num: the number of descriptors to put back (ie. num * vringh_get_user() to undo). * * The next vringh_get_user() will return the old descriptor(s) again. */ -void vringh_abandon_user(struct vringh *vrh, unsigned int num) +void vringh_abandon(struct vringh *vrh, unsigned int num) { /* We only update vring_avail_event(vr) when we want to be notified, * so we haven't changed that yet. */ vrh->last_avail_idx -=3D num; } -EXPORT_SYMBOL(vringh_abandon_user); +EXPORT_SYMBOL(vringh_abandon); =20 /** - * vringh_complete_user - we've finished with descriptor, publish it. + * vringh_complete - we've finished with descriptor, publish it. * @vrh: the vring. - * @head: the head as filled in by vringh_getdesc_user. + * @head: the head as filled in by vringh_getdesc. * @len: the length of data we have written. * - * You should check vringh_need_notify_user() after one or more calls + * You should check vringh_need_notify() after one or more calls * to this function. 
*/ -int vringh_complete_user(struct vringh *vrh, u16 head, u32 len) +int vringh_complete(struct vringh *vrh, u16 head, u32 len) { struct vring_used_elem used; =20 used.id =3D cpu_to_vringh32(vrh, head); used.len =3D cpu_to_vringh32(vrh, len); - return __vringh_complete(vrh, &used, 1, putu16_user, putused_user); + return __vringh_complete(vrh, &used, 1); } -EXPORT_SYMBOL(vringh_complete_user); +EXPORT_SYMBOL(vringh_complete); =20 /** - * vringh_complete_multi_user - we've finished with many descriptors. + * vringh_complete_multi - we've finished with many descriptors. * @vrh: the vring. * @used: the head, length pairs. * @num_used: the number of used elements. * - * You should check vringh_need_notify_user() after one or more calls + * You should check vringh_need_notify() after one or more calls * to this function. */ -int vringh_complete_multi_user(struct vringh *vrh, +int vringh_complete_multi(struct vringh *vrh, const struct vring_used_elem used[], unsigned num_used) { - return __vringh_complete(vrh, used, num_used, - putu16_user, putused_user); -} -EXPORT_SYMBOL(vringh_complete_multi_user); - -/** - * vringh_notify_enable_user - we want to know if something changes. - * @vrh: the vring. - * - * This always enables notifications, but returns false if there are - * now more buffers available in the vring. - */ -bool vringh_notify_enable_user(struct vringh *vrh) -{ - return __vringh_notify_enable(vrh, getu16_user, putu16_user); + return __vringh_complete(vrh, used, num_used); } -EXPORT_SYMBOL(vringh_notify_enable_user); +EXPORT_SYMBOL(vringh_complete_multi); =20 -/** - * vringh_notify_disable_user - don't tell us if something changes. - * @vrh: the vring. - * - * This is our normal running state: we disable and then only enable when - * we're going to sleep. 
- */ -void vringh_notify_disable_user(struct vringh *vrh) -{ - __vringh_notify_disable(vrh, putu16_user); -} -EXPORT_SYMBOL(vringh_notify_disable_user); =20 -/** - * vringh_need_notify_user - must we tell the other side about used buffer= s? - * @vrh: the vring we've called vringh_complete_user() on. - * - * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. - */ -int vringh_need_notify_user(struct vringh *vrh) -{ - return __vringh_need_notify(vrh, getu16_user); -} -EXPORT_SYMBOL(vringh_need_notify_user); =20 /* Kernelspace access helpers. */ static inline int getu16_kern(const struct vringh *vrh, @@ -885,6 +858,17 @@ static inline int kern_xfer(const struct vringh *vrh, = void *dst, return 0; } =20 +static const struct vringh_ops kern_vringh_ops =3D { + .getu16 =3D getu16_kern, + .putu16 =3D putu16_kern, + .xfer_from =3D xfer_kern, + .xfer_to =3D xfer_kern, + .putused =3D putused_kern, + .copydesc =3D copydesc_kern, + .range_check =3D no_range_check, + .getrange =3D NULL, +}; + /** * vringh_init_kern - initialize a vringh for a kernelspace vring. * @vrh: the vringh to initialize. @@ -898,179 +882,22 @@ static inline int kern_xfer(const struct vringh *vrh= , void *dst, * Returns an error if num is invalid. */ int vringh_init_kern(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, gfp_t gfp, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used) -{ - /* Sane power of 2 please! 
*/ - if (!num || num > 0xffff || (num & (num - 1))) { - vringh_bad("Bad ring size %u", num); - return -EINVAL; - } - - vrh->little_endian =3D (features & (1ULL << VIRTIO_F_VERSION_1)); - vrh->event_indices =3D (features & (1 << VIRTIO_RING_F_EVENT_IDX)); - vrh->weak_barriers =3D weak_barriers; - vrh->completed =3D 0; - vrh->last_avail_idx =3D 0; - vrh->last_used_idx =3D 0; - vrh->vring.num =3D num; - vrh->vring.desc =3D desc; - vrh->vring.avail =3D avail; - vrh->vring.used =3D used; - return 0; -} -EXPORT_SYMBOL(vringh_init_kern); - -/** - * vringh_getdesc_kern - get next available descriptor from kernelspace ri= ng. - * @vrh: the kernelspace vring. - * @riov: where to put the readable descriptors (or NULL) - * @wiov: where to put the writable descriptors (or NULL) - * @head: head index we received, for passing to vringh_complete_kern(). - * @gfp: flags for allocating larger riov/wiov. - * - * Returns 0 if there was no descriptor, 1 if there was, or -errno. - * - * Note that on error return, you can tell the difference between an - * invalid ring and a single invalid descriptor: in the former case, - * *head will be vrh->vring.num. You may be able to ignore an invalid - * descriptor, but there's not much you can do with an invalid ring. - * - * Note that you can reuse riov and wiov with subsequent calls. Content is - * overwritten and memory reallocated if more space is needed. - * When you don't have to use riov and wiov anymore, you should clean up t= hem - * calling vringh_kiov_cleanup() to release the memory, even on error! - */ -int vringh_getdesc_kern(struct vringh *vrh, - struct vringh_kiov *riov, - struct vringh_kiov *wiov, - u16 *head, - gfp_t gfp) { int err; =20 - err =3D __vringh_get_head(vrh, getu16_kern, &vrh->last_avail_idx); - if (err < 0) - return err; - - /* Empty... 
*/ - if (err =3D=3D vrh->vring.num) - return 0; - - *head =3D err; - err =3D __vringh_iov(vrh, *head, riov, wiov, no_range_check, NULL, - gfp, copydesc_kern); + err =3D __vringh_init(vrh, features, num, weak_barriers, gfp, desc, avail= , used); if (err) return err; =20 - return 1; -} -EXPORT_SYMBOL(vringh_getdesc_kern); - -/** - * vringh_iov_pull_kern - copy bytes from vring_iov. - * @riov: the riov as passed to vringh_getdesc_kern() (updated as we consu= me) - * @dst: the place to copy. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_pull_kern(struct vringh_kiov *riov, void *dst, size_t l= en) -{ - return vringh_iov_xfer(NULL, riov, dst, len, xfer_kern); -} -EXPORT_SYMBOL(vringh_iov_pull_kern); - -/** - * vringh_iov_push_kern - copy bytes into vring_iov. - * @wiov: the wiov as passed to vringh_getdesc_kern() (updated as we consu= me) - * @src: the place to copy from. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov, - const void *src, size_t len) -{ - return vringh_iov_xfer(NULL, wiov, (void *)src, len, kern_xfer); -} -EXPORT_SYMBOL(vringh_iov_push_kern); + memcpy(&vrh->ops, &kern_vringh_ops, sizeof(kern_vringh_ops)); =20 -/** - * vringh_abandon_kern - we've decided not to handle the descriptor(s). - * @vrh: the vring. - * @num: the number of descriptors to put back (ie. num - * vringh_get_kern() to undo). - * - * The next vringh_get_kern() will return the old descriptor(s) again. - */ -void vringh_abandon_kern(struct vringh *vrh, unsigned int num) -{ - /* We only update vring_avail_event(vr) when we want to be notified, - * so we haven't changed that yet. */ - vrh->last_avail_idx -=3D num; -} -EXPORT_SYMBOL(vringh_abandon_kern); - -/** - * vringh_complete_kern - we've finished with descriptor, publish it. - * @vrh: the vring. 
- * @head: the head as filled in by vringh_getdesc_kern. - * @len: the length of data we have written. - * - * You should check vringh_need_notify_kern() after one or more calls - * to this function. - */ -int vringh_complete_kern(struct vringh *vrh, u16 head, u32 len) -{ - struct vring_used_elem used; - - used.id =3D cpu_to_vringh32(vrh, head); - used.len =3D cpu_to_vringh32(vrh, len); - - return __vringh_complete(vrh, &used, 1, putu16_kern, putused_kern); -} -EXPORT_SYMBOL(vringh_complete_kern); - -/** - * vringh_notify_enable_kern - we want to know if something changes. - * @vrh: the vring. - * - * This always enables notifications, but returns false if there are - * now more buffers available in the vring. - */ -bool vringh_notify_enable_kern(struct vringh *vrh) -{ - return __vringh_notify_enable(vrh, getu16_kern, putu16_kern); -} -EXPORT_SYMBOL(vringh_notify_enable_kern); - -/** - * vringh_notify_disable_kern - don't tell us if something changes. - * @vrh: the vring. - * - * This is our normal running state: we disable and then only enable when - * we're going to sleep. - */ -void vringh_notify_disable_kern(struct vringh *vrh) -{ - __vringh_notify_disable(vrh, putu16_kern); -} -EXPORT_SYMBOL(vringh_notify_disable_kern); - -/** - * vringh_need_notify_kern - must we tell the other side about used buffer= s? - * @vrh: the vring we've called vringh_complete_kern() on. - * - * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. 
- */ -int vringh_need_notify_kern(struct vringh *vrh) -{ - return __vringh_need_notify(vrh, getu16_kern); + return 0; } -EXPORT_SYMBOL(vringh_need_notify_kern); +EXPORT_SYMBOL(vringh_init_kern); =20 #if IS_REACHABLE(CONFIG_VHOST_IOTLB) =20 @@ -1122,7 +949,7 @@ static int iotlb_translate(const struct vringh *vrh, return ret; } =20 -static inline int copy_from_iotlb(const struct vringh *vrh, void *dst, +static int copy_from_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { u64 total_translated =3D 0; @@ -1155,7 +982,7 @@ static inline int copy_from_iotlb(const struct vringh = *vrh, void *dst, return total_translated; } =20 -static inline int copy_to_iotlb(const struct vringh *vrh, void *dst, +static int copy_to_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { u64 total_translated =3D 0; @@ -1188,7 +1015,7 @@ static inline int copy_to_iotlb(const struct vringh *= vrh, void *dst, return total_translated; } =20 -static inline int getu16_iotlb(const struct vringh *vrh, +static int getu16_iotlb(const struct vringh *vrh, u16 *val, const __virtio16 *p) { struct bio_vec iov; @@ -1209,7 +1036,7 @@ static inline int getu16_iotlb(const struct vringh *v= rh, return 0; } =20 -static inline int putu16_iotlb(const struct vringh *vrh, +static int putu16_iotlb(const struct vringh *vrh, __virtio16 *p, u16 val) { struct bio_vec iov; @@ -1230,7 +1057,7 @@ static inline int putu16_iotlb(const struct vringh *v= rh, return 0; } =20 -static inline int copydesc_iotlb(const struct vringh *vrh, +static int copydesc_iotlb(const struct vringh *vrh, void *dst, const void *src, size_t len) { int ret; @@ -1242,7 +1069,7 @@ static inline int copydesc_iotlb(const struct vringh = *vrh, return 0; } =20 -static inline int xfer_from_iotlb(const struct vringh *vrh, void *src, +static int xfer_from_iotlb(const struct vringh *vrh, void *src, void *dst, size_t len) { int ret; @@ -1254,7 +1081,7 @@ static inline int xfer_from_iotlb(const struct vringh= *vrh, void *src, 
return 0; } =20 -static inline int xfer_to_iotlb(const struct vringh *vrh, +static int xfer_to_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { int ret; @@ -1266,7 +1093,7 @@ static inline int xfer_to_iotlb(const struct vringh *= vrh, return 0; } =20 -static inline int putused_iotlb(const struct vringh *vrh, +static int putused_iotlb(const struct vringh *vrh, struct vring_used_elem *dst, const struct vring_used_elem *src, unsigned int num) @@ -1281,6 +1108,17 @@ static inline int putused_iotlb(const struct vringh = *vrh, return 0; } =20 +static const struct vringh_ops iotlb_vringh_ops =3D { + .getu16 =3D getu16_iotlb, + .putu16 =3D putu16_iotlb, + .xfer_from =3D xfer_from_iotlb, + .xfer_to =3D xfer_to_iotlb, + .putused =3D putused_iotlb, + .copydesc =3D copydesc_iotlb, + .range_check =3D no_range_check, + .getrange =3D NULL, +}; + /** * vringh_init_iotlb - initialize a vringh for a ring with IOTLB. * @vrh: the vringh to initialize. @@ -1294,13 +1132,20 @@ static inline int putused_iotlb(const struct vringh= *vrh, * Returns an error if num is invalid. */ int vringh_init_iotlb(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, gfp_t gfp, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used) { - return vringh_init_kern(vrh, features, num, weak_barriers, - desc, avail, used); + int err; + + err =3D __vringh_init(vrh, features, num, weak_barriers, gfp, desc, avail= , used); + if (err) + return err; + + memcpy(&vrh->ops, &iotlb_vringh_ops, sizeof(iotlb_vringh_ops)); + + return 0; } EXPORT_SYMBOL(vringh_init_iotlb); =20 @@ -1318,162 +1163,6 @@ void vringh_set_iotlb(struct vringh *vrh, struct vh= ost_iotlb *iotlb, } EXPORT_SYMBOL(vringh_set_iotlb); =20 -/** - * vringh_getdesc_iotlb - get next available descriptor from ring with - * IOTLB. - * @vrh: the kernelspace vring. 
- * @riov: where to put the readable descriptors (or NULL) - * @wiov: where to put the writable descriptors (or NULL) - * @head: head index we received, for passing to vringh_complete_iotlb(). - * @gfp: flags for allocating larger riov/wiov. - * - * Returns 0 if there was no descriptor, 1 if there was, or -errno. - * - * Note that on error return, you can tell the difference between an - * invalid ring and a single invalid descriptor: in the former case, - * *head will be vrh->vring.num. You may be able to ignore an invalid - * descriptor, but there's not much you can do with an invalid ring. - * - * Note that you can reuse riov and wiov with subsequent calls. Content is - * overwritten and memory reallocated if more space is needed. - * When you don't have to use riov and wiov anymore, you should clean up t= hem - * calling vringh_kiov_cleanup() to release the memory, even on error! - */ -int vringh_getdesc_iotlb(struct vringh *vrh, - struct vringh_kiov *riov, - struct vringh_kiov *wiov, - u16 *head, - gfp_t gfp) -{ - int err; - - err =3D __vringh_get_head(vrh, getu16_iotlb, &vrh->last_avail_idx); - if (err < 0) - return err; - - /* Empty... */ - if (err =3D=3D vrh->vring.num) - return 0; - - *head =3D err; - err =3D __vringh_iov(vrh, *head, riov, wiov, no_range_check, NULL, - gfp, copydesc_iotlb); - if (err) - return err; - - return 1; -} -EXPORT_SYMBOL(vringh_getdesc_iotlb); - -/** - * vringh_iov_pull_iotlb - copy bytes from vring_iov. - * @vrh: the vring. - * @riov: the riov as passed to vringh_getdesc_iotlb() (updated as we cons= ume) - * @dst: the place to copy. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_pull_iotlb(struct vringh *vrh, - struct vringh_kiov *riov, - void *dst, size_t len) -{ - return vringh_iov_xfer(vrh, riov, dst, len, xfer_from_iotlb); -} -EXPORT_SYMBOL(vringh_iov_pull_iotlb); - -/** - * vringh_iov_push_iotlb - copy bytes into vring_iov. - * @vrh: the vring. 
- * @wiov: the wiov as passed to vringh_getdesc_iotlb() (updated as we cons= ume) - * @src: the place to copy from. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_push_iotlb(struct vringh *vrh, - struct vringh_kiov *wiov, - const void *src, size_t len) -{ - return vringh_iov_xfer(vrh, wiov, (void *)src, len, xfer_to_iotlb); -} -EXPORT_SYMBOL(vringh_iov_push_iotlb); - -/** - * vringh_abandon_iotlb - we've decided not to handle the descriptor(s). - * @vrh: the vring. - * @num: the number of descriptors to put back (ie. num - * vringh_get_iotlb() to undo). - * - * The next vringh_get_iotlb() will return the old descriptor(s) again. - */ -void vringh_abandon_iotlb(struct vringh *vrh, unsigned int num) -{ - /* We only update vring_avail_event(vr) when we want to be notified, - * so we haven't changed that yet. - */ - vrh->last_avail_idx -=3D num; -} -EXPORT_SYMBOL(vringh_abandon_iotlb); - -/** - * vringh_complete_iotlb - we've finished with descriptor, publish it. - * @vrh: the vring. - * @head: the head as filled in by vringh_getdesc_iotlb. - * @len: the length of data we have written. - * - * You should check vringh_need_notify_iotlb() after one or more calls - * to this function. - */ -int vringh_complete_iotlb(struct vringh *vrh, u16 head, u32 len) -{ - struct vring_used_elem used; - - used.id =3D cpu_to_vringh32(vrh, head); - used.len =3D cpu_to_vringh32(vrh, len); - - return __vringh_complete(vrh, &used, 1, putu16_iotlb, putused_iotlb); -} -EXPORT_SYMBOL(vringh_complete_iotlb); - -/** - * vringh_notify_enable_iotlb - we want to know if something changes. - * @vrh: the vring. - * - * This always enables notifications, but returns false if there are - * now more buffers available in the vring. 
- */
-bool vringh_notify_enable_iotlb(struct vringh *vrh)
-{
-	return __vringh_notify_enable(vrh, getu16_iotlb, putu16_iotlb);
-}
-EXPORT_SYMBOL(vringh_notify_enable_iotlb);
-
-/**
- * vringh_notify_disable_iotlb - don't tell us if something changes.
- * @vrh: the vring.
- *
- * This is our normal running state: we disable and then only enable when
- * we're going to sleep.
- */
-void vringh_notify_disable_iotlb(struct vringh *vrh)
-{
-	__vringh_notify_disable(vrh, putu16_iotlb);
-}
-EXPORT_SYMBOL(vringh_notify_disable_iotlb);
-
-/**
- * vringh_need_notify_iotlb - must we tell the other side about used buffers?
- * @vrh: the vring we've called vringh_complete_iotlb() on.
- *
- * Returns -errno or 0 if we don't need to tell the other side, 1 if we do.
- */
-int vringh_need_notify_iotlb(struct vringh *vrh)
-{
-	return __vringh_need_notify(vrh, getu16_iotlb);
-}
-EXPORT_SYMBOL(vringh_need_notify_iotlb);
-
 #endif
 
 MODULE_LICENSE("GPL");
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 733d948e8123..89c73605c85f 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -21,6 +21,36 @@
 #endif
 #include
 
+struct vringh;
+struct vringh_range;
+
+/**
+ * struct vringh_ops - ops for accessing a vring and checking accessible ranges.
+ * @getu16: read u16 value from pointer
+ * @putu16: write u16 value to pointer
+ * @xfer_from: copy memory range from specified address to local virtual address
+ * @xfer_to: copy memory range from local virtual address to specified address
+ * @putused: update vring used descriptor
+ * @copydesc: copy descriptor from target to local virtual address
+ * @range_check: check if the region is accessible
+ * @getrange: return a range that vring can access
+ */
+struct vringh_ops {
+	int (*getu16)(const struct vringh *vrh, u16 *val, const __virtio16 *p);
+	int (*putu16)(const struct vringh *vrh, __virtio16 *p, u16 val);
+	int (*xfer_from)(const struct vringh *vrh, void *src, void *dst,
+			 size_t len);
+	int (*xfer_to)(const struct vringh *vrh, void *dst, void *src,
+		       size_t len);
+	int (*putused)(const struct vringh *vrh, struct vring_used_elem *dst,
+		       const struct vring_used_elem *src, unsigned int num);
+	int (*copydesc)(const struct vringh *vrh, void *dst, const void *src,
+			size_t len);
+	bool (*range_check)(struct vringh *vrh, u64 addr, size_t *len,
+			    struct vringh_range *range);
+	bool (*getrange)(struct vringh *vrh, u64 addr, struct vringh_range *r);
+};
+
 /* virtio_ring with information needed for host access. */
 struct vringh {
 	/* Everything is little endian */
@@ -52,6 +82,10 @@ struct vringh {
 
 	/* The function to call to notify the guest about added buffers */
 	void (*notify)(struct vringh *);
+
+	struct vringh_ops ops;
+
+	gfp_t desc_gfp;
 };
 
 /**
@@ -99,41 +133,40 @@ int vringh_init_user(struct vringh *vrh, u64 features,
 		     unsigned int num, bool weak_barriers,
 		     vring_desc_t __user *desc,
 		     vring_avail_t __user *avail,
-		     vring_used_t __user *used);
+		     vring_used_t __user *used,
+		     bool (*getrange)(struct vringh *vrh, u64 addr, struct vringh_range *r));
 
 /* Convert a descriptor into iovecs.
 */
-int vringh_getdesc_user(struct vringh *vrh,
+int vringh_getdesc(struct vringh *vrh,
 			struct vringh_kiov *riov,
 			struct vringh_kiov *wiov,
-			bool (*getrange)(struct vringh *vrh,
-					 u64 addr, struct vringh_range *r),
 			u16 *head);
 
 /* Copy bytes from readable vsg, consuming it (and incrementing wiov->i). */
-ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t len);
+ssize_t vringh_iov_pull(struct vringh *vrh, struct vringh_kiov *riov, void *dst, size_t len);
 
 /* Copy bytes into writable vsg, consuming it (and incrementing wiov->i). */
-ssize_t vringh_iov_push_user(struct vringh_kiov *wiov,
+ssize_t vringh_iov_push(struct vringh *vrh, struct vringh_kiov *wiov,
 			const void *src, size_t len);
 
 /* Mark a descriptor as used. */
-int vringh_complete_user(struct vringh *vrh, u16 head, u32 len);
-int vringh_complete_multi_user(struct vringh *vrh,
+int vringh_complete(struct vringh *vrh, u16 head, u32 len);
+int vringh_complete_multi(struct vringh *vrh,
 			       const struct vring_used_elem used[],
 			       unsigned num_used);
 
 /* Pretend we've never seen descriptor (for easy error handling). */
-void vringh_abandon_user(struct vringh *vrh, unsigned int num);
+void vringh_abandon(struct vringh *vrh, unsigned int num);
 
 /* Do we need to fire the eventfd to notify the other side? */
-int vringh_need_notify_user(struct vringh *vrh);
+int vringh_need_notify(struct vringh *vrh);
 
-bool vringh_notify_enable_user(struct vringh *vrh);
-void vringh_notify_disable_user(struct vringh *vrh);
+bool vringh_notify_enable(struct vringh *vrh);
+void vringh_notify_disable(struct vringh *vrh);
 
 /* Helpers for kernelspace vrings.
 */
int vringh_init_kern(struct vringh *vrh, u64 features,
-		     unsigned int num, bool weak_barriers,
+		     unsigned int num, bool weak_barriers, gfp_t gfp,
		     struct vring_desc *desc,
		     struct vring_avail *avail,
		     struct vring_used *used);
@@ -176,23 +209,6 @@ static inline size_t vringh_kiov_length(struct vringh_kiov *kiov)
 
 void vringh_kiov_advance(struct vringh_kiov *kiov, size_t len);
 
-int vringh_getdesc_kern(struct vringh *vrh,
-			struct vringh_kiov *riov,
-			struct vringh_kiov *wiov,
-			u16 *head,
-			gfp_t gfp);
-
-ssize_t vringh_iov_pull_kern(struct vringh_kiov *riov, void *dst, size_t len);
-ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov,
-			     const void *src, size_t len);
-void vringh_abandon_kern(struct vringh *vrh, unsigned int num);
-int vringh_complete_kern(struct vringh *vrh, u16 head, u32 len);
-
-bool vringh_notify_enable_kern(struct vringh *vrh);
-void vringh_notify_disable_kern(struct vringh *vrh);
-
-int vringh_need_notify_kern(struct vringh *vrh);
-
 /* Notify the guest about buffers added to the used ring */
 static inline void vringh_notify(struct vringh *vrh)
 {
@@ -242,33 +258,11 @@ void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb,
 		      spinlock_t *iotlb_lock);
 
 int vringh_init_iotlb(struct vringh *vrh, u64 features,
-		      unsigned int num, bool weak_barriers,
+		      unsigned int num, bool weak_barriers, gfp_t gfp,
		      struct vring_desc *desc,
		      struct vring_avail *avail,
		      struct vring_used *used);
 
-int vringh_getdesc_iotlb(struct vringh *vrh,
-			 struct vringh_kiov *riov,
-			 struct vringh_kiov *wiov,
-			 u16 *head,
-			 gfp_t gfp);
-
-ssize_t vringh_iov_pull_iotlb(struct vringh *vrh,
-			      struct vringh_kiov *riov,
-			      void *dst, size_t len);
-ssize_t vringh_iov_push_iotlb(struct vringh *vrh,
-			      struct vringh_kiov *wiov,
-			      const void *src, size_t len);
-
-void vringh_abandon_iotlb(struct vringh *vrh, unsigned int num);
-
-int vringh_complete_iotlb(struct vringh *vrh, u16 head, u32 len);
-
-bool vringh_notify_enable_iotlb(struct vringh *vrh);
-void vringh_notify_disable_iotlb(struct vringh *vrh);
-
-int vringh_need_notify_iotlb(struct vringh *vrh);
-
 #endif /* CONFIG_VHOST_IOTLB */
 
 #endif /* _LINUX_VRINGH_H */
-- 
2.25.1

From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 6/7] tools/virtio: convert to use new unified vringh APIs
Date: Thu, 2 Feb 2023 18:09:33 +0900
Message-Id: <20230202090934.549556-7-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

The vringh_*_user APIs are being removed (all except vringh_init_user()),
so change to use the new unified APIs.

Signed-off-by: Shunsuke Mie
---
 tools/virtio/vringh_test.c | 89 +++++++++++++++++++------------------
 1 file changed, 44 insertions(+), 45 deletions(-)

diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
index 6c9533b8a2ca..068c6d5aa4fd 100644
--- a/tools/virtio/vringh_test.c
+++ b/tools/virtio/vringh_test.c
@@ -187,7 +187,7 @@ static int parallel_test(u64 features,
 
 	vring_init(&vrh.vring, RINGSIZE, host_map, ALIGN);
 	vringh_init_user(&vrh, features, RINGSIZE, true,
-			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used);
+			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used, getrange);
 	CPU_SET(first_cpu, &cpu_set);
 	if (sched_setaffinity(getpid(), sizeof(cpu_set), &cpu_set))
 		errx(1, "Could not set affinity to cpu %u", first_cpu);
@@ -202,9 +202,9 @@ static int parallel_test(u64 features,
 			err = vringh_get_head(&vrh, &head);
 			if (err != 0)
 				break;
-			err = vringh_need_notify_user(&vrh);
+			err = vringh_need_notify(&vrh);
 			if (err < 0)
-				errx(1, "vringh_need_notify_user: %i",
+				errx(1, "vringh_need_notify: %i",
 				     err);
 			if (err) {
 				write(to_guest[1], "", 1);
@@ -223,46 +223,45 @@ static int parallel_test(u64 features,
 					 host_wiov,
 					 ARRAY_SIZE(host_wiov));
 
-			err = vringh_getdesc_user(&vrh, &riov, &wiov,
-						  getrange, &head);
+			err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 		}
 		if (err == 0) {
-			err = vringh_need_notify_user(&vrh);
+			err = vringh_need_notify(&vrh);
 			if (err < 0)
-				errx(1, "vringh_need_notify_user: %i",
+				errx(1, "vringh_need_notify: %i",
 				     err);
 			if (err) {
 				write(to_guest[1], "", 1);
 				notifies++;
 			}
 
-			if (!vringh_notify_enable_user(&vrh))
+			if (!vringh_notify_enable(&vrh))
 				continue;
 
 			/* Swallow all notifies at once. */
 			if (read(to_host[0], buf, sizeof(buf)) < 1)
 				break;
 
-			vringh_notify_disable_user(&vrh);
+			vringh_notify_disable(&vrh);
 			receives++;
 			continue;
 		}
 		if (err != 1)
-			errx(1, "vringh_getdesc_user: %i", err);
+			errx(1, "vringh_getdesc: %i", err);
 
 		/* We simply copy bytes.
 */
 		if (riov.used) {
-			rlen = vringh_iov_pull_user(&riov, rbuf,
+			rlen = vringh_iov_pull(&vrh, &riov, rbuf,
 						    sizeof(rbuf));
 			if (rlen != 4)
-				errx(1, "vringh_iov_pull_user: %i",
+				errx(1, "vringh_iov_pull: %i",
 				     rlen);
 			assert(riov.i == riov.used);
 			written = 0;
 		} else {
-			err = vringh_iov_push_user(&wiov, rbuf, rlen);
+			err = vringh_iov_push(&vrh, &wiov, rbuf, rlen);
 			if (err != rlen)
-				errx(1, "vringh_iov_push_user: %i",
+				errx(1, "vringh_iov_push: %i",
 				     err);
 			assert(wiov.i == wiov.used);
 			written = err;
@@ -270,14 +269,14 @@ static int parallel_test(u64 features,
 	complete:
 		xfers++;
 
-		err = vringh_complete_user(&vrh, head, written);
+		err = vringh_complete(&vrh, head, written);
 		if (err != 0)
-			errx(1, "vringh_complete_user: %i", err);
+			errx(1, "vringh_complete: %i", err);
 	}
 
-	err = vringh_need_notify_user(&vrh);
+	err = vringh_need_notify(&vrh);
 	if (err < 0)
-		errx(1, "vringh_need_notify_user: %i", err);
+		errx(1, "vringh_need_notify: %i", err);
 	if (err) {
 		write(to_guest[1], "", 1);
 		notifies++;
@@ -493,12 +492,12 @@ int main(int argc, char *argv[])
 	/* Set up host side. */
 	vring_init(&vrh.vring, RINGSIZE, __user_addr_min, ALIGN);
 	vringh_init_user(&vrh, vdev.features, RINGSIZE, true,
-			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used);
+			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used, getrange);
 
 	/* No descriptor to get yet... */
-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 0)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);
 
 	/* Guest puts in a descriptor.
 */
 	memcpy(__user_addr_max - 1, "a", 1);
@@ -520,9 +519,9 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 1)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);
 
 	assert(riov.used == 1);
 	assert(riov.iov[0].iov_base == __user_addr_max - 1);
@@ -539,25 +538,25 @@ int main(int argc, char *argv[])
 		assert(wiov.iov[1].iov_len == 1);
 	}
 
-	err = vringh_iov_pull_user(&riov, buf, 5);
+	err = vringh_iov_pull(&vrh, &riov, buf, 5);
 	if (err != 1)
-		errx(1, "vringh_iov_pull_user: %i", err);
+		errx(1, "vringh_iov_pull: %i", err);
 	assert(buf[0] == 'a');
 	assert(riov.i == 1);
-	assert(vringh_iov_pull_user(&riov, buf, 5) == 0);
+	assert(vringh_iov_pull(&vrh, &riov, buf, 5) == 0);
 
 	memcpy(buf, "bcdef", 5);
-	err = vringh_iov_push_user(&wiov, buf, 5);
+	err = vringh_iov_push(&vrh, &wiov, buf, 5);
 	if (err != 2)
-		errx(1, "vringh_iov_push_user: %i", err);
+		errx(1, "vringh_iov_push: %i", err);
 	assert(memcmp(__user_addr_max - 3, "bc", 2) == 0);
 	assert(wiov.i == wiov.used);
-	assert(vringh_iov_push_user(&wiov, buf, 5) == 0);
+	assert(vringh_iov_push(&vrh, &wiov, buf, 5) == 0);
 
 	/* Host is done. */
-	err = vringh_complete_user(&vrh, head, err);
+	err = vringh_complete(&vrh, head, err);
 	if (err != 0)
-		errx(1, "vringh_complete_user: %i", err);
+		errx(1, "vringh_complete: %i", err);
 
 	/* Guest should see used token now.
 */
 	__kfree_ignore_start = __user_addr_min + vring_size(RINGSIZE, ALIGN);
@@ -589,9 +588,9 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 1)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);
 
 	assert(riov.max_num & VRINGH_IOV_ALLOCATED);
 	assert(riov.iov != host_riov);
@@ -605,9 +604,9 @@ int main(int argc, char *argv[])
 
 	/* Pull data back out (in odd chunks), should be as expected. */
 	for (i = 0; i < RINGSIZE * USER_MEM/4; i += 3) {
-		err = vringh_iov_pull_user(&riov, buf, 3);
+		err = vringh_iov_pull(&vrh, &riov, buf, 3);
 		if (err != 3 && i + err != RINGSIZE * USER_MEM/4)
-			errx(1, "vringh_iov_pull_user large: %i", err);
+			errx(1, "vringh_iov_pull large: %i", err);
 		assert(buf[0] == (char)i);
 		assert(err < 2 || buf[1] == (char)(i + 1));
 		assert(err < 3 || buf[2] == (char)(i + 2));
@@ -619,9 +618,9 @@ int main(int argc, char *argv[])
 	/* Complete using multi interface, just because we can. */
 	used[0].id = head;
 	used[0].len = 0;
-	err = vringh_complete_multi_user(&vrh, used, 1);
+	err = vringh_complete_multi(&vrh, used, 1);
 	if (err)
-		errx(1, "vringh_complete_multi_user(1): %i", err);
+		errx(1, "vringh_complete_multi(1): %i", err);
 
 	/* Free up those descriptors. */
 	ret = virtqueue_get_buf(vq, &i);
@@ -642,17 +641,17 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	for (i = 0; i < RINGSIZE; i++) {
-		err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+		err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 		if (err != 1)
-			errx(1, "vringh_getdesc_user: %i", err);
+			errx(1, "vringh_getdesc: %i", err);
 		used[i].id = head;
 		used[i].len = 0;
 	}
 	/* Make sure it wraps around ring, to test!
 */
 	assert(vrh.vring.used->idx % RINGSIZE != 0);
-	err = vringh_complete_multi_user(&vrh, used, RINGSIZE);
+	err = vringh_complete_multi(&vrh, used, RINGSIZE);
 	if (err)
-		errx(1, "vringh_complete_multi_user: %i", err);
+		errx(1, "vringh_complete_multi: %i", err);
 
 	/* Free those buffers. */
 	for (i = 0; i < RINGSIZE; i++) {
@@ -726,19 +725,19 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 1)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);
 
 	if (head != 0)
-		errx(1, "vringh_getdesc_user: head %i not 0", head);
+		errx(1, "vringh_getdesc: head %i not 0", head);
 
 	assert(riov.max_num & VRINGH_IOV_ALLOCATED);
 	if (getrange != getrange_slow)
 		assert(riov.used == 7);
 	else
 		assert(riov.used == 28);
-	err = vringh_iov_pull_user(&riov, buf, 29);
+	err = vringh_iov_pull(&vrh, &riov, buf, 29);
 	assert(err == 28);
 
 	/* Data should be linear.
 */

-- 
2.25.1

From nobody Thu Apr 9 05:26:48 2026
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH v2 7/7] vringh: IOMEM support
Date: Thu, 2 Feb 2023 18:09:34 +0900
Message-Id: <20230202090934.549556-8-mie@igel.co.jp>
In-Reply-To: <20230202090934.549556-1-mie@igel.co.jp>
References: <20230202090934.549556-1-mie@igel.co.jp>

This patch introduces a new memory accessor for vringh, making it
possible to use vringh with virtio rings located in an iomem region.
Signed-off-by: Shunsuke Mie
---
 drivers/vhost/Kconfig  |  6 ++++
 drivers/vhost/vringh.c | 76 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/vringh.h |  8 +++++
 3 files changed, 90 insertions(+)

diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 587fbae06182..a79a4efbc817 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -6,6 +6,12 @@ config VHOST_IOTLB
 	  This option is selected by any driver which needs to support
 	  an IOMMU in software.
 
+config VHOST_IOMEM
+	tristate
+	select VHOST_RING
+	help
+	  Generic IOMEM implementation for vhost and vringh.
+
 config VHOST_RING
 	tristate
 	select VHOST_IOTLB
diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index 46fb315483ed..e3d9c7281ad0 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -18,6 +18,9 @@
 #include
 #include
 #endif
+#if IS_REACHABLE(CONFIG_VHOST_IOMEM)
+#include
+#endif
 #include
 
 static __printf(1,2) __cold void vringh_bad(const char *fmt, ...)
@@ -1165,4 +1168,77 @@ EXPORT_SYMBOL(vringh_set_iotlb);
 
 #endif
 
+#if IS_REACHABLE(CONFIG_VHOST_IOMEM)
+
+/* io-memory space access helpers.
 */
+static int getu16_iomem(const struct vringh *vrh, u16 *val, const __virtio16 *p)
+{
+	*val = vringh16_to_cpu(vrh, ioread16(p));
+	return 0;
+}
+
+static int putu16_iomem(const struct vringh *vrh, __virtio16 *p, u16 val)
+{
+	iowrite16(cpu_to_vringh16(vrh, val), p);
+	return 0;
+}
+
+static int copydesc_iomem(const struct vringh *vrh, void *dst, const void *src,
+			  size_t len)
+{
+	memcpy_fromio(dst, src, len);
+	return 0;
+}
+
+static int putused_iomem(const struct vringh *vrh, struct vring_used_elem *dst,
+			 const struct vring_used_elem *src, unsigned int num)
+{
+	memcpy_toio(dst, src, num * sizeof(*dst));
+	return 0;
+}
+
+static int xfer_from_iomem(const struct vringh *vrh, void *src, void *dst,
+			   size_t len)
+{
+	memcpy_fromio(dst, src, len);
+	return 0;
+}
+
+static int xfer_to_iomem(const struct vringh *vrh, void *dst, void *src,
+			 size_t len)
+{
+	memcpy_toio(dst, src, len);
+	return 0;
+}
+
+static struct vringh_ops iomem_vringh_ops = {
+	.getu16 = getu16_iomem,
+	.putu16 = putu16_iomem,
+	.xfer_from = xfer_from_iomem,
+	.xfer_to = xfer_to_iomem,
+	.putused = putused_iomem,
+	.copydesc = copydesc_iomem,
+	.range_check = no_range_check,
+	.getrange = NULL,
+};
+
+int vringh_init_iomem(struct vringh *vrh, u64 features, unsigned int num,
+		      bool weak_barriers, gfp_t gfp, struct vring_desc *desc,
+		      struct vring_avail *avail, struct vring_used *used)
+{
+	int err;
+
+	err = __vringh_init(vrh, features, num, weak_barriers, gfp, desc, avail,
+			    used);
+	if (err)
+		return err;
+
+	memcpy(&vrh->ops, &iomem_vringh_ops, sizeof(iomem_vringh_ops));
+
+	return 0;
+}
+EXPORT_SYMBOL(vringh_init_iomem);
+
+#endif
+
 MODULE_LICENSE("GPL");
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 89c73605c85f..420c2d0ed398 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -265,4 +265,12 @@ int vringh_init_iotlb(struct vringh *vrh, u64 features,
 
 #endif /* CONFIG_VHOST_IOTLB */
 
+#if IS_REACHABLE(CONFIG_VHOST_IOMEM)
+
+int vringh_init_iomem(struct vringh *vrh, u64 features, unsigned int num,
+		      bool weak_barriers, gfp_t gfp, struct vring_desc *desc,
+		      struct vring_avail *avail, struct vring_used *used);
+
+#endif /* CONFIG_VHOST_IOMEM */
+
 #endif /* _LINUX_VRINGH_H */
-- 
2.25.1