From nobody Tue Sep 16 23:52:51 2025
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH 1/9] vringh: fix a typo in comments for vringh_kiov
Date: Tue, 27 Dec 2022 11:25:23 +0900
Message-Id: <20221227022528.609839-2-mie@igel.co.jp>
In-Reply-To: <20221227022528.609839-1-mie@igel.co.jp>
References: <20221227022528.609839-1-mie@igel.co.jp>

This is probably a simple copy error from struct vringh_iov.
Fixes: f87d0fbb5798 ("vringh: host-side implementation of virtio rings.")
Signed-off-by: Shunsuke Mie
---
 include/linux/vringh.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 212892cf9822..1991a02c6431 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -92,7 +92,7 @@ struct vringh_iov {
 };
 
 /**
- * struct vringh_iov - kvec mangler.
+ * struct vringh_kiov - kvec mangler.
  *
  * Mangles kvec in place, and restores it.
  * Remaining data is iov + i, of used - i elements.
-- 
2.25.1

From nobody Tue Sep 16 23:52:51 2025
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH 2/9] vringh: remove vringh_iov and unite to vringh_kiov
Date: Tue, 27 Dec 2022 11:25:24 +0900
Message-Id: <20221227022528.609839-3-mie@igel.co.jp>
In-Reply-To: <20221227022528.609839-1-mie@igel.co.jp>
References: <20221227022528.609839-1-mie@igel.co.jp>

struct vringh_iov is defined to hold userland addresses. However, in
order to reuse the common helper __vringh_iov(), a vringh_iov is
ultimately converted to a vringh_kiov with a simple cast, backed by
compile-time checks that the two layouts match. To simplify the code,
this patch removes struct vringh_iov and unifies the APIs around
struct vringh_kiov.

Signed-off-by: Shunsuke Mie
---
 drivers/vhost/vringh.c | 32 ++++++------------------------
 include/linux/vringh.h | 45 ++++++--------------------------------------
 2 files changed, 10 insertions(+), 67 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index 828c29306565..aa3cd27d2384 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -691,8 +691,8 @@ EXPORT_SYMBOL(vringh_init_user);
  * calling vringh_iov_cleanup() to release the memory, even on error!
 */
 int vringh_getdesc_user(struct vringh *vrh,
-			struct vringh_iov *riov,
-			struct vringh_iov *wiov,
+			struct vringh_kiov *riov,
+			struct vringh_kiov *wiov,
 			bool (*getrange)(struct vringh *vrh,
 					 u64 addr, struct vringh_range *r),
 			u16 *head)
@@ -708,26 +708,6 @@ int vringh_getdesc_user(struct vringh *vrh,
 	if (err == vrh->vring.num)
 		return 0;
 
-	/* We need the layouts to be the identical for this to work */
-	BUILD_BUG_ON(sizeof(struct vringh_kiov) != sizeof(struct vringh_iov));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, iov) !=
-		     offsetof(struct vringh_iov, iov));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, i) !=
-		     offsetof(struct vringh_iov, i));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, used) !=
-		     offsetof(struct vringh_iov, used));
-	BUILD_BUG_ON(offsetof(struct vringh_kiov, max_num) !=
-		     offsetof(struct vringh_iov, max_num));
-	BUILD_BUG_ON(sizeof(struct iovec) != sizeof(struct kvec));
-	BUILD_BUG_ON(offsetof(struct iovec, iov_base) !=
-		     offsetof(struct kvec, iov_base));
-	BUILD_BUG_ON(offsetof(struct iovec, iov_len) !=
-		     offsetof(struct kvec, iov_len));
-	BUILD_BUG_ON(sizeof(((struct iovec *)NULL)->iov_base)
-		     != sizeof(((struct kvec *)NULL)->iov_base));
-	BUILD_BUG_ON(sizeof(((struct iovec *)NULL)->iov_len)
-		     != sizeof(((struct kvec *)NULL)->iov_len));
-
 	*head = err;
 	err = __vringh_iov(vrh, *head, (struct vringh_kiov *)riov,
 			   (struct vringh_kiov *)wiov,
@@ -740,14 +720,14 @@ int vringh_getdesc_user(struct vringh *vrh,
 EXPORT_SYMBOL(vringh_getdesc_user);
 
 /**
- * vringh_iov_pull_user - copy bytes from vring_iov.
+ * vringh_iov_pull_user - copy bytes from vring_kiov.
  * @riov: the riov as passed to vringh_getdesc_user() (updated as we consume)
  * @dst: the place to copy.
  * @len: the maximum length to copy.
  *
  * Returns the bytes copied <= len or a negative errno.
 */
-ssize_t vringh_iov_pull_user(struct vringh_iov *riov, void *dst, size_t len)
+ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t len)
 {
 	return vringh_iov_xfer(NULL, (struct vringh_kiov *)riov,
 			       dst, len, xfer_from_user);
@@ -755,14 +735,14 @@ ssize_t vringh_iov_pull_user(struct vringh_iov *riov, void *dst, size_t len)
 EXPORT_SYMBOL(vringh_iov_pull_user);
 
 /**
- * vringh_iov_push_user - copy bytes into vring_iov.
+ * vringh_iov_push_user - copy bytes into vring_kiov.
  * @wiov: the wiov as passed to vringh_getdesc_user() (updated as we consume)
  * @src: the place to copy from.
  * @len: the maximum length to copy.
  *
  * Returns the bytes copied <= len or a negative errno.
  */
-ssize_t vringh_iov_push_user(struct vringh_iov *wiov,
+ssize_t vringh_iov_push_user(struct vringh_kiov *wiov,
 			     const void *src, size_t len)
 {
 	return vringh_iov_xfer(NULL, (struct vringh_kiov *)wiov,
diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 1991a02c6431..733d948e8123 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -79,18 +79,6 @@ struct vringh_range {
 	u64 offset;
 };
 
-/**
- * struct vringh_iov - iovec mangler.
- *
- * Mangles iovec in place, and restores it.
- * Remaining data is iov + i, of used - i elements.
- */
-struct vringh_iov {
-	struct iovec *iov;
-	size_t consumed; /* Within iov[i] */
-	unsigned i, used, max_num;
-};
-
 /**
  * struct vringh_kiov - kvec mangler.
  *
@@ -113,44 +101,19 @@ int vringh_init_user(struct vringh *vrh, u64 features,
 		     vring_avail_t __user *avail,
 		     vring_used_t __user *used);
 
-static inline void vringh_iov_init(struct vringh_iov *iov,
-				   struct iovec *iovec, unsigned num)
-{
-	iov->used = iov->i = 0;
-	iov->consumed = 0;
-	iov->max_num = num;
-	iov->iov = iovec;
-}
-
-static inline void vringh_iov_reset(struct vringh_iov *iov)
-{
-	iov->iov[iov->i].iov_len += iov->consumed;
-	iov->iov[iov->i].iov_base -= iov->consumed;
-	iov->consumed = 0;
-	iov->i = 0;
-}
-
-static inline void vringh_iov_cleanup(struct vringh_iov *iov)
-{
-	if (iov->max_num & VRINGH_IOV_ALLOCATED)
-		kfree(iov->iov);
-	iov->max_num = iov->used = iov->i = iov->consumed = 0;
-	iov->iov = NULL;
-}
-
 /* Convert a descriptor into iovecs. */
 int vringh_getdesc_user(struct vringh *vrh,
-			struct vringh_iov *riov,
-			struct vringh_iov *wiov,
+			struct vringh_kiov *riov,
+			struct vringh_kiov *wiov,
 			bool (*getrange)(struct vringh *vrh,
 					 u64 addr, struct vringh_range *r),
 			u16 *head);
 
 /* Copy bytes from readable vsg, consuming it (and incrementing wiov->i). */
-ssize_t vringh_iov_pull_user(struct vringh_iov *riov, void *dst, size_t len);
+ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t len);
 
 /* Copy bytes into writable vsg, consuming it (and incrementing wiov->i). */
-ssize_t vringh_iov_push_user(struct vringh_iov *wiov,
+ssize_t vringh_iov_push_user(struct vringh_kiov *wiov,
 			     const void *src, size_t len);
 
 /* Mark a descriptor as used.
 */
-- 
2.25.1

From nobody Tue Sep 16 23:52:51 2025
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH 3/9] tools/virtio: convert to new vringh user APIs
Date: Tue, 27 Dec 2022 11:25:25 +0900
Message-Id: <20221227022528.609839-4-mie@igel.co.jp>
In-Reply-To: <20221227022528.609839-1-mie@igel.co.jp>
References: <20221227022528.609839-1-mie@igel.co.jp>

struct vringh_iov is being removed, so convert vringh_test to the vringh
user APIs that take struct vringh_kiov instead of struct vringh_iov.
Signed-off-by: Shunsuke Mie
---
 tools/virtio/vringh_test.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
index 98ff808d6f0c..6c9533b8a2ca 100644
--- a/tools/virtio/vringh_test.c
+++ b/tools/virtio/vringh_test.c
@@ -193,8 +193,8 @@ static int parallel_test(u64 features,
 		errx(1, "Could not set affinity to cpu %u", first_cpu);
 
 	while (xfers < NUM_XFERS) {
-		struct iovec host_riov[2], host_wiov[2];
-		struct vringh_iov riov, wiov;
+		struct kvec host_riov[2], host_wiov[2];
+		struct vringh_kiov riov, wiov;
 		u16 head, written;
 
 		if (fast_vringh) {
@@ -216,10 +216,10 @@ static int parallel_test(u64 features,
 			written = 0;
 			goto complete;
 		} else {
-			vringh_iov_init(&riov,
+			vringh_kiov_init(&riov,
 					host_riov,
 					ARRAY_SIZE(host_riov));
-			vringh_iov_init(&wiov,
+			vringh_kiov_init(&wiov,
 					host_wiov,
 					ARRAY_SIZE(host_wiov));
 
@@ -442,8 +442,8 @@ int main(int argc, char *argv[])
 	struct virtqueue *vq;
 	struct vringh vrh;
 	struct scatterlist guest_sg[RINGSIZE], *sgs[2];
-	struct iovec host_riov[2], host_wiov[2];
-	struct vringh_iov riov, wiov;
+	struct kvec host_riov[2], host_wiov[2];
+	struct vringh_kiov riov, wiov;
 	struct vring_used_elem used[RINGSIZE];
 	char buf[28];
 	u16 head;
@@ -517,8 +517,8 @@ int main(int argc, char *argv[])
 	__kmalloc_fake = NULL;
 
 	/* Host retreives it. */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
 	if (err != 1)
@@ -586,8 +586,8 @@ int main(int argc, char *argv[])
 	__kmalloc_fake = NULL;
 
 	/* Host picks it up (allocates new iov).
 */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
 	if (err != 1)
@@ -613,8 +613,8 @@ int main(int argc, char *argv[])
 		assert(err < 3 || buf[2] == (char)(i + 2));
 	}
 	assert(riov.i == riov.used);
-	vringh_iov_cleanup(&riov);
-	vringh_iov_cleanup(&wiov);
+	vringh_kiov_cleanup(&riov);
+	vringh_kiov_cleanup(&wiov);
 
 	/* Complete using multi interface, just because we can. */
 	used[0].id = head;
@@ -638,8 +638,8 @@ int main(int argc, char *argv[])
 	}
 
 	/* Now get many, and consume them all at once. */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	for (i = 0; i < RINGSIZE; i++) {
 		err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
@@ -723,8 +723,8 @@ int main(int argc, char *argv[])
 	d[5].flags = 0;
 
 	/* Host picks it up (allocates new iov). */
-	vringh_iov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
-	vringh_iov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
+	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
+	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));
 
 	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
 	if (err != 1)
@@ -744,7 +744,7 @@ int main(int argc, char *argv[])
 	/* Data should be linear. */
 	for (i = 0; i < err; i++)
 		assert(buf[i] == i);
-	vringh_iov_cleanup(&riov);
+	vringh_kiov_cleanup(&riov);
 	}
 
 	/* Don't leak memory...
 */
-- 
2.25.1

From nobody Tue Sep 16 23:52:51 2025
From: Shunsuke Mie
To: "Michael S. Tsirkin", Jason Wang, Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH 4/9] vringh: unify the APIs for all accessors
Date: Tue, 27 Dec 2022 11:25:26 +0900
Message-Id: <20221227022528.609839-5-mie@igel.co.jp>
In-Reply-To: <20221227022528.609839-1-mie@igel.co.jp>
References: <20221227022528.609839-1-mie@igel.co.jp>

Each of the vringh memory accessors (user, kern and iotlb) has its own
interface that calls into common code, but some of that code is
duplicated, which hurts extensibility. Introduce a struct vringh_ops
and provide a common API for all accessors.
This makes it easy to extend vringh for new memory accessors and
simplifies the callers.

Signed-off-by: Shunsuke Mie
---
 drivers/vhost/vringh.c | 667 +++++++++++------------------------
 include/linux/vringh.h | 100 +++---
 2 files changed, 225 insertions(+), 542 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index aa3cd27d2384..ebfd3644a1a3 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -35,15 +35,12 @@ static __printf(1,2) __cold void vringh_bad(const char *fmt, ...)
 }
 
 /* Returns vring->num if empty, -ve on error. */
-static inline int __vringh_get_head(const struct vringh *vrh,
-				    int (*getu16)(const struct vringh *vrh,
-						  u16 *val, const __virtio16 *p),
-				    u16 *last_avail_idx)
+static inline int __vringh_get_head(const struct vringh *vrh, u16 *last_avail_idx)
 {
 	u16 avail_idx, i, head;
 	int err;
 
-	err = getu16(vrh, &avail_idx, &vrh->vring.avail->idx);
+	err = vrh->ops.getu16(vrh, &avail_idx, &vrh->vring.avail->idx);
 	if (err) {
 		vringh_bad("Failed to access avail idx at %p",
 			   &vrh->vring.avail->idx);
@@ -58,7 +55,7 @@ static inline int __vringh_get_head(const struct vringh *vrh,
 
 	i = *last_avail_idx & (vrh->vring.num - 1);
 
-	err = getu16(vrh, &head, &vrh->vring.avail->ring[i]);
+	err = vrh->ops.getu16(vrh, &head, &vrh->vring.avail->ring[i]);
 	if (err) {
 		vringh_bad("Failed to read head: idx %d address %p",
 			   *last_avail_idx, &vrh->vring.avail->ring[i]);
@@ -131,12 +128,10 @@ static inline ssize_t vringh_iov_xfer(struct vringh *vrh,
 
 /* May reduce *len if range is shorter.
 */
 static inline bool range_check(struct vringh *vrh, u64 addr, size_t *len,
-			       struct vringh_range *range,
-			       bool (*getrange)(struct vringh *,
-						u64, struct vringh_range *))
+			       struct vringh_range *range)
 {
 	if (addr < range->start || addr > range->end_incl) {
-		if (!getrange(vrh, addr, range))
+		if (!vrh->ops.getrange(vrh, addr, range))
 			return false;
 	}
 	BUG_ON(addr < range->start || addr > range->end_incl);
@@ -165,9 +160,7 @@ static inline bool range_check(struct vringh *vrh, u64 addr, size_t *len,
 }
 
 static inline bool no_range_check(struct vringh *vrh, u64 addr, size_t *len,
-				  struct vringh_range *range,
-				  bool (*getrange)(struct vringh *,
-						   u64, struct vringh_range *))
+				  struct vringh_range *range)
 {
 	return true;
 }
@@ -244,17 +237,7 @@ static u16 __cold return_from_indirect(const struct vringh *vrh, int *up_next,
 }
 
 static int slow_copy(struct vringh *vrh, void *dst, const void *src,
-		     bool (*rcheck)(struct vringh *vrh, u64 addr, size_t *len,
-				    struct vringh_range *range,
-				    bool (*getrange)(struct vringh *vrh,
-						     u64,
-						     struct vringh_range *)),
-		     bool (*getrange)(struct vringh *vrh,
-				      u64 addr,
-				      struct vringh_range *r),
-		     struct vringh_range *range,
-		     int (*copy)(const struct vringh *vrh,
-				 void *dst, const void *src, size_t len))
+		     struct vringh_range *range)
 {
 	size_t part, len = sizeof(struct vring_desc);
 
@@ -265,10 +248,10 @@ static int slow_copy(struct vringh *vrh, void *dst, const void *src,
 		part = len;
 		addr = (u64)(unsigned long)src - range->offset;
 
-		if (!rcheck(vrh, addr, &part, range, getrange))
+		if (!vrh->ops.range_check(vrh, addr, &part, range))
 			return -EINVAL;
 
-		err = copy(vrh, dst, src, part);
+		err = vrh->ops.copydesc(vrh, dst, src, part);
 		if (err)
 			return err;
 
@@ -279,18 +262,35 @@ static int slow_copy(struct vringh *vrh, void *dst, const void *src,
 	return 0;
 }
 
+static int __vringh_init(struct vringh *vrh, u64 features, unsigned int num,
+			 bool weak_barriers, gfp_t gfp, struct vring_desc *desc,
+			 struct vring_avail *avail, struct vring_used *used)
+{
+	/* Sane power of 2 please! */
+	if (!num || num > 0xffff || (num & (num - 1))) {
+		vringh_bad("Bad ring size %u", num);
+		return -EINVAL;
+	}
+
+	vrh->little_endian = (features & (1ULL << VIRTIO_F_VERSION_1));
+	vrh->event_indices = (features & (1 << VIRTIO_RING_F_EVENT_IDX));
+	vrh->weak_barriers = weak_barriers;
+	vrh->completed = 0;
+	vrh->last_avail_idx = 0;
+	vrh->last_used_idx = 0;
+	vrh->vring.num = num;
+	vrh->vring.desc = desc;
+	vrh->vring.avail = avail;
+	vrh->vring.used = used;
+	vrh->desc_gfp = gfp;
+
+	return 0;
+}
+
 static inline int
 __vringh_iov(struct vringh *vrh, u16 i,
 	     struct vringh_kiov *riov,
-	     struct vringh_kiov *wiov,
-	     bool (*rcheck)(struct vringh *vrh, u64 addr, size_t *len,
-			    struct vringh_range *range,
-			    bool (*getrange)(struct vringh *, u64,
-					     struct vringh_range *)),
-	     bool (*getrange)(struct vringh *, u64, struct vringh_range *),
-	     gfp_t gfp,
-	     int (*copy)(const struct vringh *vrh,
-			 void *dst, const void *src, size_t len))
+	     struct vringh_kiov *wiov, gfp_t gfp)
 {
 	int err, count = 0, indirect_count = 0, up_next, desc_max;
 	struct vring_desc desc, *descs;
@@ -317,10 +317,9 @@ __vringh_iov(struct vringh *vrh, u16 i,
 		size_t len;
 
 		if (unlikely(slow))
-			err = slow_copy(vrh, &desc, &descs[i], rcheck, getrange,
-					&slowrange, copy);
+			err = slow_copy(vrh, &desc, &descs[i], &slowrange);
 		else
-			err = copy(vrh, &desc, &descs[i], sizeof(desc));
+			err = vrh->ops.copydesc(vrh, &desc, &descs[i], sizeof(desc));
 		if (unlikely(err))
 			goto fail;
 
@@ -330,7 +329,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
 
 		/* Make sure it's OK, and get offset. */
 		len = vringh32_to_cpu(vrh, desc.len);
-		if (!rcheck(vrh, a, &len, &range, getrange)) {
+		if (!vrh->ops.range_check(vrh, a, &len, &range)) {
 			err = -EINVAL;
 			goto fail;
 		}
@@ -382,8 +381,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
 	again:
 		/* Make sure it's OK, and get offset.
 */
 		len = vringh32_to_cpu(vrh, desc.len);
-		if (!rcheck(vrh, vringh64_to_cpu(vrh, desc.addr), &len, &range,
-			    getrange)) {
+		if (!vrh->ops.range_check(vrh, vringh64_to_cpu(vrh, desc.addr), &len, &range)) {
 			err = -EINVAL;
 			goto fail;
 		}
@@ -436,13 +434,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
 
 static inline int __vringh_complete(struct vringh *vrh,
 				    const struct vring_used_elem *used,
-				    unsigned int num_used,
-				    int (*putu16)(const struct vringh *vrh,
-						  __virtio16 *p, u16 val),
-				    int (*putused)(const struct vringh *vrh,
-						   struct vring_used_elem *dst,
-						   const struct vring_used_elem
-						   *src, unsigned num))
+				    unsigned int num_used)
 {
 	struct vring_used *used_ring;
 	int err;
@@ -456,12 +448,12 @@ static inline int __vringh_complete(struct vringh *vrh,
 	/* Compiler knows num_used == 1 sometimes, hence extra check */
 	if (num_used > 1 && unlikely(off + num_used >= vrh->vring.num)) {
 		u16 part = vrh->vring.num - off;
-		err = putused(vrh, &used_ring->ring[off], used, part);
+		err = vrh->ops.putused(vrh, &used_ring->ring[off], used, part);
 		if (!err)
-			err = putused(vrh, &used_ring->ring[0], used + part,
+			err = vrh->ops.putused(vrh, &used_ring->ring[0], used + part,
 				      num_used - part);
 	} else
-		err = putused(vrh, &used_ring->ring[off], used, num_used);
+		err = vrh->ops.putused(vrh, &used_ring->ring[off], used, num_used);
 
 	if (err) {
 		vringh_bad("Failed to write %u used entries %u at %p",
@@ -472,7 +464,7 @@ static inline int __vringh_complete(struct vringh *vrh,
 	/* Make sure buffer is written before we update index.
*/ virtio_wmb(vrh->weak_barriers); =20 - err =3D putu16(vrh, &vrh->vring.used->idx, used_idx + num_used); + err =3D vrh->ops.putu16(vrh, &vrh->vring.used->idx, used_idx + num_used); if (err) { vringh_bad("Failed to update used index at %p", &vrh->vring.used->idx); @@ -483,11 +475,13 @@ static inline int __vringh_complete(struct vringh *vr= h, return 0; } =20 - -static inline int __vringh_need_notify(struct vringh *vrh, - int (*getu16)(const struct vringh *vrh, - u16 *val, - const __virtio16 *p)) +/** + * vringh_need_notify - must we tell the other side about used buffers? + * @vrh: the vring we've called vringh_complete() on. + * + * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. + */ +int vringh_need_notify(struct vringh *vrh) { bool notify; u16 used_event; @@ -501,7 +495,7 @@ static inline int __vringh_need_notify(struct vringh *v= rh, /* Old-style, without event indices. */ if (!vrh->event_indices) { u16 flags; - err =3D getu16(vrh, &flags, &vrh->vring.avail->flags); + err =3D vrh->ops.getu16(vrh, &flags, &vrh->vring.avail->flags); if (err) { vringh_bad("Failed to get flags at %p", &vrh->vring.avail->flags); @@ -511,7 +505,7 @@ static inline int __vringh_need_notify(struct vringh *v= rh, } =20 /* Modern: we know when other side wants to know. */ - err =3D getu16(vrh, &used_event, &vring_used_event(&vrh->vring)); + err =3D vrh->ops.getu16(vrh, &used_event, &vring_used_event(&vrh->vring)); if (err) { vringh_bad("Failed to get used event idx at %p", &vring_used_event(&vrh->vring)); @@ -530,24 +524,28 @@ static inline int __vringh_need_notify(struct vringh = *vrh, vrh->completed =3D 0; return notify; } +EXPORT_SYMBOL(vringh_need_notify); =20 -static inline bool __vringh_notify_enable(struct vringh *vrh, - int (*getu16)(const struct vringh *vrh, - u16 *val, const __virtio16 *p), - int (*putu16)(const struct vringh *vrh, - __virtio16 *p, u16 val)) +/** + * vringh_notify_enable - we want to know if something changes. + * @vrh: the vring. 
+ * + * This always enables notifications, but returns false if there are + * now more buffers available in the vring. + */ +bool vringh_notify_enable(struct vringh *vrh) { u16 avail; =20 if (!vrh->event_indices) { /* Old-school; update flags. */ - if (putu16(vrh, &vrh->vring.used->flags, 0) !=3D 0) { + if (vrh->ops.putu16(vrh, &vrh->vring.used->flags, 0) !=3D 0) { vringh_bad("Clearing used flags %p", &vrh->vring.used->flags); return true; } } else { - if (putu16(vrh, &vring_avail_event(&vrh->vring), + if (vrh->ops.putu16(vrh, &vring_avail_event(&vrh->vring), vrh->last_avail_idx) !=3D 0) { vringh_bad("Updating avail event index %p", &vring_avail_event(&vrh->vring)); @@ -559,7 +557,7 @@ static inline bool __vringh_notify_enable(struct vringh= *vrh, * sure it's written, then check again. */ virtio_mb(vrh->weak_barriers); =20 - if (getu16(vrh, &avail, &vrh->vring.avail->idx) !=3D 0) { + if (vrh->ops.getu16(vrh, &avail, &vrh->vring.avail->idx) !=3D 0) { vringh_bad("Failed to check avail idx at %p", &vrh->vring.avail->idx); return true; @@ -570,20 +568,27 @@ static inline bool __vringh_notify_enable(struct vrin= gh *vrh, * notification anyway). */ return avail =3D=3D vrh->last_avail_idx; } +EXPORT_SYMBOL(vringh_notify_enable); =20 -static inline void __vringh_notify_disable(struct vringh *vrh, - int (*putu16)(const struct vringh *vrh, - __virtio16 *p, u16 val)) +/** + * vringh_notify_disable - don't tell us if something changes. + * @vrh: the vring. + * + * This is our normal running state: we disable and then only enable when + * we're going to sleep. + */ +void vringh_notify_disable(struct vringh *vrh) { if (!vrh->event_indices) { /* Old-school; update flags. 
*/ - if (putu16(vrh, &vrh->vring.used->flags, + if (vrh->ops.putu16(vrh, &vrh->vring.used->flags, VRING_USED_F_NO_NOTIFY)) { vringh_bad("Setting used flags %p", &vrh->vring.used->flags); } } } +EXPORT_SYMBOL(vringh_notify_disable); =20 /* Userspace access helpers: in this case, addresses are really userspace.= */ static inline int getu16_user(const struct vringh *vrh, u16 *val, const __= virtio16 *p) @@ -630,6 +635,16 @@ static inline int xfer_to_user(const struct vringh *vr= h, -EFAULT : 0; } =20 +static struct vringh_ops user_vringh_ops =3D { + .getu16 =3D getu16_user, + .putu16 =3D putu16_user, + .xfer_from =3D xfer_from_user, + .xfer_to =3D xfer_to_user, + .putused =3D putused_user, + .copydesc =3D copydesc_user, + .range_check =3D range_check, +}; + /** * vringh_init_user - initialize a vringh for a userspace vring. * @vrh: the vringh to initialize. @@ -639,6 +654,7 @@ static inline int xfer_to_user(const struct vringh *vrh, * @desc: the userpace descriptor pointer. * @avail: the userpace avail pointer. * @used: the userpace used pointer. + * @getrange: a function that return a range that vring can access. * * Returns an error if num is invalid: you should check pointers * yourself! @@ -647,36 +663,32 @@ int vringh_init_user(struct vringh *vrh, u64 features, unsigned int num, bool weak_barriers, vring_desc_t __user *desc, vring_avail_t __user *avail, - vring_used_t __user *used) + vring_used_t __user *used, + bool (*getrange)(struct vringh *vrh, u64 addr, struct vringh_range *r)) { - /* Sane power of 2 please! 
*/ - if (!num || num > 0xffff || (num & (num - 1))) { - vringh_bad("Bad ring size %u", num); - return -EINVAL; - } + int err; + + err =3D __vringh_init(vrh, features, num, weak_barriers, GFP_KERNEL, + (__force struct vring_desc *)desc, + (__force struct vring_avail *)avail, + (__force struct vring_used *)used); + if (err) + return err; + + memcpy(&vrh->ops, &user_vringh_ops, sizeof(user_vringh_ops)); + vrh->ops.getrange =3D getrange; =20 - vrh->little_endian =3D (features & (1ULL << VIRTIO_F_VERSION_1)); - vrh->event_indices =3D (features & (1 << VIRTIO_RING_F_EVENT_IDX)); - vrh->weak_barriers =3D weak_barriers; - vrh->completed =3D 0; - vrh->last_avail_idx =3D 0; - vrh->last_used_idx =3D 0; - vrh->vring.num =3D num; - /* vring expects kernel addresses, but only used via accessors. */ - vrh->vring.desc =3D (__force struct vring_desc *)desc; - vrh->vring.avail =3D (__force struct vring_avail *)avail; - vrh->vring.used =3D (__force struct vring_used *)used; return 0; } EXPORT_SYMBOL(vringh_init_user); =20 /** - * vringh_getdesc_user - get next available descriptor from userspace ring. - * @vrh: the userspace vring. + * vringh_getdesc - get next available descriptor from ring. + * @vrh: the vringh to get desc. * @riov: where to put the readable descriptors (or NULL) * @wiov: where to put the writable descriptors (or NULL) * @getrange: function to call to check ranges. - * @head: head index we received, for passing to vringh_complete_user(). + * @head: head index we received, for passing to vringh_complete(). * * Returns 0 if there was no descriptor, 1 if there was, or -errno. * @@ -690,17 +702,15 @@ EXPORT_SYMBOL(vringh_init_user); * When you don't have to use riov and wiov anymore, you should clean up t= hem * calling vringh_iov_cleanup() to release the memory, even on error! 
*/ -int vringh_getdesc_user(struct vringh *vrh, +int vringh_getdesc(struct vringh *vrh, struct vringh_kiov *riov, struct vringh_kiov *wiov, - bool (*getrange)(struct vringh *vrh, - u64 addr, struct vringh_range *r), u16 *head) { int err; =20 *head =3D vrh->vring.num; - err =3D __vringh_get_head(vrh, getu16_user, &vrh->last_avail_idx); + err =3D __vringh_get_head(vrh, &vrh->last_avail_idx); if (err < 0) return err; =20 @@ -709,137 +719,100 @@ int vringh_getdesc_user(struct vringh *vrh, return 0; =20 *head =3D err; - err =3D __vringh_iov(vrh, *head, (struct vringh_kiov *)riov, - (struct vringh_kiov *)wiov, - range_check, getrange, GFP_KERNEL, copydesc_user); + err =3D __vringh_iov(vrh, *head, riov, wiov, GFP_KERNEL); if (err) return err; =20 return 1; } -EXPORT_SYMBOL(vringh_getdesc_user); +EXPORT_SYMBOL(vringh_getdesc); =20 /** - * vringh_iov_pull_user - copy bytes from vring_kiov. - * @riov: the riov as passed to vringh_getdesc_user() (updated as we consu= me) + * vringh_iov_pull - copy bytes from vring_kiov. + * @vrh: the vringh to load data. + * @riov: the riov as passed to vringh_getdesc() (updated as we consume) * @dst: the place to copy. * @len: the maximum length to copy. * * Returns the bytes copied <=3D len or a negative errno. */ -ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t l= en) +ssize_t vringh_iov_pull(struct vringh *vrh, struct vringh_kiov *riov, void= *dst, size_t len) { return vringh_iov_xfer(NULL, (struct vringh_kiov *)riov, - dst, len, xfer_from_user); + dst, len, vrh->ops.xfer_from); } -EXPORT_SYMBOL(vringh_iov_pull_user); +EXPORT_SYMBOL(vringh_iov_pull); =20 /** - * vringh_iov_push_user - copy bytes into vring_kiov. - * @wiov: the wiov as passed to vringh_getdesc_user() (updated as we consu= me) + * vringh_iov_push - copy bytes into vring_kiov. + * @vrh: the vringh to store data. + * @wiov: the wiov as passed to vringh_getdesc() (updated as we consume) * @src: the place to copy from. 
* @len: the maximum length to copy. * * Returns the bytes copied <=3D len or a negative errno. */ -ssize_t vringh_iov_push_user(struct vringh_kiov *wiov, +ssize_t vringh_iov_push(struct vringh *vrh, struct vringh_kiov *wiov, const void *src, size_t len) { return vringh_iov_xfer(NULL, (struct vringh_kiov *)wiov, - (void *)src, len, xfer_to_user); + (void *)src, len, vrh->ops.xfer_to); } -EXPORT_SYMBOL(vringh_iov_push_user); +EXPORT_SYMBOL(vringh_iov_push); =20 /** - * vringh_abandon_user - we've decided not to handle the descriptor(s). + * vringh_abandon - we've decided not to handle the descriptor(s). * @vrh: the vring. * @num: the number of descriptors to put back (ie. num * vringh_get_user() to undo). * * The next vringh_get_user() will return the old descriptor(s) again. */ -void vringh_abandon_user(struct vringh *vrh, unsigned int num) +void vringh_abandon(struct vringh *vrh, unsigned int num) { /* We only update vring_avail_event(vr) when we want to be notified, * so we haven't changed that yet. */ vrh->last_avail_idx -=3D num; } -EXPORT_SYMBOL(vringh_abandon_user); +EXPORT_SYMBOL(vringh_abandon); =20 /** - * vringh_complete_user - we've finished with descriptor, publish it. + * vringh_complete - we've finished with descriptor, publish it. * @vrh: the vring. - * @head: the head as filled in by vringh_getdesc_user. + * @head: the head as filled in by vringh_getdesc. * @len: the length of data we have written. * - * You should check vringh_need_notify_user() after one or more calls + * You should check vringh_need_notify() after one or more calls * to this function. 
*/ -int vringh_complete_user(struct vringh *vrh, u16 head, u32 len) +int vringh_complete(struct vringh *vrh, u16 head, u32 len) { struct vring_used_elem used; =20 used.id =3D cpu_to_vringh32(vrh, head); used.len =3D cpu_to_vringh32(vrh, len); - return __vringh_complete(vrh, &used, 1, putu16_user, putused_user); + return __vringh_complete(vrh, &used, 1); } -EXPORT_SYMBOL(vringh_complete_user); +EXPORT_SYMBOL(vringh_complete); =20 /** - * vringh_complete_multi_user - we've finished with many descriptors. + * vringh_complete_multi - we've finished with many descriptors. * @vrh: the vring. * @used: the head, length pairs. * @num_used: the number of used elements. * - * You should check vringh_need_notify_user() after one or more calls + * You should check vringh_need_notify() after one or more calls * to this function. */ -int vringh_complete_multi_user(struct vringh *vrh, +int vringh_complete_multi(struct vringh *vrh, const struct vring_used_elem used[], unsigned num_used) { - return __vringh_complete(vrh, used, num_used, - putu16_user, putused_user); -} -EXPORT_SYMBOL(vringh_complete_multi_user); - -/** - * vringh_notify_enable_user - we want to know if something changes. - * @vrh: the vring. - * - * This always enables notifications, but returns false if there are - * now more buffers available in the vring. - */ -bool vringh_notify_enable_user(struct vringh *vrh) -{ - return __vringh_notify_enable(vrh, getu16_user, putu16_user); + return __vringh_complete(vrh, used, num_used); } -EXPORT_SYMBOL(vringh_notify_enable_user); +EXPORT_SYMBOL(vringh_complete_multi); =20 -/** - * vringh_notify_disable_user - don't tell us if something changes. - * @vrh: the vring. - * - * This is our normal running state: we disable and then only enable when - * we're going to sleep. 
- */ -void vringh_notify_disable_user(struct vringh *vrh) -{ - __vringh_notify_disable(vrh, putu16_user); -} -EXPORT_SYMBOL(vringh_notify_disable_user); =20 -/** - * vringh_need_notify_user - must we tell the other side about used buffer= s? - * @vrh: the vring we've called vringh_complete_user() on. - * - * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. - */ -int vringh_need_notify_user(struct vringh *vrh) -{ - return __vringh_need_notify(vrh, getu16_user); -} -EXPORT_SYMBOL(vringh_need_notify_user); =20 /* Kernelspace access helpers. */ static inline int getu16_kern(const struct vringh *vrh, @@ -885,6 +858,17 @@ static inline int kern_xfer(const struct vringh *vrh, = void *dst, return 0; } =20 +static const struct vringh_ops kern_vringh_ops =3D { + .getu16 =3D getu16_kern, + .putu16 =3D putu16_kern, + .xfer_from =3D xfer_kern, + .xfer_to =3D xfer_kern, + .putused =3D putused_kern, + .copydesc =3D copydesc_kern, + .range_check =3D no_range_check, + .getrange =3D NULL, +}; + /** * vringh_init_kern - initialize a vringh for a kernelspace vring. * @vrh: the vringh to initialize. @@ -898,179 +882,22 @@ static inline int kern_xfer(const struct vringh *vrh= , void *dst, * Returns an error if num is invalid. */ int vringh_init_kern(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, gfp_t gfp, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used) -{ - /* Sane power of 2 please! 
*/ - if (!num || num > 0xffff || (num & (num - 1))) { - vringh_bad("Bad ring size %u", num); - return -EINVAL; - } - - vrh->little_endian =3D (features & (1ULL << VIRTIO_F_VERSION_1)); - vrh->event_indices =3D (features & (1 << VIRTIO_RING_F_EVENT_IDX)); - vrh->weak_barriers =3D weak_barriers; - vrh->completed =3D 0; - vrh->last_avail_idx =3D 0; - vrh->last_used_idx =3D 0; - vrh->vring.num =3D num; - vrh->vring.desc =3D desc; - vrh->vring.avail =3D avail; - vrh->vring.used =3D used; - return 0; -} -EXPORT_SYMBOL(vringh_init_kern); - -/** - * vringh_getdesc_kern - get next available descriptor from kernelspace ri= ng. - * @vrh: the kernelspace vring. - * @riov: where to put the readable descriptors (or NULL) - * @wiov: where to put the writable descriptors (or NULL) - * @head: head index we received, for passing to vringh_complete_kern(). - * @gfp: flags for allocating larger riov/wiov. - * - * Returns 0 if there was no descriptor, 1 if there was, or -errno. - * - * Note that on error return, you can tell the difference between an - * invalid ring and a single invalid descriptor: in the former case, - * *head will be vrh->vring.num. You may be able to ignore an invalid - * descriptor, but there's not much you can do with an invalid ring. - * - * Note that you can reuse riov and wiov with subsequent calls. Content is - * overwritten and memory reallocated if more space is needed. - * When you don't have to use riov and wiov anymore, you should clean up t= hem - * calling vringh_kiov_cleanup() to release the memory, even on error! - */ -int vringh_getdesc_kern(struct vringh *vrh, - struct vringh_kiov *riov, - struct vringh_kiov *wiov, - u16 *head, - gfp_t gfp) { int err; =20 - err =3D __vringh_get_head(vrh, getu16_kern, &vrh->last_avail_idx); - if (err < 0) - return err; - - /* Empty... 
*/ - if (err =3D=3D vrh->vring.num) - return 0; - - *head =3D err; - err =3D __vringh_iov(vrh, *head, riov, wiov, no_range_check, NULL, - gfp, copydesc_kern); + err =3D __vringh_init(vrh, features, num, weak_barriers, gfp, desc, avail= , used); if (err) return err; =20 - return 1; -} -EXPORT_SYMBOL(vringh_getdesc_kern); - -/** - * vringh_iov_pull_kern - copy bytes from vring_iov. - * @riov: the riov as passed to vringh_getdesc_kern() (updated as we consu= me) - * @dst: the place to copy. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_pull_kern(struct vringh_kiov *riov, void *dst, size_t l= en) -{ - return vringh_iov_xfer(NULL, riov, dst, len, xfer_kern); -} -EXPORT_SYMBOL(vringh_iov_pull_kern); - -/** - * vringh_iov_push_kern - copy bytes into vring_iov. - * @wiov: the wiov as passed to vringh_getdesc_kern() (updated as we consu= me) - * @src: the place to copy from. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov, - const void *src, size_t len) -{ - return vringh_iov_xfer(NULL, wiov, (void *)src, len, kern_xfer); -} -EXPORT_SYMBOL(vringh_iov_push_kern); + memcpy(&vrh->ops, &kern_vringh_ops, sizeof(kern_vringh_ops)); =20 -/** - * vringh_abandon_kern - we've decided not to handle the descriptor(s). - * @vrh: the vring. - * @num: the number of descriptors to put back (ie. num - * vringh_get_kern() to undo). - * - * The next vringh_get_kern() will return the old descriptor(s) again. - */ -void vringh_abandon_kern(struct vringh *vrh, unsigned int num) -{ - /* We only update vring_avail_event(vr) when we want to be notified, - * so we haven't changed that yet. */ - vrh->last_avail_idx -=3D num; -} -EXPORT_SYMBOL(vringh_abandon_kern); - -/** - * vringh_complete_kern - we've finished with descriptor, publish it. - * @vrh: the vring. 
- * @head: the head as filled in by vringh_getdesc_kern. - * @len: the length of data we have written. - * - * You should check vringh_need_notify_kern() after one or more calls - * to this function. - */ -int vringh_complete_kern(struct vringh *vrh, u16 head, u32 len) -{ - struct vring_used_elem used; - - used.id =3D cpu_to_vringh32(vrh, head); - used.len =3D cpu_to_vringh32(vrh, len); - - return __vringh_complete(vrh, &used, 1, putu16_kern, putused_kern); -} -EXPORT_SYMBOL(vringh_complete_kern); - -/** - * vringh_notify_enable_kern - we want to know if something changes. - * @vrh: the vring. - * - * This always enables notifications, but returns false if there are - * now more buffers available in the vring. - */ -bool vringh_notify_enable_kern(struct vringh *vrh) -{ - return __vringh_notify_enable(vrh, getu16_kern, putu16_kern); -} -EXPORT_SYMBOL(vringh_notify_enable_kern); - -/** - * vringh_notify_disable_kern - don't tell us if something changes. - * @vrh: the vring. - * - * This is our normal running state: we disable and then only enable when - * we're going to sleep. - */ -void vringh_notify_disable_kern(struct vringh *vrh) -{ - __vringh_notify_disable(vrh, putu16_kern); -} -EXPORT_SYMBOL(vringh_notify_disable_kern); - -/** - * vringh_need_notify_kern - must we tell the other side about used buffer= s? - * @vrh: the vring we've called vringh_complete_kern() on. - * - * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. 
- */ -int vringh_need_notify_kern(struct vringh *vrh) -{ - return __vringh_need_notify(vrh, getu16_kern); + return 0; } -EXPORT_SYMBOL(vringh_need_notify_kern); +EXPORT_SYMBOL(vringh_init_kern); =20 #if IS_REACHABLE(CONFIG_VHOST_IOTLB) =20 @@ -1122,7 +949,7 @@ static int iotlb_translate(const struct vringh *vrh, return ret; } =20 -static inline int copy_from_iotlb(const struct vringh *vrh, void *dst, +static int copy_from_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { u64 total_translated =3D 0; @@ -1155,7 +982,7 @@ static inline int copy_from_iotlb(const struct vringh = *vrh, void *dst, return total_translated; } =20 -static inline int copy_to_iotlb(const struct vringh *vrh, void *dst, +static int copy_to_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { u64 total_translated =3D 0; @@ -1188,7 +1015,7 @@ static inline int copy_to_iotlb(const struct vringh *= vrh, void *dst, return total_translated; } =20 -static inline int getu16_iotlb(const struct vringh *vrh, +static int getu16_iotlb(const struct vringh *vrh, u16 *val, const __virtio16 *p) { struct bio_vec iov; @@ -1209,7 +1036,7 @@ static inline int getu16_iotlb(const struct vringh *v= rh, return 0; } =20 -static inline int putu16_iotlb(const struct vringh *vrh, +static int putu16_iotlb(const struct vringh *vrh, __virtio16 *p, u16 val) { struct bio_vec iov; @@ -1230,7 +1057,7 @@ static inline int putu16_iotlb(const struct vringh *v= rh, return 0; } =20 -static inline int copydesc_iotlb(const struct vringh *vrh, +static int copydesc_iotlb(const struct vringh *vrh, void *dst, const void *src, size_t len) { int ret; @@ -1242,7 +1069,7 @@ static inline int copydesc_iotlb(const struct vringh = *vrh, return 0; } =20 -static inline int xfer_from_iotlb(const struct vringh *vrh, void *src, +static int xfer_from_iotlb(const struct vringh *vrh, void *src, void *dst, size_t len) { int ret; @@ -1254,7 +1081,7 @@ static inline int xfer_from_iotlb(const struct vringh= *vrh, void *src, 
return 0; } =20 -static inline int xfer_to_iotlb(const struct vringh *vrh, +static int xfer_to_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { int ret; @@ -1266,7 +1093,7 @@ static inline int xfer_to_iotlb(const struct vringh *= vrh, return 0; } =20 -static inline int putused_iotlb(const struct vringh *vrh, +static int putused_iotlb(const struct vringh *vrh, struct vring_used_elem *dst, const struct vring_used_elem *src, unsigned int num) @@ -1281,6 +1108,17 @@ static inline int putused_iotlb(const struct vringh = *vrh, return 0; } =20 +static const struct vringh_ops iotlb_vringh_ops =3D { + .getu16 =3D getu16_iotlb, + .putu16 =3D putu16_iotlb, + .xfer_from =3D xfer_from_iotlb, + .xfer_to =3D xfer_to_iotlb, + .putused =3D putused_iotlb, + .copydesc =3D copydesc_iotlb, + .range_check =3D no_range_check, + .getrange =3D NULL, +}; + /** * vringh_init_iotlb - initialize a vringh for a ring with IOTLB. * @vrh: the vringh to initialize. @@ -1294,13 +1132,20 @@ static inline int putused_iotlb(const struct vringh= *vrh, * Returns an error if num is invalid. */ int vringh_init_iotlb(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, gfp_t gfp, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used) { - return vringh_init_kern(vrh, features, num, weak_barriers, - desc, avail, used); + int err; + + err =3D __vringh_init(vrh, features, num, weak_barriers, gfp, desc, avail= , used); + if (err) + return err; + + memcpy(&vrh->ops, &iotlb_vringh_ops, sizeof(iotlb_vringh_ops)); + + return 0; } EXPORT_SYMBOL(vringh_init_iotlb); =20 @@ -1318,162 +1163,6 @@ void vringh_set_iotlb(struct vringh *vrh, struct vh= ost_iotlb *iotlb, } EXPORT_SYMBOL(vringh_set_iotlb); =20 -/** - * vringh_getdesc_iotlb - get next available descriptor from ring with - * IOTLB. - * @vrh: the kernelspace vring. 
- * @riov: where to put the readable descriptors (or NULL) - * @wiov: where to put the writable descriptors (or NULL) - * @head: head index we received, for passing to vringh_complete_iotlb(). - * @gfp: flags for allocating larger riov/wiov. - * - * Returns 0 if there was no descriptor, 1 if there was, or -errno. - * - * Note that on error return, you can tell the difference between an - * invalid ring and a single invalid descriptor: in the former case, - * *head will be vrh->vring.num. You may be able to ignore an invalid - * descriptor, but there's not much you can do with an invalid ring. - * - * Note that you can reuse riov and wiov with subsequent calls. Content is - * overwritten and memory reallocated if more space is needed. - * When you don't have to use riov and wiov anymore, you should clean up t= hem - * calling vringh_kiov_cleanup() to release the memory, even on error! - */ -int vringh_getdesc_iotlb(struct vringh *vrh, - struct vringh_kiov *riov, - struct vringh_kiov *wiov, - u16 *head, - gfp_t gfp) -{ - int err; - - err =3D __vringh_get_head(vrh, getu16_iotlb, &vrh->last_avail_idx); - if (err < 0) - return err; - - /* Empty... */ - if (err =3D=3D vrh->vring.num) - return 0; - - *head =3D err; - err =3D __vringh_iov(vrh, *head, riov, wiov, no_range_check, NULL, - gfp, copydesc_iotlb); - if (err) - return err; - - return 1; -} -EXPORT_SYMBOL(vringh_getdesc_iotlb); - -/** - * vringh_iov_pull_iotlb - copy bytes from vring_iov. - * @vrh: the vring. - * @riov: the riov as passed to vringh_getdesc_iotlb() (updated as we cons= ume) - * @dst: the place to copy. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_pull_iotlb(struct vringh *vrh, - struct vringh_kiov *riov, - void *dst, size_t len) -{ - return vringh_iov_xfer(vrh, riov, dst, len, xfer_from_iotlb); -} -EXPORT_SYMBOL(vringh_iov_pull_iotlb); - -/** - * vringh_iov_push_iotlb - copy bytes into vring_iov. - * @vrh: the vring. 
- * @wiov: the wiov as passed to vringh_getdesc_iotlb() (updated as we cons= ume) - * @src: the place to copy from. - * @len: the maximum length to copy. - * - * Returns the bytes copied <=3D len or a negative errno. - */ -ssize_t vringh_iov_push_iotlb(struct vringh *vrh, - struct vringh_kiov *wiov, - const void *src, size_t len) -{ - return vringh_iov_xfer(vrh, wiov, (void *)src, len, xfer_to_iotlb); -} -EXPORT_SYMBOL(vringh_iov_push_iotlb); - -/** - * vringh_abandon_iotlb - we've decided not to handle the descriptor(s). - * @vrh: the vring. - * @num: the number of descriptors to put back (ie. num - * vringh_get_iotlb() to undo). - * - * The next vringh_get_iotlb() will return the old descriptor(s) again. - */ -void vringh_abandon_iotlb(struct vringh *vrh, unsigned int num) -{ - /* We only update vring_avail_event(vr) when we want to be notified, - * so we haven't changed that yet. - */ - vrh->last_avail_idx -=3D num; -} -EXPORT_SYMBOL(vringh_abandon_iotlb); - -/** - * vringh_complete_iotlb - we've finished with descriptor, publish it. - * @vrh: the vring. - * @head: the head as filled in by vringh_getdesc_iotlb. - * @len: the length of data we have written. - * - * You should check vringh_need_notify_iotlb() after one or more calls - * to this function. - */ -int vringh_complete_iotlb(struct vringh *vrh, u16 head, u32 len) -{ - struct vring_used_elem used; - - used.id =3D cpu_to_vringh32(vrh, head); - used.len =3D cpu_to_vringh32(vrh, len); - - return __vringh_complete(vrh, &used, 1, putu16_iotlb, putused_iotlb); -} -EXPORT_SYMBOL(vringh_complete_iotlb); - -/** - * vringh_notify_enable_iotlb - we want to know if something changes. - * @vrh: the vring. - * - * This always enables notifications, but returns false if there are - * now more buffers available in the vring. 
- */ -bool vringh_notify_enable_iotlb(struct vringh *vrh) -{ - return __vringh_notify_enable(vrh, getu16_iotlb, putu16_iotlb); -} -EXPORT_SYMBOL(vringh_notify_enable_iotlb); - -/** - * vringh_notify_disable_iotlb - don't tell us if something changes. - * @vrh: the vring. - * - * This is our normal running state: we disable and then only enable when - * we're going to sleep. - */ -void vringh_notify_disable_iotlb(struct vringh *vrh) -{ - __vringh_notify_disable(vrh, putu16_iotlb); -} -EXPORT_SYMBOL(vringh_notify_disable_iotlb); - -/** - * vringh_need_notify_iotlb - must we tell the other side about used buffe= rs? - * @vrh: the vring we've called vringh_complete_iotlb() on. - * - * Returns -errno or 0 if we don't need to tell the other side, 1 if we do. - */ -int vringh_need_notify_iotlb(struct vringh *vrh) -{ - return __vringh_need_notify(vrh, getu16_iotlb); -} -EXPORT_SYMBOL(vringh_need_notify_iotlb); - #endif =20 MODULE_LICENSE("GPL"); diff --git a/include/linux/vringh.h b/include/linux/vringh.h index 733d948e8123..89c73605c85f 100644 --- a/include/linux/vringh.h +++ b/include/linux/vringh.h @@ -21,6 +21,36 @@ #endif #include =20 +struct vringh; +struct vringh_range; + +/** + * struct vringh_ops - ops for accessing a vring and checking to access ra= nge. 
+ * @getu16: read u16 value from pointer
+ * @putu16: write u16 value to pointer
+ * @xfer_from: copy memory range from specified address to local virtual address
+ * @xfer_to: copy memory range from local virtual address to specified address
+ * @putused: update vring used descriptor
+ * @copydesc: copy descriptor from target to local virtual address
+ * @range_check: check if the region is accessible
+ * @getrange: return a range that the vring can access
+ */
+struct vringh_ops {
+	int (*getu16)(const struct vringh *vrh, u16 *val, const __virtio16 *p);
+	int (*putu16)(const struct vringh *vrh, __virtio16 *p, u16 val);
+	int (*xfer_from)(const struct vringh *vrh, void *src, void *dst,
+			 size_t len);
+	int (*xfer_to)(const struct vringh *vrh, void *dst, void *src,
+		       size_t len);
+	int (*putused)(const struct vringh *vrh, struct vring_used_elem *dst,
+		       const struct vring_used_elem *src, unsigned int num);
+	int (*copydesc)(const struct vringh *vrh, void *dst, const void *src,
+			size_t len);
+	bool (*range_check)(struct vringh *vrh, u64 addr, size_t *len,
+			    struct vringh_range *range);
+	bool (*getrange)(struct vringh *vrh, u64 addr, struct vringh_range *r);
+};
+
 /* virtio_ring with information needed for host access. */
 struct vringh {
 	/* Everything is little endian */
@@ -52,6 +82,10 @@ struct vringh {
=20
 	/* The function to call to notify the guest about added buffers */
 	void (*notify)(struct vringh *);
+
+	struct vringh_ops ops;
+
+	gfp_t desc_gfp;
 };
=20
 /**
@@ -99,41 +133,40 @@ int vringh_init_user(struct vringh *vrh, u64 features,
 		     unsigned int num, bool weak_barriers,
 		     vring_desc_t __user *desc,
 		     vring_avail_t __user *avail,
-		     vring_used_t __user *used);
+		     vring_used_t __user *used,
+		     bool (*getrange)(struct vringh *vrh, u64 addr, struct vringh_range *r));
=20
 /* Convert a descriptor into iovecs.
*/ -int vringh_getdesc_user(struct vringh *vrh, +int vringh_getdesc(struct vringh *vrh, struct vringh_kiov *riov, struct vringh_kiov *wiov, - bool (*getrange)(struct vringh *vrh, - u64 addr, struct vringh_range *r), u16 *head); =20 /* Copy bytes from readable vsg, consuming it (and incrementing wiov->i). = */ -ssize_t vringh_iov_pull_user(struct vringh_kiov *riov, void *dst, size_t l= en); +ssize_t vringh_iov_pull(struct vringh *vrh, struct vringh_kiov *riov, void= *dst, size_t len); =20 /* Copy bytes into writable vsg, consuming it (and incrementing wiov->i). = */ -ssize_t vringh_iov_push_user(struct vringh_kiov *wiov, +ssize_t vringh_iov_push(struct vringh *vrh, struct vringh_kiov *wiov, const void *src, size_t len); =20 /* Mark a descriptor as used. */ -int vringh_complete_user(struct vringh *vrh, u16 head, u32 len); -int vringh_complete_multi_user(struct vringh *vrh, +int vringh_complete(struct vringh *vrh, u16 head, u32 len); +int vringh_complete_multi(struct vringh *vrh, const struct vring_used_elem used[], unsigned num_used); =20 /* Pretend we've never seen descriptor (for easy error handling). */ -void vringh_abandon_user(struct vringh *vrh, unsigned int num); +void vringh_abandon(struct vringh *vrh, unsigned int num); =20 /* Do we need to fire the eventfd to notify the other side? */ -int vringh_need_notify_user(struct vringh *vrh); +int vringh_need_notify(struct vringh *vrh); =20 -bool vringh_notify_enable_user(struct vringh *vrh); -void vringh_notify_disable_user(struct vringh *vrh); +bool vringh_notify_enable(struct vringh *vrh); +void vringh_notify_disable(struct vringh *vrh); =20 /* Helpers for kernelspace vrings. 
 */
 int vringh_init_kern(struct vringh *vrh, u64 features,
-		     unsigned int num, bool weak_barriers,
+		     unsigned int num, bool weak_barriers, gfp_t gfp,
		     struct vring_desc *desc,
		     struct vring_avail *avail,
		     struct vring_used *used);
@@ -176,23 +209,6 @@ static inline size_t vringh_kiov_length(struct vringh_kiov *kiov)

 void vringh_kiov_advance(struct vringh_kiov *kiov, size_t len);

-int vringh_getdesc_kern(struct vringh *vrh,
-			struct vringh_kiov *riov,
-			struct vringh_kiov *wiov,
-			u16 *head,
-			gfp_t gfp);
-
-ssize_t vringh_iov_pull_kern(struct vringh_kiov *riov, void *dst, size_t len);
-ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov,
-			     const void *src, size_t len);
-void vringh_abandon_kern(struct vringh *vrh, unsigned int num);
-int vringh_complete_kern(struct vringh *vrh, u16 head, u32 len);
-
-bool vringh_notify_enable_kern(struct vringh *vrh);
-void vringh_notify_disable_kern(struct vringh *vrh);
-
-int vringh_need_notify_kern(struct vringh *vrh);
-
 /* Notify the guest about buffers added to the used ring */
 static inline void vringh_notify(struct vringh *vrh)
 {
@@ -242,33 +258,11 @@ void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb,
		      spinlock_t *iotlb_lock);

 int vringh_init_iotlb(struct vringh *vrh, u64 features,
-		      unsigned int num, bool weak_barriers,
+		      unsigned int num, bool weak_barriers, gfp_t gfp,
		      struct vring_desc *desc,
		      struct vring_avail *avail,
		      struct vring_used *used);

-int vringh_getdesc_iotlb(struct vringh *vrh,
-			 struct vringh_kiov *riov,
-			 struct vringh_kiov *wiov,
-			 u16 *head,
-			 gfp_t gfp);
-
-ssize_t vringh_iov_pull_iotlb(struct vringh *vrh,
-			      struct vringh_kiov *riov,
-			      void *dst, size_t len);
-ssize_t vringh_iov_push_iotlb(struct vringh *vrh,
-			      struct vringh_kiov *wiov,
-			      const void *src, size_t len);
-
-void vringh_abandon_iotlb(struct vringh *vrh, unsigned int num);
-
-int vringh_complete_iotlb(struct vringh *vrh, u16 head, u32 len);
-
-bool vringh_notify_enable_iotlb(struct vringh *vrh);
-void vringh_notify_disable_iotlb(struct vringh *vrh);
-
-int vringh_need_notify_iotlb(struct vringh *vrh);
-
 #endif /* CONFIG_VHOST_IOTLB */

 #endif /* _LINUX_VRINGH_H */
-- 
2.25.1

From nobody Tue Sep 16 23:52:51 2025
From: Shunsuke Mie
To: "Michael S. Tsirkin" , Jason Wang , Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH 5/9] tools/virtio: convert to use new unified vringh APIs
Date: Tue, 27 Dec 2022 11:25:27 +0900
Message-Id: <20221227022528.609839-6-mie@igel.co.jp>
In-Reply-To: <20221227022528.609839-1-mie@igel.co.jp>
References: <20221227022528.609839-1-mie@igel.co.jp>

The vringh_*_user APIs are being removed, except for vringh_init_user(),
so change to use the new APIs.

Signed-off-by: Shunsuke Mie
---
 tools/virtio/vringh_test.c | 89 +++++++++++++++++++-------------------
 1 file changed, 44 insertions(+), 45 deletions(-)

diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
index 6c9533b8a2ca..068c6d5aa4fd 100644
--- a/tools/virtio/vringh_test.c
+++ b/tools/virtio/vringh_test.c
@@ -187,7 +187,7 @@ static int parallel_test(u64 features,

 	vring_init(&vrh.vring, RINGSIZE, host_map, ALIGN);
 	vringh_init_user(&vrh, features, RINGSIZE, true,
-			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used);
+			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used, getrange);
 	CPU_SET(first_cpu, &cpu_set);
 	if (sched_setaffinity(getpid(), sizeof(cpu_set), &cpu_set))
 		errx(1, "Could not set affinity to cpu %u", first_cpu);
@@ -202,9 +202,9 @@ static int parallel_test(u64 features,
 			err = vringh_get_head(&vrh, &head);
 			if (err != 0)
 				break;
-			err = vringh_need_notify_user(&vrh);
+			err = vringh_need_notify(&vrh);
 			if (err < 0)
-				errx(1, "vringh_need_notify_user: %i",
+				errx(1, "vringh_need_notify: %i",
 				     err);
 			if (err) {
 				write(to_guest[1], "", 1);
@@ -223,46 +223,45 @@ static int parallel_test(u64 features,
 					 host_wiov, ARRAY_SIZE(host_wiov));

-			err = vringh_getdesc_user(&vrh, &riov, &wiov,
-						  getrange, &head);
+			err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 		}
 		if (err == 0) {
-			err = vringh_need_notify_user(&vrh);
+			err = vringh_need_notify(&vrh);
 			if (err < 0)
-				errx(1, "vringh_need_notify_user: %i",
+				errx(1, "vringh_need_notify: %i",
 				     err);
 			if (err) {
 				write(to_guest[1], "", 1);
 				notifies++;
 			}

-			if (!vringh_notify_enable_user(&vrh))
+			if (!vringh_notify_enable(&vrh))
 				continue;

 			/* Swallow all notifies at once. */
 			if (read(to_host[0], buf, sizeof(buf)) < 1)
 				break;

-			vringh_notify_disable_user(&vrh);
+			vringh_notify_disable(&vrh);
 			receives++;
 			continue;
 		}
 		if (err != 1)
-			errx(1, "vringh_getdesc_user: %i", err);
+			errx(1, "vringh_getdesc: %i", err);

 		/* We simply copy bytes.
 */
 		if (riov.used) {
-			rlen = vringh_iov_pull_user(&riov, rbuf,
+			rlen = vringh_iov_pull(&vrh, &riov, rbuf,
 						    sizeof(rbuf));
 			if (rlen != 4)
-				errx(1, "vringh_iov_pull_user: %i",
+				errx(1, "vringh_iov_pull: %i",
 				     rlen);
 			assert(riov.i == riov.used);
 			written = 0;
 		} else {
-			err = vringh_iov_push_user(&wiov, rbuf, rlen);
+			err = vringh_iov_push(&vrh, &wiov, rbuf, rlen);
 			if (err != rlen)
-				errx(1, "vringh_iov_push_user: %i",
+				errx(1, "vringh_iov_push: %i",
 				     err);
 			assert(wiov.i == wiov.used);
 			written = err;
@@ -270,14 +269,14 @@ static int parallel_test(u64 features,
 	complete:
 		xfers++;

-		err = vringh_complete_user(&vrh, head, written);
+		err = vringh_complete(&vrh, head, written);
 		if (err != 0)
-			errx(1, "vringh_complete_user: %i", err);
+			errx(1, "vringh_complete: %i", err);
 	}

-	err = vringh_need_notify_user(&vrh);
+	err = vringh_need_notify(&vrh);
 	if (err < 0)
-		errx(1, "vringh_need_notify_user: %i", err);
+		errx(1, "vringh_need_notify: %i", err);
 	if (err) {
 		write(to_guest[1], "", 1);
 		notifies++;
@@ -493,12 +492,12 @@ int main(int argc, char *argv[])
 	/* Set up host side. */
 	vring_init(&vrh.vring, RINGSIZE, __user_addr_min, ALIGN);
 	vringh_init_user(&vrh, vdev.features, RINGSIZE, true,
-			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used);
+			 vrh.vring.desc, vrh.vring.avail, vrh.vring.used, getrange);

 	/* No descriptor to get yet... */
-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 0)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);

 	/* Guest puts in a descriptor.
 */
 	memcpy(__user_addr_max - 1, "a", 1);
@@ -520,9 +519,9 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));

-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 1)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);

 	assert(riov.used == 1);
 	assert(riov.iov[0].iov_base == __user_addr_max - 1);
@@ -539,25 +538,25 @@ int main(int argc, char *argv[])
 		assert(wiov.iov[1].iov_len == 1);
 	}

-	err = vringh_iov_pull_user(&riov, buf, 5);
+	err = vringh_iov_pull(&vrh, &riov, buf, 5);
 	if (err != 1)
-		errx(1, "vringh_iov_pull_user: %i", err);
+		errx(1, "vringh_iov_pull: %i", err);
 	assert(buf[0] == 'a');
 	assert(riov.i == 1);
-	assert(vringh_iov_pull_user(&riov, buf, 5) == 0);
+	assert(vringh_iov_pull(&vrh, &riov, buf, 5) == 0);

 	memcpy(buf, "bcdef", 5);
-	err = vringh_iov_push_user(&wiov, buf, 5);
+	err = vringh_iov_push(&vrh, &wiov, buf, 5);
 	if (err != 2)
-		errx(1, "vringh_iov_push_user: %i", err);
+		errx(1, "vringh_iov_push: %i", err);
 	assert(memcmp(__user_addr_max - 3, "bc", 2) == 0);
 	assert(wiov.i == wiov.used);
-	assert(vringh_iov_push_user(&wiov, buf, 5) == 0);
+	assert(vringh_iov_push(&vrh, &wiov, buf, 5) == 0);

 	/* Host is done. */
-	err = vringh_complete_user(&vrh, head, err);
+	err = vringh_complete(&vrh, head, err);
 	if (err != 0)
-		errx(1, "vringh_complete_user: %i", err);
+		errx(1, "vringh_complete: %i", err);

 	/* Guest should see used token now.
 */
 	__kfree_ignore_start = __user_addr_min + vring_size(RINGSIZE, ALIGN);
@@ -589,9 +588,9 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));

-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 1)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);

 	assert(riov.max_num & VRINGH_IOV_ALLOCATED);
 	assert(riov.iov != host_riov);
@@ -605,9 +604,9 @@ int main(int argc, char *argv[])

 	/* Pull data back out (in odd chunks), should be as expected. */
 	for (i = 0; i < RINGSIZE * USER_MEM/4; i += 3) {
-		err = vringh_iov_pull_user(&riov, buf, 3);
+		err = vringh_iov_pull(&vrh, &riov, buf, 3);
 		if (err != 3 && i + err != RINGSIZE * USER_MEM/4)
-			errx(1, "vringh_iov_pull_user large: %i", err);
+			errx(1, "vringh_iov_pull large: %i", err);
 		assert(buf[0] == (char)i);
 		assert(err < 2 || buf[1] == (char)(i + 1));
 		assert(err < 3 || buf[2] == (char)(i + 2));
@@ -619,9 +618,9 @@ int main(int argc, char *argv[])
 	/* Complete using multi interface, just because we can. */
 	used[0].id = head;
 	used[0].len = 0;
-	err = vringh_complete_multi_user(&vrh, used, 1);
+	err = vringh_complete_multi(&vrh, used, 1);
 	if (err)
-		errx(1, "vringh_complete_multi_user(1): %i", err);
+		errx(1, "vringh_complete_multi(1): %i", err);

 	/* Free up those descriptors. */
 	ret = virtqueue_get_buf(vq, &i);
@@ -642,17 +641,17 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));

 	for (i = 0; i < RINGSIZE; i++) {
-		err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+		err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 		if (err != 1)
-			errx(1, "vringh_getdesc_user: %i", err);
+			errx(1, "vringh_getdesc: %i", err);
 		used[i].id = head;
 		used[i].len = 0;
 	}
 	/* Make sure it wraps around ring, to test!
 */
 	assert(vrh.vring.used->idx % RINGSIZE != 0);
-	err = vringh_complete_multi_user(&vrh, used, RINGSIZE);
+	err = vringh_complete_multi(&vrh, used, RINGSIZE);
 	if (err)
-		errx(1, "vringh_complete_multi_user: %i", err);
+		errx(1, "vringh_complete_multi: %i", err);

 	/* Free those buffers. */
 	for (i = 0; i < RINGSIZE; i++) {
@@ -726,19 +725,19 @@ int main(int argc, char *argv[])
 	vringh_kiov_init(&riov, host_riov, ARRAY_SIZE(host_riov));
 	vringh_kiov_init(&wiov, host_wiov, ARRAY_SIZE(host_wiov));

-	err = vringh_getdesc_user(&vrh, &riov, &wiov, getrange, &head);
+	err = vringh_getdesc(&vrh, &riov, &wiov, &head);
 	if (err != 1)
-		errx(1, "vringh_getdesc_user: %i", err);
+		errx(1, "vringh_getdesc: %i", err);

 	if (head != 0)
-		errx(1, "vringh_getdesc_user: head %i not 0", head);
+		errx(1, "vringh_getdesc: head %i not 0", head);

 	assert(riov.max_num & VRINGH_IOV_ALLOCATED);
 	if (getrange != getrange_slow)
 		assert(riov.used == 7);
 	else
 		assert(riov.used == 28);
-	err = vringh_iov_pull_user(&riov, buf, 29);
+	err = vringh_iov_pull(&vrh, &riov, buf, 29);
 	assert(err == 28);

 	/* Data should be linear.
 */
-- 
2.25.1

From nobody Tue Sep 16 23:52:51 2025
From: Shunsuke Mie
To: "Michael S. Tsirkin" , Jason Wang , Rusty Russell
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shunsuke Mie
Subject: [RFC PATCH 6/9] caif_virtio: convert to new unified vringh APIs
Date: Tue, 27 Dec 2022 11:25:28 +0900
Message-Id: <20221227022528.609839-7-mie@igel.co.jp>
In-Reply-To: <20221227022528.609839-1-mie@igel.co.jp>
References: <20221227022528.609839-1-mie@igel.co.jp>

The vringh_*_kern APIs are being removed, except for vringh_init_kern(),
so change to use the new APIs.
Signed-off-by: Shunsuke Mie
---
 drivers/net/caif/caif_virtio.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c
index 0b0f234b0b50..f9dd79807afa 100644
--- a/drivers/net/caif/caif_virtio.c
+++ b/drivers/net/caif/caif_virtio.c
@@ -265,18 +265,12 @@ static int cfv_rx_poll(struct napi_struct *napi, int quota)
 		 */
 		if (riov->i == riov->used) {
 			if (cfv->ctx.head != USHRT_MAX) {
-				vringh_complete_kern(cfv->vr_rx,
-						     cfv->ctx.head,
-						     0);
+				vringh_complete(cfv->vr_rx, cfv->ctx.head, 0);
 				cfv->ctx.head = USHRT_MAX;
 			}

-			err = vringh_getdesc_kern(
-				cfv->vr_rx,
-				riov,
-				NULL,
-				&cfv->ctx.head,
-				GFP_ATOMIC);
+			err = vringh_getdesc(cfv->vr_rx, riov, NULL,
+					     &cfv->ctx.head);

 			if (err <= 0)
 				goto exit;
@@ -317,9 +311,9 @@ static int cfv_rx_poll(struct napi_struct *napi, int quota)

 			/* Really out of packets? (stolen from virtio_net)*/
 			napi_complete(napi);
-			if (unlikely(!vringh_notify_enable_kern(cfv->vr_rx)) &&
+			if (unlikely(!vringh_notify_enable(cfv->vr_rx)) &&
 			    napi_schedule_prep(napi)) {
-				vringh_notify_disable_kern(cfv->vr_rx);
+				vringh_notify_disable(cfv->vr_rx);
 				__napi_schedule(napi);
 			}
 			break;
@@ -329,7 +323,7 @@ static int cfv_rx_poll(struct napi_struct *napi, int quota)
 			dev_kfree_skb(skb);
 			/* Stop NAPI poll on OOM, we hope to be polled later */
 			napi_complete(napi);
-			vringh_notify_enable_kern(cfv->vr_rx);
+			vringh_notify_enable(cfv->vr_rx);
 			break;

 		default:
@@ -337,12 +331,12 @@ static int cfv_rx_poll(struct napi_struct *napi, int quota)
 			netdev_warn(cfv->ndev, "Bad ring, disable device\n");
 			cfv->ndev->stats.rx_dropped = riov->used - riov->i;
 			napi_complete(napi);
-			vringh_notify_disable_kern(cfv->vr_rx);
+			vringh_notify_disable(cfv->vr_rx);
 			netif_carrier_off(cfv->ndev);
 			break;
 		}
 out:
-	if (rxcnt && vringh_need_notify_kern(cfv->vr_rx) > 0)
+	if (rxcnt && vringh_need_notify(cfv->vr_rx) > 0)
 		vringh_notify(cfv->vr_rx);
 	return rxcnt;
 }
@@ -352,7 +346,7 @@ static void cfv_recv(struct virtio_device *vdev, struct vringh *vr_rx)
 	struct cfv_info *cfv = vdev->priv;

 	++cfv->stats.rx_kicks;
-	vringh_notify_disable_kern(cfv->vr_rx);
+	vringh_notify_disable(cfv->vr_rx);
 	napi_schedule(&cfv->napi);
 }

@@ -460,7 +454,7 @@ static int cfv_netdev_close(struct net_device *netdev)
 	/* Disable interrupts, queues and NAPI polling */
 	netif_carrier_off(netdev);
 	virtqueue_disable_cb(cfv->vq_tx);
-	vringh_notify_disable_kern(cfv->vr_rx);
+	vringh_notify_disable(cfv->vr_rx);
 	napi_disable(&cfv->napi);

 	/* Release any TX buffers on both used and available rings */
-- 
2.25.1