Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
include/hw/virtio/virtio-access.h | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/include/hw/virtio/virtio-access.h b/include/hw/virtio/virtio-access.h
index cd17d0c87eb..51d9d725690 100644
--- a/include/hw/virtio/virtio-access.h
+++ b/include/hw/virtio/virtio-access.h
@@ -17,27 +17,27 @@
 #define QEMU_VIRTIO_ACCESS_H
 
 #include "exec/hwaddr.h"
+#include "qemu/target-info.h"
 #include "system/memory_cached.h"
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/virtio-bus.h"
 
-#if defined(TARGET_PPC64) || defined(TARGET_ARM)
-#define LEGACY_VIRTIO_IS_BIENDIAN 1
-#endif
-
 static inline bool virtio_access_is_big_endian(VirtIODevice *vdev)
 {
-#if defined(LEGACY_VIRTIO_IS_BIENDIAN)
-    return virtio_is_big_endian(vdev);
-#elif TARGET_BIG_ENDIAN
-    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
-        /* Devices conforming to VIRTIO 1.0 or later are always LE. */
-        return false;
+    if (target_ppc64() || target_base_arm()) {
+        return virtio_is_big_endian(vdev);
     }
-    return true;
-#else
+
+    if (target_big_endian()) {
+        if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
+            /* Devices conforming to VIRTIO 1.0 or later are always LE. */
+            return false;
+        } else {
+            return true;
+        }
+    }
+
     return false;
-#endif
 }
 
 static inline void virtio_stw_p(VirtIODevice *vdev, void *ptr, uint16_t v)
--
2.47.3
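For reference while reading the performance discussion below, this is how the
function reads once the hunk above is applied (reassembled from the '+' and
context lines of the patch):

static inline bool virtio_access_is_big_endian(VirtIODevice *vdev)
{
    if (target_ppc64() || target_base_arm()) {
        return virtio_is_big_endian(vdev);
    }

    if (target_big_endian()) {
        if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
            /* Devices conforming to VIRTIO 1.0 or later are always LE. */
            return false;
        } else {
            return true;
        }
    }

    return false;
}

The compile-time TARGET_* #ifdefs are gone: the endianness decision now goes
through the runtime target_*() queries on every call, which is what the
replies below probe for performance impact.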
On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>

Performance impact?

the reason we have these is for performance ...
> ---
> include/hw/virtio/virtio-access.h | 26 +++++++++++++-------------
> 1 file changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/include/hw/virtio/virtio-access.h b/include/hw/virtio/virtio-access.h
> index cd17d0c87eb..51d9d725690 100644
> --- a/include/hw/virtio/virtio-access.h
> +++ b/include/hw/virtio/virtio-access.h
> @@ -17,27 +17,27 @@
>  #define QEMU_VIRTIO_ACCESS_H
>  
>  #include "exec/hwaddr.h"
> +#include "qemu/target-info.h"
>  #include "system/memory_cached.h"
>  #include "hw/virtio/virtio.h"
>  #include "hw/virtio/virtio-bus.h"
>  
> -#if defined(TARGET_PPC64) || defined(TARGET_ARM)
> -#define LEGACY_VIRTIO_IS_BIENDIAN 1
> -#endif
> -
>  static inline bool virtio_access_is_big_endian(VirtIODevice *vdev)
>  {
> -#if defined(LEGACY_VIRTIO_IS_BIENDIAN)
> -    return virtio_is_big_endian(vdev);
> -#elif TARGET_BIG_ENDIAN
> -    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
> -        /* Devices conforming to VIRTIO 1.0 or later are always LE. */
> -        return false;
> +    if (target_ppc64() || target_base_arm()) {
> +        return virtio_is_big_endian(vdev);
>      }
> -    return true;
> -#else
> +
> +    if (target_big_endian()) {
> +        if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
> +            /* Devices conforming to VIRTIO 1.0 or later are always LE. */
> +            return false;
> +        } else {
> +            return true;
> +        }
> +    }
> +
>      return false;
> -#endif
>  }
>  
>  static inline void virtio_stw_p(VirtIODevice *vdev, void *ptr, uint16_t v)
> --
> 2.47.3
On 1/31/26 9:48 AM, Michael S. Tsirkin wrote:
> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>
> Performance impact?
>
> the reason we have these is for performance ...
>

I would be very happy to run any benchmark that you might judge critical.

Should we run a disk read/write sequence with a virtio disk, or a download
with a virtio interface? Any other idea?

Regards,
Pierrick
On Sun, Feb 01, 2026 at 05:24:03PM -0800, Pierrick Bouvier wrote:
> On 1/31/26 9:48 AM, Michael S. Tsirkin wrote:
> > On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
> > > Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> >
> > Performance impact?
> >
> > the reason we have these is for performance ...
> >
>
> I would be very happy to run any benchmark that you might judge critical.
>
> Should we run a disk read/write sequence with a virtio disk, or a download
> with a virtio interface? Any other idea?

block for sure, people who care about network perf go the vhost or
vhost user path.

So I CC'd Stefan Hajnoczi. Stefan do you feel this needs a test and what
kind of test do you suggest as the most representative of I/O overhead?

>
> Regards,
> Pierrick
On Mon, Feb 02, 2026 at 02:50:28AM -0500, Michael S. Tsirkin wrote:
> On Sun, Feb 01, 2026 at 05:24:03PM -0800, Pierrick Bouvier wrote:
> > On 1/31/26 9:48 AM, Michael S. Tsirkin wrote:
> > > On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
> > > > Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> > >
> > > Performance impact?
> > >
> > > the reason we have these is for performance ...
> > >
> >
> > I would be very happy to run any benchmark that you might judge critical.
> >
> > Should we run a disk read/write sequence with a virtio disk, or a download
> > with a virtio interface? Any other idea?
>
> block for sure, people who care about network perf go the vhost or
> vhost user path.
>
> So I CC'd Stefan Hajnoczi. Stefan do you feel this needs a test and what
> kind of test do you suggest as the most representative of I/O overhead?
This command-line lets you benchmark virtio-blk without actual I/O
slowing down the request processing:
qemu-system-x86_64 \
-M accel=kvm \
-cpu host \
-m 4G \
--blockdev file,node-name=drive0,filename=boot.img,cache.direct=on,aio=native \
--blockdev null-co,node-name=drive1,size=$((10 * 1024 * 1024 * 1024)) \
--object iothread,id=iothread0 \
--device virtio-blk-pci,drive=drive0,iothread=iothread0 \
--device virtio-blk-pci,drive=drive1,iothread=iothread0
Here is a fio command-line for 4 KiB random reads:
fio \
--ioengine=libaio \
--direct=1 \
--runtime=30 \
--ramp_time=10 \
--rw=randread \
--bs=4k \
--iodepth=128 \
--filename=/dev/vdb \
--name=randread
This is just a single vCPU, but it should be enough to see if there is
any difference in I/O Operations Per Second (IOPS) or efficiency
(IOPS/CPU utilization).
Stefan
On 2/2/26 10:00 AM, Stefan Hajnoczi wrote:
> On Mon, Feb 02, 2026 at 02:50:28AM -0500, Michael S. Tsirkin wrote:
>> On Sun, Feb 01, 2026 at 05:24:03PM -0800, Pierrick Bouvier wrote:
>>> On 1/31/26 9:48 AM, Michael S. Tsirkin wrote:
>>>> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>>>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>>
>>>> Performance impact?
>>>>
>>>> the reason we have these is for performance ...
>>>>
>>>
>>> I would be very happy to run any benchmark that you might judge critical.
>>>
>>> Should we run a disk read/write sequence with a virtio disk, or a download
>>> with a virtio interface? Any other idea?
>>
>> block for sure, people who care about network perf go the vhost or
>> vhost user path.
>>
>> So I CC'd Stefan Hajnoczi. Stefan do you feel this needs a test and what
>> kind of test do you suggest as the most representative of I/O overhead?
>
> This command-line lets you benchmark virtio-blk without actual I/O
> slowing down the request processing:
>
> qemu-system-x86_64 \
> -M accel=kvm \
> -cpu host \
> -m 4G \
> --blockdev file,node-name=drive0,filename=boot.img,cache.direct=on,aio=native \
> --blockdev null-co,node-name=drive1,size=$((10 * 1024 * 1024 * 1024)) \
> --object iothread,id=iothread0 \
> --device virtio-blk-pci,drive=drive0,iothread=iothread0 \
> --device virtio-blk-pci,drive=drive1,iothread=iothread0
>
> Here is a fio command-line for 4 KiB random reads:
>
> fio \
> --ioengine=libaio \
> --direct=1 \
> --runtime=30 \
> --ramp_time=10 \
> --rw=randread \
> --bs=4k \
> --iodepth=128 \
> --filename=/dev/vdb \
> --name=randread
>
> This is just a single vCPU, but it should be enough to see if there is
> any difference in I/O Operations Per Second (IOPS) or efficiency
> (IOPS/CPU utilization).
>
> Stefan

I'll reply on v3 to keep the conversation there.

Thanks,
Pierrick
On 31/1/26 18:48, Michael S. Tsirkin wrote:
> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>
> Performance impact?
>
> the reason we have these is for performance ...

If this is a worry, we can declare the virtio LD/ST helpers in
VirtIODevice and set them in virtio_reset(); the call chain will
be even simpler than doing that check on every call.
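A minimal sketch of that idea, assuming a hypothetical VirtIODevice::access_ops
field and illustrative type/function names (stw_le_p()/stl_le_p() and their
_be_ variants are the existing helpers from qemu/bswap.h); this is not part of
the posted patches:

/* Hypothetical ops table, selected once per device. */
typedef struct VirtIOAccessOps {
    void (*stw_p)(void *ptr, uint16_t v);
    void (*stl_p)(void *ptr, uint32_t v);
    /* ... remaining ld/st helpers ... */
} VirtIOAccessOps;

static const VirtIOAccessOps virtio_access_ops_le = {
    .stw_p = stw_le_p,
    .stl_p = stl_le_p,
};

static const VirtIOAccessOps virtio_access_ops_be = {
    .stw_p = stw_be_p,
    .stl_p = stl_be_p,
};

/* Called once, e.g. from virtio_reset(), instead of deciding the
 * endianness on every access. */
static void virtio_select_access_ops(VirtIODevice *vdev)
{
    vdev->access_ops = virtio_access_is_big_endian(vdev)
                       ? &virtio_access_ops_be
                       : &virtio_access_ops_le;
}

static inline void virtio_stw_p(VirtIODevice *vdev, void *ptr, uint16_t v)
{
    vdev->access_ops->stw_p(ptr, v);  /* one indirect call per access */
}

Michael's "indirect calls" remark below is about this kind of dispatch: the
per-access branch is replaced by a function-pointer call through the ops table.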
On Sun, Feb 01, 2026 at 11:36:04PM +0100, Philippe Mathieu-Daudé wrote:
> On 31/1/26 18:48, Michael S. Tsirkin wrote:
> > On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
> > > Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> >
> > Performance impact?
> >
> > the reason we have these is for performance ...
>
> If this is a worry, we can declare the virtio LD/ST helpers in
> VirtIODevice and set them in virtio_reset(); the call chain will
> be even simpler than doing that check on every call.

Would make them indirect calls though, no?

So I suspect the performance of this needs to be tested, anyway.

--
MST
On 2/2/26 08:40, Michael S. Tsirkin wrote:
> On Sun, Feb 01, 2026 at 11:36:04PM +0100, Philippe Mathieu-Daudé wrote:
>> On 31/1/26 18:48, Michael S. Tsirkin wrote:
>>> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>
>>> Performance impact?
>>>
>>> the reason we have these is for performance ...
>>
>> If this is a worry, we can declare the virtio LD/ST helpers in
>> VirtIODevice and set them in virtio_reset(); the call chain will
>> be even simpler than doing that check on every call.
>
> Would make them indirect calls though, no?

Currently I count 3 indirect calls; such an approach would have only 1.
Anyway I posted a v3 which should be even better.

> So I suspect the performance of this needs to be tested, anyway.
>
On 1/2/26 23:36, Philippe Mathieu-Daudé wrote:
> On 31/1/26 18:48, Michael S. Tsirkin wrote:
>> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>
>> Performance impact?
>>
>> the reason we have these is for performance ...
>
> If this is a worry, we can declare the virtio LD/ST helpers in
> VirtIODevice and set them in virtio_reset(); the call chain will
> be even simpler than doing that check on every call.

Or just have a VirtIODevice::access_is_big_endian boolean
field (re)initialized in virtio_reset(), and drop this
virtio_access_is_big_endian() method.
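A rough sketch of that cached-flag variant, assuming a new bool
access_is_big_endian field in VirtIODevice that is recomputed whenever the
device endianness can change (names are illustrative; the actual
implementation is the v2 series linked in the next message):

/* Hypothetical: (re)initialized from virtio_reset() and when the guest
 * acknowledges VIRTIO_F_VERSION_1, reusing the logic of today's
 * virtio_access_is_big_endian(). */
static void virtio_update_access_endianness(VirtIODevice *vdev)
{
    vdev->access_is_big_endian = virtio_access_is_big_endian(vdev);
}

/* Accessors then reduce to a plain field test, with no per-access
 * target query or feature-bit lookup. */
static inline void virtio_stw_p(VirtIODevice *vdev, void *ptr, uint16_t v)
{
    if (vdev->access_is_big_endian) {
        stw_be_p(ptr, v);
    } else {
        stw_le_p(ptr, v);
    }
}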
On 1/2/26 23:41, Philippe Mathieu-Daudé wrote:
> On 1/2/26 23:36, Philippe Mathieu-Daudé wrote:
>> On 31/1/26 18:48, Michael S. Tsirkin wrote:
>>> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>
>>> Performance impact?
>>>
>>> the reason we have these is for performance ...
>>
>> If this is a worry, we can declare the virtio LD/ST helpers in
>> VirtIODevice and set them in virtio_reset(); the call chain will
>> be even simpler than doing that check on every call.
>
> Or just have a VirtIODevice::access_is_big_endian boolean
> field (re)initialized in virtio_reset(), and drop this
> virtio_access_is_big_endian() method.

Implemented as v2:
https://lore.kernel.org/qemu-devel/20260201231959.88559-1-philmd@linaro.org/
On 2/1/26 3:20 PM, Philippe Mathieu-Daudé wrote:
> On 1/2/26 23:41, Philippe Mathieu-Daudé wrote:
>> On 1/2/26 23:36, Philippe Mathieu-Daudé wrote:
>>> On 31/1/26 18:48, Michael S. Tsirkin wrote:
>>>> On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
>>>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>>
>>>> Performance impact?
>>>>
>>>> the reason we have these is for performance ...
>>>
>>> If this is a worry, we can declare the virtio LD/ST helpers in
>>> VirtIODevice and set them in virtio_reset(); the call chain will
>>> be even simpler than doing that check on every call.
>>
>> Or just have a VirtIODevice::access_is_big_endian boolean
>> field (re)initialized in virtio_reset(), and drop this
>> virtio_access_is_big_endian() method.
>
> Implemented as v2:
> https://lore.kernel.org/qemu-devel/20260201231959.88559-1-philmd@linaro.org/

Let's go one step at a time. The refactoring you propose is worth
investigating, but the current approach can be done first and is
effective now.

Regards,
Pierrick