Hi folks,

We're using vsock extensively in Android as a channel over which we can
route binder transactions to/from virtual machines managed by the
Android Virtualisation Framework. However, we have been observing some
issues in production builds when using vsock in a low-memory environment
(on the host and the guest) such as:

 * The host receive path hanging forever, despite the guest performing
   a successful write to the socket.

 * Page allocation failures in the vhost receive path (this is a likely
   contributor to the above)

 * -ENOMEM coming back from sendmsg()

This series aims to improve the vsock SKB allocation for both the host
(vhost) and the guest when using the virtio transport to help mitigate
these issues. Specifically:

 - Avoid single allocations of order > PAGE_ALLOC_COSTLY_ORDER

 - Use non-linear SKBs for the transmit and vhost receive paths

 - Reduce the guest RX buffers to a single page

There are more details in the individual commit messages but overall
this results in less wasted memory and puts less pressure on the
allocator.

This is my first time looking at this stuff, so all feedback is welcome.

Patches based on v6.16-rc3.

Cheers,

Will

Cc: Keir Fraser <keirf@google.com>
Cc: Steven Moreland <smoreland@google.com>
Cc: Frederick Mayle <fmayle@google.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Stefano Garzarella <sgarzare@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: "Eugenio Pérez" <eperezma@redhat.com>
Cc: netdev@vger.kernel.org
Cc: virtualization@lists.linux.dev

--->8

Will Deacon (5):
  vhost/vsock: Avoid allocating arbitrarily-sized SKBs
  vsock/virtio: Resize receive buffers so that each SKB fits in a page
  vhost/vsock: Allocate nonlinear SKBs for handling large receive
    buffers
  vsock/virtio: Rename virtio_vsock_skb_rx_put() to
    virtio_vsock_skb_put()
  vhost/vsock: Allocate nonlinear SKBs for handling large transmit
    buffers

 drivers/vhost/vsock.c                   | 21 +++++++++------
 include/linux/virtio_vsock.h            | 36 +++++++++++++++++++------
 net/vmw_vsock/virtio_transport.c        |  2 +-
 net/vmw_vsock/virtio_transport_common.c |  9 +++++--
 4 files changed, 49 insertions(+), 19 deletions(-)

--
2.50.0.714.g196bf9f422-goog
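The three bullet points above amount to a single allocation strategy:
never ask the page allocator for a contiguous chunk larger than a page
on the hot path. The sketch below illustrates that shape in kernel C;
the helper name vsock_alloc_nonlinear_skb and its exact accounting are
hypothetical and are not taken from the patches themselves (the real
changes live in drivers/vhost/vsock.c and include/linux/virtio_vsock.h).

	#include <linux/skbuff.h>
	#include <linux/gfp.h>

	/*
	 * Illustrative sketch only: cap the SKB's linear area at one
	 * page and back any remaining payload with order-0 page frags,
	 * so that no single allocation exceeds PAGE_ALLOC_COSTLY_ORDER.
	 * Large buffers therefore end up as non-linear SKBs.
	 */
	static struct sk_buff *vsock_alloc_nonlinear_skb(size_t payload_len,
							 gfp_t gfp)
	{
		size_t linear = min_t(size_t, payload_len, PAGE_SIZE);
		struct sk_buff *skb;

		skb = alloc_skb(linear, gfp);
		if (!skb)
			return NULL;

		/* Commit the linear area; the caller copies data in later. */
		skb_put(skb, linear);
		payload_len -= linear;

		while (payload_len) {
			size_t fill = min_t(size_t, payload_len, PAGE_SIZE);
			struct page *page;

			if (skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)
				goto err_free;

			page = alloc_page(gfp);
			if (!page)
				goto err_free;

			/*
			 * Attach the page as a frag; this also updates
			 * skb->len, skb->data_len and skb->truesize.
			 */
			skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
					page, 0, fill, PAGE_SIZE);
			payload_len -= fill;
		}

		return skb;

	err_free:
		kfree_skb(skb);
		return NULL;
	}

A caller would then fill the linear area and the frags (e.g. with
copy_from_iter() on the transmit path). Once data_len is non-zero,
skb_is_nonlinear() returns true for such an SKB, which is what the
series title means by "nonlinear SKBs" for large buffers.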
On Wed, Jun 25, 2025 at 02:15:38PM +0100, Will Deacon wrote:
>Hi folks,
>
>We're using vsock extensively in Android as a channel over which we can
>route binder transactions to/from virtual machines managed by the
>Android Virtualisation Framework. However, we have been observing some
>issues in production builds when using vsock in a low-memory environment
>(on the host and the guest) such as:
>
> * The host receive path hanging forever, despite the guest performing
>   a successful write to the socket.
>
> * Page allocation failures in the vhost receive path (this is a likely
>   contributor to the above)
>
> * -ENOMEM coming back from sendmsg()
>
>This series aims to improve the vsock SKB allocation for both the host
>(vhost) and the guest when using the virtio transport to help mitigate
>these issues. Specifically:
>
> - Avoid single allocations of order > PAGE_ALLOC_COSTLY_ORDER
>
> - Use non-linear SKBs for the transmit and vhost receive paths
>
> - Reduce the guest RX buffers to a single page
>
>There are more details in the individual commit messages but overall
>this results in less wasted memory and puts less pressure on the
>allocator.
>
>This is my first time looking at this stuff, so all feedback is welcome.

Thank you very much for this series!

I left some minor comments, but overall LGTM!

Thanks,
Stefano

>
>Patches based on v6.16-rc3.
>
>Cheers,
>
>Will
>
>Cc: Keir Fraser <keirf@google.com>
>Cc: Steven Moreland <smoreland@google.com>
>Cc: Frederick Mayle <fmayle@google.com>
>Cc: Stefan Hajnoczi <stefanha@redhat.com>
>Cc: Stefano Garzarella <sgarzare@redhat.com>
>Cc: "Michael S. Tsirkin" <mst@redhat.com>
>Cc: Jason Wang <jasowang@redhat.com>
>Cc: "Eugenio Pérez" <eperezma@redhat.com>
>Cc: netdev@vger.kernel.org
>Cc: virtualization@lists.linux.dev
>
>--->8
>
>Will Deacon (5):
>  vhost/vsock: Avoid allocating arbitrarily-sized SKBs
>  vsock/virtio: Resize receive buffers so that each SKB fits in a page
>  vhost/vsock: Allocate nonlinear SKBs for handling large receive
>    buffers
>  vsock/virtio: Rename virtio_vsock_skb_rx_put() to
>    virtio_vsock_skb_put()
>  vhost/vsock: Allocate nonlinear SKBs for handling large transmit
>    buffers
>
> drivers/vhost/vsock.c                   | 21 +++++++++------
> include/linux/virtio_vsock.h            | 36 +++++++++++++++++++------
> net/vmw_vsock/virtio_transport.c        |  2 +-
> net/vmw_vsock/virtio_transport_common.c |  9 +++++--
> 4 files changed, 49 insertions(+), 19 deletions(-)
>
>--
>2.50.0.714.g196bf9f422-goog
On Fri, Jun 27, 2025 at 12:51:45PM +0200, Stefano Garzarella wrote:
> On Wed, Jun 25, 2025 at 02:15:38PM +0100, Will Deacon wrote:
> > We're using vsock extensively in Android as a channel over which we can
> > route binder transactions to/from virtual machines managed by the
> > Android Virtualisation Framework. However, we have been observing some
> > issues in production builds when using vsock in a low-memory environment
> > (on the host and the guest) such as:
> >
> > * The host receive path hanging forever, despite the guest performing
> >   a successful write to the socket.
> >
> > * Page allocation failures in the vhost receive path (this is a likely
> >   contributor to the above)
> >
> > * -ENOMEM coming back from sendmsg()
> >
> > This series aims to improve the vsock SKB allocation for both the host
> > (vhost) and the guest when using the virtio transport to help mitigate
> > these issues. Specifically:
> >
> > - Avoid single allocations of order > PAGE_ALLOC_COSTLY_ORDER
> >
> > - Use non-linear SKBs for the transmit and vhost receive paths
> >
> > - Reduce the guest RX buffers to a single page
> >
> > There are more details in the individual commit messages but overall
> > this results in less wasted memory and puts less pressure on the
> > allocator.
> >
> > This is my first time looking at this stuff, so all feedback is welcome.
>
> Thank you very much for this series!
>
> I left some minor comments, but overall LGTM!

Cheers for going through it! I'll work through your comments now...

Will