From: Wei Wang
To: virtio-dev@lists.oasis-open.org, linux-kernel@vger.kernel.org,
    qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, linux-mm@kvack.org, mst@redhat.com,
    david@redhat.com, dave.hansen@intel.com, cornelia.huck@de.ibm.com,
    akpm@linux-foundation.org, mgorman@techsingularity.net,
    aarcange@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com,
    wei.w.wang@intel.com, liliang.opensource@gmail.com
Date: Fri, 9 Jun 2017 18:41:36 +0800
Message-Id: <1497004901-30593-2-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1497004901-30593-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v11 1/6] virtio-balloon: deflate via a page list

From: Liang Li

This patch saves the deflated pages to a list, instead of the PFN array.
Accordingly, the balloon_pfn_to_page() function is removed.

Signed-off-by: Liang Li
Signed-off-by: Michael S. Tsirkin
Signed-off-by: Wei Wang
---
 drivers/virtio/virtio_balloon.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 34adf9b..4a9f307 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -104,12 +104,6 @@ static u32 page_to_balloon_pfn(struct page *page)
     return pfn * VIRTIO_BALLOON_PAGES_PER_PAGE;
 }
 
-static struct page *balloon_pfn_to_page(u32 pfn)
-{
-    BUG_ON(pfn % VIRTIO_BALLOON_PAGES_PER_PAGE);
-    return pfn_to_page(pfn / VIRTIO_BALLOON_PAGES_PER_PAGE);
-}
-
 static void balloon_ack(struct virtqueue *vq)
 {
     struct virtio_balloon *vb = vq->vdev->priv;
@@ -182,18 +176,16 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
     return num_allocated_pages;
 }
 
-static void release_pages_balloon(struct virtio_balloon *vb)
+static void release_pages_balloon(struct virtio_balloon *vb,
+                                  struct list_head *pages)
 {
-    unsigned int i;
-    struct page *page;
+    struct page *page, *next;
 
-    /* Find pfns pointing at start of each page, get pages and free them. */
-    for (i = 0; i < vb->num_pfns; i += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-        page = balloon_pfn_to_page(virtio32_to_cpu(vb->vdev,
-                                                   vb->pfns[i]));
+    list_for_each_entry_safe(page, next, pages, lru) {
         if (!virtio_has_feature(vb->vdev,
                                 VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
             adjust_managed_page_count(page, 1);
+        list_del(&page->lru);
         put_page(page); /* balloon reference */
     }
 }
@@ -203,6 +195,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
     unsigned num_freed_pages;
     struct page *page;
     struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
+    LIST_HEAD(pages);
 
     /* We can only do one array worth at a time. */
     num = min(num, ARRAY_SIZE(vb->pfns));
@@ -216,6 +209,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
         if (!page)
             break;
         set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+        list_add(&page->lru, &pages);
         vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
     }
 
@@ -227,7 +221,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
      */
     if (vb->num_pfns != 0)
         tell_host(vb, vb->deflate_vq);
-    release_pages_balloon(vb);
+    release_pages_balloon(vb, &pages);
     mutex_unlock(&vb->balloon_lock);
     return num_freed_pages;
}
-- 
2.7.4
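
[Editor's sketch, not part of the patch series.] The core of this patch is the
switch from PFN-array bookkeeping to walking a page list with
list_for_each_entry_safe(), which caches the next element so the current one
can be unlinked mid-walk. The stand-alone user-space program below re-creates
that idiom with a toy intrusive list instead of the kernel's <linux/list.h>;
all names here are invented for illustration.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-ins for the kernel's struct list_head and list helpers. */
    struct list_head { struct list_head *prev, *next; };

    #define LIST_HEAD_INIT(name) { &(name), &(name) }
    #define list_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static void list_add(struct list_head *new, struct list_head *head)
    {
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
    }

    static void list_del(struct list_head *entry)
    {
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
    }

    struct page { unsigned long pfn; struct list_head lru; };

    int main(void)
    {
        struct list_head pages = LIST_HEAD_INIT(pages);
        struct list_head *pos, *next;
        unsigned long pfn;

        for (pfn = 0; pfn < 4; pfn++) {
            struct page *p = malloc(sizeof(*p));

            p->pfn = pfn;
            list_add(&p->lru, &pages);
        }

        /*
         * The "safe" idiom: remember pos->next before list_del(pos), just
         * as release_pages_balloon() does via list_for_each_entry_safe().
         */
        for (pos = pages.next, next = pos->next; pos != &pages;
             pos = next, next = pos->next) {
            struct page *p = list_entry(pos, struct page, lru);

            list_del(&p->lru);
            printf("released pfn %lu\n", p->pfn);
            free(p);
        }
        return 0;
    }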
From: Wei Wang
Date: Fri, 9 Jun 2017 18:41:37 +0800
Message-Id: <1497004901-30593-3-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1497004901-30593-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v11 2/6] virtio-balloon: coding format cleanup

Clean up the comment format.

Signed-off-by: Wei Wang
---
 drivers/virtio/virtio_balloon.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 4a9f307..ecb64e9 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -132,8 +132,10 @@ static void set_page_pfns(struct virtio_balloon *vb,
 {
     unsigned int i;
 
-    /* Set balloon pfns pointing at this page.
-     * Note that the first pfn points at start of the page. */
+    /*
+     * Set balloon pfns pointing at this page.
+     * Note that the first pfn points at start of the page.
+     */
     for (i = 0; i < VIRTIO_BALLOON_PAGES_PER_PAGE; i++)
         pfns[i] = cpu_to_virtio32(vb->vdev,
                                   page_to_balloon_pfn(page) + i);
-- 
2.7.4

From: Wei Wang
Date: Fri, 9 Jun 2017 18:41:38 +0800
Message-Id: <1497004901-30593-4-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1497004901-30593-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v11 3/6] virtio-balloon: VIRTIO_BALLOON_F_PAGE_CHUNKS

Add a new feature, VIRTIO_BALLOON_F_PAGE_CHUNKS, which enables the
transfer of the ballooned (i.e. inflated/deflated) pages in chunks to
the host.

The previous virtio-balloon implementation is not very efficient,
because the ballooned pages are transferred to the host one by one.
Here is the breakdown of the time in percentage spent on each step of
the balloon inflating process (inflating 7GB of an 8GB idle guest):

1) allocating pages (6.5%)
2) sending PFNs to host (68.3%)
3) address translation (6.1%)
4) madvise (19%)

It takes about 4126ms for the inflating process to complete. The above
profiling shows that the bottlenecks are steps 2) and 4).

This patch optimizes step 2) by transferring pages to the host in
chunks. A chunk consists of guest physically contiguous pages. When the
pages are packed into a chunk, they are converted into balloon page
size (4KB) pages. A chunk is offered to the host via a base address
(i.e. the start guest physical address of those physically contiguous
pages) and a size (i.e. the total number of the 4KB balloon size
pages). A chunk is described via a vring_desc struct in the
implementation.

By doing so, step 4) can also be optimized by doing address translation
and madvise() in chunks rather than page by page.

With this new feature, the above ballooning process takes ~590ms,
an improvement of ~85%.

TODO: optimize step 1) by allocating/freeing a chunk of pages instead
of a single page each time.

Signed-off-by: Wei Wang
Signed-off-by: Liang Li
Suggested-by: Michael S. Tsirkin
---
 drivers/virtio/virtio_balloon.c     | 418 +++++++++++++++++++++++++++++++++--
 drivers/virtio/virtio_ring.c        | 120 ++++++++++-
 include/linux/virtio.h              |   7 +
 include/uapi/linux/virtio_balloon.h |   1 +
 include/uapi/linux/virtio_ring.h    |   3 +
 5 files changed, 517 insertions(+), 32 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index ecb64e9..0cf945c 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -51,6 +51,36 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+/* The size of one page_bmap used to record inflated/deflated pages. */
+#define VIRTIO_BALLOON_PAGE_BMAP_SIZE	(8 * PAGE_SIZE)
+/*
+ * Calculates how many pfns a page_bmap can record. A bit corresponds to a
+ * page of PAGE_SIZE.
+ */
+#define VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP \
+	(VIRTIO_BALLOON_PAGE_BMAP_SIZE * BITS_PER_BYTE)
+
+/* The number of page_bmap to allocate by default. */
+#define VIRTIO_BALLOON_PAGE_BMAP_DEFAULT_NUM	1
+/* The maximum number of page_bmap that can be allocated. */
+#define VIRTIO_BALLOON_PAGE_BMAP_MAX_NUM	32
+
+/*
+ * The QEMU virtio implementation requires the desc table size to be less
+ * than VIRTQUEUE_MAX_SIZE, so minus 1 here.
+ */
+#define VIRTIO_BALLOON_MAX_PAGE_CHUNKS	(VIRTQUEUE_MAX_SIZE - 1)
+
+/* The struct to manage ballooned pages in chunks */
+struct virtio_balloon_page_chunk {
+	/* Indirect desc table to hold chunks of balloon pages */
+	struct vring_desc *desc_table;
+	/* Number of added chunks of balloon pages */
+	unsigned int chunk_num;
+	/* Bitmap used to record ballooned pages. */
+	unsigned long *page_bmap[VIRTIO_BALLOON_PAGE_BMAP_MAX_NUM];
+};
+
 struct virtio_balloon {
 	struct virtio_device *vdev;
 	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
@@ -79,6 +109,8 @@ struct virtio_balloon {
 	/* Synchronize access/update to this struct virtio_balloon elements */
 	struct mutex balloon_lock;
 
+	struct virtio_balloon_page_chunk balloon_page_chunk;
+
 	/* The array of pfns we tell the Host about. */
 	unsigned int num_pfns;
 	__virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
@@ -111,6 +143,133 @@ static void balloon_ack(struct virtqueue *vq)
 	wake_up(&vb->acked);
 }
 
+/* Update pfn_max and pfn_min according to the pfn of page */
+static inline void update_pfn_range(struct virtio_balloon *vb,
+				    struct page *page,
+				    unsigned long *pfn_min,
+				    unsigned long *pfn_max)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	*pfn_min = min(pfn, *pfn_min);
+	*pfn_max = max(pfn, *pfn_max);
+}
+
+static unsigned int extend_page_bmap_size(struct virtio_balloon *vb,
+					  unsigned long pfn_num)
+{
+	unsigned int i, bmap_num, allocated_bmap_num;
+	unsigned long bmap_len;
+
+	allocated_bmap_num = VIRTIO_BALLOON_PAGE_BMAP_DEFAULT_NUM;
+	bmap_len = ALIGN(pfn_num, BITS_PER_LONG) / BITS_PER_BYTE;
+	bmap_len = roundup(bmap_len, VIRTIO_BALLOON_PAGE_BMAP_SIZE);
+	/*
+	 * VIRTIO_BALLOON_PAGE_BMAP_SIZE is the size of one page_bmap, so
+	 * divide by it to calculate how many page_bmap we need.
+	 */
+	bmap_num = (unsigned int)(bmap_len / VIRTIO_BALLOON_PAGE_BMAP_SIZE);
+	/* The number of page_bmap to allocate should not exceed the max */
+	bmap_num = min_t(unsigned int, VIRTIO_BALLOON_PAGE_BMAP_MAX_NUM,
+			 bmap_num);
+
+	for (i = VIRTIO_BALLOON_PAGE_BMAP_DEFAULT_NUM; i < bmap_num; i++) {
+		vb->balloon_page_chunk.page_bmap[i] =
+			kmalloc(VIRTIO_BALLOON_PAGE_BMAP_SIZE, GFP_KERNEL);
+		if (vb->balloon_page_chunk.page_bmap[i])
+			allocated_bmap_num++;
+		else
+			break;
+	}
+
+	return allocated_bmap_num;
+}
+
+static void free_extended_page_bmap(struct virtio_balloon *vb,
+				    unsigned int page_bmap_num)
+{
+	unsigned int i;
+
+	for (i = VIRTIO_BALLOON_PAGE_BMAP_DEFAULT_NUM; i < page_bmap_num;
+	     i++) {
+		kfree(vb->balloon_page_chunk.page_bmap[i]);
+		vb->balloon_page_chunk.page_bmap[i] = NULL;
+	}
+}
+
+static void clear_page_bmap(struct virtio_balloon *vb,
+			    unsigned int page_bmap_num)
+{
+	int i;
+
+	for (i = 0; i < page_bmap_num; i++)
+		memset(vb->balloon_page_chunk.page_bmap[i], 0,
+		       VIRTIO_BALLOON_PAGE_BMAP_SIZE);
+}
+
+static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *vq)
+{
+	unsigned int len, num;
+	struct vring_desc *desc = vb->balloon_page_chunk.desc_table;
+
+	num = vb->balloon_page_chunk.chunk_num;
+	if (!virtqueue_indirect_desc_table_add(vq, desc, num)) {
+		virtqueue_kick(vq);
+		wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+		vb->balloon_page_chunk.chunk_num = 0;
+	}
+}
+
+/* Add a chunk to the buffer. */
+static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq,
+			  u64 base_addr, u32 size)
+{
+	unsigned int *num = &vb->balloon_page_chunk.chunk_num;
+	struct vring_desc *desc = &vb->balloon_page_chunk.desc_table[*num];
+
+	desc->addr = cpu_to_virtio64(vb->vdev, base_addr);
+	desc->len = cpu_to_virtio32(vb->vdev, size);
+	*num += 1;
+	if (*num == VIRTIO_BALLOON_MAX_PAGE_CHUNKS)
+		send_page_chunks(vb, vq);
+}
+
+static void convert_bmap_to_chunks(struct virtio_balloon *vb,
+				   struct virtqueue *vq,
+				   unsigned long *bmap,
+				   unsigned long pfn_start,
+				   unsigned long size)
+{
+	unsigned long next_one, next_zero, pos = 0;
+	u64 chunk_base_addr;
+	u32 chunk_size;
+
+	while (pos < size) {
+		next_one = find_next_bit(bmap, size, pos);
+		/*
+		 * No "1" bit found, which means that there is no pfn
+		 * recorded in the rest of this bmap.
+		 */
+		if (next_one == size)
+			break;
+		next_zero = find_next_zero_bit(bmap, size, next_one + 1);
+		/*
+		 * A bit in page_bmap corresponds to a page of PAGE_SIZE.
+		 * Convert it to be pages of 4KB balloon page size when
+		 * adding it to a chunk.
+		 */
+		chunk_size = (next_zero - next_one) *
+			     VIRTIO_BALLOON_PAGES_PER_PAGE;
+		chunk_base_addr = (pfn_start + next_one) <<
+				  VIRTIO_BALLOON_PFN_SHIFT;
+		if (chunk_size) {
+			add_one_chunk(vb, vq, chunk_base_addr, chunk_size);
+			pos = next_zero + 1;
+		}
+	}
+}
+
 static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq)
 {
 	struct scatterlist sg;
@@ -124,7 +283,35 @@ static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq)
 
 	/* When host has read buffer, this completes via balloon_ack */
 	wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+}
+
+static void tell_host_from_page_bmap(struct virtio_balloon *vb,
+				     struct virtqueue *vq,
+				     unsigned long pfn_start,
+				     unsigned long pfn_end,
+				     unsigned int page_bmap_num)
+{
+	unsigned long i, pfn_num;
 
+	for (i = 0; i < page_bmap_num; i++) {
+		/*
+		 * For the last page_bmap, only the remaining number of pfns
+		 * need to be searched rather than the entire page_bmap.
+		 */
+		if (i + 1 == page_bmap_num)
+			pfn_num = (pfn_end - pfn_start) %
+				  VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP;
+		else
+			pfn_num = VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP;
+
+		convert_bmap_to_chunks(vb, vq,
+				       vb->balloon_page_chunk.page_bmap[i],
+				       pfn_start +
+				       i * VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP,
+				       pfn_num);
+	}
+	if (vb->balloon_page_chunk.chunk_num > 0)
+		send_page_chunks(vb, vq);
 }
 
 static void set_page_pfns(struct virtio_balloon *vb,
@@ -141,13 +328,89 @@ static void set_page_pfns(struct virtio_balloon *vb,
 			  page_to_balloon_pfn(page) + i);
 }
 
+/*
+ * Send ballooned pages in chunks to host.
+ * The ballooned pages are recorded in page bitmaps. Each bit in a bitmap
+ * corresponds to a page of PAGE_SIZE. The page bitmaps are searched for
+ * contiguous "1" bits, which correspond to contiguous pages, to chunk.
+ * When packing those contiguous pages into chunks, pages are converted into
+ * 4KB balloon pages.
+ *
+ * pfn_max and pfn_min form the range of pfns that need to use page bitmaps
+ * to record. If the range is too large to be recorded into the allocated
+ * page bitmaps, the page bitmaps are used multiple times to record the
+ * entire range of pfns.
+ */
+static void tell_host_page_chunks(struct virtio_balloon *vb,
+				  struct list_head *pages,
+				  struct virtqueue *vq,
+				  unsigned long pfn_max,
+				  unsigned long pfn_min)
+{
+	/*
+	 * The pfn_start and pfn_end form the range of pfns that the allocated
+	 * page_bmap can record in each round.
+	 */
+	unsigned long pfn_start, pfn_end;
+	/* Total number of allocated page_bmap */
+	unsigned int page_bmap_num;
+	struct page *page;
+	bool found;
+
+	/*
+	 * In the case that one page_bmap is not sufficient to record the pfn
+	 * range, page_bmap will be extended by allocating more numbers of
+	 * page_bmap.
+	 */
+	page_bmap_num = extend_page_bmap_size(vb, pfn_max - pfn_min + 1);
+
+	/* Start from the beginning of the whole pfn range */
+	pfn_start = pfn_min;
+	while (pfn_start < pfn_max) {
+		pfn_end = pfn_start +
+			  VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP * page_bmap_num;
+		pfn_end = pfn_end < pfn_max ? pfn_end : pfn_max;
+		clear_page_bmap(vb, page_bmap_num);
+		found = false;
+
+		list_for_each_entry(page, pages, lru) {
+			unsigned long bmap_idx, bmap_pos, this_pfn;
+
+			this_pfn = page_to_pfn(page);
+			if (this_pfn < pfn_start || this_pfn > pfn_end)
+				continue;
+			bmap_idx = (this_pfn - pfn_start) /
+				   VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP;
+			bmap_pos = (this_pfn - pfn_start) %
+				   VIRTIO_BALLOON_PFNS_PER_PAGE_BMAP;
+			set_bit(bmap_pos,
+				vb->balloon_page_chunk.page_bmap[bmap_idx]);
+
+			found = true;
+		}
+		if (found)
+			tell_host_from_page_bmap(vb, vq, pfn_start, pfn_end,
+						 page_bmap_num);
+		/*
+		 * Start the next round when pfn_start and pfn_end couldn't
+		 * cover the whole pfn range given by pfn_max and pfn_min.
+		 */
+		pfn_start = pfn_end;
+	}
+	free_extended_page_bmap(vb, page_bmap_num);
+}
+
 static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 {
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
 	unsigned num_allocated_pages;
+	bool chunking = virtio_has_feature(vb->vdev,
+					   VIRTIO_BALLOON_F_PAGE_CHUNKS);
+	unsigned long pfn_max = 0, pfn_min = ULONG_MAX;
 
 	/* We can only do one array worth at a time. */
-	num = min(num, ARRAY_SIZE(vb->pfns));
+	if (!chunking)
+		num = min(num, ARRAY_SIZE(vb->pfns));
 
 	mutex_lock(&vb->balloon_lock);
 	for (vb->num_pfns = 0; vb->num_pfns < num;
@@ -162,7 +425,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 			msleep(200);
 			break;
 		}
-		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		if (chunking)
+			update_pfn_range(vb, page, &pfn_min, &pfn_max);
+		else
+			set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
 		vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
 		if (!virtio_has_feature(vb->vdev,
 					VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
@@ -171,8 +437,14 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 
 	num_allocated_pages = vb->num_pfns;
 	/* Did we get any? */
-	if (vb->num_pfns != 0)
-		tell_host(vb, vb->inflate_vq);
+	if (vb->num_pfns != 0) {
+		if (chunking)
+			tell_host_page_chunks(vb, &vb_dev_info->pages,
+					      vb->inflate_vq,
+					      pfn_max, pfn_min);
+		else
+			tell_host(vb, vb->inflate_vq);
+	}
 	mutex_unlock(&vb->balloon_lock);
 
 	return num_allocated_pages;
@@ -198,9 +470,13 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	struct page *page;
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
 	LIST_HEAD(pages);
+	bool chunking = virtio_has_feature(vb->vdev,
+					   VIRTIO_BALLOON_F_PAGE_CHUNKS);
+	unsigned long pfn_max = 0, pfn_min = ULONG_MAX;
 
-	/* We can only do one array worth at a time. */
-	num = min(num, ARRAY_SIZE(vb->pfns));
+	/* Traditionally, we can only do one array worth at a time. */
+	if (!chunking)
+		num = min(num, ARRAY_SIZE(vb->pfns));
 
 	mutex_lock(&vb->balloon_lock);
 	/* We can't release more pages than taken */
@@ -210,7 +486,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 		page = balloon_page_dequeue(vb_dev_info);
 		if (!page)
 			break;
-		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		if (chunking)
+			update_pfn_range(vb, page, &pfn_min, &pfn_max);
+		else
+			set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
 		list_add(&page->lru, &pages);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
@@ -221,8 +500,13 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	 * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
 	 * is true, we *have* to do it in this order
 	 */
-	if (vb->num_pfns != 0)
-		tell_host(vb, vb->deflate_vq);
+	if (vb->num_pfns != 0) {
+		if (chunking)
+			tell_host_page_chunks(vb, &pages, vb->deflate_vq,
+					      pfn_max, pfn_min);
+		else
+			tell_host(vb, vb->deflate_vq);
+	}
 	release_pages_balloon(vb, &pages);
 	mutex_unlock(&vb->balloon_lock);
 	return num_freed_pages;
@@ -442,6 +726,14 @@ static int init_vqs(struct virtio_balloon *vb)
 }
 
 #ifdef CONFIG_BALLOON_COMPACTION
+
+static void tell_host_one_page(struct virtio_balloon *vb,
+			       struct virtqueue *vq, struct page *page)
+{
+	add_one_chunk(vb, vq, page_to_pfn(page) << VIRTIO_BALLOON_PFN_SHIFT,
+		      VIRTIO_BALLOON_PAGES_PER_PAGE);
+}
+
 /*
  * virtballoon_migratepage - perform the balloon page migration on behalf of
  *			     a compaction thread. (called under page lock)
@@ -465,6 +757,8 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 {
 	struct virtio_balloon *vb = container_of(vb_dev_info,
 			struct virtio_balloon, vb_dev_info);
+	bool chunking = virtio_has_feature(vb->vdev,
+					   VIRTIO_BALLOON_F_PAGE_CHUNKS);
 	unsigned long flags;
 
 	/*
@@ -486,16 +780,22 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 	vb_dev_info->isolated_pages--;
 	__count_vm_event(BALLOON_MIGRATE);
 	spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
-	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
-	set_page_pfns(vb, vb->pfns, newpage);
-	tell_host(vb, vb->inflate_vq);
-
+	if (chunking) {
+		tell_host_one_page(vb, vb->inflate_vq, newpage);
+	} else {
+		vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+		set_page_pfns(vb, vb->pfns, newpage);
+		tell_host(vb, vb->inflate_vq);
+	}
 	/* balloon's page migration 2nd step -- deflate "page" */
 	balloon_page_delete(page);
-	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
-	set_page_pfns(vb, vb->pfns, page);
-	tell_host(vb, vb->deflate_vq);
-
+	if (chunking) {
+		tell_host_one_page(vb, vb->deflate_vq, page);
+	} else {
+		vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+		set_page_pfns(vb, vb->pfns, page);
+		tell_host(vb, vb->deflate_vq);
+	}
 	mutex_unlock(&vb->balloon_lock);
 
 	put_page(page); /* balloon reference */
@@ -522,9 +822,78 @@ static struct file_system_type balloon_fs = {
 
 #endif /* CONFIG_BALLOON_COMPACTION */
 
+static void free_page_bmap(struct virtio_balloon *vb)
+{
+	int i;
+
+	for (i = 0; i < VIRTIO_BALLOON_PAGE_BMAP_DEFAULT_NUM; i++) {
+		kfree(vb->balloon_page_chunk.page_bmap[i]);
+		vb->balloon_page_chunk.page_bmap[i] = NULL;
+	}
+}
+
+static int balloon_page_chunk_init(struct virtio_balloon *vb)
+{
+	int i;
+
+	vb->balloon_page_chunk.desc_table = alloc_indirect(vb->vdev,
+					VIRTIO_BALLOON_MAX_PAGE_CHUNKS,
+					GFP_KERNEL);
+	if (!vb->balloon_page_chunk.desc_table)
+		goto err_page_chunk;
+	vb->balloon_page_chunk.chunk_num = 0;
+
+	/*
+	 * The default number of page_bmaps are allocated. More may be
+	 * allocated on demand.
+	 */
+	for (i = 0; i < VIRTIO_BALLOON_PAGE_BMAP_DEFAULT_NUM; i++) {
+		vb->balloon_page_chunk.page_bmap[i] =
+			kmalloc(VIRTIO_BALLOON_PAGE_BMAP_SIZE, GFP_KERNEL);
+		if (!vb->balloon_page_chunk.page_bmap[i])
+			goto err_page_bmap;
+	}
+
+	return 0;
+err_page_bmap:
+	free_page_bmap(vb);
+	kfree(vb->balloon_page_chunk.desc_table);
+	vb->balloon_page_chunk.desc_table = NULL;
+err_page_chunk:
+	__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_PAGE_CHUNKS);
+	dev_warn(&vb->vdev->dev, "%s: failed\n", __func__);
+	return -ENOMEM;
+}
+
+static int virtballoon_validate(struct virtio_device *vdev)
+{
+	struct virtio_balloon *vb = NULL;
+	int err;
+
+	vdev->priv = vb = kmalloc(sizeof(*vb), GFP_KERNEL);
+	if (!vb) {
+		err = -ENOMEM;
+		goto err_vb;
+	}
+	vb->vdev = vdev;
+
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_CHUNKS)) {
+		err = balloon_page_chunk_init(vb);
+		if (err < 0)
+			goto err_page_chunk;
+	}
+
+	return 0;
+
+err_page_chunk:
+	kfree(vb);
+err_vb:
+	return err;
+}
+
 static int virtballoon_probe(struct virtio_device *vdev)
 {
-	struct virtio_balloon *vb;
+	struct virtio_balloon *vb = vdev->priv;
 	int err;
 
 	if (!vdev->config->get) {
@@ -533,20 +902,14 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		return -EINVAL;
 	}
 
-	vdev->priv = vb = kmalloc(sizeof(*vb), GFP_KERNEL);
-	if (!vb) {
-		err = -ENOMEM;
-		goto out;
-	}
-
 	INIT_WORK(&vb->update_balloon_stats_work, update_balloon_stats_func);
 	INIT_WORK(&vb->update_balloon_size_work, update_balloon_size_func);
 	spin_lock_init(&vb->stop_update_lock);
 	vb->stop_update = false;
 	vb->num_pages = 0;
+	mutex_init(&vb->balloon_lock);
 	init_waitqueue_head(&vb->acked);
-	vb->vdev = vdev;
 
 	balloon_devinfo_init(&vb->vb_dev_info);
 
@@ -590,7 +953,6 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	vdev->config->del_vqs(vdev);
 out_free_vb:
 	kfree(vb);
-out:
 	return err;
 }
 
@@ -620,6 +982,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
 	cancel_work_sync(&vb->update_balloon_stats_work);
 
 	remove_common(vb);
+	free_page_bmap(vb);
+	kfree(vb->balloon_page_chunk.desc_table);
 #ifdef CONFIG_BALLOON_COMPACTION
 	if (vb->vb_dev_info.inode)
 		iput(vb->vb_dev_info.inode);
@@ -664,6 +1028,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_PAGE_CHUNKS,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
@@ -674,6 +1039,7 @@ static struct virtio_driver virtio_balloon_driver = {
 	.id_table =	id_table,
 	.probe =	virtballoon_probe,
 	.remove =	virtballoon_remove,
+	.validate =	virtballoon_validate,
 	.config_changed = virtballoon_changed,
 #ifdef CONFIG_PM_SLEEP
 	.freeze =	virtballoon_freeze,
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 409aeaa..0ea2512 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -235,8 +235,17 @@ static int vring_mapping_error(const struct vring_virtqueue *vq,
 	return dma_mapping_error(vring_dma_dev(vq), addr);
 }
 
-static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
-					 unsigned int total_sg, gfp_t gfp)
+/**
+ * alloc_indirect - allocate an indirect desc table
+ * @vdev: the virtio_device that owns the indirect desc table.
+ * @num: the number of entries that the table will have.
+ * @gfp: how to do memory allocations (if necessary).
+ *
+ * Return NULL if the table allocation failed. Otherwise, return the address
+ * of the table.
+ */
+struct vring_desc *alloc_indirect(struct virtio_device *vdev, unsigned int num,
+				  gfp_t gfp)
 {
 	struct vring_desc *desc;
 	unsigned int i;
@@ -248,14 +257,15 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
 	 */
 	gfp &= ~__GFP_HIGHMEM;
 
-	desc = kmalloc(total_sg * sizeof(struct vring_desc), gfp);
+	desc = kmalloc_array(num, sizeof(struct vring_desc), gfp);
 	if (!desc)
 		return NULL;
 
-	for (i = 0; i < total_sg; i++)
-		desc[i].next = cpu_to_virtio16(_vq->vdev, i + 1);
+	for (i = 0; i < num; i++)
+		desc[i].next = cpu_to_virtio16(vdev, i + 1);
 	return desc;
 }
+EXPORT_SYMBOL_GPL(alloc_indirect);
 
 static inline int virtqueue_add(struct virtqueue *_vq,
 				struct scatterlist *sgs[],
@@ -302,7 +312,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	/* If the host supports indirect descriptor tables, and we have multiple
 	 * buffers, then go indirect. FIXME: tune this threshold */
 	if (vq->indirect && total_sg > 1 && vq->vq.num_free)
-		desc = alloc_indirect(_vq, total_sg, gfp);
+		desc = alloc_indirect(_vq->vdev, total_sg, gfp);
 	else
 		desc = NULL;
 
@@ -433,6 +443,104 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 }
 
 /**
+ * virtqueue_indirect_desc_table_add - add an indirect desc table to the vq
+ * @_vq: the struct virtqueue we're talking about.
+ * @desc: the desc table we're talking about.
+ * @num: the number of entries that the desc table has.
+ *
+ * Returns zero or a negative error (ie. ENOSPC, EIO).
+ */
+int virtqueue_indirect_desc_table_add(struct virtqueue *_vq,
+				      struct vring_desc *desc,
+				      unsigned int num)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	dma_addr_t desc_addr;
+	unsigned int i, avail;
+	int head;
+
+	/* Sanity check */
+	if (!desc) {
+		pr_debug("%s: empty desc table\n", __func__);
+		return -EINVAL;
+	}
+
+	START_USE(vq);
+
+	if (unlikely(vq->broken)) {
+		END_USE(vq);
+		return -EIO;
+	}
+
+	if (!vq->vq.num_free) {
+		pr_debug("%s: the virtqueue is full\n", __func__);
+		END_USE(vq);
+		return -ENOSPC;
+	}
+
+	/* Map and fill in the indirect table */
+	desc_addr = vring_map_single(vq, desc, num * sizeof(struct vring_desc),
+				     DMA_TO_DEVICE);
+	if (vring_mapping_error(vq, desc_addr)) {
+		pr_debug("%s: map desc failed\n", __func__);
+		END_USE(vq);
+		return -EIO;
+	}
+
+	/* Mark the flag of the table entries */
+	for (i = 0; i < num; i++)
+		desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
+	/* The last one doesn't continue. */
+	desc[num - 1].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
+
+	/* Get a ring entry to point to the indirect table */
+	head = vq->free_head;
+	vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev,
+						     VRING_DESC_F_INDIRECT);
+	vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, desc_addr);
+	vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, num *
+						   sizeof(struct vring_desc));
+	/* We're using 1 buffer from the free list. */
+	vq->vq.num_free--;
+	/* Update free pointer */
+	vq->free_head = virtio16_to_cpu(_vq->vdev, vq->vring.desc[head].next);
+
+	/* Store token and indirect buffer state. */
+	vq->desc_state[head].data = desc;
+	/* Don't free the caller allocated indirect table when detach_buf. */
+	vq->desc_state[head].indir_desc = NULL;
+
+	/*
+	 * Put entry in available array (but don't update avail->idx until they
+	 * do sync).
+	 */
+	avail = vq->avail_idx_shadow & (vq->vring.num - 1);
+	vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
+
+	/*
+	 * Descriptors and available array need to be set before we expose the
+	 * new available array entries.
+	 */
+	virtio_wmb(vq->weak_barriers);
+	vq->avail_idx_shadow++;
+	vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
+	vq->num_added++;
+
+	pr_debug("%s: added buffer head %i to %p\n", __func__, head, vq);
+	END_USE(vq);
+
+	/*
+	 * This is very unlikely, but theoretically possible. Kick
+	 * just in case.
+	 */
+	if (unlikely(vq->num_added == (1 << 16) - 1))
+		virtqueue_kick(_vq);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(virtqueue_indirect_desc_table_add);
+
+/**
  * virtqueue_add_sgs - expose buffers to other end
  * @vq: the struct virtqueue we're talking about.
  * @sgs: array of terminated scatterlists.
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 7edfbdb..01dad22 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -34,6 +34,13 @@ struct virtqueue {
 	void *priv;
 };
 
+struct vring_desc *alloc_indirect(struct virtio_device *vdev,
+				  unsigned int num, gfp_t gfp);
+
+int virtqueue_indirect_desc_table_add(struct virtqueue *_vq,
+				      struct vring_desc *desc,
+				      unsigned int num);
+
 int virtqueue_add_outbuf(struct virtqueue *vq,
 			 struct scatterlist sg[], unsigned int num,
 			 void *data,
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..5ed3c7b 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_PAGE_CHUNKS	3 /* Inflate/Deflate pages in chunks */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
index c072959..0499fb8 100644
--- a/include/uapi/linux/virtio_ring.h
+++ b/include/uapi/linux/virtio_ring.h
@@ -111,6 +111,9 @@ struct vring {
 #define VRING_USED_ALIGN_SIZE 4
 #define VRING_DESC_ALIGN_SIZE 16
 
+/* The supported max queue size */
+#define VIRTQUEUE_MAX_SIZE 1024
+
 /* The standard layout for the ring is a continuous chunk of memory which looks
  * like this.  We assume num is a power of 2.
  *
-- 
2.7.4
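
[Editor's sketch, not part of the patch series.] As a sanity check on the
chunk encoding described in the commit message above, the user-space program
below packs a sorted PFN list into (base address, page count) chunks the same
way convert_bmap_to_chunks() derives them from runs of set bits. It assumes
the guest page size equals the 4KB balloon page size (so
VIRTIO_BALLOON_PAGES_PER_PAGE would be 1); the struct and names are
illustrative, not the actual vring_desc layout.

    #include <stdint.h>
    #include <stdio.h>

    #define PFN_SHIFT 12    /* 4KB pages, as in VIRTIO_BALLOON_PFN_SHIFT */

    /* Illustrative chunk: base guest-physical address + 4KB page count. */
    struct chunk { uint64_t base_addr; uint32_t num_4k_pages; };

    /* Collapse runs of contiguous pfns into chunks, as the bitmap scan does. */
    static unsigned int pack_chunks(const uint64_t *pfns, unsigned int n,
                                    struct chunk *out)
    {
        unsigned int i = 0, chunks = 0;

        while (i < n) {
            unsigned int run = 1;

            while (i + run < n && pfns[i + run] == pfns[i] + run)
                run++;
            out[chunks].base_addr = pfns[i] << PFN_SHIFT;
            out[chunks].num_4k_pages = run;
            chunks++;
            i += run;
        }
        return chunks;
    }

    int main(void)
    {
        /* 0x100..0x103 are contiguous; 0x200 stands alone. */
        const uint64_t pfns[] = { 0x100, 0x101, 0x102, 0x103, 0x200 };
        struct chunk out[5];
        unsigned int i, n = pack_chunks(pfns, 5, out);

        /* Five pages collapse into two descriptors instead of five PFNs. */
        for (i = 0; i < n; i++)
            printf("chunk %u: base=0x%llx pages=%u\n", i,
                   (unsigned long long)out[i].base_addr,
                   out[i].num_4k_pages);
        return 0;
    }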
From: Wei Wang
Date: Fri, 9 Jun 2017 18:41:39 +0800
Message-Id: <1497004901-30593-5-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1497004901-30593-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v11 4/6] mm: function to offer a page block on the free list

Add a function to find a page block on the free list specified by the
caller. Pages from the page block may be used immediately after the
function returns. The caller is responsible for detecting or preventing
the use of such pages.

Signed-off-by: Wei Wang
Signed-off-by: Liang Li
---
 include/linux/mm.h |  5 +++
 mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 96 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5d22e69..82361a6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1841,6 +1841,11 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
+extern int report_unused_page_block(struct zone *zone, unsigned int order,
+				    unsigned int migratetype,
+				    struct page **page);
+#endif
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
  * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2c25de4..0aefe02 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4615,6 +4615,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 	show_swap_cache_info();
 }
 
+#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
+
+/*
+ * Heuristically get a page block in the system that is unused.
+ * It is possible that pages from the page block are used immediately after
+ * report_unused_page_block() returns. It is the caller's responsibility
+ * to either detect or prevent the use of such pages.
+ *
+ * The free list to check: zone->free_area[order].free_list[migratetype].
+ *
+ * If the caller supplied page block (i.e. **page) is on the free list, offer
+ * the next page block on the list to the caller. Otherwise, offer the first
+ * page block on the list.
+ *
+ * Return 0 when a page block is found on the caller specified free list.
+ */
+int report_unused_page_block(struct zone *zone, unsigned int order,
+			     unsigned int migratetype, struct page **page)
+{
+	struct zone *this_zone;
+	struct list_head *this_list;
+	int ret = 0;
+	unsigned long flags;
+
+	/* Sanity check */
+	if (zone == NULL || page == NULL || order >= MAX_ORDER ||
+	    migratetype >= MIGRATE_TYPES)
+		return -EINVAL;
+
+	/* Zone validity check */
+	for_each_populated_zone(this_zone) {
+		if (zone == this_zone)
+			break;
+	}
+
+	/* Got a non-existent zone from the caller? */
+	if (zone != this_zone)
+		return -EINVAL;
+
+	spin_lock_irqsave(&this_zone->lock, flags);
+
+	this_list = &zone->free_area[order].free_list[migratetype];
+	if (list_empty(this_list)) {
+		*page = NULL;
+		ret = 1;
+		goto out;
+	}
+
+	/* The caller is asking for the first free page block on the list */
+	if ((*page) == NULL) {
+		*page = list_first_entry(this_list, struct page, lru);
+		ret = 0;
+		goto out;
+	}
+
+	/*
+	 * The page block passed from the caller is not on this free list
+	 * anymore (e.g. a 1MB free page block has been split). In this case,
+	 * offer the first page block on the free list that the caller is
+	 * asking for.
+	 */
+	if (PageBuddy(*page) && order != page_order(*page)) {
+		*page = list_first_entry(this_list, struct page, lru);
+		ret = 0;
+		goto out;
+	}
+
+	/*
+	 * The page block passed from the caller has been the last page block
+	 * on the list.
+	 */
+	if ((*page)->lru.next == this_list) {
+		*page = NULL;
+		ret = 1;
+		goto out;
+	}
+
+	/*
+	 * Finally, fall into the regular case: the page block passed from the
+	 * caller is still on the free list. Offer the next one.
+	 */
+	*page = list_next_entry((*page), lru);
+	ret = 0;
+out:
+	spin_unlock_irqrestore(&this_zone->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(report_unused_page_block);
+
+#endif
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
2.7.4
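
[Editor's sketch, not part of the patch series.] The calling convention above
(pass NULL to get the first block, pass the previous block back to get its
successor, non-zero return when the list is exhausted) is easiest to see in
isolation. The user-space program below re-creates that cursor protocol over
a toy singly linked free list; the names are invented for illustration and
none of this is kernel code. Patch 6/6 below drives the real function in
exactly this do/while shape, once per (zone, order, migratetype) free list.

    #include <stdio.h>

    struct block { int id; struct block *next; };

    /*
     * Mirror of the report_unused_page_block() protocol: *cursor == NULL
     * asks for the first block; otherwise advance past *cursor. Returns 0
     * when a block was produced, 1 when the list is exhausted.
     */
    static int next_free_block(struct block *head, struct block **cursor)
    {
        struct block *candidate = *cursor ? (*cursor)->next : head;

        if (!candidate) {
            *cursor = NULL;
            return 1;
        }
        *cursor = candidate;
        return 0;
    }

    int main(void)
    {
        struct block c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
        struct block *cur = NULL;    /* NULL asks for the first block */

        while (!next_free_block(&a, &cur))
            printf("free block %d\n", cur->id);
        return 0;
    }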
From: Wei Wang
Date: Fri, 9 Jun 2017 18:41:40 +0800
Message-Id: <1497004901-30593-6-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1497004901-30593-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v11 5/6] mm: export symbol of next_zone and first_online_pgdat

This patch enables for_each_zone()/for_each_populated_zone() to be
invoked by a kernel module.

Signed-off-by: Wei Wang
---
 mm/mmzone.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/mmzone.c b/mm/mmzone.c
index a51c0a6..08a2a3a 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -13,6 +13,7 @@ struct pglist_data *first_online_pgdat(void)
 {
 	return NODE_DATA(first_online_node);
 }
+EXPORT_SYMBOL_GPL(first_online_pgdat);
 
 struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
 {
@@ -41,6 +42,7 @@ struct zone *next_zone(struct zone *zone)
 	}
 	return zone;
 }
+EXPORT_SYMBOL_GPL(next_zone);
 
 static inline int zref_in_nodemask(struct zoneref *zref, nodemask_t *nodes)
 {
-- 
2.7.4
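
[Editor's sketch, not part of the patch series.] A minimal module showing
what these exports enable; it assumes only the two EXPORT_SYMBOL_GPL lines
from this patch, since for_each_populated_zone() expands to
first_online_pgdat()/next_zone(). Field names such as zone->managed_pages
match the kernel this series targets and may differ in other versions.

    #include <linux/module.h>
    #include <linux/mm.h>
    #include <linux/mmzone.h>

    static int __init zone_walk_init(void)
    {
        struct zone *zone;

        /* Resolves to first_online_pgdat()/next_zone(), now exported. */
        for_each_populated_zone(zone)
            pr_info("zone %s: %lu managed pages\n",
                    zone->name, zone->managed_pages);
        return 0;
    }

    static void __exit zone_walk_exit(void)
    {
    }

    module_init(zone_walk_init);
    module_exit(zone_walk_exit);
    MODULE_LICENSE("GPL");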
X-Received-From: 192.55.52.120 Subject: [Qemu-devel] [PATCH v11 6/6] virtio-balloon: VIRTIO_BALLOON_F_CMD_VQ X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Add a new vq, cmdq, to handle requests between the device and driver. This patch implements two commands send from the device and handled in the driver. 1) cmd VIRTIO_BALLOON_CMDQ_REPORT_STATS: this command is used to report the guest memory statistics to the host. The stats_vq mechanism is not used when the cmdq mechanism is enabled. 2) cmd VIRTIO_BALLOON_CMDQ_REPORT_UNUSED_PAGES: this command is used to report the guest unused pages to the host. Signed-off-by: Wei Wang --- drivers/virtio/virtio_balloon.c | 363 ++++++++++++++++++++++++++++++++= ---- include/uapi/linux/virtio_balloon.h | 13 ++ 2 files changed, 337 insertions(+), 39 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloo= n.c index 0cf945c..4ac90a5 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -51,6 +51,10 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); static struct vfsmount *balloon_mnt; #endif =20 +/* Types of pages to chunk */ +#define PAGE_CHUNK_TYPE_BALLOON 0 /* Chunk of inflate/deflate pages */ +#define PAGE_CHNUK_UNUSED_PAGE 1 /* Chunk of unused pages */ + /* The size of one page_bmap used to record inflated/deflated pages. */ #define VIRTIO_BALLOON_PAGE_BMAP_SIZE (8 * PAGE_SIZE) /* @@ -81,12 +85,25 @@ struct virtio_balloon_page_chunk { unsigned long *page_bmap[VIRTIO_BALLOON_PAGE_BMAP_MAX_NUM]; }; =20 +struct virtio_balloon_cmdq_unused_page { + struct virtio_balloon_cmdq_hdr hdr; + struct vring_desc *desc_table; + /* Number of added descriptors */ + unsigned int num; +}; + +struct virtio_balloon_cmdq_stats { + struct virtio_balloon_cmdq_hdr hdr; + struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR]; +}; + struct virtio_balloon { struct virtio_device *vdev; - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *cmd_vq; =20 /* The balloon servicing is delegated to a freezable workqueue. */ struct work_struct update_balloon_stats_work; + struct work_struct cmdq_handle_work; struct work_struct update_balloon_size_work; =20 /* Prevent updating balloon when it is being canceled. 
*/ @@ -115,8 +132,10 @@ struct virtio_balloon { unsigned int num_pfns; __virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX]; =20 - /* Memory statistics */ - struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR]; + /* Cmdq msg buffer for memory statistics */ + struct virtio_balloon_cmdq_stats cmdq_stats; + /* Cmdq msg buffer for reporting ununsed pages */ + struct virtio_balloon_cmdq_unused_page cmdq_unused_page; =20 /* To register callback in oom notifier call chain */ struct notifier_block nb; @@ -208,31 +227,77 @@ static void clear_page_bmap(struct virtio_balloon *vb, VIRTIO_BALLOON_PAGE_BMAP_SIZE); } =20 -static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *= vq) +static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *= vq, + int type, bool busy_wait) { - unsigned int len, num; - struct vring_desc *desc =3D vb->balloon_page_chunk.desc_table; + unsigned int len, *num, reset_num; + struct vring_desc *desc; + + switch (type) { + case PAGE_CHUNK_TYPE_BALLOON: + desc =3D vb->balloon_page_chunk.desc_table; + num =3D &vb->balloon_page_chunk.chunk_num; + reset_num =3D 0; + break; + case PAGE_CHNUK_UNUSED_PAGE: + desc =3D vb->cmdq_unused_page.desc_table; + num =3D &vb->cmdq_unused_page.num; + /* + * The first desc is used for the cmdq_hdr, so chunks will be + * added from the second desc. + */ + reset_num =3D 1; + break; + default: + dev_warn(&vb->vdev->dev, "%s: unknown page chunk type %d\n", + __func__, type); + return; + } =20 - num =3D vb->balloon_page_chunk.chunk_num; - if (!virtqueue_indirect_desc_table_add(vq, desc, num)) { + if (!virtqueue_indirect_desc_table_add(vq, desc, *num)) { virtqueue_kick(vq); - wait_event(vb->acked, virtqueue_get_buf(vq, &len)); - vb->balloon_page_chunk.chunk_num =3D 0; + if (busy_wait) + while (!virtqueue_get_buf(vq, &len) && + !virtqueue_is_broken(vq)) + cpu_relax(); + else + wait_event(vb->acked, virtqueue_get_buf(vq, &len)); + /* + * Now, the descriptor have been delivered to the host. Reset + * the field in the structure that records the number of added + * descriptors, so that new added descriptor can be re-counted. + */ + *num =3D reset_num; } } =20 /* Add a chunk to the buffer. 
*/ static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq, - u64 base_addr, u32 size) + int type, u64 base_addr, u32 size) { - unsigned int *num =3D &vb->balloon_page_chunk.chunk_num; - struct vring_desc *desc =3D &vb->balloon_page_chunk.desc_table[*num]; + unsigned int *num; + struct vring_desc *desc; + + switch (type) { + case PAGE_CHUNK_TYPE_BALLOON: + num =3D &vb->balloon_page_chunk.chunk_num; + desc =3D &vb->balloon_page_chunk.desc_table[*num]; + break; + case PAGE_CHNUK_UNUSED_PAGE: + num =3D &vb->cmdq_unused_page.num; + desc =3D &vb->cmdq_unused_page.desc_table[*num]; + break; + default: + dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", + __func__, type); + return; + } =20 desc->addr =3D cpu_to_virtio64(vb->vdev, base_addr); desc->len =3D cpu_to_virtio32(vb->vdev, size); *num +=3D 1; if (*num =3D=3D VIRTIO_BALLOON_MAX_PAGE_CHUNKS) - send_page_chunks(vb, vq); + send_page_chunks(vb, vq, type, false); } =20 static void convert_bmap_to_chunks(struct virtio_balloon *vb, @@ -264,7 +329,8 @@ static void convert_bmap_to_chunks(struct virtio_balloo= n *vb, chunk_base_addr =3D (pfn_start + next_one) << VIRTIO_BALLOON_PFN_SHIFT; if (chunk_size) { - add_one_chunk(vb, vq, chunk_base_addr, chunk_size); + add_one_chunk(vb, vq, PAGE_CHUNK_TYPE_BALLOON, + chunk_base_addr, chunk_size); pos +=3D next_zero + 1; } } @@ -311,7 +377,7 @@ static void tell_host_from_page_bmap(struct virtio_ball= oon *vb, pfn_num); } if (vb->balloon_page_chunk.chunk_num > 0) - send_page_chunks(vb, vq); + send_page_chunks(vb, vq, PAGE_CHUNK_TYPE_BALLOON, false); } =20 static void set_page_pfns(struct virtio_balloon *vb, @@ -516,8 +582,8 @@ static inline void update_stat(struct virtio_balloon *v= b, int idx, u16 tag, u64 val) { BUG_ON(idx >=3D VIRTIO_BALLOON_S_NR); - vb->stats[idx].tag =3D cpu_to_virtio16(vb->vdev, tag); - vb->stats[idx].val =3D cpu_to_virtio64(vb->vdev, val); + vb->cmdq_stats.stats[idx].tag =3D cpu_to_virtio16(vb->vdev, tag); + vb->cmdq_stats.stats[idx].val =3D cpu_to_virtio64(vb->vdev, val); } =20 #define pages_to_bytes(x) ((u64)(x) << PAGE_SHIFT) @@ -582,7 +648,8 @@ static void stats_handle_request(struct virtio_balloon = *vb) vq =3D vb->stats_vq; if (!virtqueue_get_buf(vq, &len)) return; - sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats); + sg_init_one(&sg, vb->cmdq_stats.stats, + sizeof(vb->cmdq_stats.stats[0]) * num_stats); virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); virtqueue_kick(vq); } @@ -686,43 +753,216 @@ static void update_balloon_size_func(struct work_str= uct *work) queue_work(system_freezable_wq, work); } =20 +static void cmdq_handle_stats(struct virtio_balloon *vb) +{ + struct scatterlist sg; + unsigned int num_stats; + + spin_lock(&vb->stop_update_lock); + if (!vb->stop_update) { + num_stats =3D update_balloon_stats(vb); + sg_init_one(&sg, &vb->cmdq_stats, + sizeof(struct virtio_balloon_cmdq_hdr) + + sizeof(struct virtio_balloon_stat) * num_stats); + virtqueue_add_outbuf(vb->cmd_vq, &sg, 1, vb, GFP_KERNEL); + virtqueue_kick(vb->cmd_vq); + } + spin_unlock(&vb->stop_update_lock); +} + +/* + * The header part of the message buffer is given to the device to send a + * command to the driver. 
+/*
+ * The header part of the message buffer is handed to the device, which
+ * uses it to send a command to the driver.
+ */
+static void host_cmd_buf_add(struct virtio_balloon *vb,
+                             struct virtio_balloon_cmdq_hdr *hdr)
+{
+       struct scatterlist sg;
+
+       hdr->flags = 0;
+       sg_init_one(&sg, hdr, VIRTIO_BALLOON_CMDQ_HDR_SIZE);
+
+       if (virtqueue_add_inbuf(vb->cmd_vq, &sg, 1, hdr, GFP_KERNEL) < 0) {
+               __virtio_clear_bit(vb->vdev,
+                                  VIRTIO_BALLOON_F_CMD_VQ);
+               dev_warn(&vb->vdev->dev, "%s: add cmdq msg buf err\n",
+                        __func__);
+               return;
+       }
+
+       virtqueue_kick(vb->cmd_vq);
+}
+
+static void cmdq_handle_unused_pages(struct virtio_balloon *vb)
+{
+       struct virtqueue *vq = vb->cmd_vq;
+       struct vring_desc *hdr_desc = &vb->cmdq_unused_page.desc_table[0];
+       unsigned long hdr_pa;
+       unsigned int order = 0, migratetype = 0;
+       struct zone *zone = NULL;
+       struct page *page = NULL;
+       u64 pfn;
+       int ret = 0;
+
+       /* Put the hdr into the first desc */
+       hdr_pa = virt_to_phys((void *)&vb->cmdq_unused_page.hdr);
+       hdr_desc->addr = cpu_to_virtio64(vb->vdev, hdr_pa);
+       hdr_desc->len = cpu_to_virtio32(vb->vdev,
+                               sizeof(struct virtio_balloon_cmdq_hdr));
+       vb->cmdq_unused_page.num = 1;
+
+       for_each_populated_zone(zone) {
+               for (order = MAX_ORDER - 1; order > 0; order--) {
+                       for (migratetype = 0; migratetype < MIGRATE_TYPES;
+                            migratetype++) {
+                               do {
+                                       ret = report_unused_page_block(zone,
+                                               order, migratetype, &page);
+                                       if (!ret) {
+                                               pfn = (u64)page_to_pfn(page);
+                                               add_one_chunk(vb, vq,
+                                                       PAGE_CHNUK_UNUSED_PAGE,
+                                                       pfn << VIRTIO_BALLOON_PFN_SHIFT,
+                                                       (u64)(1 << order) *
+                                                       VIRTIO_BALLOON_PAGES_PER_PAGE);
+                                       }
+                               } while (!ret);
+                       }
+               }
+       }
+
+       /* Set the cmd completion flag. */
+       vb->cmdq_unused_page.hdr.flags |=
+                               cpu_to_le32(VIRTIO_BALLOON_CMDQ_F_COMPLETION);
+       send_page_chunks(vb, vq, PAGE_CHNUK_UNUSED_PAGE, true);
+}
+
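The triple loop above keeps calling report_unused_page_block() until each
(zone, order, migratetype) free list is exhausted. The control shape can be
modeled in plain C; report_next() below is an illustrative stand-in for the
proposed API, not an existing kernel function:

#include <stdio.h>

/* Return the next fake "free block" pfn for an order, or -1 when done. */
static int report_next(int order, int *cursor)
{
        static const int blocks[2][3] = { { 100, 200, -1 }, { 300, -1, -1 } };

        if (blocks[order][*cursor] < 0)
                return -1;              /* this list is exhausted */
        return blocks[order][(*cursor)++];
}

int main(void)
{
        for (int order = 1; order >= 0; order--) {
                int cursor = 0, pfn;

                /* Drain one list completely before moving to the next. */
                while ((pfn = report_next(order, &cursor)) >= 0)
                        printf("order %d: free block at pfn %d (%d pages)\n",
                               order, pfn, 1 << order);
        }
        return 0;
}
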
+static void cmdq_handle(struct virtio_balloon *vb)
+{
+       struct virtqueue *vq;
+       struct virtio_balloon_cmdq_hdr *hdr;
+       unsigned int len;
+
+       vq = vb->cmd_vq;
+       while ((hdr = (struct virtio_balloon_cmdq_hdr *)
+                       virtqueue_get_buf(vq, &len)) != NULL) {
+               switch (hdr->cmd) {
+               case VIRTIO_BALLOON_CMDQ_REPORT_STATS:
+                       cmdq_handle_stats(vb);
+                       break;
+               case VIRTIO_BALLOON_CMDQ_REPORT_UNUSED_PAGES:
+                       cmdq_handle_unused_pages(vb);
+                       break;
+               default:
+                       dev_warn(&vb->vdev->dev, "%s: wrong cmd\n", __func__);
+                       return;
+               }
+               /*
+                * Replenish all the command buffers to the device after a
+                * command is handled, so that the device can rewind the
+                * cmdq and reclaim all the command buffers after live
+                * migration.
+                */
+               host_cmd_buf_add(vb, &vb->cmdq_stats.hdr);
+               host_cmd_buf_add(vb, &vb->cmdq_unused_page.hdr);
+       }
+}
+
+static void cmdq_handle_work_func(struct work_struct *work)
+{
+       struct virtio_balloon *vb;
+
+       vb = container_of(work, struct virtio_balloon,
+                         cmdq_handle_work);
+       cmdq_handle(vb);
+}
+
+static void cmdq_callback(struct virtqueue *vq)
+{
+       struct virtio_balloon *vb = vq->vdev->priv;
+
+       queue_work(system_freezable_wq, &vb->cmdq_handle_work);
+}
+
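cmdq_handle() above drains every completed header, dispatches on hdr->cmd,
and then replenishes both command buffers. A runnable userspace model of
that drain-dispatch-replenish loop; get_buf() and requeue() are
illustrative stand-ins for the virtqueue calls:

#include <stdio.h>

enum { CMD_REPORT_STATS, CMD_REPORT_UNUSED_PAGES };

struct cmd_hdr { int cmd; };

static struct cmd_hdr stats_hdr = { CMD_REPORT_STATS };
static struct cmd_hdr unused_hdr = { CMD_REPORT_UNUSED_PAGES };

/* Pop the next completed command buffer, NULL when the queue is empty. */
static struct cmd_hdr *get_buf(void)
{
        static struct cmd_hdr *pending[] = { &unused_hdr, &stats_hdr, NULL };
        static int i;

        return pending[i] ? pending[i++] : NULL;
}

static void requeue(struct cmd_hdr *h)
{
        printf("requeued cmd %d buffer\n", h->cmd);
}

int main(void)
{
        struct cmd_hdr *h;

        while ((h = get_buf()) != NULL) {
                switch (h->cmd) {
                case CMD_REPORT_STATS:
                        printf("handling stats request\n");
                        break;
                case CMD_REPORT_UNUSED_PAGES:
                        printf("handling unused-pages request\n");
                        break;
                }
                /* Hand both buffers back so the device can reissue them. */
                requeue(&stats_hdr);
                requeue(&unused_hdr);
        }
        return 0;
}
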
 static int init_vqs(struct virtio_balloon *vb)
 {
-       struct virtqueue *vqs[3];
-       vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-       static const char * const names[] = { "inflate", "deflate", "stats" };
-       int err, nvqs;
+       struct virtqueue **vqs;
+       vq_callback_t **callbacks;
+       const char **names;
+       int err = -ENOMEM;
+       int nvqs;
+
+       /* The inflate and deflate virtqueues are used unconditionally. */
+       nvqs = 2;
+
+       if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_CMD_VQ) ||
+           virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
+               nvqs++;
+
+       /* Allocate space for the find_vqs parameters. */
+       vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
+       if (!vqs)
+               goto err_vq;
+       callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
+       if (!callbacks)
+               goto err_callback;
+       names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
+       if (!names)
+               goto err_names;
+
+       callbacks[0] = balloon_ack;
+       names[0] = "inflate";
+       callbacks[1] = balloon_ack;
+       names[1] = "deflate";
 
        /*
-        * We expect two virtqueues: inflate and deflate, and
-        * optionally stat.
+        * The stats_vq is used only when the cmdq is not supported (or is
+        * disabled) by the device.
         */
-       nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
-       err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names,
-                       NULL);
-       if (err)
-               return err;
+       if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_CMD_VQ)) {
+               callbacks[2] = cmdq_callback;
+               names[2] = "cmdq";
+       } else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+               callbacks[2] = stats_request;
+               names[2] = "stats";
+       }
 
+       err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks,
+                                        names, NULL);
+       if (err)
+               goto err_find;
        vb->inflate_vq = vqs[0];
        vb->deflate_vq = vqs[1];
-       if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+
+       if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_CMD_VQ)) {
+               vb->cmd_vq = vqs[2];
+               /* Prime the cmdq with the header buffers. */
+               host_cmd_buf_add(vb, &vb->cmdq_stats.hdr);
+               host_cmd_buf_add(vb, &vb->cmdq_unused_page.hdr);
+       } else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
                struct scatterlist sg;
-               unsigned int num_stats;
-               vb->stats_vq = vqs[2];
 
+               vb->stats_vq = vqs[2];
                /*
                 * Prime this virtqueue with one buffer so the hypervisor can
                 * use it to signal us later (it can't be broken yet!).
                 */
-               num_stats = update_balloon_stats(vb);
-
-               sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
+               sg_init_one(&sg, vb->cmdq_stats.stats,
+                           sizeof(vb->cmdq_stats.stats));
                if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
                    < 0)
                        BUG();
                virtqueue_kick(vb->stats_vq);
        }
-       return 0;
+
+err_find:
+       kfree(names);
+err_names:
+       kfree(callbacks);
+err_callback:
+       kfree(vqs);
+err_vq:
+       return err;
 }
 
 #ifdef CONFIG_BALLOON_COMPACTION
@@ -730,7 +970,8 @@ static int init_vqs(struct virtio_balloon *vb)
 static void tell_host_one_page(struct virtio_balloon *vb,
                               struct virtqueue *vq, struct page *page)
 {
-       add_one_chunk(vb, vq, page_to_pfn(page) << VIRTIO_BALLOON_PFN_SHIFT,
+       add_one_chunk(vb, vq, PAGE_CHUNK_TYPE_BALLOON,
+                     page_to_pfn(page) << VIRTIO_BALLOON_PFN_SHIFT,
                      VIRTIO_BALLOON_PAGES_PER_PAGE);
 }
 
@@ -865,6 +1106,40 @@ static int balloon_page_chunk_init(struct virtio_balloon *vb)
        return -ENOMEM;
 }
 
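The third virtqueue in init_vqs() above is shared: it becomes "cmdq" when
VIRTIO_BALLOON_F_CMD_VQ is negotiated, otherwise "stats" when
VIRTIO_BALLOON_F_STATS_VQ is, otherwise it is absent. A compact model of
that selection, with has_feature() as a stub standing in for
virtio_has_feature():

#include <stdio.h>
#include <stdbool.h>

enum { F_STATS_VQ, F_CMD_VQ };

static bool has_feature(int f)
{
        return f == F_CMD_VQ;   /* pretend the device offers only the cmdq */
}

int main(void)
{
        int nvqs = 2;           /* inflate and deflate are unconditional */
        const char *third = NULL;

        /* The cmdq takes precedence over the stats vq when both exist. */
        if (has_feature(F_CMD_VQ))
                third = "cmdq";
        else if (has_feature(F_STATS_VQ))
                third = "stats";
        if (third)
                nvqs++;

        printf("nvqs=%d third=%s\n", nvqs, third ? third : "(none)");
        return 0;
}
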
+/*
+ * At most one command of each type is in flight at a time, so one message
+ * buffer is allocated per command type. The header part of each message
+ * buffer is offered to the device, so that the device can later send a
+ * command to the driver using the corresponding buffer.
+ */
+static int cmdq_init(struct virtio_balloon *vb)
+{
+       vb->cmdq_unused_page.desc_table = alloc_indirect(vb->vdev,
+                                       VIRTIO_BALLOON_MAX_PAGE_CHUNKS,
+                                       GFP_KERNEL);
+       if (!vb->cmdq_unused_page.desc_table) {
+               dev_warn(&vb->vdev->dev, "%s: failed\n", __func__);
+               __virtio_clear_bit(vb->vdev,
+                                  VIRTIO_BALLOON_F_CMD_VQ);
+               return -ENOMEM;
+       }
+       vb->cmdq_unused_page.num = 0;
+
+       /*
+        * The header is initialized to let the device know which type of
+        * command buffer it has received. The device will later pick the
+        * buffer that matches the type of command it needs to send.
+        */
+       vb->cmdq_stats.hdr.cmd = VIRTIO_BALLOON_CMDQ_REPORT_STATS;
+       vb->cmdq_stats.hdr.flags = 0;
+       vb->cmdq_unused_page.hdr.cmd = VIRTIO_BALLOON_CMDQ_REPORT_UNUSED_PAGES;
+       vb->cmdq_unused_page.hdr.flags = 0;
+
+       INIT_WORK(&vb->cmdq_handle_work, cmdq_handle_work_func);
+
+       return 0;
+}
+
 static int virtballoon_validate(struct virtio_device *vdev)
 {
        struct virtio_balloon *vb = NULL;
@@ -883,6 +1158,11 @@ static int virtballoon_validate(struct virtio_device *vdev)
                goto err_page_chunk;
        }
 
+       if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_CMD_VQ)) {
+               err = cmdq_init(vb);
+               if (err < 0)
+                       goto err_vb;
+       }
        return 0;
 
 err_page_chunk:
@@ -902,7 +1182,10 @@ static int virtballoon_probe(struct virtio_device *vdev)
                return -EINVAL;
        }
 
-       INIT_WORK(&vb->update_balloon_stats_work, update_balloon_stats_func);
+       if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_CMD_VQ) &&
+           virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
+               INIT_WORK(&vb->update_balloon_stats_work,
+                         update_balloon_stats_func);
        INIT_WORK(&vb->update_balloon_size_work, update_balloon_size_func);
        spin_lock_init(&vb->stop_update_lock);
        vb->stop_update = false;
@@ -980,6 +1263,7 @@ static void virtballoon_remove(struct virtio_device *vdev)
        spin_unlock_irq(&vb->stop_update_lock);
        cancel_work_sync(&vb->update_balloon_size_work);
        cancel_work_sync(&vb->update_balloon_stats_work);
+       cancel_work_sync(&vb->cmdq_handle_work);
 
        remove_common(vb);
        free_page_bmap(vb);
@@ -1029,6 +1313,7 @@ static unsigned int features[] = {
        VIRTIO_BALLOON_F_STATS_VQ,
        VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
        VIRTIO_BALLOON_F_PAGE_CHUNKS,
+       VIRTIO_BALLOON_F_CMD_VQ,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 5ed3c7b..cb66c1a 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ       1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_PAGE_CHUNKS    3 /* Inflate/Deflate pages in chunks */
+#define VIRTIO_BALLOON_F_CMD_VQ         4 /* Command virtqueue */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -83,4 +84,16 @@ struct virtio_balloon_stat {
        __virtio64 val;
 } __attribute__((packed));
 
+/* Use the memory of a vring_desc to place the cmdq header. */
+#define VIRTIO_BALLOON_CMDQ_HDR_SIZE sizeof(struct vring_desc)
+
+struct virtio_balloon_cmdq_hdr {
+#define VIRTIO_BALLOON_CMDQ_REPORT_STATS        0
+#define VIRTIO_BALLOON_CMDQ_REPORT_UNUSED_PAGES 1
+       __le32 cmd;
+/* Flag to indicate the completion of handling a command */
+#define VIRTIO_BALLOON_CMDQ_F_COMPLETION 1
+       __le32 flags;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
2.7.4