From nobody Mon May 6 04:09:26 2024
From: Wei Wang
To: virtio-dev@lists.oasis-open.org, linux-kernel@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, mst@redhat.com,
	david@redhat.com, dave.hansen@intel.com, cornelia.huck@de.ibm.com,
	akpm@linux-foundation.org, mgorman@techsingularity.net,
	aarcange@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com,
	wei.w.wang@intel.com, liliang.opensource@gmail.com
Date: Thu, 16 Mar 2017 15:08:44 +0800
Message-Id: <1489648127-37282-2-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
References: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH kernel v8 1/4] virtio-balloon: deflate via a page list

From: Liang Li

This patch saves the deflated pages to a list, instead of the PFN array.
Accordingly, the balloon_pfn_to_page() function is removed.

Signed-off-by: Liang Li
Signed-off-by: Michael S. Tsirkin
Signed-off-by: Wei Wang
---
 drivers/virtio/virtio_balloon.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 181793f..f59cb4f 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -103,12 +103,6 @@ static u32 page_to_balloon_pfn(struct page *page)
 	return pfn * VIRTIO_BALLOON_PAGES_PER_PAGE;
 }
 
-static struct page *balloon_pfn_to_page(u32 pfn)
-{
-	BUG_ON(pfn % VIRTIO_BALLOON_PAGES_PER_PAGE);
-	return pfn_to_page(pfn / VIRTIO_BALLOON_PAGES_PER_PAGE);
-}
-
 static void balloon_ack(struct virtqueue *vq)
 {
 	struct virtio_balloon *vb = vq->vdev->priv;
@@ -181,18 +175,16 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 	return num_allocated_pages;
 }
 
-static void release_pages_balloon(struct virtio_balloon *vb)
+static void release_pages_balloon(struct virtio_balloon *vb,
+				 struct list_head *pages)
 {
-	unsigned int i;
-	struct page *page;
+	struct page *page, *next;
 
-	/* Find pfns pointing at start of each page, get pages and free them. */
-	for (i = 0; i < vb->num_pfns; i += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-		page = balloon_pfn_to_page(virtio32_to_cpu(vb->vdev,
-							   vb->pfns[i]));
+	list_for_each_entry_safe(page, next, pages, lru) {
 		if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
 			adjust_managed_page_count(page, 1);
+		list_del(&page->lru);
 		put_page(page); /* balloon reference */
 	}
 }
@@ -202,6 +194,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	unsigned num_freed_pages;
 	struct page *page;
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
+	LIST_HEAD(pages);
 
 	/* We can only do one array worth at a time.
 */
 	num = min(num, ARRAY_SIZE(vb->pfns));
@@ -215,6 +208,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 		if (!page)
 			break;
 		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		list_add(&page->lru, &pages);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
 
@@ -226,7 +220,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	 */
 	if (vb->num_pfns != 0)
 		tell_host(vb, vb->deflate_vq);
-	release_pages_balloon(vb);
+	release_pages_balloon(vb, &pages);
 	mutex_unlock(&vb->balloon_lock);
 	return num_freed_pages;
 }
-- 
2.7.4

From nobody Mon May 6 04:09:26 2024
From: Wei Wang
Date: Thu, 16 Mar 2017 15:08:45 +0800
Message-Id: <1489648127-37282-3-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
References: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH kernel v8 2/4] virtio-balloon: VIRTIO_BALLOON_F_CHUNK_TRANSFER

From: Liang Li

The implementation of the current virtio-balloon is not very efficient,
because the ballooned pages are transferred to the host one by one. Here
is the breakdown of the time in percentage spent on each step of the
balloon inflating process (inflating 7GB of an 8GB idle guest):

1) allocating pages (6.5%)
2) sending PFNs to host (68.3%)
3) address translation (6.1%)
4) madvise (19%)

It takes about 4126ms for the inflating process to complete. The above
profiling shows that the bottlenecks are stage 2) and stage 4).

This patch optimizes step 2) by transferring pages to the host in chunks.
A chunk consists of guest physically contiguous pages, and it is offered
to the host via a base PFN (i.e. the start PFN of those physically
contiguous pages) and the size (i.e. the total number of the pages). A
chunk is formatted as below:

--------------------------------------------------------
|                 Base (52 bit)        | Rsvd (12 bit) |
--------------------------------------------------------
--------------------------------------------------------
|                 Size (52 bit)        | Rsvd (12 bit) |
--------------------------------------------------------

By doing so, step 4) can also be optimized by doing address translation
and madvise() in chunks rather than page by page. This optimization
requires the negotiation of a new feature bit,
VIRTIO_BALLOON_F_CHUNK_TRANSFER.
With this new feature, the above ballooning process takes ~590ms, an
improvement of ~85%.

TODO: optimize stage 1) by allocating/freeing a chunk of pages instead
of a single page each time.

Signed-off-by: Liang Li
Signed-off-by: Wei Wang
Suggested-by: Michael S. Tsirkin
---
 drivers/virtio/virtio_balloon.c     | 371 +++++++++++++++++++++++++++++++++---
 include/uapi/linux/virtio_balloon.h |   9 +
 2 files changed, 353 insertions(+), 27 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index f59cb4f..3f4a161 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -42,6 +42,10 @@
 #define OOM_VBALLOON_DEFAULT_PAGES 256
 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
 
+#define PAGE_BMAP_SIZE		(8 * PAGE_SIZE)
+#define PFNS_PER_PAGE_BMAP	(PAGE_BMAP_SIZE * BITS_PER_BYTE)
+#define PAGE_BMAP_COUNT_MAX	32
+
 static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
 module_param(oom_pages, int, S_IRUSR | S_IWUSR);
 MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
@@ -50,6 +54,14 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+#define BALLOON_CHUNK_BASE_SHIFT 12
+#define BALLOON_CHUNK_SIZE_SHIFT 12
+struct balloon_page_chunk {
+	__le64 base;
+	__le64 size;
+};
+
+typedef __le64 resp_data_t;
 struct virtio_balloon {
 	struct virtio_device *vdev;
 	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
@@ -67,6 +79,31 @@ struct virtio_balloon {
 
 	/* Number of balloon pages we've told the Host we're not using. */
 	unsigned int num_pages;
+	/* Pointer to the response header. */
+	struct virtio_balloon_resp_hdr *resp_hdr;
+	/* Pointer to the start address of response data. */
+	resp_data_t *resp_data;
+	/* Size of response data buffer. */
+	unsigned int resp_buf_size;
+	/* Pointer offset of the response data.
+	 */
+	unsigned int resp_pos;
+	/* Bitmap used to record pages */
+	unsigned long *page_bmap[PAGE_BMAP_COUNT_MAX];
+	/* Number of split page bmaps */
+	unsigned int page_bmaps;
+
+	/*
+	 * The allocated page_bmap size may be smaller than the pfn range of
+	 * the ballooned pages. In this case, we need to use the page_bmap
+	 * multiple times to cover the entire pfn range. It's like using a
+	 * short ruler several times to finish measuring a long object.
+	 * The start location of the ruler in the next measurement is the end
+	 * location of the ruler in the previous measurement.
+	 *
+	 * pfn_max & pfn_min: forms the pfn range of the ballooned pages
+	 * pfn_start & pfn_stop: records the start and stop pfn in each cover
+	 */
+	unsigned long pfn_min, pfn_max, pfn_start, pfn_stop;
 	/*
 	 * The pages we've told the Host we're not using are enqueued
 	 * at vb_dev_info->pages list.
@@ -110,20 +147,187 @@ static void balloon_ack(struct virtqueue *vq)
 	wake_up(&vb->acked);
 }
 
+static inline void init_page_bmap_range(struct virtio_balloon *vb)
+{
+	vb->pfn_min = ULONG_MAX;
+	vb->pfn_max = 0;
+}
+
+static inline void update_page_bmap_range(struct virtio_balloon *vb,
+					  struct page *page)
+{
+	unsigned long balloon_pfn = page_to_balloon_pfn(page);
+
+	vb->pfn_min = min(balloon_pfn, vb->pfn_min);
+	vb->pfn_max = max(balloon_pfn, vb->pfn_max);
+}
+
+/* The page_bmap is extended by allocating more bitmap pages */
+static void extend_page_bmap_size(struct virtio_balloon *vb,
+				  unsigned long pfns)
+{
+	int i, bmaps;
+	unsigned long bmap_len;
+
+	bmap_len = ALIGN(pfns, BITS_PER_LONG) / BITS_PER_BYTE;
+	bmap_len = ALIGN(bmap_len, PAGE_BMAP_SIZE);
+	bmaps = min((int)(bmap_len / PAGE_BMAP_SIZE),
+		    PAGE_BMAP_COUNT_MAX);
+
+	for (i = 1; i < bmaps; i++) {
+		vb->page_bmap[i] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL);
+		if (vb->page_bmap[i])
+			vb->page_bmaps++;
+		else
+			break;
+	}
+}
+
+static void free_extended_page_bmap(struct virtio_balloon *vb)
+{
+	int i, bmaps = vb->page_bmaps;
+
+	for (i = 1; i < bmaps; i++) {
+		kfree(vb->page_bmap[i]);
+		vb->page_bmap[i] = NULL;
+		vb->page_bmaps--;
+	}
+}
+
+static void free_page_bmap(struct virtio_balloon *vb)
+{
+	int i;
+
+	for (i = 0; i < vb->page_bmaps; i++)
+		kfree(vb->page_bmap[i]);
+}
+
+static void clear_page_bmap(struct virtio_balloon *vb)
+{
+	int i;
+
+	for (i = 0; i < vb->page_bmaps; i++)
+		memset(vb->page_bmap[i], 0, PAGE_BMAP_SIZE);
+}
+
+static void send_resp_data(struct virtio_balloon *vb, struct virtqueue *vq,
+			   bool busy_wait)
 {
 	struct scatterlist sg;
+	struct virtio_balloon_resp_hdr *hdr = vb->resp_hdr;
 	unsigned int len;
 
-	sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);
+	len = vb->resp_pos * sizeof(resp_data_t);
+	hdr->data_len = cpu_to_le32(len);
+	len += sizeof(struct virtio_balloon_resp_hdr);
+	sg_init_table(&sg, 1);
+	sg_set_buf(&sg, hdr, len);
+
+	if (!virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)) {
+		virtqueue_kick(vq);
+		if (busy_wait)
+			while (!virtqueue_get_buf(vq, &len) &&
+			       !virtqueue_is_broken(vq))
+				cpu_relax();
+		else
+			wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+		vb->resp_pos = 0;
+		free_extended_page_bmap(vb);
+	}
+}
+
+/* Calculate how many resp_data elements one chunk needs */
+#define RESP_POS_ADD_CHUNK	(sizeof(struct balloon_page_chunk) / \
+				 sizeof(resp_data_t))
+static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq,
+			  unsigned long base, int size)
+{
+	struct balloon_page_chunk *chunk =
+		(struct balloon_page_chunk *)(vb->resp_data +
+					      vb->resp_pos);
+	/*
+	 * Not enough resp_data space to hold the next
+	 * chunk?
+	 */
+	if ((vb->resp_pos + RESP_POS_ADD_CHUNK) *
+	    sizeof(resp_data_t) > vb->resp_buf_size)
+		send_resp_data(vb, vq, false);
 
-	/* We should always be able to add one buffer to an empty queue.
-	 */
-	virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
-	virtqueue_kick(vq);
+	chunk->base = cpu_to_le64(base << BALLOON_CHUNK_BASE_SHIFT);
+	chunk->size = cpu_to_le64(size << BALLOON_CHUNK_SIZE_SHIFT);
+	vb->resp_pos += RESP_POS_ADD_CHUNK;
+}
+
+static void chunking_pages_from_bmap(struct virtio_balloon *vb,
+				     struct virtqueue *vq,
+				     unsigned long pfn_start,
+				     unsigned long *bmap,
+				     unsigned long len)
+{
+	unsigned long pos = 0, end = len * BITS_PER_BYTE;
+
+	while (pos < end) {
+		unsigned long one = find_next_bit(bmap, end, pos);
+
+		if (one < end) {
+			unsigned long chunk_size, zero;
 
-	/* When host has read buffer, this completes via balloon_ack */
-	wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+			zero = find_next_zero_bit(bmap, end, one + 1);
+			if (zero >= end)
+				chunk_size = end - one;
+			else
+				chunk_size = zero - one;
 
+			if (chunk_size)
+				add_one_chunk(vb, vq, pfn_start + one,
+					      chunk_size);
+			pos = one + chunk_size;
+		} else
+			break;
+	}
+}
+
+static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq)
+{
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_CHUNK_TRANSFER)) {
+		int pfns, page_bmaps, i;
+		unsigned long pfn_start, pfns_len;
+
+		pfn_start = vb->pfn_start;
+		pfns = vb->pfn_stop - pfn_start + 1;
+		pfns = roundup(roundup(pfns, BITS_PER_LONG),
+			       PFNS_PER_PAGE_BMAP);
+		page_bmaps = pfns / PFNS_PER_PAGE_BMAP;
+		pfns_len = pfns / BITS_PER_BYTE;
+
+		for (i = 0; i < page_bmaps; i++) {
+			unsigned int bmap_len = PAGE_BMAP_SIZE;
+
+			/* The last one takes the leftover only */
+			if (i + 1 == page_bmaps)
+				bmap_len = pfns_len - PAGE_BMAP_SIZE * i;
+
+			chunking_pages_from_bmap(vb, vq, pfn_start +
+						 i * PFNS_PER_PAGE_BMAP,
+						 vb->page_bmap[i], bmap_len);
+		}
+		if (vb->resp_pos > 0)
+			send_resp_data(vb, vq, false);
+	} else {
+		struct scatterlist sg;
+		unsigned int len;
+
+		sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);
+
+		/*
+		 * We should always be able to add one buffer to an
+		 * empty queue
+		 */
+		virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
+		virtqueue_kick(vq);
+		/* When host has read buffer, this completes via balloon_ack */
+		wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+	}
 }
 
 static void set_page_pfns(struct virtio_balloon *vb,
@@ -138,13 +342,61 @@ static void set_page_pfns(struct virtio_balloon *vb,
 		page_to_balloon_pfn(page) + i);
 }
 
-static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
+static void set_page_bmap(struct virtio_balloon *vb,
+			  struct list_head *pages, struct virtqueue *vq)
+{
+	unsigned long pfn_start, pfn_stop;
+	struct page *page;
+	bool found;
+
+	vb->pfn_min = rounddown(vb->pfn_min, BITS_PER_LONG);
+	vb->pfn_max = roundup(vb->pfn_max, BITS_PER_LONG);
+
+	extend_page_bmap_size(vb, vb->pfn_max - vb->pfn_min + 1);
+	pfn_start = vb->pfn_min;
+
+	while (pfn_start < vb->pfn_max) {
+		pfn_stop = pfn_start + PFNS_PER_PAGE_BMAP * vb->page_bmaps;
+		pfn_stop = pfn_stop < vb->pfn_max ? pfn_stop : vb->pfn_max;
+
+		vb->pfn_start = pfn_start;
+		clear_page_bmap(vb);
+		found = false;
+
+		list_for_each_entry(page, pages, lru) {
+			unsigned long bmap_idx, bmap_pos, balloon_pfn;
+
+			balloon_pfn = page_to_balloon_pfn(page);
+			if (balloon_pfn < pfn_start || balloon_pfn > pfn_stop)
+				continue;
+			bmap_idx = (balloon_pfn - pfn_start) /
+				   PFNS_PER_PAGE_BMAP;
+			bmap_pos = (balloon_pfn - pfn_start) %
				   PFNS_PER_PAGE_BMAP;
+			set_bit(bmap_pos, vb->page_bmap[bmap_idx]);
+
+			found = true;
+		}
+		if (found) {
+			vb->pfn_stop = pfn_stop;
+			tell_host(vb, vq);
+		}
+		pfn_start = pfn_stop;
+	}
+}
+
+static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num)
 {
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
-	unsigned num_allocated_pages;
+	unsigned int num_allocated_pages;
+	bool chunking = virtio_has_feature(vb->vdev,
+					   VIRTIO_BALLOON_F_CHUNK_TRANSFER);
 
-	/* We can only do one array worth at a time.
-	 */
-	num = min(num, ARRAY_SIZE(vb->pfns));
+	if (chunking)
+		init_page_bmap_range(vb);
+	else
+		/* We can only do one array worth at a time. */
+		num = min(num, ARRAY_SIZE(vb->pfns));
 
 	mutex_lock(&vb->balloon_lock);
 	for (vb->num_pfns = 0; vb->num_pfns < num;
@@ -159,7 +411,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 			msleep(200);
 			break;
 		}
-		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		if (chunking)
+			update_page_bmap_range(vb, page);
+		else
+			set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
 		vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
 		if (!virtio_has_feature(vb->vdev,
 					VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
@@ -168,8 +423,13 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 
 	num_allocated_pages = vb->num_pfns;
 	/* Did we get any? */
-	if (vb->num_pfns != 0)
-		tell_host(vb, vb->inflate_vq);
+	if (vb->num_pfns != 0) {
+		if (chunking)
+			set_page_bmap(vb, &vb_dev_info->pages,
+				      vb->inflate_vq);
+		else
+			tell_host(vb, vb->inflate_vq);
+	}
 	mutex_unlock(&vb->balloon_lock);
 
 	return num_allocated_pages;
@@ -189,15 +449,20 @@ static void release_pages_balloon(struct virtio_balloon *vb,
 	}
 }
 
-static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
+static unsigned int leak_balloon(struct virtio_balloon *vb, size_t num)
 {
-	unsigned num_freed_pages;
+	unsigned int num_freed_pages;
 	struct page *page;
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
 	LIST_HEAD(pages);
+	bool chunking = virtio_has_feature(vb->vdev,
+					   VIRTIO_BALLOON_F_CHUNK_TRANSFER);
 
-	/* We can only do one array worth at a time. */
-	num = min(num, ARRAY_SIZE(vb->pfns));
+	if (chunking)
+		init_page_bmap_range(vb);
+	else
+		/* We can only do one array worth at a time.
+		 */
+		num = min(num, ARRAY_SIZE(vb->pfns));
 
 	mutex_lock(&vb->balloon_lock);
 	/* We can't release more pages than taken */
@@ -207,7 +472,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 		page = balloon_page_dequeue(vb_dev_info);
 		if (!page)
 			break;
-		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		if (chunking)
+			update_page_bmap_range(vb, page);
+		else
+			set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
 		list_add(&page->lru, &pages);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
@@ -218,8 +486,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	 * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
 	 * is true, we *have* to do it in this order
 	 */
-	if (vb->num_pfns != 0)
-		tell_host(vb, vb->deflate_vq);
+	if (vb->num_pfns != 0) {
+		if (chunking)
+			set_page_bmap(vb, &pages, vb->deflate_vq);
+		else
+			tell_host(vb, vb->deflate_vq);
+	}
 	release_pages_balloon(vb, &pages);
 	mutex_unlock(&vb->balloon_lock);
 	return num_freed_pages;
@@ -431,6 +703,12 @@ static int init_vqs(struct virtio_balloon *vb)
 }
 
 #ifdef CONFIG_BALLOON_COMPACTION
+static void tell_host_one_page(struct virtio_balloon *vb,
+			       struct virtqueue *vq, struct page *page)
+{
+	add_one_chunk(vb, vq, page_to_pfn(page), 1);
+}
+
 /*
  * virtballoon_migratepage - perform the balloon page migration on behalf of
  *			     a compation thread.
 *			     (called under page lock)
@@ -455,6 +733,8 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 	struct virtio_balloon *vb = container_of(vb_dev_info,
 			struct virtio_balloon, vb_dev_info);
 	unsigned long flags;
+	bool chunking = virtio_has_feature(vb->vdev,
+					   VIRTIO_BALLOON_F_CHUNK_TRANSFER);
 
 	/*
 	 * In order to avoid lock contention while migrating pages concurrently
@@ -475,15 +755,23 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 	vb_dev_info->isolated_pages--;
 	__count_vm_event(BALLOON_MIGRATE);
 	spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
-	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
-	set_page_pfns(vb, vb->pfns, newpage);
-	tell_host(vb, vb->inflate_vq);
+	if (chunking) {
+		tell_host_one_page(vb, vb->inflate_vq, newpage);
+	} else {
+		vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+		set_page_pfns(vb, vb->pfns, newpage);
+		tell_host(vb, vb->inflate_vq);
+	}
 
 	/* balloon's page migration 2nd step -- deflate "page" */
 	balloon_page_delete(page);
-	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
-	set_page_pfns(vb, vb->pfns, page);
-	tell_host(vb, vb->deflate_vq);
+	if (chunking) {
+		tell_host_one_page(vb, vb->deflate_vq, page);
+	} else {
+		vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+		set_page_pfns(vb, vb->pfns, page);
+		tell_host(vb, vb->deflate_vq);
+	}
 
 	mutex_unlock(&vb->balloon_lock);
 
@@ -533,6 +821,30 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	spin_lock_init(&vb->stop_update_lock);
 	vb->stop_update = false;
 	vb->num_pages = 0;
+
+	/*
+	 * By default, we allocate page_bmap[0] only. More page_bmap will be
+	 * allocated on demand.
+	 */
+	vb->page_bmap[0] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL);
+	if (!vb->page_bmap[0]) {
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_CHUNK_TRANSFER);
+	} else {
+		vb->page_bmaps = 1;
+		vb->resp_hdr =
+			kmalloc(sizeof(struct virtio_balloon_resp_hdr) +
+				PAGE_BMAP_SIZE, GFP_KERNEL);
+		if (!vb->resp_hdr) {
+			__virtio_clear_bit(vdev,
+					   VIRTIO_BALLOON_F_CHUNK_TRANSFER);
+			kfree(vb->page_bmap[0]);
+		} else {
+			vb->resp_data = (void *)vb->resp_hdr +
+					sizeof(struct virtio_balloon_resp_hdr);
+			vb->resp_pos = 0;
+			vb->resp_buf_size = PAGE_BMAP_SIZE;
+		}
+	}
 	mutex_init(&vb->balloon_lock);
 	init_waitqueue_head(&vb->acked);
 	vb->vdev = vdev;
@@ -578,6 +890,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
 out_del_vqs:
 	vdev->config->del_vqs(vdev);
 out_free_vb:
+	kfree(vb->resp_hdr);
+	free_page_bmap(vb);
 	kfree(vb);
 out:
 	return err;
@@ -611,6 +925,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
 	remove_common(vb);
 	if (vb->vb_dev_info.inode)
 		iput(vb->vb_dev_info.inode);
+	free_page_bmap(vb);
+	kfree(vb->resp_hdr);
 	kfree(vb);
 }
 
@@ -649,6 +965,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_CHUNK_TRANSFER,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..aa0e5f0 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_CHUNK_TRANSFER	3 /* Transfer pages in chunks */
 
 /* Size of a PFN in the balloon interface.
 */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -82,4 +83,12 @@ struct virtio_balloon_stat {
 	__virtio64 val;
 } __attribute__((packed));
 
+/* Response header structure */
+struct virtio_balloon_resp_hdr {
+	u8 cmd;
+	u8 flag;
+	__le16 id;		/* cmd id */
+	__le32 data_len;	/* Payload len in bytes */
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
2.7.4

From nobody Mon May 6 04:09:26 2024
From: Wei Wang
Date: Thu, 16 Mar 2017 15:08:46 +0800
Message-Id: <1489648127-37282-4-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
References: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH kernel v8 3/4] mm: add interface to offer info about unused pages

From: Liang Li

This patch adds a function that provides a snapshot of the unused pages
currently present in the system.
An important usage of this function is to provide the unused pages to the
live migration thread, which skips the transfer of those unused pages.
Newly used pages can be re-tracked by the dirty page logging mechanisms.

Signed-off-by: Liang Li
Signed-off-by: Wei Wang
---
 include/linux/mm.h |   3 ++
 mm/page_alloc.c    | 114 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b84615b..869749d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1764,6 +1764,9 @@ extern void free_area_init(unsigned long * zones_size);
 extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+extern int record_unused_pages(struct zone **start_zone, int order,
+			       __le64 *pages, unsigned int size,
+			       unsigned int *pos, bool part_fill);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f3e0c69..b72a7ac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4498,6 +4498,120 @@ void show_free_areas(unsigned int filter)
 	show_swap_cache_info();
 }
 
+static int __record_unused_pages(struct zone *zone, int order,
+				 __le64 *buf, unsigned int size,
+				 unsigned int *offset, bool part_fill)
+{
+	unsigned long pfn, flags;
+	int t, ret = 0;
+	struct list_head *curr;
+	__le64 *chunk;
+
+	if (zone_is_empty(zone))
+		return 0;
+
+	spin_lock_irqsave(&zone->lock, flags);
+
+	if (*offset + zone->free_area[order].nr_free > size && !part_fill) {
+		ret = -ENOSPC;
+		goto out;
+	}
+	for (t = 0; t < MIGRATE_TYPES; t++) {
+		list_for_each(curr, &zone->free_area[order].free_list[t]) {
+			pfn = page_to_pfn(list_entry(curr, struct page, lru));
+			chunk = buf + *offset;
+			if (*offset + 2 > size) {
+				ret = -ENOSPC;
+				goto out;
+			}
+			/* Align to the chunk format used in virtio-balloon */
+			*chunk = cpu_to_le64(pfn << 12);
*(chunk + 1) =3D cpu_to_le64((1 << order) << 12); + *offset +=3D 2; + } + } + +out: + spin_unlock_irqrestore(&zone->lock, flags); + + return ret; +} + +/* + * The record_unused_pages() function is used to record the system unused + * pages. The unused pages can be skipped to transfer during live migratio= n. + * Though the unused pages are dynamically changing, dirty page logging + * mechanisms are able to capture the newly used pages though they were + * recorded as unused pages via this function. + * + * This function scans the free page list of the specified order to record + * the unused pages, and chunks those continuous pages following the chunk + * format below: + * -------------------------------------- + * | Base (52-bit) | Rsvd (12-bit) | + * -------------------------------------- + * -------------------------------------- + * | Size (52-bit) | Rsvd (12-bit) | + * -------------------------------------- + * + * @start_zone: zone to start the record operation. + * @order: order of the free page list to record. + * @buf: buffer to record the unused page info in chunks. + * @size: size of the buffer in __le64 to record + * @offset: offset in the buffer to record. + * @part_fill: indicate if partial fill is used. 
+ *
+ * return -EINVAL if a parameter is invalid
+ * return -ENOSPC when the buffer is too small to record all the unused pages
+ * return 0 on success
+ */
+int record_unused_pages(struct zone **start_zone, int order,
+			__le64 *buf, unsigned int size,
+			unsigned int *offset, bool part_fill)
+{
+	struct zone *zone;
+	int ret = 0;
+	bool skip_check = false;
+
+	/* Make sure all the parameters are valid */
+	if (buf == NULL || offset == NULL || order >= MAX_ORDER)
+		return -EINVAL;
+
+	if (*start_zone != NULL) {
+		bool found = false;
+
+		for_each_populated_zone(zone) {
+			if (zone != *start_zone)
+				continue;
+			found = true;
+			break;
+		}
+		if (!found)
+			return -EINVAL;
+	} else
+		skip_check = true;
+
+	for_each_populated_zone(zone) {
+		/* Start from *start_zone if it's not NULL */
+		if (!skip_check) {
+			if (*start_zone != zone)
+				continue;
+			else
+				skip_check = true;
+		}
+		ret = __record_unused_pages(zone, order, buf, size,
+					    offset, part_fill);
+		if (ret < 0) {
+			/* record the failed zone */
+			*start_zone = zone;
+			break;
+		}
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(record_unused_pages);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
2.7.4

From nobody Mon May 6 04:09:26 2024
From: Wei Wang
Date: Thu, 16 Mar 2017 15:08:47 +0800
Message-Id: <1489648127-37282-5-git-send-email-wei.w.wang@intel.com>
In-Reply-To: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
References: <1489648127-37282-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH kernel v8 4/4] virtio-balloon: VIRTIO_BALLOON_F_HOST_REQ_VQ

From: Liang Li

Add a new vq, the host request vq. The host uses this vq to send requests to the guest; upon receiving a request, the guest responds with what the host needs via the same vq. This patch implements the request for getting the unused pages from the guest, so that unused guest pages are not transferred during live migration. For an idle guest with 8GB of RAM, this optimization shortens the total migration time to about 1/4 of the original. Furthermore, it is also possible to drop the guest's page cache before live migration; that optimization will be implemented on top of this new feature in the future.
Signed-off-by: Liang Li
Signed-off-by: Wei Wang
---
 drivers/virtio/virtio_balloon.c     | 140 ++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_balloon.h |  22 ++++++
 2 files changed, 157 insertions(+), 5 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 3f4a161..bcf2baa 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -64,7 +64,7 @@ struct balloon_page_chunk {
 typedef __le64 resp_data_t;
 struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *host_req_vq;
 
 	/* The balloon servicing is delegated to a freezable workqueue. */
 	struct work_struct update_balloon_stats_work;
@@ -104,6 +104,8 @@ struct virtio_balloon {
 	 * pfn_start & pfn_stop: records the start and stop pfn in each cover
 	 */
 	unsigned long pfn_min, pfn_max, pfn_start, pfn_stop;
+	/* Request header */
+	struct virtio_balloon_req_hdr req_hdr;
 	/*
 	 * The pages we've told the Host we're not using are enqueued
 	 * at vb_dev_info->pages list.
@@ -568,6 +570,81 @@ static void stats_handle_request(struct virtio_balloon *vb)
 	virtqueue_kick(vq);
 }
 
+static void __send_unused_pages(struct virtio_balloon *vb,
+				unsigned long req_id, unsigned int pos,
+				bool done)
+{
+	struct virtio_balloon_resp_hdr *hdr = vb->resp_hdr;
+	struct virtqueue *vq = vb->host_req_vq;
+
+	vb->resp_pos = pos;
+	hdr->cmd = BALLOON_GET_UNUSED_PAGES;
+	hdr->id = cpu_to_le16(req_id);
+	if (!done)
+		hdr->flag = BALLOON_FLAG_CONT;
+	else
+		hdr->flag = BALLOON_FLAG_DONE;
+
+	if (pos > 0 || done)
+		send_resp_data(vb, vq, true);
+}
+
+static void send_unused_pages(struct virtio_balloon *vb,
+			      unsigned long req_id)
+{
+	struct scatterlist sg_in;
+	unsigned int pos = 0;
+	struct virtqueue *vq = vb->host_req_vq;
+	int ret, order;
+	struct zone *zone = NULL;
+	bool part_fill = false;
+
+	mutex_lock(&vb->balloon_lock);
+
+	for (order = MAX_ORDER - 1; order >= 0; order--) {
+		ret = record_unused_pages(&zone, order, vb->resp_data,
+					  vb->resp_buf_size / sizeof(__le64),
+					  &pos, part_fill);
+		if (ret == -ENOSPC) {
+			if (pos == 0) {
+				void *new_resp_data;
+
+				new_resp_data = kmalloc(2 * vb->resp_buf_size,
+							GFP_KERNEL);
+				if (new_resp_data) {
+					kfree(vb->resp_data);
+					vb->resp_data = new_resp_data;
+					vb->resp_buf_size *= 2;
+				} else {
+					part_fill = true;
+					dev_warn(&vb->vdev->dev,
+						 "%s: part fill order: %d\n",
+						 __func__, order);
+				}
+			} else {
+				__send_unused_pages(vb, req_id, pos, false);
+				pos = 0;
+			}
+
+			if (!part_fill) {
+				order++;
+				continue;
+			}
+		} else
+			zone = NULL;
+
+		if (order == 0)
+			__send_unused_pages(vb, req_id, pos, true);
+	}
+
+	mutex_unlock(&vb->balloon_lock);
+	sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr));
+	virtqueue_add_inbuf(vq, &sg_in, 1, &vb->req_hdr, GFP_KERNEL);
+	virtqueue_kick(vq);
+}
+
 static void virtballoon_changed(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb = vdev->priv;
@@ -667,18 +744,53 @@ static void update_balloon_size_func(struct work_struct *work)
 	queue_work(system_freezable_wq, work);
 }
 
+static void handle_host_request(struct virtqueue *vq)
+{
+	struct virtio_balloon *vb = vq->vdev->priv;
+	struct virtio_balloon_req_hdr *hdr;
+	unsigned long req_id;
+	unsigned int len;
+
+	hdr = virtqueue_get_buf(vb->host_req_vq, &len);
+	if (!hdr || len != sizeof(vb->req_hdr))
+		return;
+
+	switch (hdr->cmd) {
+	case BALLOON_GET_UNUSED_PAGES:
+		req_id = le64_to_cpu(hdr->param);
+		send_unused_pages(vb, req_id);
+		break;
+	default:
+		dev_warn(&vb->vdev->dev, "%s: host request %d not supported\n",
+			 __func__, hdr->cmd);
+	}
+}
+
 static int init_vqs(struct virtio_balloon *vb)
 {
-	struct virtqueue *vqs[3];
-	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-	static const char * const names[] = { "inflate", "deflate", "stats" };
+	struct virtqueue *vqs[4];
+	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack,
+				       stats_request, handle_host_request };
+	static const char * const names[] = { "inflate", "deflate",
+					      "stats", "host_request" };
 	int err, nvqs;
 
 	/*
 	 * We expect two virtqueues: inflate and deflate, and
 	 * optionally stat.
 	 */
-	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ))
+		nvqs = 4;
+	else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
+		nvqs = 3;
+	else
+		nvqs = 2;
+
+	if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_CHUNK_TRANSFER);
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ);
+	}
+
 	err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names);
 	if (err)
 		return err;
@@ -699,6 +811,20 @@ static int init_vqs(struct virtio_balloon *vb)
 			BUG();
 		virtqueue_kick(vb->stats_vq);
 	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ)) {
+		struct scatterlist sg_in;
+
+		vb->host_req_vq = vqs[3];
+		sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr));
+		if (virtqueue_add_inbuf(vb->host_req_vq, &sg_in, 1,
+					&vb->req_hdr, GFP_KERNEL) < 0)
+			__virtio_clear_bit(vb->vdev,
+					   VIRTIO_BALLOON_F_HOST_REQ_VQ);
+		else
+			virtqueue_kick(vb->host_req_vq);
+	}
+
 	return 0;
 }
 
@@ -829,6 +955,7 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	vb->page_bmap[0] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL);
 	if (!vb->page_bmap[0]) {
 		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_CHUNK_TRANSFER);
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_HOST_REQ_VQ);
 	} else {
 		vb->page_bmaps = 1;
 		vb->resp_hdr =
@@ -837,6 +964,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		if (!vb->resp_hdr) {
 			__virtio_clear_bit(vdev,
 					   VIRTIO_BALLOON_F_CHUNK_TRANSFER);
+			__virtio_clear_bit(vdev,
+					   VIRTIO_BALLOON_F_HOST_REQ_VQ);
 			kfree(vb->page_bmap[0]);
 		} else {
 			vb->resp_data = (void *)vb->resp_hdr +
@@ -966,6 +1095,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
 	VIRTIO_BALLOON_F_CHUNK_TRANSFER,
+	VIRTIO_BALLOON_F_HOST_REQ_VQ,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index aa0e5f0..1f75bee 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_CHUNK_TRANSFER	3 /* Transfer pages in chunks */
+#define VIRTIO_BALLOON_F_HOST_REQ_VQ	4 /* Host request virtqueue */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -91,4 +92,25 @@ struct virtio_balloon_resp_hdr {
 	__le32 data_len; /* Payload len in bytes */
 };
 
+enum virtio_balloon_req_id {
+	/* Get unused page information */
+	BALLOON_GET_UNUSED_PAGES,
+};
+
+enum virtio_balloon_flag {
+	/* Have more data for a request */
+	BALLOON_FLAG_CONT,
+	/* No more data for a request */
+	BALLOON_FLAG_DONE,
+};
+
+struct virtio_balloon_req_hdr {
+	/* Used to distinguish different requests */
+	__le16 cmd;
+	/* Reserved */
+	__le16 reserved[3];
+	/* Request parameter */
+	__le64 param;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
2.7.4