From nobody Thu May 2 11:27:46 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Juergen Gross, Konrad Rzeszutek Wilk, Roger Pau Monné, Jens Axboe,
    Boris Ostrovsky, Stefano Stabellini
Subject: [PATCH 1/2] xen: add helpers for caching grant mapping pages
Date: Mon, 7 Dec 2020 14:30:23 +0100
Message-Id: <20201207133024.16621-2-jgross@suse.com>
In-Reply-To: <20201207133024.16621-1-jgross@suse.com>
References: <20201207133024.16621-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Instead of having similar helpers in multiple backend drivers use
common helpers for caching pages allocated via gnttab_alloc_pages().

Make use of those helpers in blkback and scsiback.

Signed-off-by: Juergen Gross
Reviewed-by: Boris Ostrovsky
---
 drivers/block/xen-blkback/blkback.c | 89 ++++++-----------------------
 drivers/block/xen-blkback/common.h  |  4 +-
 drivers/block/xen-blkback/xenbus.c  |  6 +-
 drivers/xen/grant-table.c           | 72 +++++++++++++++++++++++
 drivers/xen/xen-scsiback.c          | 60 ++++---------------
 include/xen/grant_table.h           | 13 +++++
 6 files changed, 116 insertions(+), 128 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 501e9dacfff9..9ebf53903d7b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -132,73 +132,12 @@ module_param(log_stats, int, 0644);
 
 #define BLKBACK_INVALID_HANDLE (~0)
 
-/* Number of free pages to remove on each call to gnttab_free_pages */
-#define NUM_BATCH_FREE_PAGES 10
-
 static inline bool persistent_gnt_timeout(struct persistent_gnt *persistent_gnt)
 {
 	return pgrant_timeout && (jiffies - persistent_gnt->last_used >=
 				  HZ * pgrant_timeout);
 }
 
-static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	if (list_empty(&ring->free_pages)) {
-		BUG_ON(ring->free_pages_num != 0);
-		spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	BUG_ON(ring->free_pages_num == 0);
-	page[0] = list_first_entry(&ring->free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	ring->free_pages_num--;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-
-	return 0;
-}
-
-static inline void put_free_pages(struct xen_blkif_ring *ring, struct page **page,
-				  int num)
-{
-	unsigned long flags;
-	int i;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &ring->free_pages);
-	ring->free_pages_num += num;
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-}
-
-static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int num)
-{
-	/* Remove requested pages in batches of NUM_BATCH_FREE_PAGES */
-	struct page *page[NUM_BATCH_FREE_PAGES];
-	unsigned int num_pages = 0;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ring->free_pages_lock, flags);
-	while (ring->free_pages_num > num) {
-		BUG_ON(list_empty(&ring->free_pages));
-		page[num_pages] = list_first_entry(&ring->free_pages,
-						   struct page, lru);
-		list_del(&page[num_pages]->lru);
-		ring->free_pages_num--;
-		if (++num_pages == NUM_BATCH_FREE_PAGES) {
-			spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-			gnttab_free_pages(num_pages, page);
-			spin_lock_irqsave(&ring->free_pages_lock, flags);
-			num_pages = 0;
-		}
-	}
-	spin_unlock_irqrestore(&ring->free_pages_lock, flags);
-	if (num_pages != 0)
-		gnttab_free_pages(num_pages, page);
-}
-
 #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
 
 static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags);
@@ -331,7 +270,8 @@ static void free_persistent_gnts(struct xen_blkif_ring *ring, struct rb_root *ro
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
 
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 
@@ -371,7 +311,8 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
 			unmap_data.count = segs_to_unmap;
 			BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-			put_free_pages(ring, pages, segs_to_unmap);
+			gnttab_page_cache_put(&ring->free_pages, pages,
+					      segs_to_unmap);
 			segs_to_unmap = 0;
 		}
 		kfree(persistent_gnt);
@@ -379,7 +320,7 @@ void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 	if (segs_to_unmap > 0) {
 		unmap_data.count = segs_to_unmap;
 		BUG_ON(gnttab_unmap_refs_sync(&unmap_data));
-		put_free_pages(ring, pages, segs_to_unmap);
+		gnttab_page_cache_put(&ring->free_pages, pages, segs_to_unmap);
 	}
 }
 
@@ -664,9 +605,10 @@ int xen_blkif_schedule(void *arg)
 
 		/* Shrink the free pages pool if it is too large. */
 		if (time_before(jiffies, blkif->buffer_squeeze_end))
-			shrink_free_pagepool(ring, 0);
+			gnttab_page_cache_shrink(&ring->free_pages, 0);
 		else
-			shrink_free_pagepool(ring, max_buffer_pages);
+			gnttab_page_cache_shrink(&ring->free_pages,
+						 max_buffer_pages);
 
 		if (log_stats && time_after(jiffies, ring->st_print))
 			print_stats(ring);
@@ -697,7 +639,7 @@ void xen_blkbk_free_caches(struct xen_blkif_ring *ring)
 	ring->persistent_gnt_c = 0;
 
 	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(ring, 0 /* All */);
+	gnttab_page_cache_shrink(&ring->free_pages, 0 /* All */);
 }
 
 static unsigned int xen_blkbk_unmap_prepare(
@@ -736,7 +678,7 @@ static void xen_blkbk_unmap_and_respond_callback(int result, struct gntab_unmap_
 	   but is this the best way to deal with this? */
 	BUG_ON(result);
 
-	put_free_pages(ring, data->pages, data->count);
+	gnttab_page_cache_put(&ring->free_pages, data->pages, data->count);
 	make_response(ring, pending_req->id,
 		      pending_req->operation, pending_req->status);
 	free_req(ring, pending_req);
@@ -803,7 +745,8 @@ static void xen_blkbk_unmap(struct xen_blkif_ring *ring,
 		if (invcount) {
 			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
 			BUG_ON(ret);
-			put_free_pages(ring, unmap_pages, invcount);
+			gnttab_page_cache_put(&ring->free_pages, unmap_pages,
+					      invcount);
 		}
 		pages += batch;
 		num -= batch;
@@ -850,7 +793,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			pages[i]->page = persistent_gnt->page;
 			pages[i]->persistent_gnt = persistent_gnt;
 		} else {
-			if (get_free_page(ring, &pages[i]->page))
+			if (gnttab_page_cache_get(&ring->free_pages,
+						  &pages[i]->page))
 				goto out_of_memory;
 			addr = vaddr(pages[i]->page);
 			pages_to_gnt[segs_to_map] = pages[i]->page;
@@ -883,7 +827,8 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 			BUG_ON(new_map_idx >= segs_to_map);
 			if (unlikely(map[new_map_idx].status != 0)) {
 				pr_debug("invalid buffer -- could not remap it\n");
-				put_free_pages(ring, &pages[seg_idx]->page, 1);
+				gnttab_page_cache_put(&ring->free_pages,
+						      &pages[seg_idx]->page, 1);
 				pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE;
 				ret |= 1;
 				goto next;
@@ -944,7 +889,7 @@ static int xen_blkbk_map(struct xen_blkif_ring *ring,
 
 out_of_memory:
 	pr_alert("%s: out of memory\n", __func__);
-	put_free_pages(ring, pages_to_gnt, segs_to_map);
+	gnttab_page_cache_put(&ring->free_pages, pages_to_gnt, segs_to_map);
 	for (i = last_map; i < num; i++)
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 	return -ENOMEM;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index c6ea5d38c509..a1b9df2c4ef1 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -288,9 +288,7 @@ struct xen_blkif_ring {
 	struct work_struct	persistent_purge_work;
 
 	/* Buffer of free pages to map grant refs. */
-	spinlock_t		free_pages_lock;
-	int			free_pages_num;
-	struct list_head	free_pages;
+	struct gnttab_page_cache free_pages;
 
 	struct work_struct	free_work;
 	/* Thread shutdown wait queue. */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index f5705569e2a7..76912c584a76 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -144,8 +144,7 @@ static int xen_blkif_alloc_rings(struct xen_blkif *blkif)
 		INIT_LIST_HEAD(&ring->pending_free);
 		INIT_LIST_HEAD(&ring->persistent_purge_list);
 		INIT_WORK(&ring->persistent_purge_work, xen_blkbk_unmap_purged_grants);
-		spin_lock_init(&ring->free_pages_lock);
-		INIT_LIST_HEAD(&ring->free_pages);
+		gnttab_page_cache_init(&ring->free_pages);
 
 		spin_lock_init(&ring->pending_free_lock);
 		init_waitqueue_head(&ring->pending_free_wq);
@@ -317,8 +316,7 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif)
 		BUG_ON(atomic_read(&ring->persistent_gnt_in_use) != 0);
 		BUG_ON(!list_empty(&ring->persistent_purge_list));
 		BUG_ON(!RB_EMPTY_ROOT(&ring->persistent_gnts));
-		BUG_ON(!list_empty(&ring->free_pages));
-		BUG_ON(ring->free_pages_num != 0);
+		BUG_ON(ring->free_pages.num_pages != 0);
 		BUG_ON(ring->persistent_gnt_c != 0);
 		WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));
 		ring->active = false;
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 523dcdf39cc9..e2e42912f241 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,6 +813,78 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+void gnttab_page_cache_init(struct gnttab_page_cache *cache)
+{
+	spin_lock_init(&cache->lock);
+	INIT_LIST_HEAD(&cache->pages);
+	cache->num_pages = 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
+
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	if (list_empty(&cache->pages)) {
+		spin_unlock_irqrestore(&cache->lock, flags);
+		return gnttab_alloc_pages(1, page);
+	}
+
+	page[0] = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page[0]->lru);
+	cache->num_pages--;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_get);
+
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
+			   unsigned int num)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	for (i = 0; i < num; i++)
+		list_add(&page[i]->lru, &cache->pages);
+	cache->num_pages += num;
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_put);
+
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
+{
+	struct page *page[10];
+	unsigned int i = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cache->lock, flags);
+
+	while (cache->num_pages > num) {
+		page[i] = list_first_entry(&cache->pages, struct page, lru);
+		list_del(&page[i]->lru);
+		cache->num_pages--;
+		if (++i == ARRAY_SIZE(page)) {
+			spin_unlock_irqrestore(&cache->lock, flags);
+			gnttab_free_pages(i, page);
+			i = 0;
+			spin_lock_irqsave(&cache->lock, flags);
+		}
+	}
+
+	spin_unlock_irqrestore(&cache->lock, flags);
+
+	if (i != 0)
+		gnttab_free_pages(i, page);
+}
+EXPORT_SYMBOL_GPL(gnttab_page_cache_shrink);
+
 void gnttab_pages_clear_private(int nr_pages, struct page **pages)
 {
 	int i;
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 4acc4e899600..862162dca33c 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -99,6 +99,8 @@ struct vscsibk_info {
 	struct list_head v2p_entry_lists;
 
 	wait_queue_head_t waiting_to_free;
+
+	struct gnttab_page_cache free_pages;
 };
 
 /* theoretical maximum of grants for one request */
@@ -188,10 +190,6 @@ module_param_named(max_buffer_pages, scsiback_max_buffer_pages, int, 0644);
 MODULE_PARM_DESC(max_buffer_pages,
 "Maximum number of free pages to keep in backend buffer");
 
-static DEFINE_SPINLOCK(free_pages_lock);
-static int free_pages_num;
-static LIST_HEAD(scsiback_free_pages);
-
 /* Global spinlock to protect scsiback TPG list */
 static DEFINE_MUTEX(scsiback_mutex);
 static LIST_HEAD(scsiback_list);
@@ -207,41 +205,6 @@ static void scsiback_put(struct vscsibk_info *info)
 		wake_up(&info->waiting_to_free);
 }
 
-static void put_free_pages(struct page **page, int num)
-{
-	unsigned long flags;
-	int i = free_pages_num + num, n = num;
-
-	if (num == 0)
-		return;
-	if (i > scsiback_max_buffer_pages) {
-		n = min(num, i - scsiback_max_buffer_pages);
-		gnttab_free_pages(n, page + num - n);
-		n = num - n;
-	}
-	spin_lock_irqsave(&free_pages_lock, flags);
-	for (i = 0; i < n; i++)
-		list_add(&page[i]->lru, &scsiback_free_pages);
-	free_pages_num += n;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-}
-
-static int get_free_page(struct page **page)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&free_pages_lock, flags);
-	if (list_empty(&scsiback_free_pages)) {
-		spin_unlock_irqrestore(&free_pages_lock, flags);
-		return gnttab_alloc_pages(1, page);
-	}
-	page[0] = list_first_entry(&scsiback_free_pages, struct page, lru);
-	list_del(&page[0]->lru);
-	free_pages_num--;
-	spin_unlock_irqrestore(&free_pages_lock, flags);
-	return 0;
-}
-
 static unsigned long vaddr_page(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
@@ -302,7 +265,8 @@ static void scsiback_fast_flush_area(struct vscsibk_pend *req)
 		BUG_ON(err);
 	}
 
-	put_free_pages(req->pages, req->n_grants);
+	gnttab_page_cache_put(&req->info->free_pages, req->pages,
+			      req->n_grants);
 	req->n_grants = 0;
 }
 
@@ -445,8 +409,8 @@ static int scsiback_gnttab_data_map_list(struct vscsibk_pend *pending_req,
 	struct vscsibk_info *info = pending_req->info;
 
 	for (i = 0; i < cnt; i++) {
-		if (get_free_page(pg + mapcount)) {
-			put_free_pages(pg, mapcount);
+		if (gnttab_page_cache_get(&info->free_pages, pg + mapcount)) {
+			gnttab_page_cache_put(&info->free_pages, pg, mapcount);
 			pr_err("no grant page\n");
 			return -ENOMEM;
 		}
@@ -796,6 +760,8 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info,
 		cond_resched();
 	}
 
+	gnttab_page_cache_shrink(&info->free_pages, scsiback_max_buffer_pages);
+
 	RING_FINAL_CHECK_FOR_REQUESTS(&info->ring, more_to_do);
 	return more_to_do;
 }
@@ -1233,6 +1199,8 @@ static int scsiback_remove(struct xenbus_device *dev)
 
 	scsiback_release_translation_entry(info);
 
+	gnttab_page_cache_shrink(&info->free_pages, 0);
+
 	dev_set_drvdata(&dev->dev, NULL);
 
 	return 0;
@@ -1263,6 +1231,7 @@ static int scsiback_probe(struct xenbus_device *dev,
 	info->irq = 0;
 	INIT_LIST_HEAD(&info->v2p_entry_lists);
 	spin_lock_init(&info->v2p_lock);
+	gnttab_page_cache_init(&info->free_pages);
 
 	err = xenbus_printf(XBT_NIL, dev->nodename, "feature-sg-grant", "%u",
 			    SG_ALL);
@@ -1879,13 +1848,6 @@ static int __init scsiback_init(void)
 
 static void __exit scsiback_exit(void)
 {
-	struct page *page;
-
-	while (free_pages_num) {
-		if (get_free_page(&page))
-			BUG();
-		gnttab_free_pages(1, &page);
-	}
 	target_unregister_template(&scsiback_ops);
 	xenbus_unregister_driver(&scsiback_driver);
 }
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 9bc5bc07d4d3..c6ef8ffc1a09 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -198,6 +198,19 @@ void gnttab_free_auto_xlat_frames(void);
 int gnttab_alloc_pages(int nr_pages, struct page **pages);
 void gnttab_free_pages(int nr_pages, struct page **pages);
 
+struct gnttab_page_cache {
+	spinlock_t lock;
+	struct list_head pages;
+	unsigned int num_pages;
+};
+
+void gnttab_page_cache_init(struct gnttab_page_cache *cache);
+int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page);
+void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
+			   unsigned int num);
+void gnttab_page_cache_shrink(struct gnttab_page_cache *cache,
+			      unsigned int num);
+
 #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
 struct gnttab_dma_alloc_args {
 	/* Device for which DMA memory will be/was allocated. */
-- 
2.26.2

From nobody Thu May 2 11:27:46 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini
Subject: [PATCH 2/2] xen: don't use page->lru for ZONE_DEVICE memory
Date: Mon, 7 Dec 2020 14:30:24 +0100
Message-Id: <20201207133024.16621-3-jgross@suse.com>
In-Reply-To: <20201207133024.16621-1-jgross@suse.com>
References: <20201207133024.16621-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
memory") introduced usage of ZONE_DEVICE memory for foreign memory
mappings.

Unfortunately this collides with using page->lru for Xen backend
private page caches.

Fix that by using page->zone_device_data instead.

Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Juergen Gross
Acked-by: Roger Pau Monné
Reviewed-by: Boris Ostrovsky
Reviewed-by: Jason Andryuk
---
 drivers/xen/grant-table.c       | 65 +++++++++++++++++++++++++++++----
 drivers/xen/unpopulated-alloc.c | 20 +++++-----
 include/xen/grant_table.h       |  4 ++
 3 files changed, 73 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index e2e42912f241..696663a439fe 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -813,10 +813,63 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(gnttab_alloc_pages);
 
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	cache->pages = NULL;
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return !cache->pages;
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = cache->pages;
+	cache->pages = page->zone_device_data;
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	page->zone_device_data = cache->pages;
+	cache->pages = page;
+}
+#else
+static inline void cache_init(struct gnttab_page_cache *cache)
+{
+	INIT_LIST_HEAD(&cache->pages);
+}
+
+static inline bool cache_empty(struct gnttab_page_cache *cache)
+{
+	return list_empty(&cache->pages);
+}
+
+static inline struct page *cache_deq(struct gnttab_page_cache *cache)
+{
+	struct page *page;
+
+	page = list_first_entry(&cache->pages, struct page, lru);
+	list_del(&page->lru);
+
+	return page;
+}
+
+static inline void cache_enq(struct gnttab_page_cache *cache, struct page *page)
+{
+	list_add(&page->lru, &cache->pages);
+}
+#endif
+
 void gnttab_page_cache_init(struct gnttab_page_cache *cache)
 {
 	spin_lock_init(&cache->lock);
-	INIT_LIST_HEAD(&cache->pages);
+	cache_init(cache);
 	cache->num_pages = 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_page_cache_init);
@@ -827,13 +880,12 @@ int gnttab_page_cache_get(struct gnttab_page_cache *cache, struct page **page)
 
 	spin_lock_irqsave(&cache->lock, flags);
 
-	if (list_empty(&cache->pages)) {
+	if (cache_empty(cache)) {
 		spin_unlock_irqrestore(&cache->lock, flags);
 		return gnttab_alloc_pages(1, page);
 	}
 
-	page[0] = list_first_entry(&cache->pages, struct page, lru);
-	list_del(&page[0]->lru);
+	page[0] = cache_deq(cache);
 	cache->num_pages--;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -851,7 +903,7 @@ void gnttab_page_cache_put(struct gnttab_page_cache *cache, struct page **page,
 	spin_lock_irqsave(&cache->lock, flags);
 
 	for (i = 0; i < num; i++)
-		list_add(&page[i]->lru, &cache->pages);
+		cache_enq(cache, page[i]);
 	cache->num_pages += num;
 
 	spin_unlock_irqrestore(&cache->lock, flags);
@@ -867,8 +919,7 @@ void gnttab_page_cache_shrink(struct gnttab_page_cache *cache, unsigned int num)
 	spin_lock_irqsave(&cache->lock, flags);
 
 	while (cache->num_pages > num) {
-		page[i] = list_first_entry(&cache->pages, struct page, lru);
-		list_del(&page[i]->lru);
+		page[i] = cache_deq(cache);
 		cache->num_pages--;
 		if (++i == ARRAY_SIZE(page)) {
 			spin_unlock_irqrestore(&cache->lock, flags);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index 8c512ea550bb..7762c1bb23cb 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -12,7 +12,7 @@
 #include
 
 static DEFINE_MUTEX(list_lock);
-static LIST_HEAD(page_list);
+static struct page *page_list;
 static unsigned int list_count;
 
 static int fill_list(unsigned int nr_pages)
@@ -84,7 +84,8 @@ static int fill_list(unsigned int nr_pages)
 		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
 
 		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
-		list_add(&pg->lru, &page_list);
+		pg->zone_device_data = page_list;
+		page_list = pg;
 		list_count++;
 	}
 
@@ -118,12 +119,10 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 	}
 
 	for (i = 0; i < nr_pages; i++) {
-		struct page *pg = list_first_entry_or_null(&page_list,
-							   struct page,
-							   lru);
+		struct page *pg = page_list;
 
 		BUG_ON(!pg);
-		list_del(&pg->lru);
+		page_list = pg->zone_device_data;
 		list_count--;
 		pages[i] = pg;
 
@@ -134,7 +133,8 @@ int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 			unsigned int j;
 
 			for (j = 0; j <= i; j++) {
-				list_add(&pages[j]->lru, &page_list);
+				pages[j]->zone_device_data = page_list;
+				page_list = pages[j];
 				list_count++;
 			}
 			goto out;
@@ -160,7 +160,8 @@ void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
 
 	mutex_lock(&list_lock);
 	for (i = 0; i < nr_pages; i++) {
-		list_add(&pages[i]->lru, &page_list);
+		pages[i]->zone_device_data = page_list;
+		page_list = pages[i];
 		list_count++;
 	}
 	mutex_unlock(&list_lock);
@@ -189,7 +190,8 @@ static int __init init(void)
 			struct page *pg =
 				pfn_to_page(xen_extra_mem[i].start_pfn + j);
 
-			list_add(&pg->lru, &page_list);
+			pg->zone_device_data = page_list;
+			page_list = pg;
 			list_count++;
 		}
 	}
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index c6ef8ffc1a09..b9c937b3a149 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -200,7 +200,11 @@ void gnttab_free_pages(int nr_pages, struct page **pages);
 
 struct gnttab_page_cache {
 	spinlock_t lock;
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+	struct page *pages;
+#else
 	struct list_head pages;
+#endif
 	unsigned int num_pages;
};
 
-- 
2.26.2