Date: Tue, 10 Sep 2024 23:43:56 +0000
Subject: [RFC PATCH 25/39] KVM: guest_memfd: Split HugeTLB pages for guest_memfd use
From: Ackerley Tng
To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
    jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com,
    fvdl@google.com, jthoughton@google.com, seanjc@google.com,
    pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com,
    jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev,
    mike.kravetz@oracle.com
Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com,
    qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
    shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
    kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
    richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com,
    ajones@ventanamicro.com, vkuznets@redhat.com,
    maciej.wieczor-retman@intel.com, pgonda@google.com, oliver.upton@linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, kvm@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-fsdevel@kvack.org

From: Vishal Annapurve

In this patch, newly allocated HugeTLB pages are split into 4K regular
pages before they are provided to the requester (fallocate() or KVM). The
split pages are then reconstructed/merged back into HugeTLB pages before
being returned to HugeTLB.

This is an intermediate step that builds the page splitting/merging
functionality needed before guest_memfd files can be mmap()ed.
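For reference, here is a minimal userspace sketch (not part of this patch) of
how a VMM might drive the fallocate() path above so that guest_memfd allocates
a HugeTLB folio for this code to split. KVM_CREATE_GUEST_MEMFD and struct
kvm_create_guest_memfd are the existing guest_memfd creation interface; the
HugeTLB creation flag comes from elsewhere in this series, so its exact
name/value is not assumed here and is passed in by the caller.

/*
 * Illustrative sketch only. "hugetlb_flag" stands in for the guest_memfd
 * HugeTLB creation flag introduced elsewhere in this series.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_and_fill_gmem(int vm_fd, uint64_t size, uint64_t hugetlb_flag)
{
        struct kvm_create_guest_memfd args = {
                .size = size,           /* e.g. one 1G HugeTLB page */
                .flags = hugetlb_flag,  /* request HugeTLB backing */
        };
        int gmem_fd;

        gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
        if (gmem_fd < 0) {
                perror("KVM_CREATE_GUEST_MEMFD");
                return -1;
        }

        /*
         * fallocate() is one of the "requesters" mentioned above: it makes
         * guest_memfd allocate a HugeTLB folio, which is then split into
         * 4K folios before they are inserted into the filemap.
         */
        if (fallocate(gmem_fd, 0, 0, size) < 0) {
                perror("fallocate");
                close(gmem_fd);
                return -1;
        }

        return gmem_fd;
}

Punching a hole in the same range later (fallocate() with
FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE) is what drives the
truncation/reconstruction path in the diff below.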
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Co-developed-by: Vishal Annapurve
Signed-off-by: Vishal Annapurve
---
 virt/kvm/guest_memfd.c | 299 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 281 insertions(+), 18 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index eacbfdb950d1..8151df2c03e5 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -229,31 +229,206 @@ static int kvm_gmem_hugetlb_filemap_add_folio(struct address_space *mapping,
         return 0;
 }
 
+struct kvm_gmem_split_stash {
+        struct {
+                unsigned long _flags_2;
+                unsigned long _head_2;
+
+                void *_hugetlb_subpool;
+                void *_hugetlb_cgroup;
+                void *_hugetlb_cgroup_rsvd;
+                void *_hugetlb_hwpoison;
+        };
+        void *hugetlb_private;
+};
+
+static int kvm_gmem_hugetlb_stash_metadata(struct folio *folio)
+{
+        struct kvm_gmem_split_stash *stash;
+
+        stash = kmalloc(sizeof(*stash), GFP_KERNEL);
+        if (!stash)
+                return -ENOMEM;
+
+        stash->_flags_2 = folio->_flags_2;
+        stash->_head_2 = folio->_head_2;
+        stash->_hugetlb_subpool = folio->_hugetlb_subpool;
+        stash->_hugetlb_cgroup = folio->_hugetlb_cgroup;
+        stash->_hugetlb_cgroup_rsvd = folio->_hugetlb_cgroup_rsvd;
+        stash->_hugetlb_hwpoison = folio->_hugetlb_hwpoison;
+        stash->hugetlb_private = folio_get_private(folio);
+
+        folio_change_private(folio, (void *)stash);
+
+        return 0;
+}
+
+static int kvm_gmem_hugetlb_unstash_metadata(struct folio *folio)
+{
+        struct kvm_gmem_split_stash *stash;
+
+        stash = folio_get_private(folio);
+
+        if (!stash)
+                return -EINVAL;
+
+        folio->_flags_2 = stash->_flags_2;
+        folio->_head_2 = stash->_head_2;
+        folio->_hugetlb_subpool = stash->_hugetlb_subpool;
+        folio->_hugetlb_cgroup = stash->_hugetlb_cgroup;
+        folio->_hugetlb_cgroup_rsvd = stash->_hugetlb_cgroup_rsvd;
+        folio->_hugetlb_hwpoison = stash->_hugetlb_hwpoison;
+        folio_change_private(folio, stash->hugetlb_private);
+
+        kfree(stash);
+
+        return 0;
+}
+
+/**
+ * Reconstruct a HugeTLB folio from a contiguous block of folios where the first
+ * of the contiguous folios is @folio.
+ *
+ * The size of the contiguous block is of huge_page_size(@h). All the folios in
+ * the block are checked to have a refcount of 1 before reconstruction. After
+ * reconstruction, the reconstructed folio has a refcount of 1.
+ *
+ * Return 0 on success and negative error otherwise.
+ */
+static int kvm_gmem_hugetlb_reconstruct_folio(struct hstate *h, struct folio *folio)
+{
+        int ret;
+
+        WARN_ON((folio->index & (huge_page_order(h) - 1)) != 0);
+
+        ret = kvm_gmem_hugetlb_unstash_metadata(folio);
+        if (ret)
+                return ret;
+
+        if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+                kvm_gmem_hugetlb_stash_metadata(folio);
+                return -ENOMEM;
+        }
+
+        __folio_set_hugetlb(folio);
+
+        folio_set_count(folio, 1);
+
+        hugetlb_vmemmap_optimize_folio(h, folio);
+
+        return 0;
+}
+
+/* Basically folio_set_order(folio, 1) without the checks. */
+static inline void kvm_gmem_folio_set_order(struct folio *folio, unsigned int order)
+{
+        folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
+#ifdef CONFIG_64BIT
+        folio->_folio_nr_pages = 1U << order;
+#endif
+}
+
+/**
+ * Split a HugeTLB @folio of size huge_page_size(@h).
+ *
+ * After splitting, each split folio has a refcount of 1. There are no checks on
+ * refcounts before splitting.
+ *
+ * Return 0 on success and negative error otherwise.
+ */
+static int kvm_gmem_hugetlb_split_folio(struct hstate *h, struct folio *folio)
+{
+        int ret;
+
+        ret = hugetlb_vmemmap_restore_folio(h, folio);
+        if (ret)
+                return ret;
+
+        ret = kvm_gmem_hugetlb_stash_metadata(folio);
+        if (ret) {
+                hugetlb_vmemmap_optimize_folio(h, folio);
+                return ret;
+        }
+
+        kvm_gmem_folio_set_order(folio, 0);
+
+        destroy_compound_gigantic_folio(folio, huge_page_order(h));
+        __folio_clear_hugetlb(folio);
+
+        /*
+         * Remove the first folio from h->hugepage_activelist since it is no
+         * longer a HugeTLB page. The other split pages should not be on any
+         * lists.
+         */
+        hugetlb_folio_list_del(folio);
+
+        return 0;
+}
+
 static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *inode,
                                                             pgoff_t index)
 {
+        struct folio *allocated_hugetlb_folio;
+        pgoff_t hugetlb_first_subpage_index;
+        struct page *hugetlb_first_subpage;
         struct kvm_gmem_hugetlb *hgmem;
-        struct folio *folio;
+        struct page *requested_page;
         int ret;
+        int i;
 
         hgmem = kvm_gmem_hgmem(inode);
-        folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
-        if (IS_ERR(folio))
-                return folio;
+        allocated_hugetlb_folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
+        if (IS_ERR(allocated_hugetlb_folio))
+                return allocated_hugetlb_folio;
+
+        requested_page = folio_file_page(allocated_hugetlb_folio, index);
+        hugetlb_first_subpage = folio_file_page(allocated_hugetlb_folio, 0);
+        hugetlb_first_subpage_index = index & (huge_page_mask(hgmem->h) >> PAGE_SHIFT);
 
-        /* TODO: Fix index here to be aligned to huge page size. */
-        ret = kvm_gmem_hugetlb_filemap_add_folio(
-                inode->i_mapping, folio, index, htlb_alloc_mask(hgmem->h));
+        ret = kvm_gmem_hugetlb_split_folio(hgmem->h, allocated_hugetlb_folio);
         if (ret) {
-                folio_put(folio);
+                folio_put(allocated_hugetlb_folio);
                 return ERR_PTR(ret);
         }
 
+        for (i = 0; i < pages_per_huge_page(hgmem->h); ++i) {
+                struct folio *folio = page_folio(nth_page(hugetlb_first_subpage, i));
+
+                ret = kvm_gmem_hugetlb_filemap_add_folio(inode->i_mapping,
+                                                         folio,
+                                                         hugetlb_first_subpage_index + i,
+                                                         htlb_alloc_mask(hgmem->h));
+                if (ret) {
+                        /* TODO: handle cleanup properly. */
+                        pr_err("Handle cleanup properly index=%lx, ret=%d\n",
+                               hugetlb_first_subpage_index + i, ret);
+                        dump_page(nth_page(hugetlb_first_subpage, i), "check");
+                        return ERR_PTR(ret);
+                }
+
+                /*
+                 * Skip unlocking for the requested index since
+                 * kvm_gmem_get_folio() returns a locked folio.
+                 *
+                 * Do folio_put() to drop the refcount that came with the folio,
+                 * from splitting the folio. Splitting the folio has a refcount
+                 * to be in line with hugetlb_alloc_folio(), which returns a
+                 * folio with refcount 1.
+                 *
+                 * Skip folio_put() for requested index since
+                 * kvm_gmem_get_folio() returns a folio with refcount 1.
+                 */
+                if (hugetlb_first_subpage_index + i != index) {
+                        folio_unlock(folio);
+                        folio_put(folio);
+                }
+        }
+
         spin_lock(&inode->i_lock);
         inode->i_blocks += blocks_per_huge_page(hgmem->h);
         spin_unlock(&inode->i_lock);
 
-        return folio;
+        return page_folio(requested_page);
 }
 
 static struct folio *kvm_gmem_get_hugetlb_folio(struct inode *inode,
@@ -365,7 +540,9 @@ static inline void kvm_gmem_hugetlb_filemap_remove_folio(struct folio *folio)
 
 /**
  * Removes folios in range [@lstart, @lend) from page cache/filemap (@mapping),
- * returning the number of pages freed.
+ * returning the number of HugeTLB pages freed.
+ *
+ * @lend - @lstart must be a multiple of the HugeTLB page size.
  */
 static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
                                                   struct hstate *h,
@@ -373,37 +550,69 @@ static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
 {
         const pgoff_t end = lend >> PAGE_SHIFT;
         pgoff_t next = lstart >> PAGE_SHIFT;
+        LIST_HEAD(folios_to_reconstruct);
         struct folio_batch fbatch;
+        struct folio *folio, *tmp;
         int num_freed = 0;
+        int i;
 
+        /*
+         * TODO: Iterate over huge_page_size(h) blocks to avoid taking and
+         * releasing hugetlb_fault_mutex_table[hash] lock so often. When
+         * truncating, lstart and lend should be clipped to the size of this
+         * guest_memfd file, otherwise there would be too many iterations.
+         */
         folio_batch_init(&fbatch);
         while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
-                int i;
                 for (i = 0; i < folio_batch_count(&fbatch); ++i) {
                         struct folio *folio;
                         pgoff_t hindex;
                         u32 hash;
 
                         folio = fbatch.folios[i];
+
                         hindex = folio->index >> huge_page_order(h);
                         hash = hugetlb_fault_mutex_hash(mapping, hindex);
-
                         mutex_lock(&hugetlb_fault_mutex_table[hash]);
+
+                        /*
+                         * Collect first pages of HugeTLB folios for
+                         * reconstruction later.
+                         */
+                        if ((folio->index & ~(huge_page_mask(h) >> PAGE_SHIFT)) == 0)
+                                list_add(&folio->lru, &folios_to_reconstruct);
+
+                        /*
+                         * Before removing from filemap, take a reference so
+                         * sub-folios don't get freed. Don't free the sub-folios
+                         * until after reconstruction.
+                         */
+                        folio_get(folio);
+
                         kvm_gmem_hugetlb_filemap_remove_folio(folio);
-                        mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-                        num_freed++;
+                        mutex_unlock(&hugetlb_fault_mutex_table[hash]);
                 }
                 folio_batch_release(&fbatch);
                 cond_resched();
         }
 
+        list_for_each_entry_safe(folio, tmp, &folios_to_reconstruct, lru) {
+                kvm_gmem_hugetlb_reconstruct_folio(h, folio);
+                hugetlb_folio_list_move(folio, &h->hugepage_activelist);
+
+                folio_put(folio);
+                num_freed++;
+        }
+
         return num_freed;
 }
 
 /**
  * Removes folios in range [@lstart, @lend) from page cache of inode, updates
  * inode metadata and hugetlb reservations.
+ *
+ * @lend - @lstart must be a multiple of the HugeTLB page size.
  */
 static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
                                                    loff_t lstart, loff_t lend)
@@ -427,6 +636,56 @@ static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
         spin_unlock(&inode->i_lock);
 }
 
+/**
+ * Zeroes offsets [@start, @end) in a folio from @mapping.
+ *
+ * [@start, @end) must be within the same folio.
+ */
+static void kvm_gmem_zero_partial_page(
+        struct address_space *mapping, loff_t start, loff_t end)
+{
+        struct folio *folio;
+        pgoff_t idx = start >> PAGE_SHIFT;
+
+        folio = filemap_lock_folio(mapping, idx);
+        if (IS_ERR(folio))
+                return;
+
+        start = offset_in_folio(folio, start);
+        end = offset_in_folio(folio, end);
+        if (!end)
+                end = folio_size(folio);
+
+        folio_zero_segment(folio, (size_t)start, (size_t)end);
+        folio_unlock(folio);
+        folio_put(folio);
+}
+
+/**
+ * Zeroes all pages in range [@start, @end) in @mapping.
+ *
+ * hugetlb_zero_partial_page() would work if this had been a full page, but is
+ * not suitable since the pages have been split.
+ *
+ * truncate_inode_pages_range() isn't the right function because it removes
+ * pages from the page cache; this function only zeroes the pages.
+ */
+static void kvm_gmem_hugetlb_zero_split_pages(struct address_space *mapping,
+                                              loff_t start, loff_t end)
+{
+        loff_t aligned_start;
+        loff_t index;
+
+        aligned_start = round_up(start, PAGE_SIZE);
+
+        kvm_gmem_zero_partial_page(mapping, start, min(aligned_start, end));
+
+        for (index = aligned_start; index < end; index += PAGE_SIZE) {
+                kvm_gmem_zero_partial_page(mapping, index,
+                                           min((loff_t)(index + PAGE_SIZE), end));
+        }
+}
+
 static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
                                             loff_t lend)
 {
@@ -442,8 +701,8 @@ static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
                 full_hpage_end = round_down(lend, hsize);
 
         if (lstart < full_hpage_start) {
-                hugetlb_zero_partial_page(h, inode->i_mapping, lstart,
-                                          full_hpage_start);
+                kvm_gmem_hugetlb_zero_split_pages(inode->i_mapping, lstart,
+                                                  full_hpage_start);
         }
 
         if (full_hpage_end > full_hpage_start) {
@@ -452,8 +711,8 @@ static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
         }
 
         if (lend > full_hpage_end) {
-                hugetlb_zero_partial_page(h, inode->i_mapping, full_hpage_end,
-                                          lend);
+                kvm_gmem_hugetlb_zero_split_pages(inode->i_mapping, full_hpage_end,
+                                                  lend);
         }
 }
 
@@ -1060,6 +1319,10 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 
         if (folio_test_hwpoison(folio)) {
                 folio_unlock(folio);
+                /*
+                 * TODO: this folio may be part of a HugeTLB folio. Perhaps
+                 * reconstruct and then free page?
+                 */
                 folio_put(folio);
                 return ERR_PTR(-EHWPOISON);
         }
-- 
2.46.0.598.g6f2099f65c-goog