From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:32 +0000
Subject: [RFC PATCH 01/39] mm: hugetlb: Simplify logic in dequeue_hugetlb_folio_vma()

Replace the arguments avoid_reserve and chg in dequeue_hugetlb_folio_vma()
with a single argument, use_hstate_resv, to make
dequeue_hugetlb_folio_vma() easier to understand.

use_hstate_resv indicates whether the folio to be dequeued should be
taken from the reservations in the hstate.

If use_hstate_resv is true, the folio is taken from the hstate's
reservations: h->resv_huge_pages is decremented, and the folio is marked
so that the reservation is restored when the folio is freed.

If use_hstate_resv is false, a folio has to be taken from the pool, so
available_huge_pages(h) must be non-zero; failing that, goto err.

The bool use_hstate_resv can be reused within
dequeue_hugetlb_folio_vma()'s caller, alloc_hugetlb_folio().

No functional changes are intended. As proof, the original two early-exit
conditions

    !vma_has_reserves(vma, chg) && !available_huge_pages(h)

and

    avoid_reserve && !available_huge_pages(h)

can be combined into

    (avoid_reserve || !vma_has_reserves(vma, chg)) && !available_huge_pages(h)

By De Morgan's law, the negation of avoid_reserve ||
!vma_has_reserves(vma, chg) is !avoid_reserve && vma_has_reserves(vma, chg),
which is exactly the definition of use_hstate_resv, so the new check
!use_hstate_resv && !available_huge_pages(h) is equivalent.

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
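Note (placed after the --- marker, so git-am ignores it): the
equivalence above can also be checked exhaustively. A minimal standalone
sketch, not part of the kernel tree, where a, v and p stand in for
avoid_reserve, vma_has_reserves(vma, chg) and available_huge_pages(h)
respectively:

	#include <assert.h>
	#include <stdbool.h>

	int main(void)
	{
		for (int i = 0; i < 8; i++) {
			bool a = i & 1, v = i & 2, p = i & 4;

			/* old code: either of two conditions takes the err path */
			bool old_err = (!v && !p) || (a && !p);

			/* new code: single check on use_hstate_resv */
			bool use_hstate_resv = !a && v;
			bool new_err = !use_hstate_resv && !p;

			assert(old_err == new_err);
		}
		return 0;
	}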
 mm/hugetlb.c | 33 +++++++++++----------------------
 1 file changed, 11 insertions(+), 22 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aaf508be0a2b..af5c6bbc9ff0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1280,8 +1280,9 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
 	}
 
 	/*
-	 * Only the process that called mmap() has reserves for
-	 * private mappings.
+	 * Only the process that called mmap() has reserves for private
+	 * mappings. A child process with MAP_PRIVATE mappings created by their
+	 * parent have no page reserves.
 	 */
 	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
 		/*
@@ -1393,8 +1394,7 @@ static unsigned long available_huge_pages(struct hstate *h)
 
 static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 				struct vm_area_struct *vma,
-				unsigned long address, int avoid_reserve,
-				long chg)
+				unsigned long address, bool use_hstate_resv)
 {
 	struct folio *folio = NULL;
 	struct mempolicy *mpol;
@@ -1402,16 +1402,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	nodemask_t *nodemask;
 	int nid;
 
-	/*
-	 * A child process with MAP_PRIVATE mappings created by their parent
-	 * have no page reserves. This check ensures that reservations are
-	 * not "stolen". The child may still get SIGKILLed
-	 */
-	if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
-		goto err;
-
-	/* If reserves cannot be used, ensure enough pages are in the pool */
-	if (avoid_reserve && !available_huge_pages(h))
+	if (!use_hstate_resv && !available_huge_pages(h))
 		goto err;
 
 	gfp_mask = htlb_alloc_mask(h);
@@ -1429,7 +1420,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, nid, nodemask);
 
-	if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
+	if (folio && use_hstate_resv) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
@@ -3130,6 +3121,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	struct mem_cgroup *memcg;
 	bool deferred_reserve;
 	gfp_t gfp = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
+	bool use_hstate_resv;
 
 	memcg = get_mem_cgroup_from_current();
 	memcg_charge_ret = mem_cgroup_hugetlb_try_charge(memcg, gfp, nr_pages);
@@ -3190,20 +3182,17 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	if (ret)
 		goto out_uncharge_cgroup_reservation;
 
+	use_hstate_resv = !avoid_reserve && vma_has_reserves(vma, gbl_chg);
+
 	spin_lock_irq(&hugetlb_lock);
-	/*
-	 * glb_chg is passed to indicate whether or not a page must be taken
-	 * from the global free pool (global change). gbl_chg == 0 indicates
-	 * a reservation exists for the allocation.
-	 */
-	folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
+	folio = dequeue_hugetlb_folio_vma(h, vma, addr, use_hstate_resv);
 	if (!folio) {
 		spin_unlock_irq(&hugetlb_lock);
 		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
 		if (!folio)
 			goto out_uncharge_cgroup;
 		spin_lock_irq(&hugetlb_lock);
-		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
+		if (use_hstate_resv) {
 			folio_set_hugetlb_restore_reserve(folio);
 			h->resv_huge_pages--;
 		}
-- 
2.46.0.598.g6f2099f65c-goog

From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:33 +0000
Message-ID: <416274da1bb0f07db37944578f9e7d96dac3873c.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 02/39] mm: hugetlb: Refactor vma_has_reserves() to should_use_hstate_resv()

With the addition of the chg parameter, vma_has_reserves() no longer
just determines whether the vma has reserves: the comment in the
vma->vm_flags & VM_NORESERVE block indicates that the function actually
computes whether the reserved count should be decremented, so rename it
to should_use_hstate_resv().

The refactored function also takes the allocation request's
avoid_reserve parameter into account, which further simplifies the
caller, alloc_hugetlb_folio().
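At the call site in alloc_hugetlb_folio(), the change is a direct
substitution (excerpt from the diff below), with the avoid_reserve
short-circuit now handled inside the helper:

	- use_hstate_resv = !avoid_reserve && vma_has_reserves(vma, gbl_chg);
	+ use_hstate_resv = should_use_hstate_resv(vma, gbl_chg, avoid_reserve);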
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 mm/hugetlb.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index af5c6bbc9ff0..597102ed224b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1245,9 +1245,19 @@ void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
 	hugetlb_dup_vma_private(vma);
 }
 
-/* Returns true if the VMA has associated reserve pages */
-static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
+/*
+ * Returns true if this allocation should use (debit) hstate reservations, based on
+ *
+ * @vma: VMA config
+ * @chg: Whether the page requirement can be satisfied using subpool reservations
+ * @avoid_reserve: Whether allocation was requested to avoid using reservations
+ */
+static bool should_use_hstate_resv(struct vm_area_struct *vma, long chg,
+				   bool avoid_reserve)
 {
+	if (avoid_reserve)
+		return false;
+
 	if (vma->vm_flags & VM_NORESERVE) {
 		/*
 		 * This address is already reserved by other process(chg == 0),
@@ -3182,7 +3192,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	if (ret)
 		goto out_uncharge_cgroup_reservation;
 
-	use_hstate_resv = !avoid_reserve && vma_has_reserves(vma, gbl_chg);
+	use_hstate_resv = should_use_hstate_resv(vma, gbl_chg, avoid_reserve);
 
 	spin_lock_irq(&hugetlb_lock);
 	folio = dequeue_hugetlb_folio_vma(h, vma, addr, use_hstate_resv);
-- 
2.46.0.598.g6f2099f65c-goog

From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:34 +0000
Message-ID: <5a5e998e8f154c28a28dcdab73fb563f658f2f51.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 03/39] mm: hugetlb: Remove unnecessary check for avoid_reserve

If avoid_reserve is true, gbl_chg is not used anyway, so there is no
point in setting gbl_chg.
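Concretely, after the previous patch the value of gbl_chg computed here
only feeds should_use_hstate_resv(), which begins with (excerpt from
patch 2):

	if (avoid_reserve)
		return false;

so chg is never consulted when avoid_reserve is true, and the
"gbl_chg = 1" override removed below was dead code.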
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 mm/hugetlb.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 597102ed224b..5cf7fb117e9d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3166,16 +3166,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 		if (gbl_chg < 0)
 			goto out_end_reservation;
 
-		/*
-		 * Even though there was no reservation in the region/reserve
-		 * map, there could be reservations associated with the
-		 * subpool that can be used. This would be indicated if the
-		 * return value of hugepage_subpool_get_pages() is zero.
-		 * However, if avoid_reserve is specified we still avoid even
-		 * the subpool reservations.
-		 */
-		if (avoid_reserve)
-			gbl_chg = 1;
 	}
 
 	/* If this allocation is not consuming a reservation, charge it now.
-- 
2.46.0.598.g6f2099f65c-goog

From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:35 +0000
Message-ID: <9831cfcc77e325e48ec3674c3a518bda76e78df5.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 04/39] mm: mempolicy: Refactor out policy_node_nodemask()

policy_node_nodemask() was refactored out of huge_node().

huge_node()'s interpretation of the vma to derive the order assumes the
hugetlb-specific storage of the hstate information in the inode;
policy_node_nodemask() makes no such assumption and can be used more
generically.

This refactoring also enforces that nid defaults to the current node
id, which was not previously enforced.

alloc_pages_mpol_noprof() is the last remaining direct user of
policy_nodemask(). All its callers begin with nid set to the current
node id as well; more refactoring is required to simplify that.
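Intended usage, excerpted from alloc_migration_target_by_mpol() in the
diff below (callers hand in a mempolicy and get back a nid plus an
optional nodemask in one call):

	nid = policy_node_nodemask(pol, gfp, ilx, &nodemask);
	return alloc_hugetlb_folio_nodemask(h, nid, nodemask, gfp,
			htlb_allow_alloc_fallback(MR_MEMPOLICY_MBIND));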
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Reviewed-by: Gregory Price
---
 include/linux/mempolicy.h |  2 ++
 mm/mempolicy.c            | 36 ++++++++++++++++++++++++++----------
 2 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 1add16f21612..a49631e47421 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -138,6 +138,8 @@ extern void numa_policy_init(void);
 extern void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new);
 extern void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);
 
+extern int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
+				pgoff_t ilx, nodemask_t **nodemask);
 extern int huge_node(struct vm_area_struct *vma, unsigned long addr,
 		     gfp_t gfp_flags, struct mempolicy **mpol,
 		     nodemask_t **nodemask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b858e22b259d..f3e572e17775 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1212,7 +1212,6 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 	struct mempolicy *pol = mmpol->pol;
 	pgoff_t ilx = mmpol->ilx;
 	unsigned int order;
-	int nid = numa_node_id();
 	gfp_t gfp;
 
 	order = folio_order(src);
@@ -1221,10 +1220,11 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 	if (folio_test_hugetlb(src)) {
 		nodemask_t *nodemask;
 		struct hstate *h;
+		int nid;
 
 		h = folio_hstate(src);
 		gfp = htlb_alloc_mask(h);
-		nodemask = policy_nodemask(gfp, pol, ilx, &nid);
+		nid = policy_node_nodemask(pol, gfp, ilx, &nodemask);
 		return alloc_hugetlb_folio_nodemask(h, nid, nodemask, gfp,
 				htlb_allow_alloc_fallback(MR_MEMPOLICY_MBIND));
 	}
@@ -1234,7 +1234,7 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 	else
 		gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL | __GFP_COMP;
 
-	return folio_alloc_mpol(gfp, order, pol, ilx, nid);
+	return folio_alloc_mpol(gfp, order, pol, ilx, numa_node_id());
 }
 #else
 
@@ -2084,6 +2084,27 @@ static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
 	return nodemask;
 }
 
+/**
+ * policy_node_nodemask(@mpol, @gfp_flags, @ilx, @nodemask)
+ * @mpol: the memory policy to interpret. Reference must be taken.
+ * @gfp_flags: for this request
+ * @ilx: interleave index, for use only when MPOL_INTERLEAVE or
+ *       MPOL_WEIGHTED_INTERLEAVE
+ * @nodemask: (output) pointer to nodemask pointer for 'bind' and 'prefer-many'
+ *            policy
+ *
+ * Returns a nid suitable for a page allocation and a pointer. If the effective
+ * policy is 'bind' or 'prefer-many', returns a pointer to the mempolicy's
+ * @nodemask for filtering the zonelist.
+ */
+int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
+			 pgoff_t ilx, nodemask_t **nodemask)
+{
+	int nid = numa_node_id();
+	*nodemask = policy_nodemask(gfp_flags, mpol, ilx, &nid);
+	return nid;
+}
+
 #ifdef CONFIG_HUGETLBFS
 /*
  * huge_node(@vma, @addr, @gfp_flags, @mpol)
@@ -2102,12 +2123,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
 		struct mempolicy **mpol, nodemask_t **nodemask)
 {
 	pgoff_t ilx;
-	int nid;
-
-	nid = numa_node_id();
 	*mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
-	*nodemask = policy_nodemask(gfp_flags, *mpol, ilx, &nid);
-	return nid;
+	return policy_node_nodemask(*mpol, gfp_flags, ilx, nodemask);
 }
 
 /*
@@ -2549,8 +2566,7 @@ unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 		return alloc_pages_bulk_array_preferred_many(gfp,
 				numa_node_id(), pol, nr_pages, page_array);
 
-	nid = numa_node_id();
-	nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
+	nid = policy_node_nodemask(pol, gfp, NO_INTERLEAVE_INDEX, &nodemask);
 	return alloc_pages_bulk_noprof(gfp, nid, nodemask, nr_pages, NULL,
 				       page_array);
 }
-- 
2.46.0.598.g6f2099f65c-goog

From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:36 +0000
Message-ID: <1778a7324a1242fa907981576ebd69716a94d778.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 05/39] mm: hugetlb: Refactor alloc_buddy_hugetlb_folio_with_mpol() to interpret mempolicy instead of vma

Reducing the dependence on vma avoids the hugetlb-specific assumption
of where the mempolicy is stored. This will open up other ways of using
hugetlb.
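The caller now resolves the policy itself and passes the pieces down
(excerpt from the reworked alloc_hugetlb_folio() in the diff below):

	mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
	nid = policy_node_nodemask(mpol, htlb_alloc_mask(h), ilx, &nodemask);
	folio = alloc_buddy_hugetlb_folio_from_node(h, mpol, nid, nodemask);
	mpol_cond_put(mpol);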
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 mm/hugetlb.c | 37 +++++++++++++++++++++++--------------
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5cf7fb117e9d..2f2bd2444ae2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2536,32 +2536,31 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
 }
 
 /*
- * Use the VMA's mpolicy to allocate a huge page from the buddy.
+ * Allocate a huge page from the buddy allocator, given memory policy, node id
+ * and nodemask.
  */
-static
-struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
-		struct vm_area_struct *vma, unsigned long addr)
+static struct folio *alloc_buddy_hugetlb_folio_from_node(struct hstate *h,
+							  struct mempolicy *mpol,
+							  int nid,
+							  nodemask_t *nodemask)
 {
-	struct folio *folio = NULL;
-	struct mempolicy *mpol;
 	gfp_t gfp_mask = htlb_alloc_mask(h);
-	int nid;
-	nodemask_t *nodemask;
+	struct folio *folio = NULL;
 
-	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
 	if (mpol_is_preferred_many(mpol)) {
 		gfp_t gfp = gfp_mask | __GFP_NOWARN;
 
 		gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 		folio = alloc_surplus_hugetlb_folio(h, gfp, nid, nodemask);
+	}
 
-		/* Fallback to all nodes if page==NULL */
+	if (!folio) {
+		/* Fallback to all nodes if earlier allocation failed */
 		nodemask = NULL;
-	}
 
-	if (!folio)
 		folio = alloc_surplus_hugetlb_folio(h, gfp_mask, nid, nodemask);
-	mpol_cond_put(mpol);
+	}
+
 	return folio;
 }
 
@@ -3187,8 +3186,18 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	spin_lock_irq(&hugetlb_lock);
 	folio = dequeue_hugetlb_folio_vma(h, vma, addr, use_hstate_resv);
 	if (!folio) {
+		struct mempolicy *mpol;
+		nodemask_t *nodemask;
+		pgoff_t ilx;
+		int nid;
+
 		spin_unlock_irq(&hugetlb_lock);
-		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
+
+		mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
+		nid = policy_node_nodemask(mpol, htlb_alloc_mask(h), ilx, &nodemask);
+		folio = alloc_buddy_hugetlb_folio_from_node(h, mpol, nid, nodemask);
+		mpol_cond_put(mpol);
+
 		if (!folio)
 			goto out_uncharge_cgroup;
 		spin_lock_irq(&hugetlb_lock);
-- 
2.46.0.598.g6f2099f65c-goog

From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:37 +0000
Message-ID: <2e9109761869029bf82555e60d98850ac7888ae5.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 06/39] mm: hugetlb: Refactor dequeue_hugetlb_folio_vma() to use mpol
Reduce dependence on vma, since the use of huge_node() assumes that the
mempolicy is stored in a specific place in the inode, accessed via the
vma.

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 mm/hugetlb.c | 55 ++++++++++++++++++++++------------------------------
 1 file changed, 23 insertions(+), 32 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2f2bd2444ae2..e341bc0eb49a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1402,44 +1402,33 @@ static unsigned long available_huge_pages(struct hstate *h)
 	return h->free_huge_pages - h->resv_huge_pages;
 }
 
-static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
-				struct vm_area_struct *vma,
-				unsigned long address, bool use_hstate_resv)
+static struct folio *dequeue_hugetlb_folio(struct hstate *h,
+					   struct mempolicy *mpol, int nid,
+					   nodemask_t *nodemask,
+					   bool use_hstate_resv)
 {
 	struct folio *folio = NULL;
-	struct mempolicy *mpol;
 	gfp_t gfp_mask;
-	nodemask_t *nodemask;
-	int nid;
 
 	if (!use_hstate_resv && !available_huge_pages(h))
-		goto err;
+		return NULL;
 
 	gfp_mask = htlb_alloc_mask(h);
-	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
 
-	if (mpol_is_preferred_many(mpol)) {
-		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
-						nid, nodemask);
+	if (mpol_is_preferred_many(mpol))
+		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, nid, nodemask);
 
-		/* Fallback to all nodes if page==NULL */
-		nodemask = NULL;
+	if (!folio) {
+		/* Fallback to all nodes if earlier allocation failed */
+		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, nid, NULL);
 	}
 
-	if (!folio)
-		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
-						nid, nodemask);
-
 	if (folio && use_hstate_resv) {
 		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
 
-	mpol_cond_put(mpol);
 	return folio;
-
-err:
-	return NULL;
 }
 
 /*
@@ -3131,6 +3120,10 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	bool deferred_reserve;
 	gfp_t gfp = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
 	bool use_hstate_resv;
+	struct mempolicy *mpol;
+	nodemask_t *nodemask;
+	pgoff_t ilx;
+	int nid;
 
 	memcg = get_mem_cgroup_from_current();
 	memcg_charge_ret = mem_cgroup_hugetlb_try_charge(memcg, gfp, nr_pages);
@@ -3184,22 +3177,19 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	use_hstate_resv = should_use_hstate_resv(vma, gbl_chg, avoid_reserve);
 
 	spin_lock_irq(&hugetlb_lock);
-	folio = dequeue_hugetlb_folio_vma(h, vma, addr, use_hstate_resv);
-	if (!folio) {
-		struct mempolicy *mpol;
-		nodemask_t *nodemask;
-		pgoff_t ilx;
-		int nid;
 
+	mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
+	nid = policy_node_nodemask(mpol, htlb_alloc_mask(h), ilx, &nodemask);
+	folio = dequeue_hugetlb_folio(h, mpol, nid, nodemask, use_hstate_resv);
+	if (!folio) {
 		spin_unlock_irq(&hugetlb_lock);
 
-		mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
-		nid = policy_node_nodemask(mpol, htlb_alloc_mask(h), ilx, &nodemask);
 		folio = alloc_buddy_hugetlb_folio_from_node(h, mpol, nid,
 							    nodemask);
-		mpol_cond_put(mpol);
-
-		if (!folio)
+		if (!folio) {
+			mpol_cond_put(mpol);
 			goto out_uncharge_cgroup;
+		}
+
 		spin_lock_irq(&hugetlb_lock);
 		if (use_hstate_resv) {
 			folio_set_hugetlb_restore_reserve(folio);
@@ -3209,6 +3199,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 		folio_ref_unfreeze(folio, 1);
 		/* Fall through */
 	}
+	mpol_cond_put(mpol);
 
 	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
 	/* If allocation is not consuming a reservation, also store the
-- 
2.46.0.598.g6f2099f65c-goog

From: Ackerley Tng <ackerleytng@google.com>
Date: Tue, 10 Sep 2024 23:43:38 +0000
Message-ID: <7348091f4c539ed207d9bb0f3744d0f0efb7f2b3.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 07/39] mm: hugetlb: Refactor out hugetlb_alloc_folio

hugetlb_alloc_folio() allocates a hugetlb folio without handling the
vma and subpool reservations, since some of those reservation concepts
are hugetlbfs-specific.
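The split leaves the vma and subpool reservation accounting in
alloc_hugetlb_folio(), which then allocates through the new interface
(condensed from the diff below):

	mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
	nid = policy_node_nodemask(mpol, gfp, ilx, &nodemask);
	folio = hugetlb_alloc_folio(h, mpol, nid, nodemask,
				    charge_cgroup_reservation, use_hstate_resv);
	mpol_cond_put(mpol);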
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 include/linux/hugetlb.h |  12 ++++
 mm/hugetlb.c            | 144 ++++++++++++++++++++++++----------------
 2 files changed, 98 insertions(+), 58 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c9bf68c239a0..e4a05a421623 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -690,6 +690,10 @@ struct huge_bootmem_page {
 };
 
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
+struct folio *hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol,
+				  int nid, nodemask_t *nodemask,
+				  bool charge_cgroup_reservation,
+				  bool use_hstate_resv);
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 				  unsigned long addr, int avoid_reserve);
 struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
@@ -1027,6 +1031,14 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
 	return -ENOMEM;
 }
 
+static inline struct folio *
+hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol, int nid,
+		    nodemask_t *nodemask, bool charge_cgroup_reservation,
+		    bool use_hstate_resv)
+{
+	return NULL;
+}
+
 static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 						unsigned long addr,
 						int avoid_reserve)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e341bc0eb49a..7e73ebcc0f26 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3106,6 +3106,75 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	return ret;
 }
 
+/**
+ * Allocates a hugetlb folio either by dequeueing or from buddy allocator.
+ */
+struct folio *hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol,
+				  int nid, nodemask_t *nodemask,
+				  bool charge_cgroup_reservation,
+				  bool use_hstate_resv)
+{
+	struct hugetlb_cgroup *h_cg = NULL;
+	struct folio *folio;
+	int ret;
+	int idx;
+
+	idx = hstate_index(h);
+
+	if (charge_cgroup_reservation) {
+		ret = hugetlb_cgroup_charge_cgroup_rsvd(
+			idx, pages_per_huge_page(h), &h_cg);
+		if (ret)
+			return NULL;
+	}
+
+	ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg);
+	if (ret)
+		goto err_uncharge_cgroup_reservation;
+
+	spin_lock_irq(&hugetlb_lock);
+
+	folio = dequeue_hugetlb_folio(h, mpol, nid, nodemask, use_hstate_resv);
+	if (!folio) {
+		spin_unlock_irq(&hugetlb_lock);
+
+		folio = alloc_buddy_hugetlb_folio_from_node(h, mpol, nid, nodemask);
+		if (!folio)
+			goto err_uncharge_cgroup;
+
+		spin_lock_irq(&hugetlb_lock);
+		if (use_hstate_resv) {
+			folio_set_hugetlb_restore_reserve(folio);
+			h->resv_huge_pages--;
+		}
+		list_add(&folio->lru, &h->hugepage_activelist);
+		folio_ref_unfreeze(folio, 1);
+		/* Fall through */
+	}
+
+	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
+
+	if (charge_cgroup_reservation) {
+		hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
+						  h_cg, folio);
+	}
+
+	spin_unlock_irq(&hugetlb_lock);
+
+	return folio;
+
+err_uncharge_cgroup:
+	hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
+
+err_uncharge_cgroup_reservation:
+	if (charge_cgroup_reservation) {
+		hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
+						    h_cg);
+	}
+
+	return NULL;
+}
+
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 				  unsigned long addr, int avoid_reserve)
 {
@@ -3114,11 +3183,10 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	struct folio *folio;
 	long map_chg, map_commit, nr_pages = pages_per_huge_page(h);
 	long gbl_chg;
-	int memcg_charge_ret, ret, idx;
-	struct hugetlb_cgroup *h_cg = NULL;
+	int memcg_charge_ret;
memcg_charge_ret; struct mem_cgroup *memcg; - bool deferred_reserve; - gfp_t gfp =3D htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL; + bool charge_cgroup_reservation; + gfp_t gfp =3D htlb_alloc_mask(h); bool use_hstate_resv; struct mempolicy *mpol; nodemask_t *nodemask; @@ -3126,13 +3194,14 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, int nid; =20 memcg =3D get_mem_cgroup_from_current(); - memcg_charge_ret =3D mem_cgroup_hugetlb_try_charge(memcg, gfp, nr_pages); + memcg_charge_ret =3D + mem_cgroup_hugetlb_try_charge(memcg, gfp | __GFP_RETRY_MAYFAIL, + nr_pages); if (memcg_charge_ret =3D=3D -ENOMEM) { mem_cgroup_put(memcg); return ERR_PTR(-ENOMEM); } =20 - idx =3D hstate_index(h); /* * Examine the region/reserve map to determine if the process * has a reservation for the page to be allocated. A return @@ -3160,57 +3229,22 @@ struct folio *alloc_hugetlb_folio(struct vm_area_st= ruct *vma, =20 } =20 - /* If this allocation is not consuming a reservation, charge it now. - */ - deferred_reserve =3D map_chg || avoid_reserve; - if (deferred_reserve) { - ret =3D hugetlb_cgroup_charge_cgroup_rsvd( - idx, pages_per_huge_page(h), &h_cg); - if (ret) - goto out_subpool_put; - } - - ret =3D hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg); - if (ret) - goto out_uncharge_cgroup_reservation; - use_hstate_resv =3D should_use_hstate_resv(vma, gbl_chg, avoid_reserve); =20 - spin_lock_irq(&hugetlb_lock); + /* + * charge_cgroup_reservation if this allocation is not consuming a + * reservation + */ + charge_cgroup_reservation =3D map_chg || avoid_reserve; =20 mpol =3D get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx); - nid =3D policy_node_nodemask(mpol, htlb_alloc_mask(h), ilx, &nodemask); - folio =3D dequeue_hugetlb_folio(h, mpol, nid, nodemask, use_hstate_resv); - if (!folio) { - spin_unlock_irq(&hugetlb_lock); - - folio =3D alloc_buddy_hugetlb_folio_from_node(h, mpol, nid, nodemask); - if (!folio) { - mpol_cond_put(mpol); - goto out_uncharge_cgroup; - } - - spin_lock_irq(&hugetlb_lock); - if (use_hstate_resv) { - folio_set_hugetlb_restore_reserve(folio); - h->resv_huge_pages--; - } - list_add(&folio->lru, &h->hugepage_activelist); - folio_ref_unfreeze(folio, 1); - /* Fall through */ - } + nid =3D policy_node_nodemask(mpol, gfp, ilx, &nodemask); + folio =3D hugetlb_alloc_folio(h, mpol, nid, nodemask, + charge_cgroup_reservation, use_hstate_resv); mpol_cond_put(mpol); =20 - hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio); - /* If allocation is not consuming a reservation, also store the - * hugetlb_cgroup pointer on the page. 
- */ - if (deferred_reserve) { - hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h), - h_cg, folio); - } - - spin_unlock_irq(&hugetlb_lock); + if (!folio) + goto out_subpool_put; =20 hugetlb_set_folio_subpool(folio, spool); =20 @@ -3229,7 +3263,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_stru= ct *vma, =20 rsv_adjust =3D hugepage_subpool_put_pages(spool, 1); hugetlb_acct_memory(h, -rsv_adjust); - if (deferred_reserve) { + if (charge_cgroup_reservation) { spin_lock_irq(&hugetlb_lock); hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h), pages_per_huge_page(h), folio); @@ -3243,12 +3277,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_str= uct *vma, =20 return folio; =20 -out_uncharge_cgroup: - hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg); -out_uncharge_cgroup_reservation: - if (deferred_reserve) - hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h), - h_cg); out_subpool_put: if (map_chg || avoid_reserve) hugepage_subpool_put_pages(spool, 1); --=20 2.46.0.598.g6f2099f65c-goog From nobody Sat Nov 30 07:37:39 2024 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2ABF41B5331 for ; Tue, 10 Sep 2024 23:44:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011888; cv=none; b=HPdVeNevyx57lUUY3Zz31wdrWhuXF5pvMiFescoHe2oWKO2QJ7qF8GXkF+0JiDOTSnG8/eZUELNGugQ06qIvEx6k2qn38pELJ/Nn5qbAgx3AiiUDtlPmKUhUKGU9Rf0zIWJ9pBaLXZZxOcNanC5KuiMcnXMMx4gtNuv+BKzlOuM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011888; c=relaxed/simple; bh=P/1nrx2IuQ6u/tRlLat3CEqnsSzaylehYsr5F1KL3bU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Kqyp+djC9v1CbuJRwZcfrRoqw6ZLNCRRapRgAPtQNGVwIp0yNK5gZd0OxM0dhtaOeVZ1Bbnq4FgT9MgHS8rfKp/SR34ncLJEz/dKGNIgP74p9xYVnl3bHIPjbD8ksxK3/CuxzYGSKWY62DzUHnRuLqchu1C6gfCa0A8Pz8f4nw4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=vHt0X+YR; arc=none smtp.client-ip=209.85.214.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="vHt0X+YR" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-2056364914eso79415555ad.3 for ; Tue, 10 Sep 2024 16:44:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1726011886; x=1726616686; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=wCQtr3LYUTrh92YO/TM3BLSTa63Bn54Ak3CYVuDQSOE=; b=vHt0X+YRa0IQq9jff12LFWszMiRmTu81WRf95qZCHH2FBFUHpH3OwFaOVCXEKoGMwP eMxv1VWr2rKqOz+YKdr4jqT0avOy9isYIwsd2fbrWA3CzjCYYV+lmtLuk0PzgdP7akFX Ek4uF0STXC/7BaEF3Eq4GGTc26AZMh/U4GRo1UstG0GfjVCfU4CPCBHc7YZe36soVp+b oz2MIUONVw3/0DRYNAnue1KkJbWu5PDcD/nS/fZrudA3tdV16DX6TZTd6H0PB8L4VTSU 
RILZ1f89ZweoYEzrmeymJzi9RL07tHEb34uBq5ta9ImJNjo4vul0b2DgzCZ8hXogLLk+ I+1Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1726011886; x=1726616686; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=wCQtr3LYUTrh92YO/TM3BLSTa63Bn54Ak3CYVuDQSOE=; b=GCZEdFI3jPV4HqDIgIom0vPt5CAA4nFqpxhktPM1Kz2ao5abD2q/f8X2LtSfgez7Ui sVkH1LVB/XSDKkxRvo9EPrubWHNy2fm0Tb/ElFPTOqk5joEwtb11r9tarapC0CwhtXFf +tUSIZIvW0tyOLIzJfF7nHolTGnp+Na5scLDbWJBaN7O5bws54t1Bxs90HRpYDSBczgB bc1931D7HV4GmzcWLMhlRAMLP1QrtouL8VM/iwjaMG8Sl5XcGWSIxOelrCjhRLuzBici LBFFkFC5pujrGCZaCeVx9kR+C0YvV87Z/1EwaSjbfnxal52AT9lKx3dCda6x1AmlTCMF o+dg== X-Forwarded-Encrypted: i=1; AJvYcCXi6x3l+hULpGsHrtKdRpSlff0Zqk4g8vZt7A0AwgCnWbbc1+AxPH3/kw+yl1xWoZlhZLphnFI9BKrDjIE=@vger.kernel.org X-Gm-Message-State: AOJu0Yyg8O4aBOpf7XCsUO+lYkF9CK8QyQtLg+cNJMTgcPjUfNMiyehh eNMlwEoc+UtW23UYBVIYAskpLIECQBR0UECkb+ZHBVX4TTtvfQOufzmT52ySs/GsOOjy64EaQX+ eSXbb0A9JLpix4R9In4m3DA== X-Google-Smtp-Source: AGHT+IH5Nj5kPHNo6rBfhQxFH2WNjr8MfjXL+h+toY4sJe8PVOOMsYFK1lr+WyYqGBk0WvcoCe7tmQGhI2Eo6rySgw== X-Received: from ackerleytng-ctop.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:13f8]) (user=ackerleytng job=sendgmr) by 2002:a17:902:b686:b0:205:5db4:44b8 with SMTP id d9443c01a7336-2074c63a9cemr948735ad.5.1726011886284; Tue, 10 Sep 2024 16:44:46 -0700 (PDT) Date: Tue, 10 Sep 2024 23:43:39 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.46.0.598.g6f2099f65c-goog Message-ID: <9f287e19cb80258b406800c8758fc58eff449d56.1726009989.git.ackerleytng@google.com> Subject: [RFC PATCH 08/39] mm: truncate: Expose preparation steps for truncate_inode_pages_final From: Ackerley Tng To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk, jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com, fvdl@google.com, jthoughton@google.com, seanjc@google.com, pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev, mike.kravetz@oracle.com Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com, qperret@google.com, jhubbard@nvidia.com, willy@infradead.org, shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com, kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org, richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com, ajones@ventanamicro.com, vkuznets@redhat.com, maciej.wieczor-retman@intel.com, pgonda@google.com, oliver.upton@linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-fsdevel@kvack.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This will allow preparation steps to be shared Signed-off-by: Ackerley Tng --- include/linux/mm.h | 1 + mm/truncate.c | 26 ++++++++++++++++---------- 2 files changed, 17 insertions(+), 10 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index c4b238a20b76..ffb4788295b4 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3442,6 +3442,7 @@ extern unsigned long vm_unmapped_area(struct vm_unmap= ped_area_info *info); extern void truncate_inode_pages(struct address_space *, loff_t); extern void truncate_inode_pages_range(struct address_space *, loff_t lstart, loff_t lend); +extern void truncate_inode_pages_final_prepare(struct address_space 
*); extern void truncate_inode_pages_final(struct address_space *); =20 /* generic vm_area_ops exported for stackable file systems */ diff --git a/mm/truncate.c b/mm/truncate.c index 4d61fbdd4b2f..28cca86424f8 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -424,16 +424,7 @@ void truncate_inode_pages(struct address_space *mappin= g, loff_t lstart) } EXPORT_SYMBOL(truncate_inode_pages); =20 -/** - * truncate_inode_pages_final - truncate *all* pages before inode dies - * @mapping: mapping to truncate - * - * Called under (and serialized by) inode->i_rwsem. - * - * Filesystems have to use this in the .evict_inode path to inform the - * VM that this is the final truncate and the inode is going away. - */ -void truncate_inode_pages_final(struct address_space *mapping) +void truncate_inode_pages_final_prepare(struct address_space *mapping) { /* * Page reclaim can not participate in regular inode lifetime @@ -454,6 +445,21 @@ void truncate_inode_pages_final(struct address_space *= mapping) xa_lock_irq(&mapping->i_pages); xa_unlock_irq(&mapping->i_pages); } +} +EXPORT_SYMBOL(truncate_inode_pages_final_prepare); + +/** + * truncate_inode_pages_final - truncate *all* pages before inode dies + * @mapping: mapping to truncate + * + * Called under (and serialized by) inode->i_rwsem. + * + * Filesystems have to use this in the .evict_inode path to inform the + * VM that this is the final truncate and the inode is going away. + */ +void truncate_inode_pages_final(struct address_space *mapping) +{ + truncate_inode_pages_final_prepare(mapping); =20 truncate_inode_pages(mapping, 0); } --=20 2.46.0.598.g6f2099f65c-goog From nobody Sat Nov 30 07:37:39 2024 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C30651BA87B for ; Tue, 10 Sep 2024 23:44:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011890; cv=none; b=p79K5mgvTrsM/iPr+nnIopa1P7MmZZ/yfSVkpGDoE9buGm/78BgYb9gO8xI62f5In64R+tryeBUop/Vr1LyK3QCflogx6nl+Sv6nnJIr46MojCJVsoujQwtzk2ZIaFG2Kk+1pf1qECL3IxReYNv9uRmXUcavlAtLB645RJPRnOc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011890; c=relaxed/simple; bh=vKn6pjIRFyRt8Opg+2dj1UETcidGMMZpXY2Tm+1+Wnk=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ux7qo+YNPn/NxzPY1cbPaihI34Tv1H+9ghlCTqyDathissR4Qv3rtAk2IRpsyUOk9MYBLd03uA8HxWx+BBWVartkqlpDsg+juwxJiyRpbt0c/HzAay9jzfZnqKDnluUwUKf12h8CVZasPC3sw9N732op9R4HkmpshcqU63+iNxE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=dw+mGHtC; arc=none smtp.client-ip=209.85.219.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="dw+mGHtC" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-e1d46cee0b0so7046988276.2 for ; Tue, 10 Sep 2024 16:44:48 -0700 (PDT) DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1726011888; x=1726616688; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=wON4ebge/JtWXTatH36Oz8RnYdWXk0aZ0I0s+pE884Y=; b=dw+mGHtCedFwjPpV2akhGZwfEIzo4Q+I/jMx0YlaJ94gpmUFT4uSefXAEesPERKn2w yB1GdmTrvwkbEiReBiwyVjmWhWGmWJbn0RggjTfVJEczO95G/vBJF/EmZEp0OoKvwQeW J0NKem+YA19uMgUWNiz4AiekLupwvfGMVCNCj8ZMgP2+sfXteu5JofGJCGl3+eNA5cwz G6CByNX3QNzKBwsB0rcf11MW5eg4YGRBgcw2NRWFRn9VIqUDTo4aTraEGr8ySvrXUfsh amdnmlrpkoBhbwWVtZ7KU4ytNIAHJav0NY6+vzyGad+gO+/LiuD1znHVvZCTIfIwuX15 esfw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1726011888; x=1726616688; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=wON4ebge/JtWXTatH36Oz8RnYdWXk0aZ0I0s+pE884Y=; b=tUtcBbbzsLA0tLator6UHAAO1Cki3mUrcBkzeFbTtKkwP1IaD5ZFR4zY4FxmtNssW5 G72qA51NImFq46RnIdmK3tKNtL7qboEiSS1goOxJMIDpMCIS0sFBIJLSYC9n1UeHnZ+o SmWk62wEtB8yPtSeU+41fC4gCLG73t6y3Ln335rEkPrGg5Ug4uhU8D3MlWpTduJqqhgi urVoxoKoSw/IcVYNG34gjknth/kLdAd5eZY54qcqJ/hE3r1yQquZ1H+myaaCJMM+QhpF wTkh/e6xedayqcRygjQdWsZO9tx68noG1X5ghj9xmN6/DCZwylywTCU247jvbNGM87ku Gn2g== X-Forwarded-Encrypted: i=1; AJvYcCUxOrwYz5YOOuWfHPWJkX3kYv6+icJZj1FCNRDDvxfyIV0bo3E9WaFT3DUHyLvH8a6GkAD03PWLoYNgKm0=@vger.kernel.org X-Gm-Message-State: AOJu0YwWiHlGdzcvorYnQTHJie6bCGFYvyir1OLSdwt52q+usoMGFr7Q llX35/0WoxyYB7r5t3MDgeBtPLRbDBKyuNPxYSWxO/a1TC9jjemepY29UdhKwq5NCkK1McktHow xE32YLrrLar0K+LRt7/1O5A== X-Google-Smtp-Source: AGHT+IGifFJzomPV42kuvxwSOW6D8KRl1ZUZcLgu/9vMTp3RDdNsVY265K9HmTBfWg/leXDMNAyCn73aN71Ukp8ufA== X-Received: from ackerleytng-ctop.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:13f8]) (user=ackerleytng job=sendgmr) by 2002:a25:d347:0:b0:e16:51f9:59da with SMTP id 3f1490d57ef6-e1d349e2dd5mr43068276.6.1726011887733; Tue, 10 Sep 2024 16:44:47 -0700 (PDT) Date: Tue, 10 Sep 2024 23:43:40 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.46.0.598.g6f2099f65c-goog Message-ID: Subject: [RFC PATCH 09/39] mm: hugetlb: Expose hugetlb_subpool_{get,put}_pages() From: Ackerley Tng To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk, jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com, fvdl@google.com, jthoughton@google.com, seanjc@google.com, pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev, mike.kravetz@oracle.com Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com, qperret@google.com, jhubbard@nvidia.com, willy@infradead.org, shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com, kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org, richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com, ajones@ventanamicro.com, vkuznets@redhat.com, maciej.wieczor-retman@intel.com, pgonda@google.com, oliver.upton@linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-fsdevel@kvack.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This will allow hugetlb subpools to be used by guest_memfd. 
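
For illustration, a guest_memfd-style consumer could pair the two helpers like this (a hypothetical sketch, not code from this series; kvm_gmem_charge_pages() and kvm_gmem_uncharge_pages() are illustrative names):

#include <linux/hugetlb.h>

/*
 * Sketch of a caller charging npages against a subpool before
 * allocating, and uncharging on its release path.
 */
static int kvm_gmem_charge_pages(struct hugepage_subpool *spool, long npages)
{
	/*
	 * A negative return means the subpool's maximum would be
	 * exceeded; a non-negative return is the number of pages that
	 * must additionally be taken from the global pool.
	 */
	if (hugepage_subpool_get_pages(spool, npages) < 0)
		return -ENOMEM;

	return 0;
}

static void kvm_gmem_uncharge_pages(struct hugepage_subpool *spool, long npages)
{
	/* May credit back fewer pages if a subpool minimum is maintained. */
	hugepage_subpool_put_pages(spool, npages);
}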
Signed-off-by: Ackerley Tng
---
 include/linux/hugetlb.h | 3 +++
 mm/hugetlb.c | 6 ++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e4a05a421623..907cfbbd9e24 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -119,6 +119,9 @@ struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
 						long min_hpages);
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
+long hugepage_subpool_get_pages(struct hugepage_subpool *spool, long delta);
+long hugepage_subpool_put_pages(struct hugepage_subpool *spool, long delta);
+
 void hugetlb_dup_vma_private(struct vm_area_struct *vma);
 void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
 int move_hugetlb_page_tables(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7e73ebcc0f26..808915108126 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -170,8 +170,7 @@ void hugepage_put_subpool(struct hugepage_subpool *spool)
  * only be different than the passed value (delta) in the case where
  * a subpool minimum size must be maintained.
  */
-static long hugepage_subpool_get_pages(struct hugepage_subpool *spool,
-				       long delta)
+long hugepage_subpool_get_pages(struct hugepage_subpool *spool, long delta)
 {
 	long ret = delta;
 
@@ -215,8 +214,7 @@ static long hugepage_subpool_get_pages(struct hugepage_subpool *spool,
  * The return value may only be different than the passed value (delta)
  * in the case where a subpool minimum size must be maintained.
  */
-static long hugepage_subpool_put_pages(struct hugepage_subpool *spool,
-				       long delta)
+long hugepage_subpool_put_pages(struct hugepage_subpool *spool, long delta)
 {
 	long ret = delta;
 	unsigned long flags;
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:41 +0000
Message-ID: <083829f3f633d6d24d64d4639f92d163355b24fd.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 10/39] mm: hugetlb: Add option to create new subpool without using surplus
From: Ackerley Tng

__hugetlb_acct_memory() today does more than just memory accounting. When
there are insufficient HugeTLB pages, __hugetlb_acct_memory() will attempt
to get surplus pages.

This change adds a flag to disable getting surplus pages if there are
insufficient HugeTLB pages.

Signed-off-by: Ackerley Tng
---
 fs/hugetlbfs/inode.c | 2 +-
 include/linux/hugetlb.h | 2 +-
 mm/hugetlb.c | 43 ++++++++++++++++++++++++++++++-----------
 3 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9f6cff356796..300a6ef300c1 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1488,7 +1488,7 @@ hugetlbfs_fill_super(struct super_block *sb, struct fs_context *fc)
 	if (ctx->max_hpages != -1 || ctx->min_hpages != -1) {
 		sbinfo->spool = hugepage_new_subpool(ctx->hstate,
 						     ctx->max_hpages,
-						     ctx->min_hpages);
+						     ctx->min_hpages, true);
 		if (!sbinfo->spool)
 			goto out_free;
 	}
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 907cfbbd9e24..9ef1adbd3207 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -116,7 +116,7 @@ extern int hugetlb_max_hstate __read_mostly;
 	for ((h) = hstates; (h) < &hstates[hugetlb_max_hstate]; (h)++)
 
 struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
-						long min_hpages);
+						long min_hpages, bool use_surplus);
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
 long hugepage_subpool_get_pages(struct hugepage_subpool *spool, long delta);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 808915108126..efdb5772b367 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -92,6 +92,7 @@ static int num_fault_mutexes;
 struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
 /* Forward declaration */
+static int __hugetlb_acct_memory(struct hstate *h, long delta, bool use_surplus);
 static int hugetlb_acct_memory(struct hstate *h, long delta);
 static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
 static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
@@ -129,7 +130,7 @@ static inline void unlock_or_release_subpool(struct hugepage_subpool *spool,
 }
 
 struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
-						long min_hpages)
+						long min_hpages, bool use_surplus)
 {
 	struct hugepage_subpool *spool;
 
@@ -143,7 +144,8 @@ struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
 	spool->hstate = h;
 	spool->min_hpages = min_hpages;
 
-	if (min_hpages != -1 && hugetlb_acct_memory(h, min_hpages)) {
+	if (min_hpages != -1 &&
+	    __hugetlb_acct_memory(h, min_hpages, use_surplus)) {
 		kfree(spool);
 		return NULL;
 	}
@@ -2592,6 +2594,21 @@ static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
 	return NULL;
 }
 
+static int hugetlb_hstate_reserve_pages(struct hstate *h,
+					long num_pages_to_reserve)
+	__must_hold(&hugetlb_lock)
+{
+	long needed;
+
+	needed = (h->resv_huge_pages + num_pages_to_reserve) - h->free_huge_pages;
+	if (needed <= 0) {
+		h->resv_huge_pages += num_pages_to_reserve;
+		return 0;
+	}
+
+	return needed;
+}
+
 /*
  * Increase the hugetlb pool such that it can accommodate a reservation
  * of size 'delta'.
@@ -2608,13 +2625,7 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 	int node;
 	nodemask_t *mbind_nodemask = policy_mbind_nodemask(htlb_alloc_mask(h));
 
-	lockdep_assert_held(&hugetlb_lock);
-	needed = (h->resv_huge_pages + delta) - h->free_huge_pages;
-	if (needed <= 0) {
-		h->resv_huge_pages += delta;
-		return 0;
-	}
-
+	needed = delta;
 	allocated = 0;
 
 	ret = -ENOMEM;
@@ -5104,7 +5115,7 @@ unsigned long hugetlb_total_pages(void)
 	return nr_total_pages;
 }
 
-static int hugetlb_acct_memory(struct hstate *h, long delta)
+static int __hugetlb_acct_memory(struct hstate *h, long delta, bool use_surplus)
 {
 	int ret = -ENOMEM;
 
@@ -5136,7 +5147,12 @@ static int hugetlb_acct_memory(struct hstate *h, long delta)
 	 * above.
 	 */
 	if (delta > 0) {
-		if (gather_surplus_pages(h, delta) < 0)
+		long required_surplus = hugetlb_hstate_reserve_pages(h, delta);
+
+		if (!use_surplus && required_surplus > 0)
+			goto out;
+
+		if (gather_surplus_pages(h, required_surplus) < 0)
 			goto out;
 
 		if (delta > allowed_mems_nr(h)) {
@@ -5154,6 +5170,11 @@ static int hugetlb_acct_memory(struct hstate *h, long delta)
 	return ret;
 }
 
+static int hugetlb_acct_memory(struct hstate *h, long delta)
+{
+	return __hugetlb_acct_memory(h, delta, true);
+}
+
 static void hugetlb_vm_op_open(struct vm_area_struct *vma)
 {
 	struct resv_map *resv = vma_resv_map(vma);
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:42 +0000
Message-ID: <3b49aeaa7ec0a91f601cde00b9e183bc75dc37a6.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 11/39] mm: hugetlb: Expose hugetlb_acct_memory()
From: Ackerley Tng

This will be used by guest_memfd in a later patch.
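
For context, the guest_memfd truncation path added later in this series
([RFC PATCH 14/39]) pairs hugetlb_acct_memory() with
hugepage_subpool_put_pages() roughly as follows (a condensed sketch of that
later usage; release_pages_to_hstate() is an illustrative name):

static void release_pages_to_hstate(struct kvm_gmem_hugetlb *hgmem,
				    long num_freed)
{
	long gbl_reserve;

	/* Credit the freed pages back to the subpool... */
	gbl_reserve = hugepage_subpool_put_pages(hgmem->spool, num_freed);
	/* ...and drop the corresponding reservation in the global hstate. */
	hugetlb_acct_memory(hgmem->h, -gbl_reserve);
}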
Signed-off-by: Ackerley Tng
---
 include/linux/hugetlb.h | 2 ++
 mm/hugetlb.c | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 9ef1adbd3207..4d47bf94c211 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -122,6 +122,8 @@ void hugepage_put_subpool(struct hugepage_subpool *spool);
 long hugepage_subpool_get_pages(struct hugepage_subpool *spool, long delta);
 long hugepage_subpool_put_pages(struct hugepage_subpool *spool, long delta);
 
+int hugetlb_acct_memory(struct hstate *h, long delta);
+
 void hugetlb_dup_vma_private(struct vm_area_struct *vma);
 void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
 int move_hugetlb_page_tables(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index efdb5772b367..5a37b03e1361 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -93,7 +93,7 @@ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
 /* Forward declaration */
 static int __hugetlb_acct_memory(struct hstate *h, long delta, bool use_surplus);
-static int hugetlb_acct_memory(struct hstate *h, long delta);
+int hugetlb_acct_memory(struct hstate *h, long delta);
 static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
 static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
 static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
@@ -5170,7 +5170,7 @@ static int __hugetlb_acct_memory(struct hstate *h, long delta, bool use_surplus)
 	return ret;
 }
 
-static int hugetlb_acct_memory(struct hstate *h, long delta)
+int hugetlb_acct_memory(struct hstate *h, long delta)
 {
 	return __hugetlb_acct_memory(h, delta, true);
 }
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:43 +0000
Message-ID: <315ae41e7a53edab139c0323fa96892f2b647450.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 12/39] mm: hugetlb: Move and expose hugetlb_zero_partial_page()
From: Ackerley Tng

This will be used by guest_memfd in a later patch.

Signed-off-by: Ackerley Tng
---
 fs/hugetlbfs/inode.c | 33 +++++----------------------------
 include/linux/hugetlb.h | 3 +++
 mm/hugetlb.c | 21 +++++++++++++++++++++
 3 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 300a6ef300c1..f76001418672 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -720,29 +720,6 @@ static void hugetlb_vmtruncate(struct inode *inode, loff_t offset)
 	remove_inode_hugepages(inode, offset, LLONG_MAX);
 }
 
-static void hugetlbfs_zero_partial_page(struct hstate *h,
-					struct address_space *mapping,
-					loff_t start,
-					loff_t end)
-{
-	pgoff_t idx = start >> huge_page_shift(h);
-	struct folio *folio;
-
-	folio = filemap_lock_hugetlb_folio(h, mapping, idx);
-	if (IS_ERR(folio))
-		return;
-
-	start = start & ~huge_page_mask(h);
-	end = end & ~huge_page_mask(h);
-	if (!end)
-		end = huge_page_size(h);
-
-	folio_zero_segment(folio, (size_t)start, (size_t)end);
-
-	folio_unlock(folio);
-	folio_put(folio);
-}
-
 static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
@@ -768,9 +745,10 @@ static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 	i_mmap_lock_write(mapping);
 
 	/* If range starts before first full page, zero partial page. */
-	if (offset < hole_start)
-		hugetlbfs_zero_partial_page(h, mapping,
-				offset, min(offset + len, hole_start));
+	if (offset < hole_start) {
+		hugetlb_zero_partial_page(h, mapping, offset,
+					  min(offset + len, hole_start));
+	}
 
 	/* Unmap users of full pages in the hole. */
 	if (hole_end > hole_start) {
@@ -782,8 +760,7 @@ static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 
 	/* If range extends beyond last full page, zero partial page. */
 	if ((offset + len) > hole_end && (offset + len) > hole_start)
-		hugetlbfs_zero_partial_page(h, mapping,
-				hole_end, offset + len);
+		hugetlb_zero_partial_page(h, mapping, hole_end, offset + len);
 
 	i_mmap_unlock_write(mapping);
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4d47bf94c211..752062044b0b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -124,6 +124,9 @@ long hugepage_subpool_put_pages(struct hugepage_subpool *spool, long delta);
 
 int hugetlb_acct_memory(struct hstate *h, long delta);
 
+void hugetlb_zero_partial_page(struct hstate *h, struct address_space *mapping,
+			       loff_t start, loff_t end);
+
 void hugetlb_dup_vma_private(struct vm_area_struct *vma);
 void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
 int move_hugetlb_page_tables(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5a37b03e1361..372d8294fb2f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1989,6 +1989,27 @@ void free_huge_folio(struct folio *folio)
 	}
 }
 
+void hugetlb_zero_partial_page(struct hstate *h, struct address_space *mapping,
+			       loff_t start, loff_t end)
+{
+	pgoff_t idx = start >> huge_page_shift(h);
+	struct folio *folio;
+
+	folio = filemap_lock_hugetlb_folio(h, mapping, idx);
+	if (IS_ERR(folio))
+		return;
+
+	start = start & ~huge_page_mask(h);
+	end = end & ~huge_page_mask(h);
+	if (!end)
+		end = huge_page_size(h);
+
+	folio_zero_segment(folio, (size_t)start, (size_t)end);
+
+	folio_unlock(folio);
+	folio_put(folio);
+}
+
 /*
  * Must be called with the hugetlb lock held
  */
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:44 +0000
Subject: [RFC PATCH 13/39] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes
From: Ackerley Tng

Using guest mem inodes allows us to store metadata for the backing memory
on the inode. Metadata will be added in a later patch to support HugeTLB
pages.

Metadata about backing memory should not be stored on the file, since the
file represents a guest_memfd's binding with a struct kvm, and metadata
about backing memory is not unique to a specific binding and struct kvm.

Signed-off-by: Ackerley Tng
---
 include/uapi/linux/magic.h | 1 +
 virt/kvm/guest_memfd.c | 119 ++++++++++++++++++++++++++++++-------
 2 files changed, 100 insertions(+), 20 deletions(-)

diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index bb575f3ab45e..169dba2a6920 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -103,5 +103,6 @@
 #define DEVMEM_MAGIC	0x454d444d	/* "DMEM" */
 #define SECRETMEM_MAGIC	0x5345434d	/* "SECM" */
 #define PID_FS_MAGIC	0x50494446	/* "PIDF" */
+#define GUEST_MEMORY_MAGIC	0x474d454d	/* "GMEM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8f079a61a56d..5d7fd1f708a6 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,12 +1,17 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 
 #include "kvm_mm.h"
 
+static struct vfsmount *kvm_gmem_mnt;
+
 struct kvm_gmem {
 	struct kvm *kvm;
 	struct xarray bindings;
@@ -302,6 +307,38 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
 	return get_file_active(&slot->gmem.file);
 }
 
+static const struct super_operations kvm_gmem_super_operations = {
+	.statfs		= simple_statfs,
+};
+
+static int kvm_gmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx;
+
+	if (!init_pseudo(fc, GUEST_MEMORY_MAGIC))
+		return -ENOMEM;
+
+	ctx = fc->fs_private;
+	ctx->ops = &kvm_gmem_super_operations;
+
+	return 0;
+}
+
+static struct file_system_type kvm_gmem_fs = {
+	.name		= "kvm_guest_memory",
+	.init_fs_context = kvm_gmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static void kvm_gmem_init_mount(void)
+{
+	kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
+	BUG_ON(IS_ERR(kvm_gmem_mnt));
+
+	/* For giggles. Userspace can never map this anyways. */
+	kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
+}
+
 static struct file_operations kvm_gmem_fops = {
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
@@ -311,6 +348,8 @@ static struct file_operations kvm_gmem_fops = {
 void kvm_gmem_init(struct module *module)
 {
 	kvm_gmem_fops.owner = module;
+
+	kvm_gmem_init_mount();
 }
 
 static int kvm_gmem_migrate_folio(struct address_space *mapping,
@@ -392,11 +431,67 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr	= kvm_gmem_setattr,
 };
 
+static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
+						      loff_t size, u64 flags)
+{
+	const struct qstr qname = QSTR_INIT(name, strlen(name));
+	struct inode *inode;
+	int err;
+
+	inode = alloc_anon_inode(kvm_gmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return inode;
+
+	err = security_inode_init_security_anon(inode, &qname, NULL);
+	if (err) {
+		iput(inode);
+		return ERR_PTR(err);
+	}
+
+	inode->i_private = (void *)(unsigned long)flags;
+	inode->i_op = &kvm_gmem_iops;
+	inode->i_mapping->a_ops = &kvm_gmem_aops;
+	inode->i_mode |= S_IFREG;
+	inode->i_size = size;
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_inaccessible(inode->i_mapping);
+	/* Unmovable mappings are supposed to be marked unevictable as well. */
+	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+	return inode;
+}
+
+static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
+						  u64 flags)
+{
+	static const char *name = "[kvm-gmem]";
+	struct inode *inode;
+	struct file *file;
+
+	if (kvm_gmem_fops.owner && !try_module_get(kvm_gmem_fops.owner))
+		return ERR_PTR(-ENOENT);
+
+	inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
+				 &kvm_gmem_fops);
+	if (IS_ERR(file)) {
+		iput(inode);
+		return file;
+	}
+
+	file->f_mapping = inode->i_mapping;
+	file->f_flags |= O_LARGEFILE;
+	file->private_data = priv;
+
+	return file;
+}
+
 static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 {
-	const char *anon_name = "[kvm-gmem]";
 	struct kvm_gmem *gmem;
-	struct inode *inode;
 	struct file *file;
 	int fd, err;
 
@@ -410,32 +505,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 		goto err_fd;
 	}
 
-	file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
-					 O_RDWR, NULL);
+	file = kvm_gmem_inode_create_getfile(gmem, size, flags);
 	if (IS_ERR(file)) {
 		err = PTR_ERR(file);
 		goto err_gmem;
 	}
 
-	file->f_flags |= O_LARGEFILE;
-
-	inode = file->f_inode;
-	WARN_ON(file->f_mapping != inode->i_mapping);
-
-	inode->i_private = (void *)(unsigned long)flags;
-	inode->i_op = &kvm_gmem_iops;
-	inode->i_mapping->a_ops = &kvm_gmem_aops;
-	inode->i_mode |= S_IFREG;
-	inode->i_size = size;
-	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
-	mapping_set_inaccessible(inode->i_mapping);
-	/* Unmovable mappings are supposed to be marked unevictable as well. */
-	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
 	kvm_get_kvm(kvm);
 	gmem->kvm = kvm;
 	xa_init(&gmem->bindings);
-	list_add(&gmem->entry, &inode->i_mapping->i_private_list);
+	list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
 
 	fd_install(fd, file);
 	return fd;
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ZClEDWzJ" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-205516d992eso15743255ad.3 for ; Tue, 10 Sep 2024 16:44:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1726011896; x=1726616696; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=9Gc31H97UbE78QiSNq4CUIpVlCNlnI2SeIjwGOQZSc0=; b=ZClEDWzJk1GlPJoEJMSACkIJIVStYDEWhklSbtVitrIhLSwZcqKKOwxF1b7Aj79upQ lkgFyGXxfW44uE5cKHMyx31/womGfKAnsTKSBYXZqenNfK23jqY8VomAsbvlxohGpRVJ cnD4V99vvwrxjudqLigTUd6EC7SFHVa0+QSUmkKKng7SDkGc7HfDgGTr6XycKMEDJX2G 13lBs+/gXQdD/HS9KN6Kqb1GUm8/uRIO7lNVlBB3FKBiiafo7EITnrUA8yzox/vTbyoR JQxe69VGtXtt+IMK/0lkT4d1B9+eEb+Y61w/afDzO6GybDokVkMdd8QJpZSwCIXXtRe2 Tz8g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1726011896; x=1726616696; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=9Gc31H97UbE78QiSNq4CUIpVlCNlnI2SeIjwGOQZSc0=; b=kxt3xJmz+yCAgCP7gsvqqhZM6E06HX9tPjVbXKhIGvBkYc+KF4jbvYd8RM10iMXpZn NeAUO4QJiLkvjc/Vvnk0V1/jopIToLloI6gxHHZh9i4Vt4nf5EhFsGR8wDMfy71BJjzU bqQpd3fQNzKMhcb2O5aw8U/2JhdZTEROYRfJjyY1ouP18WpUY5dLaLUxK2WIkwiK1SwB vUny6OggubrzK2b0jdboIR2bqXXWXcYFH6HTyhD/1JlHGJ1lYyKt9savPmSPZeVgBoVt wQhdMVSLz685vH35nn/OY6mxw+6u16g5Zvq3C6nhXtZ6xBQAXY+z6+TBybf64PrKjQgO S0EQ== X-Forwarded-Encrypted: i=1; AJvYcCWzx5fA1dsVcvYU/iHzJJntCTvUvlaN+lnuncxKVVqq9SAzrEXLpgfnPzab4t2woeBZMO4IfK/VaJG3Qic=@vger.kernel.org X-Gm-Message-State: AOJu0YzwTFjzmTet+NkixSbQhLzG63f90eK5bIJ5vFDLgsumfrjhNXva UgbAxYXeVG9QB9U15GZS0WMUmyx9du3RgTS7yL+rCNh74dKAC5csI9K+WQnVTqA1v9A3ib1h1wK 5O7+wWe/Ysu/vqLLCqtCAHA== X-Google-Smtp-Source: AGHT+IGTAqnZcaBNVO3Nl4Ef5dtGerZRypTfLYB/jrgUVkEZWYdnxrwGBGsysGpj4YxZly3jtQUA0VOLee5RXF37ZQ== X-Received: from ackerleytng-ctop.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:13f8]) (user=ackerleytng job=sendgmr) by 2002:a17:902:ce91:b0:206:928c:bfd9 with SMTP id d9443c01a7336-20752208a62mr470995ad.6.1726011896097; Tue, 10 Sep 2024 16:44:56 -0700 (PDT) Date: Tue, 10 Sep 2024 23:43:45 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.46.0.598.g6f2099f65c-goog Message-ID: <3fec11d8a007505405eadcf2b3e10ec9051cf6bf.1726009989.git.ackerleytng@google.com> Subject: [RFC PATCH 14/39] KVM: guest_memfd: hugetlb: initialization and cleanup From: Ackerley Tng To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk, jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com, fvdl@google.com, jthoughton@google.com, seanjc@google.com, pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com, jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev, mike.kravetz@oracle.com Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com, qperret@google.com, jhubbard@nvidia.com, willy@infradead.org, shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com, kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org, richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com, ajones@ventanamicro.com, 
Signed-off-by: Ackerley Tng
---
 include/uapi/linux/kvm.h |  26 ++++++
 virt/kvm/guest_memfd.c   | 177 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 197 insertions(+), 6 deletions(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..77de7c4432f6 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 #define KVM_API_VERSION 12

@@ -1558,6 +1559,31 @@ struct kvm_memory_attributes {

 #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)

+#define KVM_GUEST_MEMFD_HUGETLB (1ULL << 1)
+
+/*
+ * Huge page size encoding when KVM_GUEST_MEMFD_HUGETLB is specified, and a huge
+ * page size other than the default is desired.  See hugetlb_encode.h.  All
+ * known huge page size encodings are provided here.  It is the responsibility
+ * of the application to know which sizes are supported on the running system.
+ * See mmap(2) man page for details.
+ */
+#define KVM_GUEST_MEMFD_HUGE_SHIFT	HUGETLB_FLAG_ENCODE_SHIFT
+#define KVM_GUEST_MEMFD_HUGE_MASK	HUGETLB_FLAG_ENCODE_MASK
+
+#define KVM_GUEST_MEMFD_HUGE_64KB	HUGETLB_FLAG_ENCODE_64KB
+#define KVM_GUEST_MEMFD_HUGE_512KB	HUGETLB_FLAG_ENCODE_512KB
+#define KVM_GUEST_MEMFD_HUGE_1MB	HUGETLB_FLAG_ENCODE_1MB
+#define KVM_GUEST_MEMFD_HUGE_2MB	HUGETLB_FLAG_ENCODE_2MB
+#define KVM_GUEST_MEMFD_HUGE_8MB	HUGETLB_FLAG_ENCODE_8MB
+#define KVM_GUEST_MEMFD_HUGE_16MB	HUGETLB_FLAG_ENCODE_16MB
+#define KVM_GUEST_MEMFD_HUGE_32MB	HUGETLB_FLAG_ENCODE_32MB
+#define KVM_GUEST_MEMFD_HUGE_256MB	HUGETLB_FLAG_ENCODE_256MB
+#define KVM_GUEST_MEMFD_HUGE_512MB	HUGETLB_FLAG_ENCODE_512MB
+#define KVM_GUEST_MEMFD_HUGE_1GB	HUGETLB_FLAG_ENCODE_1GB
+#define KVM_GUEST_MEMFD_HUGE_2GB	HUGETLB_FLAG_ENCODE_2GB
+#define KVM_GUEST_MEMFD_HUGE_16GB	HUGETLB_FLAG_ENCODE_16GB
+
 struct kvm_create_guest_memfd {
 	__u64 size;
 	__u64 flags;

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 5d7fd1f708a6..31e1115273e1 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -18,6 +19,16 @@ struct kvm_gmem {
 	struct list_head entry;
 };

+struct kvm_gmem_hugetlb {
+	struct hstate *h;
+	struct hugepage_subpool *spool;
+};
+
+static struct kvm_gmem_hugetlb *kvm_gmem_hgmem(struct inode *inode)
+{
+	return inode->i_mapping->i_private_data;
+}
+
 /**
  * folio_file_pfn - like folio_file_page, but return a pfn.
  * @folio: The folio which contains this index.
@@ -154,6 +165,82 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
 	}
 }

+static inline void kvm_gmem_hugetlb_filemap_remove_folio(struct folio *folio)
+{
+	folio_lock(folio);
+
+	folio_clear_dirty(folio);
+	folio_clear_uptodate(folio);
+	filemap_remove_folio(folio);
+
+	folio_unlock(folio);
+}
+
+/**
+ * Removes folios in range [@lstart, @lend) from page cache/filemap (@mapping),
+ * returning the number of pages freed.
+ */
+static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
+						  struct hstate *h,
+						  loff_t lstart, loff_t lend)
+{
+	const pgoff_t end = lend >> PAGE_SHIFT;
+	pgoff_t next = lstart >> PAGE_SHIFT;
+	struct folio_batch fbatch;
+	int num_freed = 0;
+
+	folio_batch_init(&fbatch);
+	while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
+		int i;
+		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
+			struct folio *folio;
+			pgoff_t hindex;
+			u32 hash;
+
+			folio = fbatch.folios[i];
+			hindex = folio->index >> huge_page_order(h);
+			hash = hugetlb_fault_mutex_hash(mapping, hindex);
+
+			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+			kvm_gmem_hugetlb_filemap_remove_folio(folio);
+			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+
+			num_freed++;
+		}
+		folio_batch_release(&fbatch);
+		cond_resched();
+	}
+
+	return num_freed;
+}
+
+/**
+ * Removes folios in range [@lstart, @lend) from page cache of inode, updates
+ * inode metadata and hugetlb reservations.
+ */
+static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
+						   loff_t lstart, loff_t lend)
+{
+	struct kvm_gmem_hugetlb *hgmem;
+	struct hstate *h;
+	int gbl_reserve;
+	int num_freed;
+
+	hgmem = kvm_gmem_hgmem(inode);
+	h = hgmem->h;
+
+	num_freed = kvm_gmem_hugetlb_filemap_remove_folios(inode->i_mapping,
+							   h, lstart, lend);
+
+	gbl_reserve = hugepage_subpool_put_pages(hgmem->spool, num_freed);
+	hugetlb_acct_memory(h, -gbl_reserve);
+
+	spin_lock(&inode->i_lock);
+	inode->i_blocks -= blocks_per_huge_page(h) * num_freed;
+	spin_unlock(&inode->i_lock);
+}
+
+
 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct list_head *gmem_list = &inode->i_mapping->i_private_list;
@@ -307,8 +394,33 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
 	return get_file_active(&slot->gmem.file);
 }

+static void kvm_gmem_hugetlb_teardown(struct inode *inode)
+{
+	struct kvm_gmem_hugetlb *hgmem;
+
+	truncate_inode_pages_final_prepare(inode->i_mapping);
+	kvm_gmem_hugetlb_truncate_folios_range(inode, 0, LLONG_MAX);
+
+	hgmem = kvm_gmem_hgmem(inode);
+	hugepage_put_subpool(hgmem->spool);
+	kfree(hgmem);
+}
+
+static void kvm_gmem_evict_inode(struct inode *inode)
+{
+	u64 flags = (u64)inode->i_private;
+
+	if (flags & KVM_GUEST_MEMFD_HUGETLB)
+		kvm_gmem_hugetlb_teardown(inode);
+	else
+		truncate_inode_pages_final(inode->i_mapping);
+
+	clear_inode(inode);
+}
+
 static const struct super_operations kvm_gmem_super_operations = {
 	.statfs		= simple_statfs,
+	.evict_inode	= kvm_gmem_evict_inode,
 };

 static int kvm_gmem_init_fs_context(struct fs_context *fc)
@@ -431,6 +543,42 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr	= kvm_gmem_setattr,
 };

+static int kvm_gmem_hugetlb_setup(struct inode *inode, loff_t size, u64 flags)
+{
+	struct kvm_gmem_hugetlb *hgmem;
+	struct hugepage_subpool *spool;
+	int page_size_log;
+	struct hstate *h;
+	long hpages;
+
+	page_size_log = (flags >> KVM_GUEST_MEMFD_HUGE_SHIFT) & KVM_GUEST_MEMFD_HUGE_MASK;
+	h = hstate_sizelog(page_size_log);
+
+	/* Round up to accommodate size requests that don't align with huge pages */
+	hpages = round_up(size, huge_page_size(h)) >> huge_page_shift(h);
+
+	spool = hugepage_new_subpool(h, hpages, hpages, false);
+	if (!spool)
+		goto err;
+
+	hgmem = kzalloc(sizeof(*hgmem), GFP_KERNEL);
+	if (!hgmem)
+		goto err_subpool;
+
+	inode->i_blkbits = huge_page_shift(h);
+
+	hgmem->h = h;
+	hgmem->spool = spool;
+	inode->i_mapping->i_private_data = hgmem;
+
+	return 0;
+
+err_subpool:
+	kfree(spool);
+err:
+	return -ENOMEM;
+}
+
 static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 						      loff_t size, u64 flags)
 {
@@ -443,9 +591,13 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 		return inode;

 	err = security_inode_init_security_anon(inode, &qname, NULL);
-	if (err) {
-		iput(inode);
-		return ERR_PTR(err);
+	if (err)
+		goto out;
+
+	if (flags & KVM_GUEST_MEMFD_HUGETLB) {
+		err = kvm_gmem_hugetlb_setup(inode, size, flags);
+		if (err)
+			goto out;
 	}

 	inode->i_private = (void *)(unsigned long)flags;
@@ -459,6 +611,11 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));

 	return inode;
+
+out:
+	iput(inode);
+
+	return ERR_PTR(err);
 }

 static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
@@ -526,14 +683,22 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	return err;
 }

+#define KVM_GUEST_MEMFD_ALL_FLAGS KVM_GUEST_MEMFD_HUGETLB
+
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 {
 	loff_t size = args->size;
 	u64 flags = args->flags;
-	u64 valid_flags = 0;

-	if (flags & ~valid_flags)
-		return -EINVAL;
+	if (flags & KVM_GUEST_MEMFD_HUGETLB) {
+		/* Allow huge page size encoding in flags */
+		if (flags & ~(KVM_GUEST_MEMFD_ALL_FLAGS |
+			      (KVM_GUEST_MEMFD_HUGE_MASK << KVM_GUEST_MEMFD_HUGE_SHIFT)))
+			return -EINVAL;
+	} else {
+		if (flags & ~KVM_GUEST_MEMFD_ALL_FLAGS)
+			return -EINVAL;
+	}

 	if (size <= 0 || !PAGE_ALIGNED(size))
 		return -EINVAL;
--
2.46.0.598.g6f2099f65c-goog
From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:46 +0000
Message-ID: <768488c67540aa18c200d7ee16e75a3a087022d4.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 15/39] KVM: guest_memfd: hugetlb: allocate and truncate from hugetlb
From: Ackerley Tng

If HugeTLB is requested at guest_memfd creation time, HugeTLB pages
will be used to back guest_memfd.
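The hole-punching path added below splits a range into a partial head
page, a run of whole huge pages, and a partial tail page. As a worked
example of the arithmetic (a sketch mirroring the round_up()/round_down()
logic in kvm_gmem_hugetlb_truncate_range(), with assumed numbers):

	/* With 2MB huge pages, punching [1MB, 7MB) decomposes as: */
	unsigned long hsize = 2UL << 20;		/* 0x200000 */
	loff_t lstart = 1LL << 20, lend = 7LL << 20;
	loff_t full_start = round_up(lstart, hsize);	/* 2MB */
	loff_t full_end = round_down(lend, hsize);	/* 6MB */
	/*
	 * [1MB, 2MB) is zeroed in place (partial head page),
	 * [2MB, 6MB) is truncated as two whole huge folios,
	 * [6MB, 7MB) is zeroed in place (partial tail page).
	 */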
Signed-off-by: Ackerley Tng
---
 virt/kvm/guest_memfd.c | 252 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 239 insertions(+), 13 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 31e1115273e1..2e6f12e2bac8 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -8,6 +8,8 @@
 #include
 #include
 #include
+#include
+#include

 #include "kvm_mm.h"

@@ -29,6 +31,13 @@ static struct kvm_gmem_hugetlb *kvm_gmem_hgmem(struct inode *inode)
 	return inode->i_mapping->i_private_data;
 }

+static bool is_kvm_gmem_hugetlb(struct inode *inode)
+{
+	u64 flags = (u64)inode->i_private;
+
+	return flags & KVM_GUEST_MEMFD_HUGETLB;
+}
+
 /**
  * folio_file_pfn - like folio_file_page, but return a pfn.
  * @folio: The folio which contains this index.
@@ -58,6 +67,9 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo
 	return 0;
 }

+/**
+ * Use the uptodate flag to indicate that the folio is prepared for KVM's usage.
+ */
 static inline void kvm_gmem_mark_prepared(struct folio *folio)
 {
 	folio_mark_uptodate(folio);
@@ -72,13 +84,18 @@ static inline void kvm_gmem_mark_prepared(struct folio *folio)
 static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
 				  gfn_t gfn, struct folio *folio)
 {
-	unsigned long nr_pages, i;
 	pgoff_t index;
 	int r;

-	nr_pages = folio_nr_pages(folio);
-	for (i = 0; i < nr_pages; i++)
-		clear_highpage(folio_page(folio, i));
+	if (folio_test_hugetlb(folio)) {
+		folio_zero_user(folio, folio->index << PAGE_SHIFT);
+	} else {
+		unsigned long nr_pages, i;
+
+		nr_pages = folio_nr_pages(folio);
+		for (i = 0; i < nr_pages; i++)
+			clear_highpage(folio_page(folio, i));
+	}

 	/*
 	 * Preparing huge folios should always be safe, since it should
@@ -103,6 +120,174 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
 	return r;
 }

+static int kvm_gmem_get_mpol_node_nodemask(gfp_t gfp_mask,
+					   struct mempolicy **mpol,
+					   nodemask_t **nodemask)
+{
+	/*
+	 * TODO: mempolicy would probably have to be stored on the inode, use
+	 * task policy for now.
+	 */
+	*mpol = get_task_policy(current);
+
+	/* TODO: ignore interleaving (set ilx to 0) for now. */
+	return policy_node_nodemask(*mpol, gfp_mask, 0, nodemask);
+}
+
+static struct folio *kvm_gmem_hugetlb_alloc_folio(struct hstate *h,
+						  struct hugepage_subpool *spool)
+{
+	bool memcg_charge_was_prepared;
+	struct mem_cgroup *memcg;
+	struct mempolicy *mpol;
+	nodemask_t *nodemask;
+	struct folio *folio;
+	gfp_t gfp_mask;
+	int ret;
+	int nid;
+
+	gfp_mask = htlb_alloc_mask(h);
+
+	memcg = get_mem_cgroup_from_current();
+	ret = mem_cgroup_hugetlb_try_charge(memcg,
+					    gfp_mask | __GFP_RETRY_MAYFAIL,
+					    pages_per_huge_page(h));
+	if (ret == -ENOMEM)
+		goto err;
+
+	memcg_charge_was_prepared = ret != -EOPNOTSUPP;
+
+	/* Pages are only to be taken from guest_memfd subpool and nowhere else. */
+	if (hugepage_subpool_get_pages(spool, 1))
+		goto err_cancel_charge;
+
+	nid = kvm_gmem_get_mpol_node_nodemask(htlb_alloc_mask(h), &mpol,
+					      &nodemask);
+	/*
+	 * charge_cgroup_reservation is false because we didn't make any cgroup
+	 * reservations when creating the guest_memfd subpool.
+	 *
+	 * use_hstate_resv is true because we reserved from global hstate when
+	 * creating the guest_memfd subpool.
+	 */
+	folio = hugetlb_alloc_folio(h, mpol, nid, nodemask, false, true);
+	mpol_cond_put(mpol);
+
+	if (!folio)
+		goto err_put_pages;
+
+	hugetlb_set_folio_subpool(folio, spool);
+
+	if (memcg_charge_was_prepared)
+		mem_cgroup_commit_charge(folio, memcg);
+
+out:
+	mem_cgroup_put(memcg);
+
+	return folio;
+
+err_put_pages:
+	hugepage_subpool_put_pages(spool, 1);
+
+err_cancel_charge:
+	if (memcg_charge_was_prepared)
+		mem_cgroup_cancel_charge(memcg, pages_per_huge_page(h));
+
+err:
+	folio = ERR_PTR(-ENOMEM);
+	goto out;
+}
+
+static int kvm_gmem_hugetlb_filemap_add_folio(struct address_space *mapping,
+					      struct folio *folio, pgoff_t index,
+					      gfp_t gfp)
+{
+	int ret;
+
+	__folio_set_locked(folio);
+	ret = __filemap_add_folio(mapping, folio, index, gfp, NULL);
+	if (unlikely(ret)) {
+		__folio_clear_locked(folio);
+		return ret;
+	}
+
+	/*
+	 * In hugetlb_add_to_page_cache(), there is a call to
+	 * folio_clear_hugetlb_restore_reserve(). This is handled when the pages
+	 * are removed from the page cache in unmap_hugepage_range() ->
+	 * __unmap_hugepage_range() by conditionally calling
+	 * folio_set_hugetlb_restore_reserve(). In kvm_gmem_hugetlb's usage of
+	 * hugetlb, there are no VMAs involved, and pages are never taken from
+	 * the surplus, so when pages are freed, the hstate reserve must be
+	 * restored. Hence, this function makes no call to
+	 * folio_clear_hugetlb_restore_reserve().
+	 */
+
+	/* mark folio dirty so that it will not be removed from cache/inode */
+	folio_mark_dirty(folio);
+
+	return 0;
+}
+
+static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *inode,
+							    pgoff_t index)
+{
+	struct kvm_gmem_hugetlb *hgmem;
+	struct folio *folio;
+	int ret;
+
+	hgmem = kvm_gmem_hgmem(inode);
+	folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
+	if (IS_ERR(folio))
+		return folio;
+
+	/* TODO: Fix index here to be aligned to huge page size. */
+	ret = kvm_gmem_hugetlb_filemap_add_folio(
+		inode->i_mapping, folio, index, htlb_alloc_mask(hgmem->h));
+	if (ret) {
+		folio_put(folio);
+		return ERR_PTR(ret);
+	}
+
+	spin_lock(&inode->i_lock);
+	inode->i_blocks += blocks_per_huge_page(hgmem->h);
+	spin_unlock(&inode->i_lock);
+
+	return folio;
+}
+
+static struct folio *kvm_gmem_get_hugetlb_folio(struct inode *inode,
+						pgoff_t index)
+{
+	struct address_space *mapping;
+	struct folio *folio;
+	struct hstate *h;
+	pgoff_t hindex;
+	u32 hash;
+
+	h = kvm_gmem_hgmem(inode)->h;
+	hindex = index >> huge_page_order(h);
+	mapping = inode->i_mapping;
+
+	/* To lock, we calculate the hash using the hindex and not index. */
+	hash = hugetlb_fault_mutex_hash(mapping, hindex);
+	mutex_lock(&hugetlb_fault_mutex_table[hash]);
+
+	/*
+	 * The filemap is indexed with index and not hindex. Taking lock on
+	 * folio to align with kvm_gmem_get_regular_folio()
+	 */
+	folio = filemap_lock_folio(mapping, index);
+	if (!IS_ERR(folio))
+		goto out;
+
+	folio = kvm_gmem_hugetlb_alloc_and_cache_folio(inode, index);
+out:
+	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+
+	return folio;
+}
+
 /*
  * Returns a locked folio on success.  The caller is responsible for
 * setting the up-to-date flag before the memory is mapped into the guest.
@@ -114,8 +299,10 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
  */
 static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
 {
-	/* TODO: Support huge pages. */
-	return filemap_grab_folio(inode->i_mapping, index);
+	if (is_kvm_gmem_hugetlb(inode))
+		return kvm_gmem_get_hugetlb_folio(inode, index);
+	else
+		return filemap_grab_folio(inode->i_mapping, index);
 }

 static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
@@ -240,6 +427,35 @@ static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
 	spin_unlock(&inode->i_lock);
 }

+static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
+					    loff_t lend)
+{
+	loff_t full_hpage_start;
+	loff_t full_hpage_end;
+	unsigned long hsize;
+	struct hstate *h;
+
+	h = kvm_gmem_hgmem(inode)->h;
+	hsize = huge_page_size(h);
+
+	full_hpage_start = round_up(lstart, hsize);
+	full_hpage_end = round_down(lend, hsize);
+
+	if (lstart < full_hpage_start) {
+		hugetlb_zero_partial_page(h, inode->i_mapping, lstart,
+					  full_hpage_start);
+	}
+
+	if (full_hpage_end > full_hpage_start) {
+		kvm_gmem_hugetlb_truncate_folios_range(inode, full_hpage_start,
+						       full_hpage_end);
+	}
+
+	if (lend > full_hpage_end) {
+		hugetlb_zero_partial_page(h, inode->i_mapping, full_hpage_end,
+					  lend);
+	}
+}

 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
@@ -257,7 +473,12 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 	list_for_each_entry(gmem, gmem_list, entry)
 		kvm_gmem_invalidate_begin(gmem, start, end);

-	truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
+	if (is_kvm_gmem_hugetlb(inode)) {
+		kvm_gmem_hugetlb_truncate_range(inode, offset, offset + len);
+	} else {
+		truncate_inode_pages_range(inode->i_mapping, offset,
+					   offset + len - 1);
+	}

 	list_for_each_entry(gmem, gmem_list, entry)
 		kvm_gmem_invalidate_end(gmem, start, end);
@@ -279,8 +500,15 @@ static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)

 	filemap_invalidate_lock_shared(mapping);

-	start = offset >> PAGE_SHIFT;
-	end = (offset + len) >> PAGE_SHIFT;
+	if (is_kvm_gmem_hugetlb(inode)) {
+		unsigned long hsize = huge_page_size(kvm_gmem_hgmem(inode)->h);
+
+		start = round_down(offset, hsize) >> PAGE_SHIFT;
+		end = round_down(offset + len, hsize) >> PAGE_SHIFT;
+	} else {
+		start = offset >> PAGE_SHIFT;
+		end = (offset + len) >> PAGE_SHIFT;
+	}

 	r = 0;
 	for (index = start; index < end; ) {
@@ -408,9 +636,7 @@ static void kvm_gmem_hugetlb_teardown(struct inode *inode)

 static void kvm_gmem_evict_inode(struct inode *inode)
 {
-	u64 flags = (u64)inode->i_private;
-
-	if (flags & KVM_GUEST_MEMFD_HUGETLB)
+	if (is_kvm_gmem_hugetlb(inode))
 		kvm_gmem_hugetlb_teardown(inode);
 	else
 		truncate_inode_pages_final(inode->i_mapping);
@@ -827,7 +1053,7 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,

 	*pfn = folio_file_pfn(folio, index);
 	if (max_order)
-		*max_order = 0;
+		*max_order = folio_order(folio);

 	*is_prepared = folio_test_uptodate(folio);
 	return folio;
--
2.46.0.598.g6f2099f65c-goog
From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:47 +0000
Message-ID: <6f6b891d693ea0733f4b2737858af914bd70a8b6.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 16/39] KVM: guest_memfd: Add page alignment check for hugetlb guest_memfd
From: Ackerley Tng

When a hugetlb guest_memfd is requested, the requested size should be
aligned to the size of the hugetlb page requested.
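Since the huge page size is carried in the creation flags, the check
added below first recovers the page size before testing alignment. A
worked example with assumed values (the encoding comes from
hugetlb_encode.h, where e.g. the 2MB size is stored as its log2, 21,
shifted into the top bits of the flags):

	u64 flags = KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_2MB;
	int page_size_log = (flags >> KVM_GUEST_MEMFD_HUGE_SHIFT) &
			    KVM_GUEST_MEMFD_HUGE_MASK;	/* 21 */
	u64 page_size = 1ULL << page_size_log;		/* 0x200000 (2MB) */
	/* A 4MB size passes IS_ALIGNED(); a 3MB size fails with -EINVAL. */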
Signed-off-by: Ackerley Tng
---
 virt/kvm/guest_memfd.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 2e6f12e2bac8..eacbfdb950d1 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -909,6 +909,13 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	return err;
 }

+static inline bool kvm_gmem_hugetlb_page_aligned(u32 flags, u64 value)
+{
+	int page_size_log = (flags >> KVM_GUEST_MEMFD_HUGE_SHIFT) & KVM_GUEST_MEMFD_HUGE_MASK;
+	u64 page_size = 1ULL << page_size_log;
+	return IS_ALIGNED(value, page_size);
+}
+
 #define KVM_GUEST_MEMFD_ALL_FLAGS KVM_GUEST_MEMFD_HUGETLB

 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
@@ -921,12 +928,18 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 		if (flags & ~(KVM_GUEST_MEMFD_ALL_FLAGS |
 			      (KVM_GUEST_MEMFD_HUGE_MASK << KVM_GUEST_MEMFD_HUGE_SHIFT)))
 			return -EINVAL;
+
+		if (!kvm_gmem_hugetlb_page_aligned(flags, size))
+			return -EINVAL;
 	} else {
 		if (flags & ~KVM_GUEST_MEMFD_ALL_FLAGS)
 			return -EINVAL;
+
+		if (!PAGE_ALIGNED(size))
+			return -EINVAL;
 	}

-	if (size <= 0 || !PAGE_ALIGNED(size))
+	if (size <= 0)
 		return -EINVAL;

 	return __kvm_gmem_create(kvm, size, flags);
--
2.46.0.598.g6f2099f65c-goog
From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:48 +0000
Message-ID: <2f0572464beebbcd2166fe9d709d0ce33a0cee78.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 17/39] KVM: selftests: Add basic selftests for hugetlb-backed guest_memfd
From: Ackerley Tng

Add tests for 2MB and 1GB page sizes, and update the invalid flags
test for the new KVM_GUEST_MEMFD_HUGETLB flag.
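One behavior worth exercising here (a hedged sketch of an additional
assertion, not part of this patch): with the alignment check from the
previous patch, creation should fail when the requested size is not a
multiple of the requested huge page size, e.g.:

	/* 4KB is not 2MB-aligned, so this should fail with EINVAL. */
	int fd = __vm_create_guest_memfd(vm, getpagesize(),
					 KVM_GUEST_MEMFD_HUGETLB |
					 KVM_GUEST_MEMFD_HUGE_2MB);
	TEST_ASSERT(fd == -1 && errno == EINVAL,
		    "unaligned size should fail with EINVAL");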
Signed-off-by: Ackerley Tng
---
 .../testing/selftests/kvm/guest_memfd_test.c  | 45 ++++++++++++++-----
 1 file changed, 35 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ba0c8e996035..3618ce06663e 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -13,6 +13,7 @@

 #include
 #include
+#include
 #include
 #include
 #include
@@ -122,6 +123,7 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)

 static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
 {
+	uint64_t valid_flags = KVM_GUEST_MEMFD_HUGETLB;
 	size_t page_size = getpagesize();
 	uint64_t flag;
 	size_t size;
@@ -135,6 +137,9 @@ static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
 	}

 	for (flag = 0; flag; flag <<= 1) {
+		if (flag & valid_flags)
+			continue;
+
 		fd = __vm_create_guest_memfd(vm, page_size, flag);
 		TEST_ASSERT(fd == -1 && errno == EINVAL,
 			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
@@ -170,24 +175,16 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }

-int main(int argc, char *argv[])
+static void test_guest_memfd(struct kvm_vm *vm, uint32_t flags, size_t page_size)
 {
-	size_t page_size;
 	size_t total_size;
 	int fd;
-	struct kvm_vm *vm;

 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));

-	page_size = getpagesize();
 	total_size = page_size * 4;

-	vm = vm_create_barebones();
-
-	test_create_guest_memfd_invalid(vm);
-	test_create_guest_memfd_multiple(vm);
-
-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, flags);

 	test_file_read_write(fd);
 	test_mmap(fd, page_size);
@@ -197,3 +194,31 @@ int main(int argc, char *argv[])

 	close(fd);
 }
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	vm = vm_create_barebones();
+
+	test_create_guest_memfd_invalid(vm);
+	test_create_guest_memfd_multiple(vm);
+
printf("Test guest_memfd with 4K pages\n"); + test_guest_memfd(vm, 0, getpagesize()); + printf("\tPASSED\n"); + + printf("Test guest_memfd with 2M pages\n"); + test_guest_memfd(vm, KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_2MB, + 2UL << 20); + printf("\tPASSED\n"); + + printf("Test guest_memfd with 1G pages\n"); + test_guest_memfd(vm, KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_1GB, + 1UL << 30); + printf("\tPASSED\n"); + + return 0; +} --=20 2.46.0.598.g6f2099f65c-goog From nobody Sat Nov 30 07:37:39 2024 Received: from mail-pj1-f73.google.com (mail-pj1-f73.google.com [209.85.216.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C154A1BDAB7 for ; Tue, 10 Sep 2024 23:45:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011905; cv=none; b=qwQxMEDs16OB5iLAcgrZAJwJ8i4yS+WTSbRsQncBYMYb1OkVE+eRZcu128RgGQxwt72z7FurTsYwtxkrYYw9Ao71MdeNwP7SZA6XNd0ptKYPgqAhFBBIP1ajiRpAp8H9Lfj+8XqWTofUnyq/W1NJ+sKAhS3r+hU3X1eF78fprhQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011905; c=relaxed/simple; bh=eWDkcOSZuZ5Pq1xHHeSym7RfOxqvVkcI++gI8lj3qk0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=FBcYwxBbq1nEOR9opkB5nn48yTqLaQoV6FjCMtvnAgtMheSxoweaClLcRE4zg7ls3MilTV/83FOo/hLGlrC747GCU4tq0Gu7FZci0jhuscXdFHnDaErFQZgUkvoguP94YluiS6fc1toaWw4lgjvr2P+gkTym1PJ3SQZuRH20FTA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=TVvaCXSW; arc=none smtp.client-ip=209.85.216.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="TVvaCXSW" Received: by mail-pj1-f73.google.com with SMTP id 98e67ed59e1d1-2d882c11ce4so6332424a91.2 for ; Tue, 10 Sep 2024 16:45:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1726011903; x=1726616703; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YRCNcbi6F+8e/+W6bDlSdC9DBYtBZ6HMw9eZxHX0xEQ=; b=TVvaCXSWAqvlTaSewTvfrylxAXS694NKkoQgWuUEX9xWZ7pCRo5mICqtp+EP9A1kWz Hp7LWM7nZYnj8ssyx7Z7G9UxWikVIwx3PpwhQaGw4UGmyGJztEVnjdtfwY087vPZ5Otx c0Cb4X0yPLRloaMtTK5QpI+W1/IKlErv+nmZ9XYaEI9sfe42msRHzJPGhMSQLlzHXnHG qKpwR5regWKj4lSR/9uXVxzAhprDQQh6bxXJMu6Mm5LWUqc7hNOzO0uMNmDWS17lsFFt x7L401zG4E6n6ZFxD4grdqOuHZGDTPEMlcjne6YrnyAA7WMt6rplWEM/iZk5SKhATgdG KhqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1726011903; x=1726616703; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YRCNcbi6F+8e/+W6bDlSdC9DBYtBZ6HMw9eZxHX0xEQ=; b=GqgCvhuUeM6/0WPIOWe8Uu9qDUJXh6S3RE9wRVxhuWcKYx08+qTevmbPqUg+Cxi6OD L3DNKQYNFtrOJRZ84debq7Vu08fIQcpxtxCV7J8W7oxs2c1HTbfY+LsBNM0RpRi+sgL4 LulyG/LhtDlp/i57bRHiVTa0wDocXo+UM6V1v/sABNZboveCtV9zvwDIoS9cowusYzAL 
From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:49 +0000
Message-ID: <327dadf390b5e397074e5bcc9f85468b1467f9a6.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 18/39] KVM: selftests: Support various types of backing sources for private memory
From: Ackerley Tng

Add support for various types of backing sources for private memory
(in the sense of confidential computing), similar to the backing
sources available for shared memory.
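The new helpers mirror the existing shared-memory backing source API.
A short sketch of how a test is expected to consume them (hypothetical
caller, using only the functions added below):

	enum vm_private_mem_backing_src_type t =
		parse_private_mem_backing_src_type("private_mem_hugetlb_2mb");
	uint64_t gmem_flags = vm_private_mem_backing_src_alias(t)->flag;
	size_t pagesz = get_private_mem_backing_src_pagesz(t);
	/* gmem_flags can then be passed to vm_create_guest_memfd(). */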
Signed-off-by: Ackerley Tng
---
 .../testing/selftests/kvm/include/test_util.h | 16 ++++
 tools/testing/selftests/kvm/lib/test_util.c   | 74 +++++++++++++++++++
 2 files changed, 90 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 3e473058849f..011e757d4e2c 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -142,6 +142,16 @@ struct vm_mem_backing_src_alias {
 	uint32_t flag;
 };

+enum vm_private_mem_backing_src_type {
+	VM_PRIVATE_MEM_SRC_GUEST_MEM,	/* Use default page size */
+	VM_PRIVATE_MEM_SRC_HUGETLB,	/* Use kernel default page size for hugetlb pages */
+	VM_PRIVATE_MEM_SRC_HUGETLB_2MB,
+	VM_PRIVATE_MEM_SRC_HUGETLB_1GB,
+	NUM_PRIVATE_MEM_SRC_TYPES,
+};
+
+#define DEFAULT_VM_PRIVATE_MEM_SRC VM_PRIVATE_MEM_SRC_GUEST_MEM
+
 #define MIN_RUN_DELAY_NS	200000UL

 bool thp_configured(void);
@@ -152,6 +162,12 @@ size_t get_backing_src_pagesz(uint32_t i);
 bool is_backing_src_hugetlb(uint32_t i);
 void backing_src_help(const char *flag);
 enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
+
+void private_mem_backing_src_help(const char *flag);
+enum vm_private_mem_backing_src_type parse_private_mem_backing_src_type(const char *type_name);
+const struct vm_mem_backing_src_alias *vm_private_mem_backing_src_alias(uint32_t i);
+size_t get_private_mem_backing_src_pagesz(uint32_t i);
+
 long get_run_delay(void);

 /*
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 8ed0b74ae837..d0a9b5ee0c01 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include "linux/kernel.h"
+#include

 #include "test_util.h"

@@ -288,6 +289,34 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i)
 	return &aliases[i];
 }

+const struct vm_mem_backing_src_alias *vm_private_mem_backing_src_alias(uint32_t i)
+{
+	static const struct vm_mem_backing_src_alias aliases[] = {
+		[VM_PRIVATE_MEM_SRC_GUEST_MEM] = {
+			.name = "private_mem_guest_mem",
+			.flag = 0,
+		},
+		[VM_PRIVATE_MEM_SRC_HUGETLB] = {
+			.name = "private_mem_hugetlb",
+			.flag = KVM_GUEST_MEMFD_HUGETLB,
+		},
+		[VM_PRIVATE_MEM_SRC_HUGETLB_2MB] = {
+			.name = "private_mem_hugetlb_2mb",
+			.flag = KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_2MB,
+		},
+		[VM_PRIVATE_MEM_SRC_HUGETLB_1GB] = {
+			.name = "private_mem_hugetlb_1gb",
+			.flag = KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_1GB,
+		},
+	};
+	_Static_assert(ARRAY_SIZE(aliases) == NUM_PRIVATE_MEM_SRC_TYPES,
+		       "Missing new backing private mem src types?");
+
+	TEST_ASSERT(i < NUM_PRIVATE_MEM_SRC_TYPES, "Private mem backing src type ID %d too big", i);
+
+	return &aliases[i];
+}
+
 #define MAP_HUGE_PAGE_SIZE(x) (1ULL << ((x >> MAP_HUGE_SHIFT) & MAP_HUGE_MASK))

 size_t get_backing_src_pagesz(uint32_t i)
@@ -308,6 +337,20 @@ size_t get_backing_src_pagesz(uint32_t i)
 	}
 }

+size_t get_private_mem_backing_src_pagesz(uint32_t i)
+{
+	uint32_t flag = vm_private_mem_backing_src_alias(i)->flag;
+
+	switch (i) {
+	case VM_PRIVATE_MEM_SRC_GUEST_MEM:
+		return getpagesize();
+	case VM_PRIVATE_MEM_SRC_HUGETLB:
+		return get_def_hugetlb_pagesz();
+	default:
+		return MAP_HUGE_PAGE_SIZE(flag);
+	}
+}
+
 bool is_backing_src_hugetlb(uint32_t i)
 {
 	return !!(vm_mem_backing_src_alias(i)->flag & MAP_HUGETLB);
@@ -344,6 +387,37 @@ enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name)
 	return -1;
 }

+static void print_available_private_mem_backing_src_types(const char *prefix)
+{
+	int i;
+
+	printf("%sAvailable private mem backing src types:\n", prefix);
+
+	for (i = 0; i < NUM_PRIVATE_MEM_SRC_TYPES; i++)
+		printf("%s    %s\n", prefix, vm_private_mem_backing_src_alias(i)->name);
+}
+
+void private_mem_backing_src_help(const char *flag)
+{
+	printf(" %s: specify the type of memory that should be used to\n"
+	       "     back guest private memory. (default: %s)\n",
+	       flag, vm_private_mem_backing_src_alias(DEFAULT_VM_PRIVATE_MEM_SRC)->name);
+	print_available_private_mem_backing_src_types("     ");
+}
+
+enum vm_private_mem_backing_src_type parse_private_mem_backing_src_type(const char *type_name)
+{
+	int i;
+
+	for (i = 0; i < NUM_PRIVATE_MEM_SRC_TYPES; i++)
+		if (!strcmp(type_name, vm_private_mem_backing_src_alias(i)->name))
+			return i;
+
+	print_available_private_mem_backing_src_types("");
+	TEST_FAIL("Unknown private mem backing src type: %s", type_name);
+	return -1;
+}
+
 long get_run_delay(void)
 {
 	char path[64];
--
2.46.0.598.g6f2099f65c-goog
From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:50 +0000
Message-ID: <41d7d714cfa7cec3e7089a184918da39e93008ee.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 19/39] KVM: selftests: Update test for various private memory backing source types
From: Ackerley Tng

Update private_mem_conversions_test for various private memory backing
source types.
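With the new -p option added below, the conversion test can be pointed
at a hugetlb-backed guest_memfd from the command line, e.g. (an
illustrative invocation):

	./private_mem_conversions_test -s anonymous -p private_mem_hugetlb_2mb -n 2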
Signed-off-by: Ackerley Tng
---
 .../kvm/x86_64/private_mem_conversions_test.c | 28 ++++++++++++++-----
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 82a8d88b5338..71f480c19f92 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -366,14 +366,20 @@ static void *__test_mem_conversions(void *__vcpu)
 	}
 }

-static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t nr_vcpus,
-				 uint32_t nr_memslots)
+static void
+test_mem_conversions(enum vm_mem_backing_src_type src_type,
+		     enum vm_private_mem_backing_src_type private_mem_src_type,
+		     uint32_t nr_vcpus,
+		     uint32_t nr_memslots)
 {
 	/*
 	 * Allocate enough memory so that each vCPU's chunk of memory can be
 	 * naturally aligned with respect to the size of the backing store.
 	 */
-	const size_t alignment = max_t(size_t, SZ_2M, get_backing_src_pagesz(src_type));
+	const size_t alignment = max_t(size_t, SZ_2M,
+				       max_t(size_t,
+					     get_private_mem_backing_src_pagesz(private_mem_src_type),
+					     get_backing_src_pagesz(src_type)));
 	const size_t per_cpu_size = align_up(PER_CPU_DATA_SIZE, alignment);
 	const size_t memfd_size = per_cpu_size * nr_vcpus;
 	const size_t slot_size = memfd_size / nr_memslots;
@@ -394,7 +400,9 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t

 	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));

-	memfd = vm_create_guest_memfd(vm, memfd_size, 0);
+	memfd = vm_create_guest_memfd(
+		vm, memfd_size,
+		vm_private_mem_backing_src_alias(private_mem_src_type)->flag);

 	for (i = 0; i < nr_memslots; i++)
 		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
@@ -440,10 +448,12 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 static void usage(const char *cmd)
 {
 	puts("");
-	printf("usage: %s [-h] [-m nr_memslots] [-s mem_type] [-n nr_vcpus]\n", cmd);
+	printf("usage: %s [-h] [-m nr_memslots] [-s mem_type] [-p private_mem_type] [-n nr_vcpus]\n", cmd);
 	puts("");
 	backing_src_help("-s");
 	puts("");
+	private_mem_backing_src_help("-p");
+	puts("");
 	puts(" -n: specify the number of vcpus (default: 1)");
 	puts("");
 	puts(" -m: specify the number of memslots (default: 1)");
@@ -453,17 +463,21 @@ static void usage(const char *cmd)
 int main(int argc, char *argv[])
 {
 	enum vm_mem_backing_src_type src_type = DEFAULT_VM_MEM_SRC;
+	enum vm_private_mem_backing_src_type private_mem_src_type = DEFAULT_VM_PRIVATE_MEM_SRC;
 	uint32_t nr_memslots = 1;
 	uint32_t nr_vcpus = 1;
 	int opt;

 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));

-	while ((opt = getopt(argc, argv, "hm:s:n:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:s:p:n:")) != -1) {
 		switch (opt) {
 		case 's':
 			src_type = parse_backing_src_type(optarg);
 			break;
+		case 'p':
+			private_mem_src_type = parse_private_mem_backing_src_type(optarg);
+			break;
 		case 'n':
 			nr_vcpus = atoi_positive("nr_vcpus", optarg);
 			break;
@@ -477,7 +491,7 @@ int main(int argc, char *argv[])
 		}
 	}

-	test_mem_conversions(src_type, nr_vcpus, nr_memslots);
+	test_mem_conversions(src_type, private_mem_src_type, nr_vcpus, nr_memslots);

 	return 0;
 }
--
2.46.0.598.g6f2099f65c-goog
Date: Tue, 10 Sep 2024 23:43:51 +0000
Message-ID: <6bfe8c9baabc6ad89ccc2c4481db2b4983cbfd8b.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 20/39] KVM: selftests: Add private_mem_conversions_test.sh
From: Ackerley Tng <ackerleytng@google.com>

Add private_mem_conversions_test.sh to automate testing of the
different combinations of private_mem_conversions_test.

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../x86_64/private_mem_conversions_test.sh    | 88 +++++++++++++++++++
 1 file changed, 88 insertions(+)
 create mode 100755 tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
new file mode 100755
index 000000000000..fb6705fef466
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
@@ -0,0 +1,88 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Wrapper script which runs different test setups of
+# private_mem_conversions_test.
+#
+# tools/testing/selftests/kvm/private_mem_conversions_test.sh
+# Copyright (C) 2023, Google LLC.
+
+set -e
+
+num_vcpus_to_test=4
+num_memslots_to_test=$num_vcpus_to_test
+
+get_default_hugepage_size_in_kB() {
+	grep "Hugepagesize:" /proc/meminfo | grep -o '[[:digit:]]\+'
+}
+
+# Required pages are based on the test setup (see the computation for
+# memfd_size in test_mem_conversions() in private_mem_conversions_test.c)
+
+# These static requirements are set to the maximum required for
+# num_vcpus_to_test, over all the hugetlb-related tests
+required_num_2m_hugepages=$(( 1024 * num_vcpus_to_test ))
+required_num_1g_hugepages=$(( 2 * num_vcpus_to_test ))
+
+# The other hugetlb sizes are not supported on x86_64
+[ "$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 2>/dev/null || echo 0)" -ge "$required_num_2m_hugepages" ] && hugepage_2mb_enabled=1
+[ "$(cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages 2>/dev/null || echo 0)" -ge "$required_num_1g_hugepages" ] && hugepage_1gb_enabled=1
+
+case $(get_default_hugepage_size_in_kB) in
+	2048)
+		hugepage_default_enabled=$hugepage_2mb_enabled
+		;;
+	1048576)
+		hugepage_default_enabled=$hugepage_1gb_enabled
+		;;
+	*)
+		hugepage_default_enabled=0
+		;;
+esac
+
+backing_src_types=( anonymous )
+backing_src_types+=( anonymous_thp )
+[ -n "$hugepage_default_enabled" ] && \
+	backing_src_types+=( anonymous_hugetlb ) || echo "skipping anonymous_hugetlb backing source type"
+[ -n "$hugepage_2mb_enabled" ] && \
+	backing_src_types+=( anonymous_hugetlb_2mb ) || echo "skipping anonymous_hugetlb_2mb backing source type"
+[ -n "$hugepage_1gb_enabled" ] && \
+	backing_src_types+=( anonymous_hugetlb_1gb ) || echo "skipping anonymous_hugetlb_1gb backing source type"
+backing_src_types+=( shmem )
+[ -n "$hugepage_default_enabled" ] && \
+	backing_src_types+=( shared_hugetlb ) || echo "skipping shared_hugetlb backing source type"
+
+private_mem_backing_src_types=( private_mem_guest_mem )
+[ -n "$hugepage_default_enabled" ] && \
+	private_mem_backing_src_types+=( private_mem_hugetlb ) || echo "skipping private_mem_hugetlb backing source type"
+[ -n "$hugepage_2mb_enabled" ] && \
+	private_mem_backing_src_types+=( private_mem_hugetlb_2mb ) || echo "skipping private_mem_hugetlb_2mb backing source type"
+[ -n "$hugepage_1gb_enabled" ] && \
+	private_mem_backing_src_types+=( private_mem_hugetlb_1gb ) || echo "skipping private_mem_hugetlb_1gb backing source type"
+
+set +e
+
+TEST_EXECUTABLE="$(dirname "$0")/private_mem_conversions_test"
+
+(
+	set -e
+
+	for src_type in "${backing_src_types[@]}"; do
+
+		for private_mem_src_type in "${private_mem_backing_src_types[@]}"; do
+			set -x
+
+			$TEST_EXECUTABLE -s "$src_type" -p "$private_mem_src_type" -n $num_vcpus_to_test
+			$TEST_EXECUTABLE -s "$src_type" -p "$private_mem_src_type" -n $num_vcpus_to_test -m $num_memslots_to_test
+
+			{ set +x; } 2>/dev/null
+
+			echo
+
+		done
+
+	done
)
+RET=$?
+
+exit $RET
-- 
2.46.0.598.g6f2099f65c-goog
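As a usage note (assuming the defaults above of num_vcpus_to_test=4),
the static requirements work out to 1024 * 4 = 4096 2M pages and
2 * 4 = 8 1G pages, which could be reserved before running the wrapper
with:

  echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages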
From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:52 +0000
Message-ID: <405825c1c3924ca534da3016dda812df17d6c233.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 21/39] KVM: selftests: Test that guest_memfd usage is reported via hugetlb
From: Ackerley Tng <ackerleytng@google.com>

Using HugeTLB as the huge page allocator for guest_memfd allows reuse
of HugeTLB's reporting mechanism: guest_memfd allocations, faults and
truncations show up in the per-size HugeTLB statistics under sysfs.
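Specifically, the test below samples these per-hstate counters before
and after each operation, e.g. for 2M pages:

  /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_overcommit_hugepages
  /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages
  /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages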
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/guest_memfd_hugetlb_reporting_test.c  | 222 ++++++++++++++++++
 2 files changed, 223 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 48d32c5aa3eb..b3b7e83f39fc 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -134,6 +134,7 @@ TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
 TEST_GEN_PROGS_x86_64 += guest_memfd_test
+TEST_GEN_PROGS_x86_64 += guest_memfd_hugetlb_reporting_test
 TEST_GEN_PROGS_x86_64 += guest_print_test
 TEST_GEN_PROGS_x86_64 += hardware_disable_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c b/tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
new file mode 100644
index 000000000000..cb9fdf0d4ec8
--- /dev/null
+++ b/tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
@@ -0,0 +1,222 @@
+#include <fcntl.h>
+#include <limits.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <linux/falloc.h>
+#include <linux/memfd.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "processor.h"
+
+static int read_int(const char *file_name)
+{
+	FILE *fp;
+	int num;
+
+	fp = fopen(file_name, "r");
+	TEST_ASSERT(fp != NULL, "Error opening file %s!\n", file_name);
+
+	TEST_ASSERT_EQ(fscanf(fp, "%d", &num), 1);
+
+	fclose(fp);
+
+	return num;
+}
+
+enum hugetlb_statistic {
+	FREE_HUGEPAGES,
+	NR_HUGEPAGES,
+	NR_OVERCOMMIT_HUGEPAGES,
+	RESV_HUGEPAGES,
+	SURPLUS_HUGEPAGES,
+	NR_TESTED_HUGETLB_STATISTICS,
+};
+
+static const char *hugetlb_statistics[NR_TESTED_HUGETLB_STATISTICS] = {
+	[FREE_HUGEPAGES] = "free_hugepages",
+	[NR_HUGEPAGES] = "nr_hugepages",
+	[NR_OVERCOMMIT_HUGEPAGES] = "nr_overcommit_hugepages",
+	[RESV_HUGEPAGES] = "resv_hugepages",
+	[SURPLUS_HUGEPAGES] = "surplus_hugepages",
+};
+
+enum test_page_size {
+	TEST_SZ_2M,
+	TEST_SZ_1G,
+	NR_TEST_SIZES,
+};
+
+struct test_param {
+	size_t page_size;
+	int memfd_create_flags;
+	int guest_memfd_flags;
+	char *path_suffix;
+};
+
+const struct test_param *test_params(enum test_page_size size)
+{
+	static const struct test_param params[] = {
+		[TEST_SZ_2M] = {
+			.page_size = PG_SIZE_2M,
+			.memfd_create_flags = MFD_HUGETLB | MFD_HUGE_2MB,
+			.guest_memfd_flags = KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_2MB,
+			.path_suffix = "2048kB",
+		},
+		[TEST_SZ_1G] = {
+			.page_size = PG_SIZE_1G,
+			.memfd_create_flags = MFD_HUGETLB | MFD_HUGE_1GB,
+			.guest_memfd_flags = KVM_GUEST_MEMFD_HUGETLB | KVM_GUEST_MEMFD_HUGE_1GB,
+			.path_suffix = "1048576kB",
+		},
+	};
+
+	return &params[size];
+}
+
+static int read_statistic(enum test_page_size size, enum hugetlb_statistic statistic)
+{
+	char path[PATH_MAX] = "/sys/kernel/mm/hugepages/hugepages-";
+
+	strcat(path, test_params(size)->path_suffix);
+	strcat(path, "/");
+	strcat(path, hugetlb_statistics[statistic]);
+
+	return read_int(path);
+}
+
+static int baseline[NR_TEST_SIZES][NR_TESTED_HUGETLB_STATISTICS];
+
+static void establish_baseline(void)
+{
+	int i, j;
+
+	for (i = 0; i < NR_TEST_SIZES; ++i)
+		for (j = 0; j < NR_TESTED_HUGETLB_STATISTICS; ++j)
+			baseline[i][j] = read_statistic(i, j);
+}
+
+static void assert_stats_at_baseline(void)
+{
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_2M, FREE_HUGEPAGES),
+		       baseline[TEST_SZ_2M][FREE_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_2M, NR_HUGEPAGES),
+		       baseline[TEST_SZ_2M][NR_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_2M, NR_OVERCOMMIT_HUGEPAGES),
+		       baseline[TEST_SZ_2M][NR_OVERCOMMIT_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_2M, RESV_HUGEPAGES),
+		       baseline[TEST_SZ_2M][RESV_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_2M, SURPLUS_HUGEPAGES),
+		       baseline[TEST_SZ_2M][SURPLUS_HUGEPAGES]);
+
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_1G, FREE_HUGEPAGES),
+		       baseline[TEST_SZ_1G][FREE_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_1G, NR_HUGEPAGES),
+		       baseline[TEST_SZ_1G][NR_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_1G, NR_OVERCOMMIT_HUGEPAGES),
+		       baseline[TEST_SZ_1G][NR_OVERCOMMIT_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_1G, RESV_HUGEPAGES),
+		       baseline[TEST_SZ_1G][RESV_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(TEST_SZ_1G, SURPLUS_HUGEPAGES),
+		       baseline[TEST_SZ_1G][SURPLUS_HUGEPAGES]);
+}
+
+static void assert_stats(enum test_page_size size, int num_reserved, int num_faulted)
+{
+	TEST_ASSERT_EQ(read_statistic(size, FREE_HUGEPAGES),
+		       baseline[size][FREE_HUGEPAGES] - num_faulted);
+	TEST_ASSERT_EQ(read_statistic(size, NR_HUGEPAGES),
+		       baseline[size][NR_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(size, NR_OVERCOMMIT_HUGEPAGES),
+		       baseline[size][NR_OVERCOMMIT_HUGEPAGES]);
+	TEST_ASSERT_EQ(read_statistic(size, RESV_HUGEPAGES),
+		       baseline[size][RESV_HUGEPAGES] + num_reserved - num_faulted);
+	TEST_ASSERT_EQ(read_statistic(size, SURPLUS_HUGEPAGES),
+		       baseline[size][SURPLUS_HUGEPAGES]);
+}
+
+/* Use hugetlb behavior as a baseline. guest_memfd should have comparable behavior. */
+static void test_hugetlb_behavior(enum test_page_size test_size)
+{
+	const struct test_param *param;
+	char *mem;
+	int memfd;
+
+	param = test_params(test_size);
+
+	assert_stats_at_baseline();
+
+	memfd = memfd_create("guest_memfd_hugetlb_reporting_test",
+			     param->memfd_create_flags);
+
+	mem = mmap(NULL, param->page_size, PROT_READ | PROT_WRITE,
+		   MAP_SHARED | MAP_HUGETLB, memfd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "Couldn't mmap()");
+
+	assert_stats(test_size, 1, 0);
+
+	*mem = 'A';
+
+	assert_stats(test_size, 1, 1);
+
+	munmap(mem, param->page_size);
+
+	assert_stats(test_size, 1, 1);
+
+	madvise(mem, param->page_size, MADV_DONTNEED);
+
+	assert_stats(test_size, 1, 1);
+
+	madvise(mem, param->page_size, MADV_REMOVE);
+
+	assert_stats(test_size, 1, 1);
+
+	close(memfd);
+
+	assert_stats_at_baseline();
+}
+
+static void test_guest_memfd_behavior(enum test_page_size test_size)
+{
+	const struct test_param *param;
+	struct kvm_vm *vm;
+	int guest_memfd;
+
+	param = test_params(test_size);
+
+	assert_stats_at_baseline();
+
+	vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
+
+	guest_memfd = vm_create_guest_memfd(vm, param->page_size,
+					    param->guest_memfd_flags);
+
+	assert_stats(test_size, 1, 0);
+
+	fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE, 0, param->page_size);
+
+	assert_stats(test_size, 1, 1);
+
+	fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+		  param->page_size);
+
+	assert_stats(test_size, 1, 0);
+
+	close(guest_memfd);
+
+	assert_stats_at_baseline();
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	establish_baseline();
+
+	test_hugetlb_behavior(TEST_SZ_2M);
+	test_hugetlb_behavior(TEST_SZ_1G);
+
+	test_guest_memfd_behavior(TEST_SZ_2M);
+	test_guest_memfd_behavior(TEST_SZ_1G);
+}
-- 
2.46.0.598.g6f2099f65c-goog
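To make the expected accounting concrete (this restates the asserts
above): for a one-page guest_memfd, creating the file reserves a page,
so resv_hugepages rises by 1 while free_hugepages is unchanged;
fallocate() faults the page in, so free_hugepages drops by 1 and
resv_hugepages returns to baseline; punching a hole returns the page,
restoring the reservation; and closing the file brings every counter
back to baseline. nr_hugepages, nr_overcommit_hugepages and
surplus_hugepages are expected to stay at baseline throughout.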

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:53 +0000
Subject: [RFC PATCH 22/39] mm: hugetlb: Expose vmemmap optimization functions
From: Ackerley Tng <ackerleytng@google.com>

These functions will need to be used by guest_memfd when
splitting/reconstructing HugeTLB pages.
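As a rough sketch of the intended calling convention outside mm/ (a
hypothetical caller, not part of this patch):

	/*
	 * Restore the full vmemmap before touching tail struct pages, and
	 * re-apply the optimization when done. With
	 * CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP disabled, restore returns 0
	 * and optimize is a no-op, so callers need no special casing.
	 */
	static int example_with_vmemmap_restored(const struct hstate *h,
						 struct folio *folio)
	{
		int ret;

		ret = hugetlb_vmemmap_restore_folio(h, folio);
		if (ret)
			return ret;	/* typically -ENOMEM */

		/* ... manipulate tail struct pages here ... */

		hugetlb_vmemmap_optimize_folio(h, folio);
		return 0;
	}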
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 include/linux/hugetlb.h | 14 ++++++++++++++
 mm/hugetlb_vmemmap.h    | 11 -----------
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 752062044b0b..7ba4ed9e0001 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -284,6 +284,20 @@ bool is_hugetlb_entry_migration(pte_t pte);
 bool is_hugetlb_entry_hwpoisoned(pte_t pte);
 void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
 
+#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio);
+void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio);
+#else
+static inline int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio)
+{
+	return 0;
+}
+
+static inline void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio)
+{
+}
+#endif
+
 #else /* !CONFIG_HUGETLB_PAGE */
 
 static inline void hugetlb_dup_vma_private(struct vm_area_struct *vma)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 2fcae92d3359..e702ace3b42f 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -18,11 +18,9 @@
 #define HUGETLB_VMEMMAP_RESERVE_PAGES	(HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page))
 
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio);
 long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 			struct list_head *folio_list,
 			struct list_head *non_hvo_folios);
-void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio);
 void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
 
 static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
@@ -43,11 +41,6 @@ static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate
 	return size > 0 ? size : 0;
 }
 #else
-static inline int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio)
-{
-	return 0;
-}
-
 static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 			struct list_head *folio_list,
 			struct list_head *non_hvo_folios)
@@ -56,10 +49,6 @@ static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	return 0;
 }
 
-static inline void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio)
-{
-}
-
 static inline void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
 {
 }
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:54 +0000
Message-ID: <226a836ca381824cfe17ed42be5cbf9972b09ab1.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 23/39] mm: hugetlb: Expose HugeTLB functions for promoting/demoting pages
From: Ackerley Tng <ackerleytng@google.com>

These functions will be used by guest_memfd to split/reconstruct
HugeTLB pages.
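A minimal sketch of how the two pair up for a caller outside
mm/hugetlb.c (hypothetical function; assumes the folio's vmemmap has
already been restored, see the previous patch):

	static void example_split_then_rebuild(struct hstate *h, struct folio *folio)
	{
		unsigned int order = huge_page_order(h);

		/* Tear down the compound metadata, leaving order-0 pages. */
		destroy_compound_gigantic_folio(folio, order);

		/* ... use the individual pages ... */

		/* Rebuild; fails if a tail page still holds a reference. */
		if (!prep_compound_gigantic_folio(folio, order))
			pr_warn("reconstruction failed\n");
	}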
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 include/linux/hugetlb.h | 15 +++++++++++++++
 mm/hugetlb.c            |  8 ++------
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7ba4ed9e0001..ac9d4ada52bd 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -298,6 +298,21 @@ static inline void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct
 }
 #endif
 
+#ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
+bool prep_compound_gigantic_folio(struct folio *folio, unsigned int order);
+void destroy_compound_gigantic_folio(struct folio *folio, unsigned int order);
+#else
+static inline bool prep_compound_gigantic_folio(struct folio *folio, unsigned int order)
+{
+	return false;
+}
+
+static inline void destroy_compound_gigantic_folio(struct folio *folio,
+						   unsigned int order)
+{
+}
+#endif
+
 #else /* !CONFIG_HUGETLB_PAGE */
 
 static inline void hugetlb_dup_vma_private(struct vm_area_struct *vma)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 372d8294fb2f..8f2b7b411b60 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1533,8 +1533,7 @@ static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_folio(struct folio *folio,
-					unsigned int order)
+void destroy_compound_gigantic_folio(struct folio *folio, unsigned int order)
 {
 	__destroy_compound_gigantic_folio(folio, order, false);
 }
@@ -1609,8 +1608,6 @@ static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 }
 static inline void free_gigantic_folio(struct folio *folio,
 				       unsigned int order) { }
-static inline void destroy_compound_gigantic_folio(struct folio *folio,
-					unsigned int order) { }
 #endif
 
 /*
@@ -2120,8 +2117,7 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
 	return false;
 }
 
-static bool prep_compound_gigantic_folio(struct folio *folio,
-					unsigned int order)
+bool prep_compound_gigantic_folio(struct folio *folio, unsigned int order)
 {
 	return __prep_compound_gigantic_folio(folio, order, false);
 }
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:55 +0000
Message-ID: <55b2d15ddd03b4c7df195cace3dff83ffcbfa71c.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 24/39] mm: hugetlb: Add functions to add/move/remove from hugetlb lists
From: Ackerley Tng <ackerleytng@google.com>

These functions are introduced in hugetlb.c so the private hugetlb_lock
can be accessed. hugetlb_lock is reused for this PoC, but a separate
lock should be used in a future revision to avoid interference due to
hash collisions with HugeTLB's usage of this lock.

Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            | 21 +++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ac9d4ada52bd..0f3f920ad608 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -164,6 +164,9 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						vm_flags_t vm_flags);
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 						long freed);
+void hugetlb_folio_list_add(struct folio *folio, struct list_head *list);
+void hugetlb_folio_list_move(struct folio *folio, struct list_head *list);
+void hugetlb_folio_list_del(struct folio *folio);
 bool isolate_hugetlb(struct folio *folio, struct list_head *list);
 int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
 int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8f2b7b411b60..60e72214d5bf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7264,6 +7264,27 @@ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 	return 0;
 }
 
+void hugetlb_folio_list_add(struct folio *folio, struct list_head *list)
+{
+	spin_lock_irq(&hugetlb_lock);
+	list_add(&folio->lru, list);
+	spin_unlock_irq(&hugetlb_lock);
+}
+
+void hugetlb_folio_list_move(struct folio *folio, struct list_head *list)
+{
+	spin_lock_irq(&hugetlb_lock);
+	list_move_tail(&folio->lru, list);
+	spin_unlock_irq(&hugetlb_lock);
+}
+
+void hugetlb_folio_list_del(struct folio *folio)
+{
+	spin_lock_irq(&hugetlb_lock);
+	list_del(&folio->lru);
+	spin_unlock_irq(&hugetlb_lock);
+}
+
 #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
 static unsigned long page_table_shareable(struct vm_area_struct *svma,
 				struct vm_area_struct *vma,
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:56 +0000
Subject: [RFC PATCH 25/39] KVM: guest_memfd: Split HugeTLB pages for guest_memfd use
From: Ackerley Tng <ackerleytng@google.com>

From: Vishal Annapurve <vannapurve@google.com>

In this patch, newly allocated HugeTLB pages are split to 4K regular
pages before providing them to the requester (fallocate() or KVM). The
pages are then reconstructed/merged to HugeTLB pages before the HugeTLB
pages are returned to HugeTLB.

This is an intermediate step to build page splitting/merging
functionality before allowing guest_memfd files to be mmap()ed. (A
side-by-side summary of the split and reconstruct steps follows the
patch.)

Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 virt/kvm/guest_memfd.c | 299 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 281 insertions(+), 18 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index eacbfdb950d1..8151df2c03e5 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -229,31 +229,206 @@ static int kvm_gmem_hugetlb_filemap_add_folio(struct address_space *mapping,
 	return 0;
 }
 
+struct kvm_gmem_split_stash {
+	struct {
+		unsigned long _flags_2;
+		unsigned long _head_2;
+
+		void *_hugetlb_subpool;
+		void *_hugetlb_cgroup;
+		void *_hugetlb_cgroup_rsvd;
+		void *_hugetlb_hwpoison;
+	};
+	void *hugetlb_private;
+};
+
+static int kvm_gmem_hugetlb_stash_metadata(struct folio *folio)
+{
+	struct kvm_gmem_split_stash *stash;
+
+	stash = kmalloc(sizeof(*stash), GFP_KERNEL);
+	if (!stash)
+		return -ENOMEM;
+
+	stash->_flags_2 = folio->_flags_2;
+	stash->_head_2 = folio->_head_2;
+	stash->_hugetlb_subpool = folio->_hugetlb_subpool;
+	stash->_hugetlb_cgroup = folio->_hugetlb_cgroup;
+	stash->_hugetlb_cgroup_rsvd = folio->_hugetlb_cgroup_rsvd;
+	stash->_hugetlb_hwpoison = folio->_hugetlb_hwpoison;
+	stash->hugetlb_private = folio_get_private(folio);
+
+	folio_change_private(folio, (void *)stash);
+
+	return 0;
+}
+
+static int kvm_gmem_hugetlb_unstash_metadata(struct folio *folio)
+{
+	struct kvm_gmem_split_stash *stash;
+
+	stash = folio_get_private(folio);
+
+	if (!stash)
+		return -EINVAL;
+
+	folio->_flags_2 = stash->_flags_2;
+	folio->_head_2 = stash->_head_2;
+	folio->_hugetlb_subpool = stash->_hugetlb_subpool;
+	folio->_hugetlb_cgroup = stash->_hugetlb_cgroup;
+	folio->_hugetlb_cgroup_rsvd = stash->_hugetlb_cgroup_rsvd;
+	folio->_hugetlb_hwpoison = stash->_hugetlb_hwpoison;
+	folio_change_private(folio, stash->hugetlb_private);
+
+	kfree(stash);
+
+	return 0;
+}
+
+/**
+ * Reconstruct a HugeTLB folio from a contiguous block of folios where the first
+ * of the contiguous folios is @folio.
+ *
+ * The size of the contiguous block is of huge_page_size(@h). All the folios in
+ * the block are checked to have a refcount of 1 before reconstruction.
+ * After reconstruction, the reconstructed folio has a refcount of 1.
+ *
+ * Return 0 on success and negative error otherwise.
+ */
+static int kvm_gmem_hugetlb_reconstruct_folio(struct hstate *h, struct folio *folio)
+{
+	int ret;
+
+	WARN_ON(folio->index & ((1UL << huge_page_order(h)) - 1));
+
+	ret = kvm_gmem_hugetlb_unstash_metadata(folio);
+	if (ret)
+		return ret;
+
+	if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		kvm_gmem_hugetlb_stash_metadata(folio);
+		return -ENOMEM;
+	}
+
+	__folio_set_hugetlb(folio);
+
+	folio_set_count(folio, 1);
+
+	hugetlb_vmemmap_optimize_folio(h, folio);
+
+	return 0;
+}
+
+/* Basically folio_set_order(folio, order) without the checks. */
+static inline void kvm_gmem_folio_set_order(struct folio *folio, unsigned int order)
+{
+	folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
+#ifdef CONFIG_64BIT
+	folio->_folio_nr_pages = 1U << order;
+#endif
+}
+
+/**
+ * Split a HugeTLB @folio of size huge_page_size(@h).
+ *
+ * After splitting, each split folio has a refcount of 1. There are no checks on
+ * refcounts before splitting.
+ *
+ * Return 0 on success and negative error otherwise.
+ */
+static int kvm_gmem_hugetlb_split_folio(struct hstate *h, struct folio *folio)
+{
+	int ret;
+
+	ret = hugetlb_vmemmap_restore_folio(h, folio);
+	if (ret)
+		return ret;
+
+	ret = kvm_gmem_hugetlb_stash_metadata(folio);
+	if (ret) {
+		hugetlb_vmemmap_optimize_folio(h, folio);
+		return ret;
+	}
+
+	kvm_gmem_folio_set_order(folio, 0);
+
+	destroy_compound_gigantic_folio(folio, huge_page_order(h));
+	__folio_clear_hugetlb(folio);
+
+	/*
+	 * Remove the first folio from h->hugepage_activelist since it is no
+	 * longer a HugeTLB page. The other split pages should not be on any
+	 * lists.
+	 */
+	hugetlb_folio_list_del(folio);
+
+	return 0;
+}
+
 static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *inode,
 							    pgoff_t index)
 {
+	struct folio *allocated_hugetlb_folio;
+	pgoff_t hugetlb_first_subpage_index;
+	struct page *hugetlb_first_subpage;
 	struct kvm_gmem_hugetlb *hgmem;
-	struct folio *folio;
+	struct page *requested_page;
 	int ret;
+	int i;
 
 	hgmem = kvm_gmem_hgmem(inode);
-	folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
-	if (IS_ERR(folio))
-		return folio;
+	allocated_hugetlb_folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
+	if (IS_ERR(allocated_hugetlb_folio))
+		return allocated_hugetlb_folio;
+
+	requested_page = folio_file_page(allocated_hugetlb_folio, index);
+	hugetlb_first_subpage = folio_file_page(allocated_hugetlb_folio, 0);
+	hugetlb_first_subpage_index = index & (huge_page_mask(hgmem->h) >> PAGE_SHIFT);
 
-	/* TODO: Fix index here to be aligned to huge page size. */
-	ret = kvm_gmem_hugetlb_filemap_add_folio(
-		inode->i_mapping, folio, index, htlb_alloc_mask(hgmem->h));
+	ret = kvm_gmem_hugetlb_split_folio(hgmem->h, allocated_hugetlb_folio);
 	if (ret) {
-		folio_put(folio);
+		folio_put(allocated_hugetlb_folio);
 		return ERR_PTR(ret);
 	}
 
+	for (i = 0; i < pages_per_huge_page(hgmem->h); ++i) {
+		struct folio *folio = page_folio(nth_page(hugetlb_first_subpage, i));
+
+		ret = kvm_gmem_hugetlb_filemap_add_folio(inode->i_mapping,
+							 folio,
+							 hugetlb_first_subpage_index + i,
+							 htlb_alloc_mask(hgmem->h));
+		if (ret) {
+			/* TODO: handle cleanup properly. */
+			pr_err("Handle cleanup properly index=%lx, ret=%d\n",
+			       hugetlb_first_subpage_index + i, ret);
+			dump_page(nth_page(hugetlb_first_subpage, i), "check");
+			return ERR_PTR(ret);
+		}
+
+		/*
+		 * Skip unlocking for the requested index since
+		 * kvm_gmem_get_folio() returns a locked folio.
+		 *
+		 * Do folio_put() to drop the refcount that came with the folio,
+		 * from splitting the folio. Splitting the folio has a refcount
+		 * to be in line with hugetlb_alloc_folio(), which returns a
+		 * folio with refcount 1.
+		 *
+		 * Skip folio_put() for requested index since
+		 * kvm_gmem_get_folio() returns a folio with refcount 1.
+		 */
+		if (hugetlb_first_subpage_index + i != index) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+	}
+
 	spin_lock(&inode->i_lock);
 	inode->i_blocks += blocks_per_huge_page(hgmem->h);
 	spin_unlock(&inode->i_lock);
 
-	return folio;
+	return page_folio(requested_page);
 }
 
 static struct folio *kvm_gmem_get_hugetlb_folio(struct inode *inode,
@@ -365,7 +540,9 @@ static inline void kvm_gmem_hugetlb_filemap_remove_folio(struct folio *folio)
 
 /**
  * Removes folios in range [@lstart, @lend) from page cache/filemap (@mapping),
- * returning the number of pages freed.
+ * returning the number of HugeTLB pages freed.
+ *
+ * @lend - @lstart must be a multiple of the HugeTLB page size.
  */
 static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
 						  struct hstate *h,
@@ -373,37 +550,69 @@ static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
 {
 	const pgoff_t end = lend >> PAGE_SHIFT;
 	pgoff_t next = lstart >> PAGE_SHIFT;
+	LIST_HEAD(folios_to_reconstruct);
 	struct folio_batch fbatch;
+	struct folio *folio, *tmp;
 	int num_freed = 0;
+	int i;
 
+	/*
+	 * TODO: Iterate over huge_page_size(h) blocks to avoid taking and
+	 * releasing hugetlb_fault_mutex_table[hash] lock so often. When
+	 * truncating, lstart and lend should be clipped to the size of this
+	 * guest_memfd file, otherwise there would be too many iterations.
+	 */
	folio_batch_init(&fbatch);
 	while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
-		int i;
 		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
 			struct folio *folio;
 			pgoff_t hindex;
 			u32 hash;
 
 			folio = fbatch.folios[i];
+
 			hindex = folio->index >> huge_page_order(h);
 			hash = hugetlb_fault_mutex_hash(mapping, hindex);
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+
+			/*
+			 * Collect first pages of HugeTLB folios for
+			 * reconstruction later.
+			 */
+			if ((folio->index & ~(huge_page_mask(h) >> PAGE_SHIFT)) == 0)
+				list_add(&folio->lru, &folios_to_reconstruct);
+
+			/*
+			 * Before removing from filemap, take a reference so
+			 * sub-folios don't get freed. Don't free the sub-folios
+			 * until after reconstruction.
+			 */
+			folio_get(folio);
 
 			kvm_gmem_hugetlb_filemap_remove_folio(folio);
-			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-			num_freed++;
+			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 		}
 		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 
+	list_for_each_entry_safe(folio, tmp, &folios_to_reconstruct, lru) {
+		kvm_gmem_hugetlb_reconstruct_folio(h, folio);
+		hugetlb_folio_list_move(folio, &h->hugepage_activelist);
+
+		folio_put(folio);
+		num_freed++;
+	}
+
 	return num_freed;
 }
 
 /**
  * Removes folios in range [@lstart, @lend) from page cache of inode, updates
  * inode metadata and hugetlb reservations.
+ *
+ * @lend - @lstart must be a multiple of the HugeTLB page size.
  */
 static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
						   loff_t lstart, loff_t lend)
@@ -427,6 +636,56 @@ static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
 	spin_unlock(&inode->i_lock);
 }
 
+/**
+ * Zeroes offsets [@start, @end) in a folio from @mapping.
+ *
+ * [@start, @end) must be within the same folio.
+ */
+static void kvm_gmem_zero_partial_page(
+	struct address_space *mapping, loff_t start, loff_t end)
+{
+	struct folio *folio;
+	pgoff_t idx = start >> PAGE_SHIFT;
+
+	folio = filemap_lock_folio(mapping, idx);
+	if (IS_ERR(folio))
+		return;
+
+	start = offset_in_folio(folio, start);
+	end = offset_in_folio(folio, end);
+	if (!end)
+		end = folio_size(folio);
+
+	folio_zero_segment(folio, (size_t)start, (size_t)end);
+	folio_unlock(folio);
+	folio_put(folio);
+}
+
+/**
+ * Zeroes all pages in range [@start, @end) in @mapping.
+ *
+ * hugetlb_zero_partial_page() would work if this had been a full page, but is
+ * not suitable since the pages have been split.
+ *
+ * truncate_inode_pages_range() isn't the right function because it removes
+ * pages from the page cache; this function only zeroes the pages.
+ */
+static void kvm_gmem_hugetlb_zero_split_pages(struct address_space *mapping,
+					      loff_t start, loff_t end)
+{
+	loff_t aligned_start;
+	loff_t index;
+
+	aligned_start = round_up(start, PAGE_SIZE);
+
+	kvm_gmem_zero_partial_page(mapping, start, min(aligned_start, end));
+
+	for (index = aligned_start; index < end; index += PAGE_SIZE) {
+		kvm_gmem_zero_partial_page(mapping, index,
+					   min((loff_t)(index + PAGE_SIZE), end));
+	}
+}
+
 static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
					    loff_t lend)
 {
@@ -442,8 +701,8 @@ static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
 	full_hpage_end = round_down(lend, hsize);
 
 	if (lstart < full_hpage_start) {
-		hugetlb_zero_partial_page(h, inode->i_mapping, lstart,
-					  full_hpage_start);
+		kvm_gmem_hugetlb_zero_split_pages(inode->i_mapping, lstart,
+						  full_hpage_start);
 	}
 
 	if (full_hpage_end > full_hpage_start) {
@@ -452,8 +711,8 @@ static void kvm_gmem_hugetlb_truncate_range(struct inode *inode, loff_t lstart,
 	}
 
 	if (lend > full_hpage_end) {
-		hugetlb_zero_partial_page(h, inode->i_mapping, full_hpage_end,
-					  lend);
+		kvm_gmem_hugetlb_zero_split_pages(inode->i_mapping, full_hpage_end,
+						  lend);
 	}
 }
 
@@ -1060,6 +1319,10 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
 
 	if (folio_test_hwpoison(folio)) {
 		folio_unlock(folio);
+		/*
+		 * TODO: this folio may be part of a HugeTLB folio. Perhaps
+		 * reconstruct and then free page?
+		 */
 		folio_put(folio);
 		return ERR_PTR(-EHWPOISON);
 	}
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:39 2024
Date: Tue, 10 Sep 2024 23:43:57 +0000
Subject: [RFC PATCH 26/39] KVM: guest_memfd: Track faultability within a struct kvm_gmem_private
From: Ackerley Tng <ackerleytng@google.com>

The faultability xarray is stored on the inode since faultability is a
property of the guest_memfd's memory contents.

In this RFC, presence of an entry in the xarray indicates that a page is
faultable, but this could be flipped so that presence indicates that a
page is unfaultable. For flexibility, a special value "FAULT" is used
instead of a simple boolean.

However, at some stages of a VM's lifecycle there could be more private
pages, and at other stages there could be more shared pages.

This is likely to be replaced by a better data structure in a future
revision to better support ranges.

Also store struct kvm_gmem_hugetlb as a pointer, hgmem, within the new
struct kvm_gmem_inode_private, which is now what
inode->i_mapping->i_private_data points to.
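To make the marking scheme concrete, here is a minimal standalone sketch
of the idea (not part of this patch; the example_* names are illustrative
only), using the xarray API directly:

	#include <linux/types.h>
	#include <linux/xarray.h>

	#define EXAMPLE_FAULTABLE_VAL 0x4641554c54 /* "FAULT" */

	static DEFINE_XARRAY(example_faultability);

	/* Storing NULL erases the entry; a tagged value marks the index. */
	static int example_set_faultable(pgoff_t index, bool faultable)
	{
		void *val = faultable ? xa_mk_value(EXAMPLE_FAULTABLE_VAL) : NULL;

		return xa_err(xa_store(&example_faultability, index, val,
				       GFP_KERNEL));
	}

	/*
	 * An absent entry loads as NULL, which never compares equal to the
	 * tagged value, so unmarked indices read back as unfaultable.
	 */
	static bool example_is_faultable(pgoff_t index)
	{
		return xa_to_value(xa_load(&example_faultability, index)) ==
		       EXAMPLE_FAULTABLE_VAL;
	}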
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 virt/kvm/guest_memfd.c | 105 ++++++++++++++++++++++++++++++++++++-----
 1 file changed, 94 insertions(+), 11 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8151df2c03e5..b603518f7b62 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -26,11 +26,21 @@ struct kvm_gmem_hugetlb {
 	struct hugepage_subpool *spool;
 };
 
-static struct kvm_gmem_hugetlb *kvm_gmem_hgmem(struct inode *inode)
+struct kvm_gmem_inode_private {
+	struct xarray faultability;
+	struct kvm_gmem_hugetlb *hgmem;
+};
+
+static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode)
 {
 	return inode->i_mapping->i_private_data;
 }
 
+static struct kvm_gmem_hugetlb *kvm_gmem_hgmem(struct inode *inode)
+{
+	return kvm_gmem_private(inode)->hgmem;
+}
+
 static bool is_kvm_gmem_hugetlb(struct inode *inode)
 {
 	u64 flags = (u64)inode->i_private;
@@ -38,6 +48,57 @@ static bool is_kvm_gmem_hugetlb(struct inode *inode)
 	return flags & KVM_GUEST_MEMFD_HUGETLB;
 }
 
+#define KVM_GMEM_FAULTABILITY_VALUE 0x4641554c54 /* FAULT */
+
+/**
+ * Set faultability of given range of inode indices [@start, @end) to
+ * @faultable. Return 0 if attributes were successfully updated or negative
+ * errno on error.
+ */
+static int kvm_gmem_set_faultable(struct inode *inode, pgoff_t start, pgoff_t end,
+				  bool faultable)
+{
+	struct xarray *faultability;
+	void *val;
+	pgoff_t i;
+
+	/*
+	 * The expectation is that fewer pages are faultable, hence, to save
+	 * memory, entries are created for faultable pages as opposed to
+	 * creating entries for non-faultable pages.
+	 */
+	val = faultable ? xa_mk_value(KVM_GMEM_FAULTABILITY_VALUE) : NULL;
+	faultability = &kvm_gmem_private(inode)->faultability;
+
+	/*
+	 * TODO replace this with something else (maybe interval
+	 * tree?). store_range doesn't quite do what we expect if overlapping
+	 * ranges are specified: if we store_range(5, 10, val) and then
+	 * store_range(7, 12, NULL), the entire range [5, 12] will be NULL. For
+	 * now, use the slower xa_store() to store individual entries on indices
+	 * to avoid this.
+	 */
+	for (i = start; i < end; i++) {
+		int r;
+
+		r = xa_err(xa_store(faultability, i, val, GFP_KERNEL_ACCOUNT));
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+/**
+ * Return true if the page at @index is allowed to be faulted in.
+ */
+static bool kvm_gmem_is_faultable(struct inode *inode, pgoff_t index)
+{
+	struct xarray *faultability = &kvm_gmem_private(inode)->faultability;
+
+	return xa_to_value(xa_load(faultability, index)) == KVM_GMEM_FAULTABILITY_VALUE;
+}
+
 /**
  * folio_file_pfn - like folio_file_page, but return a pfn.
  * @folio: The folio which contains this index.
@@ -895,11 +956,21 @@ static void kvm_gmem_hugetlb_teardown(struct inode *inode)
 
 static void kvm_gmem_evict_inode(struct inode *inode)
 {
+	struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
+
+	/*
+	 * .evict_inode can be called before faultability is set up if there are
+	 * issues during inode creation.
+	 */
+	if (private)
+		xa_destroy(&private->faultability);
+
 	if (is_kvm_gmem_hugetlb(inode))
 		kvm_gmem_hugetlb_teardown(inode);
 	else
 		truncate_inode_pages_final(inode->i_mapping);
 
+	kfree(private);
 	clear_inode(inode);
 }
 
@@ -1028,7 +1099,9 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr	= kvm_gmem_setattr,
 };
 
-static int kvm_gmem_hugetlb_setup(struct inode *inode, loff_t size, u64 flags)
+static int kvm_gmem_hugetlb_setup(struct inode *inode,
+				  struct kvm_gmem_inode_private *private,
+				  loff_t size, u64 flags)
 {
 	struct kvm_gmem_hugetlb *hgmem;
 	struct hugepage_subpool *spool;
@@ -1036,6 +1109,10 @@ static int kvm_gmem_hugetlb_setup(struct inode *inode, loff_t size, u64 flags)
 	struct hstate *h;
 	long hpages;
 
+	hgmem = kzalloc(sizeof(*hgmem), GFP_KERNEL);
+	if (!hgmem)
+		return -ENOMEM;
+
 	page_size_log = (flags >> KVM_GUEST_MEMFD_HUGE_SHIFT) & KVM_GUEST_MEMFD_HUGE_MASK;
 	h = hstate_sizelog(page_size_log);
 
@@ -1046,21 +1123,16 @@ static int kvm_gmem_hugetlb_setup(struct inode *inode, loff_t size, u64 flags)
 	if (!spool)
 		goto err;
 
-	hgmem = kzalloc(sizeof(*hgmem), GFP_KERNEL);
-	if (!hgmem)
-		goto err_subpool;
-
 	inode->i_blkbits = huge_page_shift(h);
 
 	hgmem->h = h;
 	hgmem->spool = spool;
-	inode->i_mapping->i_private_data = hgmem;
 
+	private->hgmem = hgmem;
 	return 0;
 
-err_subpool:
-	kfree(spool);
 err:
+	kfree(hgmem);
 	return -ENOMEM;
 }
 
@@ -1068,6 +1140,7 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
						      loff_t size, u64 flags)
 {
 	const struct qstr qname = QSTR_INIT(name, strlen(name));
+	struct kvm_gmem_inode_private *private;
 	struct inode *inode;
 	int err;
 
@@ -1079,12 +1152,20 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 	if (err)
 		goto out;
 
+	err = -ENOMEM;
+	private = kzalloc(sizeof(*private), GFP_KERNEL);
+	if (!private)
+		goto out;
+
 	if (flags & KVM_GUEST_MEMFD_HUGETLB) {
-		err = kvm_gmem_hugetlb_setup(inode, size, flags);
+		err = kvm_gmem_hugetlb_setup(inode, private, size, flags);
 		if (err)
-			goto out;
+			goto free_private;
 	}
 
+	xa_init(&private->faultability);
+	inode->i_mapping->i_private_data = private;
+
 	inode->i_private = (void *)(unsigned long)flags;
 	inode->i_op = &kvm_gmem_iops;
 	inode->i_mapping->a_ops = &kvm_gmem_aops;
@@ -1097,6 +1178,8 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 
 	return inode;
 
+free_private:
+	kfree(private);
 out:
 	iput(inode);
 
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:43:58 +0000
Message-ID: <5a05eb947cf7aa21f00b94171ca818cc3d5bdfee.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 27/39] KVM: guest_memfd: Allow mmapping guest_memfd files
From: Ackerley Tng <ackerleytng@google.com>

guest_memfd files can always be mmap()ed to userspace, but faultability
is controlled by an attribute on the inode.

Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 virt/kvm/guest_memfd.c | 46 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b603518f7b62..fc2483e35876 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -781,7 +781,8 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
 	struct list_head *gmem_list = &inode->i_mapping->i_private_list;
 	pgoff_t start = offset >> PAGE_SHIFT;
-	pgoff_t end = (offset + len) >> PAGE_SHIFT;
+	pgoff_t nr = len >> PAGE_SHIFT;
+	pgoff_t end = start + nr;
 	struct kvm_gmem *gmem;
 
 	/*
@@ -790,6 +791,9 @@ static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 	 */
 	filemap_invalidate_lock(inode->i_mapping);
 
+	/* TODO: Check if even_cows should be 0 or 1 */
+	unmap_mapping_range(inode->i_mapping, start, len, 0);
+
 	list_for_each_entry(gmem, gmem_list, entry)
 		kvm_gmem_invalidate_begin(gmem, start, end);
 
@@ -946,6 +950,9 @@ static void kvm_gmem_hugetlb_teardown(struct inode *inode)
 {
 	struct kvm_gmem_hugetlb *hgmem;
 
+	/* TODO: Check if even_cows should be 0 or 1 */
+	unmap_mapping_range(inode->i_mapping, 0, LLONG_MAX, 0);
+
 	truncate_inode_pages_final_prepare(inode->i_mapping);
 	kvm_gmem_hugetlb_truncate_folios_range(inode, 0, LLONG_MAX);
 
@@ -1003,11 +1010,46 @@ static void kvm_gmem_init_mount(void)
 	kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
 	BUG_ON(IS_ERR(kvm_gmem_mnt));
 
-	/* For giggles. Userspace can never map this anyways. */
 	kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
 }
 
+static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
+{
+	struct inode *inode;
+	struct folio *folio;
+
+	inode = file_inode(vmf->vma->vm_file);
+	if (!kvm_gmem_is_faultable(inode, vmf->pgoff))
+		return VM_FAULT_SIGBUS;
+
+	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	file_accessed(file);
+	vm_flags_set(vma, VM_DONTDUMP);
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
 	.fallocate	= kvm_gmem_fallocate,
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:43:59 +0000
Subject: [RFC PATCH 28/39] KVM: guest_memfd: Use vm_type to determine default faultability
From: Ackerley Tng <ackerleytng@google.com>

Memory of a KVM_X86_SW_PROTECTED_VM defaults to faultable to align with
the default in kvm->mem_attr_array.

For this RFC, determine default faultability when associating a range
with a memslot.

Another option is to determine default faultability at guest_memfd
creation time. guest_memfd is created for a specific VM, hence we can
set default faultability based on the VM type.

In the future, if different struct kvms are bound to the same
guest_memfd inode, all the struct kvms must be of the same vm_type.

TODO: Perhaps faultability should be based on kvm->mem_attr_array?
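For illustration, the intended userspace-visible behavior could be
exercised as below (a hedged sketch, not part of this patch: error
handling is elided and the memslot-binding step, which is what triggers
the default-faultability setup, is only summarized):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>

	static void touch_default_faultable(int vm_fd)
	{
		struct kvm_create_guest_memfd args = {
			.size = 0x200000,
		};
		int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
		char *mem;

		/*
		 * ... bind gmem_fd to a memslot here; kvm_gmem_bind() then
		 * applies the default faultability for the bound range ...
		 */

		mem = mmap(NULL, args.size, PROT_READ | PROT_WRITE,
			   MAP_SHARED, gmem_fd, 0);

		/*
		 * Faults in a shared page for a KVM_X86_SW_PROTECTED_VM; for
		 * a VM type that defaults to unfaultable, this access would
		 * raise SIGBUS instead.
		 */
		mem[0] = 1;
	}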
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 virt/kvm/guest_memfd.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fc2483e35876..1d4dfe0660ad 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1256,6 +1256,23 @@ static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
 	return file;
 }
 
+static void kvm_gmem_set_default_faultability_by_vm_type(struct inode *inode,
+							 u8 vm_type,
+							 loff_t start, loff_t end)
+{
+	bool faultable;
+
+	switch (vm_type) {
+	case KVM_X86_SW_PROTECTED_VM:
+		faultable = true;
+		break;
+	default:
+		faultable = false;
+	}
+
+	WARN_ON(kvm_gmem_set_faultable(inode, start, end, faultable));
+}
+
 static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 {
 	struct kvm_gmem *gmem;
@@ -1378,6 +1395,11 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 	slot->gmem.pgoff = start;
 
 	xa_store_range(&gmem->bindings, start, end - 1, slot, GFP_KERNEL);
+
+	kvm_gmem_set_default_faultability_by_vm_type(file_inode(file),
+						     kvm->arch.vm_type,
+						     start, end);
+
 	filemap_invalidate_unlock(inode->i_mapping);
 
 	/*
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:00 +0000
Subject: [RFC PATCH 29/39] KVM: Handle conversions in the SET_MEMORY_ATTRIBUTES ioctl
From: Ackerley Tng <ackerleytng@google.com>

The key steps for a private to shared conversion are:

1. Unmap from guest page tables
2. Set pages associated with requested range in memslot to be faultable
3. Update kvm->mem_attr_array

The key steps for a shared to private conversion are:

1. Check and disallow set_memory_attributes if any page in the range is
   still mapped or pinned, by
   a. Updating guest_memfd's faultability to prevent future faulting
   b. Returning -EINVAL if any pages are still pinned
2. Update kvm->mem_attr_array

The userspace VMM must ensure shared pages are not in use, since any
faults racing with this call will get a SIGBUS.
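As an illustration from the userspace VMM's point of view (a sketch under
the semantics of this patch, not a definitive flow; error handling beyond
the return value is elided):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/*
	 * Convert [gpa, gpa + size) to private. Under this patch, the
	 * ioctl fails (-1 with errno set to EINVAL) if any page in the
	 * range is still mapped or pinned, and the range is restored to
	 * faultable.
	 */
	static int convert_to_private(int vm_fd, __u64 gpa, __u64 size)
	{
		struct kvm_memory_attributes attr = {
			.address = gpa,
			.size = size,
			.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
		};

		return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
	}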
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 include/linux/kvm_host.h |   1 +
 virt/kvm/guest_memfd.c   | 207 +++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      |  15 +++
 virt/kvm/kvm_mm.h        |   9 ++
 4 files changed, 232 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 79a6b1a63027..10993cd33e34 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2476,6 +2476,7 @@ typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
 
 long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages,
		       kvm_gmem_populate_cb post_populate, void *opaque);
+
 #endif
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 1d4dfe0660ad..110c4bbb004b 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1592,4 +1592,211 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 	return ret && !i ? ret : i;
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_populate);
+
+/**
+ * Returns true if pages in range [@start, @end) in inode @inode have no
+ * userspace mappings.
+ */
+static bool kvm_gmem_no_mappings_range(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+	pgoff_t index;
+	bool checked_indices_unmapped;
+
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	/* TODO: replace iteration with filemap_get_folios() for efficiency. */
+	checked_indices_unmapped = true;
+	for (index = start; checked_indices_unmapped && index < end;) {
+		struct folio *folio;
+
+		/* Don't use kvm_gmem_get_folio to avoid allocating */
+		folio = filemap_lock_folio(inode->i_mapping, index);
+		if (IS_ERR(folio)) {
+			++index;
+			continue;
+		}
+
+		if (folio_mapped(folio) || folio_maybe_dma_pinned(folio))
+			checked_indices_unmapped = false;
+		else
+			index = folio_next_index(folio);
+
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+	return checked_indices_unmapped;
+}
+
+/**
+ * Returns true if pages in range [@start, @end) in memslot @slot have no
+ * userspace mappings.
+ */
+static bool kvm_gmem_no_mappings_slot(struct kvm_memory_slot *slot,
+				      gfn_t start, gfn_t end)
+{
+	pgoff_t offset_start;
+	pgoff_t offset_end;
+	struct file *file;
+	bool ret;
+
+	offset_start = start - slot->base_gfn + slot->gmem.pgoff;
+	offset_end = end - slot->base_gfn + slot->gmem.pgoff;
+
+	file = kvm_gmem_get_file(slot);
+	if (!file)
+		return false;
+
+	ret = kvm_gmem_no_mappings_range(file_inode(file), offset_start, offset_end);
+
+	fput(file);
+
+	return ret;
+}
+
+/**
+ * Returns true if pages in range [@start, @end) have no host userspace mappings.
+ */
+static bool kvm_gmem_no_mappings(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	int i;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		struct kvm_memslot_iter iter;
+		struct kvm_memslots *slots;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			struct kvm_memory_slot *slot;
+			gfn_t gfn_start;
+			gfn_t gfn_end;
+
+			slot = iter.slot;
+			gfn_start = max(start, slot->base_gfn);
+			gfn_end = min(end, slot->base_gfn + slot->npages);
+
+			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD &&
+			    !kvm_gmem_no_mappings_slot(iter.slot, gfn_start, gfn_end))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+/**
+ * Set faultability of given range of gfns [@start, @end) in memslot @slot to
+ * @faultable.
+ */
+static void kvm_gmem_set_faultable_slot(struct kvm_memory_slot *slot, gfn_t start,
+					gfn_t end, bool faultable)
+{
+	pgoff_t start_offset;
+	pgoff_t end_offset;
+	struct file *file;
+
+	file = kvm_gmem_get_file(slot);
+	if (!file)
+		return;
+
+	start_offset = start - slot->base_gfn + slot->gmem.pgoff;
+	end_offset = end - slot->base_gfn + slot->gmem.pgoff;
+
+	WARN_ON(kvm_gmem_set_faultable(file_inode(file), start_offset, end_offset,
+				       faultable));
+
+	fput(file);
+}
+
+/**
+ * Set faultability of given range of gfns [@start, @end) in all memslots of
+ * @kvm to @faultable.
+ */
+static void kvm_gmem_set_faultable_vm(struct kvm *kvm, gfn_t start, gfn_t end,
+				      bool faultable)
+{
+	int i;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		struct kvm_memslot_iter iter;
+		struct kvm_memslots *slots;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			struct kvm_memory_slot *slot;
+			gfn_t gfn_start;
+			gfn_t gfn_end;
+
+			slot = iter.slot;
+			gfn_start = max(start, slot->base_gfn);
+			gfn_end = min(end, slot->base_gfn + slot->npages);
+
+			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD) {
+				kvm_gmem_set_faultable_slot(slot, gfn_start,
							    gfn_end, faultable);
+			}
+		}
+	}
+}
+
+/**
+ * Returns 0 if guest_memfd permits setting range [@start, @end) to PRIVATE.
+ *
+ * If memory is faulted in to host userspace and a request was made to set the
+ * memory to PRIVATE, the faulted in pages must not be pinned for the request to
+ * be permitted.
+ */
+static int kvm_gmem_should_set_attributes_private(struct kvm *kvm, gfn_t start,
+						  gfn_t end)
+{
+	kvm_gmem_set_faultable_vm(kvm, start, end, false);
+
+	if (kvm_gmem_no_mappings(kvm, start, end))
+		return 0;
+
+	kvm_gmem_set_faultable_vm(kvm, start, end, true);
+	return -EINVAL;
+}
+
+/**
+ * Returns 0 if guest_memfd permits setting range [@start, @end) to SHARED.
+ *
+ * Because this allows pages to be faulted in to userspace, this must only be
+ * called after the pages have been invalidated from guest page tables.
+ */
+static int kvm_gmem_should_set_attributes_shared(struct kvm *kvm, gfn_t start,
+						 gfn_t end)
+{
+	/* Always okay to set shared, hence set range faultable here. */
+	kvm_gmem_set_faultable_vm(kvm, start, end, true);
+
+	return 0;
+}
+
+/**
+ * Returns 0 if guest_memfd permits setting attributes @attrs for range [@start,
+ * @end) or negative error otherwise.
+ *
+ * If memory is faulted in to host userspace and a request was made to set the
+ * memory to PRIVATE, the faulted in pages must not be pinned for the request to
+ * be permitted.
+ *
+ * Because this may allow pages to be faulted in to userspace when requested to
+ * set attributes to shared, this must only be called after the pages have been
+ * invalidated from guest page tables.
+ */
+int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
+				   unsigned long attrs)
+{
+	if (attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+		return kvm_gmem_should_set_attributes_private(kvm, start, end);
+	else
+		return kvm_gmem_should_set_attributes_shared(kvm, start, end);
+}
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 92901656a0d4..1a7bbcc31b7e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2524,6 +2524,13 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 		.on_lock = kvm_mmu_invalidate_end,
 		.may_block = true,
 	};
+	struct kvm_mmu_notifier_range error_set_range = {
+		.start = start,
+		.end = end,
+		.handler = (void *)kvm_null_fn,
+		.on_lock = kvm_mmu_invalidate_end,
+		.may_block = true,
+	};
 	unsigned long i;
 	void *entry;
 	int r = 0;
@@ -2548,6 +2555,10 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 
 	kvm_handle_gfn_range(kvm, &pre_set_range);
 
+	r = kvm_gmem_should_set_attributes(kvm, start, end, attributes);
+	if (r)
+		goto err;
+
 	for (i = start; i < end; i++) {
 		r = xa_err(xa_store(&kvm->mem_attr_array, i, entry,
				    GFP_KERNEL_ACCOUNT));
@@ -2560,6 +2571,10 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 	mutex_unlock(&kvm->slots_lock);
 
 	return r;
+
+err:
+	kvm_handle_gfn_range(kvm, &error_set_range);
+	goto out_unlock;
 }
 
 static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm,
					    struct kvm_memory_attributes *attrs)
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..d8ff2b380d0e 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -41,6 +41,8 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
		  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
+int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
+				   unsigned long attrs);
 #else
 static inline void kvm_gmem_init(struct module *module)
 {
@@ -59,6 +61,13 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 {
 	WARN_ON_ONCE(1);
 }
+
+static inline int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start,
						 gfn_t end, unsigned long attrs)
+{
+	return 0;
+}
+
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
 #endif /* __KVM_MM_H__ */
-- 
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:01 +0000
Message-ID: <24cf7a9b1ee499c4ca4da76e9945429072014d1e.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 30/39] KVM: guest_memfd: Handle folio preparation for guest_memfd mmap
From: Ackerley Tng <ackerleytng@google.com>

Since guest_memfd now supports mmap(), folios have to be prepared before
they are faulted into userspace.

When memory attributes are switched between shared and private, the
up-to-date flags will be cleared.

Use the folio's up-to-date flag to indicate that the folio is ready for
guest usage; the same flag marks readiness for either shared or private
use.

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 virt/kvm/guest_memfd.c | 131 ++++++++++++++++++++++++++++++++++++++++-
 virt/kvm/kvm_main.c    |   2 +
 virt/kvm/kvm_mm.h      |   7 +++
 3 files changed, 139 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 110c4bbb004b..fb292e542381 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -129,13 +129,29 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo
 }
 
 /**
- * Use the uptodate flag to indicate that the folio is prepared for KVM's usage.
+ * Use folio's up-to-date flag to indicate that this folio is prepared for usage
+ * by the guest.
+ *
+ * This flag can be used whether the folio is prepared for PRIVATE or SHARED
+ * usage.
  */
 static inline void kvm_gmem_mark_prepared(struct folio *folio)
 {
 	folio_mark_uptodate(folio);
 }
 
+/**
+ * Use folio's up-to-date flag to indicate that this folio is not yet prepared for
+ * usage by the guest.
+ *
+ * This flag can be used whether the folio is prepared for PRIVATE or SHARED
+ * usage.
+ */
+static inline void kvm_gmem_clear_prepared(struct folio *folio)
+{
+	folio_clear_uptodate(folio);
+}
+
 /*
  * Process @folio, which contains @gfn, so that the guest can use it.
  * The folio must be locked and the gfn must be contained in @slot.
@@ -148,6 +164,12 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
 	pgoff_t index;
 	int r;
 
+	/*
+	 * Defensively zero folio to avoid leaking kernel memory in
+	 * uninitialized pages. This is important since pages can now be mapped
+	 * into userspace, where hardware (e.g. TDX) won't be clearing those
+	 * pages.
+	 */
 	if (folio_test_hugetlb(folio)) {
 		folio_zero_user(folio, folio->index << PAGE_SHIFT);
 	} else {
@@ -1017,6 +1039,7 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
 {
 	struct inode *inode;
 	struct folio *folio;
+	bool is_prepared;
 
 	inode = file_inode(vmf->vma->vm_file);
 	if (!kvm_gmem_is_faultable(inode, vmf->pgoff))
@@ -1026,6 +1049,31 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
 	if (!folio)
 		return VM_FAULT_SIGBUS;
 
+	is_prepared = folio_test_uptodate(folio);
+	if (!is_prepared) {
+		unsigned long nr_pages;
+		unsigned long i;
+
+		if (folio_test_hugetlb(folio)) {
+			folio_zero_user(folio, folio->index << PAGE_SHIFT);
+		} else {
+			/*
+			 * Defensively zero folio to avoid leaking kernel memory in
+			 * uninitialized pages. This is important since pages can now be
+			 * mapped into userspace, where hardware (e.g. TDX) won't be
+			 * clearing those pages.
+			 *
+			 * Will probably need a version of kvm_gmem_prepare_folio() to
+			 * prepare the page for SHARED use.
+			 */
+			nr_pages = folio_nr_pages(folio);
+			for (i = 0; i < nr_pages; i++)
+				clear_highpage(folio_page(folio, i));
+		}
+
+		kvm_gmem_mark_prepared(folio);
+	}
+
 	vmf->page = folio_file_page(folio, vmf->pgoff);
 	return VM_FAULT_LOCKED;
 }
@@ -1593,6 +1641,87 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_populate);
 
+static void kvm_gmem_clear_prepared_range(struct inode *inode, pgoff_t start,
+					  pgoff_t end)
+{
+	pgoff_t index;
+
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	/* TODO: replace iteration with filemap_get_folios() for efficiency. */
+	for (index = start; index < end;) {
+		struct folio *folio;
+
+		/* Don't use kvm_gmem_get_folio to avoid allocating */
+		folio = filemap_lock_folio(inode->i_mapping, index);
+		if (IS_ERR(folio)) {
+			++index;
+			continue;
+		}
+
+		kvm_gmem_clear_prepared(folio);
+
+		index = folio_next_index(folio);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+}
+
+/**
+ * Clear the prepared flag for all folios in gfn range [@start, @end) in memslot
+ * @slot.
+ */
+static void kvm_gmem_clear_prepared_slot(struct kvm_memory_slot *slot, gfn_t start,
+					 gfn_t end)
+{
+	pgoff_t start_offset;
+	pgoff_t end_offset;
+	struct file *file;
+
+	file = kvm_gmem_get_file(slot);
+	if (!file)
+		return;
+
+	start_offset = start - slot->base_gfn + slot->gmem.pgoff;
+	end_offset = end - slot->base_gfn + slot->gmem.pgoff;
+
+	kvm_gmem_clear_prepared_range(file_inode(file), start_offset, end_offset);
+
+	fput(file);
+}
+
+/**
+ * Clear the prepared flag for all folios for any slot in gfn range
+ * [@start, @end) in @kvm.
+ */
+void kvm_gmem_clear_prepared_vm(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	int i;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		struct kvm_memslot_iter iter;
+		struct kvm_memslots *slots;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			struct kvm_memory_slot *slot;
+			gfn_t gfn_start;
+			gfn_t gfn_end;
+
+			slot = iter.slot;
+			gfn_start = max(start, slot->base_gfn);
+			gfn_end = min(end, slot->base_gfn + slot->npages);
+
+			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD)
+				kvm_gmem_clear_prepared_slot(iter.slot, gfn_start, gfn_end);
+		}
+	}
+}
+
 /**
  * Returns true if pages in range [@start, @end) in inode @inode have no
  * userspace mappings.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1a7bbcc31b7e..255d27df7f5c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2565,6 +2565,8 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm,= gfn_t start, gfn_t end, KVM_BUG_ON(r, kvm); } =20 + kvm_gmem_clear_prepared_vm(kvm, start, end); + kvm_handle_gfn_range(kvm, &post_set_range); =20 out_unlock: diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index d8ff2b380d0e..25fd0d9f66cc 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -43,6 +43,7 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot= *slot, void kvm_gmem_unbind(struct kvm_memory_slot *slot); int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start, gfn_t end, unsigned long attrs); +void kvm_gmem_clear_prepared_vm(struct kvm *kvm, gfn_t start, gfn_t end); #else static inline void kvm_gmem_init(struct module *module) { @@ -68,6 +69,12 @@ static inline int kvm_gmem_should_set_attributes(struct = kvm *kvm, gfn_t start, return 0; } =20 +static inline void kvm_gmem_clear_prepared_slots(struct kvm *kvm, + gfn_t start, gfn_t end) +{ + WARN_ON_ONCE(1); +} + #endif /* CONFIG_KVM_PRIVATE_MEM */ =20 #endif /* __KVM_MM_H__ */ --=20 2.46.0.598.g6f2099f65c-goog From nobody Sat Nov 30 07:37:40 2024 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 96F6B1C2326 for ; Tue, 10 Sep 2024 23:45:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011929; cv=none; b=h6ypKAo1iujV4uS+mtd/VgPYCTRQXfaJKPyxCNdiy9HGBYXfQ4IeQWZyQ5sM6RS5kL98iuiiSTghTyGQfhh8bN5j2jcX8/EjVCWMZKoDBeUcWjwq8DREZERgRFWcIT9f9qJWgv52CPqsIrpvX4jABeN2iEv5azKr0Sm/NCeKWBk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1726011929; c=relaxed/simple; bh=T0w2KZlhe2rovrIvRN5dcVMPNovGh8zN8pefjySPrWM=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=d9Qs7uTmy2fF8sU7NfsKRoeZhOhvYWNn39kb3BMNx4tI2IP20sH05/yfoWbex27FxMVjXAjOwBohuHtFAeo/qNm9Ym7WDva3KMOtpIKny58qfjnOHeFfptJj8lMkEjyqnfyCAAZo8XzR3A6Nrto4upzolgL5QoYjBTu1t8IFq/o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=pLAsUS0G; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--ackerleytng.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="pLAsUS0G" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-6d4426ad833so39295217b3.2 for ; Tue, 10 Sep 2024 16:45:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1726011927; x=1726616727; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=uwf1PxblOjz5NXZUaTh7zfbmgn6lBhrxKJq/6ZLohcs=; b=pLAsUS0GYbfquLsP8tIzmZ9+BPahMdB/i5COrQPy6oX8uknuzzVCHUkB+ygy+RKm5c 
Date: Tue, 10 Sep 2024 23:44:02 +0000
Message-ID: <6faf6d63a98531539b05ea36728e51ff51bb3cde.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 31/39] KVM: selftests: Allow vm_set_memory_attributes to be used without asserting return value of 0
From: Ackerley Tng

No functional change intended.
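For instance, a caller that expects the ioctl to fail can now assert on
the return value and errno itself (a sketch mirroring how a later patch
in this series uses the helper; vm, gpa and page_size stand for
whatever the test has already set up):

	TEST_ASSERT_EQ(__vm_set_memory_attributes(vm, gpa, page_size,
						  KVM_MEMORY_ATTRIBUTE_PRIVATE),
		       -1);
	TEST_ASSERT_EQ(errno, EINVAL);

vm_set_memory_attributes() keeps its assert-on-success behavior for
callers that expect the ioctl to succeed.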
Signed-off-by: Ackerley Tng
---
 tools/testing/selftests/kvm/include/kvm_util.h | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 63c2aaae51f3..d336cd0c8f19 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -374,8 +374,8 @@ static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
 	vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
 }
 
-static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
-					    uint64_t size, uint64_t attributes)
+static inline int __vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
+					     uint64_t size, uint64_t attributes)
 {
 	struct kvm_memory_attributes attr = {
 		.attributes = attributes,
@@ -391,7 +391,15 @@ static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
 	TEST_ASSERT(!attributes || attributes == KVM_MEMORY_ATTRIBUTE_PRIVATE,
 		    "Update me to support multiple attributes!");
 
-	vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
+	return __vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
+}
+
+static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
+					    uint64_t size, uint64_t attributes)
+{
+	int ret = __vm_set_memory_attributes(vm, gpa, size, attributes);
+
+	__TEST_ASSERT_VM_VCPU_IOCTL(!ret, "KVM_SET_MEMORY_ATTRIBUTES", ret, vm);
 }
 
--
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:03 +0000
Subject: [RFC PATCH 32/39] KVM: selftests: Test using guest_memfd memory from userspace
From: Ackerley Tng

Test using guest_memfd from userspace, since guest_memfd now has mmap()
support.

Tests:

1. mmap() should now always return a valid address.
2. Test that madvise() doesn't give any issues when pages are not
   faulted in.
3. Test that pages should not be faultable before association with a
   memslot, and that faults result in SIGBUS.
4. Test that pages can be faulted if marked faultable, and the flow of
   setting a memory range as private, which is:
   a. madvise(MADV_DONTNEED) to request the kernel to unmap the pages.
   b. Set memory attributes of the VM to private.
   Also test that if pages are still mapped, setting memory attributes
   will fail.
5. Test that madvise(MADV_REMOVE) can be used to remove pages from
   guest_memfd, forcing zeroing of those pages before the next time the
   pages are faulted in.

Signed-off-by: Ackerley Tng
---
 .../testing/selftests/kvm/guest_memfd_test.c | 195 +++++++++++++++++-
 1 file changed, 189 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 3618ce06663e..b6f3c3e6d0dd 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -6,6 +6,7 @@
  */
 #include
 #include
+#include <sys/wait.h>
 #include
 #include
 #include
@@ -35,12 +36,192 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_should_map_pages_into_userspace(int fd, size_t page_size)
 {
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
-	TEST_ASSERT_EQ(mem, MAP_FAILED);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void test_madvise_no_error_when_pages_not_faulted(int fd, size_t page_size)
+{
+	char *mem;
+
+	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	TEST_ASSERT_EQ(madvise(mem, page_size, MADV_DONTNEED), 0);
+
+	TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void assert_not_faultable(char *address)
+{
+	pid_t child_pid;
+
+	child_pid = fork();
+	TEST_ASSERT(child_pid != -1, "fork failed");
+
+	if (child_pid == 0) {
+		*address = 'A';
+	} else {
+		int status;
+		waitpid(child_pid, &status, 0);
+
+		TEST_ASSERT(WIFSIGNALED(status),
+			    "Child should have exited with a signal");
+		TEST_ASSERT_EQ(WTERMSIG(status), SIGBUS);
+	}
+}
+
+/*
+ * Pages should not be faultable before association with memslot because pages
+ * (in a KVM_X86_SW_PROTECTED_VM) only default to faultable at memslot
+ * association time.
+ */
+static void test_pages_not_faultable_if_not_associated_with_memslot(int fd,
+								     size_t page_size)
+{
+	char *mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			 MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	assert_not_faultable(mem);
+
+	TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void test_pages_faultable_if_marked_faultable(struct kvm_vm *vm, int fd,
+						     size_t page_size)
+{
+	char *mem;
+	uint64_t gpa = 0;
+	uint64_t guest_memfd_offset = 0;
+
+	/*
+	 * This test uses KVM_X86_SW_PROTECTED_VM, which is required to set
+	 * arch.has_private_mem, to add a memslot with guest_memfd to a VM.
+	 */
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) {
+		printf("Faultability test skipped since KVM_X86_SW_PROTECTED_VM is not supported.");
+		return;
+	}
+
+	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		   guest_memfd_offset);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	/*
+	 * Setting up this memslot with a KVM_X86_SW_PROTECTED_VM marks all
+	 * offsets in the file as shared, allowing pages to be faulted in.
+	 */
+	vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, page_size,
+				   mem, fd, guest_memfd_offset);
+
+	*mem = 'A';
+	TEST_ASSERT_EQ(*mem, 'A');
+
+	/* Should fail since the page is still faulted in. */
+	TEST_ASSERT_EQ(__vm_set_memory_attributes(vm, gpa, page_size,
+						  KVM_MEMORY_ATTRIBUTE_PRIVATE),
+		       -1);
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	/*
+	 * Use madvise() to remove the pages from userspace page tables, then
+	 * test that the page is still faultable, and that page contents remain
+	 * the same.
+	 */
+	madvise(mem, page_size, MADV_DONTNEED);
+	TEST_ASSERT_EQ(*mem, 'A');
+
+	/* Tell kernel to unmap the page from userspace. */
+	madvise(mem, page_size, MADV_DONTNEED);
+
+	/* Now kernel can set this page to private. */
+	vm_mem_set_private(vm, gpa, page_size);
+	assert_not_faultable(mem);
+
+	/*
+	 * Should be able to fault again after setting this back to shared, and
+	 * memory contents should be cleared since pages must be re-prepared for
+	 * SHARED use.
+	 */
+	vm_mem_set_shared(vm, gpa, page_size);
+	TEST_ASSERT_EQ(*mem, 0);
+
+	/* Cleanup */
+	vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, 0, mem, fd,
+				   guest_memfd_offset);
+
+	TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+}
+
+static void test_madvise_remove_releases_pages(struct kvm_vm *vm, int fd,
+					       size_t page_size)
+{
+	char *mem;
+	uint64_t gpa = 0;
+	uint64_t guest_memfd_offset = 0;
+
+	/*
+	 * This test uses KVM_X86_SW_PROTECTED_VM, which is required to set
+	 * arch.has_private_mem, to add a memslot with guest_memfd to a VM.
+	 */
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) {
+		printf("madvise test skipped since KVM_X86_SW_PROTECTED_VM is not supported.");
+		return;
+	}
+
+	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	/*
+	 * Setting up this memslot with a KVM_X86_SW_PROTECTED_VM marks all
+	 * offsets in the file as shared, allowing pages to be faulted in.
+	 */
+	vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, page_size,
+				   mem, fd, guest_memfd_offset);
+
+	*mem = 'A';
+	TEST_ASSERT_EQ(*mem, 'A');
+
+	/*
+	 * MADV_DONTNEED causes pages to be removed from userspace page tables
+	 * but should not release pages, hence page contents are kept.
+	 */
+	TEST_ASSERT_EQ(madvise(mem, page_size, MADV_DONTNEED), 0);
+	TEST_ASSERT_EQ(*mem, 'A');
+
+	/*
+	 * MADV_REMOVE causes pages to be released. Pages are then zeroed when
+	 * prepared for shared use, hence 0 is expected on next fault.
+	 */
+	TEST_ASSERT_EQ(madvise(mem, page_size, MADV_REMOVE), 0);
+	TEST_ASSERT_EQ(*mem, 0);
+
+	TEST_ASSERT_EQ(munmap(mem, page_size), 0);
+
+	/* Cleanup */
+	vm_set_user_memory_region2(vm, 0, KVM_MEM_GUEST_MEMFD, gpa, 0, mem, fd,
+				   guest_memfd_offset);
+}
+
+static void test_using_memory_directly_from_userspace(struct kvm_vm *vm,
+						      int fd, size_t page_size)
+{
+	test_mmap_should_map_pages_into_userspace(fd, page_size);
+
+	test_madvise_no_error_when_pages_not_faulted(fd, page_size);
+
+	test_pages_not_faultable_if_not_associated_with_memslot(fd, page_size);
+
+	test_pages_faultable_if_marked_faultable(vm, fd, page_size);
+
+	test_madvise_remove_releases_pages(vm, fd, page_size);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -180,18 +361,17 @@ static void test_guest_memfd(struct kvm_vm *vm, uint32_t flags, size_t page_size
 	size_t total_size;
 	int fd;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
-
 	total_size = page_size * 4;
 
 	fd = vm_create_guest_memfd(vm, total_size, flags);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
+	test_using_memory_directly_from_userspace(vm, fd, page_size);
+
 	close(fd);
 }
 
@@ -201,7 +381,10 @@ int main(int argc, char *argv[])
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
 
-	vm = vm_create_barebones();
+	if ((kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM)))
+		vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
+	else
+		vm = vm_create_barebones();
 
 	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
--
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:04 +0000
Message-ID: <19a16094c3e99d83c53931ff5f3147079d03c810.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 33/39] KVM: selftests: Test guest_memfd memory sharing between guest and host
From: Ackerley Tng
Add a minimal test for guest_memfd checking that when memory is marked
shared in a VM, the host can read and write to it via an mmap()ed
address, and the guest can also read and write to it.

Signed-off-by: Ackerley Tng
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/guest_memfd_sharing_test.c  | 160 ++++++++++++++++++
 2 files changed, 161 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/guest_memfd_sharing_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b3b7e83f39fc..3c1f35456bfc 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -135,6 +135,7 @@ TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
 TEST_GEN_PROGS_x86_64 += guest_memfd_test
 TEST_GEN_PROGS_x86_64 += guest_memfd_hugetlb_reporting_test
+TEST_GEN_PROGS_x86_64 += guest_memfd_sharing_test
 TEST_GEN_PROGS_x86_64 += guest_print_test
 TEST_GEN_PROGS_x86_64 += hardware_disable_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/guest_memfd_sharing_test.c b/tools/testing/selftests/kvm/guest_memfd_sharing_test.c
new file mode 100644
index 000000000000..fef5a73e5053
--- /dev/null
+++ b/tools/testing/selftests/kvm/guest_memfd_sharing_test.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Minimal test for guest_memfd to test that when memory is marked shared in a
+ * VM, the host can read and write to it via an mmap()ed address, and the guest
+ * can also read and write to it.
+ *
+ * Copyright (c) 2024, Google LLC.
+ */
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "ucall_common.h"
+
+#define GUEST_MEMFD_SHARING_TEST_SLOT 10
+#define GUEST_MEMFD_SHARING_TEST_GPA 0x50000000ULL
+#define GUEST_MEMFD_SHARING_TEST_GVA 0x90000000ULL
+#define GUEST_MEMFD_SHARING_TEST_OFFSET 0
+#define GUEST_MEMFD_SHARING_TEST_GUEST_TO_HOST_VALUE 0x11
+#define GUEST_MEMFD_SHARING_TEST_HOST_TO_GUEST_VALUE 0x22
+
+static void guest_code(int page_size)
+{
+	char *mem;
+	int i;
+
+	mem = (char *)GUEST_MEMFD_SHARING_TEST_GVA;
+
+	for (i = 0; i < page_size; ++i) {
+		GUEST_ASSERT_EQ(mem[i], GUEST_MEMFD_SHARING_TEST_HOST_TO_GUEST_VALUE);
+	}
+
+	memset(mem, GUEST_MEMFD_SHARING_TEST_GUEST_TO_HOST_VALUE, page_size);
+
+	GUEST_DONE();
+}
+
+int run_test(struct kvm_vcpu *vcpu, void *hva, int page_size)
+{
+	struct ucall uc;
+	uint64_t uc_cmd;
+
+	memset(hva, GUEST_MEMFD_SHARING_TEST_HOST_TO_GUEST_VALUE, page_size);
+	vcpu_args_set(vcpu, 1, page_size);
+
+	/* Reset vCPU to guest_code every time run_test is called.
	 */
+	vcpu_arch_set_entry_point(vcpu, guest_code);
+
+	vcpu_run(vcpu);
+	uc_cmd = get_ucall(vcpu, &uc);
+
+	if (uc_cmd == UCALL_ABORT) {
+		REPORT_GUEST_ASSERT(uc);
+		return 1;
+	} else if (uc_cmd == UCALL_DONE) {
+		char *mem;
+		int i;
+
+		mem = hva;
+		for (i = 0; i < page_size; ++i)
+			TEST_ASSERT_EQ(mem[i], GUEST_MEMFD_SHARING_TEST_GUEST_TO_HOST_VALUE);
+
+		return 0;
+	} else {
+		TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
+		return 1;
+	}
+}
+
+void *add_memslot(struct kvm_vm *vm, int guest_memfd, size_t page_size,
+		  bool back_shared_memory_with_guest_memfd)
+{
+	void *mem;
+
+	if (back_shared_memory_with_guest_memfd) {
+		mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+			   guest_memfd, GUEST_MEMFD_SHARING_TEST_OFFSET);
+	} else {
+		mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	}
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	/*
+	 * Setting up this memslot with a KVM_X86_SW_PROTECTED_VM marks all
+	 * offsets in the file as shared.
+	 */
+	vm_set_user_memory_region2(vm, GUEST_MEMFD_SHARING_TEST_SLOT,
+				   KVM_MEM_GUEST_MEMFD,
+				   GUEST_MEMFD_SHARING_TEST_GPA, page_size, mem,
+				   guest_memfd, GUEST_MEMFD_SHARING_TEST_OFFSET);
+
+	return mem;
+}
+
+void test_sharing(bool back_shared_memory_with_guest_memfd)
+{
+	const struct vm_shape shape = {
+		.mode = VM_MODE_DEFAULT,
+		.type = KVM_X86_SW_PROTECTED_VM,
+	};
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	size_t page_size;
+	int guest_memfd;
+	void *mem;
+
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
+
+	vm = vm_create_shape_with_one_vcpu(shape, &vcpu, &guest_code);
+
+	page_size = getpagesize();
+
+	guest_memfd = vm_create_guest_memfd(vm, page_size, 0);
+
+	mem = add_memslot(vm, guest_memfd, page_size, back_shared_memory_with_guest_memfd);
+
+	virt_map(vm, GUEST_MEMFD_SHARING_TEST_GVA, GUEST_MEMFD_SHARING_TEST_GPA, 1);
+
+	run_test(vcpu, mem, page_size);
+
+	/* Toggle private flag of memory attributes and run the test again. */
+	if (back_shared_memory_with_guest_memfd) {
+		/*
+		 * Use MADV_REMOVE to release the backing guest_memfd memory
+		 * back to the system before it is used again. Test that this is
+		 * only necessary when guest_memfd is used to back shared
+		 * memory.
+		 */
+		madvise(mem, page_size, MADV_REMOVE);
+	}
+	vm_mem_set_private(vm, GUEST_MEMFD_SHARING_TEST_GPA, page_size);
+	vm_mem_set_shared(vm, GUEST_MEMFD_SHARING_TEST_GPA, page_size);
+
+	run_test(vcpu, mem, page_size);
+
+	kvm_vm_free(vm);
+	munmap(mem, page_size);
+	close(guest_memfd);
+}
+
+int main(int argc, char *argv[])
+{
+	/*
+	 * Confidence check that when guest_memfd is associated with a memslot
+	 * but only anonymous memory is used to back shared memory, sharing
+	 * memory between guest and host works as expected.
+	 */
+	test_sharing(false);
+
+	/*
+	 * Memory sharing should work as expected when shared memory is backed
+	 * with guest_memfd.
+	 */
+	test_sharing(true);
+
+	return 0;
+}
--
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:05 +0000
Message-ID: <0ea30ee1128f7e6d033783034b6bc48dfbabb5db.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 34/39] KVM: selftests: Add notes in private_mem_kvm_exits_test for mmap-able guest_memfd
From: Ackerley Tng

Note in comments why madvise() is not needed before setting memory to
private.

Signed-off-by: Ackerley Tng
---
 .../selftests/kvm/x86_64/private_mem_kvm_exits_test.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c
index 13e72fcec8dd..f8bcfc897f6a 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c
@@ -62,7 +62,11 @@ static void test_private_access_memslot_deleted(void)
 
 	virt_map(vm, EXITS_TEST_GVA, EXITS_TEST_GPA, EXITS_TEST_NPAGES);
 
-	/* Request to access page privately */
+	/*
+	 * Request to access page privately. madvise(MADV_DONTNEED) not required
+	 * since memory was never mmap()-ed from guest_memfd. Anonymous memory
+	 * was used instead for this memslot's userspace_addr.
+	 */
 	vm_mem_set_private(vm, EXITS_TEST_GPA, EXITS_TEST_SIZE);
 
 	pthread_create(&vm_thread, NULL,
@@ -98,7 +102,10 @@ static void test_private_access_memslot_not_private(void)
 
 	virt_map(vm, EXITS_TEST_GVA, EXITS_TEST_GPA, EXITS_TEST_NPAGES);
 
-	/* Request to access page privately */
+	/*
+	 * Request to access page privately. madvise(MADV_DONTNEED) not required
+	 * since the affected memslot doesn't use guest_memfd.
+	 */
 	vm_mem_set_private(vm, EXITS_TEST_GPA, EXITS_TEST_SIZE);
 
 	exit_reason = run_vcpu_get_exit_reason(vcpu);
--
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:06 +0000
Message-ID: <09892ae14d06596aee8b766b5908c8a7fdda85b4.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 35/39] KVM: selftests: Test that pinned pages block KVM from setting memory attributes to PRIVATE
From: Ackerley Tng

CONFIG_GUP_TEST provides userspace with an ioctl to invoke
pin_user_pages(). This test uses that ioctl to pin pages, checking that
memory attributes cannot be set to private while shared pages are
pinned.

Signed-off-by: Ackerley Tng
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/guest_memfd_pin_test.c      | 104 ++++++++++++++++++
 2 files changed, 105 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/guest_memfd_pin_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 3c1f35456bfc..c5a1c8c7125a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -136,6 +136,7 @@ TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
 TEST_GEN_PROGS_x86_64 += guest_memfd_test
 TEST_GEN_PROGS_x86_64 += guest_memfd_hugetlb_reporting_test
 TEST_GEN_PROGS_x86_64 += guest_memfd_sharing_test
+TEST_GEN_PROGS_x86_64 += guest_memfd_pin_test
 TEST_GEN_PROGS_x86_64 += guest_print_test
 TEST_GEN_PROGS_x86_64 += hardware_disable_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/guest_memfd_pin_test.c b/tools/testing/selftests/kvm/guest_memfd_pin_test.c
new file mode 100644
index 000000000000..b45fb8024970
--- /dev/null
+++ b/tools/testing/selftests/kvm/guest_memfd_pin_test.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test that pinned pages block KVM from setting memory attributes to PRIVATE.
+ *
+ * Copyright (c) 2024, Google LLC.
+ */
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "../../../../mm/gup_test.h"
+
+#define GUEST_MEMFD_PIN_TEST_SLOT 10
+#define GUEST_MEMFD_PIN_TEST_GPA 0x50000000ULL
+#define GUEST_MEMFD_PIN_TEST_OFFSET 0
+
+static int gup_test_fd;
+
+void pin_pages(void *vaddr, uint64_t size)
+{
+	const struct pin_longterm_test args = {
+		.addr = (uint64_t)vaddr,
+		.size = size,
+		.flags = PIN_LONGTERM_TEST_FLAG_USE_WRITE,
+	};
+
+	TEST_ASSERT_EQ(ioctl(gup_test_fd, PIN_LONGTERM_TEST_START, &args), 0);
+}
+
+void unpin_pages(void)
+{
+	TEST_ASSERT_EQ(ioctl(gup_test_fd, PIN_LONGTERM_TEST_STOP), 0);
+}
+
+void run_test(void)
+{
+	struct kvm_vm *vm;
+	size_t page_size;
+	void *mem;
+	int fd;
+
+	vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
+
+	page_size = getpagesize();
+	fd = vm_create_guest_memfd(vm, page_size, 0);
+
+	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		   GUEST_MEMFD_PIN_TEST_OFFSET);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap should return valid address");
+
+	/*
+	 * Setting up this memslot with a KVM_X86_SW_PROTECTED_VM marks all
+	 * offsets in the file as shared.
+	 */
+	vm_set_user_memory_region2(vm, GUEST_MEMFD_PIN_TEST_SLOT,
+				   KVM_MEM_GUEST_MEMFD,
+				   GUEST_MEMFD_PIN_TEST_GPA, page_size, mem, fd,
+				   GUEST_MEMFD_PIN_TEST_OFFSET);
+
+	/* Before pinning pages, toggling memory attributes should be fine. */
+	vm_mem_set_private(vm, GUEST_MEMFD_PIN_TEST_GPA, page_size);
+	vm_mem_set_shared(vm, GUEST_MEMFD_PIN_TEST_GPA, page_size);
+
+	pin_pages(mem, page_size);
+
+	/*
+	 * Pinning also faults pages in, so remove these pages from userspace
+	 * page tables to properly test that pinning blocks setting memory
+	 * attributes to private.
+	 */
+	TEST_ASSERT_EQ(madvise(mem, page_size, MADV_DONTNEED), 0);
+
+	/* Should fail since the page is still pinned. */
+	TEST_ASSERT_EQ(__vm_set_memory_attributes(vm, GUEST_MEMFD_PIN_TEST_GPA,
+						  page_size,
+						  KVM_MEMORY_ATTRIBUTE_PRIVATE),
+		       -1);
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	unpin_pages();
+
+	/* With the pages unpinned, kvm can set this page to private. */
+	vm_mem_set_private(vm, GUEST_MEMFD_PIN_TEST_GPA, page_size);
+
+	kvm_vm_free(vm);
+	close(fd);
+}
+
+int main(int argc, char *argv[])
+{
+	gup_test_fd = open("/sys/kernel/debug/gup_test", O_RDWR);
+	/*
+	 * This test depends on CONFIG_GUP_TEST to provide a kernel module that
+	 * exposes pin_user_pages() to userspace.
+	 */
+	TEST_REQUIRE(gup_test_fd != -1);
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
+
+	run_test();
+
+	return 0;
+}
--
2.46.0.598.g6f2099f65c-goog

From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:07 +0000
Message-ID: <2daf579fa5d2ba223fa3a907c1048d3ea4458a57.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 36/39] KVM: selftests: Refactor vm_mem_add to be more flexible
From: Ackerley Tng

enum vm_mem_backing_src_type encodes too many different possibilities
on different axes: (1) whether to mmap() from an fd, (2) granularity of
mapping for THP, and (3) size of hugetlb mapping; it has yet to be
extended to support guest_memfd.

When guest_memfd supports mmap() and we also want to support testing
with mmap()ing from guest_memfd, the number of combinations makes
enumeration in vm_mem_backing_src_type difficult.

This refactor separates vm_mem_backing_src_type from
userspace_mem_region. For now, vm_mem_backing_src_type remains a
possible way for tests to specify, on the command line, the combination
of backing memory to test.

vm_mem_add() is now the last place where vm_mem_backing_src_type is
interpreted, to

1. Check validity of the requested guest_paddr.
2. Align mmap_size appropriately based on the mapping's page_size and
   architecture.
3. Install memory appropriately according to the mapping's page size.

mmap()ing an alias seems to be specific to userfaultfd tests and could
be refactored out of struct userspace_mem_region and localized in
userfaultfd tests in future.

This paves the way for replacing vm_mem_backing_src_type with multiple
command line flags that would specify backing memory more flexibly.
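Sketched below is the kind of composition described in the next
paragraph. This is hypothetical test code: the helper names and
signatures are taken from this patch's diff, while the memfd flags,
sizes, alignment, and the memslot fields filled in before adding the
region are illustrative only.

	struct userspace_mem_region *region;

	region = vm_mem_region_alloc(vm);

	/* region->fd must be set up before a MAP_SHARED mmap() of it. */
	region->fd = kvm_create_memfd(size, MFD_CLOEXEC);
	vm_mem_region_mmap(region, size, MAP_SHARED, region->fd, 0);
	vm_mem_region_install_memory(region, size, getpagesize());

	/* Fill in the remaining memslot parameters, then add the region. */
	region->region.slot = slot;
	region->region.guest_phys_addr = gpa;
	vm_mem_region_add(vm, region);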
Future tests are expected to use vm_mem_region_alloc() to allocate a
struct userspace_mem_region, then use more fundamental functions like
vm_mem_region_mmap(), vm_mem_region_madvise_thp(), kvm_create_memfd(),
vm_create_guest_memfd(), and other functions in vm_mem_add() to
flexibly build up struct userspace_mem_region before finally adding the
region to the vm with vm_mem_region_add().

Signed-off-by: Ackerley Tng
---
 .../testing/selftests/kvm/include/kvm_util.h  |  29 +-
 .../testing/selftests/kvm/include/test_util.h |   2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 413 +++++++++++-------
 tools/testing/selftests/kvm/lib/test_util.c   |  25 ++
 4 files changed, 319 insertions(+), 150 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index d336cd0c8f19..1576e7e4aefe 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -35,11 +35,26 @@ struct userspace_mem_region {
 	struct sparsebit *protected_phy_pages;
 	int fd;
 	off_t offset;
-	enum vm_mem_backing_src_type backing_src_type;
+	/*
+	 * host_mem is mmap_start aligned upwards to an address suitable for the
+	 * architecture. In most cases, host_mem and mmap_start are the same,
+	 * except for s390x, where the host address must be aligned to 1M (due
+	 * to PGSTEs).
+	 */
+#ifdef __s390x__
+#define S390X_HOST_ADDRESS_ALIGNMENT 0x100000
+#endif
 	void *host_mem;
+	/* host_alias is to mmap_alias as host_mem is to mmap_start */
 	void *host_alias;
 	void *mmap_start;
 	void *mmap_alias;
+	/*
+	 * mmap_size is possibly larger than region.memory_size because in some
+	 * cases, host_mem has to be adjusted upwards (see comment for host_mem
+	 * above). In those cases, mmap_size has to be adjusted upwards so that
+	 * enough memory is available in this memslot.
+	 */
 	size_t mmap_size;
 	struct rb_node gpa_node;
 	struct rb_node hva_node;
@@ -559,6 +574,18 @@ int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flag
 			 uint64_t gpa, uint64_t size, void *hva,
 			 uint32_t guest_memfd, uint64_t guest_memfd_offset);
 
+struct userspace_mem_region *vm_mem_region_alloc(struct kvm_vm *vm);
+void *vm_mem_region_mmap(struct userspace_mem_region *region, size_t length,
+			 int flags, int fd, off_t offset);
+void vm_mem_region_install_memory(struct userspace_mem_region *region,
+				  size_t memslot_size, size_t alignment);
+void vm_mem_region_madvise_thp(struct userspace_mem_region *region, int advice);
+int vm_mem_region_install_guest_memfd(struct userspace_mem_region *region,
+				      int guest_memfd);
+void *vm_mem_region_mmap_alias(struct userspace_mem_region *region, int flags,
+			       size_t alignment);
+void vm_mem_region_add(struct kvm_vm *vm, struct userspace_mem_region *region);
+
 void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	enum vm_mem_backing_src_type src_type,
 	uint64_t guest_paddr, uint32_t slot, uint64_t npages,
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 011e757d4e2c..983adeb54c0e 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -159,6 +159,8 @@ size_t get_trans_hugepagesz(void);
 size_t get_def_hugetlb_pagesz(void);
 const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
 size_t get_backing_src_pagesz(uint32_t i);
+int backing_src_should_madvise(uint32_t i);
+int get_backing_src_madvise_advice(uint32_t i);
 bool is_backing_src_hugetlb(uint32_t i);
 void backing_src_help(const char *flag);
 enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 56b170b725b3..9bdd03a5da90 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -774,15 +774,12 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	free(vmp);
 }
 
-int kvm_memfd_alloc(size_t size, bool hugepages)
+int kvm_create_memfd(size_t size, unsigned int flags)
 {
-	int memfd_flags = MFD_CLOEXEC;
-	int fd, r;
-
-	if (hugepages)
-		memfd_flags |= MFD_HUGETLB;
+	int fd;
+	int r;
 
-	fd = memfd_create("kvm_selftest", memfd_flags);
+	fd = memfd_create("kvm_selftest", flags);
 	TEST_ASSERT(fd != -1, __KVM_SYSCALL_ERROR("memfd_create()", fd));
 
 	r = ftruncate(fd, size);
@@ -794,6 +791,16 @@ int kvm_memfd_alloc(size_t size, bool hugepages)
 	return fd;
 }
 
+int kvm_memfd_alloc(size_t size, bool hugepages)
+{
+	int memfd_flags = MFD_CLOEXEC;
+
+	if (hugepages)
+		memfd_flags |= MFD_HUGETLB;
+
+	return kvm_create_memfd(size, memfd_flags);
+}
+
 /*
  * Memory Compare, host virtual to guest virtual
  *
@@ -973,185 +980,293 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags
 		    errno, strerror(errno));
 }
 
+/**
+ * Allocates and returns a struct userspace_mem_region.
+ */
+struct userspace_mem_region *vm_mem_region_alloc(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *region;
 
-/* FIXME: This thing needs to be ripped apart and rewritten. */
-void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
-		uint64_t guest_paddr, uint32_t slot, uint64_t npages,
-		uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset)
+	/* Allocate and initialize new mem region structure. */
*/ + region =3D calloc(1, sizeof(*region)); + TEST_ASSERT(region !=3D NULL, "Insufficient Memory"); + + region->unused_phy_pages =3D sparsebit_alloc(); + if (vm_arch_has_protected_memory(vm)) + region->protected_phy_pages =3D sparsebit_alloc(); + + region->fd =3D -1; + region->region.guest_memfd =3D -1; + + return region; +} + +static size_t compute_page_size(int mmap_flags, int madvise_advice) +{ + if (mmap_flags & MAP_HUGETLB) { + int size_flags =3D (mmap_flags >> MAP_HUGE_SHIFT) & MAP_HUGE_MASK; + if (!size_flags) + return get_def_hugetlb_pagesz(); + + return 1ULL << size_flags; + } + + return madvise_advice =3D=3D MADV_HUGEPAGE ? get_trans_hugepagesz() : get= pagesize(); +} + +/** + * Calls mmap() with @length, @flags, @fd, @offset for @region. + * + * Think of this as the struct userspace_mem_region wrapper for the mmap() + * syscall. + */ +void *vm_mem_region_mmap(struct userspace_mem_region *region, size_t lengt= h, + int flags, int fd, off_t offset) +{ + void *mem; + + if (flags & MAP_SHARED) { + TEST_ASSERT(fd !=3D -1, + "Ensure that fd is provided for shared mappings."); + TEST_ASSERT( + region->fd =3D=3D fd || region->region.guest_memfd =3D=3D fd, + "Ensure that fd is opened before mmap, and is either " + "set up in region->fd or region->region.guest_memfd."); + } + + mem =3D mmap(NULL, length, PROT_READ | PROT_WRITE, flags, fd, offset); + TEST_ASSERT(mem !=3D MAP_FAILED, "Couldn't mmap anonymous memory"); + + region->mmap_start =3D mem; + region->mmap_size =3D length; + region->offset =3D offset; + + return mem; +} + +/** + * Installs mmap()ed memory in @region->mmap_start as @region->host_mem, + * checking constraints. + */ +void vm_mem_region_install_memory(struct userspace_mem_region *region, + size_t memslot_size, size_t alignment) +{ + TEST_ASSERT(region->mmap_size >=3D memslot_size, + "mmap()ed memory insufficient for memslot"); + + region->host_mem =3D align_ptr_up(region->mmap_start, alignment); + region->region.userspace_addr =3D (uint64_t)region->host_mem; + region->region.memory_size =3D memslot_size; +} + + +/** + * Calls madvise with @advice for @region. + * + * Think of this as the struct userspace_mem_region wrapper for the madvis= e() + * syscall. + */ +void vm_mem_region_madvise_thp(struct userspace_mem_region *region, int ad= vice) { int ret; - struct userspace_mem_region *region; - size_t backing_src_pagesz =3D get_backing_src_pagesz(src_type); - size_t mem_size =3D npages * vm->page_size; - size_t alignment; =20 - TEST_REQUIRE_SET_USER_MEMORY_REGION2(); + TEST_ASSERT( + region->host_mem && region->mmap_size, + "vm_mem_region_madvise_thp() must be called after vm_mem_region_mmap()"); =20 - TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) =3D=3D npages, - "Number of guest pages is not compatible with the host. 
" - "Try npages=3D%d", vm_adjust_num_guest_pages(vm->mode, npages)); - - TEST_ASSERT((guest_paddr % vm->page_size) =3D=3D 0, "Guest physical " - "address not on a page boundary.\n" - " guest_paddr: 0x%lx vm->page_size: 0x%x", - guest_paddr, vm->page_size); - TEST_ASSERT((((guest_paddr >> vm->page_shift) + npages) - 1) - <=3D vm->max_gfn, "Physical range beyond maximum " - "supported physical address,\n" - " guest_paddr: 0x%lx npages: 0x%lx\n" - " vm->max_gfn: 0x%lx vm->page_size: 0x%x", - guest_paddr, npages, vm->max_gfn, vm->page_size); + ret =3D madvise(region->host_mem, region->mmap_size, advice); + TEST_ASSERT(ret =3D=3D 0, "madvise failed, addr: %p length: 0x%lx", + region->host_mem, region->mmap_size); +} + +/** + * Installs guest_memfd by setting it up in @region. + * + * Returns the guest_memfd that was installed in the @region. + */ +int vm_mem_region_install_guest_memfd(struct userspace_mem_region *region, + int guest_memfd) +{ + /* + * Install a unique fd for each memslot so that the fd can be closed + * when the region is deleted without needing to track if the fd is + * owned by the framework or by the caller. + */ + guest_memfd =3D dup(guest_memfd); + TEST_ASSERT(guest_memfd >=3D 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd)= ); + region->region.guest_memfd =3D guest_memfd; + + return guest_memfd; +} + +/** + * Calls mmap() to create an alias for mmap()ed memory at region->host_mem, + * exactly the same size the was mmap()ed. + * + * This is used mainly for userfaultfd tests. + */ +void *vm_mem_region_mmap_alias(struct userspace_mem_region *region, int fl= ags, + size_t alignment) +{ + region->mmap_alias =3D mmap(NULL, region->mmap_size, + PROT_READ | PROT_WRITE, flags, region->fd, 0); + TEST_ASSERT(region->mmap_alias !=3D MAP_FAILED, + __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); + + region->host_alias =3D align_ptr_up(region->mmap_alias, alignment); + + return region->host_alias; +} + +static void vm_mem_region_assert_no_duplicate(struct kvm_vm *vm, uint32_t = slot, + uint64_t gpa, size_t size) +{ + struct userspace_mem_region *region; =20 /* * Confirm a mem region with an overlapping address doesn't * already exist. */ - region =3D (struct userspace_mem_region *) userspace_mem_region_find( - vm, guest_paddr, (guest_paddr + npages * vm->page_size) - 1); - if (region !=3D NULL) - TEST_FAIL("overlapping userspace_mem_region already " - "exists\n" - " requested guest_paddr: 0x%lx npages: 0x%lx " - "page_size: 0x%x\n" - " existing guest_paddr: 0x%lx size: 0x%lx", - guest_paddr, npages, vm->page_size, - (uint64_t) region->region.guest_phys_addr, - (uint64_t) region->region.memory_size); + region =3D userspace_mem_region_find(vm, gpa, gpa + size - 1); + if (region !=3D NULL) { + TEST_FAIL("overlapping userspace_mem_region already exists\n" + " requested gpa: 0x%lx size: 0x%lx" + " existing gpa: 0x%lx size: 0x%lx", + gpa, size, + (uint64_t) region->region.guest_phys_addr, + (uint64_t) region->region.memory_size); + } =20 /* Confirm no region with the requested slot already exists. 
*/ - hash_for_each_possible(vm->regions.slot_hash, region, slot_node, - slot) { + hash_for_each_possible(vm->regions.slot_hash, region, slot_node, slot) { if (region->region.slot !=3D slot) continue; =20 - TEST_FAIL("A mem region with the requested slot " - "already exists.\n" - " requested slot: %u paddr: 0x%lx npages: 0x%lx\n" - " existing slot: %u paddr: 0x%lx size: 0x%lx", - slot, guest_paddr, npages, - region->region.slot, - (uint64_t) region->region.guest_phys_addr, - (uint64_t) region->region.memory_size); + TEST_FAIL("A mem region with the requested slot already exists.\n" + " requested slot: %u paddr: 0x%lx size: 0x%lx\n" + " existing slot: %u paddr: 0x%lx size: 0x%lx", + slot, gpa, size, + region->region.slot, + (uint64_t) region->region.guest_phys_addr, + (uint64_t) region->region.memory_size); } +} =20 - /* Allocate and initialize new mem region structure. */ - region =3D calloc(1, sizeof(*region)); - TEST_ASSERT(region !=3D NULL, "Insufficient Memory"); - region->mmap_size =3D mem_size; +/** + * Add a @region to @vm. All necessary fields in region->region should alr= eady + * be populated. + * + * Think of this as the struct userspace_mem_region wrapper for the + * KVM_SET_USER_MEMORY_REGION2 ioctl. + */ +void vm_mem_region_add(struct kvm_vm *vm, struct userspace_mem_region *reg= ion) +{ + uint64_t npages; + uint64_t gpa; + int ret; =20 -#ifdef __s390x__ - /* On s390x, the host address must be aligned to 1M (due to PGSTEs) */ - alignment =3D 0x100000; -#else - alignment =3D 1; -#endif + TEST_REQUIRE_SET_USER_MEMORY_REGION2(); =20 - /* - * When using THP mmap is not guaranteed to returned a hugepage aligned - * address so we have to pad the mmap. Padding is not needed for HugeTLB - * because mmap will always return an address aligned to the HugeTLB - * page size. - */ - if (src_type =3D=3D VM_MEM_SRC_ANONYMOUS_THP) - alignment =3D max(backing_src_pagesz, alignment); + npages =3D region->region.memory_size / vm->page_size; + TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) =3D=3D npages, + "Number of guest pages is not compatible with the host. 
" + "Try npages=3D%d", vm_adjust_num_guest_pages(vm->mode, npages)); + + gpa =3D region->region.guest_phys_addr; + TEST_ASSERT((gpa % vm->page_size) =3D=3D 0, + "Guest physical address not on a page boundary.\n" + " gpa: 0x%lx vm->page_size: 0x%x", + gpa, vm->page_size); + TEST_ASSERT((((gpa >> vm->page_shift) + npages) - 1) <=3D vm->max_gfn, + "Physical range beyond maximum supported physical address,\n" + " gpa: 0x%lx npages: 0x%lx\n" + " vm->max_gfn: 0x%lx vm->page_size: 0x%x", + gpa, npages, vm->max_gfn, vm->page_size); + + vm_mem_region_assert_no_duplicate(vm, region->region.slot, gpa, + region->mmap_size); =20 - TEST_ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz)); + ret =3D __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, ®ion->region); + TEST_ASSERT(ret =3D=3D 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n" + " rc: %i errno: %i\n" + " slot: %u flags: 0x%x\n" + " guest_phys_addr: 0x%lx size: 0x%llx guest_memfd: %d", + ret, errno, region->region.slot, region->region.flags, + gpa, region->region.memory_size, + region->region.guest_memfd); =20 - /* Add enough memory to align up if necessary */ - if (alignment > 1) - region->mmap_size +=3D alignment; + sparsebit_set_num(region->unused_phy_pages, gpa >> vm->page_shift, npages= ); =20 - region->fd =3D -1; - if (backing_src_is_shared(src_type)) - region->fd =3D kvm_memfd_alloc(region->mmap_size, - src_type =3D=3D VM_MEM_SRC_SHARED_HUGETLB); - - region->mmap_start =3D mmap(NULL, region->mmap_size, - PROT_READ | PROT_WRITE, - vm_mem_backing_src_alias(src_type)->flag, - region->fd, 0); - TEST_ASSERT(region->mmap_start !=3D MAP_FAILED, - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); + /* Add to quick lookup data structures */ + vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region); + vm_userspace_mem_region_hva_insert(&vm->regions.hva_tree, region); + hash_add(vm->regions.slot_hash, ®ion->slot_node, region->region.slot); +} =20 - TEST_ASSERT(!is_backing_src_hugetlb(src_type) || - region->mmap_start =3D=3D align_ptr_up(region->mmap_start, backing_s= rc_pagesz), - "mmap_start %p is not aligned to HugeTLB page size 0x%lx", - region->mmap_start, backing_src_pagesz); +void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, + uint64_t guest_paddr, uint32_t slot, uint64_t npages, + uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset) +{ + struct userspace_mem_region *region; + size_t mapping_page_size; + size_t memslot_size; + int madvise_advice; + size_t mmap_size; + size_t alignment; + int mmap_flags; + int memfd; =20 - /* Align host address */ - region->host_mem =3D align_ptr_up(region->mmap_start, alignment); + memslot_size =3D npages * vm->page_size; + + mmap_flags =3D vm_mem_backing_src_alias(src_type)->flag; + madvise_advice =3D get_backing_src_madvise_advice(src_type); + mapping_page_size =3D compute_page_size(mmap_flags, madvise_advice); + + TEST_ASSERT_EQ(guest_paddr, align_up(guest_paddr, mapping_page_size)); + + alignment =3D mapping_page_size; +#ifdef __s390x__ + alignment =3D max(alignment, S390X_HOST_ADDRESS_ALIGNMENT); +#endif =20 - /* As needed perform madvise */ - if ((src_type =3D=3D VM_MEM_SRC_ANONYMOUS || - src_type =3D=3D VM_MEM_SRC_ANONYMOUS_THP) && thp_configured()) { - ret =3D madvise(region->host_mem, mem_size, - src_type =3D=3D VM_MEM_SRC_ANONYMOUS ? 
MADV_NOHUGEPAGE : MADV_HUG= EPAGE); - TEST_ASSERT(ret =3D=3D 0, "madvise failed, addr: %p length: 0x%lx src_ty= pe: %s", - region->host_mem, mem_size, - vm_mem_backing_src_alias(src_type)->name); + region =3D vm_mem_region_alloc(vm); + + memfd =3D -1; + if (backing_src_is_shared(src_type)) { + unsigned int memfd_flags =3D MFD_CLOEXEC; + if (src_type =3D=3D VM_MEM_SRC_SHARED_HUGETLB) + memfd_flags |=3D MFD_HUGETLB; + + memfd =3D kvm_create_memfd(memslot_size, memfd_flags); } + region->fd =3D memfd; + + mmap_size =3D align_up(memslot_size, alignment); + vm_mem_region_mmap(region, mmap_size, mmap_flags, memfd, 0); + vm_mem_region_install_memory(region, memslot_size, alignment); =20 - region->backing_src_type =3D src_type; + if (backing_src_should_madvise(src_type)) + vm_mem_region_madvise_thp(region, madvise_advice); + + if (backing_src_is_shared(src_type)) + vm_mem_region_mmap_alias(region, mmap_flags, alignment); =20 if (flags & KVM_MEM_GUEST_MEMFD) { if (guest_memfd < 0) { - uint32_t guest_memfd_flags =3D 0; - TEST_ASSERT(!guest_memfd_offset, - "Offset must be zero when creating new guest_memfd"); - guest_memfd =3D vm_create_guest_memfd(vm, mem_size, guest_memfd_flags); - } else { - /* - * Install a unique fd for each memslot so that the fd - * can be closed when the region is deleted without - * needing to track if the fd is owned by the framework - * or by the caller. - */ - guest_memfd =3D dup(guest_memfd); - TEST_ASSERT(guest_memfd >=3D 0, __KVM_SYSCALL_ERROR("dup()", guest_memf= d)); + TEST_ASSERT( + guest_memfd_offset =3D=3D 0, + "Offset must be zero when creating new guest_memfd"); + guest_memfd =3D vm_create_guest_memfd(vm, memslot_size, 0); } =20 - region->region.guest_memfd =3D guest_memfd; - region->region.guest_memfd_offset =3D guest_memfd_offset; - } else { - region->region.guest_memfd =3D -1; + vm_mem_region_install_guest_memfd(region, guest_memfd); } =20 - region->unused_phy_pages =3D sparsebit_alloc(); - if (vm_arch_has_protected_memory(vm)) - region->protected_phy_pages =3D sparsebit_alloc(); - sparsebit_set_num(region->unused_phy_pages, - guest_paddr >> vm->page_shift, npages); region->region.slot =3D slot; region->region.flags =3D flags; region->region.guest_phys_addr =3D guest_paddr; - region->region.memory_size =3D npages * vm->page_size; - region->region.userspace_addr =3D (uintptr_t) region->host_mem; - ret =3D __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, ®ion->region); - TEST_ASSERT(ret =3D=3D 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n" - " rc: %i errno: %i\n" - " slot: %u flags: 0x%x\n" - " guest_phys_addr: 0x%lx size: 0x%lx guest_memfd: %d", - ret, errno, slot, flags, - guest_paddr, (uint64_t) region->region.memory_size, - region->region.guest_memfd); - - /* Add to quick lookup data structures */ - vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region); - vm_userspace_mem_region_hva_insert(&vm->regions.hva_tree, region); - hash_add(vm->regions.slot_hash, ®ion->slot_node, slot); - - /* If shared memory, create an alias. 
 */
-	if (region->fd >= 0) {
-		region->mmap_alias = mmap(NULL, region->mmap_size,
-					  PROT_READ | PROT_WRITE,
-					  vm_mem_backing_src_alias(src_type)->flag,
-					  region->fd, 0);
-		TEST_ASSERT(region->mmap_alias != MAP_FAILED,
-			    __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
-
-		/* Align host alias address */
-		region->host_alias = align_ptr_up(region->mmap_alias, alignment);
-	}
+	region->region.guest_memfd_offset = guest_memfd_offset;
+	vm_mem_region_add(vm, region);
 }
 
 void vm_userspace_mem_region_add(struct kvm_vm *vm,
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index d0a9b5ee0c01..cbcc1e7ad578 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -351,6 +351,31 @@ size_t get_private_mem_backing_src_pagesz(uint32_t i)
 	}
 }
 
+int backing_src_should_madvise(uint32_t i)
+{
+	switch (i) {
+	case VM_MEM_SRC_ANONYMOUS:
+	case VM_MEM_SRC_SHMEM:
+	case VM_MEM_SRC_ANONYMOUS_THP:
+		return true;
+	default:
+		return false;
+	}
+}
+
+int get_backing_src_madvise_advice(uint32_t i)
+{
+	switch (i) {
+	case VM_MEM_SRC_ANONYMOUS:
+	case VM_MEM_SRC_SHMEM:
+		return MADV_NOHUGEPAGE;
+	case VM_MEM_SRC_ANONYMOUS_THP:
+		return MADV_HUGEPAGE;
+	default:
+		return 0;
+	}
+}
+
 bool is_backing_src_hugetlb(uint32_t i)
 {
 	return !!(vm_mem_backing_src_alias(i)->flag & MAP_HUGETLB);
-- 
2.46.0.598.g6f2099f65c-goog
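[Editorial sketch, not part of the series: one way a test might compose the helpers introduced in this patch, following the flow described in the commit message. The function name and the slot, GPA, and size values below are made up; everything else uses only helpers defined above, and assumes the selftests' kvm_util.h environment.]

/* Build up and add a plain anonymous memslot using the new helpers. */
static void example_add_anon_memslot(struct kvm_vm *vm)
{
	struct userspace_mem_region *region;
	uint64_t gpa = 0x10000000;	/* hypothetical */
	uint32_t slot = 1;		/* hypothetical */
	size_t size = 0x200000;		/* hypothetical */

	region = vm_mem_region_alloc(vm);

	/* Private anonymous mapping, so no fd is involved. */
	vm_mem_region_mmap(region, size, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	vm_mem_region_install_memory(region, size, getpagesize());

	region->region.slot = slot;
	region->region.flags = 0;
	region->region.guest_phys_addr = gpa;

	vm_mem_region_add(vm, region);
}

vm_mem_region_add() then performs the KVM_SET_USER_MEMORY_REGION2 ioctl and the framework bookkeeping (sparsebit and lookup-tree insertion), so the caller only fills in the region fields it cares about.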
From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:08 +0000
Subject: [RFC PATCH 37/39] KVM: selftests: Add helper to perform madvise by memslots
From: Ackerley Tng

A contiguous GPA range may not be contiguous in HVA.

This helper performs madvise, given a GPA range, by madvising in blocks according to the memslot configuration.
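[Editorial usage illustration, not part of the patch; the GPA and length are hypothetical:]

/* Discard backing for a 4M GPA range that may be split across memslots. */
vm_guest_mem_madvise(vm, 0x10000000, 4 * 0x100000, MADV_DONTNEED);

Because the helper walks the range memslot by memslot, the corresponding HVAs need not be contiguous.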
Signed-off-by: Ackerley Tng --- tools/include/linux/kernel.h | 4 +-- .../testing/selftests/kvm/include/kvm_util.h | 2 ++ tools/testing/selftests/kvm/lib/kvm_util.c | 30 +++++++++++++++++++ 3 files changed, 34 insertions(+), 2 deletions(-) diff --git a/tools/include/linux/kernel.h b/tools/include/linux/kernel.h index 07cfad817d53..5454cd3272ed 100644 --- a/tools/include/linux/kernel.h +++ b/tools/include/linux/kernel.h @@ -54,8 +54,8 @@ _min1 < _min2 ? _min1 : _min2; }) #endif =20 -#define max_t(type, x, y) max((type)x, (type)y) -#define min_t(type, x, y) min((type)x, (type)y) +#define max_t(type, x, y) max((type)(x), (type)(y)) +#define min_t(type, x, y) min((type)(x), (type)(y)) #define clamp(val, lo, hi) min((typeof(val))max(val, lo), hi) =20 #ifndef BUG_ON diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing= /selftests/kvm/include/kvm_util.h index 1576e7e4aefe..58b516c23574 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -433,6 +433,8 @@ static inline void vm_mem_set_shared(struct kvm_vm *vm,= uint64_t gpa, void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t gpa, uint64_t size, bool punch_hole); =20 +void vm_guest_mem_madvise(struct kvm_vm *vm, vm_paddr_t gpa_start, uint64_= t size, + int advice); static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, uint64_t gpa, uint64_t size) { diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/sel= ftests/kvm/lib/kvm_util.c index 9bdd03a5da90..21ea6616124c 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1416,6 +1416,36 @@ void vm_guest_mem_fallocate(struct kvm_vm *vm, uint6= 4_t base, uint64_t size, } } =20 +void vm_guest_mem_madvise(struct kvm_vm *vm, vm_paddr_t gpa_start, uint64_= t size, + int advice) +{ + size_t madvise_len; + vm_paddr_t gpa_end; + vm_paddr_t gpa; + + gpa_end =3D gpa_start + size; + for (gpa =3D gpa_start; gpa < gpa_end; gpa +=3D madvise_len) { + struct userspace_mem_region *region; + void *hva_start; + uint64_t memslot_end; + int ret; + + region =3D userspace_mem_region_find(vm, gpa, gpa); + TEST_ASSERT(region, "Memory region not found for GPA 0x%lx", gpa); + + hva_start =3D addr_gpa2hva(vm, gpa); + memslot_end =3D region->region.userspace_addr + + region->region.memory_size; + madvise_len =3D min_t(size_t, memslot_end - (uint64_t)hva_start, + gpa_end - gpa); + + ret =3D madvise(hva_start, madvise_len, advice); + TEST_ASSERT(!ret, "madvise(addr=3D%p, len=3D%lx, advice=3D%x) failed\n", + hva_start, madvise_len, advice); + } +} + + /* Returns the size of a vCPU's kvm_run structure. 
 */
 static int vcpu_mmap_sz(void)
 {
-- 
2.46.0.598.g6f2099f65c-goog
From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:09 +0000
Message-ID: <3ef4b32d32dca6e1b506e967c950dc2d4c3bc7ae.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 38/39] KVM: selftests: Update private_mem_conversions_test for mmap()able guest_memfd
From: Ackerley Tng

Signed-off-by: Ackerley Tng
---
 .../kvm/x86_64/private_mem_conversions_test.c | 146 +++++++++++++++---
 .../x86_64/private_mem_conversions_test.sh    |   3 +
 2 files changed, 124 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 71f480c19f92..6524ef398584 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -11,6 +11,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include
 #include
@@ -202,15 +204,19 @@ static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
 		guest_sync_shared(gpa, size, p3, p4);
 		memcmp_g(gpa, p4, size);
 
-		/* Reset the shared memory back to the initial pattern. */
-		memset((void *)gpa, init_p, size);
-
 		/*
 		 * Free (via PUNCH_HOLE) *all* private memory so that the next
 		 * iteration starts from a clean slate, e.g. with respect to
 		 * whether or not there are pages/folios in guest_mem.
 		 */
 		guest_map_shared(base_gpa, PER_CPU_DATA_SIZE, true);
+
+		/*
+		 * Reset the entire block back to the initial pattern. Do this
+		 * after fallocate(PUNCH_HOLE) because hole-punching zeroes
+		 * memory.
+		 */
+		memset((void *)base_gpa, init_p, PER_CPU_DATA_SIZE);
 	}
 }
 
@@ -286,7 +292,8 @@ static void guest_code(uint64_t base_gpa)
 	GUEST_DONE();
 }
 
-static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
+static void handle_exit_hypercall(struct kvm_vcpu *vcpu,
+				  bool back_shared_memory_with_guest_memfd)
 {
 	struct kvm_run *run = vcpu->run;
 	uint64_t gpa = run->hypercall.args[0];
@@ -303,17 +310,46 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 	if (do_fallocate)
 		vm_guest_mem_fallocate(vm, gpa, size, map_shared);
 
-	if (set_attributes)
+	if (set_attributes) {
+		if (back_shared_memory_with_guest_memfd && !map_shared)
+			vm_guest_mem_madvise(vm, gpa, size, MADV_DONTNEED);
 		vm_set_memory_attributes(vm, gpa, size,
 					 map_shared ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE);
+	}
 	run->hypercall.ret = 0;
 }
 
+static void assert_not_faultable(uint8_t *address)
+{
+	pid_t child_pid;
+
+	child_pid = fork();
+	TEST_ASSERT(child_pid != -1, "fork failed");
+
+	if (child_pid == 0) {
+		*address = 'A';
+		/*
+		 * The write is expected to SIGBUS. If it unexpectedly
+		 * succeeds, exit cleanly so the child does not keep running
+		 * the test; the parent's WIFSIGNALED check then fails.
+		 */
+		_exit(0);
+	} else {
+		int status;
+		waitpid(child_pid, &status, 0);
+
+		TEST_ASSERT(WIFSIGNALED(status),
+			    "Child should have exited with a signal");
+		TEST_ASSERT_EQ(WTERMSIG(status), SIGBUS);
+	}
+}
+
 static bool run_vcpus;
 
-static void *__test_mem_conversions(void *__vcpu)
+struct test_thread_args
 {
-	struct kvm_vcpu *vcpu = __vcpu;
+	struct kvm_vcpu *vcpu;
+	bool back_shared_memory_with_guest_memfd;
+};
+
+static void *__test_mem_conversions(void *params)
+{
+	struct test_thread_args *args = params;
+	struct kvm_vcpu *vcpu = args->vcpu;
 	struct kvm_run *run = vcpu->run;
 	struct kvm_vm *vm = vcpu->vm;
 	struct ucall uc;
@@ -325,7 +361,8 @@ static void *__test_mem_conversions(void *__vcpu)
 		vcpu_run(vcpu);
 
 		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
-			handle_exit_hypercall(vcpu);
+			handle_exit_hypercall(vcpu,
+					      args->back_shared_memory_with_guest_memfd);
 			continue;
 		}
 
@@ -349,8 +386,18 @@ static void *__test_mem_conversions(void *__vcpu)
 			size_t nr_bytes = min_t(size_t, vm->page_size, size - i);
 			uint8_t *hva = addr_gpa2hva(vm, gpa + i);
 
-			/* In all cases, the host should observe the shared data. */
-			memcmp_h(hva, gpa + i, uc.args[3], nr_bytes);
+			/* Check contents of memory */
+			if (args->back_shared_memory_with_guest_memfd &&
+			    uc.args[0] == SYNC_PRIVATE) {
+				assert_not_faultable(hva);
+			} else {
+				/*
+				 * If shared and private memory use
+				 * separate backing memory, the host
+				 * should always observe shared data.
+				 */
+				memcmp_h(hva, gpa + i, uc.args[3], nr_bytes);
+			}
 
 			/* For shared, write the new pattern to guest memory.
*/ if (uc.args[0] =3D=3D SYNC_SHARED) @@ -366,11 +413,41 @@ static void *__test_mem_conversions(void *__vcpu) } } =20 -static void -test_mem_conversions(enum vm_mem_backing_src_type src_type, - enum vm_private_mem_backing_src_type private_mem_src_type, - uint32_t nr_vcpus, - uint32_t nr_memslots) +static void add_memslot(struct kvm_vm *vm, uint64_t gpa, uint32_t slot, + uint64_t size, int guest_memfd, + uint64_t guest_memfd_offset, + enum vm_mem_backing_src_type src_type, + bool back_shared_memory_with_guest_memfd) +{ + struct userspace_mem_region *region; + + if (!back_shared_memory_with_guest_memfd) { + vm_mem_add(vm, src_type, gpa, slot, size / vm->page_size, + KVM_MEM_GUEST_MEMFD, guest_memfd, + guest_memfd_offset); + return; + } + + region =3D vm_mem_region_alloc(vm); + + guest_memfd =3D vm_mem_region_install_guest_memfd(region, guest_memfd); + + vm_mem_region_mmap(region, size, MAP_SHARED, guest_memfd, guest_memfd_off= set); + vm_mem_region_install_memory(region, size, getpagesize()); + + region->region.slot =3D slot; + region->region.flags =3D KVM_MEM_GUEST_MEMFD; + region->region.guest_phys_addr =3D gpa; + region->region.guest_memfd_offset =3D guest_memfd_offset; + + vm_mem_region_add(vm, region); +} + +static void test_mem_conversions(enum vm_mem_backing_src_type src_type, + enum vm_private_mem_backing_src_type private_mem_src_type, + uint32_t nr_vcpus, + uint32_t nr_memslots, + bool back_shared_memory_with_guest_memfd) { /* * Allocate enough memory so that each vCPU's chunk of memory can be @@ -381,6 +458,7 @@ test_mem_conversions(enum vm_mem_backing_src_type src_t= ype, get_private_mem_backing_src_pagesz(private_mem_src_type), get_backing_src_pagesz(src_type))); const size_t per_cpu_size =3D align_up(PER_CPU_DATA_SIZE, alignment); + struct test_thread_args *thread_args[KVM_MAX_VCPUS]; const size_t memfd_size =3D per_cpu_size * nr_vcpus; const size_t slot_size =3D memfd_size / nr_memslots; struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]; @@ -404,13 +482,14 @@ test_mem_conversions(enum vm_mem_backing_src_type src= _type, vm, memfd_size, vm_private_mem_backing_src_alias(private_mem_src_type)->flag); =20 - for (i =3D 0; i < nr_memslots; i++) - vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i, - BASE_DATA_SLOT + i, slot_size / vm->page_size, - KVM_MEM_GUEST_MEMFD, memfd, slot_size * i); + for (i =3D 0; i < nr_memslots; i++) { + add_memslot(vm, BASE_DATA_GPA + slot_size * i, + BASE_DATA_SLOT + i, slot_size, memfd, slot_size * i, + src_type, back_shared_memory_with_guest_memfd); + } =20 for (i =3D 0; i < nr_vcpus; i++) { - uint64_t gpa =3D BASE_DATA_GPA + i * per_cpu_size; + uint64_t gpa =3D BASE_DATA_GPA + i * per_cpu_size; =20 vcpu_args_set(vcpus[i], 1, gpa); =20 @@ -420,13 +499,23 @@ test_mem_conversions(enum vm_mem_backing_src_type src= _type, */ virt_map(vm, gpa, gpa, PER_CPU_DATA_SIZE / vm->page_size); =20 - pthread_create(&threads[i], NULL, __test_mem_conversions, vcpus[i]); + thread_args[i] =3D malloc(sizeof(struct test_thread_args)); + TEST_ASSERT(thread_args[i] !=3D NULL, + "Could not allocate memory for thread parameters"); + thread_args[i]->vcpu =3D vcpus[i]; + thread_args[i]->back_shared_memory_with_guest_memfd =3D + back_shared_memory_with_guest_memfd; + + pthread_create(&threads[i], NULL, __test_mem_conversions, + (void *)thread_args[i]); } =20 WRITE_ONCE(run_vcpus, true); =20 - for (i =3D 0; i < nr_vcpus; i++) + for (i =3D 0; i < nr_vcpus; i++) { pthread_join(threads[i], NULL); + free(thread_args[i]); + } =20 kvm_vm_free(vm); =20 @@ -448,7 +537,7 @@ 
test_mem_conversions(enum vm_mem_backing_src_type src_type,
 static void usage(const char *cmd)
 {
 	puts("");
-	printf("usage: %s [-h] [-m nr_memslots] [-s mem_type] [-p private_mem_type] [-n nr_vcpus]\n", cmd);
+	printf("usage: %s [-h] [-m nr_memslots] [-s mem_type] [-p private_mem_type] [-n nr_vcpus] [-g]\n", cmd);
 	puts("");
 	backing_src_help("-s");
 	puts("");
@@ -458,19 +547,22 @@ static void usage(const char *cmd)
 	puts("");
 	puts(" -m: specify the number of memslots (default: 1)");
 	puts("");
+	puts(" -g: back shared memory with guest_memfd (default: false)");
+	puts("");
 }
 
 int main(int argc, char *argv[])
 {
 	enum vm_mem_backing_src_type src_type = DEFAULT_VM_MEM_SRC;
 	enum vm_private_mem_backing_src_type private_mem_src_type = DEFAULT_VM_PRIVATE_MEM_SRC;
+	bool back_shared_memory_with_guest_memfd = false;
 	uint32_t nr_memslots = 1;
 	uint32_t nr_vcpus = 1;
 	int opt;
 
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
 
-	while ((opt = getopt(argc, argv, "hm:s:p:n:")) != -1) {
+	while ((opt = getopt(argc, argv, "hgm:s:p:n:")) != -1) {
 		switch (opt) {
 		case 's':
 			src_type = parse_backing_src_type(optarg);
@@ -484,6 +576,9 @@ int main(int argc, char *argv[])
 		case 'm':
 			nr_memslots = atoi_positive("nr_memslots", optarg);
 			break;
+		case 'g':
+			back_shared_memory_with_guest_memfd = true;
+			break;
 		case 'h':
 		default:
 			usage(argv[0]);
@@ -491,7 +586,8 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	test_mem_conversions(src_type, private_mem_src_type, nr_vcpus, nr_memslots);
+	test_mem_conversions(src_type, private_mem_src_type, nr_vcpus, nr_memslots,
+			     back_shared_memory_with_guest_memfd);
 
 	return 0;
 }
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
index fb6705fef466..c7f3dfee0336 100755
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.sh
@@ -75,6 +75,9 @@ TEST_EXECUTABLE="$(dirname "$0")/private_mem_conversions_test"
 	$TEST_EXECUTABLE -s "$src_type" -p "$private_mem_src_type" -n $num_vcpus_to_test
 	$TEST_EXECUTABLE -s "$src_type" -p "$private_mem_src_type" -n $num_vcpus_to_test -m $num_memslots_to_test
 
+	$TEST_EXECUTABLE -s "$src_type" -p "$private_mem_src_type" -n $num_vcpus_to_test -g
+	$TEST_EXECUTABLE -s "$src_type" -p "$private_mem_src_type" -n $num_vcpus_to_test -m $num_memslots_to_test -g
+
 	{ set +x; } 2>/dev/null
 
 	echo
-- 
2.46.0.598.g6f2099f65c-goog
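[Editorial sketch, not part of the patch: when shared memory is mmap()ed from guest_memfd (the -g mode above), the host-side conversion sequence exercised via handle_exit_hypercall() boils down to the two calls below. The wrapper function is hypothetical; the helpers are the ones added in this series.]

/*
 * Convert a range to private when shared memory is backed by guest_memfd:
 * discard the host's shared copy first, then flip the attributes. Host
 * faults on the range afterwards are expected to SIGBUS.
 */
static void example_convert_to_private(struct kvm_vm *vm, uint64_t gpa,
				       uint64_t size)
{
	vm_guest_mem_madvise(vm, gpa, size, MADV_DONTNEED);
	vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
}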
From nobody Sat Nov 30 07:37:40 2024
Date: Tue, 10 Sep 2024 23:44:10 +0000
Message-ID: <38723c5d5e9b530e52f28b9f9f4a6d862ed69bcd.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 39/39] KVM: guest_memfd: Dynamically split/reconstruct HugeTLB page
From: Ackerley Tng
From: Vishal Annapurve

The faultability of a page is used to determine whether to split or reconstruct a page.

If there is any page in a folio that is faultable, split the folio. If all pages in a folio are not faultable, reconstruct the folio.

On truncation, always reconstruct and free regardless of faultability (as long as a HugeTLB page's worth of pages is truncated).

Co-developed-by: Vishal Annapurve
Signed-off-by: Vishal Annapurve
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
---
 virt/kvm/guest_memfd.c | 678 +++++++++++++++++++++++++++--------------
 1 file changed, 456 insertions(+), 222 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fb292e542381..0afc111099c0 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -99,6 +99,23 @@ static bool kvm_gmem_is_faultable(struct inode *inode, pgoff_t index)
 	return xa_to_value(xa_load(faultability, index)) == KVM_GMEM_FAULTABILITY_VALUE;
 }
 
+/**
+ * Return true if any of the @nr_pages beginning at @index is allowed to be
+ * faulted in.
+ */
+static bool kvm_gmem_is_any_faultable(struct inode *inode, pgoff_t index,
+				      int nr_pages)
+{
+	pgoff_t i;
+
+	for (i = index; i < index + nr_pages; ++i) {
+		if (kvm_gmem_is_faultable(inode, i))
+			return true;
+	}
+
+	return false;
+}
+
 /**
  * folio_file_pfn - like folio_file_page, but return a pfn.
  * @folio: The folio which contains this index.
@@ -312,6 +329,40 @@ static int kvm_gmem_hugetlb_filemap_add_folio(struct address_space *mapping,
 	return 0;
 }
 
+static inline void kvm_gmem_hugetlb_filemap_remove_folio(struct folio *folio)
+{
+	folio_lock(folio);
+
+	folio_clear_dirty(folio);
+	folio_clear_uptodate(folio);
+	filemap_remove_folio(folio);
+
+	folio_unlock(folio);
+}
+
+/*
+ * Locks a block of nr_pages (1 << huge_page_order(h)) pages within @mapping
+ * beginning at @index. Take either this or filemap_invalidate_lock() whenever
+ * the filemap is accessed.
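+ *
+ * The hash is computed from the huge-page-aligned index, so every index that
+ * falls within the same huge page serializes on the same fault mutex.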
+ */ +static u32 hugetlb_fault_mutex_lock(struct address_space *mapping, pgoff_t= index) +{ + pgoff_t hindex; + u32 hash; + + hindex =3D index >> huge_page_order(kvm_gmem_hgmem(mapping->host)->h); + hash =3D hugetlb_fault_mutex_hash(mapping, hindex); + + mutex_lock(&hugetlb_fault_mutex_table[hash]); + + return hash; +} + +static void hugetlb_fault_mutex_unlock(u32 hash) +{ + mutex_unlock(&hugetlb_fault_mutex_table[hash]); +} + struct kvm_gmem_split_stash { struct { unsigned long _flags_2; @@ -394,15 +445,136 @@ static int kvm_gmem_hugetlb_reconstruct_folio(struct= hstate *h, struct folio *fo } =20 __folio_set_hugetlb(folio); - - folio_set_count(folio, 1); + hugetlb_folio_list_add(folio, &h->hugepage_activelist); =20 hugetlb_vmemmap_optimize_folio(h, folio); =20 + folio_set_count(folio, 1); + return 0; } =20 -/* Basically folio_set_order(folio, 1) without the checks. */ +/** + * Reconstruct a HugeTLB folio out of folio_nr_pages(@first_folio) pages. = Will + * clean up subfolios from filemap and add back the reconstructed folio. F= olios + * to be reconstructed must not be locked, and reconstructed folio will no= t be + * locked. Return 0 on success or negative error otherwise. + * + * hugetlb_fault_mutex_lock() has to be held when calling this function. + * + * Expects that before this call, the filemap's refcounts are the only ref= counts + * for the folios in the filemap. After this function returns, the filemap= 's + * refcount will be the only refcount on the reconstructed folio. + */ +static int kvm_gmem_reconstruct_folio_in_filemap(struct hstate *h, + struct folio *first_folio) +{ + struct address_space *mapping; + struct folio_batch fbatch; + unsigned long end; + pgoff_t index; + pgoff_t next; + int ret; + int i; + + if (folio_order(first_folio) =3D=3D huge_page_order(h)) + return 0; + + index =3D first_folio->index; + mapping =3D first_folio->mapping; + + next =3D index; + end =3D index + (1UL << huge_page_order(h)); + folio_batch_init(&fbatch); + while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) { + for (i =3D 0; i < folio_batch_count(&fbatch); ++i) { + struct folio *folio; + + folio =3D fbatch.folios[i]; + + /* + * Before removing from filemap, take a reference so + * sub-folios don't get freed when removing from + * filemap. + */ + folio_get(folio); + + kvm_gmem_hugetlb_filemap_remove_folio(folio); + } + folio_batch_release(&fbatch); + } + + ret =3D kvm_gmem_hugetlb_reconstruct_folio(h, first_folio); + if (ret) { + /* TODO: handle cleanup properly. */ + WARN_ON(ret); + return ret; + } + + kvm_gmem_hugetlb_filemap_add_folio(mapping, first_folio, index, + htlb_alloc_mask(h)); + + folio_unlock(first_folio); + folio_put(first_folio); + + return ret; +} + +/** + * Reconstruct any HugeTLB folios in range [@start, @end), if all the subf= olios + * are not faultable. Return 0 on success or negative error otherwise. + * + * Will skip any folios that are already reconstructed. 
+ */ +static int kvm_gmem_try_reconstruct_folios_range(struct inode *inode, + pgoff_t start, pgoff_t end) +{ + unsigned int nr_pages; + pgoff_t aligned_start; + pgoff_t aligned_end; + struct hstate *h; + pgoff_t index; + int ret; + + if (!is_kvm_gmem_hugetlb(inode)) + return 0; + + h =3D kvm_gmem_hgmem(inode)->h; + nr_pages =3D 1UL << huge_page_order(h); + + aligned_start =3D round_up(start, nr_pages); + aligned_end =3D round_down(end, nr_pages); + + ret =3D 0; + for (index =3D aligned_start; !ret && index < aligned_end; index +=3D nr_= pages) { + struct folio *folio; + u32 hash; + + hash =3D hugetlb_fault_mutex_lock(inode->i_mapping, index); + + folio =3D filemap_get_folio(inode->i_mapping, index); + if (!IS_ERR(folio)) { + /* + * Drop refcount because reconstruction expects an equal number + * of refcounts for all subfolios - just keep the refcount taken + * by the filemap. + */ + folio_put(folio); + + /* Merge only when the entire block of nr_pages is not faultable. */ + if (!kvm_gmem_is_any_faultable(inode, index, nr_pages)) { + ret =3D kvm_gmem_reconstruct_folio_in_filemap(h, folio); + WARN_ON(ret); + } + } + + hugetlb_fault_mutex_unlock(hash); + } + + return ret; +} + +/* Basically folio_set_order() without the checks. */ static inline void kvm_gmem_folio_set_order(struct folio *folio, unsigned = int order) { folio->_flags_1 =3D (folio->_flags_1 & ~0xffUL) | order; @@ -414,8 +586,8 @@ static inline void kvm_gmem_folio_set_order(struct foli= o *folio, unsigned int or /** * Split a HugeTLB @folio of size huge_page_size(@h). * - * After splitting, each split folio has a refcount of 1. There are no che= cks on - * refcounts before splitting. + * Folio must have refcount of 1 when this function is called. After split= ting, + * each split folio has a refcount of 1. * * Return 0 on success and negative error otherwise. */ @@ -423,14 +595,18 @@ static int kvm_gmem_hugetlb_split_folio(struct hstate= *h, struct folio *folio) { int ret; =20 + VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio) !=3D 1, folio); + + folio_set_count(folio, 0); + ret =3D hugetlb_vmemmap_restore_folio(h, folio); if (ret) - return ret; + goto out; =20 ret =3D kvm_gmem_hugetlb_stash_metadata(folio); if (ret) { hugetlb_vmemmap_optimize_folio(h, folio); - return ret; + goto out; } =20 kvm_gmem_folio_set_order(folio, 0); @@ -439,109 +615,183 @@ static int kvm_gmem_hugetlb_split_folio(struct hsta= te *h, struct folio *folio) __folio_clear_hugetlb(folio); =20 /* - * Remove the first folio from h->hugepage_activelist since it is no + * Remove the original folio from h->hugepage_activelist since it is no * longer a HugeTLB page. The other split pages should not be on any * lists. */ hugetlb_folio_list_del(folio); =20 - return 0; + ret =3D 0; +out: + folio_set_count(folio, 1); + return ret; } =20 -static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *= inode, - pgoff_t index) +/** + * Split a HugeTLB folio into folio_nr_pages(@folio) pages. Will clean up = folio + * from filemap and add back the split folios. @folio must not be locked, = and + * all split folios will not be locked. Return 0 on success or negative er= ror + * otherwise. + * + * hugetlb_fault_mutex_lock() has to be held when calling this function. + * + * Expects that before this call, the filemap's refcounts are the only ref= counts + * for the folio. After this function returns, the filemap's refcounts wil= l be + * the only refcounts on the split folios. 
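+ *
+ * (kvm_gmem_hugetlb_split_folio() warns if it sees any reference other than
+ * the single caller-held one.)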
+ */ +static int kvm_gmem_split_folio_in_filemap(struct hstate *h, struct folio = *folio) { - struct folio *allocated_hugetlb_folio; - pgoff_t hugetlb_first_subpage_index; - struct page *hugetlb_first_subpage; - struct kvm_gmem_hugetlb *hgmem; - struct page *requested_page; + struct address_space *mapping; + struct page *first_subpage; + pgoff_t index; int ret; int i; =20 - hgmem =3D kvm_gmem_hgmem(inode); - allocated_hugetlb_folio =3D kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem-= >spool); - if (IS_ERR(allocated_hugetlb_folio)) - return allocated_hugetlb_folio; + if (folio_order(folio) =3D=3D 0) + return 0; =20 - requested_page =3D folio_file_page(allocated_hugetlb_folio, index); - hugetlb_first_subpage =3D folio_file_page(allocated_hugetlb_folio, 0); - hugetlb_first_subpage_index =3D index & (huge_page_mask(hgmem->h) >> PAGE= _SHIFT); + index =3D folio->index; + mapping =3D folio->mapping; =20 - ret =3D kvm_gmem_hugetlb_split_folio(hgmem->h, allocated_hugetlb_folio); + first_subpage =3D folio_page(folio, 0); + + /* + * Take reference so that folio will not be released when removed from + * filemap. + */ + folio_get(folio); + + kvm_gmem_hugetlb_filemap_remove_folio(folio); + + ret =3D kvm_gmem_hugetlb_split_folio(h, folio); if (ret) { - folio_put(allocated_hugetlb_folio); - return ERR_PTR(ret); + WARN_ON(ret); + kvm_gmem_hugetlb_filemap_add_folio(mapping, folio, index, + htlb_alloc_mask(h)); + folio_put(folio); + return ret; } =20 - for (i =3D 0; i < pages_per_huge_page(hgmem->h); ++i) { - struct folio *folio =3D page_folio(nth_page(hugetlb_first_subpage, i)); + for (i =3D 0; i < pages_per_huge_page(h); ++i) { + struct folio *folio =3D page_folio(nth_page(first_subpage, i)); =20 - ret =3D kvm_gmem_hugetlb_filemap_add_folio(inode->i_mapping, - folio, - hugetlb_first_subpage_index + i, - htlb_alloc_mask(hgmem->h)); + ret =3D kvm_gmem_hugetlb_filemap_add_folio(mapping, folio, + index + i, + htlb_alloc_mask(h)); if (ret) { /* TODO: handle cleanup properly. */ - pr_err("Handle cleanup properly index=3D%lx, ret=3D%d\n", - hugetlb_first_subpage_index + i, ret); - dump_page(nth_page(hugetlb_first_subpage, i), "check"); - return ERR_PTR(ret); + WARN_ON(ret); + return ret; } =20 + folio_unlock(folio); + /* - * Skip unlocking for the requested index since - * kvm_gmem_get_folio() returns a locked folio. - * - * Do folio_put() to drop the refcount that came with the folio, - * from splitting the folio. Splitting the folio has a refcount - * to be in line with hugetlb_alloc_folio(), which returns a - * folio with refcount 1. - * - * Skip folio_put() for requested index since - * kvm_gmem_get_folio() returns a folio with refcount 1. + * Drop reference so that the only remaining reference is the + * one held by the filemap. */ - if (hugetlb_first_subpage_index + i !=3D index) { - folio_unlock(folio); - folio_put(folio); - } + folio_put(folio); } =20 + return ret; +} + +/* + * Allocates and then caches a folio in the filemap. Returns a folio with + * refcount of 2: 1 after allocation, and 1 taken by the filemap. 
+ */ +static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *= inode, + pgoff_t index) +{ + struct kvm_gmem_hugetlb *hgmem; + pgoff_t aligned_index; + struct folio *folio; + int nr_pages; + int ret; + + hgmem =3D kvm_gmem_hgmem(inode); + folio =3D kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool); + if (IS_ERR(folio)) + return folio; + + nr_pages =3D 1UL << huge_page_order(hgmem->h); + aligned_index =3D round_down(index, nr_pages); + + ret =3D kvm_gmem_hugetlb_filemap_add_folio(inode->i_mapping, folio, + aligned_index, + htlb_alloc_mask(hgmem->h)); + WARN_ON(ret); + spin_lock(&inode->i_lock); inode->i_blocks +=3D blocks_per_huge_page(hgmem->h); spin_unlock(&inode->i_lock); =20 - return page_folio(requested_page); + return folio; +} + +/** + * Split @folio if any of the subfolios are faultable. Returns the split + * (locked, refcount=3D2) folio at @index. + * + * Expects a locked folio with 1 refcount in addition to filemap's refcoun= ts. + * + * After splitting, the subfolios in the filemap will be unlocked and have + * refcount 1 (other than the returned folio, which will be locked and have + * refcount 2). + */ +static struct folio *kvm_gmem_maybe_split_folio(struct folio *folio, pgoff= _t index) +{ + pgoff_t aligned_index; + struct inode *inode; + struct hstate *h; + int nr_pages; + int ret; + + inode =3D folio->mapping->host; + h =3D kvm_gmem_hgmem(inode)->h; + nr_pages =3D 1UL << huge_page_order(h); + aligned_index =3D round_down(index, nr_pages); + + if (!kvm_gmem_is_any_faultable(inode, aligned_index, nr_pages)) + return folio; + + /* Drop lock and refcount in preparation for splitting. */ + folio_unlock(folio); + folio_put(folio); + + ret =3D kvm_gmem_split_folio_in_filemap(h, folio); + if (ret) { + kvm_gmem_hugetlb_filemap_remove_folio(folio); + return ERR_PTR(ret); + } + + /* + * At this point, the filemap has the only reference on the folio. Take + * lock and refcount on folio to align with kvm_gmem_get_folio(). + */ + return filemap_lock_folio(inode->i_mapping, index); } =20 static struct folio *kvm_gmem_get_hugetlb_folio(struct inode *inode, pgoff_t index) { - struct address_space *mapping; struct folio *folio; - struct hstate *h; - pgoff_t hindex; u32 hash; =20 - h =3D kvm_gmem_hgmem(inode)->h; - hindex =3D index >> huge_page_order(h); - mapping =3D inode->i_mapping; - - /* To lock, we calculate the hash using the hindex and not index. */ - hash =3D hugetlb_fault_mutex_hash(mapping, hindex); - mutex_lock(&hugetlb_fault_mutex_table[hash]); + hash =3D hugetlb_fault_mutex_lock(inode->i_mapping, index); =20 /* - * The filemap is indexed with index and not hindex. Taking lock on - * folio to align with kvm_gmem_get_regular_folio() + * The filemap is indexed with index and not hindex. 

 static struct folio *kvm_gmem_get_hugetlb_folio(struct inode *inode,
 						pgoff_t index)
 {
-	struct address_space *mapping;
 	struct folio *folio;
-	struct hstate *h;
-	pgoff_t hindex;
 	u32 hash;

-	h = kvm_gmem_hgmem(inode)->h;
-	hindex = index >> huge_page_order(h);
-	mapping = inode->i_mapping;
-
-	/* To lock, we calculate the hash using the hindex and not index. */
-	hash = hugetlb_fault_mutex_hash(mapping, hindex);
-	mutex_lock(&hugetlb_fault_mutex_table[hash]);
+	hash = hugetlb_fault_mutex_lock(inode->i_mapping, index);

 	/*
-	 * The filemap is indexed with index and not hindex. Taking lock on
-	 * folio to align with kvm_gmem_get_regular_folio()
+	 * The filemap is indexed with index and not hindex. Take lock on folio
+	 * to align with kvm_gmem_get_regular_folio()
 	 */
-	folio = filemap_lock_folio(mapping, index);
+	folio = filemap_lock_folio(inode->i_mapping, index);
+	if (IS_ERR(folio))
+		folio = kvm_gmem_hugetlb_alloc_and_cache_folio(inode, index);
+
 	if (!IS_ERR(folio))
-		goto out;
+		folio = kvm_gmem_maybe_split_folio(folio, index);

-	folio = kvm_gmem_hugetlb_alloc_and_cache_folio(inode, index);
-out:
-	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+	hugetlb_fault_mutex_unlock(hash);

 	return folio;
 }
@@ -610,17 +860,6 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
 	}
 }

-static inline void kvm_gmem_hugetlb_filemap_remove_folio(struct folio *folio)
-{
-	folio_lock(folio);
-
-	folio_clear_dirty(folio);
-	folio_clear_uptodate(folio);
-	filemap_remove_folio(folio);
-
-	folio_unlock(folio);
-}
-
 /**
  * Removes folios in range [@lstart, @lend) from page cache/filemap (@mapping),
  * returning the number of HugeTLB pages freed.
@@ -631,61 +870,30 @@ static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
 						  struct hstate *h,
 						  loff_t lstart, loff_t lend)
 {
-	const pgoff_t end = lend >> PAGE_SHIFT;
-	pgoff_t next = lstart >> PAGE_SHIFT;
-	LIST_HEAD(folios_to_reconstruct);
-	struct folio_batch fbatch;
-	struct folio *folio, *tmp;
-	int num_freed = 0;
-	int i;
-
-	/*
-	 * TODO: Iterate over huge_page_size(h) blocks to avoid taking and
-	 * releasing hugetlb_fault_mutex_table[hash] lock so often. When
-	 * truncating, lstart and lend should be clipped to the size of this
-	 * guest_memfd file, otherwise there would be too many iterations.
-	 */
-	folio_batch_init(&fbatch);
-	while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
-		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
-			struct folio *folio;
-			pgoff_t hindex;
-			u32 hash;
-
-			folio = fbatch.folios[i];
+	loff_t offset;
+	int num_freed;

-			hindex = folio->index >> huge_page_order(h);
-			hash = hugetlb_fault_mutex_hash(mapping, hindex);
-			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+	num_freed = 0;
+	for (offset = lstart; offset < lend; offset += huge_page_size(h)) {
+		struct folio *folio;
+		pgoff_t index;
+		u32 hash;

-			/*
-			 * Collect first pages of HugeTLB folios for
-			 * reconstruction later.
-			 */
-			if ((folio->index & ~(huge_page_mask(h) >> PAGE_SHIFT)) == 0)
-				list_add(&folio->lru, &folios_to_reconstruct);
+		index = offset >> PAGE_SHIFT;
+		hash = hugetlb_fault_mutex_lock(mapping, index);

-			/*
-			 * Before removing from filemap, take a reference so
-			 * sub-folios don't get freed. Don't free the sub-folios
-			 * until after reconstruction.
-			 */
-			folio_get(folio);
+		folio = filemap_get_folio(mapping, index);
+		if (!IS_ERR(folio)) {
+			/* Drop refcount so that filemap holds only reference. */
+			folio_put(folio);

+			kvm_gmem_reconstruct_folio_in_filemap(h, folio);
 			kvm_gmem_hugetlb_filemap_remove_folio(folio);

-			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+			num_freed++;
 		}
-		folio_batch_release(&fbatch);
-		cond_resched();
-	}
-
-	list_for_each_entry_safe(folio, tmp, &folios_to_reconstruct, lru) {
-		kvm_gmem_hugetlb_reconstruct_folio(h, folio);
-		hugetlb_folio_list_move(folio, &h->hugepage_activelist);

-		folio_put(folio);
-		num_freed++;
+		hugetlb_fault_mutex_unlock(hash);
 	}

 	return num_freed;
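For illustration only (not part of the patch): the rewritten loop above walks
one huge_page_size(h) stride per iteration and frees at most one folio per
stride, so num_freed counts HugeTLB pages rather than 4K pages. Assuming a 2M
hstate:

	loff_t lstart = 0;
	loff_t lend = 4UL << 20;		/* truncate [0, 4M) */
	loff_t stride = 2UL << 20;		/* huge_page_size(h) */
	int max_freed = (lend - lstart) / stride;	/* at most 2 huge pages */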
@@ -705,6 +913,10 @@ static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
 	int gbl_reserve;
 	int num_freed;

+	/* No point truncating more than inode size. */
+	lstart = min(lstart, inode->i_size);
+	lend = min(lend, inode->i_size);
+
 	hgmem = kvm_gmem_hgmem(inode);
 	h = hgmem->h;

@@ -1042,13 +1254,27 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
 	bool is_prepared;

 	inode = file_inode(vmf->vma->vm_file);
-	if (!kvm_gmem_is_faultable(inode, vmf->pgoff))
+
+	/*
+	 * Use filemap_invalidate_lock_shared() to make sure
+	 * kvm_gmem_get_folio() doesn't race with faultability updates.
+	 */
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	if (!kvm_gmem_is_faultable(inode, vmf->pgoff)) {
+		filemap_invalidate_unlock_shared(inode->i_mapping);
 		return VM_FAULT_SIGBUS;
+	}

 	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+
 	if (!folio)
 		return VM_FAULT_SIGBUS;

+	WARN(folio_test_hugetlb(folio), "should not be faulting in hugetlb folio=%p\n", folio);
+
 	is_prepared = folio_test_uptodate(folio);
 	if (!is_prepared) {
 		unsigned long nr_pages;
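For illustration only (not part of the patch): the reader/writer pairing on
the invalidate lock that the comment above relies on. The fault path takes the
lock shared, while faultability updates in kvm_gmem_try_set_faultable_slot()
below take it exclusive, so a fault observes either the pre-update or the
post-update folio layout, never a half-completed split.

	/* Fault path (kvm_gmem_fault()): */
	filemap_invalidate_lock_shared(inode->i_mapping);
	/* ... check faultability, then look up or allocate the folio ... */
	filemap_invalidate_unlock_shared(inode->i_mapping);

	/* Update path (kvm_gmem_try_set_faultable_slot()): */
	filemap_invalidate_lock(inode->i_mapping);
	/* ... flip faultability, then split or reconstruct folios ... */
	filemap_invalidate_unlock(inode->i_mapping);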
@@ -1731,8 +1957,6 @@ static bool kvm_gmem_no_mappings_range(struct inode *inode, pgoff_t start, pgoff
 	pgoff_t index;
 	bool checked_indices_unmapped;

-	filemap_invalidate_lock_shared(inode->i_mapping);
-
 	/* TODO: replace iteration with filemap_get_folios() for efficiency. */
 	checked_indices_unmapped = true;
 	for (index = start; checked_indices_unmapped && index < end;) {
@@ -1754,98 +1978,130 @@ static bool kvm_gmem_no_mappings_range(struct inode *inode, pgoff_t start, pgoff
 		folio_put(folio);
 	}

-	filemap_invalidate_unlock_shared(inode->i_mapping);
 	return checked_indices_unmapped;
 }

 /**
- * Returns true if pages in range [@start, @end) in memslot @slot have no
- * userspace mappings.
+ * Split any HugeTLB folios in range [@start, @end), if any of the offsets in
+ * the folio are faultable. Returns 0 on success or negative error otherwise.
+ *
+ * Will skip any folios that are already split.
  */
-static bool kvm_gmem_no_mappings_slot(struct kvm_memory_slot *slot,
-				      gfn_t start, gfn_t end)
+static int kvm_gmem_try_split_folios_range(struct inode *inode,
+					   pgoff_t start, pgoff_t end)
 {
-	pgoff_t offset_start;
-	pgoff_t offset_end;
-	struct file *file;
-	bool ret;
-
-	offset_start = start - slot->base_gfn + slot->gmem.pgoff;
-	offset_end = end - slot->base_gfn + slot->gmem.pgoff;
-
-	file = kvm_gmem_get_file(slot);
-	if (!file)
-		return false;
-
-	ret = kvm_gmem_no_mappings_range(file_inode(file), offset_start, offset_end);
+	unsigned int nr_pages;
+	pgoff_t aligned_start;
+	pgoff_t aligned_end;
+	struct hstate *h;
+	pgoff_t index;
+	int ret;

-	fput(file);
+	if (!is_kvm_gmem_hugetlb(inode))
+		return 0;

-	return ret;
-}
+	h = kvm_gmem_hgmem(inode)->h;
+	nr_pages = 1UL << huge_page_order(h);

-/**
- * Returns true if pages in range [@start, @end) have no host userspace mappings.
- */
-static bool kvm_gmem_no_mappings(struct kvm *kvm, gfn_t start, gfn_t end)
-{
-	int i;
+	aligned_start = round_down(start, nr_pages);
+	aligned_end = round_up(end, nr_pages);

-	lockdep_assert_held(&kvm->slots_lock);
+	ret = 0;
+	for (index = aligned_start; !ret && index < aligned_end; index += nr_pages) {
+		struct folio *folio;
+		u32 hash;

-	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
-		struct kvm_memslot_iter iter;
-		struct kvm_memslots *slots;
+		hash = hugetlb_fault_mutex_lock(inode->i_mapping, index);

-		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
-			struct kvm_memory_slot *slot;
-			gfn_t gfn_start;
-			gfn_t gfn_end;
-
-			slot = iter.slot;
-			gfn_start = max(start, slot->base_gfn);
-			gfn_end = min(end, slot->base_gfn + slot->npages);
+		folio = filemap_get_folio(inode->i_mapping, index);
+		if (!IS_ERR(folio)) {
+			/*
+			 * Drop refcount so that the only references held are
+			 * refcounts from the filemap.
+			 */
+			folio_put(folio);

-			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD &&
-			    !kvm_gmem_no_mappings_slot(iter.slot, gfn_start, gfn_end))
-				return false;
+			if (kvm_gmem_is_any_faultable(inode, index, nr_pages)) {
+				ret = kvm_gmem_split_folio_in_filemap(h, folio);
+				if (ret) {
+					/* TODO: clean up properly. */
+					WARN_ON(ret);
+				}
+			}
 		}
+
+		hugetlb_fault_mutex_unlock(hash);
 	}

-	return true;
+	return ret;
 }

 /**
- * Set faultability of given range of gfns [@start, @end) in memslot @slot to
- * @faultable.
+ * Returns 0 if guest_memfd permits setting range [@start, @end) with
+ * faultability @faultable within memslot @slot, or negative error otherwise.
+ *
+ * If a request was made to set the memory to PRIVATE (not faultable), the pages
+ * in the range must not be pinned or mapped for the request to be permitted.
+ *
+ * Because this may allow pages to be faulted in to userspace when requested to
+ * set attributes to shared, this must only be called after the pages have been
+ * invalidated from guest page tables.
  */
-static void kvm_gmem_set_faultable_slot(struct kvm_memory_slot *slot, gfn_t start,
-					gfn_t end, bool faultable)
+static int kvm_gmem_try_set_faultable_slot(struct kvm_memory_slot *slot,
+					   gfn_t start, gfn_t end,
+					   bool faultable)
 {
 	pgoff_t start_offset;
+	struct inode *inode;
 	pgoff_t end_offset;
 	struct file *file;
+	int ret;

 	file = kvm_gmem_get_file(slot);
 	if (!file)
-		return;
+		return 0;

 	start_offset = start - slot->base_gfn + slot->gmem.pgoff;
 	end_offset = end - slot->base_gfn + slot->gmem.pgoff;

-	WARN_ON(kvm_gmem_set_faultable(file_inode(file), start_offset, end_offset,
-				       faultable));
+	inode = file_inode(file);
+
+	/*
+	 * Use filemap_invalidate_lock() to make sure splitting/reconstruction
+	 * doesn't race with faultability updates.
+	 */
+	filemap_invalidate_lock(inode->i_mapping);
+
+	kvm_gmem_set_faultable(inode, start_offset, end_offset, faultable);
+
+	if (faultable) {
+		ret = kvm_gmem_try_split_folios_range(inode, start_offset,
+						      end_offset);
+	} else {
+		if (kvm_gmem_no_mappings_range(inode, start_offset, end_offset)) {
+			ret = kvm_gmem_try_reconstruct_folios_range(inode,
+								    start_offset,
+								    end_offset);
+		} else {
+			ret = -EINVAL;
+		}
+	}
+
+	filemap_invalidate_unlock(inode->i_mapping);

 	fput(file);
+
+	return ret;
 }

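For illustration only (not part of the patch): the gfn-to-file-offset
conversion used by kvm_gmem_try_set_faultable_slot() above, with made-up slot
values.

	gfn_t start = 0x1800;			/* first gfn to convert */
	gfn_t base_gfn = 0x1000;		/* slot->base_gfn */
	pgoff_t gmem_pgoff = 0x400;		/* slot->gmem.pgoff */
	pgoff_t start_offset = start - base_gfn + gmem_pgoff;	/* 0xc00 */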
 /**
- * Set faultability of given range of gfns [@start, @end) in memslot @slot to
- * @faultable.
+ * Returns 0 if guest_memfd permits setting range [@start, @end) with
+ * faultability @faultable within VM @kvm, or negative error otherwise.
+ *
+ * See kvm_gmem_try_set_faultable_slot() for details.
  */
-static void kvm_gmem_set_faultable_vm(struct kvm *kvm, gfn_t start, gfn_t end,
-				      bool faultable)
+static int kvm_gmem_try_set_faultable_vm(struct kvm *kvm, gfn_t start, gfn_t end,
+					 bool faultable)
 {
 	int i;

@@ -1866,43 +2122,15 @@ static void kvm_gmem_set_faultable_vm(struct kvm *kvm, gfn_t start, gfn_t end,
 			gfn_end = min(end, slot->base_gfn + slot->npages);

 			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD) {
-				kvm_gmem_set_faultable_slot(slot, gfn_start,
-							    gfn_end, faultable);
+				int ret;
+
+				ret = kvm_gmem_try_set_faultable_slot(slot, gfn_start,
+								      gfn_end, faultable);
+				if (ret)
+					return ret;
 			}
 		}
 	}
-}
-
-/**
- * Returns true if guest_memfd permits setting range [@start, @end) to PRIVATE.
- *
- * If memory is faulted in to host userspace and a request was made to set the
- * memory to PRIVATE, the faulted in pages must not be pinned for the request to
- * be permitted.
- */
-static int kvm_gmem_should_set_attributes_private(struct kvm *kvm, gfn_t start,
-						  gfn_t end)
-{
-	kvm_gmem_set_faultable_vm(kvm, start, end, false);
-
-	if (kvm_gmem_no_mappings(kvm, start, end))
-		return 0;
-
-	kvm_gmem_set_faultable_vm(kvm, start, end, true);
-	return -EINVAL;
-}
-
-/**
- * Returns true if guest_memfd permits setting range [@start, @end) to SHARED.
- *
- * Because this allows pages to be faulted in to userspace, this must only be
- * called after the pages have been invalidated from guest page tables.
- */
-static int kvm_gmem_should_set_attributes_shared(struct kvm *kvm, gfn_t start,
-						 gfn_t end)
-{
-	/* Always okay to set shared, hence set range faultable here. */
-	kvm_gmem_set_faultable_vm(kvm, start, end, true);

 	return 0;
 }
@@ -1922,10 +2150,16 @@ static int kvm_gmem_should_set_attributes_shared(struct kvm *kvm, gfn_t start,
 int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 				   unsigned long attrs)
 {
-	if (attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE)
-		return kvm_gmem_should_set_attributes_private(kvm, start, end);
-	else
-		return kvm_gmem_should_set_attributes_shared(kvm, start, end);
+	bool faultable;
+	int ret;
+
+	faultable = !(attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE);
+
+	ret = kvm_gmem_try_set_faultable_vm(kvm, start, end, faultable);
+	if (ret)
+		WARN_ON(kvm_gmem_try_set_faultable_vm(kvm, start, end, !faultable));
+
+	return ret;
 }

 #endif
-- 
2.46.0.598.g6f2099f65c-goog