From nobody Thu Apr 2 09:46:26 2026
Date: Thu, 26 Mar 2026 15:24:36 -0700
In-Reply-To: <20260326-gmem-inplace-conversion-v4-0-e202fe950ffd@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260326-gmem-inplace-conversion-v4-0-e202fe950ffd@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260326-gmem-inplace-conversion-v4-27-e202fe950ffd@google.com>
Subject: [PATCH RFC v4 27/44] KVM: selftests: Test conversion precision in guest_memfd
From: Ackerley Tng
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com, brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org, ira.weiny@intel.com, jmattson@google.com, jroedel@suse.de, jthoughton@google.com, michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com, qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com, shivankg@amd.com, steven.price@arm.com, tabba@google.com, willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com, forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com, aneesh.kumar@kernel.org, Paolo Bonzini, Sean Christopherson, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Jonathan Corbet, Shuah Khan, Shuah Khan, Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen, Yuanchu Xie, Wei Xu, Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, Ackerley Tng
Content-Type: text/plain; charset="utf-8"

The existing guest_memfd conversion tests only use single-page memory
regions. This provides no coverage for multi-page guest_memfd objects,
specifically whether KVM correctly handles the page index for conversion
operations. An incorrect implementation could, for example, always
operate on the first page regardless of the index provided.

Add a new test case to verify that conversions between private and
shared memory correctly target the specified page within a multi-page
guest_memfd. This test also verifies the precision of memory conversions
by converting a single page and then iterating through all other pages
to ensure they remain in their original state.

To support this test, add a new GMEM_CONVERSION_MULTIPAGE_TEST_INIT_SHARED
macro that handles setting up and tearing down the VM for each page
iteration. The teardown logic is adjusted to prevent a double-free in
this new scenario.
Signed-off-by: Ackerley Tng
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/guest_memfd_conversions_test.c | 70 ++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/tools/testing/selftests/kvm/guest_memfd_conversions_test.c b/tools/testing/selftests/kvm/guest_memfd_conversions_test.c
index 81cbdb5def565..3388f06bc51db 100644
--- a/tools/testing/selftests/kvm/guest_memfd_conversions_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_conversions_test.c
@@ -65,8 +65,13 @@ static void gmem_conversions_do_setup(test_data_t *t, int nr_pages,
 
 static void gmem_conversions_do_teardown(test_data_t *t)
 {
+	/* Use NULL to avoid second free in FIXTURE_TEARDOWN (multipage tests). */
+	if (!t->vcpu)
+		return;
+
 	/* No need to close gmem_fd, it's owned by the VM structure. */
 	kvm_vm_free(t->vcpu->vm);
+	t->vcpu = NULL;
 }
 
 FIXTURE_TEARDOWN(gmem_conversions)
@@ -105,6 +110,29 @@ static void __gmem_conversions_##test(test_data_t *t, int nr_pages) \
 #define GMEM_CONVERSION_TEST_INIT_SHARED(test) \
 	__GMEM_CONVERSION_TEST_INIT_SHARED(test, 1)
 
+/*
+ * Repeats test over nr_pages in a guest_memfd of size nr_pages, providing each
+ * test iteration with test_page, the index of the page under test in
+ * guest_memfd. test_page takes values 0..(nr_pages - 1) inclusive.
+ */
+#define GMEM_CONVERSION_MULTIPAGE_TEST_INIT_SHARED(test, __nr_pages) \
+static void __gmem_conversions_multipage_##test(test_data_t *t, int nr_pages, \
+					       const int test_page); \
+ \
+TEST_F(gmem_conversions, test) \
+{ \
+	const uint64_t flags = GUEST_MEMFD_FLAG_MMAP | GUEST_MEMFD_FLAG_INIT_SHARED; \
+	int i; \
+ \
+	for (i = 0; i < __nr_pages; ++i) { \
+		gmem_conversions_do_setup(self, __nr_pages, flags); \
+		__gmem_conversions_multipage_##test(self, __nr_pages, i); \
+		gmem_conversions_do_teardown(self); \
+	} \
+} \
+static void __gmem_conversions_multipage_##test(test_data_t *t, int nr_pages, \
+					       const int test_page)
+
 struct guest_check_data {
 	void *mem;
 	char expected_val;
@@ -205,6 +233,48 @@ GMEM_CONVERSION_TEST_INIT_SHARED(init_shared)
 	test_convert_to_shared(t, 0, 'C', 'D', 'E');
 }
 
+/*
+ * Test indexing of pages within guest_memfd, using test data that is a multiple
+ * of page index.
+ */
+GMEM_CONVERSION_MULTIPAGE_TEST_INIT_SHARED(indexing, 4)
+{
+	int i;
+
+	/* Get a char that varies with both i and v. */
+#define f(x, v) ((x << 4) + (v))
+#define r(v) (f(i, v))
+#define c(v) (f(test_page, v))
+
+	/*
+	 * Start with the highest index, to catch any errors when, perhaps, the
+	 * first page is returned even for the last index.
+	 */
+	for (i = nr_pages - 1; i >= 0; --i)
+		test_shared(t, i, 0, r(0), r(2));
+
+	test_convert_to_private(t, test_page, c(2), c(3));
+
+	for (i = 0; i < nr_pages; ++i) {
+		if (i == test_page)
+			test_private(t, i, r(3), r(4));
+		else
+			test_shared(t, i, r(2), r(3), r(4));
+	}
+
+	test_convert_to_shared(t, test_page, c(4), c(5), c(6));
+
+	for (i = 0; i < nr_pages; ++i) {
+		char expected = i == test_page ? r(6) : r(4);
+
+		test_shared(t, i, expected, r(7), r(8));
+	}
+
+#undef c
+#undef r
+#undef f
+}
+
 int main(int argc, char *argv[])
 {
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
-- 
2.53.0.1018.g2bb0e51243-goog