From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org,
	Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan,
	linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko,
	Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti,
	Emil Renner Berthing, Rob Herring, Vlastimil Babka, "Liam R. Howlett",
	Anshuman Khandual, Dev Jain, Lance Yang, SeongJae Park, Samuel Holland
Subject: [PATCH v3 01/22] mm/ptdump: replace READ_ONCE() with standard page table accessors
Date: Wed, 12 Nov 2025 17:45:14 -0800
Message-ID: <20251113014656.2605447-2-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

From: Anshuman Khandual

Replace READ_ONCE() with the standard page table accessors, i.e.
pxdp_get(), which default to READ_ONCE() on platforms that do not
override them. Also convert ptep_get_lockless() into ptep_get().
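
[Editorial note: the generic fallbacks referred to above are the ones
patch 05/22 later moves within include/linux/pgtable.h; the PGD-level
one is reproduced here as a minimal sketch, showing why the conversion
is a behavioural no-op on any architecture that does not override it:]

	/* Generic fallback from include/linux/pgtable.h: absent an
	 * architecture override, pgdp_get() is just a READ_ONCE() of
	 * the entry. */
	#ifndef pgdp_get
	static inline pgd_t pgdp_get(pgd_t *pgdp)
	{
		return READ_ONCE(*pgdp);
	}
	#endif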
Link: https://lkml.kernel.org/r/20251001042502.1400726-1-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual
Reviewed-by: Dev Jain
Acked-by: Lance Yang
Acked-by: SeongJae Park
Acked-by: David Hildenbrand
Signed-off-by: Andrew Morton
Signed-off-by: Samuel Holland
---
Changes in v3:
- Replace patch with cherry-pick from linux-next

Changes in v2:
- New patch for v2 (taken from LKML)

 mm/ptdump.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/ptdump.c b/mm/ptdump.c
index b600c7f864b8..973020000096 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -31,7 +31,7 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr, unsigned long next,
 			    struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pgd_t val = READ_ONCE(*pgd);
+	pgd_t val = pgdp_get(pgd);
 
 #if CONFIG_PGTABLE_LEVELS > 4 && \
 		(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
@@ -54,7 +54,7 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr, unsigned long next,
 			    struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	p4d_t val = READ_ONCE(*p4d);
+	p4d_t val = p4dp_get(p4d);
 
 #if CONFIG_PGTABLE_LEVELS > 3 && \
 		(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
@@ -77,7 +77,7 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
 			    struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pud_t val = READ_ONCE(*pud);
+	pud_t val = pudp_get(pud);
 
 #if CONFIG_PGTABLE_LEVELS > 2 && \
 		(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
@@ -100,7 +100,7 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 			    struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pmd_t val = READ_ONCE(*pmd);
+	pmd_t val = pmdp_get(pmd);
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	if (pmd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pte)))
@@ -121,7 +121,7 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr, unsigned long next,
 			    struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pte_t val = ptep_get_lockless(pte);
+	pte_t val = ptep_get(pte);
 
 	if (st->effective_prot_pte)
 		st->effective_prot_pte(st, val);
-- 
2.47.2

From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org,
	Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan,
	linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko,
	Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti,
	Emil Renner Berthing,
	Rob Herring, Vlastimil Babka, "Liam R. Howlett", Anshuman Khandual,
	Lance Yang, Wei Yang, Dev Jain, Samuel Holland
Subject: [PATCH v3 02/22] mm: replace READ_ONCE() with standard page table accessors
Date: Wed, 12 Nov 2025 17:45:15 -0800
Message-ID: <20251113014656.2605447-3-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

From: Anshuman Khandual

Replace all READ_ONCE() uses with the standard page table accessors,
i.e. pxdp_get(), which default to READ_ONCE() on platforms that do not
override them.

Link: https://lkml.kernel.org/r/20251007063100.2396936-1-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual
Acked-by: David Hildenbrand
Reviewed-by: Lance Yang
Reviewed-by: Wei Yang
Cc: Dev Jain
Signed-off-by: Andrew Morton
Signed-off-by: Samuel Holland
Reviewed-by: Dev Jain
---
Changes in v3:
- New patch for v3 (cherry-picked from linux-next)

 mm/gup.c            | 10 +++++-----
 mm/hmm.c            |  2 +-
 mm/memory.c         |  4 ++--
 mm/mprotect.c       |  2 +-
 mm/sparse-vmemmap.c |  2 +-
 mm/vmscan.c         |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a8ba5112e4d0..b46112d36f7e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -950,7 +950,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	pudp = pud_offset(p4dp, address);
-	pud = READ_ONCE(*pudp);
+	pud = pudp_get(pudp);
 	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
 	if (pud_leaf(pud)) {
@@ -975,7 +975,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4d_t *p4dp, p4d;
 
 	p4dp = p4d_offset(pgdp, address);
-	p4d = READ_ONCE(*p4dp);
+	p4d = p4dp_get(p4dp);
 	BUILD_BUG_ON(p4d_leaf(p4d));
 
 	if (!p4d_present(p4d) || p4d_bad(p4d))
@@ -3060,7 +3060,7 @@ static int gup_fast_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr,
 
 	pudp = pud_offset_lockless(p4dp, p4d, addr);
 	do {
-		pud_t pud = READ_ONCE(*pudp);
+		pud_t pud = pudp_get(pudp);
 
 		next = pud_addr_end(addr, end);
 		if (unlikely(!pud_present(pud)))
@@ -3086,7 +3086,7 @@ static int gup_fast_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr,
 
 	p4dp = p4d_offset_lockless(pgdp, pgd, addr);
 	do {
-		p4d_t p4d = READ_ONCE(*p4dp);
+		p4d_t p4d = p4dp_get(p4dp);
 
 		next = p4d_addr_end(addr, end);
 		if (!p4d_present(p4d))
@@ -3108,7 +3108,7 @@ static void gup_fast_pgd_range(unsigned long addr, unsigned long end,
 
 	pgdp = pgd_offset(current->mm, addr);
 	do {
-		pgd_t pgd = READ_ONCE(*pgdp);
+		pgd_t pgd = pgdp_get(pgdp);
 
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(pgd))
diff --git a/mm/hmm.c b/mm/hmm.c
index 87562914670a..a56081d67ad6 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -491,7 +491,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 	/* Normally we don't want to split the huge page */
 	walk->action = ACTION_CONTINUE;
 
-	pud = READ_ONCE(*pudp);
+	pud = pudp_get(pudp);
 	if (!pud_present(pud)) {
 		spin_unlock(ptl);
 		return hmm_vma_walk_hole(start, end, -1, walk);
diff --git a/mm/memory.c b/mm/memory.c
index b59ae7ce42eb..0c295e2fe8e8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6690,12 +6690,12 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 		goto out;
 
 	p4dp = p4d_offset(pgdp, address);
-	p4d = READ_ONCE(*p4dp);
+	p4d = p4dp_get(p4dp);
 	if (p4d_none(p4d) || unlikely(p4d_bad(p4d)))
 		goto out;
 
 	pudp = pud_offset(p4dp, address);
-	pud = READ_ONCE(*pudp);
+	pud = pudp_get(pudp);
 	if (pud_none(pud))
 		goto out;
 	if (pud_leaf(pud)) {
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 113b48985834..988c366137d5 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -599,7 +599,7 @@ static inline long change_pud_range(struct mmu_gather *tlb,
 			break;
 		}
 
-		pud = READ_ONCE(*pudp);
+		pud = pudp_get(pudp);
 		if (pud_none(pud))
 			continue;
 
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index dbd8daccade2..37522d6cb398 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -439,7 +439,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 		return -ENOMEM;
 
 	pmd = pmd_offset(pud, addr);
-	if (pmd_none(READ_ONCE(*pmd))) {
+	if (pmd_none(pmdp_get(pmd))) {
 		void *p;
 
 		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b2fc8b626d3d..2239de111fa6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3773,7 +3773,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 	pud = pud_offset(p4d, start & P4D_MASK);
 restart:
 	for (i = pud_index(start), addr = start; addr != end; i++, addr = next) {
-		pud_t val = READ_ONCE(pud[i]);
+		pud_t val = pudp_get(pud + i);
 
 		next = pud_addr_end(addr, end);
 
-- 
2.47.2
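
[Editorial note: one detail worth calling out in the mm/vmscan.c hunk
above: because the accessor takes an entry pointer, the array read
READ_ONCE(pud[i]) becomes pudp_get(pud + i). A minimal sketch of the
equivalence, assuming a PUD table pointer and index in scope:]

	/* pud + i and &pud[i] are the same entry pointer, so the two
	 * reads below are equivalent; the patch uses the former. */
	pud_t a = pudp_get(pud + i);
	pud_t b = pudp_get(&pud[i]);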

From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org,
	Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan,
	linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko,
	Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti,
	Emil Renner Berthing, Rob Herring, Vlastimil Babka,
Howlett" , Anshuman Khandual , Dev Jain , Oscar Salvador , Lance Yang , Samuel Holland Subject: [PATCH v3 03/22] mm/dirty: replace READ_ONCE() with pudp_get() Date: Wed, 12 Nov 2025 17:45:16 -0800 Message-ID: <20251113014656.2605447-4-samuel.holland@sifive.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com> References: <20251113014656.2605447-1-samuel.holland@sifive.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Anshuman Khandual Replace READ_ONCE() with a standard page table accessor i.e pudp_get() that anyways defaults into READ_ONCE() in cases where platform does not override Link: https://lkml.kernel.org/r/20251006055214.1845342-1-anshuman.khandual@= arm.com Signed-off-by: Anshuman Khandual Acked-by: David Hildenbrand Reviewed-by: Dev Jain Reviewed-by: Oscar Salvador Cc: Lance Yang Signed-off-by: Andrew Morton Signed-off-by: Samuel Holland --- Changes in v3: - New patch for v3 (cherry-picked from linux-next) mm/mapping_dirty_helpers.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c index c193de6cb23a..737c407f4081 100644 --- a/mm/mapping_dirty_helpers.c +++ b/mm/mapping_dirty_helpers.c @@ -149,7 +149,7 @@ static int wp_clean_pud_entry(pud_t *pud, unsigned long= addr, unsigned long end, struct mm_walk *walk) { #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD - pud_t pudval =3D READ_ONCE(*pud); + pud_t pudval =3D pudp_get(pud); =20 /* Do not split a huge pud */ if (pud_trans_huge(pudval)) { --=20 2.47.2 From nobody Tue Dec 9 02:55:26 2025 Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D5DA62D323F for ; Thu, 13 Nov 2025 01:47:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.175 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998427; cv=none; b=Lhs0WcMxZoNs2ewZTyvMtAGw0CqaUZRTC8SCql5bviYQNGCjXacdX0zj0Xgv0AhpRQvRBgs0OeKwDwo79K2YIZmWG0P9nFMXKbM4a26AOc9N2xBS0M55cRMZcCkKJIn9poxceWwk1mXzBk9XSebS1JcHAKkl/L+qK1jsNOqLeLM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998427; c=relaxed/simple; bh=eKBqONdiSsBUuobmEdY1mAtYTYL7O9m2hVGhf0V16mA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=T3RdltlpdSCuhkL+2fm49I9o1OXclXyRN1l9QLoKRLHSaDGJ5QpMx2+qq8XlaWN6qOU5RJSjMhWrwZUW+3djDqgmJUwg+Oa9lNj348+PfEJx4CrmxI/dSTiVk63CclwzClzjVErk46i03EiaQE9l7OIECxDoRT9at7d3vElO58g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com; spf=pass smtp.mailfrom=sifive.com; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b=KG7OcXl7; arc=none smtp.client-ip=209.85.214.175 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="KG7OcXl7" Received: by mail-pl1-f175.google.com with SMTP id d9443c01a7336-297dc3e299bso2684385ad.1 for ; Wed, 12 Nov 2025 17:47:05 -0800 

From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org,
	Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan,
	linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko,
	Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti,
	Emil Renner Berthing, Rob Herring, Vlastimil Babka,
Howlett" , Anshuman Khandual , Peter Zijlstra , Ingo Molnar , Arnaldo Carvalho de Melo , Namhyung Kim , linux-perf-users@vger.kernel.org, Samuel Holland Subject: [PATCH v3 04/22] perf/events: replace READ_ONCE() with standard page table accessors Date: Wed, 12 Nov 2025 17:45:17 -0800 Message-ID: <20251113014656.2605447-5-samuel.holland@sifive.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com> References: <20251113014656.2605447-1-samuel.holland@sifive.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Anshuman Khandual Replace READ_ONCE() with standard page table accessors i.e pxdp_get() which anyways default into READ_ONCE() in cases where platform does not override. Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Namhyung Kim Cc: linux-perf-users@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual Link: https://lore.kernel.org/r/20251006042622.1743675-1-anshuman.khandual@= arm.com/ Signed-off-by: Samuel Holland Acked-by: David Hildenbrand (Red Hat) --- Changes in v3: - Replace my patch with Anshuman Khandual's patch from LKML Changes in v2: - New patch for v2 kernel/events/core.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel/events/core.c b/kernel/events/core.c index 1fd347da9026..fa4f9165bd94 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -8122,7 +8122,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm= , unsigned long addr) pte_t *ptep, pte; =20 pgdp =3D pgd_offset(mm, addr); - pgd =3D READ_ONCE(*pgdp); + pgd =3D pgdp_get(pgdp); if (pgd_none(pgd)) return 0; =20 @@ -8130,7 +8130,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm= , unsigned long addr) return pgd_leaf_size(pgd); =20 p4dp =3D p4d_offset_lockless(pgdp, pgd, addr); - p4d =3D READ_ONCE(*p4dp); + p4d =3D p4dp_get(p4dp); if (!p4d_present(p4d)) return 0; =20 @@ -8138,7 +8138,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm= , unsigned long addr) return p4d_leaf_size(p4d); =20 pudp =3D pud_offset_lockless(p4dp, p4d, addr); - pud =3D READ_ONCE(*pudp); + pud =3D pudp_get(pudp); if (!pud_present(pud)) return 0; =20 --=20 2.47.2 From nobody Tue Dec 9 02:55:26 2025 Received: from mail-pl1-f179.google.com (mail-pl1-f179.google.com [209.85.214.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 542882D4B6D for ; Thu, 13 Nov 2025 01:47:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998429; cv=none; b=AFDWvl+cdFwYHETAfHRiREaoFgsRo87LNiyY1rvea5uUkYt9iZCqW2ZPXoP/Kdi7G6Rb1D2uRovh7J/BxGrchbzMiav2Akl4uHdV5mqSrhrSby+SdFYawOjTobR0HKgTCXJWwNnAOAiMmcC17K+N7fLOpfTSLB9dkmHT71kCqGQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998429; c=relaxed/simple; bh=41A8tN6rl++qotcHaVWp474wXnOzdy2HP/mfIQAnWzg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=njAg2xa5NmuEgwZUL6dynpwF1xpXqU1rqQm85IbhhYR2fEpf1XykkB2LocRopCicEeKaRQ1gNmzE5TQXUd2idfMS7TNEbJOMRKKzY/tQW8ETuolca18DG7ElbhrnNy8oDhiTYgEGuHmUs+nG++58UzwBhGomfRKoKWGh+0iQeX4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject 

From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org,
	Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan,
	linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko,
	Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti,
	Emil Renner Berthing, Rob Herring, Vlastimil Babka,
	"Liam R. Howlett", Samuel Holland
Subject: [PATCH v3 05/22] mm: Move the fallback definitions of pXXp_get()
Date: Wed, 12 Nov 2025 17:45:18 -0800
Message-ID: <20251113014656.2605447-6-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

Some platforms need to fix up the values when reading or writing page
tables. Because of this, the accessors must always be used; it is not
valid to simply dereference a pXX_t pointer. Move these definitions up
by a few lines, so they will be in scope everywhere that currently
dereferences a pXX_t pointer.

Signed-off-by: Samuel Holland
Acked-by: David Hildenbrand (Red Hat)
---
(no changes since v2)

Changes in v2:
- New patch for v2

 include/linux/pgtable.h | 70 ++++++++++++++++++++---------------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 32e8457ad535..ca8c99cdc1cc 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -90,6 +90,41 @@ static inline unsigned long pud_index(unsigned long address)
 #define pgd_index(a)  (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
 #endif
 
+#ifndef ptep_get
+static inline pte_t ptep_get(pte_t *ptep)
+{
+	return READ_ONCE(*ptep);
+}
+#endif
+
+#ifndef pmdp_get
+static inline pmd_t pmdp_get(pmd_t *pmdp)
+{
+	return READ_ONCE(*pmdp);
+}
+#endif
+
+#ifndef pudp_get
+static inline pud_t pudp_get(pud_t *pudp)
+{
+	return READ_ONCE(*pudp);
+}
+#endif
+
+#ifndef p4dp_get
+static inline p4d_t p4dp_get(p4d_t *p4dp)
+{
+	return READ_ONCE(*p4dp);
+}
+#endif
+
+#ifndef pgdp_get
+static inline pgd_t pgdp_get(pgd_t *pgdp)
+{
+	return READ_ONCE(*pgdp);
+}
+#endif
+
 #ifndef kernel_pte_init
 static inline void kernel_pte_init(void *addr)
 {
@@ -334,41 +369,6 @@ static inline int pudp_set_access_flags(struct vm_area_struct *vma,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
-#ifndef ptep_get
-static inline pte_t ptep_get(pte_t *ptep)
-{
-	return READ_ONCE(*ptep);
-}
-#endif
-
-#ifndef pmdp_get
-static inline pmd_t pmdp_get(pmd_t *pmdp)
-{
-	return READ_ONCE(*pmdp);
-}
-#endif
-
-#ifndef pudp_get
-static inline pud_t pudp_get(pud_t *pudp)
-{
-	return READ_ONCE(*pudp);
-}
-#endif
-
-#ifndef p4dp_get
-static inline p4d_t p4dp_get(p4d_t *p4dp)
-{
-	return READ_ONCE(*p4dp);
-}
-#endif
-
-#ifndef pgdp_get
-static inline pgd_t pgdp_get(pgd_t *pgdp)
-{
-	return READ_ONCE(*pgdp);
-}
-#endif
-
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
-- 
2.47.2
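
[Editorial note: a minimal sketch of the kind of override the commit
message alludes to, assuming a hypothetical architecture that encodes
extra state in its PUD entries; arch_canonicalize_pud() is a made-up
helper for illustration, not part of this series:]

	/* Hypothetical arch override: fix up the raw entry before
	 * generic code sees it, which is why *pudp must never be
	 * dereferenced directly. */
	#define pudp_get pudp_get
	static inline pud_t pudp_get(pud_t *pudp)
	{
		pud_t pud = READ_ONCE(*pudp);

		return arch_canonicalize_pud(pud);
	}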

From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org,
	Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan,
	linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko,
	Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti,
	Emil Renner Berthing, Rob Herring, Vlastimil Babka,
	"Liam R. Howlett", Samuel Holland, Julia Lawall, Nicolas Palix
Subject: [PATCH v3 06/22] mm: Always use page table accessor functions
Date: Wed, 12 Nov 2025 17:45:19 -0800
Message-ID: <20251113014656.2605447-7-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

Some platforms need to fix up the values when reading or writing page
tables. Because of this, the accessors must always be used; it is not
valid to simply dereference a pXX_t pointer. Fix all of the instances
of this pattern in generic code, mostly by applying the coccinelle
semantic patch below, repeated for each page table level. Some
additional fixes were applied manually, mostly to macros where type
information is unavailable.

In a few places, a pte_t * or pmd_t * is actually a pointer to a PTE
or PMD entry value stored on the stack, not a pointer into a page
table. In those cases, it is not appropriate to use the accessors,
because the value is not globally visible, and any transformation from
pXXp_get() has already been applied. Those places are marked by naming
the pointer ptentp or pmdvalp, as opposed to ptep or pmdp.

@@
pte_t *P;
expression E;
expression I;
@@
- P[I] = E
+ set_pte(P + I, E)

@@
pte_t *P;
expression E;
@@
(
- WRITE_ONCE(*P, E)
+ set_pte(P, E)
|
- *P = E
+ set_pte(P, E)
)

@@
pte_t *P;
expression I;
@@
(
  &P[I]
|
- READ_ONCE(P[I])
+ ptep_get(P + I)
|
- P[I]
+ ptep_get(P + I)
)

@@
pte_t *P;
@@
(
- READ_ONCE(*P)
+ ptep_get(P)
|
- *P
+ ptep_get(P)
)

Additionally, the following semantic patch was used to convert PMD and
PUD references inside struct vm_fault:

@@
struct vm_fault vmf;
@@
(
- *vmf.pmd
+ pmdp_get(vmf.pmd)
|
- *vmf.pud
+ pudp_get(vmf.pud)
)

@@
struct vm_fault *vmf;
@@
(
- *vmf->pmd
+ pmdp_get(vmf->pmd)
|
- *vmf->pud
+ pudp_get(vmf->pud)
)

Signed-off-by: Samuel Holland
---
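[Editorial note: for readers unfamiliar with Coccinelle, rules like the
ones in the commit message are conventionally saved to a .cocci file
and applied tree-wide with spatch; the file name in this invocation is
illustrative, not from this series:]

	spatch --sp-file pxdp_get.cocci --in-place --dir mm/
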
This commit covers some of the same changes as an existing series from
Anshuman Khandual [1]. Unlike that series, this commit is a purely
mechanical conversion to demonstrate the RISC-V changes, so it does not
insert local variables to avoid redundant calls to the accessors. A
manual conversion like in that series could improve performance.

[1]: https://lore.kernel.org/linux-mm/20240917073117.1531207-1-anshuman.khandual@arm.com/

Changes in v3:
- Rebased on top of torvalds/master (v6.18-rc5+)

Changes in v2:
- New patch for v2

 fs/dax.c                |  4 +-
 fs/proc/task_mmu.c      | 27 +++++++------
 fs/userfaultfd.c        |  6 +--
 include/linux/huge_mm.h |  8 ++--
 include/linux/mm.h      | 14 +++----
 include/linux/pgtable.h | 42 +++++++++----------
 mm/damon/vaddr.c        |  2 +-
 mm/debug_vm_pgtable.c   |  4 +-
 mm/filemap.c            |  6 +--
 mm/gup.c                | 24 +++++------
 mm/huge_memory.c        | 90 ++++++++++++++++++++---------------------
 mm/hugetlb.c            | 10 ++---
 mm/hugetlb_vmemmap.c    |  4 +-
 mm/kasan/init.c         | 39 +++++++++---------
 mm/kasan/shadow.c       | 12 +++---
 mm/khugepaged.c         |  4 +-
 mm/ksm.c                |  2 +-
 mm/madvise.c            |  8 ++--
 mm/memory-failure.c     | 14 +++----
 mm/memory.c             | 76 +++++++++++++++++-----------------
 mm/mempolicy.c          |  4 +-
 mm/migrate.c            |  4 +-
 mm/migrate_device.c     | 10 ++---
 mm/mlock.c              |  6 +--
 mm/mprotect.c           |  2 +-
 mm/mremap.c             | 30 +++++++-------
 mm/page_table_check.c   |  4 +-
 mm/page_vma_mapped.c    |  6 +--
 mm/pagewalk.c           | 14 +++----
 mm/percpu.c             |  8 ++--
 mm/pgalloc-track.h      |  8 ++--
 mm/pgtable-generic.c    | 23 ++++++-----
 mm/rmap.c               |  8 ++--
 mm/sparse-vmemmap.c     |  8 ++--
 mm/userfaultfd.c        | 10 ++---
 mm/vmalloc.c            | 49 +++++++++++-----------
 mm/vmscan.c             | 14 +++----
 37 files changed, 304 insertions(+), 300 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 516f995a988c..e09a80ee44a0 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1900,7 +1900,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, unsigned long *pfnp,
 	 * the PTE we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_trans_huge(*vmf->pmd)) {
+	if (pmd_trans_huge(pmdp_get(vmf->pmd))) {
 		ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -2023,7 +2023,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, unsigned long *pfnp,
 	 * the PMD we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd)) {
+	if (!pmd_none(pmdp_get(vmf->pmd)) && !pmd_trans_huge(pmdp_get(vmf->pmd))) {
 		ret = 0;
 		goto unlock_entry;
 	}
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index fc35a0543f01..4f80704b78af 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1060,11 +1060,11 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 	bool present = false;
 	struct folio *folio;
 
-	if (pmd_present(*pmd)) {
-		page = vm_normal_page_pmd(vma, addr, *pmd);
+	if (pmd_present(pmdp_get(pmd))) {
+		page = vm_normal_page_pmd(vma, addr, pmdp_get(pmd));
 		present = true;
-	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
-		swp_entry_t entry = pmd_to_swp_entry(*pmd);
+	} else if (unlikely(thp_migration_supported() && is_swap_pmd(pmdp_get(pmd)))) {
+		swp_entry_t entry = pmd_to_swp_entry(pmdp_get(pmd));
 
 		if (is_pfn_swap_entry(entry))
 			page = pfn_swap_entry_to_page(entry);
@@ -1081,7 +1081,8 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 	else
 		mss->file_thp += HPAGE_PMD_SIZE;
 
-	smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd),
+	smaps_account(mss, page, true, pmd_young(pmdp_get(pmd)),
+		      pmd_dirty(pmdp_get(pmd)),
 		      locked, present);
 }
 #else
@@ -1636,7 +1637,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmdp)
 {
-	pmd_t old, pmd = *pmdp;
+	pmd_t old, pmd = pmdp_get(pmdp);
 
 	if (pmd_present(pmd)) {
 		/* See comment in change_huge_pmd() */
@@ -1678,10 +1679,10 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		goto out;
 	}
 
-	if (!pmd_present(*pmd))
+	if (!pmd_present(pmdp_get(pmd)))
 		goto out;
 
-	folio = pmd_folio(*pmd);
+	folio = pmd_folio(pmdp_get(pmd));
 
 	/* Clear accessed and referenced bits.
 	 */
 	pmdp_test_and_clear_young(vma, addr, pmd);
@@ -1989,7 +1990,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 	if (ptl) {
 		unsigned int idx = (addr & ~PMD_MASK) >> PAGE_SHIFT;
 		u64 flags = 0, frame = 0;
-		pmd_t pmd = *pmdp;
+		pmd_t pmd = pmdp_get(pmdp);
 		struct page *page = NULL;
 		struct folio *folio = NULL;
 
@@ -2416,7 +2417,7 @@ static unsigned long pagemap_thp_category(struct pagemap_scan_private *p,
 static void make_uffd_wp_pmd(struct vm_area_struct *vma,
 			     unsigned long addr, pmd_t *pmdp)
 {
-	pmd_t old, pmd = *pmdp;
+	pmd_t old, pmd = pmdp_get(pmdp);
 
 	if (pmd_present(pmd)) {
 		old = pmdp_invalidate_ad(vma, addr, pmdp);
@@ -2646,7 +2647,7 @@ static int pagemap_scan_thp_entry(pmd_t *pmd, unsigned long start,
 		return -ENOENT;
 
 	categories = p->cur_vma_category |
-		     pagemap_thp_category(p, vma, start, *pmd);
+		     pagemap_thp_category(p, vma, start, pmdp_get(pmd));
 
 	if (!pagemap_scan_is_interesting_page(categories, p))
 		goto out_unlock;
@@ -3181,9 +3182,9 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
 	if (ptl) {
 		struct page *page;
 
-		page = can_gather_numa_stats_pmd(*pmd, vma, addr);
+		page = can_gather_numa_stats_pmd(pmdp_get(pmd), vma, addr);
 		if (page)
-			gather_stats(page, md, pmd_dirty(*pmd),
+			gather_stats(page, md, pmd_dirty(pmdp_get(pmd)),
 				     HPAGE_PMD_SIZE/PAGE_SIZE);
 		spin_unlock(ptl);
 		return 0;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 54c6cc7fe9c6..2e2a6b326c2f 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -289,13 +289,13 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		goto out;
 	p4d = p4d_offset(pgd, address);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		goto out;
 	pud = pud_offset(p4d, address);
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		goto out;
 	pmd = pmd_offset(pud, address);
 again:
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71ac78b9f834..d2840221e7cd 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -409,7 +409,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 #define split_huge_pmd(__vma, __pmd, __address)				\
 	do {								\
 		pmd_t *____pmd = (__pmd);				\
-		if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd))	\
+		if (is_swap_pmd(pmdp_get(____pmd)) || pmd_trans_huge(pmdp_get(____pmd))) \
 			__split_huge_pmd(__vma, __pmd, __address,	\
 					 false);			\
 	} while (0)
@@ -434,7 +434,7 @@ change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #define split_huge_pud(__vma, __pud, __address)				\
 	do {								\
 		pud_t *____pud = (__pud);				\
-		if (pud_trans_huge(*____pud))				\
+		if (pud_trans_huge(pudp_get(____pud)))			\
 			__split_huge_pud(__vma, __pud, __address);	\
 	} while (0)
 
@@ -456,7 +456,7 @@ static inline int is_swap_pmd(pmd_t pmd)
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		struct vm_area_struct *vma)
 {
-	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd))
+	if (is_swap_pmd(pmdp_get(pmd)) || pmd_trans_huge(pmdp_get(pmd)))
 		return __pmd_trans_huge_lock(pmd, vma);
 	else
 		return NULL;
@@ -464,7 +464,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 		struct vm_area_struct *vma)
 {
-	if (pud_trans_huge(*pud))
+	if (pud_trans_huge(pudp_get(pud)))
 		return __pud_trans_huge_lock(pud, vma);
 	else
 		return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d16b33bacc32..fdc333384190 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2921,20 +2921,20 @@ int __pte_alloc_kernel(pmd_t *pmd);
 static inline p4d_t *p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
 		unsigned long address)
 {
-	return (unlikely(pgd_none(*pgd)) && __p4d_alloc(mm, pgd, address)) ?
+	return (unlikely(pgd_none(pgdp_get(pgd))) && __p4d_alloc(mm, pgd, address)) ?
 		NULL : p4d_offset(pgd, address);
 }
 
 static inline pud_t *pud_alloc(struct mm_struct *mm, p4d_t *p4d,
 		unsigned long address)
 {
-	return (unlikely(p4d_none(*p4d)) && __pud_alloc(mm, p4d, address)) ?
+	return (unlikely(p4d_none(p4dp_get(p4d))) && __pud_alloc(mm, p4d, address)) ?
 		NULL : pud_offset(p4d, address);
 }
 
 static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 {
-	return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address))?
+	return (unlikely(pud_none(pudp_get(pud))) && __pmd_alloc(mm, pud, address)) ?
 		NULL: pmd_offset(pud, address);
 }
 #endif /* CONFIG_MMU */
@@ -3027,9 +3027,9 @@ static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 }
 #endif /* ALLOC_SPLIT_PTLOCKS */
 
-static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
+static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmdvalp)
 {
-	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
+	return ptlock_ptr(page_ptdesc(pmd_page(*pmdvalp)));
 }
 
 static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte)
@@ -3146,7 +3146,7 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap(pte);				\
 } while (0)
 
-#define pte_alloc(mm, pmd) (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd))
+#define pte_alloc(mm, pmd) (unlikely(pmd_none(pmdp_get(pmd))) && __pte_alloc(mm, pmd))
 
 #define pte_alloc_map(mm, pmd, address)			\
 	(pte_alloc(mm, pmd) ? NULL : pte_offset_map(pmd, address))
@@ -3156,7 +3156,7 @@ pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
 	NULL : pte_offset_map_lock(mm, pmd, address, ptlp))
 
 #define pte_alloc_kernel(pmd, address)			\
-	((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd))? \
+	((unlikely(pmd_none(pmdp_get(pmd))) && __pte_alloc_kernel(pmd)) ? \
 		NULL: pte_offset_kernel(pmd, address))
 
 #if defined(CONFIG_SPLIT_PMD_PTLOCKS)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index ca8c99cdc1cc..7ebb884fb328 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -149,14 +149,14 @@ static inline void pud_init(void *addr)
 #ifndef pte_offset_kernel
 static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
 {
-	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
+	return (pte_t *)pmd_page_vaddr(pmdp_get(pmd)) + pte_index(address);
 }
 #define pte_offset_kernel pte_offset_kernel
 #endif
 
 #ifdef CONFIG_HIGHPTE
 #define __pte_map(pmd, address)				\
-	((pte_t *)kmap_local_page(pmd_page(*(pmd))) + pte_index((address)))
+	((pte_t *)kmap_local_page(pmd_page(pmdp_get(pmd))) + pte_index((address)))
 #define pte_unmap(pte)	do {	\
 	kunmap_local((pte));	\
 	rcu_read_unlock();	\
@@ -178,7 +178,7 @@ void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
 #ifndef pmd_offset
 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
 {
-	return pud_pgtable(*pud) + pmd_index(address);
+	return pud_pgtable(pudp_get(pud)) + pmd_index(address);
 }
 #define pmd_offset pmd_offset
 #endif
@@ -186,7 +186,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
 #ifndef pud_offset
 static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
 {
-	return p4d_pgtable(*p4d) + pud_index(address);
+	return p4d_pgtable(p4dp_get(p4d)) + pud_index(address);
 }
 #define pud_offset pud_offset
 #endif
@@ -230,7 +230,7 @@ static inline pte_t *virt_to_kpte(unsigned long vaddr)
 {
 	pmd_t *pmd = pmd_off_k(vaddr);
 
-	return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
+	return pmd_none(pmdp_get(pmd)) ? NULL : pte_offset_kernel(pmd, vaddr);
 }
 
 #ifndef pmd_young
@@ -390,7 +390,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
 					    pmd_t *pmdp)
 {
-	pmd_t pmd = *pmdp;
+	pmd_t pmd = pmdp_get(pmdp);
 	int r = 1;
 	if (!pmd_young(pmd))
 		r = 0;
@@ -645,7 +645,7 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long address,
 					    pmd_t *pmdp)
 {
-	pmd_t pmd = *pmdp;
+	pmd_t pmd = pmdp_get(pmdp);
 
 	pmd_clear(pmdp);
 	page_table_check_pmd_clear(mm, pmd);
@@ -658,7 +658,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long address,
 					    pud_t *pudp)
 {
-	pud_t pud = *pudp;
+	pud_t pud = pudp_get(pudp);
 
 	pud_clear(pudp);
 	page_table_check_pud_clear(mm, pud);
@@ -968,7 +968,7 @@ static inline pte_t pte_sw_mkyoung(pte_t pte)
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long address, pmd_t *pmdp)
 {
-	pmd_t old_pmd = *pmdp;
+	pmd_t old_pmd = pmdp_get(pmdp);
 	set_pmd_at(mm, address, pmdp, pmd_wrprotect(old_pmd));
 }
 #else
@@ -985,7 +985,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline void pudp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long address, pud_t *pudp)
 {
-	pud_t old_pud = *pudp;
+	pud_t old_pud = pudp_get(pudp);
 
 	set_pud_at(mm, address, pudp, pud_wrprotect(old_pud));
 }
@@ -1009,7 +1009,7 @@ static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
 					pmd_t *pmdp)
 {
 	BUILD_BUG();
-	return *pmdp;
+	return pmdp_get(pmdp);
 }
 #define pmdp_collapse_flush pmdp_collapse_flush
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -1037,7 +1037,7 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
-	pmd_t old_pmd = *pmdp;
+	pmd_t old_pmd = pmdp_get(pmdp);
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd);
 	return old_pmd;
 }
@@ -1287,9 +1287,9 @@ void pmd_clear_bad(pmd_t *);

 static inline int pgd_none_or_clear_bad(pgd_t *pgd)
 {
-	if (pgd_none(*pgd))
+	if (pgd_none(pgdp_get(pgd)))
 		return 1;
-	if (unlikely(pgd_bad(*pgd))) {
+	if (unlikely(pgd_bad(pgdp_get(pgd)))) {
 		pgd_clear_bad(pgd);
 		return 1;
 	}
@@ -1298,9 +1298,9 @@ static inline int pgd_none_or_clear_bad(pgd_t *pgd)

 static inline int p4d_none_or_clear_bad(p4d_t *p4d)
 {
-	if (p4d_none(*p4d))
+	if (p4d_none(p4dp_get(p4d)))
 		return 1;
-	if (unlikely(p4d_bad(*p4d))) {
+	if (unlikely(p4d_bad(p4dp_get(p4d)))) {
 		p4d_clear_bad(p4d);
 		return 1;
 	}
@@ -1309,9 +1309,9 @@ static inline int p4d_none_or_clear_bad(p4d_t *p4d)

 static inline int pud_none_or_clear_bad(pud_t *pud)
 {
-	if (pud_none(*pud))
+	if (pud_none(pudp_get(pud)))
 		return 1;
-	if (unlikely(pud_bad(*pud))) {
+	if (unlikely(pud_bad(pudp_get(pud)))) {
 		pud_clear_bad(pud);
 		return 1;
 	}
@@ -1320,9 +1320,9 @@ static inline int pud_none_or_clear_bad(pud_t *pud)

 static inline int pmd_none_or_clear_bad(pmd_t *pmd)
 {
-	if (pmd_none(*pmd))
+	if (pmd_none(pmdp_get(pmd)))
 		return 1;
-	if (unlikely(pmd_bad(*pmd))) {
+	if (unlikely(pmd_bad(pmdp_get(pmd)))) {
 		pmd_clear_bad(pmd);
 		return 1;
 	}
@@ -1798,7 +1798,7 @@ static inline int pud_trans_unstable(pud_t *pud)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-	pud_t pudval = READ_ONCE(*pud);
+	pud_t pudval = pudp_get(pud);

 	if (pud_none(pudval) || pud_trans_huge(pudval))
 		return 1;
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 7e834467b2d8..b750cbe56bc6 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -910,7 +910,7 @@ static int damos_va_stat_pmd_entry(pmd_t *pmd, unsigned long addr,
 	int nr;

 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (pmd_trans_huge(*pmd)) {
+	if (pmd_trans_huge(pmdp_get(pmd))) {
 		pmd_t pmde;

 		ptl = pmd_trans_huge_lock(pmd, vma);
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 830107b6dd08..fb5596e2e426 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -431,7 +431,7 @@ static void __init pmd_huge_tests(struct pgtable_debug_args *args)
 	 * X86 defined pmd_set_huge() verifies that the given
 	 * PMD is not a populated non-leaf entry.
 	 */
-	WRITE_ONCE(*args->pmdp, __pmd(0));
+	set_pmd(args->pmdp, __pmd(0));
 	WARN_ON(!pmd_set_huge(args->pmdp, __pfn_to_phys(args->fixed_pmd_pfn), args->page_prot));
 	WARN_ON(!pmd_clear_huge(args->pmdp));
 	pmd = pmdp_get(args->pmdp);
@@ -451,7 +451,7 @@ static void __init pud_huge_tests(struct pgtable_debug_args *args)
 	 * X86 defined pud_set_huge() verifies that the given
 	 * PUD is not a populated non-leaf entry.
 	 */
-	WRITE_ONCE(*args->pudp, __pud(0));
+	set_pud(args->pudp, __pud(0));
 	WARN_ON(!pud_set_huge(args->pudp, __pfn_to_phys(args->fixed_pud_pfn), args->page_prot));
 	WARN_ON(!pud_clear_huge(args->pudp));
 	pud = pudp_get(args->pudp);
diff --git a/mm/filemap.c b/mm/filemap.c
index 2f1e7e283a51..76027cf534c9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3611,13 +3611,13 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
 	struct mm_struct *mm = vmf->vma->vm_mm;

 	/* Huge page is mapped? No need to proceed. */
-	if (pmd_trans_huge(*vmf->pmd)) {
+	if (pmd_trans_huge(pmdp_get(vmf->pmd))) {
 		folio_unlock(folio);
 		folio_put(folio);
 		return true;
 	}

-	if (pmd_none(*vmf->pmd) && folio_test_pmd_mappable(folio)) {
+	if (pmd_none(pmdp_get(vmf->pmd)) && folio_test_pmd_mappable(folio)) {
 		struct page *page = folio_file_page(folio, start);
 		vm_fault_t ret = do_set_pmd(vmf, folio, page);
 		if (!ret) {
@@ -3627,7 +3627,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
 		}
 	}

-	if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
+	if (pmd_none(pmdp_get(vmf->pmd)) && vmf->prealloc_pte)
 		pmd_install(mm, vmf->pmd, &vmf->prealloc_pte);

 	return false;
diff --git a/mm/gup.c b/mm/gup.c
index b46112d36f7e..549f9e868311 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -652,7 +652,7 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *page;
-	pud_t pud = *pudp;
+	pud_t pud = pudp_get(pudp);
 	unsigned long pfn = pud_pfn(pud);
 	int ret;

@@ -704,7 +704,7 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
 					unsigned long *page_mask)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	pmd_t pmdval = *pmd;
+	pmd_t pmdval = pmdp_get(pmd);
 	struct page *page;
 	int ret;

@@ -719,7 +719,7 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
 	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
 		return ERR_PTR(-EFAULT);

-	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+	if (pmd_protnone(pmdp_get(pmd)) && !gup_can_follow_protnone(vma, flags))
 		return NULL;

 	if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
@@ -918,7 +918,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);

 	ptl = pmd_lock(mm, pmd);
-	pmdval = *pmd;
+	pmdval = pmdp_get(pmd);
 	if (unlikely(!pmd_present(pmdval))) {
 		spin_unlock(ptl);
 		return no_page_table(vma, flags, address);
@@ -1017,7 +1017,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	*page_mask = 0;
 	pgd = pgd_offset(mm, address);

-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+	if (pgd_none(pgdp_get(pgd)) || unlikely(pgd_bad(pgdp_get(pgd))))
 		page = no_page_table(vma, flags, address);
 	else
 		page = follow_p4d_mask(vma, address, pgd, flags, page_mask);
@@ -1043,16 +1043,16 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 	if (gup_flags & FOLL_WRITE)
 		return -EFAULT;
 	pgd = pgd_offset(mm, address);
-	if (pgd_none(*pgd))
+	if (pgd_none(pgdp_get(pgd)))
 		return -EFAULT;
 	p4d = p4d_offset(pgd, address);
-	if (p4d_none(*p4d))
+	if (p4d_none(p4dp_get(p4d)))
 		return -EFAULT;
 	pud = pud_offset(p4d, address);
-	if (pud_none(*pud))
+	if (pud_none(pudp_get(pud)))
 		return -EFAULT;
 	pmd = pmd_offset(pud, address);
-	if (!pmd_present(*pmd))
+	if (!pmd_present(pmdp_get(pmd)))
 		return -EFAULT;
 	pte = pte_offset_map(pmd, address);
 	if (!pte)
@@ -2876,7 +2876,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		if (!folio)
 			goto pte_unmap;

-		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
+		if (unlikely(pmd_val(pmd) != pmd_val(pmdp_get(pmdp))) ||
 		    unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
 			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
@@ -2953,7 +2953,7 @@ static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	if (!folio)
 		return 0;

-	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
+	if (unlikely(pmd_val(orig) != pmd_val(pmdp_get(pmdp)))) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
 	}
@@ -2996,7 +2996,7 @@ static int gup_fast_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
 	if (!folio)
 		return 0;

-	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
+	if (unlikely(pud_val(orig) != pud_val(pudp_get(pudp)))) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 323654fb4f8c..cee70fdbe475 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1254,7 +1254,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	}

 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
-	if (unlikely(!pmd_none(*vmf->pmd))) {
+	if (unlikely(!pmd_none(pmdp_get(vmf->pmd)))) {
 		goto unlock_release;
 	} else {
 		ret = check_stable_address_space(vma->vm_mm);
@@ -1367,7 +1367,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	}
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	ret = 0;
-	if (pmd_none(*vmf->pmd)) {
+	if (pmd_none(pmdp_get(vmf->pmd))) {
 		ret = check_stable_address_space(vma->vm_mm);
 		if (ret) {
 			spin_unlock(vmf->ptl);
@@ -1420,16 +1420,16 @@ static vm_fault_t insert_pmd(struct vm_area_struct *vma, unsigned long addr,
 	}

 	ptl = pmd_lock(mm, pmd);
-	if (!pmd_none(*pmd)) {
+	if (!pmd_none(pmdp_get(pmd))) {
 		const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
 					  fop.pfn;

 		if (write) {
-			if (pmd_pfn(*pmd) != pfn) {
-				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
+			if (pmd_pfn(pmdp_get(pmd)) != pfn) {
+				WARN_ON_ONCE(!is_huge_zero_pmd(pmdp_get(pmd)));
 				goto out_unlock;
 			}
-			entry = pmd_mkyoung(*pmd);
+			entry = pmd_mkyoung(pmdp_get(pmd));
 			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
 				update_mmu_cache_pmd(vma, addr, pmd);
@@ -1544,14 +1544,14 @@ static vm_fault_t insert_pud(struct vm_area_struct *vma, unsigned long addr,
 		return VM_FAULT_SIGBUS;

 	ptl = pud_lock(mm, pud);
-	if (!pud_none(*pud)) {
+	if (!pud_none(pudp_get(pud))) {
 		const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
 					  fop.pfn;

 		if (write) {
-			if (WARN_ON_ONCE(pud_pfn(*pud) != pfn))
+			if (WARN_ON_ONCE(pud_pfn(pudp_get(pud)) != pfn))
 				goto out_unlock;
-			entry = pud_mkyoung(*pud);
+			entry = pud_mkyoung(pudp_get(pud));
 			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
 			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
 				update_mmu_cache_pud(vma, addr, pud);
@@ -1647,7 +1647,7 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 {
 	pmd_t _pmd;

-	_pmd = pmd_mkyoung(*pmd);
+	_pmd = pmd_mkyoung(pmdp_get(pmd));
 	if (write)
 		_pmd = pmd_mkdirty(_pmd);
 	if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
@@ -1698,7 +1698,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);

 	ret = -EAGAIN;
-	pmd = *src_pmd;
+	pmd = pmdp_get(src_pmd);

 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (unlikely(is_swap_pmd(pmd))) {
@@ -1709,9 +1709,9 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			entry = make_readable_migration_entry(
 							swp_offset(entry));
 			pmd = swp_entry_to_pmd(entry);
-			if (pmd_swp_soft_dirty(*src_pmd))
+			if (pmd_swp_soft_dirty(pmdp_get(src_pmd)))
 				pmd = pmd_swp_mksoft_dirty(pmd);
-			if (pmd_swp_uffd_wp(*src_pmd))
+			if (pmd_swp_uffd_wp(pmdp_get(src_pmd)))
 				pmd = pmd_swp_mkuffd_wp(pmd);
 			set_pmd_at(src_mm, addr, src_pmd, pmd);
 		}
@@ -1785,7 +1785,7 @@ void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 {
 	pud_t _pud;

-	_pud = pud_mkyoung(*pud);
+	_pud = pud_mkyoung(pudp_get(pud));
 	if (write)
 		_pud = pud_mkdirty(_pud);
 	if (pudp_set_access_flags(vma, addr & HPAGE_PUD_MASK,
@@ -1806,7 +1806,7 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);

 	ret = -EAGAIN;
-	pud = *src_pud;
+	pud = pudp_get(src_pud);
 	if (unlikely(!pud_trans_huge(pud)))
 		goto out_unlock;

@@ -1833,7 +1833,7 @@ void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 	bool write = vmf->flags & FAULT_FLAG_WRITE;

 	vmf->ptl = pud_lock(vmf->vma->vm_mm, vmf->pud);
-	if (unlikely(!pud_same(*vmf->pud, orig_pud)))
+	if (unlikely(!pud_same(pudp_get(vmf->pud), orig_pud)))
 		goto unlock;

 	touch_pud(vmf->vma, vmf->address, vmf->pud, write);
@@ -1847,7 +1847,7 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
 	bool write = vmf->flags & FAULT_FLAG_WRITE;

 	vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
-	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd)))
+	if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
 		goto unlock;

 	touch_pmd(vmf->vma, vmf->address, vmf->pmd, write);
@@ -1912,7 +1912,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)

 	spin_lock(vmf->ptl);

-	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
+	if (unlikely(!pmd_same(pmdp_get(vmf->pmd), orig_pmd))) {
 		spin_unlock(vmf->ptl);
 		return 0;
 	}
@@ -1930,7 +1930,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		spin_unlock(vmf->ptl);
 		folio_lock(folio);
 		spin_lock(vmf->ptl);
-		if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
+		if (unlikely(!pmd_same(pmdp_get(vmf->pmd), orig_pmd))) {
 			spin_unlock(vmf->ptl);
 			folio_unlock(folio);
 			folio_put(folio);
@@ -2108,7 +2108,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	if (!ptl)
 		goto out_unlocked;

-	orig_pmd = *pmd;
+	orig_pmd = pmdp_get(pmd);
 	if (is_huge_zero_pmd(orig_pmd))
 		goto out;

@@ -2296,8 +2296,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	 * should have released it; but move_page_tables() might have already
 	 * inserted a page table, if racing against shmem/file collapse.
 	 */
-	if (!pmd_none(*new_pmd)) {
-		VM_BUG_ON(pmd_trans_huge(*new_pmd));
+	if (!pmd_none(pmdp_get(new_pmd))) {
+		VM_BUG_ON(pmd_trans_huge(pmdp_get(new_pmd)));
 		return false;
 	}

@@ -2313,7 +2313,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
 	if (pmd_present(pmd))
 		force_flush = true;
-	VM_BUG_ON(!pmd_none(*new_pmd));
+	VM_BUG_ON(!pmd_none(pmdp_get(new_pmd)));

 	if (pmd_move_must_withdraw(new_ptl, old_ptl, vma)) {
 		pgtable_t pgtable;
@@ -2363,12 +2363,12 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		return 0;

 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-	if (is_swap_pmd(*pmd)) {
-		swp_entry_t entry = pmd_to_swp_entry(*pmd);
+	if (is_swap_pmd(pmdp_get(pmd))) {
+		swp_entry_t entry = pmd_to_swp_entry(pmdp_get(pmd));
 		struct folio *folio = pfn_swap_entry_folio(entry);
 		pmd_t newpmd;

-		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+		VM_BUG_ON(!is_pmd_migration_entry(pmdp_get(pmd)));
 		if (is_writable_migration_entry(entry)) {
 			/*
 			 * A protection check is difficult so
@@ -2379,17 +2379,17 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			else
 				entry = make_readable_migration_entry(swp_offset(entry));
 			newpmd = swp_entry_to_pmd(entry);
-			if (pmd_swp_soft_dirty(*pmd))
+			if (pmd_swp_soft_dirty(pmdp_get(pmd)))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
 		} else {
-			newpmd = *pmd;
+			newpmd = pmdp_get(pmd);
 		}

 		if (uffd_wp)
 			newpmd = pmd_swp_mkuffd_wp(newpmd);
 		else if (uffd_wp_resolve)
 			newpmd = pmd_swp_clear_uffd_wp(newpmd);
-		if (!pmd_same(*pmd, newpmd))
+		if (!pmd_same(pmdp_get(pmd), newpmd))
 			set_pmd_at(mm, addr, pmd, newpmd);
 		goto unlock;
 	}
@@ -2403,13 +2403,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 * data is likely to be read-cached on the local CPU and
 	 * local/remote hits to the zero page are not interesting.
 	 */
-	if (is_huge_zero_pmd(*pmd))
+	if (is_huge_zero_pmd(pmdp_get(pmd)))
 		goto unlock;

-	if (pmd_protnone(*pmd))
+	if (pmd_protnone(pmdp_get(pmd)))
 		goto unlock;

-	folio = pmd_folio(*pmd);
+	folio = pmd_folio(pmdp_get(pmd));
 	toptier = node_is_toptier(folio_nid(folio));
 	/*
 	 * Skip scanning top tier node if normal numa
@@ -2540,7 +2540,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 	struct mmu_notifier_range range;
 	int err = 0;

-	src_pmdval = *src_pmd;
+	src_pmdval = pmdp_get(src_pmd);
 	src_ptl = pmd_lockptr(mm, src_pmd);

 	lockdep_assert_held(src_ptl);
@@ -2602,8 +2602,8 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm

 	dst_ptl = pmd_lockptr(mm, dst_pmd);
 	double_pt_lock(src_ptl, dst_ptl);
-	if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
-		     !pmd_same(*dst_pmd, dst_pmdval))) {
+	if (unlikely(!pmd_same(pmdp_get(src_pmd), src_pmdval) ||
+		     !pmd_same(pmdp_get(dst_pmd), dst_pmdval))) {
 		err = -EAGAIN;
 		goto unlock_ptls;
 	}
@@ -2669,7 +2669,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
 {
 	spinlock_t *ptl;
 	ptl = pmd_lock(vma->vm_mm, pmd);
-	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)))
+	if (likely(is_swap_pmd(pmdp_get(pmd)) || pmd_trans_huge(pmdp_get(pmd))))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -2686,7 +2686,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 	spinlock_t *ptl;

 	ptl = pud_lock(vma->vm_mm, pud);
-	if (likely(pud_trans_huge(*pud)))
+	if (likely(pud_trans_huge(pudp_get(pud))))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -2738,7 +2738,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 	VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma);
-	VM_BUG_ON(!pud_trans_huge(*pud));
+	VM_BUG_ON(!pud_trans_huge(pudp_get(pud)));

 	count_vm_event(THP_SPLIT_PUD);

@@ -2771,7 +2771,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 				(address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pud_lock(vma->vm_mm, pud);
-	if (unlikely(!pud_trans_huge(*pud)))
+	if (unlikely(!pud_trans_huge(pudp_get(pud))))
 		goto out;
 	__split_huge_pud_locked(vma, pud, range.start);

@@ -2844,7 +2844,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
-	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
+	VM_BUG_ON(!is_pmd_migration_entry(pmdp_get(pmd)) && !pmd_trans_huge(pmdp_get(pmd)));

 	count_vm_event(THP_SPLIT_PMD);

@@ -2879,7 +2879,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		return;
 	}

-	if (is_huge_zero_pmd(*pmd)) {
+	if (is_huge_zero_pmd(pmdp_get(pmd))) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_arch_invalidate_secondary_tlbs() see comments below
@@ -2892,11 +2892,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		return __split_huge_zero_page_pmd(vma, haddr, pmd);
 	}

-	pmd_migration = is_pmd_migration_entry(*pmd);
+	pmd_migration = is_pmd_migration_entry(pmdp_get(pmd));
 	if (unlikely(pmd_migration)) {
 		swp_entry_t entry;

-		old_pmd = *pmd;
+		old_pmd = pmdp_get(pmd);
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_swap_entry_to_page(entry);
 		write = is_writable_migration_entry(entry);
@@ -3052,7 +3052,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze)
 {
 	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
-	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
+	if (pmd_trans_huge(pmdp_get(pmd)) || is_pmd_migration_entry(pmdp_get(pmd)))
 		__split_huge_pmd_locked(vma, pmd, address, freeze);
 }

@@ -3140,7 +3140,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	int ref_count, map_count;
-	pmd_t orig_pmd = *pmdp;
+	pmd_t orig_pmd = pmdp_get(pmdp);

 	if (pmd_dirty(orig_pmd))
 		folio_set_dirty(folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0455119716ec..41cbc85b5051 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7584,7 +7584,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out;

 	spin_lock(&mm->page_table_lock);
-	if (pud_none(*pud)) {
+	if (pud_none(pudp_get(pud))) {
 		pud_populate(mm, pud,
 				(pmd_t *)((unsigned long)spte & PAGE_MASK));
 		mm_inc_nr_pmds(mm);
@@ -7677,7 +7677,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 			pte = (pte_t *)pud;
 		} else {
 			BUG_ON(sz != PMD_SIZE);
-			if (want_pmd_share(vma, addr) && pud_none(*pud))
+			if (want_pmd_share(vma, addr) && pud_none(pudp_get(pud)))
 				pte = huge_pmd_share(mm, vma, addr, pud);
 			else
 				pte = (pte_t *)pmd_alloc(mm, pud, addr);
@@ -7711,17 +7711,17 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	pmd_t *pmd;

 	pgd = pgd_offset(mm, addr);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return NULL;
 	p4d = p4d_offset(pgd, addr);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		return NULL;

 	pud = pud_offset(p4d, addr);
 	if (sz == PUD_SIZE)
 		/* must be pud huge, non-present or none */
 		return (pte_t *)pud;
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		return NULL;
 	/* must have a valid entry and size to go further */

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ba0fb1b6a5a8..059eb78480f5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -72,7 +72,7 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 	}

 	spin_lock(&init_mm.page_table_lock);
-	if (likely(pmd_leaf(*pmd))) {
+	if (likely(pmd_leaf(pmdp_get(pmd)))) {
 		/*
 		 * Higher order allocations from buddy allocator must be able to
 		 * be treated as indepdenent small pages (as they can be freed
@@ -106,7 +106,7 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 	walk->action = ACTION_CONTINUE;

 	spin_lock(&init_mm.page_table_lock);
-	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
+	head = pmd_leaf(pmdp_get(pmd)) ? pmd_page(pmdp_get(pmd)) : NULL;
 	/*
 	 * Due to HugeTLB alignment requirements and the vmemmap
 	 * pages being at the start of the hotplugged memory
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index f084e7a5df1e..8e0fc4d0cd1e 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -121,7 +121,7 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
 			continue;
 		}

-		if (pmd_none(*pmd)) {
+		if (pmd_none(pmdp_get(pmd))) {
 			pte_t *p;

 			if (slab_is_available())
@@ -160,7 +160,7 @@ static int __ref zero_pud_populate(p4d_t *p4d, unsigned long addr,
 			continue;
 		}

-		if (pud_none(*pud)) {
+		if (pud_none(pudp_get(pud))) {
 			pmd_t *p;

 			if (slab_is_available()) {
@@ -202,7 +202,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			continue;
 		}

-		if (p4d_none(*p4d)) {
+		if (p4d_none(p4dp_get(p4d))) {
 			pud_t *p;

 			if (slab_is_available()) {
@@ -265,7 +265,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 			continue;
 		}

-		if (pgd_none(*pgd)) {
+		if (pgd_none(pgdp_get(pgd))) {

 			if (slab_is_available()) {
 				if (!p4d_alloc(&init_mm, pgd, addr))
@@ -292,7 +292,8 @@ static void kasan_free_pte(pte_t *pte_start, pmd_t *pmd)
 			return;
 	}

-	pte_free_kernel(&init_mm, (pte_t *)page_to_virt(pmd_page(*pmd)));
+	pte_free_kernel(&init_mm,
+			(pte_t *)page_to_virt(pmd_page(pmdp_get(pmd))));
 	pmd_clear(pmd);
 }

@@ -303,11 +304,11 @@ static void kasan_free_pmd(pmd_t *pmd_start, pud_t *pud)

 	for (i = 0; i < PTRS_PER_PMD; i++) {
 		pmd = pmd_start + i;
-		if (!pmd_none(*pmd))
+		if (!pmd_none(pmdp_get(pmd)))
 			return;
 	}

-	pmd_free(&init_mm, (pmd_t *)page_to_virt(pud_page(*pud)));
+	pmd_free(&init_mm, (pmd_t *)page_to_virt(pud_page(pudp_get(pud))));
 	pud_clear(pud);
 }

@@ -318,11 +319,11 @@ static void kasan_free_pud(pud_t *pud_start, p4d_t *p4d)

 	for (i = 0; i < PTRS_PER_PUD; i++) {
 		pud = pud_start + i;
-		if (!pud_none(*pud))
+		if (!pud_none(pudp_get(pud)))
 			return;
 	}

-	pud_free(&init_mm, (pud_t *)page_to_virt(p4d_page(*p4d)));
+	pud_free(&init_mm, (pud_t *)page_to_virt(p4d_page(p4dp_get(p4d))));
 	p4d_clear(p4d);
 }

@@ -333,11 +334,11 @@ static void kasan_free_p4d(p4d_t *p4d_start, pgd_t *pgd)

 	for (i = 0; i < PTRS_PER_P4D; i++) {
 		p4d = p4d_start + i;
-		if (!p4d_none(*p4d))
+		if (!p4d_none(p4dp_get(p4d)))
 			return;
 	}

-	p4d_free(&init_mm, (p4d_t *)page_to_virt(pgd_page(*pgd)));
+	p4d_free(&init_mm, (p4d_t *)page_to_virt(pgd_page(pgdp_get(pgd))));
 	pgd_clear(pgd);
 }

@@ -373,10 +374,10 @@ static void kasan_remove_pmd_table(pmd_t *pmd, unsigned long addr,

 		next = pmd_addr_end(addr, end);

-		if (!pmd_present(*pmd))
+		if (!pmd_present(pmdp_get(pmd)))
 			continue;

-		if (kasan_pte_table(*pmd)) {
+		if (kasan_pte_table(pmdp_get(pmd))) {
 			if (IS_ALIGNED(addr, PMD_SIZE) &&
 			    IS_ALIGNED(next, PMD_SIZE)) {
 				pmd_clear(pmd);
@@ -399,10 +400,10 @@ static void kasan_remove_pud_table(pud_t *pud, unsigned long addr,

 		next = pud_addr_end(addr, end);

-		if (!pud_present(*pud))
+		if (!pud_present(pudp_get(pud)))
 			continue;

-		if (kasan_pmd_table(*pud)) {
+		if (kasan_pmd_table(pudp_get(pud))) {
 			if (IS_ALIGNED(addr, PUD_SIZE) &&
 			    IS_ALIGNED(next, PUD_SIZE)) {
 				pud_clear(pud);
@@ -426,10 +427,10 @@ static void kasan_remove_p4d_table(p4d_t *p4d, unsigned long addr,

 		next = p4d_addr_end(addr, end);

-		if (!p4d_present(*p4d))
+		if (!p4d_present(p4dp_get(p4d)))
 			continue;

-		if (kasan_pud_table(*p4d)) {
+		if (kasan_pud_table(p4dp_get(p4d))) {
 			if (IS_ALIGNED(addr, P4D_SIZE) &&
 			    IS_ALIGNED(next, P4D_SIZE)) {
 				p4d_clear(p4d);
@@ -460,10 +461,10 @@ void kasan_remove_zero_shadow(void *start, unsigned long size)
 		next = pgd_addr_end(addr, end);

 		pgd = pgd_offset_k(addr);
-		if (!pgd_present(*pgd))
+		if (!pgd_present(pgdp_get(pgd)))
 			continue;

-		if (kasan_p4d_table(*pgd)) {
+		if (kasan_p4d_table(pgdp_get(pgd))) {
 			if (IS_ALIGNED(addr, PGDIR_SIZE) &&
 			    IS_ALIGNED(next, PGDIR_SIZE)) {
 				pgd_clear(pgd);
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 5d2a876035d6..331bbb7ff025 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -191,20 +191,20 @@ static bool shadow_mapped(unsigned long addr)
 	pmd_t *pmd;
 	pte_t *pte;

-	if (pgd_none(*pgd))
+	if (pgd_none(pgdp_get(pgd)))
 		return false;
 	p4d = p4d_offset(pgd, addr);
-	if (p4d_none(*p4d))
+	if (p4d_none(p4dp_get(p4d)))
 		return false;
 	pud = pud_offset(p4d, addr);
-	if (pud_none(*pud))
+	if (pud_none(pudp_get(pud)))
 		return false;
-	if (pud_leaf(*pud))
+	if (pud_leaf(pudp_get(pud)))
 		return true;
 	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
+	if (pmd_none(pmdp_get(pmd)))
 		return false;
-	if (pmd_leaf(*pmd))
+	if (pmd_leaf(pmdp_get(pmd)))
 		return true;
 	pte = pte_offset_kernel(pmd, addr);
 	return !pte_none(ptep_get(pte));
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index abe54f0043c7..1bff8ade751a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1191,7 +1191,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	if (pte)
 		pte_unmap(pte);
 	spin_lock(pmd_ptl);
-	BUG_ON(!pmd_none(*pmd));
+	BUG_ON(!pmd_none(pmdp_get(pmd)));
 	/*
 	 * We can only use set_pmd_at when establishing
 	 * hugepmds and never for establishing regular pmds that
@@ -1228,7 +1228,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);

 	spin_lock(pmd_ptl);
-	BUG_ON(!pmd_none(*pmd));
+	BUG_ON(!pmd_none(pmdp_get(pmd)));
 	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
diff --git a/mm/ksm.c b/mm/ksm.c
index c4e730409949..0a0eeb667fe6 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1322,7 +1322,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,

 		set_pte_at(mm, pvmw.address, pvmw.pte, entry);
 	}
-	*orig_pte = entry;
+	set_pte(orig_pte, entry);
 	err = 0;

 out_unlock:
diff --git a/mm/madvise.c b/mm/madvise.c
index fb1c86e630b6..53e60565f3e5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -377,7 +377,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 					!can_do_file_pageout(vma);

 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (pmd_trans_huge(*pmd)) {
+	if (pmd_trans_huge(pmdp_get(pmd))) {
 		pmd_t orig_pmd;
 		unsigned long next = pmd_addr_end(addr, end);

@@ -386,7 +386,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (!ptl)
 			return 0;

-		orig_pmd = *pmd;
+		orig_pmd = pmdp_get(pmd);
 		if (is_huge_zero_pmd(orig_pmd))
 			goto huge_unlock;

@@ -668,7 +668,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	int nr, max_nr;

 	next = pmd_addr_end(addr, end);
-	if (pmd_trans_huge(*pmd))
+	if (pmd_trans_huge(pmdp_get(pmd)))
 		if (madvise_free_huge_pmd(tlb, vma, pmd, addr, next))
 			return 0;

@@ -1116,7 +1116,7 @@ static int guard_install_set_pte(unsigned long addr, unsigned long next,
 	unsigned long *nr_pages = (unsigned long *)walk->private;

 	/* Simply install a PTE marker, this causes segfault on access. */
-	*ptep = make_pte_marker(PTE_MARKER_GUARD);
+	set_pte(ptep, make_pte_marker(PTE_MARKER_GUARD));
 	(*nr_pages)++;

 	return 0;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3edebb0cda30..5231febc6345 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -339,20 +339,20 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,

 	VM_BUG_ON_VMA(address == -EFAULT, vma);
 	pgd = pgd_offset(vma->vm_mm, address);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return 0;
 	p4d = p4d_offset(pgd, address);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		return 0;
 	pud = pud_offset(p4d, address);
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		return 0;
-	if (pud_trans_huge(*pud))
+	if (pud_trans_huge(pudp_get(pud)))
 		return PUD_SHIFT;
 	pmd = pmd_offset(pud, address);
-	if (!pmd_present(*pmd))
+	if (!pmd_present(pmdp_get(pmd)))
 		return 0;
-	if (pmd_trans_huge(*pmd))
+	if (pmd_trans_huge(pmdp_get(pmd)))
 		return PMD_SHIFT;
 	pte = pte_offset_map(pmd, address);
 	if (!pte)
@@ -705,7 +705,7 @@ static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift,
 static int check_hwpoisoned_pmd_entry(pmd_t *pmdp, unsigned long addr,
 				      struct hwpoison_walk *hwp)
 {
-	pmd_t pmd = *pmdp;
+	pmd_t pmd = pmdp_get(pmdp);
 	unsigned long pfn;
 	unsigned long hwpoison_vaddr;

diff --git a/mm/memory.c b/mm/memory.c
index 0c295e2fe8e8..1880bae463c6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -189,7 +189,7 @@ void mm_trace_rss_stat(struct mm_struct *mm, int member)
 static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
 			   unsigned long addr)
 {
-	pgtable_t token = pmd_pgtable(*pmd);
+	pgtable_t token = pmd_pgtable(pmdp_get(pmd));
 	pmd_clear(pmd);
 	pte_free_tlb(tlb, token, addr);
 	mm_dec_nr_ptes(tlb->mm);
@@ -426,7 +426,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
 {
 	spinlock_t *ptl = pmd_lock(mm, pmd);

-	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
+	if (likely(pmd_none(pmdp_get(pmd)))) {	/* Has another populated it ? */
 		mm_inc_nr_ptes(mm);
 		/*
 		 * Ensure all pte setup (eg. pte page lock and page clearing) are
@@ -467,7 +467,7 @@ int __pte_alloc_kernel(pmd_t *pmd)
 		return -ENOMEM;

 	spin_lock(&init_mm.page_table_lock);
-	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
+	if (likely(pmd_none(pmdp_get(pmd)))) {	/* Has another populated it ? */
 		smp_wmb(); /* See comment in pmd_install() */
 		pmd_populate_kernel(&init_mm, pmd, new);
 		new = NULL;
@@ -532,9 +532,9 @@ static void __print_bad_page_map_pgtable(struct mm_struct *mm, unsigned long add
 	 * see locking requirements for print_bad_page_map().
 	 */
 	pgdp = pgd_offset(mm, addr);
-	pgdv = pgd_val(*pgdp);
+	pgdv = pgd_val(pgdp_get(pgdp));

-	if (!pgd_present(*pgdp) || pgd_leaf(*pgdp)) {
+	if (!pgd_present(pgdp_get(pgdp)) || pgd_leaf(pgdp_get(pgdp))) {
 		pr_alert("pgd:%08llx\n", pgdv);
 		return;
 	}
@@ -1374,7 +1374,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pmd = pmd_offset(src_pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)) {
+		if (is_swap_pmd(pmdp_get(src_pmd)) || pmd_trans_huge(pmdp_get(src_pmd))) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
@@ -1410,7 +1410,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pud = pud_offset(src_p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_trans_huge(*src_pud)) {
+		if (pud_trans_huge(pudp_get(src_pud))) {
 			int err;

 			VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma);
@@ -1921,7 +1921,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd)) {
+		if (is_swap_pmd(pmdp_get(pmd)) || pmd_trans_huge(pmdp_get(pmd))) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				__split_huge_pmd(vma, pmd, addr, false);
 			else if (zap_huge_pmd(tlb, vma, pmd, addr)) {
@@ -1931,7 +1931,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 			/* fall through */
 		} else if (details && details->single_folio &&
 			   folio_test_pmd_mappable(details->single_folio) &&
-			   next - addr == HPAGE_PMD_SIZE && pmd_none(*pmd)) {
+			   next - addr == HPAGE_PMD_SIZE && pmd_none(pmdp_get(pmd))) {
 			spinlock_t *ptl = pmd_lock(tlb->mm, pmd);
 			/*
 			 * Take and drop THP pmd lock so that we cannot return
@@ -1940,7 +1940,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 			 */
 			spin_unlock(ptl);
 		}
-		if (pmd_none(*pmd)) {
+		if (pmd_none(pmdp_get(pmd))) {
 			addr = next;
 			continue;
 		}
@@ -1963,7 +1963,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
 	pud = pud_offset(p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_trans_huge(*pud)) {
+		if (pud_trans_huge(pudp_get(pud))) {
 			if (next - addr != HPAGE_PUD_SIZE) {
 				mmap_assert_locked(tlb->mm);
 				split_huge_pud(vma, pud, addr);
@@ -2211,7 +2211,7 @@ static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr)
 	if (!pmd)
 		return NULL;

-	VM_BUG_ON(pmd_trans_huge(*pmd));
+	VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd)));
 	return pmd;
 }

@@ -2845,7 +2845,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
 	pmd = pmd_alloc(mm, pud, addr);
 	if (!pmd)
 		return -ENOMEM;
-	VM_BUG_ON(pmd_trans_huge(*pmd));
+	VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd)));
 	do {
 		next = pmd_addr_end(addr, end);
 		err = remap_pte_range(mm, pmd, addr, next,
@@ -3164,7 +3164,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	unsigned long next;
 	int err = 0;

-	BUG_ON(pud_leaf(*pud));
+	BUG_ON(pud_leaf(pudp_get(pud)));

 	if (create) {
 		pmd = pmd_alloc_track(mm, pud, addr, mask);
@@ -3175,11 +3175,11 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	}
 	do {
 		next = pmd_addr_end(addr, end);
-		if (pmd_none(*pmd) && !create)
+		if (pmd_none(pmdp_get(pmd)) && !create)
 			continue;
-		if (WARN_ON_ONCE(pmd_leaf(*pmd)))
+		if (WARN_ON_ONCE(pmd_leaf(pmdp_get(pmd))))
 			return -EINVAL;
-		if (!pmd_none(*pmd) && WARN_ON_ONCE(pmd_bad(*pmd))) {
+		if (!pmd_none(pmdp_get(pmd)) && WARN_ON_ONCE(pmd_bad(pmdp_get(pmd)))) {
 			if (!create)
 				continue;
 			pmd_clear_bad(pmd);
@@ -3211,11 +3211,11 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
 	}
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_none(*pud) && !create)
+		if (pud_none(pudp_get(pud)) && !create)
 			continue;
-		if (WARN_ON_ONCE(pud_leaf(*pud)))
+		if (WARN_ON_ONCE(pud_leaf(pudp_get(pud))))
 			return -EINVAL;
-		if (!pud_none(*pud) && WARN_ON_ONCE(pud_bad(*pud))) {
+		if (!pud_none(pudp_get(pud)) && WARN_ON_ONCE(pud_bad(pudp_get(pud)))) {
 			if (!create)
 				continue;
 			pud_clear_bad(pud);
@@ -3247,11 +3247,11 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 	}
 	do {
 		next = p4d_addr_end(addr, end);
-		if (p4d_none(*p4d) && !create)
+		if (p4d_none(p4dp_get(p4d)) && !create)
 			continue;
-		if (WARN_ON_ONCE(p4d_leaf(*p4d)))
+		if (WARN_ON_ONCE(p4d_leaf(p4dp_get(p4d))))
 			return -EINVAL;
-		if (!p4d_none(*p4d) && WARN_ON_ONCE(p4d_bad(*p4d))) {
+		if (!p4d_none(p4dp_get(p4d)) && WARN_ON_ONCE(p4d_bad(p4dp_get(p4d)))) {
 			if (!create)
 				continue;
 			p4d_clear_bad(p4d);
@@ -3281,13 +3281,13 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	pgd = pgd_offset(mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (pgd_none(*pgd) && !create)
+		if (pgd_none(pgdp_get(pgd)) && !create)
 			continue;
-		if (WARN_ON_ONCE(pgd_leaf(*pgd))) {
+		if (WARN_ON_ONCE(pgd_leaf(pgdp_get(pgd)))) {
 			err = -EINVAL;
 			break;
 		}
-		if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
+		if (!pgd_none(pgdp_get(pgd)) && WARN_ON_ONCE(pgd_bad(pgdp_get(pgd)))) {
 			if (!create)
 				continue;
 			pgd_clear_bad(pgd);
@@ -5272,7 +5272,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 	 *				unlock_page(B)
 	 * # flush A, B to clear the writeback
 	 */
-	if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
+	if (pmd_none(pmdp_get(vmf->pmd)) && !vmf->prealloc_pte) {
 		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
@@ -5367,7 +5367,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
 	}

 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
-	if (unlikely(!pmd_none(*vmf->pmd)))
+	if (unlikely(!pmd_none(pmdp_get(vmf->pmd))))
 		goto out;

 	flush_icache_pages(vma, page, HPAGE_PMD_NR);
@@ -5519,7 +5519,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 			    file_end < folio_next_index(folio);
 	}

-	if (pmd_none(*vmf->pmd)) {
+	if (pmd_none(pmdp_get(vmf->pmd))) {
 		if (!needs_fallback && folio_test_pmd_mappable(folio)) {
 			ret = do_set_pmd(vmf, folio, page);
 			if (ret != VM_FAULT_FALLBACK)
@@ -5664,7 +5664,7 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 	to_pte = min3(from_pte + nr_pages, (pgoff_t)PTRS_PER_PTE,
 		      pte_off + vma_pages(vmf->vma) - vma_off) - 1;

-	if (pmd_none(*vmf->pmd)) {
+	if (pmd_none(pmdp_get(vmf->pmd))) {
 		vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
 		if (!vmf->prealloc_pte)
 			return VM_FAULT_OOM;
@@ -6152,7 +6152,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 {
 	pte_t entry;

-	if (unlikely(pmd_none(*vmf->pmd))) {
+	if (unlikely(pmd_none(pmdp_get(vmf->pmd)))) {
 		/*
 		 * Leave __pte_alloc() until later: because vm_ops->fault may
 		 * want to allocate huge page, and if we expose page table
@@ -6268,13 +6268,13 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (!vmf.pud)
 		return VM_FAULT_OOM;
 retry_pud:
-	if (pud_none(*vmf.pud) &&
+	if (pud_none(pudp_get(vmf.pud)) &&
 	    thp_vma_allowable_order(vma, vm_flags, TVA_PAGEFAULT, PUD_ORDER)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
 	} else {
-		pud_t orig_pud = *vmf.pud;
+		pud_t orig_pud = pudp_get(vmf.pud);

 		barrier();
 		if (pud_trans_huge(orig_pud)) {
@@ -6302,7 +6302,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (pud_trans_unstable(vmf.pud))
 		goto retry_pud;

-	if (pmd_none(*vmf.pmd) &&
+	if (pmd_none(pmdp_get(vmf.pmd)) &&
 	    thp_vma_allowable_order(vma, vm_flags, TVA_PAGEFAULT, PMD_ORDER)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
@@ -6546,7 +6546,7 @@ int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
 		return -ENOMEM;

 	spin_lock(&mm->page_table_lock);
-	if (pgd_present(*pgd)) {	/* Another has populated it */
+	if (pgd_present(pgdp_get(pgd))) {	/* Another has populated it */
 		p4d_free(mm, new);
 	} else {
 		smp_wmb(); /* See comment in pmd_install() */
@@ -6569,7 +6569,7 @@ int __pud_alloc(struct mm_struct *mm, p4d_t *p4d, unsigned long address)
 		return -ENOMEM;

 	spin_lock(&mm->page_table_lock);
-	if (!p4d_present(*p4d)) {
+	if (!p4d_present(p4dp_get(p4d))) {
 		mm_inc_nr_puds(mm);
 		smp_wmb(); /* See comment in pmd_install() */
 		p4d_populate(mm, p4d, new);
@@ -6593,7 +6593,7 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 		return -ENOMEM;

 	ptl = pud_lock(mm, pud);
-	if (!pud_present(*pud)) {
+	if (!pud_present(pudp_get(pud))) {
 		mm_inc_nr_pmds(mm);
 		smp_wmb(); /* See comment in pmd_install() */
 		pud_populate(mm, pud, new);
@@ -6686,7 +6686,7 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 		goto out;
 retry:
 	pgdp = pgd_offset(mm, address);
-	if (pgd_none(*pgdp) || unlikely(pgd_bad(*pgdp)))
+	if (pgd_none(pgdp_get(pgdp)) || unlikely(pgd_bad(pgdp_get(pgdp))))
 		goto out;

 	p4dp = p4d_offset(pgdp, address);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eb83cff7db8c..8eef680d0f0e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -645,11 +645,11 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
 	struct folio *folio;
 	struct queue_pages *qp = walk->private;

-	if (unlikely(is_pmd_migration_entry(*pmd))) {
+	if (unlikely(is_pmd_migration_entry(pmdp_get(pmd)))) {
 		qp->nr_failed++;
 		return;
 	}
-	folio = pmd_folio(*pmd);
+	folio = pmd_folio(pmdp_get(pmd));
 	if (is_huge_zero_folio(folio)) {
 		walk->action = ACTION_CONTINUE;
 		return;
diff --git a/mm/migrate.c b/mm/migrate.c
index c0e9f15be2a2..98b5fe2a8994 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -542,9 +542,9 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	spinlock_t *ptl;

 	ptl = pmd_lock(mm, pmd);
-	if (!is_pmd_migration_entry(*pmd))
+	if (!is_pmd_migration_entry(pmdp_get(pmd)))
 		goto unlock;
-	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), ptl);
+	migration_entry_wait_on_locked(pmd_to_swp_entry(pmdp_get(pmd)), ptl);
 	return;
 unlock:
 	spin_unlock(ptl);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index abd9f6850db6..9714448eb97d 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -69,19 +69,19 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 	pte_t *ptep;

 again:
-	if (pmd_none(*pmdp))
+	if (pmd_none(pmdp_get(pmdp)))
 		return migrate_vma_collect_hole(start, end, -1, walk);

-	if (pmd_trans_huge(*pmdp)) {
+	if (pmd_trans_huge(pmdp_get(pmdp))) {
 		struct folio *folio;

 		ptl = pmd_lock(mm, pmdp);
-		if (unlikely(!pmd_trans_huge(*pmdp))) {
+		if (unlikely(!pmd_trans_huge(pmdp_get(pmdp)))) {
 			spin_unlock(ptl);
 			goto again;
 		}

-		folio = pmd_folio(*pmdp);
+		folio = pmd_folio(pmdp_get(pmdp));
 		if (is_huge_zero_folio(folio)) {
 			spin_unlock(ptl);
 			split_huge_pmd(vma, pmdp, addr);
@@ -615,7 +615,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	pmdp = pmd_alloc(mm, pudp, addr);
 	if (!pmdp)
 		goto abort;
-	if (pmd_trans_huge(*pmdp))
+	if (pmd_trans_huge(pmdp_get(pmdp)))
 		goto abort;
 	if (pte_alloc(mm, pmdp))
 		goto abort;
diff --git a/mm/mlock.c b/mm/mlock.c
index bb0776f5ef7c..c55ab38656d0 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -361,11 +361,11 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,

 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
-		if (!pmd_present(*pmd))
+		if (!pmd_present(pmdp_get(pmd)))
 			goto out;
-		if (is_huge_zero_pmd(*pmd))
+		if (is_huge_zero_pmd(pmdp_get(pmd)))
 			goto out;
-		folio = pmd_folio(*pmd);
+		folio = pmd_folio(pmdp_get(pmd));
 		if (folio_is_zone_device(folio))
 			goto out;
 		if (vma->vm_flags & VM_LOCKED)
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 988c366137d5..912a5847a4f3 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -530,7 +530,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
 			break;
 		}

-		if (pmd_none(*pmd))
+		if (pmd_none(pmdp_get(pmd)))
 			goto next;

 		_pmd = pmdp_get_lockless(pmd);
diff --git a/mm/mremap.c b/mm/mremap.c
index 419a0ea0a870..5b43ef4ff547 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -103,7 +103,7 @@ static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
 		return NULL;

 	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
+	if (pmd_none(pmdp_get(pmd)))
 		return NULL;

 	return pmd;
@@ -135,7 +135,7 @@ static pmd_t *alloc_new_pmd(struct mm_struct *mm, unsigned long addr)
 	if (!pmd)
 		return NULL;

-	VM_BUG_ON(pmd_trans_huge(*pmd));
+	VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd)));

 	return pmd;
 }
@@ -260,7 +260,7 @@ static int move_ptes(struct pagetable_move_control *pmc,

 	for (; old_addr < old_end; old_ptep += nr_ptes, old_addr += nr_ptes * PAGE_SIZE,
 	     new_ptep += nr_ptes, new_addr += nr_ptes * PAGE_SIZE) {
-		VM_WARN_ON_ONCE(!pte_none(*new_ptep));
+		VM_WARN_ON_ONCE(!pte_none(ptep_get(new_ptep)));

 		nr_ptes = 1;
 		max_nr_ptes = (old_end - old_addr) >> PAGE_SHIFT;
@@ -379,7 +379,7 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
 	 * One alternative might be to just unmap the target pmd at
 	 * this point, and verify that it really is empty. We'll see.
 	 */
-	if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
+	if (WARN_ON_ONCE(!pmd_none(pmdp_get(new_pmd))))
 		return false;

 	/*
@@ -391,7 +391,7 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
 	if (new_ptl != old_ptl)
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

-	pmd = *old_pmd;
+	pmd = pmdp_get(old_pmd);

 	/* Racing with collapse? */
 	if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd)))
@@ -400,7 +400,7 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
 	pmd_clear(old_pmd);
 	res = true;

-	VM_BUG_ON(!pmd_none(*new_pmd));
+	VM_BUG_ON(!pmd_none(pmdp_get(new_pmd)));

 	pmd_populate(mm, new_pmd, pmd_pgtable(pmd));
 	flush_tlb_range(vma, pmc->old_addr, pmc->old_addr + PMD_SIZE);
@@ -436,7 +436,7 @@ static bool move_normal_pud(struct pagetable_move_control *pmc,
 	 * The destination pud shouldn't be established, free_pgtables()
 	 * should have released it.
 	 */
-	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+	if (WARN_ON_ONCE(!pud_none(pudp_get(new_pud))))
 		return false;

 	/*
@@ -449,10 +449,10 @@ static bool move_normal_pud(struct pagetable_move_control *pmc,
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

 	/* Clear the pud */
-	pud = *old_pud;
+	pud = pudp_get(old_pud);
 	pud_clear(old_pud);

-	VM_BUG_ON(!pud_none(*new_pud));
+	VM_BUG_ON(!pud_none(pudp_get(new_pud)));

 	pud_populate(mm, new_pud, pud_pgtable(pud));
 	flush_tlb_range(vma, pmc->old_addr, pmc->old_addr + PUD_SIZE);
@@ -483,7 +483,7 @@ static bool move_huge_pud(struct pagetable_move_control *pmc,
 	 * The destination pud shouldn't be established, free_pgtables()
 	 * should have released it.
 	 */
-	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+	if (WARN_ON_ONCE(!pud_none(pudp_get(new_pud))))
 		return false;

 	/*
@@ -496,10 +496,10 @@ static bool move_huge_pud(struct pagetable_move_control *pmc,
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

 	/* Clear the pud */
-	pud = *old_pud;
+	pud = pudp_get(old_pud);
 	pud_clear(old_pud);

-	VM_BUG_ON(!pud_none(*new_pud));
+	VM_BUG_ON(!pud_none(pudp_get(new_pud)));

 	/* Set the new pud */
 	/* mark soft_ditry when we add pud level soft dirty support */
@@ -828,7 +828,7 @@ unsigned long move_page_tables(struct pagetable_move_control *pmc)
 		new_pud = alloc_new_pud(mm, pmc->new_addr);
 		if (!new_pud)
 			break;
-		if (pud_trans_huge(*old_pud)) {
+		if (pud_trans_huge(pudp_get(old_pud))) {
 			if (extent == HPAGE_PUD_SIZE) {
 				move_pgt_entry(pmc, HPAGE_PUD, old_pud, new_pud);
 				/* We ignore and continue on error? */
@@ -847,7 +847,7 @@ unsigned long move_page_tables(struct pagetable_move_control *pmc)
 		if (!new_pmd)
 			break;
 again:
-		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) {
+		if (is_swap_pmd(pmdp_get(old_pmd)) || pmd_trans_huge(pmdp_get(old_pmd))) {
 			if (extent == HPAGE_PMD_SIZE &&
 			    move_pgt_entry(pmc, HPAGE_PMD, old_pmd, new_pmd))
 				continue;
@@ -861,7 +861,7 @@ unsigned long move_page_tables(struct pagetable_move_control *pmc)
 			if (move_pgt_entry(pmc, NORMAL_PMD, old_pmd, new_pmd))
 				continue;
 		}
-		if (pmd_none(*old_pmd))
+		if (pmd_none(pmdp_get(old_pmd)))
 			continue;
 		if (pte_alloc(pmc->new->vm_mm, new_pmd))
 			break;
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 4eeca782b888..31f4c39d20ef 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -230,7 +230,7 @@ void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
 	page_table_check_pmd_flags(pmd);

 	for (i = 0; i < nr; i++)
-		__page_table_check_pmd_clear(mm, *(pmdp + i));
+		__page_table_check_pmd_clear(mm, pmdp_get(pmdp + i));
 	if (pmd_user_accessible_page(pmd))
 		page_table_check_set(pmd_pfn(pmd), stride * nr, pmd_write(pmd));
 }
@@ -246,7 +246,7 @@ void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
 		return;

 	for (i = 0; i < nr; i++)
-		__page_table_check_pud_clear(mm, *(pudp + i));
+		__page_table_check_pud_clear(mm, pudp_get((pudp + i)));
 	if (pud_user_accessible_page(pud))
 		page_table_check_set(pud_pfn(pud), stride * nr, pud_write(pud));
 }
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index c498a91b6706..6c08d0215308 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -223,17 +223,17 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 restart:
 	do {
 		pgd = pgd_offset(mm, pvmw->address);
-		if (!pgd_present(*pgd)) {
+		if (!pgd_present(pgdp_get(pgd))) {
 			step_forward(pvmw, PGDIR_SIZE);
 			continue;
 		}
 		p4d = p4d_offset(pgd, pvmw->address);
-		if (!p4d_present(*p4d)) {
+		if (!p4d_present(p4dp_get(p4d))) {
 			step_forward(pvmw, P4D_SIZE);
 			continue;
 		}
 		pud = pud_offset(p4d, pvmw->address);
-		if (!pud_present(*pud)) {
+		if (!pud_present(pudp_get(pud))) {
 			step_forward(pvmw, PUD_SIZE);
 			continue;
 		}
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 9f91cf85a5be..269ba20b63cf 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -109,7 +109,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pmd_addr_end(addr, end);
-		if (pmd_none(*pmd)) {
+		if (pmd_none(pmdp_get(pmd))) {
 			if (has_install)
 				err = __pte_alloc(walk->mm, pmd);
 			else if (ops->pte_hole)
@@ -143,13 +143,13 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 			 * We are ONLY installing, so avoid unnecessarily
 			 * splitting a present huge page.
 			 */
-			if (pmd_present(*pmd) && pmd_trans_huge(*pmd))
+			if (pmd_present(pmdp_get(pmd)) && pmd_trans_huge(pmdp_get(pmd)))
 				continue;
 		}

 		if (walk->vma)
 			split_huge_pmd(walk->vma, pmd, addr);
-		else if (pmd_leaf(*pmd) || !pmd_present(*pmd))
+		else if (pmd_leaf(pmdp_get(pmd)) || !pmd_present(pmdp_get(pmd)))
 			continue; /* Nothing to do. */

 		err = walk_pte_range(pmd, addr, next, walk);
@@ -179,7 +179,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pud_addr_end(addr, end);
-		if (pud_none(*pud)) {
+		if (pud_none(pudp_get(pud))) {
 			if (has_install)
 				err = __pmd_alloc(walk->mm, pud, addr);
 			else if (ops->pte_hole)
@@ -209,16 +209,16 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 			 * We are ONLY installing, so avoid unnecessarily
 			 * splitting a present huge page.
 			 */
-			if (pud_present(*pud) && pud_trans_huge(*pud))
+			if (pud_present(pudp_get(pud)) && pud_trans_huge(pudp_get(pud)))
 				continue;
 		}

 		if (walk->vma)
 			split_huge_pud(walk->vma, pud, addr);
-		else if (pud_leaf(*pud) || !pud_present(*pud))
+		else if (pud_leaf(pudp_get(pud)) || !pud_present(pudp_get(pud)))
 			continue; /* Nothing to do. */

-		if (pud_none(*pud))
+		if (pud_none(pudp_get(pud)))
 			goto again;

 		err = walk_pmd_range(pud, addr, next, walk);
diff --git a/mm/percpu.c b/mm/percpu.c
index 81462ce5866e..1652beb28917 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3136,25 +3136,25 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 	pud_t *pud;
 	pmd_t *pmd;

-	if (pgd_none(*pgd)) {
+	if (pgd_none(pgdp_get(pgd))) {
 		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
 		pgd_populate_kernel(addr, pgd, p4d);
 	}

 	p4d = p4d_offset(pgd, addr);
-	if (p4d_none(*p4d)) {
+	if (p4d_none(p4dp_get(p4d))) {
 		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
 		p4d_populate_kernel(addr, p4d, pud);
 	}

 	pud = pud_offset(p4d, addr);
-	if (pud_none(*pud)) {
+	if (pud_none(pudp_get(pud))) {
 		pmd = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
 		pud_populate(&init_mm, pud, pmd);
 	}

 	pmd = pmd_offset(pud, addr);
-	if (!pmd_present(*pmd)) {
+	if (!pmd_present(pmdp_get(pmd))) {
 		pte_t *new;

 		new = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
diff --git a/mm/pgalloc-track.h b/mm/pgalloc-track.h
index e9e879de8649..c5bb948416f0 100644
--- a/mm/pgalloc-track.h
+++ b/mm/pgalloc-track.h
@@ -7,7 +7,7 @@ static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
 				     unsigned long address,
 				     pgtbl_mod_mask *mod_mask)
 {
-	if (unlikely(pgd_none(*pgd))) {
+	if (unlikely(pgd_none(pgdp_get(pgd)))) {
 		if (__p4d_alloc(mm, pgd, address))
 			return NULL;
 		*mod_mask |= PGTBL_PGD_MODIFIED;
@@ -20,7 +20,7 @@ static inline pud_t *pud_alloc_track(struct mm_struct *mm, p4d_t *p4d,
 				     unsigned long address,
 				     pgtbl_mod_mask *mod_mask)
 {
-	if (unlikely(p4d_none(*p4d))) {
+	if (unlikely(p4d_none(p4dp_get(p4d)))) {
 		if (__pud_alloc(mm, p4d, address))
 			return NULL;
 		*mod_mask |= PGTBL_P4D_MODIFIED;
@@ -33,7 +33,7 @@ static inline pmd_t *pmd_alloc_track(struct mm_struct *mm, pud_t *pud,
 				     unsigned long address,
 				     pgtbl_mod_mask *mod_mask)
 {
-	if (unlikely(pud_none(*pud))) {
+	if (unlikely(pud_none(pudp_get(pud)))) {
 		if (__pmd_alloc(mm, pud, address))
 			return NULL;
 		*mod_mask |= PGTBL_PUD_MODIFIED;
@@ -44,7 +44,7 @@ static inline pmd_t *pmd_alloc_track(struct mm_struct *mm, pud_t *pud,
 #endif /* CONFIG_MMU */

 #define pte_alloc_kernel_track(pmd, address, mask)			\
-	((unlikely(pmd_none(*(pmd))) &&					\
+	((unlikely(pmd_none(pmdp_get(pmd))) &&				\
 	  (__pte_alloc_kernel(pmd) || ({*(mask)|=PGTBL_PMD_MODIFIED;0;})))?\
 		NULL: pte_offset_kernel(pmd, address))

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 567e2d084071..63a573306bfa 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -24,14 +24,14 @@

 void pgd_clear_bad(pgd_t *pgd)
 {
-	pgd_ERROR(*pgd);
+	pgd_ERROR(pgdp_get(pgd));
 	pgd_clear(pgd);
 }

 #ifndef __PAGETABLE_P4D_FOLDED
 void p4d_clear_bad(p4d_t *p4d)
 {
-	p4d_ERROR(*p4d);
+	p4d_ERROR(p4dp_get(p4d));
 	p4d_clear(p4d);
 }
 #endif
@@ -39,7 +39,7 @@ void p4d_clear_bad(p4d_t *p4d)
 #ifndef __PAGETABLE_PUD_FOLDED
 void pud_clear_bad(pud_t *pud)
 {
-	pud_ERROR(*pud);
+	pud_ERROR(pudp_get(pud));
 	pud_clear(pud);
 }
 #endif
@@ -51,7 +51,7 @@ void pud_clear_bad(pud_t *pud)
 */
 void pmd_clear_bad(pmd_t *pmd)
 {
-	pmd_ERROR(*pmd);
+	pmd_ERROR(pmdp_get(pmd));
 	pmd_clear(pmd);
 }

@@ -110,7 +110,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,
 			  pmd_t entry, int dirty)
 {
-	int changed = !pmd_same(*pmdp, entry);
+	int changed = !pmd_same(pmdp_get(pmdp), entry);
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 	if (changed) {
 		set_pmd_at(vma->vm_mm, address, pmdp, entry);
@@ -139,7 +139,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 {
 	pmd_t pmd;
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
-	VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp));
+	VM_BUG_ON(pmd_present(pmdp_get(pmdp)) && !pmd_trans_huge(pmdp_get(pmdp)));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
 	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
@@ -152,7 +152,7 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 	pud_t pud;

 	VM_BUG_ON(address & ~HPAGE_PUD_MASK);
-	VM_BUG_ON(!pud_trans_huge(*pudp));
+	VM_BUG_ON(!pud_trans_huge(pudp_get(pudp)));
 	pud = pudp_huge_get_and_clear(vma->vm_mm, address, pudp);
 	flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE);
 	return pud;
@@ -197,8 +197,9 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 		      pmd_t *pmdp)
 {
-	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
-	pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp));
+	VM_WARN_ON_ONCE(!pmd_present(pmdp_get(pmdp)));
+	pmd_t old = pmdp_establish(vma, address, pmdp,
+				   pmd_mkinvalid(pmdp_get(pmdp)));
 	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return old;
 }
@@ -208,7 +209,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
 			 pmd_t *pmdp)
 {
-	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
+	VM_WARN_ON_ONCE(!pmd_present(pmdp_get(pmdp)));
 	return pmdp_invalidate(vma, address, pmdp);
 }
 #endif
@@ -224,7 +225,7 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 	pmd_t pmd;

 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
-	VM_BUG_ON(pmd_trans_huge(*pmdp));
+	VM_BUG_ON(pmd_trans_huge(pmdp_get(pmdp)));
 	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

 	/* collapse entails shooting down ptes not pmd */
diff --git a/mm/rmap.c b/mm/rmap.c
index ac4f783d6ec2..aafefc1d7955 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -819,15 +819,15 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
 	pmd_t *pmd = NULL;

 	pgd = pgd_offset(mm, address);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		goto out;

 	p4d = p4d_offset(pgd, address);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		goto out;

 	pud = pud_offset(p4d, address);
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		goto out;

 	pmd = pmd_offset(pud, address);
@@ -1048,7 +1048,7 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;

-			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
+			if (!pmd_dirty(pmdp_get(pmd)) && !pmd_write(pmdp_get(pmd)))
 				continue;

 			flush_cache_range(vma, address,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 37522d6cb398..be065c57611d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -198,7 +198,7 @@ static void * __meminit vmemmap_alloc_block_zero(unsigned long size, int node)
 pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
 {
 	pmd_t *pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd)) {
+	if (pmd_none(pmdp_get(pmd))) {
 		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
@@ -211,7 +211,7 @@ pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
 pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
 {
 	pud_t *pud = pud_offset(p4d, addr);
*pud =3D pud_offset(p4d, addr); - if (pud_none(*pud)) { + if (pud_none(pudp_get(pud))) { void *p =3D vmemmap_alloc_block_zero(PAGE_SIZE, node); if (!p) return NULL; @@ -224,7 +224,7 @@ pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsi= gned long addr, int node) p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int= node) { p4d_t *p4d =3D p4d_offset(pgd, addr); - if (p4d_none(*p4d)) { + if (p4d_none(p4dp_get(p4d))) { void *p =3D vmemmap_alloc_block_zero(PAGE_SIZE, node); if (!p) return NULL; @@ -237,7 +237,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsi= gned long addr, int node) pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node) { pgd_t *pgd =3D pgd_offset_k(addr); - if (pgd_none(*pgd)) { + if (pgd_none(pgdp_get(pgd))) { void *p =3D vmemmap_alloc_block_zero(PAGE_SIZE, node); if (!p) return NULL; diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index af61b95c89e4..931c26914ef5 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -1306,8 +1306,8 @@ static long move_pages_ptes(struct mm_struct *mm, pmd= _t *dst_pmd, pmd_t *src_pmd } =20 /* Sanity checks before the operation */ - if (pmd_none(*dst_pmd) || pmd_none(*src_pmd) || - pmd_trans_huge(*dst_pmd) || pmd_trans_huge(*src_pmd)) { + if (pmd_none(pmdp_get(dst_pmd)) || pmd_none(pmdp_get(src_pmd)) || + pmd_trans_huge(pmdp_get(dst_pmd)) || pmd_trans_huge(pmdp_get(src_pmd)= )) { ret =3D -EINVAL; goto out; } @@ -1897,8 +1897,8 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsig= ned long dst_start, if (move_splits_huge_pmd(dst_addr, src_addr, src_start + len) || !pmd_none(dst_pmdval)) { /* Can be a migration entry */ - if (pmd_present(*src_pmd)) { - struct folio *folio =3D pmd_folio(*src_pmd); + if (pmd_present(pmdp_get(src_pmd))) { + struct folio *folio =3D pmd_folio(pmdp_get(src_pmd)); =20 if (!is_huge_zero_folio(folio) && !PageAnonExclusive(&folio->page)) { @@ -1921,7 +1921,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsig= ned long dst_start, } else { long ret; =20 - if (pmd_none(*src_pmd)) { + if (pmd_none(pmdp_get(src_pmd))) { if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) { err =3D -ENOENT; break; diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 798b2ed21e46..7bafe94d501f 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -155,7 +155,7 @@ static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long = addr, unsigned long end, if (!IS_ALIGNED(phys_addr, PMD_SIZE)) return 0; =20 - if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) + if (pmd_present(pmdp_get(pmd)) && !pmd_free_pte_page(pmd, addr)) return 0; =20 return pmd_set_huge(pmd, phys_addr, prot); @@ -205,7 +205,7 @@ static int vmap_try_huge_pud(pud_t *pud, unsigned long = addr, unsigned long end, if (!IS_ALIGNED(phys_addr, PUD_SIZE)) return 0; =20 - if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) + if (pud_present(pudp_get(pud)) && !pud_free_pmd_page(pud, addr)) return 0; =20 return pud_set_huge(pud, phys_addr, prot); @@ -256,7 +256,7 @@ static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long = addr, unsigned long end, if (!IS_ALIGNED(phys_addr, P4D_SIZE)) return 0; =20 - if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) + if (p4d_present(p4dp_get(p4d)) && !p4d_free_pud_page(p4d, addr)) return 0; =20 return p4d_set_huge(p4d, phys_addr, prot); @@ -367,7 +367,8 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long = addr, unsigned long end, if (size !=3D PAGE_SIZE) { if (WARN_ON(!IS_ALIGNED(addr, size))) { addr =3D ALIGN_DOWN(addr, size); - pte =3D PTR_ALIGN_DOWN(pte, sizeof(*pte) * (size >> 
PAGE_SHIFT)); + pte =3D PTR_ALIGN_DOWN(pte, + sizeof(ptep_get(pte)) * (size >> PAGE_SHIFT)); } ptent =3D huge_ptep_get_and_clear(&init_mm, addr, pte, size); if (WARN_ON(end - addr < size)) @@ -394,7 +395,7 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long = addr, unsigned long end, next =3D pmd_addr_end(addr, end); =20 cleared =3D pmd_clear_huge(pmd); - if (cleared || pmd_bad(*pmd)) + if (cleared || pmd_bad(pmdp_get(pmd))) *mask |=3D PGTBL_PMD_MODIFIED; =20 if (cleared) { @@ -421,7 +422,7 @@ static void vunmap_pud_range(p4d_t *p4d, unsigned long = addr, unsigned long end, next =3D pud_addr_end(addr, end); =20 cleared =3D pud_clear_huge(pud); - if (cleared || pud_bad(*pud)) + if (cleared || pud_bad(pudp_get(pud))) *mask |=3D PGTBL_PUD_MODIFIED; =20 if (cleared) { @@ -445,7 +446,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long = addr, unsigned long end, next =3D p4d_addr_end(addr, end); =20 p4d_clear_huge(p4d); - if (p4d_bad(*p4d)) + if (p4d_bad(p4dp_get(p4d))) *mask |=3D PGTBL_P4D_MODIFIED; =20 if (p4d_none_or_clear_bad(p4d)) @@ -477,7 +478,7 @@ void __vunmap_range_noflush(unsigned long start, unsign= ed long end) pgd =3D pgd_offset_k(addr); do { next =3D pgd_addr_end(addr, end); - if (pgd_bad(*pgd)) + if (pgd_bad(pgdp_get(pgd))) mask |=3D PGTBL_PGD_MODIFIED; if (pgd_none_or_clear_bad(pgd)) continue; @@ -622,7 +623,7 @@ static int vmap_small_pages_range_noflush(unsigned long= addr, unsigned long end, pgd =3D pgd_offset_k(addr); do { next =3D pgd_addr_end(addr, end); - if (pgd_bad(*pgd)) + if (pgd_bad(pgdp_get(pgd))) mask |=3D PGTBL_PGD_MODIFIED; err =3D vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask); if (err) @@ -792,35 +793,35 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) */ VIRTUAL_BUG_ON(!is_vmalloc_or_module_addr(vmalloc_addr)); =20 - if (pgd_none(*pgd)) + if (pgd_none(pgdp_get(pgd))) return NULL; - if (WARN_ON_ONCE(pgd_leaf(*pgd))) + if (WARN_ON_ONCE(pgd_leaf(pgdp_get(pgd)))) return NULL; /* XXX: no allowance for huge pgd */ - if (WARN_ON_ONCE(pgd_bad(*pgd))) + if (WARN_ON_ONCE(pgd_bad(pgdp_get(pgd)))) return NULL; =20 p4d =3D p4d_offset(pgd, addr); - if (p4d_none(*p4d)) + if (p4d_none(p4dp_get(p4d))) return NULL; - if (p4d_leaf(*p4d)) - return p4d_page(*p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT); - if (WARN_ON_ONCE(p4d_bad(*p4d))) + if (p4d_leaf(p4dp_get(p4d))) + return p4d_page(p4dp_get(p4d)) + ((addr & ~P4D_MASK) >> PAGE_SHIFT); + if (WARN_ON_ONCE(p4d_bad(p4dp_get(p4d)))) return NULL; =20 pud =3D pud_offset(p4d, addr); - if (pud_none(*pud)) + if (pud_none(pudp_get(pud))) return NULL; - if (pud_leaf(*pud)) - return pud_page(*pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - if (WARN_ON_ONCE(pud_bad(*pud))) + if (pud_leaf(pudp_get(pud))) + return pud_page(pudp_get(pud)) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); + if (WARN_ON_ONCE(pud_bad(pudp_get(pud)))) return NULL; =20 pmd =3D pmd_offset(pud, addr); - if (pmd_none(*pmd)) + if (pmd_none(pmdp_get(pmd))) return NULL; - if (pmd_leaf(*pmd)) - return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - if (WARN_ON_ONCE(pmd_bad(*pmd))) + if (pmd_leaf(pmdp_get(pmd))) + return pmd_page(pmdp_get(pmd)) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); + if (WARN_ON_ONCE(pmd_bad(pmdp_get(pmd)))) return NULL; =20 ptep =3D pte_offset_kernel(pmd, addr); diff --git a/mm/vmscan.c b/mm/vmscan.c index 2239de111fa6..4401d20548e0 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3612,7 +3612,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigne= d long addr, struct vm_area DEFINE_MAX_SEQ(walk->lruvec); int gen =3D 
lru_gen_from_seq(max_seq); =20 - VM_WARN_ON_ONCE(pud_leaf(*pud)); + VM_WARN_ON_ONCE(pud_leaf(pudp_get(pud))); =20 /* try to batch at most 1+MIN_LRU_BATCH+1 entries */ if (*first =3D=3D -1) { @@ -3642,17 +3642,17 @@ static void walk_pmd_range_locked(pud_t *pud, unsig= ned long addr, struct vm_area /* don't round down the first address */ addr =3D i ? (*first & PMD_MASK) + i * PMD_SIZE : *first; =20 - if (!pmd_present(pmd[i])) + if (!pmd_present(pmdp_get(pmd + i))) goto next; =20 - if (!pmd_trans_huge(pmd[i])) { + if (!pmd_trans_huge(pmdp_get(pmd + i))) { if (!walk->force_scan && should_clear_pmd_young() && !mm_has_notifiers(args->mm)) pmdp_test_and_clear_young(vma, addr, pmd + i); goto next; } =20 - pfn =3D get_pmd_pfn(pmd[i], vma, addr, pgdat); + pfn =3D get_pmd_pfn(pmdp_get(pmd + i), vma, addr, pgdat); if (pfn =3D=3D -1) goto next; =20 @@ -3670,7 +3670,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigne= d long addr, struct vm_area dirty =3D false; } =20 - if (pmd_dirty(pmd[i])) + if (pmd_dirty(pmdp_get(pmd + i))) dirty =3D true; =20 walk->mm_stats[MM_LEAF_YOUNG]++; @@ -3699,7 +3699,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long = start, unsigned long end, struct lru_gen_mm_walk *walk =3D args->private; struct lru_gen_mm_state *mm_state =3D get_mm_state(walk->lruvec); =20 - VM_WARN_ON_ONCE(pud_leaf(*pud)); + VM_WARN_ON_ONCE(pud_leaf(pudp_get(pud))); =20 /* * Finish an entire PMD in two passes: the first only reaches to PTE @@ -3768,7 +3768,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long s= tart, unsigned long end, unsigned long next; struct lru_gen_mm_walk *walk =3D args->private; =20 - VM_WARN_ON_ONCE(p4d_leaf(*p4d)); + VM_WARN_ON_ONCE(p4d_leaf(p4dp_get(p4d))); =20 pud =3D pud_offset(p4d, start & P4D_MASK); restart: --=20 2.47.2 From nobody Tue Dec 9 02:55:26 2025 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB2832DA755 for ; Thu, 13 Nov 2025 01:47:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.176 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998432; cv=none; b=Zg67q7wX1fPRrRXxbQAsKAOYSBg0Rc4OrdaAtyCoWlohLriJ8gQYqXRyhSJsA09x6kDrM+3+hKqmZ5rKBhuj6IrS9jCsKkCH95mqT+EV75ebDmIW7xNoDaW9GvzFC04dbTu/e71jS724BuGaRiy3Sev+PvcW57mud8HlhgF9mOE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998432; c=relaxed/simple; bh=DC8FuULDAPtQHN2narc7WgVVrVXyRDKjRqZjq5V9jfg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uKDwbXw3UC81BqSmb/DF5X3aV70OCnuXEl0EqKlHp2T8+8XO39HP10LfaMIKYluUVILqbF+mhLKYdvoCQrDs5PP5+EY1Syp4uNfYCYkaiR2UYcBRUBCzjjjh0hkM/V0+uFZVg7Y6vQshHQBGsPh6bu+EUZZXuYpdeMQuGfIeprA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com; spf=pass smtp.mailfrom=sifive.com; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b=hg9WZ1HK; arc=none smtp.client-ip=209.85.214.176 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="hg9WZ1HK" Received: by mail-pl1-f176.google.com with SMTP id 

From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org, Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan, linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko, Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti, Emil Renner Berthing, Rob Herring, Vlastimil Babka, "Liam R. Howlett", Samuel Holland, Andy Whitcroft, Dwaipayan Ray, Joe Perches, Lukas Bulwahn
Subject: [PATCH v3 07/22] checkpatch: Warn on page table access without accessors
Date: Wed, 12 Nov 2025 17:45:20 -0800
Message-ID: <20251113014656.2605447-8-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

Architectures may have special rules for accessing the hardware page
tables (for example, atomicity/ordering requirements), so the generic
MM code provides the pXXp_get() and set_pXX() hooks for architectures
to implement. These accessor functions are often omitted where a raw
pointer dereference is believed to be safe (i.e. race-free). However,
RISC-V needs to use these hooks to rewrite the page table values at
read/write time on some platforms. A raw pointer dereference will no
longer produce the correct value on those platforms, so the generic
code must always use the accessor functions.

sparse could only report improper pointer dereferences if every page
table pointer (variable, function argument, struct member) were
individually marked with an attribute (similar to __user). While that
is possible, it would require invasive changes across all
architectures. Instead, as an immediate first solution, add a
checkpatch warning that will generally catch the prohibited pointer
dereferences. Architecture code is ignored, as the raw dereferences
may be safe on some architectures.

Signed-off-by: Samuel Holland
---

Changes in v3:
 - New patch for v3

 scripts/checkpatch.pl | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 92669904eecc..55984d7361ea 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -7721,6 +7721,13 @@ sub process {
 				ERROR("MISSING_SENTINEL",
 				      "missing sentinel in ID array\n" . "$here\n$stat\n");
 			}
 		}
+
+# check for raw dereferences of hardware page table pointers
+		if ($realfile !~ m@^arch/@ &&
+		    $line =~ /(?))?(pte|p[mu4g]d)p?\b/) {
+			WARN("PAGE_TABLE_ACCESSORS",
+			     "Use $3p_get()/set_$3() instead of dereferencing page table pointers\n" . $herecurr);
+		}
 	}
 
 # If we have no input at all, then there is nothing to report on
-- 
2.47.2
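[Editorial note: to make the new rule concrete, here is a sketch of the
kind of code the warning is meant to flag. example_lookup() is a
hypothetical function, not code from this series; pgd_offset(),
pgd_none(), and pgdp_get() are the real generic MM helpers.]

	static int example_lookup(struct mm_struct *mm, unsigned long addr)
	{
		pgd_t *pgdp = pgd_offset(mm, addr);

		if (pgd_none(*pgdp))		/* flagged: raw dereference */
			return -ENOENT;

		if (pgd_none(pgdp_get(pgdp)))	/* preferred: accessor */
			return -ENOENT;

		return 0;
	}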

From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org, Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan, linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko, Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti, Emil Renner Berthing, Rob Herring, Vlastimil Babka, "Liam R. Howlett", Samuel Holland
Subject: [PATCH v3 08/22] mm: Allow page table accessors to be non-idempotent
Date: Wed, 12 Nov 2025 17:45:21 -0800
Message-ID: <20251113014656.2605447-9-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

Currently, some functions such as pte_offset_map() are passed both
pointers to hardware page tables and pointers to previously-read PMD
entries on the stack. To ensure correctness in the first case, these
functions must use the page table accessor function (pmdp_get()) to
dereference the supplied pointer. However, this means pmdp_get() is
called twice in the second case. This double call must be avoided if
pmdp_get() applies some non-idempotent transformation to the value.

Avoid the double transformation by calling set_pmd() on the stack
variables where necessary, keeping the set_pmd()/pmdp_get() calls
balanced.
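[Editorial note: an illustrative sketch of the balancing pattern. The
transformation "T" and the example_map() helper are hypothetical, not
actual RISC-V behavior; the set_pmd()-before-pte_offset_map() idiom is
the one this patch applies in mm/gup.c and elsewhere.]

	/*
	 * Suppose pmdp_get() applies a transformation T when reading an
	 * entry and set_pmd() applies T's inverse when writing one.
	 */
	static pte_t *example_map(pmd_t *pmdp, unsigned long addr)
	{
		pmd_t pmdval = pmdp_get(pmdp);	/* T applied once: plain value */

		/*
		 * pte_offset_map() calls pmdp_get() on the pointer it is
		 * given, which would apply T a second time to this stack
		 * copy.  Writing the copy back through set_pmd() first
		 * applies T's inverse, so the internal pmdp_get() sees the
		 * same format as a real page table entry.
		 */
		set_pmd(&pmdval, pmdval);
		return pte_offset_map(&pmdval, addr);
	}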
Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - New patch for v2

 kernel/events/core.c  | 2 ++
 mm/gup.c              | 3 +++
 mm/khugepaged.c       | 6 ++++--
 mm/page_table_check.c | 3 +++
 mm/pgtable-generic.c  | 2 ++
 5 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index fa4f9165bd94..7969b060bf2d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8154,6 +8154,8 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr)
 	if (pmd_leaf(pmd))
 		return pmd_leaf_size(pmd);
 
+	/* transform pmd as if &pmd pointed to a hardware page table */
+	set_pmd(&pmd, pmd);
 	ptep = pte_offset_map(&pmd, addr);
 	if (!ptep)
 		goto again;
diff --git a/mm/gup.c b/mm/gup.c
index 549f9e868311..aba61704049e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2844,7 +2844,10 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 	int ret = 0;
 	pte_t *ptep, *ptem;
 
+	/* transform pmd as if &pmd pointed to a hardware page table */
+	set_pmd(&pmd, pmd);
 	ptem = ptep = pte_offset_map(&pmd, addr);
+	pmd = pmdp_get(&pmd);
 	if (!ptep)
 		return 0;
 	do {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1bff8ade751a..ab1f68a7bc83 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1724,7 +1724,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 	struct mmu_notifier_range range;
 	struct mm_struct *mm;
 	unsigned long addr;
-	pmd_t *pmd, pgt_pmd;
+	pmd_t *pmd, pgt_pmd, pmdval;
 	spinlock_t *pml;
 	spinlock_t *ptl;
 	bool success = false;
@@ -1777,7 +1777,9 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 */
 		if (check_pmd_state(pmd) != SCAN_SUCCEED)
 			goto drop_pml;
-		ptl = pte_lockptr(mm, pmd);
+		/* pte_lockptr() needs a value, not a pointer to a page table */
+		pmdval = pmdp_get(pmd);
+		ptl = pte_lockptr(mm, &pmdval);
 		if (ptl != pml)
 			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
 
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 31f4c39d20ef..77d6688db0de 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -260,7 +260,10 @@ void __page_table_check_pte_clear_range(struct mm_struct *mm,
 		return;
 
 	if (!pmd_bad(pmd) && !pmd_leaf(pmd)) {
+		/* transform pmd as if &pmd pointed to a hardware page table */
+		set_pmd(&pmd, pmd);
 		pte_t *ptep = pte_offset_map(&pmd, addr);
+		pmd = pmdp_get(&pmd);
 		unsigned long i;
 
 		if (WARN_ON(!ptep))
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 63a573306bfa..6602deb002f1 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -299,6 +299,8 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 		pmd_clear_bad(pmd);
 		goto nomap;
 	}
+	/* transform pmdval as if &pmdval pointed to a hardware page table */
+	set_pmd(&pmdval, pmdval);
 	return __pte_map(&pmdval, addr);
 nomap:
 	rcu_read_unlock();
-- 
2.47.2

From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org, Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan, linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko, Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti, Emil Renner Berthing, Rob Herring, Vlastimil Babka, "Liam R. Howlett", Samuel Holland
Subject: [PATCH v3 09/22] riscv: hibernate: Replace open-coded pXXp_get()
Date: Wed, 12 Nov 2025 17:45:22 -0800
Message-ID: <20251113014656.2605447-10-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

Use the semantically appropriate accessor functions instead of
open-coding their implementation. This will become important once
these functions start transforming the PTE value on some platforms.

Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - New patch for v2

 arch/riscv/kernel/hibernate.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/riscv/kernel/hibernate.c b/arch/riscv/kernel/hibernate.c
index 671b686c0158..2a9bc9d9e776 100644
--- a/arch/riscv/kernel/hibernate.c
+++ b/arch/riscv/kernel/hibernate.c
@@ -171,7 +171,7 @@ static int temp_pgtable_map_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long
 	pte_t *src_ptep;
 	pte_t *dst_ptep;
 
-	if (pmd_none(READ_ONCE(*dst_pmdp))) {
+	if (pmd_none(pmdp_get(dst_pmdp))) {
 		dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC);
 		if (!dst_ptep)
 			return -ENOMEM;
@@ -183,7 +183,7 @@ static int temp_pgtable_map_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long
 	src_ptep = pte_offset_kernel(src_pmdp, start);
 
 	do {
-		pte_t pte = READ_ONCE(*src_ptep);
+		pte_t pte = ptep_get(src_ptep);
 
 		if (pte_present(pte))
 			set_pte(dst_ptep, __pte(pte_val(pte) | pgprot_val(prot)));
@@ -200,7 +200,7 @@ static int temp_pgtable_map_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long
 	pmd_t *src_pmdp;
 	pmd_t *dst_pmdp;
 
-	if (pud_none(READ_ONCE(*dst_pudp))) {
+	if (pud_none(pudp_get(dst_pudp))) {
 		dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC);
 		if (!dst_pmdp)
 			return -ENOMEM;
@@ -212,7 +212,7 @@ static int temp_pgtable_map_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long
 	src_pmdp = pmd_offset(src_pudp, start);
 
 	do {
-		pmd_t pmd = READ_ONCE(*src_pmdp);
+		pmd_t pmd = pmdp_get(src_pmdp);
 
 		next = pmd_addr_end(start, end);
 
@@ -239,7 +239,7 @@ static int temp_pgtable_map_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long
 	pud_t *dst_pudp;
 	pud_t *src_pudp;
 
-	if (p4d_none(READ_ONCE(*dst_p4dp))) {
+	if (p4d_none(p4dp_get(dst_p4dp))) {
 		dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC);
 		if (!dst_pudp)
 			return -ENOMEM;
@@ -251,7 +251,7 @@ static int temp_pgtable_map_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long
 	src_pudp = pud_offset(src_p4dp, start);
 
 	do {
-		pud_t pud = READ_ONCE(*src_pudp);
+		pud_t pud = pudp_get(src_pudp);
 
 		next = pud_addr_end(start, end);
 
@@ -278,7 +278,7 @@ static int temp_pgtable_map_p4d(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long
 	p4d_t *dst_p4dp;
 	p4d_t *src_p4dp;
 
-	if (pgd_none(READ_ONCE(*dst_pgdp))) {
+	if (pgd_none(pgdp_get(dst_pgdp))) {
 		dst_p4dp = (p4d_t *)get_safe_page(GFP_ATOMIC);
 		if (!dst_p4dp)
 			return -ENOMEM;
@@ -290,7 +290,7 @@ static int temp_pgtable_map_p4d(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long
 	src_p4dp = p4d_offset(src_pgdp, start);
 
 	do {
-		p4d_t p4d = READ_ONCE(*src_p4dp);
+		p4d_t p4d = p4dp_get(src_p4dp);
 
 		next = p4d_addr_end(start, end);
 
@@ -317,7 +317,7 @@ static int temp_pgtable_mapping(pgd_t *pgdp, unsigned long start, unsigned long
 	unsigned long ret;
 
 	do {
-		pgd_t pgd = READ_ONCE(*src_pgdp);
+		pgd_t pgd = pgdp_get(src_pgdp);
 
 		next = pgd_addr_end(start, end);
 
-- 
2.47.2
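[Editorial note: for reference, the reason the two spellings are
equivalent today is that the generic fallback in
include/linux/pgtable.h is essentially a READ_ONCE(). A sketch of that
default (paraphrased; architectures may override it):]

	#ifndef pmdp_get
	static inline pmd_t pmdp_get(pmd_t *pmdp)
	{
		return READ_ONCE(*pmdp);
	}
	#endif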

From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org, Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan, linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko, Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti, Emil Renner Berthing, Rob Herring, Vlastimil Babka, "Liam R. Howlett", Samuel Holland
Subject: [PATCH v3 10/22] riscv: mm: Always use page table accessor functions
Date: Wed, 12 Nov 2025 17:45:23 -0800
Message-ID: <20251113014656.2605447-11-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

Use the semantically appropriate accessor function instead of a raw
pointer dereference. This will become important once these functions
start transforming the PTE value on some platforms.
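[Editorial note: the conversions below follow one recurring pattern,
sketched here with a hypothetical example_populate() helper. It is
shown only to illustrate that array indexing is itself a raw
dereference, since pmdp[idx] expands to *(pmdp + idx).]

	static void example_populate(pmd_t *pmdp, unsigned long idx,
				     unsigned long pfn, pgprot_t prot)
	{
		/* was: if (pmd_none(pmdp[idx])) pmdp[idx] = pfn_pmd(pfn, prot); */
		if (pmd_none(pmdp_get(pmdp + idx)))
			set_pmd(pmdp + idx, pfn_pmd(pfn, prot));
	}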
Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - New patch for v2

 arch/riscv/include/asm/pgtable.h |  8 ++--
 arch/riscv/kvm/gstage.c          |  6 +--
 arch/riscv/mm/init.c             | 68 +++++++++++++++++---------------
 arch/riscv/mm/pgtable.c          |  9 +++--
 4 files changed, 49 insertions(+), 42 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 5a08eb5fe99f..acfd48f92010 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -952,7 +952,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 #ifdef CONFIG_SMP
 	pud_t pud = __pud(xchg(&pudp->pud, 0));
 #else
-	pud_t pud = *pudp;
+	pud_t pud = pudp_get(pudp);
 
 	pud_clear(pudp);
 #endif
@@ -1129,13 +1129,15 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
  */
 #define set_p4d_safe(p4dp, p4d) \
 ({ \
-	WARN_ON_ONCE(p4d_present(*p4dp) && !p4d_same(*p4dp, p4d)); \
+	p4d_t old = p4dp_get(p4dp); \
+	WARN_ON_ONCE(p4d_present(old) && !p4d_same(old, p4d)); \
 	set_p4d(p4dp, p4d); \
 })
 
 #define set_pgd_safe(pgdp, pgd) \
 ({ \
-	WARN_ON_ONCE(pgd_present(*pgdp) && !pgd_same(*pgdp, pgd)); \
+	pgd_t old = pgdp_get(pgdp); \
+	WARN_ON_ONCE(pgd_present(old) && !pgd_same(old, pgd)); \
 	set_pgd(pgdp, pgd); \
 })
 #endif /* !__ASSEMBLER__ */
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index b67d60d722c2..297744e2ab5d 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -154,7 +154,7 @@ int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
 		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 	}
 
-	if (pte_val(*ptep) != pte_val(map->pte)) {
+	if (pte_val(ptep_get(ptep)) != pte_val(map->pte)) {
 		set_pte(ptep, map->pte);
 		if (gstage_pte_leaf(ptep))
 			gstage_tlb_flush(gstage, current_level, map->addr);
@@ -241,12 +241,12 @@ void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,
 		if (op == GSTAGE_OP_CLEAR)
 			put_page(virt_to_page(next_ptep));
 	} else {
-		old_pte = *ptep;
+		old_pte = ptep_get(ptep);
 		if (op == GSTAGE_OP_CLEAR)
 			set_pte(ptep, __pte(0));
 		else if (op == GSTAGE_OP_WP)
 			set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));
-		if (pte_val(*ptep) != pte_val(old_pte))
+		if (pte_val(ptep_get(ptep)) != pte_val(old_pte))
 			gstage_tlb_flush(gstage, ptep_level, addr);
 	}
 }
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index d85efe74a4b6..ac686c1b2f85 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -459,8 +459,8 @@ static void __meminit create_pte_mapping(pte_t *ptep, uintptr_t va, phys_addr_t
 
 	BUG_ON(sz != PAGE_SIZE);
 
-	if (pte_none(ptep[pte_idx]))
-		ptep[pte_idx] = pfn_pte(PFN_DOWN(pa), prot);
+	if (pte_none(ptep_get(ptep + pte_idx)))
+		set_pte(ptep + pte_idx, pfn_pte(PFN_DOWN(pa), prot));
 }
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -542,18 +542,19 @@ static void __meminit create_pmd_mapping(pmd_t *pmdp,
 	uintptr_t pmd_idx = pmd_index(va);
 
 	if (sz == PMD_SIZE) {
-		if (pmd_none(pmdp[pmd_idx]))
-			pmdp[pmd_idx] = pfn_pmd(PFN_DOWN(pa), prot);
+		if (pmd_none(pmdp_get(pmdp + pmd_idx)))
+			set_pmd(pmdp + pmd_idx, pfn_pmd(PFN_DOWN(pa), prot));
 		return;
 	}
 
-	if (pmd_none(pmdp[pmd_idx])) {
+	if (pmd_none(pmdp_get(pmdp + pmd_idx))) {
 		pte_phys = pt_ops.alloc_pte(va);
-		pmdp[pmd_idx] = pfn_pmd(PFN_DOWN(pte_phys), PAGE_TABLE);
+		set_pmd(pmdp + pmd_idx,
+			pfn_pmd(PFN_DOWN(pte_phys), PAGE_TABLE));
 		ptep = pt_ops.get_pte_virt(pte_phys);
 		memset(ptep, 0, PAGE_SIZE);
 	} else {
-		pte_phys = PFN_PHYS(_pmd_pfn(pmdp[pmd_idx]));
+		pte_phys = PFN_PHYS(_pmd_pfn(pmdp_get(pmdp + pmd_idx)));
 		ptep = pt_ops.get_pte_virt(pte_phys);
 	}
 
@@ -644,18 +645,19 @@ static void __meminit create_pud_mapping(pud_t *pudp, uintptr_t va, phys_addr_t
 	uintptr_t pud_index = pud_index(va);
 
 	if (sz == PUD_SIZE) {
-		if (pud_val(pudp[pud_index]) == 0)
-			pudp[pud_index] = pfn_pud(PFN_DOWN(pa), prot);
+		if (pud_val(pudp_get(pudp + pud_index)) == 0)
+			set_pud(pudp + pud_index, pfn_pud(PFN_DOWN(pa), prot));
 		return;
 	}
 
-	if (pud_val(pudp[pud_index]) == 0) {
+	if (pud_val(pudp_get(pudp + pud_index)) == 0) {
 		next_phys = pt_ops.alloc_pmd(va);
-		pudp[pud_index] = pfn_pud(PFN_DOWN(next_phys), PAGE_TABLE);
+		set_pud(pudp + pud_index,
+			pfn_pud(PFN_DOWN(next_phys), PAGE_TABLE));
 		nextp = pt_ops.get_pmd_virt(next_phys);
 		memset(nextp, 0, PAGE_SIZE);
 	} else {
-		next_phys = PFN_PHYS(_pud_pfn(pudp[pud_index]));
+		next_phys = PFN_PHYS(_pud_pfn(pudp_get(pudp + pud_index)));
 		nextp = pt_ops.get_pmd_virt(next_phys);
 	}
 
@@ -670,18 +672,19 @@ static void __meminit create_p4d_mapping(p4d_t *p4dp, uintptr_t va, phys_addr_t
 	uintptr_t p4d_index = p4d_index(va);
 
 	if (sz == P4D_SIZE) {
-		if (p4d_val(p4dp[p4d_index]) == 0)
-			p4dp[p4d_index] = pfn_p4d(PFN_DOWN(pa), prot);
+		if (p4d_val(p4dp_get(p4dp + p4d_index)) == 0)
+			set_p4d(p4dp + p4d_index, pfn_p4d(PFN_DOWN(pa), prot));
 		return;
 	}
 
-	if (p4d_val(p4dp[p4d_index]) == 0) {
+	if (p4d_val(p4dp_get(p4dp + p4d_index)) == 0) {
 		next_phys = pt_ops.alloc_pud(va);
-		p4dp[p4d_index] = pfn_p4d(PFN_DOWN(next_phys), PAGE_TABLE);
+		set_p4d(p4dp + p4d_index,
+			pfn_p4d(PFN_DOWN(next_phys), PAGE_TABLE));
 		nextp = pt_ops.get_pud_virt(next_phys);
 		memset(nextp, 0, PAGE_SIZE);
 	} else {
-		next_phys = PFN_PHYS(_p4d_pfn(p4dp[p4d_index]));
+		next_phys = PFN_PHYS(_p4d_pfn(p4dp_get(p4dp + p4d_index)));
 		nextp = pt_ops.get_pud_virt(next_phys);
 	}
 
@@ -727,18 +730,19 @@ void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phy
 	uintptr_t pgd_idx = pgd_index(va);
 
 	if (sz == PGDIR_SIZE) {
-		if (pgd_val(pgdp[pgd_idx]) == 0)
-			pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(pa), prot);
+		if (pgd_val(pgdp_get(pgdp + pgd_idx)) == 0)
+			set_pgd(pgdp + pgd_idx, pfn_pgd(PFN_DOWN(pa), prot));
 		return;
 	}
 
-	if (pgd_val(pgdp[pgd_idx]) == 0) {
+	if (pgd_val(pgdp_get(pgdp + pgd_idx)) == 0) {
 		next_phys = alloc_pgd_next(va);
-		pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE);
+		set_pgd(pgdp + pgd_idx,
+			pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE));
 		nextp = get_pgd_next_virt(next_phys);
 		memset(nextp, 0, PAGE_SIZE);
 	} else {
-		next_phys = PFN_PHYS(_pgd_pfn(pgdp[pgd_idx]));
+		next_phys = PFN_PHYS(_pgd_pfn(pgdp_get(pgdp + pgd_idx)));
 		nextp = get_pgd_next_virt(next_phys);
 	}
 
@@ -1574,14 +1578,14 @@ struct execmem_info __init *execmem_arch_setup(void)
 #ifdef CONFIG_MEMORY_HOTPLUG
 static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
 {
-	struct page *page = pmd_page(*pmd);
+	struct page *page = pmd_page(pmdp_get(pmd));
 	struct ptdesc *ptdesc = page_ptdesc(page);
 	pte_t *pte;
 	int i;
 
 	for (i = 0; i < PTRS_PER_PTE; i++) {
 		pte = pte_start + i;
-		if (!pte_none(*pte))
+		if (!pte_none(ptep_get(pte)))
 			return;
 	}
 
@@ -1595,14 +1599,14 @@ static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
 
 static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud, bool is_vmemmap)
 {
-	struct page *page = pud_page(*pud);
+	struct page *page = pud_page(pudp_get(pud));
 	struct ptdesc *ptdesc = page_ptdesc(page);
 	pmd_t *pmd;
 	int i;
 
 	for (i = 0; i < PTRS_PER_PMD; i++) {
 		pmd = pmd_start + i;
-		if (!pmd_none(*pmd))
+		if (!pmd_none(pmdp_get(pmd)))
 			return;
 	}
 
@@ -1617,13 +1621,13 @@ static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud, bool is_vmemm
 
 static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d)
 {
-	struct page *page = p4d_page(*p4d);
+	struct page *page = p4d_page(p4dp_get(p4d));
 	pud_t *pud;
 	int i;
 
 	for (i = 0; i < PTRS_PER_PUD; i++) {
 		pud = pud_start + i;
-		if (!pud_none(*pud))
+		if (!pud_none(pudp_get(pud)))
 			return;
 	}
 
@@ -1668,7 +1672,7 @@ static void __meminit remove_pte_mapping(pte_t *pte_base, unsigned long addr, un
 
 		ptep = pte_base + pte_index(addr);
 		pte = ptep_get(ptep);
-		if (!pte_present(*ptep))
+		if (!pte_present(ptep_get(ptep)))
 			continue;
 
 		pte_clear(&init_mm, addr, ptep);
@@ -1698,7 +1702,7 @@ static void __meminit remove_pmd_mapping(pmd_t *pmd_base, unsigned long addr, un
 			continue;
 		}
 
-		pte_base = (pte_t *)pmd_page_vaddr(*pmdp);
+		pte_base = (pte_t *)pmd_page_vaddr(pmdp_get(pmdp));
 		remove_pte_mapping(pte_base, addr, next, is_vmemmap, altmap);
 		free_pte_table(pte_base, pmdp);
 	}
@@ -1777,10 +1781,10 @@ static void __meminit remove_pgd_mapping(unsigned long va, unsigned long end, bo
 		next = pgd_addr_end(addr, end);
 		pgd = pgd_offset_k(addr);
 
-		if (!pgd_present(*pgd))
+		if (!pgd_present(pgdp_get(pgd)))
 			continue;
 
-		if (pgd_leaf(*pgd))
+		if (pgd_leaf(pgdp_get(pgd)))
 			continue;
 
 		p4d_base = p4d_offset(pgd, 0);
diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
index 8b6c0a112a8d..c4b85a828797 100644
--- a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -95,8 +95,8 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 	flush_tlb_kernel_range(addr, addr + PUD_SIZE);
 
 	for (i = 0; i < PTRS_PER_PMD; i++) {
-		if (!pmd_none(pmd[i])) {
-			pte_t *pte = (pte_t *)pmd_page_vaddr(pmd[i]);
+		if (!pmd_none(pmdp_get(pmd + i))) {
+			pte_t *pte = (pte_t *)pmd_page_vaddr(pmdp_get(pmd + i));
 
 			pte_free_kernel(NULL, pte);
 		}
@@ -158,8 +158,9 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
 pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
 		      pud_t *pudp)
 {
-	VM_WARN_ON_ONCE(!pud_present(*pudp));
-	pud_t old = pudp_establish(vma, address, pudp, pud_mkinvalid(*pudp));
+	VM_WARN_ON_ONCE(!pud_present(pudp_get(pudp)));
+	pud_t old = pudp_establish(vma, address, pudp,
+				   pud_mkinvalid(pudp_get(pudp)));
 
 	flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE);
 	return old;
-- 
2.47.2

From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
To: Palmer Dabbelt, Paul Walmsley, linux-riscv@lists.infradead.org, Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Cc: devicetree@vger.kernel.org, Suren Baghdasaryan, linux-kernel@vger.kernel.org, Mike Rapoport, Michal Hocko, Conor Dooley, Lorenzo Stoakes, Krzysztof Kozlowski, Alexandre Ghiti, Emil Renner Berthing, Rob Herring, Vlastimil Babka, "Liam R. Howlett", Samuel Holland
Subject: [PATCH v3 11/22] riscv: mm: Simplify set_p4d() and set_pgd()
Date: Wed, 12 Nov 2025 17:45:24 -0800
Message-ID: <20251113014656.2605447-12-samuel.holland@sifive.com>
In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com>
References: <20251113014656.2605447-1-samuel.holland@sifive.com>

RISC-V uses the same page table entry format and has the same
atomicity requirements at all page table levels, so these setter
functions use the same underlying implementation at all levels.
Checking the translation mode to pick between two identical branches
only serves to make these functions less efficient.

Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - New patch for v2

 arch/riscv/include/asm/pgtable-64.h | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 6e789fa58514..5532f8515450 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -275,10 +275,7 @@ static inline unsigned long _pmd_pfn(pmd_t pmd)
 
 static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
-	if (pgtable_l4_enabled)
-		WRITE_ONCE(*p4dp, p4d);
-	else
-		set_pud((pud_t *)p4dp, (pud_t){ p4d_val(p4d) });
+	WRITE_ONCE(*p4dp, p4d);
 }
 
 static inline int p4d_none(p4d_t p4d)
@@ -342,10 +339,7 @@ pud_t *pud_offset(p4d_t *p4d, unsigned long address);
 
 static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
-	if (pgtable_l5_enabled)
-		WRITE_ONCE(*pgdp, pgd);
-	else
-		set_p4d((p4d_t *)pgdp, (p4d_t){ pgd_val(pgd) });
+	WRITE_ONCE(*pgdp, pgd);
 }
 
 static inline int pgd_none(pgd_t pgd)
-- 
2.47.2
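[Editorial note: a sketch of why the removed branch was redundant,
assuming, per the commit message above, that the setters at every
level perform the same store. set_p4d_old()/set_p4d_new() are made-up
names for the before/after shapes of the code.]

	/* Before: the branch only selected between two stores of the
	 * same bit pattern through the same pointer. */
	static inline void set_p4d_old(p4d_t *p4dp, p4d_t p4d)
	{
		if (pgtable_l4_enabled)
			WRITE_ONCE(*p4dp, p4d);
		else
			/* same underlying store, wrapped as a pud */
			set_pud((pud_t *)p4dp, (pud_t){ p4d_val(p4d) });
	}

	/* After: one unconditional store, no runtime mode check. */
	static inline void set_p4d_new(p4d_t *p4dp, p4d_t p4d)
	{
		WRITE_ONCE(*p4dp, p4d);
	}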
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 12/22] riscv: mm: Deduplicate _PAGE_CHG_MASK definition
Date: Wed, 12 Nov 2025 17:45:25 -0800
Message-ID: <20251113014656.2605447-13-samuel.holland@sifive.com>

The two existing definitions are equivalent because _PAGE_MTMASK is
defined as 0 on riscv32.

Reviewed-by: Alexandre Ghiti
Signed-off-by: Samuel Holland
---

(no changes since v1)

 arch/riscv/include/asm/pgtable-32.h | 5 -----
 arch/riscv/include/asm/pgtable-64.h | 7 -------
 arch/riscv/include/asm/pgtable.h    | 6 ++++++
 3 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
index 00f3369570a8..fa6c87015c48 100644
--- a/arch/riscv/include/asm/pgtable-32.h
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -28,11 +28,6 @@
 #define _PAGE_IO	0
 #define _PAGE_MTMASK	0
 
-/* Set of bits to preserve across pte_modify() */
-#define _PAGE_CHG_MASK	(~(unsigned long)(_PAGE_PRESENT | _PAGE_READ | \
-					  _PAGE_WRITE | _PAGE_EXEC | \
-					  _PAGE_USER | _PAGE_GLOBAL))
-
 static const __maybe_unused int pgtable_l4_enabled;
 static const __maybe_unused int pgtable_l5_enabled;
 
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 5532f8515450..093f0f41fd23 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -66,7 +66,6 @@ typedef struct {
 
 #define pmd_val(x)	((x).pmd)
 #define __pmd(x)	((pmd_t) { (x) })
-
 #define PTRS_PER_PMD	(PAGE_SIZE / sizeof(pmd_t))
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 56
@@ -168,12 +167,6 @@ static inline u64 riscv_page_io(void)
 #define _PAGE_IO	riscv_page_io()
 #define _PAGE_MTMASK	riscv_page_mtmask()
 
-/* Set of bits to preserve across pte_modify() */
-#define _PAGE_CHG_MASK	(~(unsigned long)(_PAGE_PRESENT | _PAGE_READ | \
-					  _PAGE_WRITE | _PAGE_EXEC | \
-					  _PAGE_USER | _PAGE_GLOBAL | \
-					  _PAGE_MTMASK))
-
 static inline int pud_present(pud_t pud)
 {
 	return (pud_val(pud) & _PAGE_PRESENT);
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index acfd48f92010..ba2fb1d475a3 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -207,6 +207,12 @@ extern struct pt_alloc_ops pt_ops __meminitdata;
 #define _PAGE_IOREMAP	((_PAGE_KERNEL & ~_PAGE_MTMASK) | _PAGE_IO)
 #define PAGE_KERNEL_IO	__pgprot(_PAGE_IOREMAP)
 
+/* Set of bits to preserve across pte_modify() */
+#define _PAGE_CHG_MASK	(~(unsigned long)(_PAGE_PRESENT | _PAGE_READ | \
+					  _PAGE_WRITE | _PAGE_EXEC | \
+					  _PAGE_USER | _PAGE_GLOBAL | \
+					  _PAGE_MTMASK))
+
 extern pgd_t swapper_pg_dir[];
 extern pgd_t trampoline_pg_dir[];
 extern pgd_t early_pg_dir[];
-- 
2.47.2
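
For illustration, a standalone model (hypothetical, not kernel code; bit
positions follow the standard RISC-V PTE layout) of how _PAGE_CHG_MASK
behaves in pte_modify(): protection bits are replaced from the new
pgprot while the PFN and the dirty/accessed/software bits are preserved.
Because _PAGE_MTMASK is 0 on riscv32, OR-ing it into the cleared set is
a no-op, which is why the single definition serves both configurations.

#include <assert.h>

/* Hypothetical standalone model; bit positions follow the RISC-V PTE layout. */
#define _PAGE_PRESENT	(1UL << 0)
#define _PAGE_READ	(1UL << 1)
#define _PAGE_WRITE	(1UL << 2)
#define _PAGE_EXEC	(1UL << 3)
#define _PAGE_USER	(1UL << 4)
#define _PAGE_GLOBAL	(1UL << 5)
#define _PAGE_DIRTY	(1UL << 7)
#define _PAGE_MTMASK	0	/* rv32 fallback: contributes nothing to the mask */

/* Set of bits to preserve across pte_modify(), as in the patch. */
#define _PAGE_CHG_MASK	(~(unsigned long)(_PAGE_PRESENT | _PAGE_READ | \
					  _PAGE_WRITE | _PAGE_EXEC | \
					  _PAGE_USER | _PAGE_GLOBAL | \
					  _PAGE_MTMASK))

static unsigned long pte_modify(unsigned long pte, unsigned long newprot)
{
	return (pte & _PAGE_CHG_MASK) | newprot;
}

int main(void)
{
	unsigned long pfn_bits = 0x1234UL << 10;
	unsigned long pte = pfn_bits | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_WRITE;
	unsigned long ro = _PAGE_PRESENT | _PAGE_READ;

	/* The PFN and dirty bit survive; W is dropped, R is gained. */
	assert(pte_modify(pte, ro) == (pfn_bits | _PAGE_DIRTY | ro));
	return 0;
}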
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 13/22] riscv: ptdump: Only show N and MT bits when enabled in the kernel
Date: Wed, 12 Nov 2025 17:45:26 -0800
Message-ID: <20251113014656.2605447-14-samuel.holland@sifive.com>

When the Svnapot or Svpbmt extension is not implemented, the
corresponding page table bits are reserved and must be zero, so there is
no need to show them in the ptdump output.

When the Kconfig option for an extension is disabled, we assume it is
not implemented. In that case, the kernel may provide a fallback
definition for the fields, like how _PAGE_MTMASK is defined on riscv32.
Using those fallback definitions in ptdump would produce incorrect
results. To avoid this, hide the fields from the ptdump output.

Reviewed-by: Alexandre Ghiti
Signed-off-by: Samuel Holland
---

(no changes since v1)

 arch/riscv/mm/ptdump.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 34299c2b231f..0dd6ee282953 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -134,11 +134,13 @@ struct prot_bits {
 
 static const struct prot_bits pte_bits[] = {
 	{
-#ifdef CONFIG_64BIT
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
 		.mask = _PAGE_NAPOT,
 		.set = "N",
 		.clear = ".",
 	}, {
+#endif
+#ifdef CONFIG_RISCV_ISA_SVPBMT
 		.mask = _PAGE_MTMASK_SVPBMT,
 		.set = "MT(%s)",
 		.clear = " .. ",
", @@ -214,7 +216,7 @@ static void dump_prot(struct pg_state *st) if (val) { if (pte_bits[i].mask =3D=3D _PAGE_SOFT) sprintf(s, pte_bits[i].set, val >> 8); -#ifdef CONFIG_64BIT +#ifdef CONFIG_RISCV_ISA_SVPBMT else if (pte_bits[i].mask =3D=3D _PAGE_MTMASK_SVPBMT) { if (val =3D=3D _PAGE_NOCACHE_SVPBMT) sprintf(s, pte_bits[i].set, "NC"); --=20 2.47.2 From nobody Tue Dec 9 02:55:26 2025 Received: from mail-pg1-f173.google.com (mail-pg1-f173.google.com [209.85.215.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 24A9B2EBB8F for ; Thu, 13 Nov 2025 01:47:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998444; cv=none; b=AseY8gVQCJTGSNbzGmcxSFl9ETNy+vslTQQaxOBgmTKfegEUJO3CDXmLJDwPjExn4pTjwSwvZG9tkKIsYuzL2pHksI6qKIJyhD4MKaO8nKIS/W7hsVbPwTrdsylMXxEQA/v7AjUkeezIkxsd5brP3mlPKmrZTpTNEaJQMDMq7Nk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998444; c=relaxed/simple; bh=NS9YCI4a0x93bSFKB6oegiYW0lMPevBYZESbtc9nlLA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ZtDM7ELCy4kTL8ROWZwd8fYzRxj1fxge6mewA0uqvquIGDm3vi4EriPb1Kjn5RvY5TXhVMuv+h9ltCLV+MbskXmeg5KLOjl2SW3YyE/7FBle7RhckNy+RGX734eFBdmpRpowStitPj39nZWryAMPUZA5J5Glef3fxaZopvu7aGA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com; spf=pass smtp.mailfrom=sifive.com; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b=R1GVgvZk; arc=none smtp.client-ip=209.85.215.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="R1GVgvZk" Received: by mail-pg1-f173.google.com with SMTP id 41be03b00d2f7-ba488b064cbso218467a12.1 for ; Wed, 12 Nov 2025 17:47:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1762998440; x=1763603240; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZB9Jv5e+J/nnZHZhnhots3i8EjJxTeX7spRmBxkznRQ=; b=R1GVgvZktCU9vyBG0H63s/W43DPq801FJdcM9aEXsWN+IbfWBk4JzWk/1XIO/Gctq4 /gK/pmTj8x7/kBxUBNEfdYxk38dtWz08PwL/DRtCiMr48iNWBdl1aH6j/h/sJsLp5B52 hqYWWtmGZ2eNQ21GJabKRanibZ50kTivIl0wp7kWo6dPvbjTWbIDHC5K8ORVUn0QYFWi UW+KILoc9n6w96PiwGU3CO00hVo7K+sDtQ4Q56FxM19ERO3EApIpvylizVqc+4IInpED AHGHkkofIJD4XVdyMpoqh4+udMzYLn4he1qgx3jx0yQmHUjkOmHAk45LxHL+0IkrsljP Ws0w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1762998440; x=1763603240; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=ZB9Jv5e+J/nnZHZhnhots3i8EjJxTeX7spRmBxkznRQ=; b=h7gOhejqfuDFh2fXY8v3PrBSaDoPH5rlUwVdT/UXeTLKqx8ouucXbzAJeQ1t7MvWaO kWbvsQmtlizx5Bdwcus9mK0ekw+8Ga+2koR/ZNKmPkmPNj05UuX1fUblD2WCLhAKQYAq vXlC/EZ4gLgDKYqdsP7PFkKFwJXf9F/NrGocBBCqrHESxdxKyhFNGKvCz5UWrKk+UIA4 6EFflHkPRj+vFHbAaRyeJp4MK8USy8xdEpovdcLXoZw8fOc6zaZSoiNzPQflnIw6V+21 OE8uXJsX9HIe/SqHLR6ABDvZ+hoMPXxhy2OfKTqrazFXwz1zoEVo+Nq0qVMTLyXpdA8R v1ow== 
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 14/22] riscv: mm: Fix up memory types when writing page tables
Date: Wed, 12 Nov 2025 17:45:27 -0800
Message-ID: <20251113014656.2605447-15-samuel.holland@sifive.com>

Currently, Linux on RISC-V has three ways to specify the cacheability
and ordering PMAs of a page:
 1) Do nothing; assume the system is entirely cache-coherent and rely on
    the hardware for any ordering requirements
 2) Use the page table bits specified by Svpbmt
 3) Use the page table bits specified by XTheadMae

To support all three methods, the kernel dynamically determines the
definitions of the _PAGE_NOCACHE and _PAGE_IO fields. However, this
alone is not sufficient, as XTheadMae uses a nonzero memory type value
for normal memory pages. So the kernel uses an additional alternative
sequence (ALT_THEAD_PMA) to insert the correct memory type when
generating page table entries.

Some RISC-V platforms use a fourth method to specify the cacheability of
a page of RAM: RAM is mapped to multiple physical address ranges, with
each alias having a different set of statically-determined PMAs.
Software selects the PMAs for a page by choosing the corresponding PFN
from one of the available physical address ranges. Like XTheadMae, this
strategy also requires applying a transformation when writing page table
entries. Since these physical memory aliases should be invisible to the
rest of the kernel, the opposite transformation must be applied when
reading page table entries.
However, with this last method of specifying PMAs, there is no inherent
way to indicate the cacheability of a page in the pgprot_t value, since
the PFN itself determines cacheability. This implementation reuses the
PTE bits from Svpbmt, as Svpbmt is the standard RISC-V extension and
thus ought to be the most common way to indicate per-page PMAs. Thus,
the Svpbmt variant of _PAGE_NOCACHE and _PAGE_IO is made available even
when the CPU does not support the extension.

It turns out that with some clever bit manipulation, it is just as
efficient to transform all three Svpbmt memory type values to the
corresponding XTheadMae values as it is to check for and insert the one
XTheadMae memory type value for normal memory. Thus, we implement
XTheadMae as a variant on top of Svpbmt. This allows the _PAGE_NOCACHE
and _PAGE_IO definitions to be compile-time constants, and centralizes
all memory type handling in a single set of alternative macros. However,
there is a tradeoff for platforms relying on hardware for all memory
type handling: the memory type PTE bits must now be masked off when
writing page table entries, whereas previously no transformation was
needed.

As a side effect, since the inverse transformation is applied when
reading back page table entries, this change fixes the reporting of the
memory type bits from ptdump on platforms with XTheadMae.

Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - Keep Kconfig options for each PBMT variant separate/non-overlapping
 - Move fixup code sequences to set_pXX() and pXXp_get()
 - Only define ALT_UNFIX_MT in configurations that need it
 - Improve inline documentation of ALT_FIXUP_MT/ALT_UNFIX_MT

 arch/riscv/include/asm/errata_list.h |  45 -------
 arch/riscv/include/asm/pgtable-32.h  |   3 +
 arch/riscv/include/asm/pgtable-64.h  | 171 ++++++++++++++++++++++-----
 arch/riscv/include/asm/pgtable.h     |  47 ++++----
 arch/riscv/mm/pgtable.c              |  14 +--
 arch/riscv/mm/ptdump.c               |  12 +-
 6 files changed, 174 insertions(+), 118 deletions(-)

diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
index 6694b5ccdcf8..fa03021b7074 100644
--- a/arch/riscv/include/asm/errata_list.h
+++ b/arch/riscv/include/asm/errata_list.h
@@ -53,51 +53,6 @@ asm(ALTERNATIVE( \
 	: /* no inputs */ \
 	: "memory")
 
-/*
- * _val is marked as "will be overwritten", so need to set it to 0
- * in the default case.
- */
-#define ALT_SVPBMT_SHIFT	61
-#define ALT_THEAD_MAE_SHIFT	59
-#define ALT_SVPBMT(_val, prot) \
-asm(ALTERNATIVE_2("li %0, 0\t\nnop", \
-		  "li %0, %1\t\nslli %0,%0,%3", 0, \
-		  RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT, \
-		  "li %0, %2\t\nslli %0,%0,%4", THEAD_VENDOR_ID, \
-		  ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE) \
-		: "=r"(_val) \
-		: "I"(prot##_SVPBMT >> ALT_SVPBMT_SHIFT), \
-		  "I"(prot##_THEAD >> ALT_THEAD_MAE_SHIFT), \
-		  "I"(ALT_SVPBMT_SHIFT), \
-		  "I"(ALT_THEAD_MAE_SHIFT))
-
-#ifdef CONFIG_ERRATA_THEAD_MAE
-/*
- * IO/NOCACHE memory types are handled together with svpbmt,
- * so on T-Head chips, check if no other memory type is set,
- * and set the non-0 PMA type if applicable.
- */
-#define ALT_THEAD_PMA(_val) \
-asm volatile(ALTERNATIVE( \
-	__nops(7), \
-	"li t3, %1\n\t" \
-	"slli t3, t3, %3\n\t" \
-	"and t3, %0, t3\n\t" \
-	"bne t3, zero, 2f\n\t" \
-	"li t3, %2\n\t" \
-	"slli t3, t3, %3\n\t" \
-	"or %0, %0, t3\n\t" \
-	"2:", THEAD_VENDOR_ID, \
-	ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE) \
-	: "+r"(_val) \
-	: "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_MAE_SHIFT), \
-	  "I"(_PAGE_PMA_THEAD >> ALT_THEAD_MAE_SHIFT), \
-	  "I"(ALT_THEAD_MAE_SHIFT) \
-	: "t3")
-#else
-#define ALT_THEAD_PMA(_val)
-#endif
-
 #define ALT_CMO_OP(_op, _start, _size, _cachesize) \
 asm volatile(ALTERNATIVE( \
 	__nops(5), \
diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
index fa6c87015c48..90ef35a7c1a5 100644
--- a/arch/riscv/include/asm/pgtable-32.h
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -28,6 +28,9 @@
 #define _PAGE_IO	0
 #define _PAGE_MTMASK	0
 
+#define ALT_FIXUP_MT(_val)
+#define ALT_UNFIX_MT(_val)
+
 static const __maybe_unused int pgtable_l4_enabled;
 static const __maybe_unused int pgtable_l5_enabled;
 
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 093f0f41fd23..aad34c754325 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -8,7 +8,7 @@
 
 #include 
 #include 
-#include 
+#include 
 
 extern bool pgtable_l4_enabled;
 extern bool pgtable_l5_enabled;
@@ -111,6 +111,8 @@ enum napot_cont_order {
 #define HUGE_MAX_HSTATE	2
 #endif
 
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+
 /*
  * [62:61] Svpbmt Memory Type definitions:
  *
@@ -119,53 +121,152 @@ enum napot_cont_order {
  * 10 - IO   Non-cacheable, non-idempotent, strongly-ordered I/O memory
  * 11 - Rsvd Reserved for future standard use
  */
-#define _PAGE_NOCACHE_SVPBMT	(1UL << 61)
-#define _PAGE_IO_SVPBMT		(1UL << 62)
-#define _PAGE_MTMASK_SVPBMT	(_PAGE_NOCACHE_SVPBMT | _PAGE_IO_SVPBMT)
+#define _PAGE_NOCACHE	(1UL << 61)
+#define _PAGE_IO	(2UL << 61)
+#define _PAGE_MTMASK	(3UL << 61)
 
 /*
+ * ALT_FIXUP_MT
+ *
+ * On systems that do not support any form of page-based memory type
+ * configuration, this code sequence clears the memory type bits in the PTE.
+ *
+ * On systems that support Svpbmt, the memory type bits are left alone.
+ *
+ * On systems that support XTheadMae, a Svpbmt memory type is transformed
+ * into the corresponding XTheadMae memory type.
+ *
  * [63:59] T-Head Memory Type definitions:
  *  bit[63] SO  - Strong Order
  *  bit[62] C   - Cacheable
  *  bit[61] B   - Bufferable
  *  bit[60] SH  - Shareable
  *  bit[59] Sec - Trustable
- * 00110 - NC   Weakly-ordered, Non-cacheable, Bufferable, Shareable, Non-trustable
  * 01110 - PMA  Weakly-ordered, Cacheable, Bufferable, Shareable, Non-trustable
+ * 00110 - NC   Weakly-ordered, Non-cacheable, Bufferable, Shareable, Non-trustable
+ * 10010 - IO   Strongly-ordered, Non-cacheable, Non-bufferable, Shareable, Non-trustable
+ *
+ * Pseudocode operating on bits [63:60]:
+ *  t0 = mt << 1
+ *  if (t0 == 0)
+ *      t0 |= 2
+ *  t0 ^= 0x5
+ *  mt ^= t0
+ */
+
+#define ALT_FIXUP_MT(_val) \
+	asm(ALTERNATIVE_2("addi t0, zero, 0x3\n\t" \
+			  "slli t0, t0, 61\n\t" \
+			  "not t0, t0\n\t" \
+			  "and %0, %0, t0\n\t" \
+			  "nop\n\t" \
+			  "nop\n\t" \
+			  "nop", \
+			  __nops(7), \
+			  0, RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT, \
+			  "srli t0, %0, 59\n\t" \
+			  "seqz t1, t0\n\t" \
+			  "slli t1, t1, 1\n\t" \
+			  "or t0, t0, t1\n\t" \
+			  "xori t0, t0, 0x5\n\t" \
+			  "slli t0, t0, 60\n\t" \
+			  "xor %0, %0, t0", \
+			  THEAD_VENDOR_ID, ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE) \
+	    : "+r" (_val) :: "t0", "t1")
+
+#else
+
+#define _PAGE_NOCACHE	0
+#define _PAGE_IO	0
+#define _PAGE_MTMASK	0
+
+#define ALT_FIXUP_MT(_val)
+
+#endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_ERRATA_THEAD_MAE */
+
+#if defined(CONFIG_ERRATA_THEAD_MAE)
+
+/*
+ * ALT_UNFIX_MT
+ *
+ * On systems that support Svpbmt, or do not support any form of page-based
+ * memory type configuration, the memory type bits are left alone.
+ *
+ * On systems that support XTheadMae, the XTheadMae memory type (or zero) is
+ * transformed back into the corresponding Svpbmt memory type.
+ *
+ * Pseudocode operating on bits [63:60]:
+ *  t0 = mt & 0xd
+ *  t0 ^= t0 >> 1
+ *  mt ^= t0
  */
-#define _PAGE_PMA_THEAD		((1UL << 62) | (1UL << 61) | (1UL << 60))
-#define _PAGE_NOCACHE_THEAD	((1UL << 61) | (1UL << 60))
-#define _PAGE_IO_THEAD		((1UL << 63) | (1UL << 60))
-#define _PAGE_MTMASK_THEAD	(_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1UL << 59))
 
-static inline u64 riscv_page_mtmask(void)
+#define ALT_UNFIX_MT(_val) \
+	asm(ALTERNATIVE(__nops(6), \
+			"srli t0, %0, 60\n\t" \
+			"andi t0, t0, 0xd\n\t" \
+			"srli t1, t0, 1\n\t" \
+			"xor t0, t0, t1\n\t" \
+			"slli t0, t0, 60\n\t" \
+			"xor %0, %0, t0", \
+			THEAD_VENDOR_ID, ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE) \
+	    : "+r" (_val) :: "t0", "t1")
+
+#define ptep_get ptep_get
+static inline pte_t ptep_get(pte_t *ptep)
 {
-	u64 val;
+	pte_t pte = READ_ONCE(*ptep);
 
-	ALT_SVPBMT(val, _PAGE_MTMASK);
-	return val;
+	ALT_UNFIX_MT(pte);
+
+	return pte;
 }
 
-static inline u64 riscv_page_nocache(void)
+#define pmdp_get pmdp_get
+static inline pmd_t pmdp_get(pmd_t *pmdp)
 {
-	u64 val;
+	pmd_t pmd = READ_ONCE(*pmdp);
+
+	ALT_UNFIX_MT(pmd);
 
-	ALT_SVPBMT(val, _PAGE_NOCACHE);
-	return val;
+	return pmd;
 }
 
-static inline u64 riscv_page_io(void)
+#define pudp_get pudp_get
+static inline pud_t pudp_get(pud_t *pudp)
 {
-	u64 val;
+	pud_t pud = READ_ONCE(*pudp);
+
+	ALT_UNFIX_MT(pud);
 
-	ALT_SVPBMT(val, _PAGE_IO);
-	return val;
+	return pud;
 }
 
-#define _PAGE_NOCACHE	riscv_page_nocache()
-#define _PAGE_IO	riscv_page_io()
-#define _PAGE_MTMASK	riscv_page_mtmask()
+#define p4dp_get p4dp_get
+static inline p4d_t p4dp_get(p4d_t *p4dp)
+{
+	p4d_t p4d = READ_ONCE(*p4dp);
+
+	ALT_UNFIX_MT(p4d);
+
+	return p4d;
+}
+
+#define pgdp_get pgdp_get
+static inline pgd_t pgdp_get(pgd_t *pgdp)
+{
+	pgd_t pgd = READ_ONCE(*pgdp);
+
+	ALT_UNFIX_MT(pgd);
+
+	return pgd;
+}
+
+#else
+
+#define ALT_UNFIX_MT(_val)
+
+#endif /* CONFIG_ERRATA_THEAD_MAE */
 
 static inline int pud_present(pud_t pud)
 {
@@ -195,6 +296,7 @@ static inline int pud_user(pud_t pud)
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
+	ALT_FIXUP_MT(pud);
 	WRITE_ONCE(*pudp, pud);
 }
 
@@ -245,11 +347,7 @@ static inline bool mm_pud_folded(struct mm_struct *mm)
 
 static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
 {
-	unsigned long prot_val = pgprot_val(prot);
-
-	ALT_THEAD_PMA(prot_val);
-
-	return __pmd((pfn << _PAGE_PFN_SHIFT) | prot_val);
+	return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
 }
 
 static inline unsigned long _pmd_pfn(pmd_t pmd)
@@ -257,6 +355,9 @@ static inline unsigned long _pmd_pfn(pmd_t pmd)
 	return __page_val_to_pfn(pmd_val(pmd));
 }
 
+#define pmd_offset_lockless(pudp, pud, address) \
+	(pud_pgtable(pud) + pmd_index(address))
+
 #define pmd_ERROR(e) \
 	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
 
@@ -268,6 +369,7 @@ static inline unsigned long _pmd_pfn(pmd_t pmd)
 
 static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
+	ALT_FIXUP_MT(p4d);
 	WRITE_ONCE(*p4dp, p4d);
 }
 
@@ -327,11 +429,15 @@ static inline struct page *p4d_page(p4d_t p4d)
 
 #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
 
+#define pud_offset_lockless(p4dp, p4d, address) \
+	(pgtable_l4_enabled ? p4d_pgtable(p4d) + pud_index(address) : (pud_t *)(p4dp))
+
 #define pud_offset pud_offset
-pud_t *pud_offset(p4d_t *p4d, unsigned long address);
+pud_t *pud_offset(p4d_t *p4dp, unsigned long address);
 
 static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
+	ALT_FIXUP_MT(pgd);
 	WRITE_ONCE(*pgdp, pgd);
 }
 
@@ -382,8 +488,11 @@ static inline struct page *pgd_page(pgd_t pgd)
 
 #define p4d_index(addr) (((addr) >> P4D_SHIFT) & (PTRS_PER_P4D - 1))
 
+#define p4d_offset_lockless(pgdp, pgd, address) \
+	(pgtable_l5_enabled ? pgd_pgtable(pgd) + p4d_index(address) : (p4d_t *)(pgdp))
+
 #define p4d_offset p4d_offset
-p4d_t *p4d_offset(pgd_t *pgd, unsigned long address);
+p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline pte_t pmd_pte(pmd_t pmd);
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index ba2fb1d475a3..8b622f901707 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -253,6 +253,7 @@ static inline bool pmd_leaf(pmd_t pmd)
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
+	ALT_FIXUP_MT(pmd);
 	WRITE_ONCE(*pmdp, pmd);
 }
 
@@ -263,11 +264,7 @@ static inline void pmd_clear(pmd_t *pmdp)
 
 static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
 {
-	unsigned long prot_val = pgprot_val(prot);
-
-	ALT_THEAD_PMA(prot_val);
-
-	return __pgd((pfn << _PAGE_PFN_SHIFT) | prot_val);
+	return __pgd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
 }
 
 static inline unsigned long _pgd_pfn(pgd_t pgd)
@@ -343,11 +340,7 @@ static inline unsigned long pte_pfn(pte_t pte)
 /* Constructs a page table entry */
 static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
 {
-	unsigned long prot_val = pgprot_val(prot);
-
-	ALT_THEAD_PMA(prot_val);
-
-	return __pte((pfn << _PAGE_PFN_SHIFT) | prot_val);
+	return __pte((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
 }
 
 #define pte_pgprot pte_pgprot
@@ -486,11 +479,7 @@ static inline int pmd_protnone(pmd_t pmd)
 /* Modify page protection bits */
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
-	unsigned long newprot_val = pgprot_val(newprot);
-
-	ALT_THEAD_PMA(newprot_val);
-
-	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | newprot_val);
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
 }
 
 #define pgd_ERROR(e) \
@@ -547,9 +536,10 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
  * a page table are directly modified. Thus, the following hook is
  * made available.
  */
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	WRITE_ONCE(*ptep, pteval);
+	ALT_FIXUP_MT(pte);
+	WRITE_ONCE(*ptep, pte);
 }
 
 void flush_icache_pte(struct mm_struct *mm, pte_t pte);
@@ -598,6 +588,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 {
 	pte_t pte = __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
 
+	ALT_UNFIX_MT(pte);
 	page_table_check_pte_clear(mm, pte);
 
 	return pte;
@@ -869,6 +860,7 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 {
 	pmd_t pmd = __pmd(atomic_long_xchg((atomic_long_t *)pmdp, 0));
 
+	ALT_UNFIX_MT(pmd);
 	page_table_check_pmd_clear(mm, pmd);
 
 	return pmd;
@@ -886,7 +878,11 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
-	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
+	ALT_FIXUP_MT(pmd);
+	pmd = __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
+	ALT_UNFIX_MT(pmd);
+
+	return pmd;
 }
 
 #define pmdp_collapse_flush pmdp_collapse_flush
@@ -955,14 +951,9 @@ static inline int pudp_test_and_clear_young(struct vm_area_struct *vma,
 static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 		unsigned long address, pud_t *pudp)
 {
-#ifdef CONFIG_SMP
-	pud_t pud = __pud(xchg(&pudp->pud, 0));
-#else
-	pud_t pud = pudp_get(pudp);
-
-	pud_clear(pudp);
-#endif
+	pud_t pud = __pud(atomic_long_xchg((atomic_long_t *)pudp, 0));
 
+	ALT_UNFIX_MT(pud);
 	page_table_check_pud_clear(mm, pud);
 
 	return pud;
@@ -985,7 +976,11 @@ static inline pud_t pudp_establish(struct vm_area_struct *vma,
 		unsigned long address, pud_t *pudp, pud_t pud)
 {
 	page_table_check_pud_set(vma->vm_mm, pudp, pud);
-	return __pud(atomic_long_xchg((atomic_long_t *)pudp, pud_val(pud)));
+	ALT_FIXUP_MT(pud);
+	pud = __pud(atomic_long_xchg((atomic_long_t *)pudp, pud_val(pud)));
+	ALT_UNFIX_MT(pud);
+
+	return pud;
 }
 
 static inline pud_t pud_mkinvalid(pud_t pud)
diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
index c4b85a828797..604744d6924f 100644
--- a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -42,20 +42,14 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
 EXPORT_SYMBOL_GPL(ptep_test_and_clear_young);
 
 #ifdef CONFIG_64BIT
-pud_t *pud_offset(p4d_t *p4d, unsigned long address)
+pud_t *pud_offset(p4d_t *p4dp, unsigned long address)
 {
-	if (pgtable_l4_enabled)
-		return p4d_pgtable(p4dp_get(p4d)) + pud_index(address);
-
-	return (pud_t *)p4d;
+	return pud_offset_lockless(p4dp, p4dp_get(p4dp), address);
 }
 
-p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address)
 {
-	if (pgtable_l5_enabled)
-		return pgd_pgtable(pgdp_get(pgd)) + p4d_index(address);
-
-	return (p4d_t *)pgd;
+	return p4d_offset_lockless(pgdp, pgdp_get(pgdp), address);
 }
 #endif
 
diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 0dd6ee282953..763ffde8ab5e 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -140,8 +140,8 @@ static const struct prot_bits pte_bits[] = {
 		.clear = ".",
 	}, {
 #endif
-#ifdef CONFIG_RISCV_ISA_SVPBMT
-		.mask = _PAGE_MTMASK_SVPBMT,
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+		.mask = _PAGE_MTMASK,
 		.set = "MT(%s)",
 		.clear = " .. ",
", }, { @@ -216,11 +216,11 @@ static void dump_prot(struct pg_state *st) if (val) { if (pte_bits[i].mask =3D=3D _PAGE_SOFT) sprintf(s, pte_bits[i].set, val >> 8); -#ifdef CONFIG_RISCV_ISA_SVPBMT - else if (pte_bits[i].mask =3D=3D _PAGE_MTMASK_SVPBMT) { - if (val =3D=3D _PAGE_NOCACHE_SVPBMT) +#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE) + else if (pte_bits[i].mask =3D=3D _PAGE_MTMASK) { + if (val =3D=3D _PAGE_NOCACHE) sprintf(s, pte_bits[i].set, "NC"); - else if (val =3D=3D _PAGE_IO_SVPBMT) + else if (val =3D=3D _PAGE_IO) sprintf(s, pte_bits[i].set, "IO"); else sprintf(s, pte_bits[i].set, "??"); --=20 2.47.2 From nobody Tue Dec 9 02:55:26 2025 Received: from mail-pg1-f174.google.com (mail-pg1-f174.google.com [209.85.215.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 79D2C2ECE82 for ; Thu, 13 Nov 2025 01:47:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998444; cv=none; b=JHW42NRw5579Zd5zvkaPeQJtWmhKMLUhq/zb8j1TgsVX9KMrHRk13wZE+uwLRaOTdRd2XyD37G8Z+bO4L7swRQkQPN5NBzZXrlEpFXasrbI42sE8voxOw6mwCgLB1exhxL1rJyAxH3hXaniSz3s8Ts9Jz+fGhpgCfffpAQE7zXg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1762998444; c=relaxed/simple; bh=J1/8A3IXTM9bzlcP42Q0I0NkjdkS9JJwgbBrPUlH8CY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=UjFPXMKOfADB4IeUJf/Cew1RrLa8hK/emSagu2wum9byq80bZMTKAQEAa7qlMqQcw0ODu1VYPS/SPWgSfA6v3ChfHRdRCcgVJ9OZujFuBE72ORK0DdqGT6eQm4Fj85c9nSe1WA3Qclh+MUCohffrP4E00h+JOGaO7VCEzqAnyYw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com; spf=pass smtp.mailfrom=sifive.com; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b=K516f/cH; arc=none smtp.client-ip=209.85.215.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="K516f/cH" Received: by mail-pg1-f174.google.com with SMTP id 41be03b00d2f7-bc09b3d3b06so168844a12.2 for ; Wed, 12 Nov 2025 17:47:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1762998442; x=1763603242; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LCfGLyHTlwnlWI3FvQiRrpmgPRI/lUHNtrwWz9sKS0g=; b=K516f/cHWy/Yl622VfKBus6XAOT1LXjV+84YodxD/rTcjjp1DRgU98bx84x7lX1E+3 sz+x2y9BPBZN2+dKv+RwcSKsYK3ZHYr9Wn4usGBdnEBUHr1iFsnGXXuqdh101INphpxn ZPdvRH6cv09EQLdNtvYUOw+dxb+i8nzFtOcsVupJPShVt64Nulb3CCdMQvPtSn6Ww/W0 jVTwJKkDYlr2oLa2cZKS1LJgc1LVvHSfm9tmIax0We/ackgfoamiEdQEFIsLWAR67pw3 GAk9yos95MKYy8A02YUddNv1lM3lZQsz/+WFaViaRThePb2iuTgzlUKGIMtmkeU0B7cm 09mA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1762998442; x=1763603242; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from :to:cc:subject:date:message-id:reply-to; bh=LCfGLyHTlwnlWI3FvQiRrpmgPRI/lUHNtrwWz9sKS0g=; 
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 15/22] riscv: mm: Expose all page table bits to assembly code
Date: Wed, 12 Nov 2025 17:45:28 -0800
Message-ID: <20251113014656.2605447-16-samuel.holland@sifive.com>

pgtable-32.h and pgtable-64.h are not usable by assembly code files, so
move all page table field definitions to pgtable-bits.h. This allows
handling more complex PTE transformations in out-of-line assembly code.
Signed-off-by: Samuel Holland
---

(no changes since v1)

 arch/riscv/include/asm/pgtable-32.h   | 11 -------
 arch/riscv/include/asm/pgtable-64.h   | 30 -------------------
 arch/riscv/include/asm/pgtable-bits.h | 42 +++++++++++++++++++++++--
 3 files changed, 40 insertions(+), 43 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-32.h b/arch/riscv/include/asm/pgtable-32.h
index 90ef35a7c1a5..eb556ab95732 100644
--- a/arch/riscv/include/asm/pgtable-32.h
+++ b/arch/riscv/include/asm/pgtable-32.h
@@ -17,17 +17,6 @@
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 34
 
-/*
- * rv32 PTE format:
- * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
- *       PFN      reserved for SW   D   A   G   U   X   W   R   V
- */
-#define _PAGE_PFN_MASK	GENMASK(31, 10)
-
-#define _PAGE_NOCACHE	0
-#define _PAGE_IO	0
-#define _PAGE_MTMASK	0
-
 #define ALT_FIXUP_MT(_val)
 #define ALT_UNFIX_MT(_val)
 
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index aad34c754325..fa2c1dcb6f72 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -70,20 +70,6 @@ typedef struct {
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 56
 
-/*
- * rv64 PTE format:
- * | 63 | 62 61 | 60 54 | 53  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
- *   N     MT     RSV       PFN    reserved for SW   D   A   G   U   X   W   R   V
- */
-#define _PAGE_PFN_MASK	GENMASK(53, 10)
-
-/*
- * [63] Svnapot definitions:
- * 0 Svnapot disabled
- * 1 Svnapot enabled
- */
-#define _PAGE_NAPOT_SHIFT	63
-#define _PAGE_NAPOT		BIT(_PAGE_NAPOT_SHIFT)
 /*
  * Only 64KB (order 4) napot ptes supported.
  */
@@ -113,18 +99,6 @@ enum napot_cont_order {
 
 #if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
 
-/*
- * [62:61] Svpbmt Memory Type definitions:
- *
- * 00 - PMA  Normal Cacheable, No change to implied PMA memory type
- * 01 - NC   Non-cacheable, idempotent, weakly-ordered Main Memory
- * 10 - IO   Non-cacheable, non-idempotent, strongly-ordered I/O memory
- * 11 - Rsvd Reserved for future standard use
- */
-#define _PAGE_NOCACHE	(1UL << 61)
-#define _PAGE_IO	(2UL << 61)
-#define _PAGE_MTMASK	(3UL << 61)
-
 /*
  * ALT_FIXUP_MT
  *
@@ -176,10 +150,6 @@ enum napot_cont_order {
 
 #else
 
-#define _PAGE_NOCACHE	0
-#define _PAGE_IO	0
-#define _PAGE_MTMASK	0
-
 #define ALT_FIXUP_MT(_val)
 
 #endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_ERRATA_THEAD_MAE */
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
index 179bd4afece4..18c50cbd78bf 100644
--- a/arch/riscv/include/asm/pgtable-bits.h
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -6,6 +6,16 @@
 #ifndef _ASM_RISCV_PGTABLE_BITS_H
 #define _ASM_RISCV_PGTABLE_BITS_H
 
+/*
+ * rv32 PTE format:
+ * | XLEN-1  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *       PFN      reserved for SW   D   A   G   U   X   W   R   V
+ *
+ * rv64 PTE format:
+ * | 63 | 62 61 | 60 54 | 53  10 | 9             8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
+ *   N     MT     RSV       PFN    reserved for SW   D   A   G   U   X   W   R   V
+ */
+
 #define _PAGE_ACCESSED_OFFSET	6
 
 #define _PAGE_PRESENT	(1 << 0)
@@ -21,6 +31,36 @@
 #define _PAGE_SPECIAL	(1 << 8)	/* RSW: 0x1 */
 #define _PAGE_TABLE	_PAGE_PRESENT
 
+#define _PAGE_PFN_SHIFT	10
+#ifdef CONFIG_64BIT
+#define _PAGE_PFN_MASK	GENMASK(53, 10)
+#else
+#define _PAGE_PFN_MASK	GENMASK(31, 10)
+#endif /* CONFIG_64BIT */
+
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+/*
+ * [62:61] Svpbmt Memory Type definitions:
+ *
+ * 00 - PMA  Normal Cacheable, No change to implied PMA memory type
+ * 01 - NC   Non-cacheable, idempotent, weakly-ordered Main Memory
+ * 10 - IO   Non-cacheable, non-idempotent, strongly-ordered I/O memory
+ * 11 - Rsvd Reserved for future standard use
+ */
+#define _PAGE_NOCACHE	(UL(1) << 61)
+#define _PAGE_IO	(UL(2) << 61)
+#define _PAGE_MTMASK	(UL(3) << 61)
+#else
+#define _PAGE_NOCACHE	0
+#define _PAGE_IO	0
+#define _PAGE_MTMASK	0
+#endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_ERRATA_THEAD_MAE */
+
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+#define _PAGE_NAPOT_SHIFT	63
+#define _PAGE_NAPOT		BIT(_PAGE_NAPOT_SHIFT)
+#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+
 /*
  * _PAGE_PROT_NONE is set on not-present pages (and ignored by the hardware) to
  * distinguish them from swapped out pages
@@ -30,8 +70,6 @@
 /* Used for swap PTEs only. */
 #define _PAGE_SWP_EXCLUSIVE	_PAGE_ACCESSED
 
-#define _PAGE_PFN_SHIFT	10
-
 /*
  * when all of R/W/X are zero, the PTE is a pointer to the next level
 * of the page table; otherwise, it is a leaf PTE.
-- 
2.47.2
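
As a quick consistency check on the relocated definitions, a userspace
sketch (GENMASK reimplemented locally under the assumption of a 64-bit
unsigned long; not the kernel header) of the PFN extraction implied by
the rv64 layout comment:

#include <assert.h>

/* Local stand-in for the kernel's GENMASK(); assumes 64-bit unsigned long. */
#define GENMASK(h, l)	((~0UL >> (63 - (h))) & (~0UL << (l)))

#define _PAGE_PFN_SHIFT	10
#define _PAGE_PFN_MASK	GENMASK(53, 10)	/* rv64: PFN occupies PTE bits [53:10] */

static unsigned long pte_pfn(unsigned long pte)
{
	return (pte & _PAGE_PFN_MASK) >> _PAGE_PFN_SHIFT;
}

int main(void)
{
	/* Low flag bits and high attribute bits are masked off. */
	unsigned long pte = (0x12345UL << 10) | 0xffUL | (3UL << 61);

	assert(pte_pfn(pte) == 0x12345);
	return 0;
}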
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 16/22] riscv: alternative: Add an ALTERNATIVE_3 macro
Date: Wed, 12 Nov 2025 17:45:29 -0800
Message-ID: <20251113014656.2605447-17-samuel.holland@sifive.com>

ALT_FIXUP_MT() already uses ALTERNATIVE_2(), but it needs to be extended
to handle a fourth case. Add ALTERNATIVE_3(), which extends
ALTERNATIVE_2() with another block of new content.
Reviewed-by: Andrew Jones
Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - Fix erroneously-escaped newline in assembly ALTERNATIVE_CFG_3 macro

 arch/riscv/include/asm/alternative-macros.h | 45 ++++++++++++++++++---
 1 file changed, 40 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/alternative-macros.h b/arch/riscv/include/asm/alternative-macros.h
index 9619bd5c8eba..e8bf384da5c2 100644
--- a/arch/riscv/include/asm/alternative-macros.h
+++ b/arch/riscv/include/asm/alternative-macros.h
@@ -50,8 +50,17 @@
 	ALT_NEW_CONTENT \vendor_id_2, \patch_id_2, \enable_2, "\new_c_2"
 .endm
 
+.macro ALTERNATIVE_CFG_3 old_c, new_c_1, vendor_id_1, patch_id_1, enable_1, \
+			 new_c_2, vendor_id_2, patch_id_2, enable_2, \
+			 new_c_3, vendor_id_3, patch_id_3, enable_3
+	ALTERNATIVE_CFG_2 "\old_c", "\new_c_1", \vendor_id_1, \patch_id_1, \enable_1 \
+			  "\new_c_2", \vendor_id_2, \patch_id_2, \enable_2
+	ALT_NEW_CONTENT \vendor_id_3, \patch_id_3, \enable_3, "\new_c_3"
+.endm
+
 #define __ALTERNATIVE_CFG(...)		ALTERNATIVE_CFG __VA_ARGS__
 #define __ALTERNATIVE_CFG_2(...)	ALTERNATIVE_CFG_2 __VA_ARGS__
+#define __ALTERNATIVE_CFG_3(...)	ALTERNATIVE_CFG_3 __VA_ARGS__
 
 #else /* !__ASSEMBLER__ */
 
@@ -98,6 +107,13 @@
 	__ALTERNATIVE_CFG(old_c, new_c_1, vendor_id_1, patch_id_1, enable_1) \
 	ALT_NEW_CONTENT(vendor_id_2, patch_id_2, enable_2, new_c_2)
 
+#define __ALTERNATIVE_CFG_3(old_c, new_c_1, vendor_id_1, patch_id_1, enable_1, \
+			    new_c_2, vendor_id_2, patch_id_2, enable_2, \
+			    new_c_3, vendor_id_3, patch_id_3, enable_3) \
+	__ALTERNATIVE_CFG_2(old_c, new_c_1, vendor_id_1, patch_id_1, enable_1, \
+			    new_c_2, vendor_id_2, patch_id_2, enable_2) \
+	ALT_NEW_CONTENT(vendor_id_3, patch_id_3, enable_3, new_c_3)
+
 #endif /* __ASSEMBLER__ */
 
 #define _ALTERNATIVE_CFG(old_c, new_c, vendor_id, patch_id, CONFIG_k) \
@@ -108,6 +124,13 @@
 	__ALTERNATIVE_CFG_2(old_c, new_c_1, vendor_id_1, patch_id_1, IS_ENABLED(CONFIG_k_1), \
 			    new_c_2, vendor_id_2, patch_id_2, IS_ENABLED(CONFIG_k_2))
 
+#define _ALTERNATIVE_CFG_3(old_c, new_c_1, vendor_id_1, patch_id_1, CONFIG_k_1, \
+			   new_c_2, vendor_id_2, patch_id_2, CONFIG_k_2, \
+			   new_c_3, vendor_id_3, patch_id_3, CONFIG_k_3) \
+	__ALTERNATIVE_CFG_3(old_c, new_c_1, vendor_id_1, patch_id_1, IS_ENABLED(CONFIG_k_1), \
+			    new_c_2, vendor_id_2, patch_id_2, IS_ENABLED(CONFIG_k_2), \
+			    new_c_3, vendor_id_3, patch_id_3, IS_ENABLED(CONFIG_k_3))
+
 #else /* CONFIG_RISCV_ALTERNATIVE */
 #ifdef __ASSEMBLER__
 
@@ -118,11 +141,17 @@
 #define __ALTERNATIVE_CFG(old_c, ...)	ALTERNATIVE_CFG old_c
 #define __ALTERNATIVE_CFG_2(old_c, ...)	ALTERNATIVE_CFG old_c
 
+#define _ALTERNATIVE_CFG_3(old_c, ...) \
+	ALTERNATIVE_CFG old_c
+
 #else /* !__ASSEMBLER__ */
 
 #define __ALTERNATIVE_CFG(old_c, ...)	old_c "\n"
 #define __ALTERNATIVE_CFG_2(old_c, ...)	old_c "\n"
 
+#define _ALTERNATIVE_CFG_3(old_c, ...) \
+	__ALTERNATIVE_CFG(old_c)
+
 #endif /* __ASSEMBLER__ */
 
 #define _ALTERNATIVE_CFG(old_c, ...)	__ALTERNATIVE_CFG(old_c)
@@ -147,15 +176,21 @@
 	_ALTERNATIVE_CFG(old_content, new_content, vendor_id, patch_id, CONFIG_k)
 
 /*
- * A vendor wants to replace an old_content, but another vendor has used
- * ALTERNATIVE() to patch its customized content at the same location. In
- * this case, this vendor can create a new macro ALTERNATIVE_2() based
- * on the following sample code and then replace ALTERNATIVE() with
- * ALTERNATIVE_2() to append its customized content.
+ * Variant of ALTERNATIVE() that supports two sets of replacement content.
  */
 #define ALTERNATIVE_2(old_content, new_content_1, vendor_id_1, patch_id_1, CONFIG_k_1, \
 		      new_content_2, vendor_id_2, patch_id_2, CONFIG_k_2) \
 	_ALTERNATIVE_CFG_2(old_content, new_content_1, vendor_id_1, patch_id_1, CONFIG_k_1, \
 			   new_content_2, vendor_id_2, patch_id_2, CONFIG_k_2)
 
+/*
+ * Variant of ALTERNATIVE() that supports three sets of replacement content.
+ */
+#define ALTERNATIVE_3(old_content, new_content_1, vendor_id_1, patch_id_1, CONFIG_k_1, \
+		      new_content_2, vendor_id_2, patch_id_2, CONFIG_k_2, \
+		      new_content_3, vendor_id_3, patch_id_3, CONFIG_k_3) \
+	_ALTERNATIVE_CFG_3(old_content, new_content_1, vendor_id_1, patch_id_1, CONFIG_k_1, \
+			   new_content_2, vendor_id_2, patch_id_2, CONFIG_k_2, \
+			   new_content_3, vendor_id_3, patch_id_3, CONFIG_k_3)
+
 #endif
-- 
2.47.2
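
A hypothetical call site, following the shape of existing
ALTERNATIVE_2() users (the VENDOR_n, PATCH_n, and CONFIG_n names below
are placeholders, not identifiers from this series): the old content
runs unless one of the three (vendor, patch_id, Kconfig) triples is
enabled, in which case the corresponding new content is patched in at
boot.

/* Sketch only; each "li" replacement is the same length as the old insn. */
static inline unsigned long which_variant(void)
{
	unsigned long v;

	asm(ALTERNATIVE_3("li %0, 0",
			  "li %0, 1", VENDOR_1, PATCH_1, CONFIG_1,
			  "li %0, 2", VENDOR_2, PATCH_2, CONFIG_2,
			  "li %0, 3", VENDOR_3, PATCH_3, CONFIG_3)
	    : "=r" (v));

	return v;	/* 0 unless one of the alternatives was applied */
}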
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 17/22] riscv: alternative: Allow calls with alternate link registers
Date: Wed, 12 Nov 2025 17:45:30 -0800
Message-ID: <20251113014656.2605447-18-samuel.holland@sifive.com>

Alternative assembly code may wish to use an alternate link register to
minimize the number of clobbered registers. Apply the offset fix to all
jalr (not jr) instructions, i.e. wherever rd is not x0.
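As an illustrative sketch (not part of the patch itself), an alternative can
now call a helper while clobbering only t0 and t1, using t0 as the link
register; patch 20 in this series relies on exactly this auipc/jalr pattern
to reach riscv_fixup_memory_alias(). The helper name below is a placeholder.

	/*
	 * Sketch, assuming a hypothetical "some_helper" symbol.  The
	 * auipc/jalr pair writes the return address to t0 (rd != x0), so
	 * with this patch riscv_alternative_fix_offsets() corrects its
	 * offset after the alternative is copied to its final location.
	 */
	asm(ALTERNATIVE(__nops(2),
			"1: auipc t0, %%pcrel_hi(some_helper)\n\t"
			"jalr t0, t0, %%pcrel_lo(1b)",
			0, RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT)
	    ::: "t0", "t1");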
Signed-off-by: Samuel Holland
---

(no changes since v1)

 arch/riscv/kernel/alternative.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kernel/alternative.c b/arch/riscv/kernel/alternative.c
index 7642704c7f18..e3eb2585faea 100644
--- a/arch/riscv/kernel/alternative.c
+++ b/arch/riscv/kernel/alternative.c
@@ -126,8 +126,8 @@ void riscv_alternative_fix_offsets(void *alt_ptr, unsigned int len,
 		if (!riscv_insn_is_jalr(insn2))
 			continue;
 
-		/* if instruction pair is a call, it will use the ra register */
-		if (RV_EXTRACT_RD_REG(insn) != 1)
+		/* if instruction pair is a call, it will save a link register */
+		if (RV_EXTRACT_RD_REG(insn) == 0)
 			continue;
 
 		riscv_alternative_fix_auipc_jalr(alt_ptr + i * sizeof(u32),
-- 
2.47.2
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 18/22] riscv: Fix logic for selecting DMA_DIRECT_REMAP
Date: Wed, 12 Nov 2025 17:45:31 -0800
Message-ID: <20251113014656.2605447-19-samuel.holland@sifive.com>

DMA_DIRECT_REMAP allows the kernel to make pages coherent for DMA by
remapping them in the page tables with a different pgprot_t value. On
RISC-V, this is supported by the page-based memory type extensions
(Svpbmt and Xtheadmae). It is independent from the software cache
maintenance extensions (Zicbom and Xtheadcmo).
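To make the distinction concrete, here is a rough sketch of what
DMA_DIRECT_REMAP fundamentally provides; this is not the in-tree kernel/dma
code, just a minimal model of the remapping step, and it only changes
effective cacheability if the hardware honors page-based memory types:

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Hedged sketch: remap already-allocated pages with a DMA-coherent
 * (non-cacheable) pgprot.  Zicbom/XTheadCmo only provide cache
 * *maintenance* operations and cannot alter a mapping's cacheability,
 * which is why they should not select DMA_DIRECT_REMAP.
 */
static void *remap_pages_noncacheable(struct page **pages, unsigned int count)
{
	return vmap(pages, count, VM_MAP, pgprot_dmacoherent(PAGE_KERNEL));
}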
Signed-off-by: Samuel Holland
---

Changes in v3:
 - New patch for v3

 arch/riscv/Kconfig        | 2 +-
 arch/riscv/Kconfig.errata | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index fadec20b87a8..cf5a4b5cdcd4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -598,6 +598,7 @@ config RISCV_ISA_SVPBMT
 	depends on 64BIT && MMU
 	depends on RISCV_ALTERNATIVE
 	default y
+	select DMA_DIRECT_REMAP
 	help
 	  Add support for the Svpbmt ISA-extension (Supervisor-mode:
 	  page-based memory types) in the kernel when it is detected at boot.
@@ -811,7 +812,6 @@ config RISCV_ISA_ZICBOM
 	depends on RISCV_ALTERNATIVE
 	default y
 	select RISCV_DMA_NONCOHERENT
-	select DMA_DIRECT_REMAP
 	help
 	  Add support for the Zicbom extension (Cache Block Management
 	  Operations) and enable its use in the kernel when it is detected
diff --git a/arch/riscv/Kconfig.errata b/arch/riscv/Kconfig.errata
index aca9b0cfcfec..46a353a266e5 100644
--- a/arch/riscv/Kconfig.errata
+++ b/arch/riscv/Kconfig.errata
@@ -108,6 +108,7 @@ config ERRATA_THEAD
 config ERRATA_THEAD_MAE
 	bool "Apply T-Head's memory attribute extension (XTheadMae) errata"
 	depends on ERRATA_THEAD && 64BIT && MMU
+	select DMA_DIRECT_REMAP
 	select RISCV_ALTERNATIVE_EARLY
 	default y
 	help
@@ -119,7 +120,6 @@ config ERRATA_THEAD_MAE
 config ERRATA_THEAD_CMO
 	bool "Apply T-Head cache management errata"
 	depends on ERRATA_THEAD && MMU
-	select DMA_DIRECT_REMAP
 	select RISCV_DMA_NONCOHERENT
 	select RISCV_NONSTANDARD_CACHE_OPS
 	default y
-- 
2.47.2
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 19/22] dt-bindings: riscv: Describe physical memory regions
Date: Wed, 12 Nov 2025 17:45:32 -0800
Message-ID: <20251113014656.2605447-20-samuel.holland@sifive.com>

Information about physical memory regions is needed by both the kernel
and M-mode firmware.
For example, the kernel needs to know about
noncacheable aliases of cacheable memory in order to allocate coherent
memory pages for DMA. M-mode firmware needs to know about those aliases
so it can protect itself from lower-privileged software.

The RISC-V Privileged Architecture delegates the description of
Physical Memory Attributes (PMAs) to the platform. On DT-based
platforms, it makes sense to put this information in the devicetree.

Signed-off-by: Samuel Holland
---

Changes in v3:
 - Split PMR_IS_ALIAS flag from PMR_ALIAS_MASK number
 - Add "model" property to DT binding example to fix validation

Changes in v2:
 - Remove references to Physical Address Width (no longer part of Smmpt)
 - Remove special first entry from the list of physical memory regions
 - Fix compatible string in DT binding example

 .../bindings/riscv/physical-memory.yaml       | 92 +++++++++++++++++++
 include/dt-bindings/riscv/physical-memory.h   | 45 +++++++++
 2 files changed, 137 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/riscv/physical-memory.yaml
 create mode 100644 include/dt-bindings/riscv/physical-memory.h

diff --git a/Documentation/devicetree/bindings/riscv/physical-memory.yaml b/Documentation/devicetree/bindings/riscv/physical-memory.yaml
new file mode 100644
index 000000000000..8beaa588c71c
--- /dev/null
+++ b/Documentation/devicetree/bindings/riscv/physical-memory.yaml
@@ -0,0 +1,92 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/riscv/physical-memory.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: RISC-V Physical Memory Regions
+
+maintainers:
+  - Samuel Holland
+
+description:
+  The RISC-V Privileged Architecture defines a number of Physical Memory
+  Attributes (PMAs) which apply to a given region of memory. These include the
+  types of accesses (read, write, execute, LR/SC, and/or AMO) allowed within
+  a region, the supported access widths and alignments, the cacheability and
+  coherence of the region, and whether or not accesses to the region may have
+  side effects.
+
+  Some RISC-V platforms provide multiple physical address mappings for main
+  memory or certain peripherals. Each alias of a region generally has different
+  PMAs (e.g. cacheable vs non-cacheable), which allows software to dynamically
+  select the PMAs for an access by referencing the corresponding alias.
+
+  On DT-based RISC-V platforms, this information is provided by the
+  riscv,physical-memory-regions property of the root node.
+
+properties:
+  $nodename:
+    const: '/'
+
+  riscv,physical-memory-regions:
+    $ref: /schemas/types.yaml#/definitions/uint32-matrix
+    description:
+      Each table entry provides PMAs for a specific physical memory region,
+      which must not overlap with any other table entry.
+    minItems: 1
+    maxItems: 256
+    items:
+      minItems: 4
+      maxItems: 6
+      additionalItems: true
+      items:
+        - description: CPU physical address (#address-cells)
+        - description: >
+            Size (#size-cells). For entry 0, if the size is zero, the size is
+            assumed to be 2^(32 * #size-cells).
+        - description: >
+            Flags describing the most restrictive PMAs for any address within
+            the region.
+
+            The least significant byte indicates the types of accesses allowed
+            for this region. Note that a memory region may support a type of
+            access (e.g. AMOs) even if the CPU does not.
+
+            The next byte describes the cacheability, coherence, idempotency,
+            and ordering PMAs for this region.
+            It also includes a flag to indicate that accesses to a region are
+            unsafe and must be prohibited by software (for example using PMPs
+            or Smmpt).
+
+            The third byte is reserved for future PMAs.
+
+            The most significant byte is the index of the lowest-numbered entry
+            which this entry is an alias of, if any. Aliases need not be the
+            same size, for example if a smaller memory region repeats within a
+            larger alias.
+        - description: Reserved for describing future PMAs
+
+additionalProperties: true
+
+examples:
+  - |
+    #include <dt-bindings/riscv/physical-memory.h>
+
+    / {
+        compatible = "beagle,beaglev-starlight-jh7100-r0", "starfive,jh7100";
+        model = "BeagleV Starlight Beta";
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        riscv,physical-memory-regions =
+            <0x00 0x18000000 0x00 0x00020000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY) 0x0>,
+            <0x00 0x18080000 0x00 0x00020000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY) 0x0>,
+            <0x00 0x41000000 0x00 0x1f000000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY) 0x0>,
+            <0x00 0x61000000 0x00 0x1f000000 (PMA_RWXA | PMA_NONCOHERENT_MEMORY | PMR_ALIAS(3)) 0x0>,
+            <0x00 0x80000000 0x08 0x00000000 (PMA_RWXA | PMA_NONCOHERENT_MEMORY) 0x0>,
+            <0x10 0x00000000 0x08 0x00000000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY | PMR_ALIAS(5)) 0x0>,
+            <0x20 0x00000000 0x10 0x00000000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY) 0x0>,
+            <0x30 0x00000000 0x10 0x00000000 (PMA_RWXA | PMA_NONCOHERENT_MEMORY | PMR_ALIAS(7)) 0x0>;
+    };
+
+...
diff --git a/include/dt-bindings/riscv/physical-memory.h b/include/dt-bindings/riscv/physical-memory.h
new file mode 100644
index 000000000000..d6ed8015c535
--- /dev/null
+++ b/include/dt-bindings/riscv/physical-memory.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+
+#ifndef _DT_BINDINGS_RISCV_PHYSICAL_MEMORY_H
+#define _DT_BINDINGS_RISCV_PHYSICAL_MEMORY_H
+
+#define PMA_READ		(1 << 0)
+#define PMA_WRITE		(1 << 1)
+#define PMA_EXECUTE		(1 << 2)
+#define PMA_AMO_MASK		(3 << 4)
+#define PMA_AMO_NONE		(0 << 4)
+#define PMA_AMO_SWAP		(1 << 4)
+#define PMA_AMO_LOGICAL		(2 << 4)
+#define PMA_AMO_ARITHMETIC	(3 << 4)
+#define PMA_RSRV_MASK		(3 << 6)
+#define PMA_RSRV_NONE		(0 << 6)
+#define PMA_RSRV_NON_EVENTUAL	(1 << 6)
+#define PMA_RSRV_EVENTUAL	(2 << 6)
+
+#define PMA_RW			(PMA_READ | PMA_WRITE)
+#define PMA_RWA			(PMA_RW | PMA_AMO_ARITHMETIC | PMA_RSRV_EVENTUAL)
+#define PMA_RWX			(PMA_RW | PMA_EXECUTE)
+#define PMA_RWXA		(PMA_RWA | PMA_EXECUTE)
+
+#define PMA_ORDER_MASK		(3 << 8)
+#define PMA_ORDER_IO_RELAXED	(0 << 8)
+#define PMA_ORDER_IO_STRONG	(1 << 8)
+#define PMA_ORDER_MEMORY	(2 << 8)
+#define PMA_READ_IDEMPOTENT	(1 << 10)
+#define PMA_WRITE_IDEMPOTENT	(1 << 11)
+#define PMA_CACHEABLE		(1 << 12)
+#define PMA_COHERENT		(1 << 13)
+
+#define PMA_UNSAFE		(1 << 15)
+
+#define PMA_IO			(PMA_ORDER_IO_RELAXED)
+#define PMA_NONCACHEABLE_MEMORY	(PMA_ORDER_MEMORY | PMA_READ_IDEMPOTENT | \
+				 PMA_WRITE_IDEMPOTENT)
+#define PMA_NONCOHERENT_MEMORY	(PMA_NONCACHEABLE_MEMORY | PMA_CACHEABLE)
+#define PMA_NORMAL_MEMORY	(PMA_NONCOHERENT_MEMORY | PMA_COHERENT)
+
+#define PMR_ALIAS_MASK		(0x7f << 24)
+#define PMR_IS_ALIAS		(0x80 << 24)
+#define PMR_ALIAS(n)		(PMR_IS_ALIAS | ((n) << 24))
+
+#endif /* _DT_BINDINGS_RISCV_PHYSICAL_MEMORY_H */
-- 
2.47.2
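As a cross-check of the encoding (a sketch, not code from the series): each
flag word in the example rows above is just an OR of these defines, so
PMA_RWXA works out to 0xb7, PMA_NONCOHERENT_MEMORY to 0x1e00, and PMR_ALIAS(3)
to 0x83000000. A consumer can then test the alias fields like this:

#include <linux/bitfield.h>
#include <linux/types.h>
#include <dt-bindings/riscv/physical-memory.h>

/*
 * Hedged sketch: decode the flags cell of one region entry and check
 * whether it names a noncached alias of entry @index.  The helper name
 * is illustrative, not part of the binding.
 */
static bool pmr_is_noncached_alias_of(u32 flags, u32 index)
{
	return (flags & PMR_IS_ALIAS) &&
	       FIELD_GET(PMR_ALIAS_MASK, flags) == index &&
	       !(flags & PMA_CACHEABLE);
}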
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 20/22] riscv: mm: Use physical memory aliases to apply PMAs
Date: Wed, 12 Nov 2025 17:45:33 -0800
Message-ID: <20251113014656.2605447-21-samuel.holland@sifive.com>

On some RISC-V platforms, RAM is mapped simultaneously to multiple
physical address ranges, with each alias having a different set of
statically-determined Physical Memory Attributes (PMAs). Software
alters the PMAs for a particular page at runtime by selecting a PFN
from among the aliases of that page's physical address.

Implement this by transforming the PFN when writing page tables. If the
memory type field is nonzero, replace the PFN with the corresponding
PFN from the noncached alias. Similarly, when reading from the page
tables, if the PFN is found in a noncached alias, replace it with the
PFN from the normal memory alias, and insert _PAGE_NOCACHE. The rest of
the kernel sees only PFNs from normal memory and _PAGE_MTMASK values as
if Svpbmt was implemented.

Memory alias pairs are determined from the devicetree. A Linux custom
ISA extension is added to trigger the alternative patching, as
alternatives must be linked to an extension or a vendor erratum, and
this behavior is not associated with any particular processor vendor.

Signed-off-by: Samuel Holland
---

Changes in v3:
 - Fix the logic to allow an alias to be paired with region entry 0
 - Select DMA_DIRECT_REMAP

Changes in v2:
 - Put new code behind a new Kconfig option RISCV_ISA_XLINUXMEMALIAS
 - Document the calling convention of riscv_fixup/unfix_memory_alias()
 - Do not transform !pte_present() (e.g. swap) PTEs
 - Export riscv_fixup/unfix_memory_alias() to fix module compilation

 arch/riscv/Kconfig                    |  17 ++++
 arch/riscv/include/asm/hwcap.h        |   1 +
 arch/riscv/include/asm/pgtable-64.h   |  44 +++++++--
 arch/riscv/include/asm/pgtable-bits.h |   5 +-
 arch/riscv/include/asm/pgtable.h      |   8 ++
 arch/riscv/kernel/cpufeature.c        |   6 ++
 arch/riscv/kernel/setup.c             |   1 +
 arch/riscv/mm/Makefile                |   1 +
 arch/riscv/mm/memory-alias.S          | 123 ++++++++++++++++++++++++++
 arch/riscv/mm/pgtable.c               |  91 +++++++++++++++++++
 arch/riscv/mm/ptdump.c                |   6 +-
 11 files changed, 291 insertions(+), 12 deletions(-)
 create mode 100644 arch/riscv/mm/memory-alias.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index cf5a4b5cdcd4..21efa0d9bdd4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -877,6 +877,23 @@ config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
 	  versions of clang and GCC to be passed to GAS, which has the same
 	  result as passing zicsr and zifencei to -march.
 
+config RISCV_ISA_XLINUXMEMALIAS
+	bool "Use physical memory aliases to emulate page-based memory types"
+	depends on 64BIT && MMU
+	depends on RISCV_ALTERNATIVE
+	default y
+	select DMA_DIRECT_REMAP
+	help
+	  Add support for the kernel to alter the Physical Memory Attributes
+	  (PMAs) of a page at runtime by selecting from among the aliases of
+	  that page in the physical address space.
+
+	  On systems where physical memory aliases are present, this option
+	  is required in order to mark pages as non-cacheable for use with
+	  non-coherent DMA devices.
+
+	  If you don't know what to do here, say Y.
+
 config FPU
 	bool "FPU support"
 	default y
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index affd63e11b0a..6c6349fe15a7 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -107,6 +107,7 @@
 #define RISCV_ISA_EXT_ZALRSC		98
 #define RISCV_ISA_EXT_ZICBOP		99
 
+#define RISCV_ISA_EXT_XLINUXMEMALIAS	126
 #define RISCV_ISA_EXT_XLINUXENVCFG	127
 
 #define RISCV_ISA_EXT_MAX		128
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index fa2c1dcb6f72..f1ecd022e3ee 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -97,7 +97,8 @@ enum napot_cont_order {
 #define HUGE_MAX_HSTATE		2
 #endif
 
-#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_RISCV_ISA_XLINUXMEMALIAS) || \
+    defined(CONFIG_ERRATA_THEAD_MAE)
 
 /*
  * ALT_FIXUP_MT
@@ -107,6 +108,9 @@ enum napot_cont_order {
  *
  * On systems that support Svpbmt, the memory type bits are left alone.
  *
+ * On systems that support XLinuxMemalias, PTEs with a nonzero memory type have
+ * the memory type bits cleared and the PFN replaced with the matching alias.
+ *
  * On systems that support XTheadMae, a Svpbmt memory type is transformed
  * into the corresponding XTheadMae memory type.
  *
@@ -129,22 +133,35 @@ enum napot_cont_order {
  */
 
 #define ALT_FIXUP_MT(_val)						\
-	asm(ALTERNATIVE_2("addi t0, zero, 0x3\n\t"			\
+	asm(ALTERNATIVE_3("addi t0, zero, 0x3\n\t"			\
			  "slli t0, t0, 61\n\t"				\
			  "not t0, t0\n\t"				\
			  "and %0, %0, t0\n\t"				\
			  "nop\n\t"					\
			  "nop\n\t"					\
+			  "nop\n\t"					\
			  "nop",					\
-			  __nops(7),					\
+			  __nops(8),					\
			  0, RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT,	\
+			  "addi t0, zero, 0x3\n\t"			\
+			  "slli t0, t0, 61\n\t"				\
+			  "and t0, %0, t0\n\t"				\
+			  "beqz t0, 2f\n\t"				\
+			  "xor t1, %0, t0\n\t"				\
+			  "1: auipc t0, %%pcrel_hi(riscv_fixup_memory_alias)\n\t"	\
+			  "jalr t0, t0, %%pcrel_lo(1b)\n\t"		\
+			  "mv %0, t1\n"					\
+			  "2:",						\
+			  0, RISCV_ISA_EXT_XLINUXMEMALIAS,		\
+			  CONFIG_RISCV_ISA_XLINUXMEMALIAS,		\
			  "srli t0, %0, 59\n\t"				\
			  "seqz t1, t0\n\t"				\
			  "slli t1, t1, 1\n\t"				\
			  "or t0, t0, t1\n\t"				\
			  "xori t0, t0, 0x5\n\t"			\
			  "slli t0, t0, 60\n\t"				\
-			  "xor %0, %0, t0",				\
+			  "xor %0, %0, t0\n\t"				\
+			  "nop",					\
			  THEAD_VENDOR_ID, ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE)	\
	    : "+r" (_val) :: "t0", "t1")
 
@@ -152,9 +169,9 @@
 
 #define ALT_FIXUP_MT(_val)
 
-#endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_ERRATA_THEAD_MAE */
+#endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_RISCV_ISA_XLINUXMEMALIAS || CONFIG_ERRATA_THEAD_MAE */
 
-#if defined(CONFIG_ERRATA_THEAD_MAE)
+#if defined(CONFIG_RISCV_ISA_XLINUXMEMALIAS) || defined(CONFIG_ERRATA_THEAD_MAE)
 
 /*
  * ALT_UNFIX_MT
@@ -162,6 +179,9 @@ enum napot_cont_order {
  * On systems that support Svpbmt, or do not support any form of page-based
  * memory type configuration, the memory type bits are left alone.
  *
+ * On systems that support XLinuxMemalias, PTEs with an aliased PFN have the
+ * matching memory type set and the PFN replaced with the normal memory alias.
+ *
  * On systems that support XTheadMae, the XTheadMae memory type (or zero) is
  * transformed back into the corresponding Svpbmt memory type.
  *
@@ -172,7 +192,15 @@ enum napot_cont_order {
  */
 
 #define ALT_UNFIX_MT(_val)						\
-	asm(ALTERNATIVE(__nops(6),					\
+	asm(ALTERNATIVE_2(__nops(6),					\
+			  "mv t1, %0\n\t"				\
+			  "1: auipc t0, %%pcrel_hi(riscv_unfix_memory_alias)\n\t"	\
+			  "jalr t0, t0, %%pcrel_lo(1b)\n\t"		\
+			  "mv %0, t1\n\t"				\
+			  "nop\n\t"					\
+			  "nop",					\
+			  0, RISCV_ISA_EXT_XLINUXMEMALIAS,		\
+			  CONFIG_RISCV_ISA_XLINUXMEMALIAS,		\
			  "srli t0, %0, 60\n\t"				\
			  "andi t0, t0, 0xd\n\t"			\
			  "srli t1, t0, 1\n\t"				\
@@ -236,7 +264,7 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
 
 #define ALT_UNFIX_MT(_val)
 
-#endif /* CONFIG_ERRATA_THEAD_MAE */
+#endif /* CONFIG_RISCV_ISA_XLINUXMEMALIAS || CONFIG_ERRATA_THEAD_MAE */
 
 static inline int pud_present(pud_t pud)
 {
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
index 18c50cbd78bf..4586917b2d98 100644
--- a/arch/riscv/include/asm/pgtable-bits.h
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -38,7 +38,8 @@
 #define _PAGE_PFN_MASK  GENMASK(31, 10)
 #endif /* CONFIG_64BIT */
 
-#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_RISCV_ISA_XLINUXMEMALIAS) || \
+    defined(CONFIG_ERRATA_THEAD_MAE)
 /*
  * [62:61] Svpbmt Memory Type definitions:
  *
@@ -54,7 +55,7 @@
 #define _PAGE_NOCACHE		0
 #define _PAGE_IO		0
 #define _PAGE_MTMASK		0
-#endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_ERRATA_THEAD_MAE */
+#endif /* CONFIG_RISCV_ISA_SVPBMT || CONFIG_RISCV_ISA_XLINUXMEMALIAS || CONFIG_ERRATA_THEAD_MAE */
 
 #ifdef CONFIG_RISCV_ISA_SVNAPOT
 #define _PAGE_NAPOT_SHIFT	63
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 8b622f901707..27e8c20af0e2 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -1113,6 +1113,14 @@ extern u64 satp_mode;
 void paging_init(void);
 void misc_mem_init(void);
 
+#ifdef CONFIG_RISCV_ISA_XLINUXMEMALIAS
+bool __init riscv_have_memory_alias(void);
+void __init riscv_init_memory_alias(void);
+#else
+static inline bool riscv_have_memory_alias(void) { return false; }
+static inline void riscv_init_memory_alias(void) {}
+#endif /* CONFIG_RISCV_ISA_XLINUXMEMALIAS */
+
 /*
  * ZERO_PAGE is a global shared page that is always zero,
  * used for zero-mapped memory areas, etc.
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 72ca768f4e91..ee59b160e886 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -1093,6 +1093,12 @@ void __init riscv_fill_hwcap(void)
 		riscv_v_setup_vsize();
 	}
 
+	/* Vendor-independent alternatives require a bit in the ISA bitmap. */
+	if (riscv_have_memory_alias()) {
+		set_bit(RISCV_ISA_EXT_XLINUXMEMALIAS, riscv_isa);
+		pr_info("Using physical memory alias for noncached mappings\n");
+	}
+
 	memset(print_str, 0, sizeof(print_str));
 	for (i = 0, j = 0; i < NUM_ALPHA_EXTS; i++)
 		if (riscv_isa[0] & BIT_MASK(i))
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index b5bc5fc65cea..a6f821150101 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -357,6 +357,7 @@ void __init setup_arch(char **cmdline_p)
 	}
 
 	riscv_init_cbo_blocksizes();
+	riscv_init_memory_alias();
 	riscv_fill_hwcap();
 	apply_boot_alternatives();
 	init_rt_signal_env();
diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
index b916a68d324a..b4d757226efb 100644
--- a/arch/riscv/mm/Makefile
+++ b/arch/riscv/mm/Makefile
@@ -33,3 +33,4 @@ endif
 obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
 obj-$(CONFIG_RISCV_DMA_NONCOHERENT) += dma-noncoherent.o
 obj-$(CONFIG_RISCV_NONSTANDARD_CACHE_OPS) += cache-ops.o
+obj-$(CONFIG_RISCV_ISA_XLINUXMEMALIAS) += memory-alias.o
diff --git a/arch/riscv/mm/memory-alias.S b/arch/riscv/mm/memory-alias.S
new file mode 100644
index 000000000000..e37b83d11591
--- /dev/null
+++ b/arch/riscv/mm/memory-alias.S
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 SiFive
+ */
+
+#include
+#include
+#include
+#include
+
+#define CACHED_BASE_OFFSET	(0 * RISCV_SZPTR)
+#define NONCACHED_BASE_OFFSET	(1 * RISCV_SZPTR)
+#define SIZE_OFFSET		(2 * RISCV_SZPTR)
+
+#define SIZEOF_PAIR		(4 * RISCV_SZPTR)
+
+/*
+ * Called from ALT_FIXUP_MT with a non-standard calling convention:
+ *	t0 => return address
+ *	t1 => page table entry
+ *	all other registers are callee-saved
+ */
+SYM_CODE_START(riscv_fixup_memory_alias)
+	addi	sp, sp, -4 * SZREG
+	REG_S	t2, (0 * SZREG)(sp)
+	REG_S	t3, (1 * SZREG)(sp)
+	REG_S	t4, (2 * SZREG)(sp)
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+	REG_S	t5, (3 * SZREG)(sp)
+
+	/* Save and mask off _PAGE_NAPOT if present. */
+	li	t5, _PAGE_NAPOT
+	and	t5, t1, t5
+	xor	t1, t1, t5
+#endif
+
+	/* Ignore !pte_present() PTEs, including swap PTEs. */
+	andi	t2, t1, (_PAGE_PRESENT | _PAGE_PROT_NONE)
+	beqz	t2, .Lfixup_end
+
+	lla	t2, memory_alias_pairs
+.Lfixup_loop:
+	REG_L	t3, SIZE_OFFSET(t2)
+	beqz	t3, .Lfixup_end
+	REG_L	t4, CACHED_BASE_OFFSET(t2)
+	sub	t4, t1, t4
+	bltu	t4, t3, .Lfixup_found
+	addi	t2, t2, SIZEOF_PAIR
+	j	.Lfixup_loop
+
+.Lfixup_found:
+	REG_L	t3, NONCACHED_BASE_OFFSET(t2)
+	add	t1, t3, t4
+
+.Lfixup_end:
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+	xor	t1, t1, t5
+
+	REG_L	t5, (3 * SZREG)(sp)
+#endif
+	REG_L	t4, (2 * SZREG)(sp)
+	REG_L	t3, (1 * SZREG)(sp)
+	REG_L	t2, (0 * SZREG)(sp)
+	addi	sp, sp, 4 * SZREG
+	jr	t0
+SYM_CODE_END(riscv_fixup_memory_alias)
+EXPORT_SYMBOL(riscv_fixup_memory_alias)
+
+/*
+ * Called from ALT_UNFIX_MT with a non-standard calling convention:
+ *	t0 => return address
+ *	t1 => page table entry
+ *	all other registers are callee-saved
+ */
+SYM_CODE_START(riscv_unfix_memory_alias)
+	addi	sp, sp, -4 * SZREG
+	REG_S	t2, (0 * SZREG)(sp)
+	REG_S	t3, (1 * SZREG)(sp)
+	REG_S	t4, (2 * SZREG)(sp)
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+	REG_S	t5, (3 * SZREG)(sp)
+
+	/* Save and mask off _PAGE_NAPOT if present. */
+	li	t5, _PAGE_NAPOT
+	and	t5, t1, t5
+	xor	t1, t1, t5
+#endif
+
+	/* Ignore !pte_present() PTEs, including swap PTEs. */
+	andi	t2, t1, (_PAGE_PRESENT | _PAGE_PROT_NONE)
+	beqz	t2, .Lunfix_end
+
+	lla	t2, memory_alias_pairs
+.Lunfix_loop:
+	REG_L	t3, SIZE_OFFSET(t2)
+	beqz	t3, .Lunfix_end
+	REG_L	t4, NONCACHED_BASE_OFFSET(t2)
+	sub	t4, t1, t4
+	bltu	t4, t3, .Lunfix_found
+	addi	t2, t2, SIZEOF_PAIR
+	j	.Lunfix_loop
+
+.Lunfix_found:
+	REG_L	t3, CACHED_BASE_OFFSET(t2)
+	add	t1, t3, t4
+
+	/* PFN was in the noncached alias, so mark it as such. */
+	li	t2, _PAGE_NOCACHE
+	or	t1, t1, t2
+
+.Lunfix_end:
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+	xor	t1, t1, t5
+
+	REG_L	t5, (3 * SZREG)(sp)
+#endif
+	REG_L	t4, (2 * SZREG)(sp)
+	REG_L	t3, (1 * SZREG)(sp)
+	REG_L	t2, (0 * SZREG)(sp)
+	addi	sp, sp, 4 * SZREG
+	jr	t0
+SYM_CODE_END(riscv_unfix_memory_alias)
+EXPORT_SYMBOL(riscv_unfix_memory_alias)
diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
index 604744d6924f..45f6a0ac22fa 100644
--- a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -1,8 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0
 
 #include
+#include
+#include
 #include
 #include
+#include
+#include
 #include
 
 int ptep_set_access_flags(struct vm_area_struct *vma,
@@ -160,3 +164,90 @@ pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
 	return old;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+#ifdef CONFIG_RISCV_ISA_XLINUXMEMALIAS
+struct memory_alias_pair {
+	unsigned long cached_base;
+	unsigned long noncached_base;
+	unsigned long size;
+	int index;
+} memory_alias_pairs[5];
+
+bool __init riscv_have_memory_alias(void)
+{
+	return memory_alias_pairs[0].size;
+}
+
+void __init riscv_init_memory_alias(void)
+{
+	int na = of_n_addr_cells(of_root);
+	int ns = of_n_size_cells(of_root);
+	int nc = na + ns + 2;
+	const __be32 *prop;
+	int pairs = 0;
+	int len;
+
+	prop = of_get_property(of_root, "riscv,physical-memory-regions", &len);
+	if (!prop)
+		return;
+
+	len /= sizeof(__be32);
+	for (int i = 0; len >= nc; i++, prop += nc, len -= nc) {
+		unsigned long base = of_read_ulong(prop, na);
+		unsigned long size = of_read_ulong(prop + na, ns);
+		unsigned long flags = be32_to_cpup(prop + na + ns);
+		struct memory_alias_pair *pair;
+
+		/* We only care about non-coherent memory. */
+		if ((flags & PMA_ORDER_MASK) != PMA_ORDER_MEMORY || (flags & PMA_COHERENT))
+			continue;
+
+		/* The cacheable alias must be usable memory. */
+		if ((flags & PMA_CACHEABLE) &&
+		    !memblock_overlaps_region(&memblock.memory, base, size))
+			continue;
+
+		if (flags & PMR_IS_ALIAS) {
+			int alias = FIELD_GET(PMR_ALIAS_MASK, flags);
+
+			pair = NULL;
+			for (int j = 0; j < pairs; j++) {
+				if (alias == memory_alias_pairs[j].index) {
+					pair = &memory_alias_pairs[j];
+					break;
+				}
+			}
+			if (!pair)
+				continue;
+		} else {
+			/* Leave room for the null sentinel. */
+			if (pairs == ARRAY_SIZE(memory_alias_pairs) - 1)
+				continue;
+			pair = &memory_alias_pairs[pairs++];
+			pair->index = i;
+		}
+
+		/* Align the address and size with the page table PFN field. */
+		base >>= PAGE_SHIFT - _PAGE_PFN_SHIFT;
+		size >>= PAGE_SHIFT - _PAGE_PFN_SHIFT;
+
+		if (flags & PMA_CACHEABLE)
+			pair->cached_base = base;
+		else
+			pair->noncached_base = base;
+		pair->size = min_not_zero(pair->size, size);
+	}
+
+	/* Remove any unmatched pairs. */
+	for (int i = 0; i < pairs; i++) {
+		struct memory_alias_pair *pair = &memory_alias_pairs[i];
+
+		if (pair->cached_base && pair->noncached_base && pair->size)
+			continue;
+
+		for (int j = i + 1; j < pairs; j++)
+			memory_alias_pairs[j - 1] = memory_alias_pairs[j];
+		memory_alias_pairs[--pairs].size = 0;
+	}
+}
+#endif /* CONFIG_RISCV_ISA_XLINUXMEMALIAS */
diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 763ffde8ab5e..29a7be14cca5 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -140,7 +140,8 @@ static const struct prot_bits pte_bits[] = {
 		.clear = ".",
 	}, {
 #endif
-#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_RISCV_ISA_XLINUXMEMALIAS) || \
+    defined(CONFIG_ERRATA_THEAD_MAE)
 		.mask = _PAGE_MTMASK,
 		.set = "MT(%s)",
 		.clear = "  ..  ",
@@ -216,7 +217,8 @@ static void dump_prot(struct pg_state *st)
 		if (val) {
 			if (pte_bits[i].mask == _PAGE_SOFT)
 				sprintf(s, pte_bits[i].set, val >> 8);
-#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_ERRATA_THEAD_MAE)
+#if defined(CONFIG_RISCV_ISA_SVPBMT) || defined(CONFIG_RISCV_ISA_XLINUXMEMALIAS) || \
+    defined(CONFIG_ERRATA_THEAD_MAE)
 			else if (pte_bits[i].mask == _PAGE_MTMASK) {
 				if (val == _PAGE_NOCACHE)
 					sprintf(s, pte_bits[i].set, "NC");
-- 
2.47.2
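Restated in C for exposition (an illustrative model only; the authoritative
implementation is the assembly in memory-alias.S above, invoked through the
patched alternatives with its non-standard calling convention), the
write-side PTE transformation behaves roughly as follows:

/*
 * Hedged model of riscv_fixup_memory_alias(): a present PTE that
 * requests a non-default memory type has the type bits cleared and its
 * PFN moved into the noncached alias; everything else passes through.
 * _PAGE_NAPOT handling is omitted; the function name is illustrative.
 */
static unsigned long fixup_memory_alias_model(unsigned long pteval)
{
	const struct memory_alias_pair *p;

	if (!(pteval & (_PAGE_PRESENT | _PAGE_PROT_NONE)))
		return pteval;			/* swap PTE etc.: untouched */
	if (!(pteval & _PAGE_MTMASK))
		return pteval;			/* default, cacheable type */

	pteval &= ~_PAGE_MTMASK;
	for (p = memory_alias_pairs; p->size; p++) {
		unsigned long off = pteval - p->cached_base;

		if (off < p->size)		/* bases/sizes pre-shifted */
			return p->noncached_base + off;
	}
	return pteval;
}

The read side (riscv_unfix_memory_alias) is the inverse lookup, adding
_PAGE_NOCACHE when the PFN is found in a noncached alias.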
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Subject: [PATCH v3 21/22] riscv: dts: starfive: jh7100: Use physical memory ranges for DMA
Date: Wed, 12 Nov 2025 17:45:34 -0800
Message-ID: <20251113014656.2605447-22-samuel.holland@sifive.com>

JH7100 provides a physical memory region which is a noncached alias of
normal cacheable DRAM.
Now that Linux can apply PMAs by selecting between
aliases of a physical memory region, any page of DRAM can be marked as
noncached for use with DMA, and the preallocated DMA pool is no longer
needed. This allows portable kernels to boot on JH7100 boards.

Signed-off-by: Samuel Holland
---

Changes in v3:
 - Fix the entry number of the paired region in the DT
 - Keep the ERRATA_STARFIVE_JH7100 option but update its description

Changes in v2:
 - Move the JH7100 DT changes from jh7100-common.dtsi to jh7100.dtsi
 - Keep RISCV_DMA_NONCOHERENT and RISCV_NONSTANDARD_CACHE_OPS selected

 arch/riscv/Kconfig.errata                     |  9 +++----
 arch/riscv/Kconfig.socs                       |  2 ++
 .../boot/dts/starfive/jh7100-common.dtsi      | 24 -------------------
 arch/riscv/boot/dts/starfive/jh7100.dtsi      |  4 ++++
 4 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/arch/riscv/Kconfig.errata b/arch/riscv/Kconfig.errata
index 46a353a266e5..be5afec66eaa 100644
--- a/arch/riscv/Kconfig.errata
+++ b/arch/riscv/Kconfig.errata
@@ -77,13 +77,11 @@ config ERRATA_SIFIVE_CIP_1200
 	  If you don't know what to do here, say "Y".
 
 config ERRATA_STARFIVE_JH7100
-	bool "StarFive JH7100 support"
+	bool "StarFive JH7100 support for old devicetrees"
 	depends on ARCH_STARFIVE
 	depends on !DMA_DIRECT_REMAP
 	depends on NONPORTABLE
 	select DMA_GLOBAL_POOL
-	select RISCV_DMA_NONCOHERENT
-	select RISCV_NONSTANDARD_CACHE_OPS
 	select SIFIVE_CCACHE
 	default n
 	help
@@ -93,7 +91,10 @@ config ERRATA_STARFIVE_JH7100
 	  cache operations through the SiFive cache controller.
 
 	  Say "Y" if you want to support the BeagleV Starlight and/or
-	  StarFive VisionFive V1 boards.
+	  StarFive VisionFive V1 boards with older devicetrees that reserve
+	  memory for DMA using a "shared-dma-pool". If your devicetree has
+	  the "riscv,physical-memory-regions" property, you should instead
+	  enable RISCV_ISA_XLINUXMEMALIAS and use a portable kernel.
 
 config ERRATA_THEAD
 	bool "T-HEAD errata"
diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
index 848e7149e443..a8950206fb75 100644
--- a/arch/riscv/Kconfig.socs
+++ b/arch/riscv/Kconfig.socs
@@ -50,6 +50,8 @@ config SOC_STARFIVE
 	bool "StarFive SoCs"
 	select PINCTRL
 	select RESET_CONTROLLER
+	select RISCV_DMA_NONCOHERENT
+	select RISCV_NONSTANDARD_CACHE_OPS
 	select ARM_AMBA
 	help
 	  This enables support for StarFive SoC platform hardware.
diff --git a/arch/riscv/boot/dts/starfive/jh7100-common.dtsi b/arch/riscv/boot/dts/starfive/jh7100-common.dtsi
index ae1a6aeb0aea..47d0cf55bfc0 100644
--- a/arch/riscv/boot/dts/starfive/jh7100-common.dtsi
+++ b/arch/riscv/boot/dts/starfive/jh7100-common.dtsi
@@ -42,30 +42,6 @@ led-ack {
 		};
 	};
 
-	reserved-memory {
-		#address-cells = <2>;
-		#size-cells = <2>;
-		ranges;
-
-		dma-reserved@fa000000 {
-			reg = <0x0 0xfa000000 0x0 0x1000000>;
-			no-map;
-		};
-
-		linux,dma@107a000000 {
-			compatible = "shared-dma-pool";
-			reg = <0x10 0x7a000000 0x0 0x1000000>;
-			no-map;
-			linux,dma-default;
-		};
-	};
-
-	soc {
-		dma-ranges = <0x00 0x80000000 0x00 0x80000000 0x00 0x7a000000>,
-			     <0x00 0xfa000000 0x10 0x7a000000 0x00 0x01000000>,
-			     <0x00 0xfb000000 0x00 0xfb000000 0x07 0x85000000>;
-	};
-
 	wifi_pwrseq: wifi-pwrseq {
 		compatible = "mmc-pwrseq-simple";
 		reset-gpios = <&gpio 37 GPIO_ACTIVE_LOW>;
diff --git a/arch/riscv/boot/dts/starfive/jh7100.dtsi b/arch/riscv/boot/dts/starfive/jh7100.dtsi
index 7de0732b8eab..c7d7ec9ed8c9 100644
--- a/arch/riscv/boot/dts/starfive/jh7100.dtsi
+++ b/arch/riscv/boot/dts/starfive/jh7100.dtsi
@@ -7,11 +7,15 @@
 /dts-v1/;
 #include
 #include
+#include <dt-bindings/riscv/physical-memory.h>
 
 / {
 	compatible = "starfive,jh7100";
 	#address-cells = <2>;
 	#size-cells = <2>;
+	riscv,physical-memory-regions =
+		<0x00 0x80000000 0x08 0x00000000 (PMA_RWXA | PMA_NONCOHERENT_MEMORY) 0x0>,
+		<0x10 0x00000000 0x08 0x00000000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY | PMR_ALIAS(0)) 0x0>;
 
 	cpus: cpus {
 		#address-cells = <1>;
-- 
2.47.2
From nobody Tue Dec 9 02:55:26 2025
From: Samuel Holland
Howlett" , Samuel Holland Subject: [PATCH v3 22/22] riscv: dts: eswin: eic7700: Use physical memory ranges for DMA Date: Wed, 12 Nov 2025 17:45:35 -0800 Message-ID: <20251113014656.2605447-23-samuel.holland@sifive.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20251113014656.2605447-1-samuel.holland@sifive.com> References: <20251113014656.2605447-1-samuel.holland@sifive.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" EIC7700 provides a physical memory region which is a noncached alias of normal cacheable DRAM. Declare this alias in the devicetree so Linux can allocate noncached pages for noncoherent DMA, and M-mode firmware can protect the noncached alias with PMPs. Signed-off-by: Samuel Holland --- Changes in v3: - Fix the entry number of the paired region in the DT Changes in v2: - New patch for v2 arch/riscv/Kconfig.socs | 2 ++ arch/riscv/boot/dts/eswin/eic7700.dtsi | 5 +++++ 2 files changed, 7 insertions(+) diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs index a8950206fb75..df3ed1d322fe 100644 --- a/arch/riscv/Kconfig.socs +++ b/arch/riscv/Kconfig.socs @@ -9,6 +9,8 @@ config ARCH_ANDES =20 config ARCH_ESWIN bool "ESWIN SoCs" + select RISCV_DMA_NONCOHERENT + select RISCV_NONSTANDARD_CACHE_OPS help This enables support for ESWIN SoC platform hardware, including the ESWIN EIC7700 SoC. diff --git a/arch/riscv/boot/dts/eswin/eic7700.dtsi b/arch/riscv/boot/dts/e= swin/eic7700.dtsi index c3ed93008bca..d566bca4e09e 100644 --- a/arch/riscv/boot/dts/eswin/eic7700.dtsi +++ b/arch/riscv/boot/dts/eswin/eic7700.dtsi @@ -5,9 +5,14 @@ =20 /dts-v1/; =20 +#include + / { #address-cells =3D <2>; #size-cells =3D <2>; + riscv,physical-memory-regions =3D + <0x000 0x80000000 0x00f 0x80000000 (PMA_RWXA | PMA_NONCOHERENT_MEMORY) 0= x0>, + <0x0c0 0x00000000 0x010 0x00000000 (PMA_RWX | PMA_NONCACHEABLE_MEMORY | = PMR_ALIAS(0)) 0x0>; =20 cpus { #address-cells =3D <1>; --=20 2.47.2