As per [0] I think ASI is ready to start merging. This is the first
step. The scope of this series is: everything needed to set up the
direct map in the restricted address spaces.
.:: Scope
Why is this the scope of the first series? The objective here is to
reach a MVP of ASI that people can actually run, as soon as possible.
Very broadly, this requires a) a restricted address space to exist and
b) a bunch of logic for transitioning in and out of it. An MVP of
ASI doesn't require too much flexibility w.r.t. the contents of the
restricted address space, but at least being able to omit user data from
the direct map seems like a good starting point. The rest of the address
space can be constructed trivially by just cloning the unrestricted
address space as illustrated in [1] (a commit from the branch published
in [0]), but that isn't included in this series; this series covers only
the direct map.
So this series focuses on part a). The alternative would be to focus on
part b) first, trivially creating the entire restricted address space as
a clone of the unrestricted one (i.e. starting from an ASI that protects
nothing).
.:: Design
Whether or not memory will be mapped into the restricted address space
("sensitivity") is determined at allocation time. This is encoded in a
new GFP flag called __GFP_SENSITIVE, which is added to GFP_USER. Some
early discussions questioned whether this GFP flag is really needed or
if we could instead determine sensitivity by some contextual hint. I'm
not aware of something that could provide this hint at the moment, but
if one exists I'd be happy to use it here. However, in the long term
it should be assumed that a GFP flag will need to appear eventually,
since we'll need to be able to annotate the sensitivity of pretty much
arbitrary memory.
So, the important thing we end up needing to design here is what
the allocator does with __GFP_SENSITIVE. This was discussed in [2] and
at LSF/MM/BPF 2024 [3]. The allocator needs to be able to map and unmap
pages into the restricted address space. Problems with this are:
1. Changing mappings might require allocating pagetables (allocating
while allocating).
2. Unmapping pages requires a TLB shootdown, which is slow and anyway
can't be done with IRQs off.
3. Mapping pages into the restricted address space, in the general case,
requires zeroing them in case they contain leftover data that was
previously sensitive.
The simple solution for point 1 is to just set a minimum granularity at
which sensitivity can change, and pre-allocate direct map pagetables
down to that granularity. This suggests that pages need to be physically
grouped by sensitivity. The last two points illustrate that changing
sensitivity is highly undesirable from a performance point of view. All
of this adds up to needing to be able to index free pages by
sensitivity, leading to the conclusion that we want separate freelists
for sensitive and nonsensitive pages.
The page allocator already has a mechanism to physically group, and to
index pages, by a property: migratetype. The approach taken here is to
extend this concept to additionally encode sensitivity. So when ASI is
enabled, we basically double the number of free-page lists,
and add a pageblock flag that can be used to check a page's sensitivity
without needing to walk pagetables.
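As a rough illustration of that design (assumptions only, not the
series' code: the type name matches the "mm: introduce freetype_t" patch
below, but the field names, bit layout and helpers here are invented),
the encoding could look something like:

/*
 * Illustrative sketch of the "freetype" idea: an index into the per-zone
 * free lists that combines the existing migratetype with a sensitivity
 * bit, doubling the number of free lists when ASI is enabled.
 */
#include <stdbool.h>

#define NR_MIGRATETYPES		6	/* assumption for illustration */

typedef struct {
	unsigned int ft;		/* migratetype | sensitivity bit */
} freetype_t;

#define FREETYPE_SENSITIVE	(1u << 3)	/* assumes NR_MIGRATETYPES <= 8 */

static inline freetype_t freetype(unsigned int migratetype, bool sensitive)
{
	return (freetype_t){ .ft = migratetype |
				   (sensitive ? FREETYPE_SENSITIVE : 0) };
}

static inline unsigned int free_migratetype(freetype_t ft)
{
	return ft.ft & ~FREETYPE_SENSITIVE;
}

static inline bool free_sensitive(freetype_t ft)
{
	return ft.ft & FREETYPE_SENSITIVE;
}

/* Total number of free lists per order: doubled when sensitivity is encoded. */
#define NR_FREETYPES		(NR_MIGRATETYPES * 2)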
.:: Structure of the series
Some generic boilerplate for ASI:
x86/mm/asi: Add CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
x86/mm/asi: add X86_FEATURE_ASI and asi=
Minimal ASI setup specifically for direct map management:
x86/mm: factor out phys_pgd_init()
x86/mm/asi: set up asi_nonsensitive_pgd
x86/mm/pat: mirror direct map changes to ASI
mm/page_alloc: add __GFP_SENSITIVE and always set it
Misc preparatory patches for easier review:
mm: introduce for_each_free_list()
mm: rejig pageblock mask definitions
mm/page_alloc: Invert is_check_pages_enabled() check
mm/page_alloc: remove ifdefs from pindex helpers
One very big annoying preparatory patch, separated to try and mitigate
review pain (sorry, I don't love this, but I think it's the best way):
mm: introduce freetype_t
The interesting bit where the actual functionality gets added:
mm/asi: encode sensitivity in freetypes and pageblocks
mm/page_alloc_test: unit test pindex helpers
x86/mm/pat: introduce cpa_fault option
mm/page_alloc: rename ALLOC_NON_BLOCK back to _HARDER
mm/page_alloc: introduce ALLOC_NOBLOCK
mm/slub: defer application of gfp_allowed_mask
mm/asi: support changing pageblock sensitivity
Misc other stuff that feels just related enough to go in this series:
mm/asi: bad_page() when ASI mappings are wrong
x86/mm/asi: don't use global pages when ASI enabled
mm: asi_test: Smoke test for [non]sensitive page allocs
.:: Testing
Google is running ASI in production, but this implementation is totally
different (the way we manage the direct map internally is not good;
things are working nicely so far, but as we expand its footprint we
expect to run into an unfixable performance issue sooner or later).
Aside from the KUnit tests I've just tested this in a VM by running
these tests from run_vmtests.sh:
compaction, cow, migration, mmap, hugetlb
thp fails, but this also happens without these patches - I think it's a
bug with ksft_set_plan(); I'll try to investigate when I can.
Anyway if anyone has more tests they'd like me to do please let me
know. In particular I don't think anything on the list above will
exercise CMA or memory hotplug, but I don't know a good way to do that.
Also note that, aside from the KUnit tests (which do a super minimal
check), nothing here cares about the actual validity of the restricted
address space; the testing is just to try and catch cases where ASI
breaks non-ASI logic.
If people are interested, I can start a kind of "asi-next" branch that
contains everything from this patchset plus all the remaining prototype
logic to actually run ASI. Let me know if that seems useful to you
(I will have to do it sooner or later for benchmarking anyway).
[0] [Discuss] First steps for ASI (ASI is fast again)
https://lore.kernel.org/all/20250812173109.295750-1-jackmanb@google.com/
[1] mm: asi: Share most of the kernel address space with unrestricted
https://github.com/bjackman/linux/commit/04fd7a0b0098a
[2] [PATCH RFC 00/11] mm: ASI integration for the page allocator
https://lore.kernel.org/lkml/20250313-asi-page-alloc-v1-0-04972e046cea@google.com/
[3] LSF/MM/BPF 2025 slides
https://docs.google.com/presentation/d/1waibhMBXhfJ2qVEz8KtXop9MZ6UyjlWmK71i0WIH7CY/edit?slide=id.p#slide=id.p
CP:
https://lore.kernel.org/all/20250129124034.2612562-1-jackmanb@google.com/
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
Brendan Jackman (21):
x86/mm/asi: Add CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION
x86/mm/asi: add X86_FEATURE_ASI and asi=
x86/mm: factor out phys_pgd_init()
x86/mm/asi: set up asi_nonsensitive_pgd
x86/mm/pat: mirror direct map changes to ASI
mm/page_alloc: add __GFP_SENSITIVE and always set it
mm: introduce for_each_free_list()
mm: rejig pageblock mask definitions
mm/page_alloc: Invert is_check_pages_enabled() check
mm/page_alloc: remove ifdefs from pindex helpers
mm: introduce freetype_t
mm/asi: encode sensitivity in freetypes and pageblocks
mm/page_alloc_test: unit test pindex helpers
x86/mm/pat: introduce cpa_fault option
mm/page_alloc: rename ALLOC_NON_BLOCK back to _HARDER
mm/page_alloc: introduce ALLOC_NOBLOCK
mm/slub: defer application of gfp_allowed_mask
mm/asi: support changing pageblock sensitivity
mm/asi: bad_page() when ASI mappings are wrong
x86/mm/asi: don't use global pages when ASI enabled
mm: asi_test: smoke test for [non]sensitive page allocs
Documentation/admin-guide/kernel-parameters.txt | 8 +
arch/Kconfig | 13 +
arch/x86/.kunitconfig | 7 +
arch/x86/Kconfig | 8 +
arch/x86/include/asm/asi.h | 19 +
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/set_memory.h | 13 +
arch/x86/mm/Makefile | 3 +
arch/x86/mm/asi.c | 47 ++
arch/x86/mm/asi_test.c | 145 ++++++
arch/x86/mm/init.c | 10 +-
arch/x86/mm/init_64.c | 54 +-
arch/x86/mm/pat/set_memory.c | 118 ++++-
include/linux/asi.h | 19 +
include/linux/gfp.h | 16 +-
include/linux/gfp_types.h | 15 +-
include/linux/mmzone.h | 98 +++-
include/linux/pageblock-flags.h | 24 +-
include/linux/set_memory.h | 8 +
include/trace/events/mmflags.h | 1 +
init/main.c | 1 +
kernel/panic.c | 2 +
kernel/power/snapshot.c | 7 +-
mm/Kconfig | 5 +
mm/Makefile | 1 +
mm/compaction.c | 32 +-
mm/init-mm.c | 3 +
mm/internal.h | 44 +-
mm/mm_init.c | 11 +-
mm/page_alloc.c | 664 +++++++++++++++++-------
mm/page_alloc_test.c | 70 +++
mm/page_isolation.c | 2 +-
mm/page_owner.c | 7 +-
mm/page_reporting.c | 4 +-
mm/show_mem.c | 2 +-
mm/slub.c | 4 +-
36 files changed, 1205 insertions(+), 281 deletions(-)
---
base-commit: bf2602a3cb2381fb1a04bf1c39a290518d2538d1
change-id: 20250923-b4-asi-page-alloc-74b5383a72fc
Best regards,
--
Brendan Jackman <jackmanb@google.com>
On Wed Sep 24, 2025 at 2:59 PM UTC, Brendan Jackman wrote:
> base-commit: bf2602a3cb2381fb1a04bf1c39a290518d2538d1

I forgot to mention that this is based on linux-next from 2025-09-22.

I have pushed this series here:

https://github.com/bjackman/linux/tree/asi/direct-map-v1

And I'll be keeping this branch up-to-date between [PATCH] revisions as
I respond to feedback (I've already pushed fixes for the build failures
identified by the bot):

https://github.com/bjackman/linux/tree/asi/direct-map

Also, someone pointed out that this post doesn't explain what ASI
actually is. This information is all online if you chase my references,
but so people don't have to do that, I will add something to
Documentation/ for v2.

For the benefit of anyone reading this version who isn't already
familiar with ASI, I'm pasting my draft below. Let me know if I can
clarify anything here.

Cheers,
Brendan

---

=============================
Address Space Isolation (ASI)
=============================

.. Warning:: ASI is incomplete. It is available to enable for testing
   but doesn't offer security guarantees. See the "Status" section for
   details.

Introduction
============

ASI is a mechanism to mitigate a broad class of CPU vulnerabilities.
While the precise scope of these vulnerabilities is complex, ASI, when
appropriately configured, mitigates most well-known CPU exploits.

This class of vulnerabilities could be mitigated by the following
*blanket mitigation*:

1. Remove all potentially secret data from the attacker's address space
   (i.e. enable PTI).

2. Disable SMT.

3. Whenever transitioning from an untrusted domain (i.e. a userspace
   process or a KVM guest) into a potential victim domain (in this case,
   the kernel), clear all state from the branch predictor.

4. Whenever transitioning from the victim domain into an untrusted
   domain, clear all microarchitectural state that might be exploited to
   leak data via a side channel (e.g. L1D$, load and store buffers, etc).

The performance overhead of this mitigation is unacceptable for most
use-cases. In the abstract, ASI works by doing these things, but only
*selectively*.

What ASI does
=============

Memory is divided into *sensitive* and *nonsensitive* memory. Sensitive
memory refers to memory that might contain data the kernel is obliged to
protect from an attacker. Specifically, this includes any memory that
might contain user data or could be indirectly used to steal user data
(such as keys). All other memory is nonsensitive.

A new address space, called the *restricted address space*, is
introduced, where sensitive memory is not mapped. The "normal" address
space where everything is mapped (equivalent to the address space used
by the kernel when ASI is disabled) is called the *unrestricted address
space*.

When the CPU enters the kernel, it does so in the restricted address
space (no sensitive memory mapped). If the kernel accesses sensitive
memory, it triggers a page fault. In this page fault handler, the kernel
transitions from the restricted to the unrestricted address space. At
this point, a security boundary is crossed: just before the transition,
the kernel flushes branch predictor state as it would in point 3 of the
blanket mitigation above. Furthermore, SMT is disabled (the sibling
hyperthread is paused).

.. Note:: Because the restricted -> unrestricted transition is triggered
   by a page fault, it is totally automatic and transparent to the rest
   of the kernel. Kernel code is not generally aware of memory
   sensitivity.
Before returning to the untrusted domain, the kernel transitions back to
the restricted address space. Immediately afterwards, it flushes any
potential side-channels, like in step 4 of the blanket mitigation above.
At this point SMT is also re-enabled.

Why it works
============

In terms of security, this is equivalent to the blanket mitigation.
However, instead of doing these expensive things on every transition
into and out of the kernel, ASI does them only on transitions between
its address spaces. Most entries to the kernel do not require access to
any sensitive data. This means that a roundtrip can be performed without
doing any of the flushes mentioned above.

This selectivity means that much more aggressive mitigation techniques
are available for a dramatically reduced performance cost. In turn,
these more aggressive techniques tend to be more generic. For example,
instead of needing to develop new microarchitecture-specific techniques
to efficiently eliminate attacker "mistraining", ASI makes it viable to
just use generic flush operations like IBPB.

Status
======

ASI is currently still in active development. None of the features
described above actually work yet.

Prototypes only exist for ASI on x86 and in its initial development it
will remain x86-specific. This is not fundamental to its design; it
could eventually be extended for other architectures too as needed.

Resources
=========

* Presentation at LSF/MM/BPF 2024, introducing ASI:
  https://www.youtube.com/watch?v=DxaN6X_fdlI

* RFCs on LKML:

  * `Junaid Shahid, 2022 <https://lore.kernel.org/all/20220223052223.1202152-1-junaids@google.com/>`__
  * `Brendan Jackman, 2025 <https://lore.kernel.org/linux-mm/20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>`__
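To make the transition flow described in "What ASI does" concrete, here
is a toy sketch of the sequence. Every name and detail in it is an
illustrative stand-in of my own, not the actual ASI code or API:

/*
 * Illustrative sketch only: models the control flow described in the
 * draft documentation above as a self-contained userspace program.
 */
#include <stdbool.h>
#include <stdio.h>

static bool restricted = true;   /* which address space the CPU is using */

/* Stand-ins for the expensive operations of the "blanket mitigation". */
static void flush_branch_predictor(void) { puts("IBPB-style flush"); }
static void flush_sidechannels(void)     { puts("L1D/buffer flush"); }
static void pause_sibling(void)          { puts("SMT sibling paused"); }
static void resume_sibling(void)         { puts("SMT sibling resumed"); }

/* Fault on access to unmapped (sensitive) memory: go unrestricted. */
static void page_fault_on_sensitive_access(void)
{
	if (restricted) {
		flush_branch_predictor();  /* step 3 of the blanket mitigation */
		pause_sibling();
		restricted = false;        /* switch to the unrestricted space */
	}
}

/* Just before returning to userspace / the guest: go restricted again. */
static void return_to_untrusted(void)
{
	if (!restricted) {
		restricted = true;         /* switch back to the restricted space */
		flush_sidechannels();      /* step 4 of the blanket mitigation */
		resume_sibling();
	}
}

int main(void)
{
	/* A kernel entry that touches sensitive data pays the full cost... */
	page_fault_on_sensitive_access();
	return_to_untrusted();

	/* ...while an entry that never touches sensitive data pays none. */
	return_to_untrusted();
	return 0;
}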
On 9/24/25 07:59, Brendan Jackman wrote:
> As per [0] I think ASI is ready to start merging. This is the first
> step. The scope of this series is: everything needed to set up the
> direct map in the restricted address spaces.

Brendan!

Generally, we ask that patches get review tags before we consider them
for being merged. Is there a reason this series doesn't need reviews
before it gets merged?
On Wed, Oct 01, 2025 at 12:54:42PM -0700, Dave Hansen wrote:
> On 9/24/25 07:59, Brendan Jackman wrote:
> > As per [0] I think ASI is ready to start merging. This is the first
> > step. The scope of this series is: everything needed to set up the
> > direct map in the restricted address spaces.
>
> Brendan!
>
> Generally, we ask that patches get review tags before we consider them
> for being merged. Is there a reason this series doesn't need reviews
> before it gets merged?

I think Brendan just meant that this is not an RFC aimed at prompting
discussion anymore, these are fully functional patches aimed at being
merged after they are reviewed and iterated on accordingly.
On 10/1/25 13:22, Yosry Ahmed wrote:
> On Wed, Oct 01, 2025 at 12:54:42PM -0700, Dave Hansen wrote:
>> On 9/24/25 07:59, Brendan Jackman wrote:
>>> As per [0] I think ASI is ready to start merging. This is the first
>>> step. The scope of this series is: everything needed to set up the
>>> direct map in the restricted address spaces.
>> Brendan!
>>
>> Generally, we ask that patches get review tags before we consider them
>> for being merged. Is there a reason this series doesn't need reviews
>> before it gets merged?
> I think Brendan just meant that this is not an RFC aimed at prompting
> discussion anymore, these are fully functional patches aimed at being
> merged after they are reviewed and iterated on accordingly.

Just setting expectations ... I think Brendan has probably rewritten
this two or three times. I suggest he's about halfway done; only two or
three rewrites left. ;)

But, seriously, this _is_ a big deal. It's not going to be something
that gets a few tags slapped on it and gets merged. At least that's not
how I expect it to go.
On Wed Oct 1, 2025 at 8:30 PM UTC, Dave Hansen wrote:
> On 10/1/25 13:22, Yosry Ahmed wrote:
>> On Wed, Oct 01, 2025 at 12:54:42PM -0700, Dave Hansen wrote:
>>> On 9/24/25 07:59, Brendan Jackman wrote:
>>>> As per [0] I think ASI is ready to start merging. This is the first
>>>> step. The scope of this series is: everything needed to set up the
>>>> direct map in the restricted address spaces.
>>> Brendan!
>>>
>>> Generally, we ask that patches get review tags before we consider them
>>> for being merged. Is there a reason this series doesn't need reviews
>>> before it gets merged?
>> I think Brendan just meant that this is not an RFC aimed at prompting
>> discussion anymore, these are fully functional patches aimed at being
>> merged after they are reviewed and iterated on accordingly.
>
> Just setting expectations ... I think Brendan has probably rewritten
> this two or three times. I suggest he's about halfway done; only two or
> three rewrites left. ;)

Yeah, I'd love to say "... and we have become exceedingly efficient at
it" [0], but no, debugging my idiotic freelist and pagetable corruptions
was just as hard this time as the first and second times...

[0] https://www.youtube.com/watch?v=r51EomcIqA0

> But, seriously, this _is_ a big deal. It's not going to be something
> that gets a few tags slapped on it and gets merged. At least that's not
> how I expect it to go.

Yeah, sorry if this was poorly worded, I'm DEFINITELY not asking anyone
to merge this without the requisite acks - "ready for merge" just means
"please review this as real grown-up code, I no longer consider this a
PoC". And I'm not expecting this to get merged in v2 either :)

Maybe worth noting here: there are two broad parties of important
reviewers - mm folks and x86 folks. I think we're at risk of a
chicken-and-egg problem where party A is thinking "no point in reviewing
this too carefully, it's not yet clear that party B is ever gonna accept
ASI even in theory". Meanwhile party B says "yeah ASI seems desirable,
but I'll keep my nose out until party A has ironed out the details on
their side".

So, if you can do anything to help develop a consensus on whether we
actually want this thing, that would help a lot. Maybe the best way to
do that is just to dig into the details anyway, I'm not sure.
On Wed, Sep 24, 2025 at 02:59:35PM +0000, Brendan Jackman wrote:
> As per [0] I think ASI is ready to start merging. This is the first
> step. The scope of this series is: everything needed to set up the
> direct map in the restricted address spaces.

There looks to be a different approach taken by other folks to yank the
guest pages from the hypervisor:

https://lore.kernel.org/kvm/20250912091708.17502-1-roypat@amazon.co.uk/

That looks to have a very similar end result with less changes?
On Tue, 30 Sept 2025 at 21:51, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>
> On Wed, Sep 24, 2025 at 02:59:35PM +0000, Brendan Jackman wrote:
> > As per [0] I think ASI is ready to start merging. This is the first
> > step. The scope of this series is: everything needed to set up the
> > direct map in the restricted address spaces.
>
> There looks to be a different approach taken by other folks to
> yank the guest pages from the hypervisor:
>
> https://lore.kernel.org/kvm/20250912091708.17502-1-roypat@amazon.co.uk/
>
> That looks to have a very similar end result with less changes?

Hey Konrad,

Yeah if you only care about the security boundary around VM guests, and
you're able to rework your hypervisor stack appropriately (I don't know
too much about this but presumably it's just a subset of what's needed
to support confidential computing usecases?), that approach seems good
to me.

But that isn't true for most of Linux's users. We still need to support
systems where there is a meaningful security boundary around native
processes. Also, unless I'm mistaken Patrick's approach will always
require changes to the VMM, I don't think the kernel can just tell all
users to go and make those changes.

Basically: I support that approach, it's a good idea. It just solves a
different set of problems. (I haven't thought about it carefully but I
guess it solves some problems that ASI doesn't, since I guess it
prevents some set of software exploits too, while ASI only helps with
HW vulns).

Cheers,
Brendan
On 9/24/25 07:59, Brendan Jackman wrote:
> Why is this the scope of the first series? The objective here is to
> reach a MVP of ASI that people can actually run, as soon as possible.

I had to ask ChatGPT what you meant by MVP. Minimum Viable Product?

So this series just creates a new address space and then ensures that
sensitive data is not mapped there? To me, that's a proof-of-concept,
not a bit of valuable functionality that can be merged upstream.

I'm curious how far the first bit of functionality that would be useful
to end users is from the end of this series.
On Wed Oct 1, 2025 at 8:59 PM UTC, Dave Hansen wrote:
> On 9/24/25 07:59, Brendan Jackman wrote:
>> Why is this the scope of the first series? The objective here is to
>> reach a MVP of ASI that people can actually run, as soon as possible.
>
> I had to ask ChatGPT what you meant by MVP. Minimum Viable Product?

Yeah exactly, sorry I am leaking corporate jargon.

> So this series just creates a new address space and then ensures that
> sensitive data is not mapped there? To me, that's a proof-of-concept,
> not a bit of valuable functionality that can be merged upstream.
>
> I'm curious how far the first bit of functionality that would be useful
> to end users is from the end of this series.

I think this series is about half way there. With 2 main series:

1. The bit to get the pagetables set up (this series)
2. The bit to switch in and out of the address space

We already have something that delivers security value. It would only
perform well for a certain set of usecases, but there are users for whom
it's still a win - it's already strictly cheaper than IBPB-on-VMExit.

[Well, I'm assuming there that we include the actual security flushes in
series 2, maybe that would be more like "2b"...]

To get to the more interesting cases where it's faster than the current
default, I think is not that far away for KVM usecases. I think the
branch I posted in my [Discuss] thread[0] gets competitive with existing
KVM usecases well before it devolves into the really hacky prototype
stuff.

To get to the actual goal, where ASI can become the global default (i.e.
it's still fast when you sandbox native tasks as well as KVM guests), is
further since we need to figure out the details on something like what I
called the "ephmap" in [0].

There are competing tensions here - we would prefer not to merge code
that "doesn't do anything", but on the other hand I don't think anyone
wants to find themselves receiving [PATCH v34 19/40] next July... so
I've tried to strike a balance here. Something like:

1. Develop a consensus that "we probably want ASI and it's worth trying"

2. Start working towards it in-tree, by breaking it down into smaller
   chunks.

Do you think it would help if I started also maintaining an asi-next
branch with the next few things all queued up and benchmarked, so we can
get a look at the "goal state" while also keeping an eye on the here and
now? Or do you have other suggestions for the strategy here?

[0] https://lore.kernel.org/all/20250812173109.295750-1-jackmanb@google.com/
On 10/2/25 04:23, Brendan Jackman wrote:
...
> [Well, I'm assuming there that we include the actual security flushes in
> series 2, maybe that would be more like "2b"...]
>
> To get to the more interesting cases where it's faster than the current
> default, I think is not that far away for KVM usecases. I think the
> branch I posted in my [Discuss] thread[0] gets competitive with existing
> KVM usecases well before it devolves into the really hacky prototype
> stuff.
>
> To get to the actual goal, where ASI can become the global default (i.e.
> it's still fast when you sandbox native tasks as well as KVM guests), is
> further since we need to figure out the details on something like what I
> called the "ephmap" in [0].
>
> There are competing tensions here - we would prefer not to merge code
> that "doesn't do anything", but on the other hand I don't think anyone
> wants to find themselves receiving [PATCH v34 19/40] next July... so
> I've tried to strike a balance here. Something like:
>
> 1. Develop a consensus that "we probably want ASI and it's worth trying"
>
> 2. Start working towards it in-tree, by breaking it down into smaller
> chunks.

Just to be clear: we don't merge code that doesn't do anything
functional. The bar for inclusion is that it has to do something
practical and useful for end users. It can't be purely infrastructure or
preparatory.

Protection keys is a good example. It was a big, gnarly series that
could be roughly divided into two pieces: one that did all the page
table gunk, and all the new ABI bits around exposing pkeys to apps. But
we found a way to do all the page table gunk with no new ABI and that
also gave security folks something they wanted: execute_only_pkey().

So we merged all the page table and internal gunk first, and then the
new ABI a release or two later.

But the important part was that it had _some_ functionality from day one
when it was merged. It wasn't purely infrastructure.

> Do you think it would help if I started also maintaining an asi-next
> branch with the next few things all queued up and benchmarked, so we can
> get a look at the "goal state" while also keeping an eye on the here and
> now? Or do you have other suggestions for the strategy here?

Yes, I think that would be useful.

For instance, imagine you'd had that series sitting around:
6.16-asi-next. Then, all of a sudden you see the vmscape series[1] show
up. Ideally, you'd take your 6.16-asi-next branch and show us how much
simpler and faster it is to mitigate vmscape with ASI instead of the
IBPB silliness that we ended up with.

Basically, use your asi-next branch to bludgeon us each time we _should_
have been using it.

It's also not too late. You could still go back and do that analysis for
vmscape. It's fresh enough in our minds to matter.

1. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=223ba8ee0a3986718c874b66ed24e7f87f6b8124
On Thu Oct 2, 2025 at 5:01 PM UTC, Dave Hansen wrote:
> On 10/2/25 04:23, Brendan Jackman wrote:
> ...
>> [Well, I'm assuming there that we include the actual security flushes in
>> series 2, maybe that would be more like "2b"...]
>>
>> To get to the more interesting cases where it's faster than the current
>> default, I think is not that far away for KVM usecases. I think the
>> branch I posted in my [Discuss] thread[0] gets competitive with existing
>> KVM usecases well before it devolves into the really hacky prototype
>> stuff.
>>
>> To get to the actual goal, where ASI can become the global default (i.e.
>> it's still fast when you sandbox native tasks as well as KVM guests), is
>> further since we need to figure out the details on something like what I
>> called the "ephmap" in [0].
>>
>> There are competing tensions here - we would prefer not to merge code
>> that "doesn't do anything", but on the other hand I don't think anyone
>> wants to find themselves receiving [PATCH v34 19/40] next July... so
>> I've tried to strike a balance here. Something like:
>>
>> 1. Develop a consensus that "we probably want ASI and it's worth trying"
>>
>> 2. Start working towards it in-tree, by breaking it down into smaller
>> chunks.
>
> Just to be clear: we don't merge code that doesn't do anything
> functional. The bar for inclusion is that it has to do something
> practical and useful for end users. It can't be purely infrastructure or
> preparatory.
>
> Protection keys is a good example. It was a big, gnarly series that
> could be roughly divided into two pieces: one that did all the page
> table gunk, and all the new ABI bits around exposing pkeys to apps. But
> we found a way to do all the page table gunk with no new ABI and that
> also gave security folks something they wanted: execute_only_pkey().
>
> So we merged all the page table and internal gunk first, and then the
> new ABI a release or two later.
>
> But the important part was that it had _some_ functionality from day one
> when it was merged. It wasn't purely infrastructure.

OK thanks, after our IRC chat I understand this now. So in the case of
pkeys I guess the internal gunk didn't "do anything" per se but it was a
clear improvement in the code in its own right. So I'll look for a way
to split out the preparatory stuff to be more like that.

And then I'll try to get a single patchset that goes from "no ASI" to
"ASI that does _something_ useful". I think it's inevitable that this
will still be rather on the large side but I'll do my best.

>> Do you think it would help if I started also maintaining an asi-next
>> branch with the next few things all queued up and benchmarked, so we can
>> get a look at the "goal state" while also keeping an eye on the here and
>> now? Or do you have other suggestions for the strategy here?
>
> Yes, I think that would be useful.
>
> For instance, imagine you'd had that series sitting around:
> 6.16-asi-next. Then, all of a sudden you see the vmscape series[1] show
> up. Ideally, you'd take your 6.16-asi-next branch and show us how much
> simpler and faster it is to mitigate vmscape with ASI instead of the
> IBPB silliness that we ended up with.
>
> Basically, use your asi-next branch to bludgeon us each time we _should_
> have been using it.
>
> It's also not too late. You could still go back and do that analysis for
> vmscape. It's fresh enough in our minds to matter.
>
> 1. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=223ba8ee0a3986718c874b66ed24e7f87f6b8124

And yep, I'll take a look at this too. Thanks very much for taking a
look and for all of the valuable pointers.
On 01.10.25 22:59, Dave Hansen wrote:
> On 9/24/25 07:59, Brendan Jackman wrote:
>> Why is this the scope of the first series? The objective here is to
>> reach a MVP of ASI that people can actually run, as soon as possible.
>
> I had to ask ChatGPT what you meant by MVP. Minimum Viable Product?
>
> So this series just creates a new address space and then ensures that
> sensitive data is not mapped there? To me, that's a proof-of-concept,
> not a bit of valuable functionality that can be merged upstream.
>
> I'm curious how far the first bit of functionality that would be useful
> to end users is from the end of this series.

There was this mail "[Discuss] First steps for ASI (ASI is fast
again)"[1] that I also didn't get to fully digest yet, where there was a
question at the very end:

"
Once we have some x86 maintainers saying "yep, it looks like this can
work and it's something we want", I can start turning my page_alloc RFC
[3] into a proper patchset (or maybe multiple if I can find a way to
break things down further).

...

So, x86 folks: Does this feel like "line of sight" to you? If not, what
would that look like, what experiments should I run?
"

Unless I am missing something, no x86 maintainer replied to that one so
far, and I assume this patch set here is the revival of the above
mentioned RFC, so it might be reasonable to reply there.

[1] https://lore.kernel.org/all/20250812173109.295750-1-jackmanb@google.com/T/#u

--
Cheers

David / dhildenb