During the discussion of the clone3() support for shadow stacks, concerns were raised from the glibc side that it is not possible to reuse the allocated shadow stack[1]. This means that the benefit of being able to manage allocations is greatly reduced; for example, it is not possible to integrate the shadow stacks into the glibc thread stack cache. The stack can be inspected, but otherwise it would have to be unmapped and remapped before it could be used again, and it's not clear that this is better than managing things in the kernel.

In that discussion I suggested that we could enable reuse by writing a token to the shadow stack of exiting threads, mirroring how the userspace stack pivot instructions write a token to the outgoing stack. As mentioned by Florian[2], glibc already unwinds the stack and exits the thread from the start routine, which would integrate nicely with this: the shadow stack pointer will be at the same place as it was when the thread started.

This would not write a token if the thread doesn't exit cleanly. That seems viable to me - users should probably handle this by double checking that a token is present after waiting for the thread.

This is tagged as a RFC since I put it together fairly quickly to demonstrate the proposal and the suggestion hasn't had much response either way from the glibc developers. At the very least we don't currently handle scheduling during exit(), or distinguish why the thread is exiting. I've also not done anything about x86.

[1] https://marc.info/?l=glibc-alpha&m=175821637429537&w=2
[2] https://marc.info/?l=glibc-alpha&m=175733266913483&w=2

Signed-off-by: Mark Brown <broonie@kernel.org>
---
Mark Brown (3):
      arm64/gcs: Support reuse of GCS for exited threads
      kselftest/arm64: Validate PR_SHADOW_STACK_EXIT_TOKEN in basic-gcs
      kselftest/arm64: Add PR_SHADOW_STACK_EXIT_TOKEN to gcs-locking

 arch/arm64/include/asm/gcs.h                    |   3 +-
 arch/arm64/mm/gcs.c                             |  25 ++++-
 include/uapi/linux/prctl.h                      |   1 +
 tools/testing/selftests/arm64/gcs/basic-gcs.c   | 121 ++++++++++++++++++++++++
 tools/testing/selftests/arm64/gcs/gcs-locking.c |  23 +++++
 tools/testing/selftests/arm64/gcs/gcs-util.h    |   3 +-
 6 files changed, 173 insertions(+), 3 deletions(-)
---
base-commit: 0b67d4b724b4afed2690c21bef418b8a803c5be2
change-id: 20250919-arm64-gcs-exit-token-82c3c2570aad
prerequisite-change-id: 20231019-clone3-shadow-stack-15d40d2bf536

Best regards,
--
Mark Brown <broonie@kernel.org>
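To make the intended usage concrete, here is a rough userspace sketch of how a thread stack cache might consume this. It is only an illustration of the proposal: the PR_SHADOW_STACK_EXIT_TOKEN value, the helper names, and the token encoding checked below (the slot's own address plus a low tag, mirroring the existing initial cap format) are all assumptions rather than settled ABI.

#include <stdbool.h>
#include <stdint.h>
#include <sys/prctl.h>

/* Values from include/uapi/linux/prctl.h, in case libc headers are older. */
#ifndef PR_SET_SHADOW_STACK_STATUS
#define PR_SET_SHADOW_STACK_STATUS	75
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)
#endif
/* Proposed by this series; the value here is a placeholder assumption. */
#ifndef PR_SHADOW_STACK_EXIT_TOKEN
#define PR_SHADOW_STACK_EXIT_TOKEN	(1UL << 3)
#endif

/* Ask for GCS with exit tokens written for cleanly exiting threads. */
static int enable_gcs_exit_tokens(void)
{
	return prctl(PR_SET_SHADOW_STACK_STATUS,
		     PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_EXIT_TOKEN,
		     0, 0, 0);
}

/*
 * After waiting for the thread, check that a token was left where the
 * thread's shadow stack pointer started out.  The encoding checked here
 * (slot address in the upper bits, 0x1 tag in the low bits) is an assumed
 * mirror of the initial cap token format.
 */
static bool gcs_exit_token_present(const uint64_t *token_slot)
{
	uint64_t token = *token_slot;

	return (token & ~0xfffULL) == ((uintptr_t)token_slot & ~0xfffULL) &&
	       (token & 0xfffULL) == 0x1;
}

A stack cache would call gcs_exit_token_present() after joining the thread and only recycle the GCS if it returns true, falling back to unmapping and remapping (or discarding the stack) otherwise.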
On Sun, 2025-09-21 at 14:21 +0100, Mark Brown wrote:
> During the discussion of the clone3() support for shadow stacks, concerns
> were raised from the glibc side that it is not possible to reuse the
> allocated shadow stack[1]. This means that the benefit of being able to
> manage allocations is greatly reduced, for example it is not possible to
> integrate the shadow stacks into the glibc thread stack cache. The stack
> can be inspected but otherwise it would have to be unmapped and remapped
> before it could be used again, it's not clear that this is better than
> managing things in the kernel.
>
> In that discussion I suggested that we could enable reuse by writing a
> token to the shadow stack of exiting threads, mirroring how the userspace
> stack pivot instructions write a token to the outgoing stack. As mentioned
> by Florian[2] glibc already unwinds the stack and exits the thread from
> the start routine which would integrate nicely with this, the shadow stack
> pointer will be at the same place as it was when the thread started.
>
> This would not write a token if the thread doesn't exit cleanly, that
> seems viable to me - users should probably handle this by double checking
> that a token is present after waiting for the thread.
>
> This is tagged as a RFC since I put it together fairly quickly to
> demonstrate the proposal and the suggestion hasn't had much response
> either way from the glibc developers. At the very least we don't currently
> handle scheduling during exit(), or distinguish why the thread is exiting.
> I've also not done anything about x86.

Security-wise, it seems reasonable that if you are leaving a shadow stack you could leave a token behind. But for the userspace scheme to back up the SSP by doing a longjmp() or similar I have some doubts. IIRC there were some cross stack edge cases that we never figured out how to handle.

As far as re-using allocated shadow stacks, there is always the option to enable WRSS (or similar) to write the shadow stack as well as longjmp at will.

I think we should see a fuller solution from the glibc side before adding new kernel features like this (apologies if I missed it). I wonder if we are building something that will have an extremely complicated set of rules for what types of stack operations should be expected to work.

Sort of related, I think we might think about msealing shadow stacks, which will have trouble with a lot of these user managed shadow stack schemes. The reason is that as long as shadow stacks can be unmapped while a thread is on them (say a sleeping thread), a new shadow stack can be allocated in the same place with a token. Then a second thread can consume the token and possibly corrupt the shadow stack for the other thread with its own calls. I don't know how realistic it is in practice, but it's something that guard gaps can't totally prevent.

But for automatic thread created shadow stacks, there is no need to allow userspace to unmap a shadow stack, so the automatically created stacks could simply be msealed on creation and unmapped from the kernel. For a lot of apps (most?) this would work perfectly fine.

I think we don't want 100 modes of shadow stack. If we have two, I'd think:

1. Msealed, simple, more locked down kernel allocated shadow stack. Limited or no user space managed shadow stacks.
2. WRSS enabled, clone3-preferred, max compatibility shadow stack. Longjmp works via token writes, and we don't even have to think about taking signals while unwinding across stacks, or whatever other edge cases.

This RFC seems to be going down the path of addressing one edge case at a time. Alone it's fine, but I'd rather punt these types of usages to (2) by default.

Thoughts?
Hi,

On Thu, Sep 25, 2025 at 08:40:56PM +0000, Edgecombe, Rick P wrote:
> On Sun, 2025-09-21 at 14:21 +0100, Mark Brown wrote:
> > During the discussion of the clone3() support for shadow stacks, concerns
> > were raised from the glibc side that it is not possible to reuse the
> > allocated shadow stack[1]. This means that the benefit of being able
> > ...
>
> Security-wise, it seems reasonable that if you are leaving a shadow stack
> you could leave a token behind. But for the userspace scheme to back up
> the SSP by doing a longjmp() or similar I have some doubts. IIRC there
> were some cross stack edge cases that we never figured out how to handle.
>
> As far as re-using allocated shadow stacks, there is always the option to
> enable WRSS (or similar) to write the shadow stack as well as longjmp at
> will.
>
> I think we should see a fuller solution from the glibc side before adding
> new kernel features like this (apologies if I missed it).

What do you mean by "a fuller solution from the glibc side"? A solution for re-using shadow stacks?

Right now glibc cannot do anything about shadow stacks for new threads because the clone3 interface doesn't allow it.

Thanks,
Yury
On Fri, 2025-09-26 at 16:07 +0100, Yury Khrustalev wrote: > > I think we should see a fuller solution from the glibc side before > > adding new > > kernel features like this. (apologies if I missed it). > > What do you mean by "a fuller solution from the glibc side"? A > solution > for re-using shadow stacks? I mean some code or a fuller explained solution that uses this new kernel functionality. I think the scheme that Florian suggested in the thread linked above (longjmp() to the start of the stack) will have trouble if the thread pivots to a new shadow stack before exiting (e.g. ucontext). > Right now Glibc cannot do anything about > shadow stacks for new threads because clone3 interface doesn't allow > it. If you enable WRSS (or the arm equivalent) you can re-use shadow stacks today by writing a token.
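For comparison, the reuse path that exists today without any shadow stack write instruction is the unmap-and-remap dance mentioned in the cover letter. A minimal sketch, assuming the mainline uapi values for map_shadow_stack() (error handling and the arm64 end-marker flag are omitted):

#include <stddef.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack	453
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN	(1ULL << 0)	/* place a token at the top */
#endif

/*
 * Recycle a shadow stack without WRSS/GCSSTR: drop the old mapping and ask
 * the kernel for a fresh one of the same size with a token already written.
 * The kernel picks the new address, so the caller has to cope with the
 * stack moving - which is exactly the cost the exit token proposal avoids.
 */
static void *recycle_shadow_stack(void *old, size_t size)
{
	if (old)
		munmap(old, size);

	return (void *)syscall(__NR_map_shadow_stack, 0, size,
			       SHADOW_STACK_SET_TOKEN);
}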
On Fri, Sep 26, 2025 at 03:39:46PM +0000, Edgecombe, Rick P wrote: > On Fri, 2025-09-26 at 16:07 +0100, Yury Khrustalev wrote: > > What do you mean by "a fuller solution from the glibc side"? A > > solution > > for re-using shadow stacks? > I mean some code or a fuller explained solution that uses this new > kernel functionality. I think the scheme that Florian suggested in the > thread linked above (longjmp() to the start of the stack) will have > trouble if the thread pivots to a new shadow stack before exiting (e.g. > ucontext). Is that supported even without user managed stacks?
On Fri, 2025-09-26 at 17:03 +0100, Mark Brown wrote:
> On Fri, Sep 26, 2025 at 03:39:46PM +0000, Edgecombe, Rick P wrote:
> > On Fri, 2025-09-26 at 16:07 +0100, Yury Khrustalev wrote:
> > > What do you mean by "a fuller solution from the glibc side"? A
> > > solution for re-using shadow stacks?
> >
> > I mean some code or a fuller explained solution that uses this new
> > kernel functionality. I think the scheme that Florian suggested in the
> > thread linked above (longjmp() to the start of the stack) will have
> > trouble if the thread pivots to a new shadow stack before exiting
> > (e.g. ucontext).
>
> Is that supported even without user managed stacks?

IIUC longjmp() is sometimes used to implement user level thread switching. So for non-shadow stack my understanding is that longjmp() between user level threads is supported.

For shadow stack, I think user level threads are intended to be supported, so I don't see why a thread could never exit from another shadow stack. Is that what you are asking? Maybe we need to discuss this with the glibc folks.

What I read in that thread you linked was that Florian was considering using longjmp() as a way to inherit the solution to these unwinding problems:

    Not sure if it helps, but glibc always calls the exit system call
    from the start routine, after unwinding the stack (with longjmp if
    necessary).

    https://marc.info/?l=glibc-alpha&m=175733266913483&w=2

Reading into the "...if necessary" a bit on my part. But if longjmp() doesn't work to unwind from a non-thread shadow stack, then my question would be how glibc will deal with an app that exits while on another shadow stack. Just declare that shadow stack re-use is not supported with any user shadow stack pivoting? Does it need a new elf header bit then?

Again, I'm just thinking that we should vet the solution a bit more before actually adding this to the kernel. If the combined shadow stack effort eventually throws its hands up in frustration and goes with WRSS/GCSSTR for apps that want to do more advanced threading patterns, then we are already done on the kernel side.
On Fri, Sep 26, 2025 at 07:17:16PM +0000, Edgecombe, Rick P wrote: > On Fri, 2025-09-26 at 17:03 +0100, Mark Brown wrote: > > On Fri, Sep 26, 2025 at 03:39:46PM +0000, Edgecombe, Rick P wrote: > > > > What do you mean by "a fuller solution from the glibc side"? A > > > > solution > > > > for re-using shadow stacks? > > > I mean some code or a fuller explained solution that uses this new > > > kernel functionality. I think the scheme that Florian suggested in > > > the > > > thread linked above (longjmp() to the start of the stack) will have > > > trouble if the thread pivots to a new shadow stack before exiting > > > (e.g. > > > ucontext). > > Is that supported even without user managed stacks? > IIUC longjmp() is sometimes used to implement user level thread > switching. So for non-shadow stack my understanding is that longjmp() > between user level threads is supported. Yes, that was more a "does it actually work?" query than a "might someone reasonably want to do this?". > For shadow stack, I think user level threads are intended to be > supported. So I don't see why a thread could never exit from another > shadow stack? Is that what you are asking? Maybe we need to discuss > this with the glibc folks. There's a selection of them directly on this thread, and libc-alpha on CC, so hopefully they'll chime in. Unless I'm getting confused the code does appear to be doing an unwind, there's a lot of layers there so I might've got turned around though. > Again, I'm just thinking that we should vet the solution a bit more > before actually adding this to the kernel. If the combined shadow stack > effort eventually throws its hands up in frustration and goes with > WRSS/GCSSTR for apps that want to do more advanced threading patterns, > then we are already done on the kernel side. Some more input from the glibc side would indeed be useful, I've not seen any thoughts on either this series or just turning on writability both of which are kind of orthogonal to clone3() (which has been dropped from -next today).
On Thu, Sep 25, 2025 at 08:40:56PM +0000, Edgecombe, Rick P wrote:

> Security-wise, it seems reasonable that if you are leaving a shadow stack
> you could leave a token behind. But for the userspace scheme to back up
> the SSP by doing a longjmp() or similar I have some doubts. IIRC there
> were some cross stack edge cases that we never figured out how to handle.

I think those were around the use of alt stacks, which we don't currently do for shadow stacks at all because we couldn't figure out those edge cases. Possibly there's others as well, though - the alt stacks issues dominated discussion a bit.

AFAICT those issues exist anyway; if userspace is already unwinding as part of thread exit then they'll exercise that code, though perhaps be saved from any issues by virtue of not actually doing any function calls. Anything that actually does a longjmp() with the intent to continue will do so more thoroughly.

> As far as re-using allocated shadow stacks, there is always the option to
> enable WRSS (or similar) to write the shadow stack as well as longjmp at
> will.

That's obviously a substantial downgrade in security though.

> I think we should see a fuller solution from the glibc side before adding
> new kernel features like this (apologies if I missed it). I wonder if we
> are

I agree that we want to see some userspace code here, I'm hoping this can be used for prototyping. Yury has some code for the clone3() part of things in glibc on arm64 already, hopefully that can be extended to include the shadow stack in the thread stack cache.

> building something that will have an extremely complicated set of rules
> for what types of stack operations should be expected to work.

I think restricted more than complex?

> Sort of related, I think we might think about msealing shadow stacks,
> which will have trouble with a lot of these user managed shadow stack
> schemes. The reason is that as long as shadow stacks can be unmapped while
> a thread is on them (say a sleeping thread), a new shadow stack can be
> allocated in the same place with a token. Then a second thread can consume
> the token and possibly corrupt the shadow stack for the other thread with
> its own calls. I don't know how realistic it is in practice, but it's
> something that guard gaps can't totally prevent.

> But for automatic thread created shadow stacks, there is no need to allow
> userspace to unmap a shadow stack, so the automatically created stacks
> could simply be msealed on creation and unmapped from the kernel. For a
> lot of apps (most?) this would work perfectly fine.

Indeed, we should be able to just do that if we're mseal()ing system mappings I think - most likely anything that has a problem with it probably already has a problem with the existing mseal() stuff. Yet another reason we should be factoring more of this code out into the generic code, like I say I'll try to look at that.

I do wonder if anyone would bother with those attacks if they've got enough control over the process to do them, but equally a lot of this is about how things chain together.

> I think we don't want 100 modes of shadow stack. If we have two, I'd think:
> 1. Msealed, simple, more locked down kernel allocated shadow stack.
> Limited or no user space managed shadow stacks.
> 2. WRSS enabled, clone3-preferred, max compatibility shadow stack. Longjmp
> works via token writes, and we don't even have to think about taking
> signals while unwinding across stacks, or whatever other edge cases.
I think the important thing from a kernel ABI point of view is to give userspace the tools to do whatever it wants and get out of the way, and that ideally this should include options that don't just make the shadow stack writable since that's a substantial step down in protection. That said your option 2 is already supported with the existing clone3() on both arm64 and x86_64, policy for switching between that and kernel managed stacks could be set by restricting the writable stacks flag on the enable prctl(), and/or restricting map_shadow_stack(). > This RFC seems to be going down the path of addressing one edge case at a time. > Alone it's fine, but I'd rather punt these types of usages to (2) by default. For me this is in the category of "oh, of course you should be able to do that" where it feels like an obvious usability thing than an edge case.
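As a sketch of the policy point above: with the existing arm64 prctl() interface a process (or its libc) can enable GCS and then lock out the writability bits, so nothing later in the process can switch to the writable shadow stack mode. The prctl names and values below are from the mainline uapi headers; treating this as the policy mechanism is the suggestion being discussed here rather than settled practice.

#include <sys/prctl.h>

/* Values from include/uapi/linux/prctl.h, in case libc headers are older. */
#ifndef PR_SET_SHADOW_STACK_STATUS
#define PR_SET_SHADOW_STACK_STATUS	75
#define PR_LOCK_SHADOW_STACK_STATUS	76
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)
#define PR_SHADOW_STACK_WRITE		(1UL << 1)
#define PR_SHADOW_STACK_PUSH		(1UL << 2)
#endif

/*
 * Enable the shadow stack, then lock the WRITE and PUSH bits in their
 * current (disabled) state so they can no longer be turned on later.
 */
static int lock_out_writable_shadow_stack(void)
{
	if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE, 0, 0, 0))
		return -1;

	return prctl(PR_LOCK_SHADOW_STACK_STATUS,
		     PR_SHADOW_STACK_WRITE | PR_SHADOW_STACK_PUSH, 0, 0, 0);
}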
On Fri, 2025-09-26 at 00:22 +0100, Mark Brown wrote: > On Thu, Sep 25, 2025 at 08:40:56PM +0000, Edgecombe, Rick P wrote: > > > Security-wise, it seems reasonable that if you are leaving a shadow stack, that > > you could leave a token behind. But for the userspace scheme to back up the SSP > > by doing a longjmp() or similar I have some doubts. IIRC there were some cross > > stack edge cases that we never figured out how to handle. > > I think those were around the use of alt stacks, which we don't > currently do for shadow stacks at all because we couldn't figure out > those edge cases. Possibly there's others as well, though - the alt > stacks issues dominated discussion a bit. For longjmp, IIRC there were some plans to search for a token on the target stack and use it, which seems somewhat at odds with the quick efficient jump that longjmp() gets usually used for. But it also doesn't solve the problem of taking a signal while you are unwinding. Like say you do calls all the way to the end of a shadow stack, and it's about to overflow. Then the thread swaps to another shadow stack. If you longjmp back to the original stack you will have to transition to the end of the first stack as you unwind. If at that point the thread gets a signal, it would overflow the shadow stack. This is a subtle difference in behavior compared to non-shadow stack. You also need to know that nothing else could consume that token on that stack in the meantime. So doing it safely is not nearly as simple as normal longjmp(). Anyway, I think you don't need alt shadow stack to hit that. Just normal userspace threading? > > AFAICT those issues exist anyway, if userspace is already unwinding as > part of thread exit then they'll exercise that code though perhaps be > saved from any issues by virtue of not actually doing any function > calls. Anything that actually does a longjmp() with the intent to > continue will do so more thoroughly. > > > As far as re-using allocated shadow stacks, there is always the option to enable > > WRSS (or similar) to write the shadow stack as well as longjmp at will. > > That's obviously a substantial downgrade in security though. I don't know about substantial, but I'd love to hear some offensive security persons analysis. There definitely was a school of thought though, that shadow stack should be turned on as widely as possible. If we need WRSS to make that happen in a sane way, you could argue there is sort of a security at scale benefit. > > > I think we should see a fuller solution from the glibc side before adding new > > kernel features like this. (apologies if I missed it). I wonder if we are > > I agree that we want to see some userspace code here, I'm hoping this > can be used for prototyping. Yury has some code for the clone3() part > of things in glibc on arm64 already, hopefully that can be extended to > include the shadow stack in the thread stack cache. > > > building something that will have an extremely complicated set of rules for what > > types of stack operations should be expected to work. > > I think restricted more than complex? > > > Sort of related, I think we might think about msealing shadow stacks, which will > > have trouble with a lot of these user managed shadow stack schemes. The reason > > is that as long as shadow stacks can be unmapped while a thread is on them (say > > a sleeping thread), a new shadow stack can be allocated in the same place with a > > token. 
Then a second thread can consume the token and possibly corrupt the > > shadow stack for the other thread with it's own calls. I don't know how > > realistic it is in practice, but it's something that guard gaps can't totally > > prevent. > > > But for automatic thread created shadow stacks, there is no need to allow > > userspace to unmap a shadow stack, so the automatically created stacks could > > simply be msealed on creation and unmapped from the kernel. For a lot of apps > > (most?) this would work perfectly fine. > > Indeed, we should be able to just do that if we're mseal()ing system > mappings I think - most likely anything that has a problem with it > probably already has a problem the existing mseal() stuff. Yet another > reason we should be factoring more of this code out into the generic > code, like I say I'll try to look at that. Agree. But for the mseal stuff, I think you would want to have map_shadow_stack not available. > > I do wonder if anyone would bother with those attacks if they've got > enough control over the process to do them, but equally a lot of this is > about how things chain together. Yea, I don't know. But the guard gaps were added after a suggestion from Jann Horn. This is sort of a similar concern of sharing a shadow stack, but the difference is stack overflow vs controlling args to syscalls. > > > I think we don't want 100 modes of shadow stack. If we have two, I'd think: > > 1. Msealed, simple more locked down kernel allocated shadow stack. Limited or > > none user space managed shadow stacks. > > 2. WRSS enabled, clone3-preferred max compatibility shadow stack. Longjmp via > > token writes and don't even have to think about taking signals while unwinding > > across stacks, or whatever other edge case. > > I think the important thing from a kernel ABI point of view is to give > userspace the tools to do whatever it wants and get out of the way, and > that ideally this should include options that don't just make the shadow > stack writable since that's a substantial step down in protection. Yes I hear that. But also try to avoid creating maintenance issues by adding features that didn't turn out to be useful. It sounds like we agree that we need more proof that this will work out in the long run. > > That said your option 2 is already supported with the existing clone3() > on both arm64 and x86_64, policy for switching between that and kernel > managed stacks could be set by restricting the writable stacks flag on > the enable prctl(), and/or restricting map_shadow_stack(). You mean userspace could already re-use shadow stacks if they enable writable shadow stacks? Yes I agree. > > > This RFC seems to be going down the path of addressing one edge case at a time. > > Alone it's fine, but I'd rather punt these types of usages to (2) by default. > > For me this is in the category of "oh, of course you should be able to > do that" where it feels like an obvious usability thing than an edge > case. True. I guess I was thinking more about the stack unwinding. Badly phrased, sorry.
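As an aside on the msealing idea above: userspace can already experiment with that trade-off for its own user-managed stacks by sealing a map_shadow_stack() allocation immediately after creating it, which rules out the unmap/remap reuse schemes discussed in this thread. A rough sketch, assuming the mainline syscall numbers (mseal() needs Linux 6.10 or later); the kernel-side sealing of automatically created stacks described above would be separate work.

#include <stddef.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack	453
#endif
#ifndef __NR_mseal
#define __NR_mseal		462
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN	(1ULL << 0)
#endif

/*
 * Illustration of the locked-down end of the spectrum: a user allocated
 * shadow stack that is sealed right after creation so it can never be
 * unmapped or replaced from userspace.
 */
static void *alloc_sealed_shadow_stack(size_t size)
{
	void *gcs = (void *)syscall(__NR_map_shadow_stack, 0, size,
				    SHADOW_STACK_SET_TOKEN);

	if (gcs == (void *)-1)
		return NULL;

	if (syscall(__NR_mseal, gcs, size, 0)) {
		munmap(gcs, size);	/* not sealed yet, so this still works */
		return NULL;
	}

	return gcs;
}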
On Thu, Sep 25, 2025 at 11:58:01PM +0000, Edgecombe, Rick P wrote:
> On Fri, 2025-09-26 at 00:22 +0100, Mark Brown wrote:
> > I think those were around the use of alt stacks, which we don't
> > currently do for shadow stacks at all because we couldn't figure out
> > those edge cases. Possibly there's others as well, though - the alt
> > stacks issues dominated discussion a bit.
>
> For longjmp, IIRC there were some plans to search for a token on the
> target stack and use it, which seems somewhat at odds with the quick
> efficient jump that longjmp() gets usually used for. But it also doesn't
> solve the problem of taking a signal while you are unwinding.

Yeah, that all seemed very unclear to me.

> Like say you do calls all the way to the end of a shadow stack, and it's
> about to overflow. Then the thread swaps to another shadow stack. If you
> longjmp back to the original stack you will have to transition to the end
> of the first stack as you unwind. If at that point the thread gets a
> signal, it would overflow the shadow stack. This is a subtle difference
> in behavior compared to non-shadow stack. You also need to know that
> nothing else could consume that token on that stack in the meantime. So
> doing it safely is not nearly as simple as normal longjmp().
>
> Anyway, I think you don't need alt shadow stack to hit that. Just normal
> userspace threading?

Ah, the backtrack through a pivot case - yes, I don't think anyone had a good story for how that was going to work sensibly without making the stack writable. Sorry, I'd written off that case entirely so it didn't cross my mind.

> > > As far as re-using allocated shadow stacks, there is always the option
> > > to enable WRSS (or similar) to write the shadow stack as well as
> > > longjmp at will.
> > That's obviously a substantial downgrade in security though.
> I don't know about substantial, but I'd love to hear some offensive
> security persons analysis. There definitely was a school of thought
> though, that shadow stack should be turned on as widely as possible. If
> we need WRSS to make that happen in a sane way, you could argue there is
> sort of a security at scale benefit.

I agree it seems clearly better from a security point of view to have writable shadow stacks than none at all, I don't think there's much argument there other than the concerns about the memory consumption and performance tradeoffs.

> > > But for automatic thread created shadow stacks, there is no need to
> > > allow userspace to unmap a shadow stack, so the automatically created
> > > stacks could simply be msealed on creation and unmapped from the
> > > kernel. For a lot of apps (most?) this would work perfectly fine.
> > Indeed, we should be able to just do that if we're mseal()ing system
> > mappings I think - most likely anything that has a problem with it
> > probably already has a problem with the existing mseal() stuff. Yet
> > another reason we should be factoring more of this code out into the
> > generic code, like I say I'll try to look at that.
> Agree. But for the mseal stuff, I think you would want to have
> map_shadow_stack not available.

That seems like something userspace could enforce with existing security mechanisms? I can imagine a system might want different policies for different programs.
> > I think the important thing from a kernel ABI point of view is to give > > userspace the tools to do whatever it wants and get out of the way, and > > that ideally this should include options that don't just make the shadow > > stack writable since that's a substantial step down in protection. > Yes I hear that. But also try to avoid creating maintenance issues by adding > features that didn't turn out to be useful. It sounds like we agree that we need > more proof that this will work out in the long run. Yes, we need at least some buy in from userspace. > > That said your option 2 is already supported with the existing clone3() > > on both arm64 and x86_64, policy for switching between that and kernel > > managed stacks could be set by restricting the writable stacks flag on > > the enable prctl(), and/or restricting map_shadow_stack(). > You mean userspace could already re-use shadow stacks if they enable writable > shadow stacks? Yes I agree. Yes, exactly. > > > This RFC seems to be going down the path of addressing one edge case at a time. > > > Alone it's fine, but I'd rather punt these types of usages to (2) by default. > > For me this is in the category of "oh, of course you should be able to > > do that" where it feels like an obvious usability thing than an edge > > case. > True. I guess I was thinking more about the stack unwinding. Badly phrased, > sorry. I'm completely with you on stack unwinding over pivot stuff, I wasn't imagining we could address that. There's so many landmines there that we'd need a super solid story before doing anything specific to that case.
On Fri, 2025-09-26 at 01:44 +0100, Mark Brown wrote: > > I don't know about substantial, but I'd love to hear some offensive > > security persons analysis. There definitely was a school of thought > > though, that shadow stack should be turned on as widely as > > possible. If we need WRSS to make that happen in a sane way, you > > could argue there is sort of a security at scale > > benefit. > > I agree it seems clearly better from a security point of view to have > writable shadow stacks than none at all, I don't think there's much > argument there other than the concerns about the memory consumption > and performance tradeoffs. IIRC the WRSS equivalent works the same for ARM where you need to use a special instruction, right? So we are not talking about full writable shadow stacks that could get attacked from any overflow, rather, limited spots that have the WRSS (or similar) instruction. In the presence of forward edge CFI, we might be able to worry less about attackers being able to actually reach it? Still not quite as locked down as having it disabled, but maybe not such a huge gap compared to the mmap/munmap() stuff that is the alternative we are weighing. > > > > > But for automatic thread created shadow stacks, there is no > > > > need to allow userspace to unmap a shadow stack, so the > > > > automatically created stacks could simply be msealed on > > > > creation and unmapped from the kernel. For a lot of apps > > > > (most?) this would work perfectly fine. > > > > Indeed, we should be able to just do that if we're mseal()ing > > > system mappings I think - most likely anything that has a problem > > > with it probably already has a problem the existing mseal() > > > stuff. Yet another reason we should be factoring more of this > > > code out into the generic code, like I say I'll try to look at > > > that. > > > Agree. But for the mseal stuff, I think you would want to have > > map_shadow_stack not available. > > That seems like something userspace could enforce with existing > security mechanisms? I can imagine a system might want different > policies for different programs. Yes, you could already do it with seccomp or something. Not sure if it will work for the distro-wide enabling schemes or not though.
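For completeness, the seccomp route mentioned above could be as simple as a filter that fails map_shadow_stack() with EPERM. A minimal sketch (a real filter would also check seccomp_data.arch; the syscall number is taken from the kernel uapi and guarded in case headers are older):

#include <errno.h>
#include <stddef.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack	453
#endif

/* Deny map_shadow_stack() for this process and its children. */
static int deny_map_shadow_stack(void)
{
	struct sock_filter filter[] = {
		/* Load the syscall number. */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 offsetof(struct seccomp_data, nr)),
		/* If it is map_shadow_stack, fail with EPERM... */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,
			 __NR_map_shadow_stack, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
		/* ...otherwise allow the syscall. */
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(filter) / sizeof(filter[0]),
		.filter = filter,
	};

	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return -1;

	return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}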
On Fri, Sep 26, 2025 at 03:46:26PM +0000, Edgecombe, Rick P wrote: > On Fri, 2025-09-26 at 01:44 +0100, Mark Brown wrote: > > I agree it seems clearly better from a security point of view to have > > writable shadow stacks than none at all, I don't think there's much > > argument there other than the concerns about the memory consumption > > and performance tradeoffs. > IIRC the WRSS equivalent works the same for ARM where you need to use a > special instruction, right? So we are not talking about full writable Yes, it's GCSSTR for arm64. > shadow stacks that could get attacked from any overflow, rather, > limited spots that have the WRSS (or similar) instruction. In the > presence of forward edge CFI, we might be able to worry less about > attackers being able to actually reach it? Still not quite as locked > down as having it disabled, but maybe not such a huge gap compared to > the mmap/munmap() stuff that is the alternative we are weighing. Agreed, as I said it's a definite win still - just not quite as strong.
On Fri, Sep 26, 2025 at 05:09:08PM +0100, Mark Brown wrote:
>On Fri, Sep 26, 2025 at 03:46:26PM +0000, Edgecombe, Rick P wrote:
>> On Fri, 2025-09-26 at 01:44 +0100, Mark Brown wrote:
>
>> > I agree it seems clearly better from a security point of view to have
>> > writable shadow stacks than none at all, I don't think there's much
>> > argument there other than the concerns about the memory consumption
>> > and performance tradeoffs.
>
>> IIRC the WRSS equivalent works the same for ARM where you need to use a
>> special instruction, right? So we are not talking about full writable
>
>Yes, it's GCSSTR for arm64.

sspush / ssamoswap on RISC-V provide write mechanisms to the shadow stack.

>> shadow stacks that could get attacked from any overflow, rather,
>> limited spots that have the WRSS (or similar) instruction. In the
>> presence of forward edge CFI, we might be able to worry less about
>> attackers being able to actually reach it? Still not quite as locked
>> down as having it disabled, but maybe not such a huge gap compared to
>> the mmap/munmap() stuff that is the alternative we are weighing.
>
>Agreed, as I said it's a definite win still - just not quite as strong.

If I have to put on my philosopher's hat: in order to get wider deployment and adoption, it's better to have a better security posture for the majority of users rather than an ultra-secure system which is difficult to use. This just means that the write-to-shadow-stack flows in user space have to be done carefully:

- Sparse and not part of compiler codegen. Mostly they should be hand coded and reviewed.
- Reachability of such gadgets and their use by an adversary should be threat modeled. If forward CFI is enabled, I don't expect the write-to-shadow-stack gadget itself to be reachable without disabling fcfi or pivoting/corrupting the shadow stack. The only other way to achieve something like that would be to re-use an entire function (where the sswrite is present) to achieve the desired effect. I think we should be focusing more on those cases.