In a series posted a few years ago [1], a proposal was put forward to allow the kernel to allocate memory local to an mm and thus push it out of reach of current and future speculation-based cross-process attacks. We still believe this is a nice thing to have.

However, in the time that has passed since that post, Linux mm has grown quite a few new goodies, so we'd like to explore possibilities to implement this functionality with less effort and churn by leveraging the now-available facilities.

An RFC was posted a few months back [2] to show the proof of concept and a simple test driver.

In this RFC, we're using the same approach of implementing mm-local allocations by piggy-backing on memfd_secret(): regular user addresses are used, but the pages are pinned and the user/supervisor flag on the respective PTEs is flipped to make them directly accessible from the kernel. In addition to that, we are submitting 5 patches that use the secret memory to hide the vCPU gp-regs and fp-regs on arm64 VHE systems.

The generic drawbacks of using user virtual addresses mentioned in the previous RFC [2] still hold, in addition to a more specific one:

- While the user virtual addresses allocated for kernel secret memory are not
  directly accessible by userspace (the PTEs restrict that), copy_from_user()
  and copy_to_user() can still operate on those ranges, so userspace could
  e.g. guess the address and pass it as the target buffer for read(), making
  the kernel overwrite it with user-controlled content. Effectively, the
  secret memory in the current implementation lacks confidentiality and
  integrity guarantees.

In the specific case of vCPU registers this is fine, because the owner process can read and write them via KVM ioctls anyway. But in the general case it represents a security concern and needs to be addressed.

A possible way forward for the arch-agnostic implementation is to limit the user virtual addresses used for the kernel to a specific range that can be checked against in copy_from_user() and copy_to_user(). For an arch-specific implementation, using a separate PGD is the way to go.
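To make the PTE trick above concrete, a minimal arm64-flavoured sketch of flipping the user/supervisor flag on an already-populated secretmem range could look like the following. apply_to_page_range(), PTE_USER, set_pte_at() and flush_tlb_all() are existing kernel facilities; the helper names and the coarse TLB flush are illustrative assumptions rather than the code in this series:

#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Clear the EL0-access bit on one PTE; EL1 keeps its read/write access. */
static int secret_pte_make_kernel(pte_t *ptep, unsigned long addr, void *data)
{
        struct mm_struct *mm = data;
        pte_t pte = ptep_get(ptep);

        set_pte_at(mm, addr, ptep, __pte(pte_val(pte) & ~PTE_USER));
        return 0;
}

/* Walk an already-faulted-in user range and make it kernel-only. */
static int secret_range_make_kernel(struct mm_struct *mm,
                                    unsigned long addr, unsigned long size)
{
        int ret = apply_to_page_range(mm, addr, size,
                                      secret_pte_make_kernel, mm);

        if (!ret)
                flush_tlb_all();        /* coarse, but keeps the sketch short */
        return ret;
}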
[1] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@amazon.de/
[2] https://lore.kernel.org/lkml/20240621201501.1059948-1-rkagan@amazon.de/

Fares Mehanna / Roman Kagan (2):
  mseal: expose interface to seal / unseal user memory ranges
  mm/secretmem: implement mm-local kernel allocations

Fares Mehanna (5):
  arm64: KVM: Refactor C-code to access vCPU gp-registers through macros
  KVM: Refactor Assembly-code to access vCPU gp-registers through a macro
  arm64: KVM: Allocate vCPU gp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems
  arm64: KVM: Refactor C-code to access vCPU fp-registers through macros
  arm64: KVM: Allocate vCPU fp-regs dynamically on VHE and KERNEL_SECRETMEM enabled systems

 arch/arm64/include/asm/kvm_asm.h              |  50 ++--
 arch/arm64/include/asm/kvm_emulate.h          |   2 +-
 arch/arm64/include/asm/kvm_host.h             |  41 +++-
 arch/arm64/kernel/asm-offsets.c               |   1 +
 arch/arm64/kernel/image-vars.h                |   2 +
 arch/arm64/kvm/arm.c                          |  90 +++++++-
 arch/arm64/kvm/fpsimd.c                       |   2 +-
 arch/arm64/kvm/guest.c                        |  14 +-
 arch/arm64/kvm/hyp/entry.S                    |  15 ++
 arch/arm64/kvm/hyp/include/hyp/switch.h       |   6 +-
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h    |  10 +-
 .../arm64/kvm/hyp/include/nvhe/trap_handler.h |   2 +-
 arch/arm64/kvm/hyp/nvhe/host.S                |  20 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |   4 +-
 arch/arm64/kvm/reset.c                        |   2 +-
 arch/arm64/kvm/va_layout.c                    |  38 ++++
 include/linux/secretmem.h                     |  29 +++
 mm/Kconfig                                    |  10 +
 mm/gup.c                                      |   4 +-
 mm/internal.h                                 |   7 +
 mm/mseal.c                                    |  81 ++++---
 mm/secretmem.c                                | 213 ++++++++++++++++++
 22 files changed, 559 insertions(+), 84 deletions(-)

-- 
2.40.1
On 11.09.24 16:33, Fares Mehanna wrote: > In a series posted a few years ago [1], a proposal was put forward to allow the > kernel to allocate memory local to a mm and thus push it out of reach for > current and future speculation-based cross-process attacks. We still believe > this is a nice thing to have. > > However, in the time passed since that post Linux mm has grown quite a few new > goodies, so we'd like to explore possibilities to implement this functionality > with less effort and churn leveraging the now available facilities. > > An RFC was posted few months back [2] to show the proof of concept and a simple > test driver. > > In this RFC, we're using the same approach of implementing mm-local allocations > piggy-backing on memfd_secret(), using regular user addresses but pinning the > pages and flipping the user/supervisor flag on the respective PTEs to make them > directly accessible from kernel. > In addition to that we are submitting 5 patches to use the secret memory to hide > the vCPU gp-regs and fp-regs on arm64 VHE systems. > > The generic drawbacks of using user virtual addresses mentioned in the previous > RFC [2] still hold, in addition to a more specific one: > > - While the user virtual addresses allocated for kernel secret memory are not > directly accessible by userspace as the PTEs restrict that, copy_from_user() > and copy_to_user() can operate on those ranges, so that e.g. the usermode can > guess the address and pass it as the target buffer for read(), making the > kernel overwrite it with the user-controlled content. Effectively making the > secret memory in the current implementation missing confidentiality and > integrity guarantees. > > In the specific case of vCPU registers, this is fine because the owner process > can read and write to them using KVM IOCTLs anyway. But in the general case this > represents a security concern and needs to be addressed. > > A possible way forward for the arch-agnostic implementation is to limit the user > virtual addresses used for kernel to specific range that can be checked against > in copy_from_user() and copy_to_user(). > > For arch specific implementation, using separate PGD is the way to go. > > [1] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@amazon.de/ > [2] https://lore.kernel.org/lkml/20240621201501.1059948-1-rkagan@amazon.de/ Hey Mark and Mike, We talked at LPC about mm-local memory and you had some inputs. It would be amazing to write them down here so I don't end up playing game of telephone :) Thanks! Amazon Web Services Development Center Germany GmbH Krausenstr. 38 10117 Berlin Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B Sitz: Berlin Ust-ID: DE 365 538 597
Hi, On Wed, Sep 11, 2024 at 02:33:59PM +0000, Fares Mehanna wrote: > In a series posted a few years ago [1], a proposal was put forward to allow the > kernel to allocate memory local to a mm and thus push it out of reach for > current and future speculation-based cross-process attacks. We still believe > this is a nice thing to have. > > However, in the time passed since that post Linux mm has grown quite a few new > goodies, so we'd like to explore possibilities to implement this functionality > with less effort and churn leveraging the now available facilities. > > An RFC was posted few months back [2] to show the proof of concept and a simple > test driver. > > In this RFC, we're using the same approach of implementing mm-local allocations > piggy-backing on memfd_secret(), using regular user addresses but pinning the > pages and flipping the user/supervisor flag on the respective PTEs to make them > directly accessible from kernel. > In addition to that we are submitting 5 patches to use the secret memory to hide > the vCPU gp-regs and fp-regs on arm64 VHE systems. > > The generic drawbacks of using user virtual addresses mentioned in the previous > RFC [2] still hold, in addition to a more specific one: > > - While the user virtual addresses allocated for kernel secret memory are not > directly accessible by userspace as the PTEs restrict that, copy_from_user() > and copy_to_user() can operate on those ranges, so that e.g. the usermode can > guess the address and pass it as the target buffer for read(), making the > kernel overwrite it with the user-controlled content. Effectively making the > secret memory in the current implementation missing confidentiality and > integrity guarantees. Having a VMA in user mappings for kernel memory seems weird to say the least. Core MM does not expect to have VMAs for kernel memory. What will happen if userspace ftruncates that VMA? Or registers it with userfaultfd? > In the specific case of vCPU registers, this is fine because the owner process > can read and write to them using KVM IOCTLs anyway. But in the general case this > represents a security concern and needs to be addressed. > > A possible way forward for the arch-agnostic implementation is to limit the user > virtual addresses used for kernel to specific range that can be checked against > in copy_from_user() and copy_to_user(). > > For arch specific implementation, using separate PGD is the way to go. > > [1] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@amazon.de/ This approach seems much more reasonable and it's not that it was entirely arch-specific. There is some plumbing at arch level, but the allocator is anyway arch-independent. -- Sincerely yours, Mike.
Hi,

Thanks for taking a look, and apologies for my delayed response.

> Having a VMA in user mappings for kernel memory seems weird to say the > least.

I see your point and agree with you. Let me explain the motivation, pros and cons of the approach after answering your questions.

> Core MM does not expect to have VMAs for kernel memory. What will happen if > userspace ftruncates that VMA? Or registers it with userfaultfd?

In the patch, I make sure the pages are faulted in, locked and sealed, so that the VMA is practically off-limits to the owner process. Only after that do I change the permissions so the range can be used by the kernel.

> This approach seems much more reasonable and it's not that it was entirely > arch-specific. There is some plumbing at arch level, but the allocator is > anyway arch-independent.

I wanted to explore a simple solution that implements mm-local kernel secret memory without much arch-dependent code. I also wanted to reuse as much of memfd_secret() as possible, to benefit from what is done already and from possible future improvements to it.

Keeping the secret pages at user virtual addresses is easier because the page table entries are not global by default, so no special handling is needed for spawn(); keeping them tracked in a VMA shouldn't require special handling for fork() either.

The challenge was to keep the virtual addresses / VMA away from user control for as long as the kernel is using them, and to signal to the mm core that this VMA is special so it is not merged with other VMAs. I believe prefaulting the pages, locking them and sealing the VMA puts the range practically out of reach of userspace influence (a rough sketch of that ordering is further down in this mail).

But the current approach has these downsides (that I can think of):

1. Kernel secret user virtual addresses can still be used in functions that
   accept user virtual addresses, like copy_from_user() / copy_to_user().
2. Even if we are sure the VMA is off-limits to userspace, adding a VMA with
   kernel addresses increases the attack surface between userspace and the
   kernel.
3. Since kernel secret memory is mapped at user virtual addresses, it is very
   easy to guess the exact virtual address (using binary search), and since
   this functionality is designed to keep user data, it is fair to assume that
   userspace will always be able to influence what is written there. So it
   kind of breaks KASLR for those specific pages.
4. It locks user virtual memory away; this may break software that assumes it
   can mmap() into specific places.

One way to address most of those concerns while keeping the solution almost arch-agnostic is to reserve a reasonable chunk of user virtual memory to be used only for kernel secret memory, and not track it in VMAs. This is similar to the old approach, but instead of creating a non-global kernel PGD per arch it would use a chunk of user virtual memory. This chunk can be defined per arch, and this solution wouldn't use memfd_secret(). We could then easily enlighten the kernel about this range so it can be checked against in functions like access_ok(). This approach, however, makes downside #4 even worse, as it reserves a bigger chunk of user virtual memory whenever the feature is enabled.

I'm also very okay with switching back to the old approach, at the expense of:

1. Supporting fewer architectures, i.e. only those that can afford to give away a single PGD entry.
2. More complicated arch-specific code.
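For illustration only, the ordering described above could look roughly like the following. vm_mmap(), mm_populate() and vm_munmap() are existing kernel interfaces; do_secretmem_mlock() and do_secretmem_seal() are hypothetical names standing in for whatever the mm/mlock.c and mm/mseal.c plumbing ends up exposing, and the final PTE flip is the sketch from the cover letter:

#include <linux/err.h>
#include <linux/file.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/sched.h>

/* Hypothetical helpers standing in for the mlock/mseal plumbing. */
int do_secretmem_mlock(struct mm_struct *mm, unsigned long addr, size_t len);
int do_secretmem_seal(struct mm_struct *mm, unsigned long addr, size_t len);

static long secretmem_install_kernel_range(struct file *secret_file, size_t size)
{
        unsigned long addr;
        int err;

        /* Map the secretmem file into the owner mm, under kernel control. */
        addr = vm_mmap(secret_file, 0, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, 0);
        if (IS_ERR_VALUE(addr))
                return addr;

        /* Prefault everything so no user-visible fault path runs later. */
        mm_populate(addr, size);

        /*
         * Lock the pages, then seal the VMA so the owner process cannot
         * munmap()/mremap()/mprotect() the range from under the kernel.
         */
        err = do_secretmem_mlock(current->mm, addr, size);
        if (!err)
                err = do_secretmem_seal(current->mm, addr, size);
        if (err) {
                vm_munmap(addr, size);
                return err;
        }

        /* Only now flip the PTEs to kernel-only access (earlier sketch). */
        return addr;
}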
Also @graf mentioned how aarch64 uses TTBR0/TTBR1 for user and kernel page tables. I haven't looked into this yet, but it probably means that the kernel page table would be tracked per process and TTBR1 would be switched during context switching.

What do you think? I would appreciate your opinion before working on the next RFC patch set.

Thanks!
Fares.
On Wed, Sep 25, 2024 at 03:33:47PM +0000, Fares Mehanna wrote: > Hi, > > Thanks for taking a look and apologies for my delayed response. > > > Having a VMA in user mappings for kernel memory seems weird to say the > > least. > > I see your point and agree with you. Let me explain the motivation, pros and > cons of the approach after answering your questions. > > > Core MM does not expect to have VMAs for kernel memory. What will happen if > > userspace ftruncates that VMA? Or registers it with userfaultfd? > > In the patch, I make sure the pages are faulted in, locked and sealed to make > sure the VMA is practically off-limits from the owner process. Only after that > I change the permissions to be used by the kernel. And what about VMA accesses from the kernel? How do you verify that everything that works with VMAs in the kernel can deal with that being a kernel mapping rather than userspace? > > This approach seems much more reasonable and it's not that it was entirely > > arch-specific. There is some plumbing at arch level, but the allocator is > > anyway arch-independent. > > So I wanted to explore a simple solution to implement mm-local kernel secret > memory without much arch dependent code. I also wanted to reuse as much of > memfd_secret() as possible to benefit from what is done already and possible > future improvements to it. Adding functionality that normally belongs to userspace into mm/secretmem.c does not feel like a reuse, sorry. The only thing your actually share is removal of the allocated pages from the direct map. And hijacking userspace mapping instead of properly implementing a kernel mapping does not seem like proper solution. > Keeping the secret pages in user virtual addresses is easier as the page table > entries are not global by default so no special handling for spawn(). keeping > them tracked in VMA shouldn't require special handling for fork(). > > The challenge was to keep the virtual addresses / VMA away from user control as > long as the kernel is using it, and signal the mm core that this VMA is special > so it is not merged with other VMAs. > > I believe locking the pages, sealing the VMA, prefaulting the pages should make > it practicality away of user space influence. > > But the current approach have those downsides: (That I can think of) > 1. Kernel secret user virtual addresses can still be used in functions accepting > user virtual addresses like copy_from_user / copy_to_user. > 2. Even if we are sure the VMA is off-limits to userspace, adding VMA with > kernel addresses will increase attack surface between userspace and the > kernel. > 3. Since kernel secret memory is mapped in user virtual addresses, it is very > easy to guess the exact virtual address (using binary search), and since > this functionality is designed to keep user data, it is fair to assume the > userspace will always be able to influence what is written there. > So it kind of breaks KASLR for those specific pages. There is even no need to guess, it will appear on /proc/pid/maps > 4. It locks user virtual memory away, this may break some software if they > assumed they can mmap() into specific places. > > One way to address most of those concerns while keeping the solution almost arch > agnostic is is to allocate reasonable chunk of user virtual memory to be only > used for kernel secret memory, and not track them in VMAs. > This is similar to the old approach but instead of creating non-global kernel > PGD per arch it will use chunk of user virtual memory. 
This chunk can be defined > per arch, and this solution won't use memfd_secret(). > We can then easily enlighten the kernel about this range so the kernel can test > for this range in functions like access_ok(). This approach however will make > downside #4 even worse, as it will reserve bigger chunk of user virtual memory > if this feature is enabled. > > I'm also very okay switching back to the old approach with the expense of: > 1. Supporting fewer architectures that can afford to give away single PGD. Only few architectures can modify their direct map, and all these can spare a PGD entry. > 2. More complicated arch specific code. On x86 similar code already exists for LDT, you may want to look at Andy's comments on old proclocal posting: https://lore.kernel.org/lkml/CALCETrXHbS9VXfZ80kOjiTrreM2EbapYeGp68mvJPbosUtorYA@mail.gmail.com/ > Also @graf mentioned how aarch64 uses TTBR0/TTBR1 for user and kernel page > tables, I haven't looked at this yet but it probably means that kernel page > table will be tracked per process and TTBR1 will be switched during context > switching. > > What do you think? I would appreciate your opinion before working on the next > RFC patch set. > > Thanks! > Fares. > > > > Amazon Web Services Development Center Germany GmbH > Krausenstr. 38 > 10117 Berlin > Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss > Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B > Sitz: Berlin > Ust-ID: DE 365 538 597 > -- Sincerely yours, Mike.
> > Hi, > > > > Thanks for taking a look and apologies for my delayed response. > > > > > Having a VMA in user mappings for kernel memory seems weird to say the > > > least. > > > > I see your point and agree with you. Let me explain the motivation, pros and > > cons of the approach after answering your questions. > > > > > Core MM does not expect to have VMAs for kernel memory. What will happen if > > > userspace ftruncates that VMA? Or registers it with userfaultfd? > > > > In the patch, I make sure the pages are faulted in, locked and sealed to make > > sure the VMA is practically off-limits from the owner process. Only after that > > I change the permissions to be used by the kernel. > > And what about VMA accesses from the kernel? How do you verify that > everything that works with VMAs in the kernel can deal with that being a > kernel mapping rather than userspace?

I add `VM_MIXEDMAP` when the secret allocation is intended for kernel usage; this should make the VMA special and prevent a lot of operations like VMA merging. Maybe the usage of `VM_MIXEDMAP` is not ideal and we could introduce a new kernel flag for that (a rough sketch of this marking is at the end of this mail). But I'm not aware of a destructive VMA operation from the kernel side while the VMA is marked special, mixed-map and sealed.

> > > This approach seems much more reasonable and it's not that it was entirely > > > arch-specific. There is some plumbing at arch level, but the allocator is > > > anyway arch-independent. > > > > So I wanted to explore a simple solution to implement mm-local kernel secret > > memory without much arch dependent code. I also wanted to reuse as much of > > memfd_secret() as possible to benefit from what is done already and possible > > future improvements to it. > > Adding functionality that normally belongs to userspace into mm/secretmem.c > does not feel like a reuse, sorry.

Right; because the mapping lives in user virtual space, most of the operations belong to userspace, yes. I thought this way would be easier to demonstrate the approach for an RFC.

> The only thing your actually share is removal of the allocated pages from > the direct map. And hijacking userspace mapping instead of properly > implementing a kernel mapping does not seem like proper solution.

We also get:
1. The PGD is private when creating a new process.
2. Existing kernel-secret mappings of a given process are cloned on fork(), so
   there is no need to track them separately for cloning.
3. No special handling for context switching.

> > Keeping the secret pages in user virtual addresses is easier as the page table > > entries are not global by default so no special handling for spawn(). keeping > > them tracked in VMA shouldn't require special handling for fork(). > > > > The challenge was to keep the virtual addresses / VMA away from user control as > > long as the kernel is using it, and signal the mm core that this VMA is special > > so it is not merged with other VMAs. > > > > I believe locking the pages, sealing the VMA, prefaulting the pages should make > > it practicality away of user space influence. > > > > But the current approach have those downsides: (That I can think of) > > 1. Kernel secret user virtual addresses can still be used in functions accepting > > user virtual addresses like copy_from_user / copy_to_user. > > 2. Even if we are sure the VMA is off-limits to userspace, adding VMA with > > kernel addresses will increase attack surface between userspace and the > > kernel. > > 3. Since kernel secret memory is mapped in user virtual addresses, it is very > > easy to guess the exact virtual address (using binary search), and since > > this functionality is designed to keep user data, it is fair to assume the > > userspace will always be able to influence what is written there. > > So it kind of breaks KASLR for those specific pages. > > There is even no need to guess, it will appear on /proc/pid/maps

Yeah, but that is easily fixable; the other issue, however, stays the same unless I allocate a bigger chunk from userspace and move away from VMA tracking.

> > 4. It locks user virtual memory away, this may break some software if they > > assumed they can mmap() into specific places. > > > > One way to address most of those concerns while keeping the solution almost arch > > agnostic is is to allocate reasonable chunk of user virtual memory to be only > > used for kernel secret memory, and not track them in VMAs. > > This is similar to the old approach but instead of creating non-global kernel > > PGD per arch it will use chunk of user virtual memory. This chunk can be defined > > per arch, and this solution won't use memfd_secret(). > > We can then easily enlighten the kernel about this range so the kernel can test > > for this range in functions like access_ok(). This approach however will make > > downside #4 even worse, as it will reserve bigger chunk of user virtual memory > > if this feature is enabled. > > > > I'm also very okay switching back to the old approach with the expense of: > > 1. Supporting fewer architectures that can afford to give away single PGD. > > Only few architectures can modify their direct map, and all these can spare > a PGD entry. > > > 2. More complicated arch specific code. > > On x86 similar code already exists for LDT, you may want to look at Andy's > comments on old proclocal posting: > > https://lore.kernel.org/lkml/CALCETrXHbS9VXfZ80kOjiTrreM2EbapYeGp68mvJPbosUtorYA@mail.gmail.com/

Ah, I see, so there is no need to think about architectures that can't spare a PGD entry, thanks!

I read the discussion; LDT is x86-specific, and I wanted to start with aarch64. I'm still thinking about the best approach for aarch64 for my next PoC. aarch64 keeps two tables in TTBR0/TTBR1, so what I'm thinking of is:

1. Have a kernel page table per process, with all its PGD entries shared other
   than a single PGD entry reserved for kernel secret allocations.
2. On fork, traverse the private PGD part and clone the existing page table for
   the new process.
3. On context switch, write the table to TTBR1, so the kernel has access to all
   secret allocations of this process.

This moves away from user vaddrs and VMA tracking, at the expense of each architecture supporting it in its own way.

Does that sound more decent?

Thank you!
Fares.

> > Also @graf mentioned how aarch64 uses TTBR0/TTBR1 for user and kernel page > > tables, I haven't looked at this yet but it probably means that kernel page > > table will be tracked per process and TTBR1 will be switched during context > > switching. > > > > What do you think? I would appreciate your opinion before working on the next > > RFC patch set. > > > > Thanks! > > Fares.
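A minimal sketch of the VMA marking mentioned earlier in this mail, assuming it is done from the secret file's mmap() handler; vm_flags_set() and these VM_* flags exist today, but the handler name and the exact flag combination are illustrative and not the code in this series:

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Mark the VMA special so core mm will not merge it with neighbours
 * (VM_MIXEDMAP is part of VM_SPECIAL), expand it via mremap(), dump it,
 * or reclaim its pages.
 */
static int secretmem_kernel_mmap(struct file *file, struct vm_area_struct *vma)
{
        vm_flags_set(vma, VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_LOCKED);
        return 0;
}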
On 11.09.24 16:33, Fares Mehanna wrote: > In a series posted a few years ago [1], a proposal was put forward to allow the > kernel to allocate memory local to a mm and thus push it out of reach for > current and future speculation-based cross-process attacks. We still believe > this is a nice thing to have. > > However, in the time passed since that post Linux mm has grown quite a few new > goodies, so we'd like to explore possibilities to implement this functionality > with less effort and churn leveraging the now available facilities. > > An RFC was posted few months back [2] to show the proof of concept and a simple > test driver. > > In this RFC, we're using the same approach of implementing mm-local allocations > piggy-backing on memfd_secret(), using regular user addresses but pinning the > pages and flipping the user/supervisor flag on the respective PTEs to make them > directly accessible from kernel. > In addition to that we are submitting 5 patches to use the secret memory to hide > the vCPU gp-regs and fp-regs on arm64 VHE systems. I'm a bit lost on what exactly we want to achieve. The point where we start flipping user/supervisor flags confuses me :) With secretmem, you'd get memory allocated that (a) Is accessible by user space -- mapped into user space. (b) Is inaccessible by kernel space -- not mapped into the direct map (c) GUP will fail, but copy_from / copy_to user will work. Another way, without secretmem, would be to consider these "secrets" kernel allocations that can be mapped into user space using mmap() of a special fd. That is, they wouldn't have their origin in secretmem, but in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP with vm_insert_pages(), manually removing them from the directmap. But, I am not sure who is supposed to access what. Let's explore the requirements. I assume we want: (a) Pages accessible by user space -- mapped into user space. (b) Pages inaccessible by kernel space -- not mapped into the direct map (c) GUP to fail (no direct map). (d) copy_from / copy_to user to fail? And on top of that, some way to access these pages on demand from kernel space? (temporary CPU-local mapping?) Or how would the kernel make use of these allocations? -- Cheers, David / dhildenb
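A rough sketch of that alternative (kernel-owned pages inserted into a user VMA with vm_insert_pages() and then dropped from the direct map) might look as follows; vm_insert_pages(), set_direct_map_invalid_noflush() and flush_tlb_kernel_range() are existing interfaces, while the function name and the minimal error handling are illustrative only:

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <asm/tlbflush.h>

static int insert_secret_pages(struct vm_area_struct *vma, unsigned long addr,
                               struct page **pages, unsigned long nr)
{
        unsigned long i, remaining = nr;
        int err;

        /* Map the kernel-allocated pages into the user VMA; vm_insert_pages()
         * marks the VMA VM_MIXEDMAP itself if needed. */
        err = vm_insert_pages(vma, addr, pages, &remaining);
        if (err)
                return err;

        /* Pull each page out of the kernel direct map (no rollback here). */
        for (i = 0; i < nr; i++) {
                unsigned long kaddr = (unsigned long)page_address(pages[i]);

                err = set_direct_map_invalid_noflush(pages[i]);
                if (err)
                        return err;
                flush_tlb_kernel_range(kaddr, kaddr + PAGE_SIZE);
        }
        return 0;
}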
> > In a series posted a few years ago [1], a proposal was put forward to allow the > > kernel to allocate memory local to a mm and thus push it out of reach for > > current and future speculation-based cross-process attacks. We still believe > > this is a nice thing to have. > > > > However, in the time passed since that post Linux mm has grown quite a few new > > goodies, so we'd like to explore possibilities to implement this functionality > > with less effort and churn leveraging the now available facilities. > > > > An RFC was posted few months back [2] to show the proof of concept and a simple > > test driver. > > > > In this RFC, we're using the same approach of implementing mm-local allocations > > piggy-backing on memfd_secret(), using regular user addresses but pinning the > > pages and flipping the user/supervisor flag on the respective PTEs to make them > > directly accessible from kernel. > > In addition to that we are submitting 5 patches to use the secret memory to hide > > the vCPU gp-regs and fp-regs on arm64 VHE systems. > > I'm a bit lost on what exactly we want to achieve. The point where we > start flipping user/supervisor flags confuses me :) > > With secretmem, you'd get memory allocated that > (a) Is accessible by user space -- mapped into user space. > (b) Is inaccessible by kernel space -- not mapped into the direct map > (c) GUP will fail, but copy_from / copy_to user will work. > > > Another way, without secretmem, would be to consider these "secrets" > kernel allocations that can be mapped into user space using mmap() of a > special fd. That is, they wouldn't have their origin in secretmem, but > in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP > with vm_insert_pages(), manually removing them from the directmap. > > But, I am not sure who is supposed to access what. Let's explore the > requirements. I assume we want: > > (a) Pages accessible by user space -- mapped into user space. > (b) Pages inaccessible by kernel space -- not mapped into the direct map > (c) GUP to fail (no direct map). > (d) copy_from / copy_to user to fail? > > And on top of that, some way to access these pages on demand from kernel > space? (temporary CPU-local mapping?) > > Or how would the kernel make use of these allocations? > > -- > Cheers, > > David / dhildenb

Hi David,

Thanks for taking a look at the patches!

We're trying to allocate kernel memory that is accessible to the kernel, but only while the context of the owning process is loaded.

So this is kernel memory that is not needed to operate the kernel itself; it is there to store and process data on behalf of a process. The requirement for this memory is that it is never touched unless that process is scheduled on this core; any other access will crash the kernel.

So this memory should only be directly readable and writable by the kernel, and only while the process context is loaded. The memory shouldn't be readable or writable by the owner process at all.

This is basically done by removing those pages from the kernel linear mapping and attaching them only to the process mm_struct. During a context switch the kernel thus loses access to the secret memory of the task being scheduled out and gains access to the secret memory of the incoming one.

This generally protects against speculation attacks, and if another process manages to trick the kernel into leaking data from memory, the kernel will crash when it tries to access another process's secret memory.

Since this memory is special in the sense that it is kernel memory which only makes sense in the context of the owning process, I tried in this patch series to explore the possibility of reusing memfd_secret() to allocate it in the user virtual address space, manage it in a VMA, and flip the permissions while keeping control of the mapping exclusively in the kernel.

Right now it is:
(a) Pages not accessible by user space -- even though they are mapped into user
    space, the PTEs are marked for kernel usage.
(b) Pages accessible by kernel space -- even though they are not mapped into the
    direct map, the PTEs at the user vaddr are marked for kernel usage.
(c) copy_from / copy_to user won't fail -- because the range lies in the user
    address space; this could be fixed by dedicating a specific user vaddr range
    to this feature and checking against that range there.
(d) The secret memory vaddr is guessable by the owner process -- that could also
    be fixed by reserving a bigger chunk of user vaddr for this feature and
    placing the secret memory randomly inside it.
(e) The mapping is off-limits to the owner process because the VMA is marked
    locked, sealed and special.

Another alternative (the one implemented in the first submission) is to track those allocations in a non-shared kernel PGD per process, and then handle creating, forking and context-switching that PGD.

What I like about the memfd_secret() approach is its simplicity and that it is arch-agnostic; what I don't like is the increased attack surface of using VMAs to track those allocations.

I'm thinking of working on a PoC that implements the first approach, using a non-shared kernel PGD for secret memory allocations on arm64. This means adding a kernel page table per process where all PGD entries are shared except one, which is used for mapping secret allocations, and handling fork and context switching (TTBR1 switching(?)) correctly for that secret-memory PGD (a rough sketch of the idea is at the end of this mail).

What do you think? I'd really appreciate opinions and possible ways forward.

Thanks!
Fares.
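To make the plan above a little more concrete, here is a very rough arm64-flavoured sketch, assuming a per-mm copy of the kernel PGD in which every entry is shared with swapper_pg_dir except one slot reserved for mm-local secret mappings. swapper_pg_dir, PTRS_PER_PGD and __get_free_page() exist today; the slot index and the helper name are assumptions, and the per-process TTBR1 install at context switch is not shown:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Hypothetical slot reserved for this mm's secret mappings. */
#define SECRET_PGD_INDEX        (PTRS_PER_PGD - 2)

/*
 * Build the per-mm kernel page table: every PGD entry points at the
 * shared swapper tables except the one slot kept private to this mm.
 */
static pgd_t *secret_mm_alloc_kernel_pgd(void)
{
        pgd_t *kpgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);

        if (!kpgd)
                return NULL;

        /* Share everything with swapper ... */
        memcpy(kpgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t));
        /* ... except the slot reserved for this mm's secret mappings. */
        kpgd[SECRET_PGD_INDEX] = __pgd(0);

        /*
         * The caller would hang kpgd off its mm (a new field), and the
         * arch context-switch code would install it into TTBR1_EL1 for
         * the incoming task; that part is necessarily arch-specific and
         * not shown here.
         */
        return kpgd;
}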
On 10.10.24 17:52, Fares Mehanna wrote: >>> In a series posted a few years ago [1], a proposal was put forward to allow the >>> kernel to allocate memory local to a mm and thus push it out of reach for >>> current and future speculation-based cross-process attacks. We still believe >>> this is a nice thing to have. >>> >>> However, in the time passed since that post Linux mm has grown quite a few new >>> goodies, so we'd like to explore possibilities to implement this functionality >>> with less effort and churn leveraging the now available facilities. >>> >>> An RFC was posted few months back [2] to show the proof of concept and a simple >>> test driver. >>> >>> In this RFC, we're using the same approach of implementing mm-local allocations >>> piggy-backing on memfd_secret(), using regular user addresses but pinning the >>> pages and flipping the user/supervisor flag on the respective PTEs to make them >>> directly accessible from kernel. >>> In addition to that we are submitting 5 patches to use the secret memory to hide >>> the vCPU gp-regs and fp-regs on arm64 VHE systems. >> >> I'm a bit lost on what exactly we want to achieve. The point where we >> start flipping user/supervisor flags confuses me :) >> >> With secretmem, you'd get memory allocated that >> (a) Is accessible by user space -- mapped into user space. >> (b) Is inaccessible by kernel space -- not mapped into the direct map >> (c) GUP will fail, but copy_from / copy_to user will work. >> >> >> Another way, without secretmem, would be to consider these "secrets" >> kernel allocations that can be mapped into user space using mmap() of a >> special fd. That is, they wouldn't have their origin in secretmem, but >> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP >> with vm_insert_pages(), manually removing them from the directmap. >> >> But, I am not sure who is supposed to access what. Let's explore the >> requirements. I assume we want: >> >> (a) Pages accessible by user space -- mapped into user space. >> (b) Pages inaccessible by kernel space -- not mapped into the direct map >> (c) GUP to fail (no direct map). >> (d) copy_from / copy_to user to fail? >> >> And on top of that, some way to access these pages on demand from kernel >> space? (temporary CPU-local mapping?) >> >> Or how would the kernel make use of these allocations? >> >> -- >> Cheers, >> >> David / dhildenb > > Hi David, Hi Fares! > > Thanks for taking a look at the patches! > > We're trying to allocate a kernel memory that is accessible to the kernel but > only when the context of the process is loaded. > > So this is a kernel memory that is not needed to operate the kernel itself, it > is to store & process data on behalf of a process. The requirement for this > memory is that it would never be touched unless the process is scheduled on this > core. otherwise any other access will crash the kernel. > > So this memory should only be directly readable and writable by the kernel, but > only when the process context is loaded. The memory shouldn't be readable or > writable by the owner process at all. > > This is basically done by removing those pages from kernel linear address and > attaching them only in the process mm_struct. So during context switching the > kernel loses access to the secret memory scheduled out and gain access to the > new process secret memory. > > This generally protects against speculation attacks, and if other process managed > to trick the kernel to leak data from memory. 
In this case the kernel will crash > if it tries to access other processes secret memory. > > Since this memory is special in the sense that it is kernel memory but only make > sense in the term of the owner process, I tried in this patch series to explore > the possibility of reusing memfd_secret() to allocate this memory in user virtual > address space, manage it in a VMA, flipping the permissions while keeping the > control of the mapping exclusively with the kernel. > > Right now it is: > (a) Pages not accessible by user space -- even though they are mapped into user > space, the PTEs are marked for kernel usage. Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks! It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag to change access permissions between kernel/user space. IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions. > (b) Pages accessible by kernel space -- even though they are not mapped into the > direct map, the PTEs in uvaddr are marked for kernel usage. > (c) copy_from / copy_to user won't fail -- because it is in the user range, but > this can be fixed by allocating specific range in user vaddr to this feature > and check against this range there. > (d) The secret memory vaddr is guessable by the owner process -- that can also > be fixed by allocating bigger chunk of user vaddr for this feature and > randomly placing the secret memory there. > (e) Mapping is off-limits to the owner process by marking the VMA as locked, > sealed and special. Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page table only accessible by kernel space. :) So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned inaccessible by flipping the "kernel vs. user" switch. > > Other alternative (that was implemented in the first submission) is to track those > allocations in a non-shared kernel PGD per process, then handle creating, forking > and context-switching this PGD. That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM page tables? Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap). > > What I like about the memfd_secret() approach is the simplicity and being arch > agnostic, what I don't like is the increased attack surface by using VMAs to > track those allocations. Yes, but memfd_secret() was really design for user space to hold secrets. But I can see how you came to this solution. > > I'm thinking of working on a PoC to implement the first approach of using a > non-shared kernel PGD for secret memory allocations on arm64. This includes > adding kernel page table per process where all PGDs are shared but one which > will be used for secret allocations mapping. And handle the fork & context > switching (TTBR1 switching(?)) correctly for the secret memory PGD. > > What do you think? I'd really appreciate opinions and possible ways forward. 
Naive question: does arm64 rather resemble the s390x model or the x86-64 model? -- Cheers, David / dhildenb
> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote: > > On 10.10.24 17:52, Fares Mehanna wrote: >>>> In a series posted a few years ago [1], a proposal was put forward to allow the >>>> kernel to allocate memory local to a mm and thus push it out of reach for >>>> current and future speculation-based cross-process attacks. We still believe >>>> this is a nice thing to have. >>>> >>>> However, in the time passed since that post Linux mm has grown quite a few new >>>> goodies, so we'd like to explore possibilities to implement this functionality >>>> with less effort and churn leveraging the now available facilities. >>>> >>>> An RFC was posted few months back [2] to show the proof of concept and a simple >>>> test driver. >>>> >>>> In this RFC, we're using the same approach of implementing mm-local allocations >>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the >>>> pages and flipping the user/supervisor flag on the respective PTEs to make them >>>> directly accessible from kernel. >>>> In addition to that we are submitting 5 patches to use the secret memory to hide >>>> the vCPU gp-regs and fp-regs on arm64 VHE systems. >>> >>> I'm a bit lost on what exactly we want to achieve. The point where we >>> start flipping user/supervisor flags confuses me :) >>> >>> With secretmem, you'd get memory allocated that >>> (a) Is accessible by user space -- mapped into user space. >>> (b) Is inaccessible by kernel space -- not mapped into the direct map >>> (c) GUP will fail, but copy_from / copy_to user will work. >>> >>> >>> Another way, without secretmem, would be to consider these "secrets" >>> kernel allocations that can be mapped into user space using mmap() of a >>> special fd. That is, they wouldn't have their origin in secretmem, but >>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP >>> with vm_insert_pages(), manually removing them from the directmap. >>> >>> But, I am not sure who is supposed to access what. Let's explore the >>> requirements. I assume we want: >>> >>> (a) Pages accessible by user space -- mapped into user space. >>> (b) Pages inaccessible by kernel space -- not mapped into the direct map >>> (c) GUP to fail (no direct map). >>> (d) copy_from / copy_to user to fail? >>> >>> And on top of that, some way to access these pages on demand from kernel >>> space? (temporary CPU-local mapping?) >>> >>> Or how would the kernel make use of these allocations? >>> >>> -- >>> Cheers, >>> >>> David / dhildenb >> Hi David, > > Hi Fares! > >> Thanks for taking a look at the patches! >> We're trying to allocate a kernel memory that is accessible to the kernel but >> only when the context of the process is loaded. >> So this is a kernel memory that is not needed to operate the kernel itself, it >> is to store & process data on behalf of a process. The requirement for this >> memory is that it would never be touched unless the process is scheduled on this >> core. otherwise any other access will crash the kernel. >> So this memory should only be directly readable and writable by the kernel, but >> only when the process context is loaded. The memory shouldn't be readable or >> writable by the owner process at all. >> This is basically done by removing those pages from kernel linear address and >> attaching them only in the process mm_struct. So during context switching the >> kernel loses access to the secret memory scheduled out and gain access to the >> new process secret memory. 
>> This generally protects against speculation attacks, and if other process managed >> to trick the kernel to leak data from memory. In this case the kernel will crash >> if it tries to access other processes secret memory. >> Since this memory is special in the sense that it is kernel memory but only make >> sense in the term of the owner process, I tried in this patch series to explore >> the possibility of reusing memfd_secret() to allocate this memory in user virtual >> address space, manage it in a VMA, flipping the permissions while keeping the >> control of the mapping exclusively with the kernel. >> Right now it is: >> (a) Pages not accessible by user space -- even though they are mapped into user >> space, the PTEs are marked for kernel usage. > > Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks! > > It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag to change access permissions between kernel/user space. > > IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions. > >> (b) Pages accessible by kernel space -- even though they are not mapped into the >> direct map, the PTEs in uvaddr are marked for kernel usage. >> (c) copy_from / copy_to user won't fail -- because it is in the user range, but >> this can be fixed by allocating specific range in user vaddr to this feature >> and check against this range there. >> (d) The secret memory vaddr is guessable by the owner process -- that can also >> be fixed by allocating bigger chunk of user vaddr for this feature and >> randomly placing the secret memory there. >> (e) Mapping is off-limits to the owner process by marking the VMA as locked, >> sealed and special. > > Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page table only accessible by kernel space. :) > > So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned inaccessible by flipping the "kernel vs. user" switch. > >> Other alternative (that was implemented in the first submission) is to track those >> allocations in a non-shared kernel PGD per process, then handle creating, forking >> and context-switching this PGD. > > That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM page tables? > > Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap). Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there. >> What I like about the memfd_secret() approach is the simplicity and being arch >> agnostic, what I don't like is the increased attack surface by using VMAs to >> track those allocations. > > Yes, but memfd_secret() was really design for user space to hold secrets. But I can see how you came to this solution. > >> I'm thinking of working on a PoC to implement the first approach of using a >> non-shared kernel PGD for secret memory allocations on arm64. 
This includes >> adding kernel page table per process where all PGDs are shared but one which >> will be used for secret allocations mapping. And handle the fork & context >> switching (TTBR1 switching(?)) correctly for the secret memory PGD. >> What do you think? I'd really appreciate opinions and possible ways forward. > > Naive question: does arm64 rather resemble the s390x model or the x86-64 model? arm64 has separate page tables for kernel and user-mode. Except for the KPTI case, the kernel page tables aren’t swapped per-process and stay the same all the time. Thanks, -Mohamed > -- > Cheers, > > David / dhildenb > Amazon Web Services Development Center Germany GmbH Krausenstr. 38 10117 Berlin Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B Sitz: Berlin Ust-ID: DE 365 538 597
> On 11. Oct 2024, at 14:36, Mediouni, Mohamed <mediou@amazon.de> wrote: > > > >> On 11. Oct 2024, at 14:04, David Hildenbrand <david@redhat.com> wrote: >> >> On 10.10.24 17:52, Fares Mehanna wrote: >>>>> In a series posted a few years ago [1], a proposal was put forward to allow the >>>>> kernel to allocate memory local to a mm and thus push it out of reach for >>>>> current and future speculation-based cross-process attacks. We still believe >>>>> this is a nice thing to have. >>>>> >>>>> However, in the time passed since that post Linux mm has grown quite a few new >>>>> goodies, so we'd like to explore possibilities to implement this functionality >>>>> with less effort and churn leveraging the now available facilities. >>>>> >>>>> An RFC was posted few months back [2] to show the proof of concept and a simple >>>>> test driver. >>>>> >>>>> In this RFC, we're using the same approach of implementing mm-local allocations >>>>> piggy-backing on memfd_secret(), using regular user addresses but pinning the >>>>> pages and flipping the user/supervisor flag on the respective PTEs to make them >>>>> directly accessible from kernel. >>>>> In addition to that we are submitting 5 patches to use the secret memory to hide >>>>> the vCPU gp-regs and fp-regs on arm64 VHE systems. >>>> >>>> I'm a bit lost on what exactly we want to achieve. The point where we >>>> start flipping user/supervisor flags confuses me :) >>>> >>>> With secretmem, you'd get memory allocated that >>>> (a) Is accessible by user space -- mapped into user space. >>>> (b) Is inaccessible by kernel space -- not mapped into the direct map >>>> (c) GUP will fail, but copy_from / copy_to user will work. >>>> >>>> >>>> Another way, without secretmem, would be to consider these "secrets" >>>> kernel allocations that can be mapped into user space using mmap() of a >>>> special fd. That is, they wouldn't have their origin in secretmem, but >>>> in KVM as a kernel allocation. It could be achieved by using VM_MIXEDMAP >>>> with vm_insert_pages(), manually removing them from the directmap. >>>> >>>> But, I am not sure who is supposed to access what. Let's explore the >>>> requirements. I assume we want: >>>> >>>> (a) Pages accessible by user space -- mapped into user space. >>>> (b) Pages inaccessible by kernel space -- not mapped into the direct map >>>> (c) GUP to fail (no direct map). >>>> (d) copy_from / copy_to user to fail? >>>> >>>> And on top of that, some way to access these pages on demand from kernel >>>> space? (temporary CPU-local mapping?) >>>> >>>> Or how would the kernel make use of these allocations? >>>> >>>> -- >>>> Cheers, >>>> >>>> David / dhildenb >>> Hi David, >> >> Hi Fares! >> >>> Thanks for taking a look at the patches! >>> We're trying to allocate a kernel memory that is accessible to the kernel but >>> only when the context of the process is loaded. >>> So this is a kernel memory that is not needed to operate the kernel itself, it >>> is to store & process data on behalf of a process. The requirement for this >>> memory is that it would never be touched unless the process is scheduled on this >>> core. otherwise any other access will crash the kernel. >>> So this memory should only be directly readable and writable by the kernel, but >>> only when the process context is loaded. The memory shouldn't be readable or >>> writable by the owner process at all. >>> This is basically done by removing those pages from kernel linear address and >>> attaching them only in the process mm_struct. 
So during context switching the >>> kernel loses access to the secret memory scheduled out and gain access to the >>> new process secret memory. >>> This generally protects against speculation attacks, and if other process managed >>> to trick the kernel to leak data from memory. In this case the kernel will crash >>> if it tries to access other processes secret memory. >>> Since this memory is special in the sense that it is kernel memory but only make >>> sense in the term of the owner process, I tried in this patch series to explore >>> the possibility of reusing memfd_secret() to allocate this memory in user virtual >>> address space, manage it in a VMA, flipping the permissions while keeping the >>> control of the mapping exclusively with the kernel. >>> Right now it is: >>> (a) Pages not accessible by user space -- even though they are mapped into user >>> space, the PTEs are marked for kernel usage. >> >> Ah, that is the detail I was missing, now I see what you are trying to achieve, thanks! >> >> It is a bit architecture specific, because ... imagine architectures that have separate kernel+user space page table hierarchies, and not a simple PTE flag to change access permissions between kernel/user space. >> >> IIRC s390 is one such architecture that uses separate page tables for the user-space + kernel-space portions. >> >>> (b) Pages accessible by kernel space -- even though they are not mapped into the >>> direct map, the PTEs in uvaddr are marked for kernel usage. >>> (c) copy_from / copy_to user won't fail -- because it is in the user range, but >>> this can be fixed by allocating specific range in user vaddr to this feature >>> and check against this range there. >>> (d) The secret memory vaddr is guessable by the owner process -- that can also >>> be fixed by allocating bigger chunk of user vaddr for this feature and >>> randomly placing the secret memory there. >>> (e) Mapping is off-limits to the owner process by marking the VMA as locked, >>> sealed and special. >> >> Okay, so in this RFC you are jumping through quite some hoops to have a kernel allocation unmapped from the direct map but mapped into a per-process page table only accessible by kernel space. :) >> >> So you really don't want this mapped into user space at all (consequently, no GUP, no access, no copy_from_user ...). In this RFC it's mapped but turned inaccessible by flipping the "kernel vs. user" switch. >> >>> Other alternative (that was implemented in the first submission) is to track those >>> allocations in a non-shared kernel PGD per process, then handle creating, forking >>> and context-switching this PGD. >> >> That sounds like a better approach. So we would remove the pages from the shared kernel direct map and map them into a separate kernel-portion in the per-MM page tables? >> >> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap). > Yes, that’s also applicable to arm64. There’s currently no separate per-mm user space page hierarchy there. typo, read kernel Thanks, -Mohamed >>> What I like about the memfd_secret() approach is the simplicity and being arch >>> agnostic, what I don't like is the increased attack surface by using VMAs to >>> track those allocations. >> >> Yes, but memfd_secret() was really design for user space to hold secrets. 
But I can see how you came to this solution. >> >>> I'm thinking of working on a PoC to implement the first approach of using a >>> non-shared kernel PGD for secret memory allocations on arm64. This includes >>> adding kernel page table per process where all PGDs are shared but one which >>> will be used for secret allocations mapping. And handle the fork & context >>> switching (TTBR1 switching(?)) correctly for the secret memory PGD. >>> What do you think? I'd really appreciate opinions and possible ways forward. >> >> Naive question: does arm64 rather resemble the s390x model or the x86-64 model? > arm64 has separate page tables for kernel and user-mode. Except for the KPTI case, the kernel page tables aren’t swapped per-process and stay the same all the time. > > Thanks, > -Mohamed >> -- >> Cheers, >> >> David / dhildenb >> > Amazon Web Services Development Center Germany GmbH Krausenstr. 38 10117 Berlin Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B Sitz: Berlin Ust-ID: DE 365 538 597
On 11.10.24 14:56, Mediouni, Mohamed wrote:
[...]
>>> Can you envision that would also work with architectures like s390x? I assume we would not only need the per-MM user space page table hierarchy, but also a per-MM kernel space page table hierarchy, into which we also map the common/shared-among-all-processes kernel space page tables (e.g., directmap).
>> Yes, that's also applicable to arm64. There's currently no separate per-mm user space page hierarchy there.
> typo, read kernel

Okay, thanks. So going into that direction makes more sense.

I do wonder if we really have to deal with fork() ...
if the primary users don't really have meaning in the forked child (e.g., just like fork() with KVM IIRC) we might just get away by "losing" these allocations in the child process. Happy to learn why fork() must be supported. -- Cheers, David / dhildenb
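If simply losing the allocation in the child is acceptable, as suggested above, one conceivable way to get that behavior with the VMA-based approach is to mark the secret VMA VM_DONTCOPY, so dup_mmap() skips it on fork(). This is only a sketch of the idea, not what the series does; secretmem_mark_mm_local() is a hypothetical helper and vm_flags_set() assumes a recent kernel:

#include <linux/mm.h>

/*
 * Hypothetical helper: make the secret VMA invisible to fork() and core
 * dumps.  With VM_DONTCOPY set, dup_mmap() does not copy the VMA, so a
 * forked child simply never has the mapping.
 */
static void secretmem_mark_mm_local(struct vm_area_struct *vma)
{
	vm_flags_set(vma, VM_DONTCOPY | VM_DONTDUMP);
}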
[...]
> Okay, thanks. So going into that direction makes more sense.
>
> I do wonder if we really have to deal with fork() ... if the primary users don't really have meaning in the forked child (e.g., just like fork() with KVM IIRC) we might just get away by "losing" these allocations in the child process.
>
> Happy to learn why fork() must be supported.

It really depends on the use cases of the kernel secret allocation, but here is a scenario that troubles me:
1. Process A has a resource X.
2. The kernel decides to keep some data related to resource X in process A's secret memory.
3. Process A forks, so process B now shares resource X.
4. Process B starts using resource X. <-- This will crash the kernel, as the kernel page table used in process B has no mapping for the secret memory used by resource X.

I haven't tried to trigger this crash myself, though.

I haven't thought about this issue in depth yet, but I need to, because duplicating the secret memory mappings in the newly forked process is easy (to give the kernel access to the secret memory), but tearing them down across all forked processes is a bit complicated (to clean up stale mappings in parent/child processes). Right now, tearing down the mapping only happens on the mm_struct which allocated the secret memory.

Thanks!
Fares.
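Purely to illustrate the bookkeeping problem described above (not something the series implements): if fork() were to duplicate the mapping, each allocation would need to remember every mm it is mapped into so teardown can reach all of them, not just the allocating mm. The structure and field names below are hypothetical:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/mm_types.h>

/* One mm-local secret allocation, possibly mapped into several mms. */
struct secret_alloc {
	struct list_head refs;		/* list of struct secret_alloc_ref */
	spinlock_t lock;		/* protects @refs */
};

/* One mapping of the allocation inside a particular mm; teardown would
 * have to walk secret_alloc.refs and unmap each of these. */
struct secret_alloc_ref {
	struct list_head node;		/* linked on secret_alloc.refs */
	struct mm_struct *mm;		/* mm holding the mapping */
	unsigned long uvaddr;		/* where it is mapped in that mm */
};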
On 11.10.24 16:25, Fares Mehanna wrote:
[...]
> It really depends on the use cases of the kernel secret allocation, but here is a scenario that troubles me:
> 1. Process A has a resource X.
> 2. The kernel decides to keep some data related to resource X in process A's secret memory.
> 3. Process A forks, so process B now shares resource X.
> 4. Process B starts using resource X. <-- This will crash the kernel, as the kernel page table used in process B has no mapping for the secret memory used by resource X.
>
> I haven't tried to trigger this crash myself, though.

Right, and if we can rule out any users that are supposed to work after fork(), we can just disregard that in the first version.

I never played with this, but let's assume you make use of these mm-local allocations in KVM context.

What would happen if you fork() with a KVM fd and try accessing that fd from the other process using ioctls? I recall that KVM will not be "duplicated".

What would happen if you send that fd over to a completely different process and try accessing that fd from the other process using ioctls?

Of course, the question being: if you have MM-local allocations in both cases and there is suddenly a different MM ... assuming that both cases are even possible (if they are not possible, great! :) ).

I think I am supposed to know if these things are possible or not and what would happen, but it's late Friday and my brain is begging for some weekend :D

> I haven't thought about this issue in depth yet, but I need to, because duplicating the secret memory mappings in the newly forked process is easy (to give the kernel access to the secret memory), but tearing them down across all forked processes is a bit complicated (to clean up stale mappings in parent/child processes). Right now, tearing down the mapping only happens on the mm_struct which allocated the secret memory.

If an allocation is MM-local, I would assume that fork() would *duplicate* that allocation (leaving CoW out of the picture :D ), but that's where the fun begins (see above regarding my confusion about KVM and fork() behavior ... ).

--
Cheers,

David / dhildenb
On 18.10.24 20:52, David Hildenbrand wrote:
[...]
> What would happen if you fork() with a KVM fd and try accessing that fd from the other process using ioctls? I recall that KVM will not be "duplicated".
>
> What would happen if you send that fd over to a completely different process and try accessing that fd from the other process using ioctls?

Stumbling over Documentation/virtual/kvm/api.txt:

"In general file descriptors can be migrated among processes by means of fork() and the SCM_RIGHTS facility of unix domain socket. These kinds of tricks are explicitly not supported by kvm. While they will not cause harm to the host, their actual behavior is not guaranteed by the API. See "General description" for details on the ioctl usage model that is supported by KVM.

It is important to note that although VM ioctls may only be issued from the process that created the VM, a VM's lifecycle is associated with its file descriptor, not its creator (process). In other words, the VM and its resources, *including the associated address space*, are not freed until the last reference to the VM's file descriptor has been released. For example, if fork() is issued after ioctl(KVM_CREATE_VM), the VM will not be freed until both the parent (original) process and its child have put their references to the VM's file descriptor.

Because a VM's resources are not freed until the last reference to its file descriptor is released, creating additional references to a VM via fork(), dup(), etc... without careful consideration is strongly discouraged and may have unwanted side effects, e.g. memory allocated by and on behalf of the VM's process may not be freed/unaccounted when the VM is shut down."

The "may only be issued" doesn't make it clear if that is actively enforced. But staring at kvm_vcpu_ioctl():

	if (vcpu->kvm->mm != current->mm || vcpu->kvm->vm_dead)
		return -EIO;

So with KVM it would likely work to *not* care about mm-local memory allocations during fork().
But of course, what I am getting at is: if we had some fd that uses an mm-local allocation, and it could be accessed (via ioctl) from another active MM, we would likely be in trouble ... and fork() is not the only problem.

We should really not try handling fork() and instead restrict the use cases where this can be used.

--
Cheers,

David / dhildenb
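For illustration, the restriction suggested here could mirror the kvm_vcpu_ioctl() check quoted above: an fd whose state lives in mm-local memory simply refuses ioctls issued from a foreign mm. This is only a sketch; struct secret_ctx and its fields are hypothetical, not an existing interface:

#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/mm_types.h>
#include <linux/errno.h>

/* Hypothetical per-fd state backed by mm-local allocations. */
struct secret_ctx {
	struct mm_struct *owner_mm;	/* mm that owns the allocations */
	/* ... mm-local data ... */
};

static long secret_ctx_ioctl(struct file *filp, unsigned int cmd,
			     unsigned long arg)
{
	struct secret_ctx *ctx = filp->private_data;

	/* Same policy as KVM: only the owning mm may use this fd. */
	if (ctx->owner_mm != current->mm)
		return -EIO;

	/* ... handle cmd; the mm-local memory is mapped in current->mm ... */
	return 0;
}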