This is the v2 submission for AWS Nitro Enclave emulation in QEMU. v1 is at:
https://mail.gnu.org/archive/html/qemu-devel/2024-05/msg03524.html

Changes in v2:
  - moved eif.c and eif.h from hw/i386 to hw/core

Hi,

Hope everyone is doing well. I am working on adding AWS Nitro Enclave[1]
emulation support to QEMU. Alexander Graf is mentoring me on this work. This
patch series adds emulation support for nitro enclaves that is not yet
complete but already useful. I have a gitlab branch where you can view the
patches in the gitlab web UI for each commit:
https://gitlab.com/dorjoy03/qemu/-/tree/nitro-enclave-emulation

AWS Nitro Enclaves is an Amazon EC2[2] feature that allows creating isolated
execution environments, called enclaves, from Amazon EC2 instances; they are
used for processing highly sensitive data. Enclaves have no persistent
storage and no external networking. The enclave VMs are based on the
Firecracker microvm and have a vhost-vsock device for communication with the
parent EC2 instance that spawned them, and a Nitro Secure Module (NSM) device
for cryptographic attestation. The parent instance always has CID 3 while the
enclave VM gets a dynamic CID. The enclave VM communicates with the parent
instance over various ports to CID 3; for example, the init process inside an
enclave sends a heartbeat to port 9000 upon boot, expecting a heartbeat
reply, which lets the parent instance know that the enclave VM has booted
successfully.

From inside an EC2 instance, nitro-cli[3] is used to spawn an enclave VM from
an EIF (Enclave Image Format)[4] file. EIF files can be built with nitro-cli
as well. There is no official EIF specification apart from the github
aws-nitro-enclaves-image-format repository[4]. An EIF file contains the
kernel, cmdline and ramdisk(s) in different sections, which are used to boot
the enclave VM. You can look at the structs in hw/core/eif.c in this series
for more details about the EIF file format.
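For readers who do not want to dig into the patch itself, here is a rough
sketch of the on-disk layout as I read the aws-nitro-enclaves-image-format
sources. The field names, widths and section-type ordering below are my
approximation, not a specification; hw/core/eif.c in this series is the
authoritative parser.

/*
 * Illustrative sketch of the EIF on-disk layout, based on the
 * aws-nitro-enclaves-image-format repository. Field names, sizes and the
 * section-type ordering are approximations; see hw/core/eif.c for the
 * definitions actually used by the parser. Integers in the file appear to
 * be stored big-endian.
 */
#include <stdint.h>

#define EIF_MAX_SECTIONS 32

typedef struct EifHeader {
    uint8_t  magic[4];                         /* ".eif" */
    uint16_t version;
    uint16_t flags;
    uint64_t default_memory;                   /* default enclave memory */
    uint64_t default_cpus;                     /* default enclave vCPUs */
    uint16_t reserved;
    uint16_t section_count;
    uint64_t section_offsets[EIF_MAX_SECTIONS];
    uint64_t section_sizes[EIF_MAX_SECTIONS];
    uint32_t unused;
    uint32_t eif_crc32;                        /* CRC32 over the preceding bytes */
} EifHeader;

typedef struct EifSectionHeader {
    uint16_t section_type;                     /* one of EifSectionType below */
    uint16_t flags;
    uint64_t section_size;                     /* payload bytes after this header */
} EifSectionHeader;

/* Ordering/values here are illustrative only. */
enum EifSectionType {
    EIF_SECTION_INVALID = 0,
    EIF_SECTION_KERNEL,
    EIF_SECTION_CMDLINE,
    EIF_SECTION_RAMDISK,
    EIF_SECTION_SIGNATURE,
    EIF_SECTION_METADATA,
};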
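Since the heartbeat handshake mentioned above is what the enclave's init
process blocks on at boot, here is a minimal sketch of a responder that the
CID 3 side could run while testing. Listening on vsock port 9000 and echoing
the received byte(s) back is my reading of the protocol and an assumption,
not a documented interface:

/*
 * Minimal sketch of a heartbeat responder for the CID 3 side. It listens on
 * vsock port 9000, accepts a connection from the enclave's init process and
 * echoes the received bytes back. The "echo it back" behaviour is an
 * assumption about the heartbeat protocol.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid = VMADDR_CID_ANY,   /* run this inside the CID 3 VM */
        .svm_port = 9000,            /* port the enclave init connects to */
    };

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 1) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int conn = accept(fd, NULL, NULL);
        if (conn < 0) {
            perror("accept");
            continue;
        }
        char buf[16];
        ssize_t n = read(conn, buf, sizeof(buf));
        if (n > 0) {
            /* reply with the same payload so init considers the boot OK */
            write(conn, buf, n);
        }
        close(conn);
    }
}

In principle, something like this running inside the CID 3 VM should be
enough to answer the boot heartbeat; whether the hello.eif init expects
anything beyond the echo is untested here.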
Adding nitro enclave emulation support to QEMU will make life easier for AWS
Nitro Enclave users: they will be able to test their EIF images locally
without having to run real nitro enclaves, which can be difficult to debug
because of their security-focused design. This will also make quick
prototyping easier.

In QEMU, the new nitro-enclave machine type is implemented on top of the
microvm machine type, similar to how AWS Nitro Enclaves are based on the
Firecracker microvm. The vhost-vsock device support is already part of this
patch series so that the enclave VM can communicate with CID 3 using vsock.
A mandatory 'guest-cid' machine option is required, which becomes the CID of
the enclave VM. Some documentation for the new 'nitro-enclave' machine type
has also been added. NSM device support will be added in the future.

The plan is to eventually make the nitro enclave emulation in QEMU
standalone, i.e., without needing to run another VM with CID 3 that provides
the vsock communication. For this to work, one approach could be to teach
the in-kernel vhost-vsock driver to forward CID 3 messages to another CID
(set to CID 2 for the host) so that users of the nitro-enclave machine type
can run the necessary vsock servers/clients on the host machine (some
defaults could be implemented in QEMU as well, for example sending a reply
to the heartbeat), which would spare them the cumbersome step of running
another whole VM with CID 3. This way, users of the nitro-enclave machine in
QEMU could potentially also run multiple enclaves, with their messages for
CID 3 forwarded to different CIDs which, on the QEMU side, could then be
specified using a new machine option (parent-cid) if implemented. I will
soon post an email to the linux virtualization mailing list asking for
feedback and suggestions on this approach.

For local testing you need to generate a hello.eif image by first building
nitro-cli locally[5]. Then you can use nitro-cli to build a hello.eif
image[6].

You need to build qemu-system-x86_64 after applying the patches and then you
can run the following command to boot a hello.eif image using the new
'nitro-enclave' machine type in QEMU:

    sudo ./qemu-system-x86_64 -M nitro-enclave,guest-cid=8 \
        -kernel path/to/hello.eif -nographic -m 4G --enable-kvm -cpu host

The command needs to be run with sudo because, for the vhost-vsock device to
work, QEMU needs to be able to open the vhost device on the host.

Right now, if you just run the nitro-enclave machine, the kernel panics
because the init process exits abnormally: it cannot connect to port 9000 on
CID 3 to send its heartbeat message (the connection times out). So another
VM with CID 3 that provides the vsock communication must be run for this to
be useful. This restriction can be lifted once the approach of forwarding
CID 3 messages is implemented, if it gets accepted.

Thanks.

Regards,
Dorjoy

[1] https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave.html
[2] https://aws.amazon.com/ec2/
[3] https://docs.aws.amazon.com/enclaves/latest/user/getting-started.html
[4] https://github.com/aws/aws-nitro-enclaves-image-format
[5] https://github.com/aws/aws-nitro-enclaves-cli/blob/main/docs/ubuntu_20.04_how_to_install_nitro_cli_from_github_sources.md
[6] https://github.com/aws/aws-nitro-enclaves-cli/blob/main/examples/x86_64/hello/README.md

Dorjoy Chowdhury (2):
  machine/microvm: support for loading EIF image
  machine/nitro-enclave: new machine type for AWS nitro enclave

 MAINTAINERS                              |  10 +
 configs/devices/i386-softmmu/default.mak |   1 +
 docs/system/i386/nitro-enclave.rst       |  58 +++
 hw/core/eif.c                            | 486 +++++++++++++++++++++++
 hw/core/eif.h                            |  20 +
 hw/core/meson.build                      |   1 +
 hw/i386/Kconfig                          |   4 +
 hw/i386/meson.build                      |   1 +
 hw/i386/microvm.c                        | 141 ++++++-
 hw/i386/nitro_enclave.c                  | 134 +++++++
 include/hw/i386/nitro_enclave.h          |  38 ++
 11 files changed, 893 insertions(+), 1 deletion(-)
 create mode 100644 docs/system/i386/nitro-enclave.rst
 create mode 100644 hw/core/eif.c
 create mode 100644 hw/core/eif.h
 create mode 100644 hw/i386/nitro_enclave.c
 create mode 100644 include/hw/i386/nitro_enclave.h

--
2.39.2
On 01.06.24 18:26, Dorjoy Chowdhury wrote:

[...]
Reviewed-by: Alexander Graf <graf@amazon.com>

I'm happy to see Nitro Enclaves guest support merged even if there are still
some open items left: release early, release often :). Given that the
functionality is already useful for debugging as is, I think it makes sense
to merge these patches.

Michael / Marcel, would this go through your tree?

Alex


Amazon Web Services Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597