This is my alternative patch set to the TPM patches included in the
Trenchboot series v11. I don't mind which tree these end up being
picked into. All the patches also carry my sign-offs, so in that
sense things are also cleared up.

At least slmodule needs to be patched in the series, given that
tpm_chip_set_locality() returns zero on success.

It is not really my problem, but I'm also wondering how the
initialization order is managed. What if e.g. IMA happens to
initialize before slmodule?

Cc: Daniel P. Smith <dpsmith@apertussolutions.com>
Cc: Ross Philipson <ross.philipson@oracle.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>

Daniel P. Smith (2):
  tpm, tpm_tis: Close all localities
  tpm, tpm_tis: Address positive localities in
    tpm_tis_request_locality()

Ross Philipson (2):
  tpm, tpm_tis: allow to set locality to a different value
  tpm: sysfs: Show locality used by kernel

 drivers/char/tpm/tpm-chip.c     | 33 ++++++++++++++++++++++++++++++++-
 drivers/char/tpm/tpm-sysfs.c    | 10 ++++++++++
 drivers/char/tpm/tpm_tis_core.c | 18 ++++++++++++++++--
 include/linux/tpm.h             | 10 ++++++++++
 4 files changed, 68 insertions(+), 3 deletions(-)

--
2.47.0
On 11/2/24 8:22 AM, Jarkko Sakkinen wrote:
> This is my alternative patch set to the TPM patches included into
> Trenchboot series v11. I don't mind to which tree these are
> picked in the end. All the patches also have my sob's, so in that
> sense things are also cleared up.
>
> At least slmodule needs to be patched in the series given that
> tpm_chip_set_locality() returns zero on success.
>
> It is not really my problem but I'm also wondering how the
> initialization order is managed. What if e.g. IMA happens to
> initialize before slmodule?
>
> Cc: Daniel P. Smith <dpsmith@apertussolutions.com>
> Cc: Ross Philipson <ross.philipson@oracle.com>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
>
> Daniel P. Smith (2):
>   tpm, tpm_tis: Close all localities
>   tpm, tpm_tis: Address positive localities in
>     tpm_tis_request_locality()
>
> Ross Philipson (2):
>   tpm, tpm_tis: allow to set locality to a different value
>   tpm: sysfs: Show locality used by kernel
>
>  drivers/char/tpm/tpm-chip.c     | 33 ++++++++++++++++++++++++++++++++-
>  drivers/char/tpm/tpm-sysfs.c    | 10 ++++++++++
>  drivers/char/tpm/tpm_tis_core.c | 18 ++++++++++++++++--
>  include/linux/tpm.h             | 10 ++++++++++
>  4 files changed, 68 insertions(+), 3 deletions(-)
>

Jarkko,

We have tested with this latest RFC patch set and it does what we need.
Things also functioned correctly when we closed down the TXT DRTM and
brought up a follow-on kernel with kexec. So we are good with dropping
our TPM patches and adopting these. The last question is: do you want to
take these in directly as a standalone patch set, or do you want us to
submit them with our next patch set (v12)?

And for what it is worth, if you want it:

Tested-by: Ross Philipson <ross.philipson@oracle.com>

Thanks again for your help.

Ross
On Tue, 5 Nov 2024 at 01:52, <ross.philipson@oracle.com> wrote: > > On 11/2/24 8:22 AM, Jarkko Sakkinen wrote: > > This is my alternative patch set to the TPM patches included into > > Trenchboot series v11. I don't mind to which tree these are > > picked in the end. All the patches also have my sob's, so in that > > sense things are also cleared up. > > > > At least slmodule needs to be patched in the series given that > > tpm_chip_set_locality() returns zero on success. > > > > It is not really my problem but I'm also wondering how the > > initialization order is managed. What if e.g. IMA happens to > > initialize before slmodule? > > > > Cc: Daniel P. Smith <dpsmith@apertussolutions.com> > > Cc: Ross Philipson <ross.philipson@oracle.com> > > Cc: Ard Biesheuvel <ardb@kernel.org> > > Cc: Thomas Gleixner <tglx@linutronix.de> > > > > Daniel P. Smith (2): > > tpm, tpm_tis: Close all localities > > tpm, tpm_tis: Address positive localities in > > tpm_tis_request_locality() > > > > Ross Philipson (2): > > tpm, tpm_tis: allow to set locality to a different value > > tpm: sysfs: Show locality used by kernel > > > > drivers/char/tpm/tpm-chip.c | 33 ++++++++++++++++++++++++++++++++- > > drivers/char/tpm/tpm-sysfs.c | 10 ++++++++++ > > drivers/char/tpm/tpm_tis_core.c | 18 ++++++++++++++++-- > > include/linux/tpm.h | 10 ++++++++++ > > 4 files changed, 68 insertions(+), 3 deletions(-) > > > > Jarkko, > > We have tested with this latest RFC patch set and it does what we need. > Things also functioned correctly when we closed down the TXT DRTM and > brought up a follow on kernel with kexec. So we are good with dropping > our TPM patches and adopting these. The last question is do you want to > take these in directly as a standalone patch set or do you want us to > submit them with our next patch set (v12)? 
> > And for what it is worth if you want it: > > Tested-by: Ross Philipson <ross.philipson@oracle.com> > If the patches as proposed work for you, please incorporate them into your v12.
On 11/5/24 8:24 AM, Ard Biesheuvel wrote: > On Tue, 5 Nov 2024 at 01:52, <ross.philipson@oracle.com> wrote: >> >> On 11/2/24 8:22 AM, Jarkko Sakkinen wrote: >>> This is my alternative patch set to the TPM patches included into >>> Trenchboot series v11. I don't mind to which tree these are >>> picked in the end. All the patches also have my sob's, so in that >>> sense things are also cleared up. >>> >>> At least slmodule needs to be patched in the series given that >>> tpm_chip_set_locality() returns zero on success. >>> >>> It is not really my problem but I'm also wondering how the >>> initialization order is managed. What if e.g. IMA happens to >>> initialize before slmodule? >>> >>> Cc: Daniel P. Smith <dpsmith@apertussolutions.com> >>> Cc: Ross Philipson <ross.philipson@oracle.com> >>> Cc: Ard Biesheuvel <ardb@kernel.org> >>> Cc: Thomas Gleixner <tglx@linutronix.de> >>> >>> Daniel P. Smith (2): >>> tpm, tpm_tis: Close all localities >>> tpm, tpm_tis: Address positive localities in >>> tpm_tis_request_locality() >>> >>> Ross Philipson (2): >>> tpm, tpm_tis: allow to set locality to a different value >>> tpm: sysfs: Show locality used by kernel >>> >>> drivers/char/tpm/tpm-chip.c | 33 ++++++++++++++++++++++++++++++++- >>> drivers/char/tpm/tpm-sysfs.c | 10 ++++++++++ >>> drivers/char/tpm/tpm_tis_core.c | 18 ++++++++++++++++-- >>> include/linux/tpm.h | 10 ++++++++++ >>> 4 files changed, 68 insertions(+), 3 deletions(-) >>> >> >> Jarkko, >> >> We have tested with this latest RFC patch set and it does what we need. >> Things also functioned correctly when we closed down the TXT DRTM and >> brought up a follow on kernel with kexec. So we are good with dropping >> our TPM patches and adopting these. The last question is do you want to >> take these in directly as a standalone patch set or do you want us to >> submit them with our next patch set (v12)? 
>> >> And for what it is worth if you want it: >> >> Tested-by: Ross Philipson <ross.philipson@oracle.com> >> > > If the patches as proposed work for you, please incorporate them into your v12. Ok will do, thanks. Ross
On 11/2/24 8:22 AM, Jarkko Sakkinen wrote:
> This is my alternative patch set to the TPM patches included into
> Trenchboot series v11. I don't mind to which tree these are
> picked in the end. All the patches also have my sob's, so in that
> sense things are also cleared up.
>
> At least slmodule needs to be patched in the series given that
> tpm_chip_set_locality() returns zero on success.
>
> It is not really my problem but I'm also wondering how the
> initialization order is managed. What if e.g. IMA happens to
> initialize before slmodule?
>
> Cc: Daniel P. Smith <dpsmith@apertussolutions.com>
> Cc: Ross Philipson <ross.philipson@oracle.com>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
>
> Daniel P. Smith (2):
>   tpm, tpm_tis: Close all localities
>   tpm, tpm_tis: Address positive localities in
>     tpm_tis_request_locality()
>
> Ross Philipson (2):
>   tpm, tpm_tis: allow to set locality to a different value
>   tpm: sysfs: Show locality used by kernel
>
>  drivers/char/tpm/tpm-chip.c     | 33 ++++++++++++++++++++++++++++++++-
>  drivers/char/tpm/tpm-sysfs.c    | 10 ++++++++++
>  drivers/char/tpm/tpm_tis_core.c | 18 ++++++++++++++++--
>  include/linux/tpm.h             | 10 ++++++++++
>  4 files changed, 68 insertions(+), 3 deletions(-)
>

Jarkko,

Thank you for your efforts here - we appreciate it. I am testing your
proposed patch set to see if it meets our needs and whether we run into
the earlier problem of leaving the TPM in a bad state when kexec'ing the
next kernel.

Ross
On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote:
> It is not really my problem but I'm also wondering how the
> initialization order is managed. What if e.g. IMA happens to
> initialize before slmodule?

The first obvious observation from the Trenchboot implementation is
that it is 9/10 times the worst idea ever to have a split root of
trust. Here it is realized by an LKM for slmodule.

So based on that usually literal and unquestionable truth, when it
comes to securing platforms, the next question is how to make a single
atomic root of trust for Trenchboot. There is really only one answer I
can think of: make slmodule part of tpm_tis_core, which also sorts out
the init order.

I'll describe the steps forward.

Step 1: move and refactor that module into
drivers/char/tpm/tpm_tis_slmodule.c and add this to the Makefile:

ifdef CONFIG_SECURE_LAUNCH
obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_slmodule.o
endif

Step 2: add 'int kernel_locality;' to struct tpm_tis_data.

Step 3: implement a tpm_tis_set_locality() internal function.

Step 4: drop the sysfs patch completely (the solution is not generic).

BR, Jarkko
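Steps 2 and 3 above could look roughly like the following sketch. This is a toy, compilable stand-in, not existing mainline code: the struct is reduced to the two relevant fields, and the function name and signature simply follow the step descriptions.

```c
#include <assert.h>
#include <errno.h>

/* Toy stand-in for struct tpm_tis_data, reduced to the fields relevant
 * to the proposal: the currently held locality and the default locality
 * kernel-initiated requests should use (step 2). */
struct tpm_tis_data {
	int locality;        /* currently active locality, -1 if none held */
	int kernel_locality; /* step 2: default locality for kernel requests */
};

/* Step 3: internal setter for the kernel's default locality.
 * TIS defines localities 0..4, so reject anything outside that range. */
int tpm_tis_set_locality(struct tpm_tis_data *priv, int locality)
{
	if (locality < 0 || locality > 4)
		return -EINVAL;
	priv->kernel_locality = locality;
	return 0;
}
```

With a field like this, an in-driver slmodule could select Locality 2 during init without any module-export or sysfs surface.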
On 11/2/24 14:00, Jarkko Sakkinen wrote:
> On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote:
>> It is not really my problem but I'm also wondering how the
>> initialization order is managed. What if e.g. IMA happens to
>> initialize before slmodule?
>
> The first obvious observation from Trenchboot implementation is that it
> is 9/10 times worst idea ever to have splitted root of trust. Here it
> is realized by an LKM for slmodule.

First, there is no conflict between IMA and slmodule. With your change
to make locality switching a one-shot, the only issue would be if IMA
were to run first and issue a locality switch to Locality 0, thus
blocking slmodule from switching to Locality 2. As for PCR usage, IMA
uses the SRTM PCRs, which are completely accessible under Locality 2.
The RoT for DRTM is the CPU/microcode, and that is not split.

I am going to assume that you are speaking about the delay between the
time of collecting a measurement and the time of storing it? As a
refresher, an RTM trust chain is constructed using the transitive
trust[1] process. As noted in the definition, the Linux kernel in this
case is considered a group of functions that were all evaluated and
considered to be functions equally part of the TCB. This means you are
trusting actions at time interval M equally to actions taken at time
interval N. If one attempts to construct an argument that this is
invalid, that would invalidate all RTM trust chains constructed in this
manner, including SRTM aka SecureBoot. So as long as the measurements
are recorded before the TCB is extended again, it does not matter
whether that is done at time M or time N.

Bringing this back to Secure Launch, there would only be an issue if
slmodule could be built as an external loadable module, thus not being
part of the "group of functions" measured and executed by the SINIT
ACM. AFAICT, slmodule can only be compiled in or out, not built as a
loadable module. If there is a path we missed that allows it to be
built as a loadable module, then that needs correcting. Due to this
concern, I went testing Kconfig options and could not come up with a
way for this to occur. I did see that we probably should change the
CONFIG_SECURE_LAUNCH dependency from TCG_TPM to TPM_TIS and TCG_CRB,
just to avoid an invalid configuration where the necessary interfaces
are not present, leading to a TXT reset of the platform.

[1] Transitive trust (TCG D-RTM Architecture - Version 1.0.0)
Also known as "inductive trust." In this process, the Root of Trust
gives a trustworthy description of a second group of functions. Based
on this description, an interested entity can determine the trust it is
to place in this second group of functions. If the interested entity
determines that the trust level of the second group of functions is
acceptable, the trust boundary is extended from the Root of Trust to
include the second group of functions. In this case, the process can be
iterated. The second group of functions can give a trustworthy
description of the third group of functions, etc. Transitive trust is
used to provide a trustworthy description of platform characteristics.

> So based on that usually a literal and unquestionable truth, when it
> comes to securing platforms, the next question is how to make a single
> atomic root of trust for Trenchboot.

As mentioned above, there is no split currently.

> There is really only one answer I think of for this it to make slmodule
> part of the tpm_tis_core and also init order will be sorted out.

Only if your assertion that the root of trust is split held, which it
does not.

> I'll describe the steps forward.

Honestly, a better path forward would be to revisit the issue that is
driving most of that logic to exist, which is the lack of TPM interface
code in the setup kernel. As a reminder, this issue is due to the TPM
maintainers' position that the only TPM code in the kernel can be the
mainline driver, which, unless something has changed, is impossible to
compile into the setup kernel due to its use of mainline kernel
constructs not present in the setup kernel.

v/r,
dps
On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote:
> On 11/2/24 14:00, Jarkko Sakkinen wrote:
> > On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote:
> >> It is not really my problem but I'm also wondering how the
> >> initialization order is managed. What if e.g. IMA happens to
> >> initialize before slmodule?
> >
> > The first obvious observation from Trenchboot implementation is that it
> > is 9/10 times worst idea ever to have splitted root of trust. Here it
> > is realized by an LKM for slmodule.
>
> First, there is no conflict between IMA and slmodule. With your change
> to make locality switching a one shot, the only issue would be if IMA
> were to run first and issue a locality switch to Locality 0, thus
> blocking slmodule from switching to Locality 2. As for PCR usage, IMA
> uses the SRTM PCRs, which are completely accessible under Locality 2.

Just pointing out a possible problem (e.g. with TPM2_PolicyLocality).

> Honestly, a better path forward would be to revisit the issue that is
> driving most of that logic existing, which is the lack of a TPM
> interface code in the setup kernel. As a reminder, this issue is due to
> the TPM maintainers position that the only TPM code in the kernel can be
> the mainline driver. Which, unless something has changed, is impossible
> to compile into the setup kernel due to its use of mainline kernel
> constructs not present in the setup kernel.

I don't categorically reject adding some code to early setup. We have
some shared code in the EFI stub, but you have to explain your changes
properly. Getting a rejection of some approach in an early version, and
still being upset about it years later, is not really the way to go
IMHO.

> v/r,
> dps

BR, Jarkko
On Mon Nov 4, 2024 at 1:18 PM EET, Jarkko Sakkinen wrote:
> I don't categorically reject adding some code to early setup. We have
> some shared code EFI stub but you have to explain your changes
> proeprly. Getting rejection in some early version to some approach,
> and being still pissed about that years forward is not really way
> to go IMHO.

Still, this sounds unrealistic given that this was a tpm_tis-only
feature, and even that driver spans three different transport types:
MMIO, SPI and I2C. It would be a ridiculous amount of code to pull into
early setup. If you still think that would make sense, you could
migrate all the functionality under lib/, to be called by both
tpm_tis_core's drivers and Trenchboot.

Anyway, if past me made that call, honestly, I do actually get it. It
is not a counter-argument to the potential concurrency issue raised
here, which can cause problems with at least one TPM2 command; it is
more like "it was caused by you, because you thought it was a bad idea
to accept tons of code into early setup" ;-)

I can live with that concurrency issue as long as it is a known
decision not to support TPM2_PolicyLocality in the in-kernel use cases.
Then my patches address the remaining issues, and they can be picked up
given the sob's. If the concurrency issue is unacceptable, then I would
merge slmodule into tpm_tis_core, which does solve the concurrency bug.

I'm now done with this until a new version, or until the series is
applied with my patches. I think I've done enough to get this merged,
and not the contrary.

BR, Jarkko
On Mon, 4 Nov 2024 at 12:18, Jarkko Sakkinen <jarkko@kernel.org> wrote: > > On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote: > > On 11/2/24 14:00, Jarkko Sakkinen wrote: > > > On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote: > > >> It is not really my problem but I'm also wondering how the > > >> initialization order is managed. What if e.g. IMA happens to > > >> initialize before slmodule? > > > > > > The first obvious observation from Trenchboot implementation is that it > > > is 9/10 times worst idea ever to have splitted root of trust. Here it > > > is realized by an LKM for slmodule. > > > > First, there is no conflict between IMA and slmodule. With your change > > to make locality switching a one shot, the only issue would be if IMA > > were to run first and issue a locality switch to Locality 0, thus > > blocking slmodule from switching to Locality 2. As for PCR usage, IMA > > uses the SRTM PCRs, which are completely accessible under Locality 2. > > Just pointing out a possible problem (e.g. with TPM2_PolicyLocality). > > > Honestly, a better path forward would be to revisit the issue that is > > driving most of that logic existing, which is the lack of a TPM > > interface code in the setup kernel. As a reminder, this issue is due to > > the TPM maintainers position that the only TPM code in the kernel can be > > the mainline driver. Which, unless something has changed, is impossible > > to compile into the setup kernel due to its use of mainline kernel > > constructs not present in the setup kernel. > > I don't categorically reject adding some code to early setup. We have > some shared code EFI stub but you have to explain your changes > proeprly. Getting rejection in some early version to some approach, > and being still pissed about that years forward is not really way > to go IMHO. 
>

Daniel has been nothing but courteous and patient, and you've waited
11 revisions to come up with some bikeshedding patches that don't
materially improve anything.

So commenting on Daniel's approach here is uncalled for.

Can we please converge on this?

Daniel - if no component can be built as a module, there should be no
reason for the set_default_locality() hook to be exported to modules,
right? And do we even need a sysfs node to expose this information?
On 11/4/24 06:27, Ard Biesheuvel wrote: > On Mon, 4 Nov 2024 at 12:18, Jarkko Sakkinen <jarkko@kernel.org> wrote: >> >> On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote: >>> On 11/2/24 14:00, Jarkko Sakkinen wrote: >>>> On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote: >>>>> It is not really my problem but I'm also wondering how the >>>>> initialization order is managed. What if e.g. IMA happens to >>>>> initialize before slmodule? >>>> >>>> The first obvious observation from Trenchboot implementation is that it >>>> is 9/10 times worst idea ever to have splitted root of trust. Here it >>>> is realized by an LKM for slmodule. >>> >>> First, there is no conflict between IMA and slmodule. With your change >>> to make locality switching a one shot, the only issue would be if IMA >>> were to run first and issue a locality switch to Locality 0, thus >>> blocking slmodule from switching to Locality 2. As for PCR usage, IMA >>> uses the SRTM PCRs, which are completely accessible under Locality 2. >> >> Just pointing out a possible problem (e.g. with TPM2_PolicyLocality). >> >>> Honestly, a better path forward would be to revisit the issue that is >>> driving most of that logic existing, which is the lack of a TPM >>> interface code in the setup kernel. As a reminder, this issue is due to >>> the TPM maintainers position that the only TPM code in the kernel can be >>> the mainline driver. Which, unless something has changed, is impossible >>> to compile into the setup kernel due to its use of mainline kernel >>> constructs not present in the setup kernel. >> >> I don't categorically reject adding some code to early setup. We have >> some shared code EFI stub but you have to explain your changes >> proeprly. Getting rejection in some early version to some approach, >> and being still pissed about that years forward is not really way >> to go IMHO. 
>>
>
> Daniel has been nothing but courteous and patient, and you've waited
> 11 revision to come up with some bikeshedding patches that don't
> materially improve anything.
>
> So commenting on Daniel's approach here is uncalled for.
>
> Can we please converge on this?
>
> Daniel - if no component can be built as a module, there should be no
> reason for the set_default_locality() hook to be exported to modules
> right? And do we even need a sysfs node to expose this information?

Hi Ard,

The only reason off the top of my head why it was exported was to
support the fact that the TPM driver itself can be built as a module,
not that we were looking for it to be done so.

As to sysfs, there is TXT register content that we would like to have
exposed, and it should be read-only. For context to contrast with, the
tboot user space utility txt-stat worked by trying to read the TXT
register address space via /dev/mem; I think enough is said there. The
other purpose we used sysfs for was management of the DRTM log. We used
it to provide a means to ensure the DRTM event log is extended when
measurements are sent to the DRTM PCRs, and then to be able to retrieve
the final log.

v/r,
dps
On Mon, 4 Nov 2024 at 12:52, Daniel P. Smith <dpsmith@apertussolutions.com> wrote: > > On 11/4/24 06:27, Ard Biesheuvel wrote: > > On Mon, 4 Nov 2024 at 12:18, Jarkko Sakkinen <jarkko@kernel.org> wrote: > >> > >> On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote: > >>> On 11/2/24 14:00, Jarkko Sakkinen wrote: > >>>> On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote: > >>>>> It is not really my problem but I'm also wondering how the > >>>>> initialization order is managed. What if e.g. IMA happens to > >>>>> initialize before slmodule? > >>>> > >>>> The first obvious observation from Trenchboot implementation is that it > >>>> is 9/10 times worst idea ever to have splitted root of trust. Here it > >>>> is realized by an LKM for slmodule. > >>> > >>> First, there is no conflict between IMA and slmodule. With your change > >>> to make locality switching a one shot, the only issue would be if IMA > >>> were to run first and issue a locality switch to Locality 0, thus > >>> blocking slmodule from switching to Locality 2. As for PCR usage, IMA > >>> uses the SRTM PCRs, which are completely accessible under Locality 2. > >> > >> Just pointing out a possible problem (e.g. with TPM2_PolicyLocality). > >> > >>> Honestly, a better path forward would be to revisit the issue that is > >>> driving most of that logic existing, which is the lack of a TPM > >>> interface code in the setup kernel. As a reminder, this issue is due to > >>> the TPM maintainers position that the only TPM code in the kernel can be > >>> the mainline driver. Which, unless something has changed, is impossible > >>> to compile into the setup kernel due to its use of mainline kernel > >>> constructs not present in the setup kernel. > >> > >> I don't categorically reject adding some code to early setup. We have > >> some shared code EFI stub but you have to explain your changes > >> proeprly. 
Getting rejection in some early version to some approach, > >> and being still pissed about that years forward is not really way > >> to go IMHO. > >> > > > > Daniel has been nothing but courteous and patient, and you've waited > > 11 revision to come up with some bikeshedding patches that don't > > materially improve anything. > > > > So commenting on Daniel's approach here is uncalled for. > > > > Can we please converge on this? > > > > Daniel - if no component can be built as a module, there should be no > > reason for the set_default_locality() hook to be exported to modules > > right? And do we even need a sysfs node to expose this information? > > Hi Ard, > > The only reason off the top of my head of why it was exported was to > support the fact that the tpm module itself could be built as a module, > not that we were looking for it to be done so. > But the inclusion of the secure launch module will force the TPM module to be builtin too, surely. > As to sysfs, there is the TXT register content that we would like to > have exposed, and they should be readonly. For context to contrast with, > tboot user space utility txt-stat worked by trying to read the TXT > register address space via /dev/mem, think enough is said there. The > other purpose we used sysfs was management of the DRTM log. We used it > to provide a means to ensure the DRTM eventlog is extended when > measurements are sent to the DRTM PCRs and then to be able to retrieve > the final log. > I was referring specifically to the read-write sysfs node that permits user space to update the default TPM locality. Does it need to be writable? And does it need to exist at all?
On 11/4/24 06:55, 'Ard Biesheuvel' via trenchboot-devel wrote: > On Mon, 4 Nov 2024 at 12:52, Daniel P. Smith > <dpsmith@apertussolutions.com> wrote: >> >> On 11/4/24 06:27, Ard Biesheuvel wrote: >>> On Mon, 4 Nov 2024 at 12:18, Jarkko Sakkinen <jarkko@kernel.org> wrote: >>>> >>>> On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote: >>>>> On 11/2/24 14:00, Jarkko Sakkinen wrote: >>>>>> On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote: >>>>>>> It is not really my problem but I'm also wondering how the >>>>>>> initialization order is managed. What if e.g. IMA happens to >>>>>>> initialize before slmodule? >>>>>> >>>>>> The first obvious observation from Trenchboot implementation is that it >>>>>> is 9/10 times worst idea ever to have splitted root of trust. Here it >>>>>> is realized by an LKM for slmodule. >>>>> >>>>> First, there is no conflict between IMA and slmodule. With your change >>>>> to make locality switching a one shot, the only issue would be if IMA >>>>> were to run first and issue a locality switch to Locality 0, thus >>>>> blocking slmodule from switching to Locality 2. As for PCR usage, IMA >>>>> uses the SRTM PCRs, which are completely accessible under Locality 2. >>>> >>>> Just pointing out a possible problem (e.g. with TPM2_PolicyLocality). >>>> >>>>> Honestly, a better path forward would be to revisit the issue that is >>>>> driving most of that logic existing, which is the lack of a TPM >>>>> interface code in the setup kernel. As a reminder, this issue is due to >>>>> the TPM maintainers position that the only TPM code in the kernel can be >>>>> the mainline driver. Which, unless something has changed, is impossible >>>>> to compile into the setup kernel due to its use of mainline kernel >>>>> constructs not present in the setup kernel. >>>> >>>> I don't categorically reject adding some code to early setup. We have >>>> some shared code EFI stub but you have to explain your changes >>>> proeprly. 
>>>> Getting rejection in some early version to some approach,
>>>> and being still pissed about that years forward is not really way
>>>> to go IMHO.
>>>>
>>>
>>> Daniel has been nothing but courteous and patient, and you've waited
>>> 11 revision to come up with some bikeshedding patches that don't
>>> materially improve anything.
>>>
>>> So commenting on Daniel's approach here is uncalled for.
>>>
>>> Can we please converge on this?
>>>
>>> Daniel - if no component can be built as a module, there should be no
>>> reason for the set_default_locality() hook to be exported to modules
>>> right? And do we even need a sysfs node to expose this information?
>>
>> Hi Ard,
>>
>> The only reason off the top of my head of why it was exported was to
>> support the fact that the tpm module itself could be built as a module,
>> not that we were looking for it to be done so.
>>
>
> But the inclusion of the secure launch module will force the TPM

Correct. I meant that the TPM driver can be built as a module, and
since we were extending its public interface, the thought was it would
be proper for us to export the hook. We have no requirement for it to
be exported for our usage.

>> As to sysfs, there is the TXT register content that we would like to
>> have exposed, and they should be readonly. For context to contrast with,
>> tboot user space utility txt-stat worked by trying to read the TXT
>> register address space via /dev/mem, think enough is said there. The
>> other purpose we used sysfs was management of the DRTM log. We used it
>> to provide a means to ensure the DRTM eventlog is extended when
>> measurements are sent to the DRTM PCRs and then to be able to retrieve
>> the final log.
>>
>
> I was referring specifically to the read-write sysfs node that permits
> user space to update the default TPM locality. Does it need to be
> writable? And does it need to exist at all?

Right, sorry. As I recall, that was introduced due to the sequence of
how the TPM driver handled locality, moving back to Locality 0 after it
is done sending commands. In the Oracle implementation, the initramfs
takes integrity measurements of the environment it is about to kexec
into, e.g. the target kernel, initramfs, file system, etc. Some of
these measurements should go into PCR 17 and PCR 18, which requires
Locality 2 to be able to extend those PCRs. If slmodule is able to set
the locality for all PCR extends coming from user space to Locality 2,
that removes the current need for it.

v/r,
dps
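The constraint driving this - that PCR 17 and PCR 18 only accept extends from Locality 2 - can be expressed as a small predicate. This is a toy model of the rule exactly as stated above, not the full per-PCR locality table from the PC Client platform profile:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: the DRTM PCRs 17 and 18 can only be extended from
 * Locality 2 or higher, while ordinary SRTM PCRs accept extends from
 * any locality. A real implementation would consult the platform
 * profile's per-PCR locality table instead of this simplification. */
bool can_extend(int pcr, int locality)
{
	if (pcr == 17 || pcr == 18)
		return locality >= 2;
	return true;
}
```

This is why extends issued while the driver sits at Locality 0 silently fail for the DRTM PCRs, motivating the writable default-locality node discussed above.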
On Mon, 2024-11-04 at 07:19 -0500, Daniel P. Smith wrote:
> On 11/4/24 06:55, 'Ard Biesheuvel' via trenchboot-devel wrote:
[...]
> > I was referring specifically to the read-write sysfs node that
> > permits user space to update the default TPM locality. Does it need
> > to be writable? And does it need to exist at all?

This was my question here, which never got answered either:

https://lore.kernel.org/linux-integrity/685f3f00ddf88e961e2d861b7c783010774fe19d.camel@HansenPartnership.com/

> Right, sorry. As I recall, that was introduce due to the sequence of
> how the TPM driver handled locality, moving back to Locality 0 after
> done sending cmds. In the Oracle implementation, the initramfs takes
> integrity measurements of the environment it is about to kexec into,
> eg. target kernel, initramfs, file system, etc. Some of these
> measurements should go into PCR 17 and PCR 18, which requires
> Locality 2 to be able extend those PCRs. If the slmodule is able to
> set the locality for all PCR extends coming from user space to be
> Locality 2, that removes the current need for it.

Well, no, that's counter to the desire to have user space TPM commands
and kernel space TPM commands in different localities. I thought the
whole point of having locality-restricted PCRs is that only trusted
entities (i.e. those able to access the higher locality) could extend
into them. If you run every TPM command, regardless of source, in the
trusted locality, that makes the extends accessible to everyone and
thus destroys the trust boundary.

It also doesn't sound from the above like anything in user space
actually needs this facility. The measurements of the kernel and
initramfs are already done by the boot stub (to PCR 9, but that could
be changed), so we could do it all from the trusted entity.

Regards,

James
On 11/4/24 08:21, James Bottomley wrote:
> On Mon, 2024-11-04 at 07:19 -0500, Daniel P. Smith wrote:
>> On 11/4/24 06:55, 'Ard Biesheuvel' via trenchboot-devel wrote:
> [...]
>>> I was referring specifically to the read-write sysfs node that
>>> permits user space to update the default TPM locality. Does it need
>>> to be writable? And does it need to exist at all?
>
> This was my question here, which never got answered as well:
>
> https://lore.kernel.org/linux-integrity/685f3f00ddf88e961e2d861b7c783010774fe19d.camel@HansenPartnership.com/
>
>> Right, sorry. As I recall, that was introduce due to the sequence of
>> how the TPM driver handled locality, moving back to Locality 0 after
>> done sending cmds. In the Oracle implementation, the initramfs takes
>> integrity measurements of the environment it is about to kexec into,
>> eg. target kernel, initramfs, file system, etc. Some of these
>> measurements should go into PCR 17 and PCR 18, which requires
>> Locality 2 to be able extend those PCRs. If the slmodule is able to
>> set the locality for all PCR extends coming from user space to be
>> Locality 2, that removes the current need for it.
>
> Well, no, that's counter to the desire to have user space TPM commands
> and kernel space TPM commands in different localities. I thought the
> whole point of having locality restricted PCRs is so that only trusted
> entities (i.e. those able to access the higher locality) could extend
> into them. If you run every TPM command, regardless of source, in the
> trusted locality, that makes the extends accessible to everyone and
> thus destroys the trust boundary.
As to locality switching, the call sequence is:

  tpm_pcr_extend ->
    tpm_find_get_ops ->
      tpm_try_get_ops ->
        tpm_chip_start ->
          if (chip->locality == -1)
            tpm_request_locality

And when the extend completes:

  out:
    tpm_put_ops ->
      tpm_chip_stop ->
        tpm_relinquish_locality ->
          chip->locality = -1;

We made slmodule set the locality value used by request/relinquish back
to 0 when it was done with its initialization, and then added the sysfs
node to allow the runtime to request it when it needed to send
measurements. This is because we did not want to pin how it works to the
one use case currently being focused on. By the definition I provided
earlier, in our use case the initramfs is part of the TCB, as it is
embedded into the kernel.

As to the locality roles, according to the TPM Platform Profile:
 - Locality 2: Dynamically Launched OS (Dynamic OS) "runtime" environment.
 - Locality 1: An environment for use by the Dynamic OS.

> It also doesn't sound like the above that anything in user space
> actually needs this facility. The measurements of kernel and initramfs
> are already done by the boot stub (to PCR9, but that could be changed)
> so we could do it all from the trusted entity.

I apologize for not expressing this more clearly, as that statement is
incorrect. The currently deployed use case works as follows:

  [SRTM] --> [GRUB] -- (DLE, terminates SRTM chain) -->
  [CPU] -- (starts DRTM chain) --> [SINIT ACM] -->
  [SL kernel + initramfs] -- (load/measure/kexec) --> [target kernel]

As one can see, the SRTM is terminated and its components are not used
in the DRTM chain. This model reproduces the tboot model with several
enhancements, including a single solution that supports and works on
Intel and AMD, with Arm enablement currently in progress. It is not the
only model that can be used; several others were presented at the 2020
Plumbers conference. A deployed implementation of the secure upgrade use
case was detailed in the 2021 FOSDEM presentation.
The LCP policy is used to tell the ACM which [SL kernel + initramfs] are
allowed to be started by TXT. This makes it possible to launch into an
upgraded state without having to reboot.

In case the question comes up from those not familiar: the kexec does a
GETSEC[SEXIT], which closes off access to Localities 1 and 2, thus
locking the DRTM PCR values. It brings the CPUs out of SMX mode, so the
target kernel does not need any knowledge of running in that mode.

v/r,
dps
On Mon, 2024-11-04 at 11:34 -0500, Daniel P. Smith wrote:
[...]
> In case the question comes up from those not familiar, the kexec does
> an GETSEC[SEXIT] which closes off access to Localities 1 and 2, thus
> locking the DRTM PCR values. It brings the CPUs out of SMX mode so
> the target kernel does not require to have any knowledge about
> running in that mode.

So, to repeat the question: why a sysfs interface for setting the
default locality? If I understand correctly from what you say above, it
can't be used in any kernel except the SL one, and that one could run
permanently in it, so there's no requirement at all for user space to be
able to change this, is there?

Regards,
James
On 11/4/24 15:36, James Bottomley wrote:
> On Mon, 2024-11-04 at 11:34 -0500, Daniel P. Smith wrote:
> [...]
>> In case the question comes up from those not familiar, the kexec does
>> an GETSEC[SEXIT] which closes off access to Localities 1 and 2, thus
>> locking the DRTM PCR values. It brings the CPUs out of SMX mode so
>> the target kernel does not require to have any knowledge about
>> running in that mode.
>
> So, to repeat the question: why a sysfs interface for setting the
> default locality? If I understand correctly from what you say above,
> it can't be used in any kernel except the SL one, and that one could
> run permanently in it, so there's no requirement at all for user space
> to be able to change this, is there?

I responded to Ard this morning that, "If the slmodule is able to set
the locality for all PCR extends coming from user space to be Locality
2, that removes the current need for it," where "it" is the sysfs node
for the default locality. This series does just that, so in a more
direct response: no, a writable sysfs node is no longer needed with
this series.

v/r,
dps
On Mon Nov 4, 2024 at 1:55 PM EET, Ard Biesheuvel wrote:
> But the inclusion of the secure launch module will force the TPM
> module to be builtin too, surely.

"config SECURE_LAUNCH" is bool => no export needed. I have it in the
patches too simply because I missed it.

BR, Jarkko
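For readers less familiar with Kconfig: the reason a bool symbol needs no
symbol exports can be seen from a minimal sketch. The prompt, dependency,
and help text below are illustrative only, not the actual entry from the
series:

```kconfig
# A "bool" symbol can only be y or n, i.e. built into the kernel image,
# never m (a loadable module), unlike a "tristate" symbol. Any TPM core
# function called only from builtin code therefore needs no EXPORT_SYMBOL.
config SECURE_LAUNCH
	bool "Secure Launch support"
	depends on X86_64
	help
	  Illustrative sketch: because this option is "bool" rather than
	  "tristate", the code it enables is always builtin, so hooks it
	  calls do not need to be exported to modules.
```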
On Mon Nov 4, 2024 at 1:27 PM EET, Ard Biesheuvel wrote:
> On Mon, 4 Nov 2024 at 12:18, Jarkko Sakkinen <jarkko@kernel.org> wrote:
> >
> > On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote:
> > > On 11/2/24 14:00, Jarkko Sakkinen wrote:
> > > > On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote:
> > > >> It is not really my problem but I'm also wondering how the
> > > >> initialization order is managed. What if e.g. IMA happens to
> > > >> initialize before slmodule?
> > > >
> > > > The first obvious observation from Trenchboot implementation is that it
> > > > is 9/10 times worst idea ever to have splitted root of trust. Here it
> > > > is realized by an LKM for slmodule.
> > >
> > > First, there is no conflict between IMA and slmodule. With your change
> > > to make locality switching a one shot, the only issue would be if IMA
> > > were to run first and issue a locality switch to Locality 0, thus
> > > blocking slmodule from switching to Locality 2. As for PCR usage, IMA
> > > uses the SRTM PCRs, which are completely accessible under Locality 2.
> >
> > Just pointing out a possible problem (e.g. with TPM2_PolicyLocality).
> >
> > > Honestly, a better path forward would be to revisit the issue that is
> > > driving most of that logic existing, which is the lack of a TPM
> > > interface code in the setup kernel. As a reminder, this issue is due to
> > > the TPM maintainers position that the only TPM code in the kernel can be
> > > the mainline driver. Which, unless something has changed, is impossible
> > > to compile into the setup kernel due to its use of mainline kernel
> > > constructs not present in the setup kernel.
> >
> > I don't categorically reject adding some code to early setup. We have
> > some shared code EFI stub but you have to explain your changes
> > proeprly. Getting rejection in some early version to some approach,
> > and being still pissed about that years forward is not really way
> > to go IMHO.
> Daniel has been nothing but courteous and patient, and you've waited
> 11 revision to come up with some bikeshedding patches that don't
> materially improve anything.
>
> So commenting on Daniel's approach here is uncalled for.
>
> Can we please converge on this?
>
> Daniel - if no component can be built as a module, there should be no
> reason for the set_default_locality() hook to be exported to modules
> right? And do we even need a sysfs node to expose this information?

I provided patches with my sob's and spent time on making the delta
absolutely minimal relative to what exists already. If those are picked,
I'm good. They are essentially drop-in replacements for the existing
patches.

BR, Jarkko
On Mon Nov 4, 2024 at 1:18 PM EET, Jarkko Sakkinen wrote:
> On Mon Nov 4, 2024 at 12:57 PM EET, Daniel P. Smith wrote:
> > On 11/2/24 14:00, Jarkko Sakkinen wrote:
> > > On Sat Nov 2, 2024 at 5:22 PM EET, Jarkko Sakkinen wrote:
> > >> It is not really my problem but I'm also wondering how the
> > >> initialization order is managed. What if e.g. IMA happens to
> > >> initialize before slmodule?
> > >
> > > The first obvious observation from Trenchboot implementation is that it
> > > is 9/10 times worst idea ever to have splitted root of trust. Here it
> > > is realized by an LKM for slmodule.
> >
> > First, there is no conflict between IMA and slmodule. With your change
> > to make locality switching a one shot, the only issue would be if IMA
> > were to run first and issue a locality switch to Locality 0, thus
> > blocking slmodule from switching to Locality 2. As for PCR usage, IMA
> > uses the SRTM PCRs, which are completely accessible under Locality 2.
>
> Just pointing out a possible problem (e.g. with TPM2_PolicyLocality).
>
> > Honestly, a better path forward would be to revisit the issue that is
> > driving most of that logic existing, which is the lack of a TPM
> > interface code in the setup kernel. As a reminder, this issue is due to
> > the TPM maintainers position that the only TPM code in the kernel can be
> > the mainline driver. Which, unless something has changed, is impossible
> > to compile into the setup kernel due to its use of mainline kernel
> > constructs not present in the setup kernel.
>
> I don't categorically reject adding some code to early setup. We have
> some shared code EFI stub but you have to explain your changes
> proeprly. Getting rejection in some early version to some approach,
> and being still pissed about that years forward is not really way
> to go IMHO.

... and ignoring fixes that took me almost one day to fully put together
is neither.
These address the awful commit messages, the tpm_tis-only filtering, and
not allowing repetition in the calls.

BR, Jarkko
On Mon Nov 4, 2024 at 1:19 PM EET, Jarkko Sakkinen wrote:
> > I don't categorically reject adding some code to early setup. We have
> > some shared code EFI stub but you have to explain your changes
> > proeprly. Getting rejection in some early version to some approach,
> > and being still pissed about that years forward is not really way
> > to go IMHO.
>
> ... and ignoring fixes that took me almost one day to fully get together
> is neither.
>
> These address the awful commit messages, tpm_tis-only filtering and not
> allowing repetition in the calls.

Also, considering early setup: it is not part of the uapi. It can be
reconsidered as an improvement after the feature has landed (perhaps it
will also be easier to plan then).

I don't think the potential TPM2_PolicyLocality conflict is important
for the kernel, and it is the only race I know of at this point. I don't
really get the problem here. It's almost like I should not have
mentioned a potential concurrency issue in order to not get slandered.

BR, Jarkko