This is a model-specific register which details the current configuration of
cores and threads in the package. Because of how Hyperthread and Core
configuration works in firmware, the MSR is de-facto constant and will remain
unchanged until the next system reset.

It is a read-only MSR. For now, reject guest attempts to read it, to avoid
the system setting leaking into guest context. Further CPUID/MSR work is
required before we can start virtualising a consistent topology to the guest.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
Observant people will notice that, in the latest SDM at the time of writing
(version 069), this MSR is listed in a single table for Haswell-generation
Xeons, and that the thread and core count fields are the wrong way around.
I'm informed that this MSR has existed since the Nehalem era (except for some
of the early in-order Atoms), has had consistent behaviour in that time, and
that the documentation will be addressed in future SDM revisions.
---
xen/arch/x86/msr.c | 2 ++
xen/include/asm-x86/msr-index.h | 4 ++++
2 files changed, 6 insertions(+)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 815d599..948d07d 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -131,6 +131,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
case MSR_PRED_CMD:
case MSR_FLUSH_CMD:
/* Write-only */
+ case MSR_INTEL_CORE_THREAD_COUNT:
case MSR_TSX_FORCE_ABORT:
/* Not offered to guests. */
goto gp_fault;
@@ -267,6 +268,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
{
uint64_t rsvd;
+ case MSR_INTEL_CORE_THREAD_COUNT:
case MSR_INTEL_PLATFORM_INFO:
case MSR_ARCH_CAPABILITIES:
/* Read-only */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 11512d4..389f95f 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -32,6 +32,10 @@
#define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NX | \
EFER_SVME | EFER_FFXSE)
+#define MSR_INTEL_CORE_THREAD_COUNT 0x00000035
+#define MSR_CTC_THREAD_MASK 0x0000ffff
+#define MSR_CTC_CORE_MASK 0xffff0000
+
/* Speculation Controls. */
#define MSR_SPEC_CTRL 0x00000048
#define SPEC_CTRL_IBRS (_AC(1, ULL) << 0)
--
2.1.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
>>> Andrew Cooper <andrew.cooper3@citrix.com> 04/13/19 6:22 PM >>>
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -131,6 +131,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>      case MSR_PRED_CMD:
>      case MSR_FLUSH_CMD:
>          /* Write-only */
> +    case MSR_INTEL_CORE_THREAD_COUNT:
>      case MSR_TSX_FORCE_ABORT:
>          /* Not offered to guests. */
>          goto gp_fault;

In a private talk we had, didn't you mention there is at least one OS (was it
OSX?) that unconditionally reads this MSR? Even assuming there are other
issues with OSX, wouldn't it be better to avoid giving #GP(0) back here? I
realize we can't yet give back a fully consistent value here, but looking at
what conclusions Linux draws from other CPUID output, couldn't 0x00010001 be
used as a "fake" value?

Nevertheless, if you're convinced it should be this way, and if this doesn't
actively break any guest OS (due to introducing this as the _only_ thing
causing their boot to fail):

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
On 15/04/2019 09:09, Jan Beulich wrote:
>>>> Andrew Cooper <andrew.cooper3@citrix.com> 04/13/19 6:22 PM >>>
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -131,6 +131,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>      case MSR_PRED_CMD:
>>      case MSR_FLUSH_CMD:
>>          /* Write-only */
>> +    case MSR_INTEL_CORE_THREAD_COUNT:
>>      case MSR_TSX_FORCE_ABORT:
>>          /* Not offered to guests. */
>>          goto gp_fault;
> In a private talk we had, didn't you mention there is at least one OS (was
> it OSX?) that unconditionally reads this MSR? Even assuming there are other
> issues with OSX, wouldn't it be better to avoid giving #GP(0) back here? I
> realize we can't yet give back a fully consistent value here, but looking
> at what conclusions Linux draws from other CPUID output, couldn't
> 0x00010001 be used as a "fake" value?

I did consider that option, but for anyone looking at this MSR, it is likely
to make the situation worse rather than better.

The other option to consider is retaining the current leaky behaviour. There
haven't been obvious problems thus far, and the accurate topology information
is going to be arriving shortly to mesh in with the core scheduling work.

~Andrew
>>> Andrew Cooper <andrew.cooper3@citrix.com> 04/15/19 10:30 AM >>>
> On 15/04/2019 09:09, Jan Beulich wrote:
>> In a private talk we had, didn't you mention there is at least one OS (was
>> it OSX?) that unconditionally reads this MSR? Even assuming there are
>> other issues with OSX, wouldn't it be better to avoid giving #GP(0) back
>> here? I realize we can't yet give back a fully consistent value here, but
>> looking at what conclusions Linux draws from other CPUID output, couldn't
>> 0x00010001 be used as a "fake" value?
>
> I did consider that option, but for anyone looking at this MSR, it is
> likely to make the situation worse rather than better.
>
> The other option to consider is retaining the current leaky behaviour.
> There haven't been obvious problems thus far, and the accurate topology
> information is going to be arriving shortly to mesh in with the core
> scheduling work.

Imo this would still be better than delivering #GP(0).

Jan