This logic is dead as Xen is 64bit-only these days.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/cpu/mtrr/generic.c | 17 -----------------
1 file changed, 17 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 89634f918f..06fa0c0420 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -570,23 +570,6 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
{
unsigned long lbase, last;

- /* For Intel PPro stepping <= 7, must be 4 MiB aligned
- and not touch 0x70000000->0x7003FFFF */
- if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
- boot_cpu_data.x86_model == 1 &&
- boot_cpu_data.x86_mask <= 7) {
- if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
- printk(KERN_WARNING "mtrr: base(%#lx000) is not 4 MiB aligned\n", base);
- return -EINVAL;
- }
- if (!(base + size < 0x70000 || base > 0x7003F) &&
- (type == MTRR_TYPE_WRCOMB
- || type == MTRR_TYPE_WRBACK)) {
- printk(KERN_WARNING "mtrr: writable mtrr between 0x70000000 and 0x7003FFFF may hang the CPU.\n");
- return -EINVAL;
- }
- }
-
/* Check upper bits of base and last are equal and lower bits are 0
for base and 1 for last */
last = base + size - 1;
--
2.11.0
On 08.07.2020 12:14, Andrew Cooper wrote:
> This logic is dead as Xen is 64bit-only these days.

Ah, yes, this should have long been gone.

Jan
On Wed, Jul 08, 2020 at 11:14:43AM +0100, Andrew Cooper wrote:
> This logic is dead as Xen is 64bit-only these days.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
> xen/arch/x86/cpu/mtrr/generic.c | 17 -----------------
> 1 file changed, 17 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
> index 89634f918f..06fa0c0420 100644
> --- a/xen/arch/x86/cpu/mtrr/generic.c
> +++ b/xen/arch/x86/cpu/mtrr/generic.c
> @@ -570,23 +570,6 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
> {
> unsigned long lbase, last;
>
> - /* For Intel PPro stepping <= 7, must be 4 MiB aligned
> - and not touch 0x70000000->0x7003FFFF */
> - if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
> - boot_cpu_data.x86_model == 1 &&
> - boot_cpu_data.x86_mask <= 7) {
> - if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
> - printk(KERN_WARNING "mtrr: base(%#lx000) is not 4 MiB aligned\n", base);
> - return -EINVAL;
> - }
> - if (!(base + size < 0x70000 || base > 0x7003F) &&
> - (type == MTRR_TYPE_WRCOMB
> - || type == MTRR_TYPE_WRBACK)) {
> - printk(KERN_WARNING "mtrr: writable mtrr between 0x70000000 and 0x7003FFFF may hang the CPU.\n");
> - return -EINVAL;
> - }
> - }
> -
> /* Check upper bits of base and last are equal and lower bits are 0
> for base and 1 for last */
> last = base + size - 1;
FWIW, you could also initialize last at definition time.
Roger.
On 08/07/2020 11:48, Roger Pau Monné wrote:
> On Wed, Jul 08, 2020 at 11:14:43AM +0100, Andrew Cooper wrote:
>> This logic is dead as Xen is 64bit-only these days.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Thanks,
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> xen/arch/x86/cpu/mtrr/generic.c | 17 -----------------
>> 1 file changed, 17 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
>> index 89634f918f..06fa0c0420 100644
>> --- a/xen/arch/x86/cpu/mtrr/generic.c
>> +++ b/xen/arch/x86/cpu/mtrr/generic.c
>> @@ -570,23 +570,6 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
>> {
>> unsigned long lbase, last;
>>
>> - /* For Intel PPro stepping <= 7, must be 4 MiB aligned
>> - and not touch 0x70000000->0x7003FFFF */
>> - if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
>> - boot_cpu_data.x86_model == 1 &&
>> - boot_cpu_data.x86_mask <= 7) {
>> - if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
>> - printk(KERN_WARNING "mtrr: base(%#lx000) is not 4 MiB aligned\n", base);
>> - return -EINVAL;
>> - }
>> - if (!(base + size < 0x70000 || base > 0x7003F) &&
>> - (type == MTRR_TYPE_WRCOMB
>> - || type == MTRR_TYPE_WRBACK)) {
>> - printk(KERN_WARNING "mtrr: writable mtrr between 0x70000000 and 0x7003FFFF may hang the CPU.\n");
>> - return -EINVAL;
>> - }
>> - }
>> -
>> /* Check upper bits of base and last are equal and lower bits are 0
>> for base and 1 for last */
>> last = base + size - 1;
> FWIW, you could also initialize last at definition time.
I've got some very different cleanup in mind for that code, seeing as it
can be simplified to a single test expression.
~Andrew
On Wed, Jul 08, 2020 at 11:52:57AM +0100, Andrew Cooper wrote:
> On 08/07/2020 11:48, Roger Pau Monné wrote:
> > On Wed, Jul 08, 2020 at 11:14:43AM +0100, Andrew Cooper wrote:
> >> /* Check upper bits of base and last are equal and lower bits are 0
> >> for base and 1 for last */
> >> last = base + size - 1;
> > FWIW, you could also initialize last at definition time.
>
> I've got some very different cleanup in mind for that code, seeing as it
> can be simplified to a single test expression.

Oh, I certainly didn't look at it that much :).

Thanks, Roger.