From: Nicola Vetrini
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
    ayan.kumar.halder@amd.com, consulting@bugseng.com, Nicola Vetrini,
    Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [XEN PATCH 1/6] x86: rename variable 'e820' to address MISRA C:2012 Rule 5.3
Date: Fri, 4 Aug 2023 17:27:44 +0200
Message-Id: <896a2235560fd348f79eded33731609c5d2e74ab.1691162261.git.nicola.vetrini@bugseng.com>

The variable declared in the header file 'xen/arch/x86/include/asm/e820.h'
is shadowed by many function parameters, so it is renamed to avoid these
violations.

No functional changes.

Signed-off-by: Nicola Vetrini
---
This patch is similar to other renames done in previous patches, and the
preferred strategy there was to rename the global variable. This one has
more occurrences, spread across various files, but the general pattern is
the same.
---
 xen/arch/x86/dom0_build.c                | 10 ++--
 xen/arch/x86/e820.c                      | 66 ++++++++++++------------
 xen/arch/x86/guest/xen/xen.c             |  4 +-
 xen/arch/x86/hvm/dom0_build.c            |  6 +--
 xen/arch/x86/include/asm/e820.h          |  2 +-
 xen/arch/x86/mm.c                        | 49 +++++++++---------
 xen/arch/x86/numa.c                      |  8 +--
 xen/arch/x86/setup.c                     | 22 ++++----
 xen/arch/x86/srat.c                      |  6 +--
 xen/arch/x86/x86_64/mmconf-fam10h.c      |  2 +-
 xen/drivers/passthrough/amd/iommu_acpi.c |  2 +-
 11 files changed, 89 insertions(+), 88 deletions(-)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 8b1fcc6471..bfb6400376 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -534,13 +534,13 @@ int __init dom0_setup_permissions(struct domain *d)
     }
 
     /* Remove access to E820_UNUSABLE I/O regions above 1MB. */
-    for ( i = 0; i < e820.nr_map; i++ )
+    for ( i = 0; i < e820_map.nr_map; i++ )
     {
         unsigned long sfn, efn;
-        sfn = max_t(unsigned long, paddr_to_pfn(e820.map[i].addr), 0x100ul);
-        efn = paddr_to_pfn(e820.map[i].addr + e820.map[i].size - 1);
-        if ( (e820.map[i].type == E820_UNUSABLE) &&
-             (e820.map[i].size != 0) &&
+        sfn = max_t(unsigned long, paddr_to_pfn(e820_map.map[i].addr), 0x100ul);
+        efn = paddr_to_pfn(e820_map.map[i].addr + e820_map.map[i].size - 1);
+        if ( (e820_map.map[i].type == E820_UNUSABLE) &&
+             (e820_map.map[i].size != 0) &&
              (sfn <= efn) )
             rc |= iomem_deny_access(d, sfn, efn);
     }
diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index 0b89935510..4425011c01 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -34,7 +34,7 @@ boolean_param("e820-mtrr-clip", e820_mtrr_clip);
 static bool __initdata e820_verbose;
 boolean_param("e820-verbose", e820_verbose);
 
-struct e820map e820;
+struct e820map e820_map;
 struct e820map __initdata e820_raw;
 
 /*
@@ -47,8 +47,8 @@ int __init e820_all_mapped(u64 start, u64 end, unsigned type)
 {
     unsigned int i;
 
-    for (i = 0; i < e820.nr_map; i++) {
-        struct e820entry *ei = &e820.map[i];
+    for (i = 0; i < e820_map.nr_map; i++) {
+        struct e820entry *ei = &e820_map.map[i];
 
         if (type && ei->type != type)
             continue;
@@ -75,17 +75,17 @@ int __init e820_all_mapped(u64 start, u64 end, unsigned type)
 static void __init add_memory_region(unsigned long long start,
                                      unsigned long long size, int type)
 {
-    unsigned int x = e820.nr_map;
+    unsigned int x = e820_map.nr_map;
 
-    if (x == ARRAY_SIZE(e820.map)) {
+    if (x == ARRAY_SIZE(e820_map.map)) {
         printk(KERN_ERR "Ooops! Too many entries in the memory map!\n");
         return;
     }
 
-    e820.map[x].addr = start;
-    e820.map[x].size = size;
-    e820.map[x].type = type;
-    e820.nr_map++;
+    e820_map.map[x].addr = start;
+    e820_map.map[x].size = size;
+    e820_map.map[x].type = type;
+    e820_map.nr_map++;
 }
 
 void __init print_e820_memory_map(const struct e820entry *map,
@@ -347,13 +347,13 @@ static unsigned long __init find_max_pfn(void)
     unsigned int i;
     unsigned long max_pfn = 0;
 
-    for (i = 0; i < e820.nr_map; i++) {
+    for (i = 0; i < e820_map.nr_map; i++) {
         unsigned long start, end;
         /* RAM? */
-        if (e820.map[i].type != E820_RAM)
+        if (e820_map.map[i].type != E820_RAM)
             continue;
-        start = PFN_UP(e820.map[i].addr);
-        end = PFN_DOWN(e820.map[i].addr + e820.map[i].size);
+        start = PFN_UP(e820_map.map[i].addr);
+        end = PFN_DOWN(e820_map.map[i].addr + e820_map.map[i].size);
         if (start >= end)
             continue;
         if (end > max_pfn)
@@ -372,21 +372,21 @@ static void __init clip_to_limit(uint64_t limit, const char *warnmsg)
     for ( ; ; )
     {
         /* Find a RAM region needing clipping. */
-        for ( i = 0; i < e820.nr_map; i++ )
-            if ( (e820.map[i].type == E820_RAM) &&
-                 ((e820.map[i].addr + e820.map[i].size) > limit) )
+        for ( i = 0; i < e820_map.nr_map; i++ )
+            if ( (e820_map.map[i].type == E820_RAM) &&
+                 ((e820_map.map[i].addr + e820_map.map[i].size) > limit) )
                 break;
 
         /* If none found, we are done. */
-        if ( i == e820.nr_map )
-            break;
+        if ( i == e820_map.nr_map )
+            break;
 
         old_limit = max_t(
-            uint64_t, old_limit, e820.map[i].addr + e820.map[i].size);
+            uint64_t, old_limit, e820_map.map[i].addr + e820_map.map[i].size);
 
         /* We try to convert clipped RAM areas to E820_UNUSABLE. */
-        if ( e820_change_range_type(&e820, max(e820.map[i].addr, limit),
-                                    e820.map[i].addr + e820.map[i].size,
+        if ( e820_change_range_type(&e820_map, max(e820_map.map[i].addr, limit),
+                                    e820_map.map[i].addr + e820_map.map[i].size,
                                     E820_RAM, E820_UNUSABLE) )
             continue;
 
@@ -394,15 +394,15 @@ static void __init clip_to_limit(uint64_t limit, const char *warnmsg)
          * If the type change fails (e.g., not space in table) then we clip or
          * delete the region as appropriate.
          */
-        if ( e820.map[i].addr < limit )
+        if ( e820_map.map[i].addr < limit )
         {
-            e820.map[i].size = limit - e820.map[i].addr;
+            e820_map.map[i].size = limit - e820_map.map[i].addr;
         }
         else
         {
-            memmove(&e820.map[i], &e820.map[i+1],
-                    (e820.nr_map - i - 1) * sizeof(struct e820entry));
-            e820.nr_map--;
+            memmove(&e820_map.map[i], &e820_map.map[i+1],
+                    (e820_map.nr_map - i - 1) * sizeof(struct e820entry));
+            e820_map.nr_map--;
         }
     }
 
@@ -497,7 +497,7 @@ static void __init reserve_dmi_region(void)
         if ( !what )
             break;
         if ( ((base + len) > base) &&
-             reserve_e820_ram(&e820, base, base + len) )
+             reserve_e820_ram(&e820_map, base, base + len) )
             printk("WARNING: %s table located in E820 RAM %"PRIpaddr"-%"PRIpaddr". Fixed.\n",
                    what, base, base + len);
     }
@@ -517,12 +517,12 @@ static void __init machine_specific_memory_setup(struct e820map *raw)
 
     if ( opt_availmem )
     {
-        for ( i = size = 0; (i < e820.nr_map) && (size <= opt_availmem); i++ )
-            if ( e820.map[i].type == E820_RAM )
-                size += e820.map[i].size;
+        for ( i = size = 0; (i < e820_map.nr_map) && (size <= opt_availmem); i++ )
+            if ( e820_map.map[i].type == E820_RAM )
+                size += e820_map.map[i].size;
         if ( size > opt_availmem )
             clip_to_limit(
-                e820.map[i-1].addr + e820.map[i-1].size - (size-opt_availmem),
+                e820_map.map[i-1].addr + e820_map.map[i-1].size - (size-opt_availmem),
                 NULL);
     }
 
@@ -694,10 +694,10 @@ unsigned long __init init_e820(const char *str, struct e820map *raw)
     machine_specific_memory_setup(raw);
 
     if ( cpu_has_hypervisor )
-        hypervisor_e820_fixup(&e820);
+        hypervisor_e820_fixup(&e820_map);
 
     printk("%s RAM map:\n", str);
-    print_e820_memory_map(e820.map, e820.nr_map);
+    print_e820_memory_map(e820_map.map, e820_map.nr_map);
 
     return find_max_pfn();
 }
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index f93dfc89f7..3ec828b98d 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -147,9 +147,9 @@ static void __init init_memmap(void)
                                  PFN_DOWN(GB(4) - 1))) )
         panic("unable to add RAM to in-use PFN rangeset\n");
 
-    for ( i = 0; i < e820.nr_map; i++ )
+    for ( i = 0; i < e820_map.nr_map; i++ )
     {
-        struct e820entry *e = &e820.map[i];
+        struct e820entry *e = &e820_map.map[i];
 
         if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
                                 PFN_UP(e->addr + e->size - 1)) )
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index bc0e290db6..98203f7a52 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -333,13 +333,13 @@ static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
      * Add an extra entry in case we have to split a RAM entry into a RAM and a
      * UNUSABLE one in order to truncate it.
      */
-    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map + 1);
+    d->arch.e820 = xzalloc_array(struct e820entry, e820_map.nr_map + 1);
     if ( !d->arch.e820 )
         panic("Unable to allocate memory for Dom0 e820 map\n");
     entry_guest = d->arch.e820;
 
     /* Clamp e820 memory map to match the memory assigned to Dom0 */
-    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
+    for ( i = 0, entry = e820_map.map; i < e820_map.nr_map; i++, entry++ )
     {
         *entry_guest = *entry;
 
@@ -392,7 +392,7 @@ static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
  next:
         d->arch.nr_e820++;
         entry_guest++;
-        ASSERT(d->arch.nr_e820 <= e820.nr_map + 1);
+        ASSERT(d->arch.nr_e820 <= e820_map.nr_map + 1);
     }
     ASSERT(cur_pages == nr_pages);
 }
diff --git a/xen/arch/x86/include/asm/e820.h b/xen/arch/x86/include/asm/e820.h
index 213d5b5dd2..0865825f7d 100644
--- a/xen/arch/x86/include/asm/e820.h
+++ b/xen/arch/x86/include/asm/e820.h
@@ -34,7 +34,7 @@ extern int e820_add_range(
 extern unsigned long init_e820(const char *str, struct e820map *raw);
 extern void print_e820_memory_map(const struct e820entry *map,
                                   unsigned int entries);
-extern struct e820map e820;
+extern struct e820map e820_map;
 extern struct e820map e820_raw;
 
 /* These symbols live in the boot trampoline. */
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index be2b10a391..6920ac939f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -295,12 +295,12 @@ void __init arch_init_memory(void)
     /* Any areas not specified as RAM by the e820 map are considered I/O. */
     for ( i = 0, pfn = 0; pfn < max_page; i++ )
     {
-        while ( (i < e820.nr_map) &&
-                (e820.map[i].type != E820_RAM) &&
-                (e820.map[i].type != E820_UNUSABLE) )
+        while ( (i < e820_map.nr_map) &&
+                (e820_map.map[i].type != E820_RAM) &&
+                (e820_map.map[i].type != E820_UNUSABLE) )
             i++;
 
-        if ( i >= e820.nr_map )
+        if ( i >= e820_map.nr_map )
         {
             /* No more RAM regions: mark as I/O right to end of memory map. */
             rstart_pfn = rend_pfn = max_page;
@@ -309,9 +309,10 @@ void __init arch_init_memory(void)
         {
             /* Mark as I/O just up as far as next RAM region. */
             rstart_pfn = min_t(unsigned long, max_page,
-                               PFN_UP(e820.map[i].addr));
+                               PFN_UP(e820_map.map[i].addr));
             rend_pfn = max_t(unsigned long, rstart_pfn,
-                             PFN_DOWN(e820.map[i].addr + e820.map[i].size));
+                             PFN_DOWN(e820_map.map[i].addr +
+                                      e820_map.map[i].size));
         }
 
         /*
@@ -387,9 +388,9 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
     uint64_t maddr = pfn_to_paddr(mfn);
     int i;
 
-    for ( i = 0; i < e820.nr_map; i++ )
+    for ( i = 0; i < e820_map.nr_map; i++ )
     {
-        switch ( e820.map[i].type )
+        switch ( e820_map.map[i].type )
         {
         case E820_RAM:
             if ( mem_type & RAM_TYPE_CONVENTIONAL )
@@ -414,8 +415,8 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
         }
 
         /* Test the range. */
-        if ( (e820.map[i].addr <= maddr) &&
-             ((e820.map[i].addr + e820.map[i].size) >= (maddr + PAGE_SIZE)) )
+        if ( (e820_map.map[i].addr <= maddr) &&
+             ((e820_map.map[i].addr + e820_map.map[i].size) >= (maddr + PAGE_SIZE)) )
             return 1;
     }
 
@@ -427,17 +428,17 @@ unsigned int page_get_ram_type(mfn_t mfn)
     uint64_t last = 0, maddr = mfn_to_maddr(mfn);
     unsigned int i, type = 0;
 
-    for ( i = 0; i < e820.nr_map;
-          last = e820.map[i].addr + e820.map[i].size, i++ )
+    for ( i = 0; i < e820_map.nr_map;
+          last = e820_map.map[i].addr + e820_map.map[i].size, i++ )
     {
-        if ( (maddr + PAGE_SIZE) > last && maddr < e820.map[i].addr )
+        if ( (maddr + PAGE_SIZE) > last && maddr < e820_map.map[i].addr )
             type |= RAM_TYPE_UNKNOWN;
 
-        if ( (maddr + PAGE_SIZE) <= e820.map[i].addr ||
-             maddr >= (e820.map[i].addr + e820.map[i].size) )
+        if ( (maddr + PAGE_SIZE) <= e820_map.map[i].addr ||
+             maddr >= (e820_map.map[i].addr + e820_map.map[i].size) )
             continue;
 
-        switch ( e820.map[i].type )
+        switch ( e820_map.map[i].type )
         {
         case E820_RAM:
             type |= RAM_TYPE_CONVENTIONAL;
@@ -778,9 +779,9 @@ bool is_memory_hole(mfn_t start, mfn_t end)
     unsigned long e = mfn_x(end);
     unsigned int i;
 
-    for ( i = 0; i < e820.nr_map; i++ )
+    for ( i = 0; i < e820_map.nr_map; i++ )
     {
-        const struct e820entry *entry = &e820.map[i];
+        const struct e820entry *entry = &e820_map.map[i];
 
         if ( !entry->size )
             continue;
@@ -4763,16 +4764,16 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         store = !guest_handle_is_null(ctxt.map.buffer);
 
-        if ( store && ctxt.map.nr_entries < e820.nr_map + 1 )
+        if ( store && ctxt.map.nr_entries < e820_map.nr_map + 1 )
             return -EINVAL;
 
         buffer = guest_handle_cast(ctxt.map.buffer, e820entry_t);
         if ( store && !guest_handle_okay(buffer, ctxt.map.nr_entries) )
             return -EFAULT;
 
-        for ( i = 0, ctxt.n = 0, ctxt.s = 0; i < e820.nr_map; ++i, ++ctxt.n )
+        for ( i = 0, ctxt.n = 0, ctxt.s = 0; i < e820_map.nr_map; ++i, ++ctxt.n )
         {
-            unsigned long s = PFN_DOWN(e820.map[i].addr);
+            unsigned long s = PFN_DOWN(e820_map.map[i].addr);
 
             if ( s > ctxt.s )
             {
@@ -4786,12 +4787,12 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             if ( store )
             {
-                if ( ctxt.map.nr_entries <= ctxt.n + (e820.nr_map - i) )
+                if ( ctxt.map.nr_entries <= ctxt.n + (e820_map.nr_map - i) )
                     return -EINVAL;
-                if ( __copy_to_guest_offset(buffer, ctxt.n, e820.map + i, 1) )
+                if ( __copy_to_guest_offset(buffer, ctxt.n, e820_map.map + i, 1) )
                     return -EFAULT;
             }
-            ctxt.s = PFN_UP(e820.map[i].addr + e820.map[i].size);
+            ctxt.s = PFN_UP(e820_map.map[i].addr + e820_map.map[i].size);
         }
 
         if ( ctxt.s )
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 4b0b297c7e..76827f5f32 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -102,14 +102,14 @@ unsigned int __init arch_get_dma_bitsize(void)
 
 int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
 {
-    if ( idx >= e820.nr_map )
+    if ( idx >= e820_map.nr_map )
         return -ENOENT;
 
-    if ( e820.map[idx].type != E820_RAM )
+    if ( e820_map.map[idx].type != E820_RAM )
         return -ENODATA;
 
-    *start = e820.map[idx].addr;
-    *end = *start + e820.map[idx].size;
+    *start = e820_map.map[idx].addr;
+    *end = *start + e820_map.map[idx].size;
 
     return 0;
 }
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 80ae973d64..9c6003e374 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1163,7 +1163,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     }
     else if ( efi_enabled(EFI_BOOT) )
         memmap_type = "EFI";
-    else if ( (e820_raw.nr_map = 
+    else if ( (e820_raw.nr_map =
                    copy_bios_e820(e820_raw.map,
                                   ARRAY_SIZE(e820_raw.map))) != 0 )
     {
@@ -1300,13 +1300,13 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     }
 
     /* Create a temporary copy of the E820 map. */
-    memcpy(&boot_e820, &e820, sizeof(e820));
+    memcpy(&boot_e820, &e820_map, sizeof(e820_map));
 
     /* Early kexec reservation (explicit static start address). */
     nr_pages = 0;
-    for ( i = 0; i < e820.nr_map; i++ )
-        if ( e820.map[i].type == E820_RAM )
-            nr_pages += e820.map[i].size >> PAGE_SHIFT;
+    for ( i = 0; i < e820_map.nr_map; i++ )
+        if ( e820_map.map[i].type == E820_RAM )
+            nr_pages += e820_map.map[i].size >> PAGE_SHIFT;
     set_kexec_crash_area_size((u64)nr_pages << PAGE_SHIFT);
     kexec_reserve_area(&boot_e820);
 
@@ -1631,7 +1631,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
         unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
                               PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
 
-        if ( e > s ) 
+        if ( e > s )
             map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
                              _mfn(s), e - s, PAGE_HYPERVISOR);
     }
@@ -1677,9 +1677,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                      PAGE_HYPERVISOR_RO);
 
     nr_pages = 0;
-    for ( i = 0; i < e820.nr_map; i++ )
-        if ( e820.map[i].type == E820_RAM )
-            nr_pages += e820.map[i].size >> PAGE_SHIFT;
+    for ( i = 0; i < e820_map.nr_map; i++ )
+        if ( e820_map.map[i].type == E820_RAM )
+            nr_pages += e820_map.map[i].size >> PAGE_SHIFT;
     printk("System RAM: %luMB (%lukB)\n",
            nr_pages >> (20 - PAGE_SHIFT),
            nr_pages << (PAGE_SHIFT - 10));
@@ -1771,7 +1771,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     open_softirq(NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ, new_tlbflush_clock_period);
 
-    if ( opt_watchdog ) 
+    if ( opt_watchdog )
         nmi_watchdog = NMI_LOCAL_APIC;
 
     find_smp_config();
@@ -1983,7 +1983,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     do_initcalls();
 
-    if ( opt_watchdog ) 
+    if ( opt_watchdog )
         watchdog_setup();
 
     if ( !tboot_protect_mem_regions() )
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 3f70338e6e..bbd04978ae 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -301,11 +301,11 @@ void __init srat_parse_regions(paddr_t addr)
 	acpi_table_parse_srat(ACPI_SRAT_TYPE_MEMORY_AFFINITY,
 			      srat_parse_region, 0);
 
-	for (mask = srat_region_mask, i = 0; mask && i < e820.nr_map; i++) {
-		if (e820.map[i].type != E820_RAM)
+	for (mask = srat_region_mask, i = 0; mask && i < e820_map.nr_map; i++) {
+		if (e820_map.map[i].type != E820_RAM)
 			continue;
 
-		if (~mask & pdx_region_mask(e820.map[i].addr, e820.map[i].size))
+		if (~mask & pdx_region_mask(e820_map.map[i].addr, e820_map.map[i].size))
 			mask = 0;
 	}
 
diff --git a/xen/arch/x86/x86_64/mmconf-fam10h.c b/xen/arch/x86/x86_64/mmconf-fam10h.c
index a834ab3149..bbebf9219f 100644
--- a/xen/arch/x86/x86_64/mmconf-fam10h.c
+++ b/xen/arch/x86/x86_64/mmconf-fam10h.c
@@ -135,7 +135,7 @@ static void __init get_fam10h_pci_mmconf_base(void)
 		return;
 
 out:
-	if (e820_add_range(&e820, start, start + SIZE, E820_RESERVED))
+	if (e820_add_range(&e820_map, start, start + SIZE, E820_RESERVED))
 		fam10h_pci_mmconf_base = start;
 }
 
diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index 3b577c9b39..7ad9e12b8a 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -418,7 +418,7 @@ static int __init parse_ivmd_block(const struct acpi_ivrs_memory *ivmd_block)
 
         if ( type == RAM_TYPE_UNKNOWN )
         {
-            if ( e820_add_range(&e820, addr, addr + PAGE_SIZE,
+            if ( e820_add_range(&e820_map, addr, addr + PAGE_SIZE,
                                 E820_RESERVED) )
                 continue;
             AMD_IOMMU_ERROR("IVMD: page at %lx couldn't be reserved\n",
-- 
2.34.1