Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1") has
caused quite a bit of fallout with gcc10, e.g. (there are at least two
more similar ones, and I didn't bother trying to find them all):
In file included from .../xen/include/xen/config.h:13,
from <command-line>:
core_parking.c: In function ‘core_parking_power’:
.../xen/include/asm/percpu.h:12:51: error: array subscript 1 is above array bounds of ‘long unsigned int[1]’ [-Werror=array-bounds]
12 | (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
.../xen/include/xen/compiler.h:141:29: note: in definition of macro ‘RELOC_HIDE’
141 | (typeof(ptr)) (__ptr + (off)); })
| ^~~
core_parking.c:133:39: note: in expansion of macro ‘per_cpu’
133 | core_tmp = cpumask_weight(per_cpu(cpu_core_mask, cpu));
| ^~~~~~~
In file included from .../xen/include/xen/percpu.h:4,
from .../xen/include/asm/msr.h:7,
from .../xen/include/asm/time.h:5,
from .../xen/include/xen/time.h:76,
from .../xen/include/xen/spinlock.h:4,
from .../xen/include/xen/cpu.h:5,
from core_parking.c:19:
.../xen/include/asm/percpu.h:6:22: note: while referencing ‘__per_cpu_offset’
6 | extern unsigned long __per_cpu_offset[NR_CPUS];
| ^~~~~~~~~~~~~~~~
One of the further errors even went as far as claiming that an array
index (range) of [0, 0] was outside the bounds of a [1] array, so
something fishy is pretty clearly going on there.
The compiler apparently wants to be able to see that the loop isn't
really a loop in order to avoid triggering such warnings, yet what
exactly makes it consider the loop exit condition constant and within
the [0, 1] range isn't obvious - using ((mask)->bits[0] & 1) instead of
cpumask_test_cpu() for example did _not_ help.
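
For reference, the access these diagnostics complain about ultimately
boils down to indexing the one-element __per_cpu_offset[] array with the
loop variable. A stripped-down sketch of the shape involved (illustration
only, not the actual Xen code, which goes through the per_cpu() /
RELOC_HIDE() macros):

    /* Illustration only, not the actual Xen code: with NR_CPUS == 1 the
     * array has a single element, so unless the compiler can prove the
     * index is always 0, -Werror=array-bounds can trigger on the access. */
    extern unsigned long __per_cpu_offset[1 /* NR_CPUS */];

    static inline unsigned long percpu_offset_of(unsigned int cpu)
    {
        /* gcc 10 warns here if it thinks cpu's value range may include 1 */
        return __per_cpu_offset[cpu];
    }
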
Re-instate a special form of for_each_cpu(), experimentally "proven" to
avoid the diagnostics.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
--- a/xen/include/xen/cpumask.h
+++ b/xen/include/xen/cpumask.h
@@ -368,10 +368,15 @@ static inline void free_cpumask_var(cpum
 #define FREE_CPUMASK_VAR(m) free_cpumask_var(m)
 #endif
 
+#if NR_CPUS > 1
 #define for_each_cpu(cpu, mask) \
     for ((cpu) = cpumask_first(mask); \
          (cpu) < nr_cpu_ids; \
          (cpu) = cpumask_next(cpu, mask))
+#else /* NR_CPUS == 1 */
+#define for_each_cpu(cpu, mask) \
+    for ((cpu) = 0; (cpu) < cpumask_test_cpu(0, mask); ++(cpu))
+#endif /* NR_CPUS */
 
 /*
  * The following particular system cpumasks and operations manage
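
For clarity, a sketch of how the NR_CPUS == 1 form behaves at a use site
(hypothetical caller; for_each_cpu() and cpumask_test_cpu() are the real
Xen names, process() is just a placeholder):

    unsigned int cpu;

    for_each_cpu ( cpu, mask )
        process(cpu);

    /* With NR_CPUS == 1 this expands (roughly) to:
     *
     *     for ((cpu) = 0; (cpu) < cpumask_test_cpu(0, mask); ++(cpu))
     *         process(cpu);
     *
     * cpumask_test_cpu(0, mask) yields 0 or 1, so the body runs at most
     * once and cpu is provably 0 throughout -- a bound the compiler can
     * see directly, which avoids the array-bounds diagnostics on accesses
     * like __per_cpu_offset[cpu]. */
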
On Wed, 2021-03-31 at 16:52 +0200, Jan Beulich wrote:
> Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1")
> has
> caused quite a bit of fallout with gcc10, e.g. (there are at least
> two
> more similar ones, and I didn't bother trying to find them all):
>
Oh, wow... Sorry about that. I was sure I had checked (and with gcc10),
but clearly I'm wrong.

> [...]
>
> Re-instate a special form of for_each_cpu(), experimentally "proven"
> to
> avoid the diagnostics.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
On 31.03.2021 18:55, Dario Faggioli wrote:
> On Wed, 2021-03-31 at 16:52 +0200, Jan Beulich wrote:
>> Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1")
>> has
>> caused quite a bit of fallout with gcc10, e.g. (there are at least
>> two
>> more similar ones, and I didn't bother trying to find them all):
>>
> Oh, wow... Sorry about that. I was sure I had checked (and with gcc10),
> but clearly I'm wrong.

Perhaps you did try a debug build, while I was seeing the issues in a
non-debug one?

>> Re-instate a special form of for_each_cpu(), experimentally "proven"
>> to
>> avoid the diagnostics.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks.

Jan
On Wed, Mar 31, 2021 at 04:52:47PM +0200, Jan Beulich wrote:
> Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1") has
> caused quite a bit of fallout with gcc10, e.g. (there are at least two
> more similar ones, and I didn't bother trying to find them all):
>
> In file included from .../xen/include/xen/config.h:13,
> from <command-line>:
> core_parking.c: In function ‘core_parking_power’:
> .../xen/include/asm/percpu.h:12:51: error: array subscript 1 is above array bounds of ‘long unsigned int[1]’ [-Werror=array-bounds]
> 12 | (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
> .../xen/include/xen/compiler.h:141:29: note: in definition of macro ‘RELOC_HIDE’
> 141 | (typeof(ptr)) (__ptr + (off)); })
> | ^~~
> core_parking.c:133:39: note: in expansion of macro ‘per_cpu’
> 133 | core_tmp = cpumask_weight(per_cpu(cpu_core_mask, cpu));
> | ^~~~~~~
> In file included from .../xen/include/xen/percpu.h:4,
> from .../xen/include/asm/msr.h:7,
> from .../xen/include/asm/time.h:5,
> from .../xen/include/xen/time.h:76,
> from .../xen/include/xen/spinlock.h:4,
> from .../xen/include/xen/cpu.h:5,
> from core_parking.c:19:
> .../xen/include/asm/percpu.h:6:22: note: while referencing ‘__per_cpu_offset’
> 6 | extern unsigned long __per_cpu_offset[NR_CPUS];
> | ^~~~~~~~~~~~~~~~

At this point, should we consider reverting the original fix from the
4.15 branch, so that we don't release something that's build broken
with gcc 10?

Roger.
On 01.04.2021 11:00, Roger Pau Monné wrote:
> On Wed, Mar 31, 2021 at 04:52:47PM +0200, Jan Beulich wrote:
>> Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1") has
>> caused quite a bit of fallout with gcc10, e.g. (there are at least two
>> more similar ones, and I didn't bother trying to find them all):
>>
>> In file included from .../xen/include/xen/config.h:13,
>> from <command-line>:
>> core_parking.c: In function ‘core_parking_power’:
>> .../xen/include/asm/percpu.h:12:51: error: array subscript 1 is above array bounds of ‘long unsigned int[1]’ [-Werror=array-bounds]
>> 12 | (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
>> .../xen/include/xen/compiler.h:141:29: note: in definition of macro ‘RELOC_HIDE’
>> 141 | (typeof(ptr)) (__ptr + (off)); })
>> | ^~~
>> core_parking.c:133:39: note: in expansion of macro ‘per_cpu’
>> 133 | core_tmp = cpumask_weight(per_cpu(cpu_core_mask, cpu));
>> | ^~~~~~~
>> In file included from .../xen/include/xen/percpu.h:4,
>> from .../xen/include/asm/msr.h:7,
>> from .../xen/include/asm/time.h:5,
>> from .../xen/include/xen/time.h:76,
>> from .../xen/include/xen/spinlock.h:4,
>> from .../xen/include/xen/cpu.h:5,
>> from core_parking.c:19:
>> .../xen/include/asm/percpu.h:6:22: note: while referencing ‘__per_cpu_offset’
>> 6 | extern unsigned long __per_cpu_offset[NR_CPUS];
>> | ^~~~~~~~~~~~~~~~
>
> At this point, should we consider reverting the original fix from the
> 4.15 branch, so that we don't release something that's build broken
> with gcc 10?

Well, I didn't propose reverting (or taking this fix) because I think
build breakage is better than runtime breakage. But in the end, Ian,
it's up to you.

Jan
On Thu, Apr 01, 2021 at 11:26:03AM +0200, Jan Beulich wrote:
> On 01.04.2021 11:00, Roger Pau Monné wrote:
> > On Wed, Mar 31, 2021 at 04:52:47PM +0200, Jan Beulich wrote:
> >> Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1") has
> >> caused quite a bit of fallout with gcc10, e.g. (there are at least two
> >> more similar ones, and I didn't bother trying to find them all):
> >>
> >> In file included from .../xen/include/xen/config.h:13,
> >> from <command-line>:
> >> core_parking.c: In function ‘core_parking_power’:
> >> .../xen/include/asm/percpu.h:12:51: error: array subscript 1 is above array bounds of ‘long unsigned int[1]’ [-Werror=array-bounds]
> >> 12 | (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
> >> .../xen/include/xen/compiler.h:141:29: note: in definition of macro ‘RELOC_HIDE’
> >> 141 | (typeof(ptr)) (__ptr + (off)); })
> >> | ^~~
> >> core_parking.c:133:39: note: in expansion of macro ‘per_cpu’
> >> 133 | core_tmp = cpumask_weight(per_cpu(cpu_core_mask, cpu));
> >> | ^~~~~~~
> >> In file included from .../xen/include/xen/percpu.h:4,
> >> from .../xen/include/asm/msr.h:7,
> >> from .../xen/include/asm/time.h:5,
> >> from .../xen/include/xen/time.h:76,
> >> from .../xen/include/xen/spinlock.h:4,
> >> from .../xen/include/xen/cpu.h:5,
> >> from core_parking.c:19:
> >> .../xen/include/asm/percpu.h:6:22: note: while referencing ‘__per_cpu_offset’
> >> 6 | extern unsigned long __per_cpu_offset[NR_CPUS];
> >> | ^~~~~~~~~~~~~~~~
> >
> > At this point, should we consider reverting the original fix from the
> > 4.15 branch, so that we don't release something that's build broken
> > with gcc 10?
>
> Well, I didn't propose reverting (or taking this fix) because I think
> build breakage is better than runtime breakage. But in the end, Ian,
> it's up to you.

Oh, right, sorry. The build issue only happens with NR_CPUS=1, in
which case I agree, there's no need to do anything in 4.15 IMO.

Sorry for bothering.

Roger.
Roger Pau Monné writes ("Re: Revert NR_CPUS=1 fix from 4.15 (was: Re: [PATCH] fix for_each_cpu() again for NR_CPUS=1)"):
> On Thu, Apr 01, 2021 at 11:26:03AM +0200, Jan Beulich wrote:
> > Well, I didn't propose reverting (or taking this fix) because I think
> > build breakage is better than runtime breakage. But in the end, Ian,
> > it's up to you.
>
> Oh, right, sorry. The build issue only happens with NR_CPUS=1, in
> which case I agree, there's no need to do anything in 4.15 IMO.

Oh. Right. I had the impression that the build breakage broke other
configurations too. Since you're saying that's not the case, please
disregard my earlier mail.

Ian.
Roger Pau Monné writes ("Revert NR_CPUS=1 fix from 4.15 (was: Re: [PATCH] fix for_each_cpu() again for NR_CPUS=1)"):
> At this point, should we consider reverting the original fix from the
> 4.15 branch, so that we don't release something that's build broken
> with gcc 10?

Yes. I think so.

Release-Acked-by: Ian Jackson <iwj@xenproject.org>

But please leave it to me to commit it. I will do so at or after
around 14:00 UK time (13:00 UTC) today unless someone objects.

Thanks,
Ian.