The left shift of the 32-bit int constant 1 is evaluated using 32-bit
arithmetic and the result is then passed as a 64-bit function argument. In
the case where i is 32 or more, this can lead to an overflow. Avoid this by
shifting using the BIT_ULL() macro instead.
Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
---
arch/x86/events/amd/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index d6f3703e4119..4386b10682ce 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1387,7 +1387,7 @@ static int __init amd_core_pmu_init(void)
 	 * numbered counter following it.
 	 */
 	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-		even_ctr_mask |= 1 << i;
+		even_ctr_mask |= BIT_ULL(i);
 
 	pair_constraint = (struct event_constraint)
 			  __EVENT_CONSTRAINT(0, even_ctr_mask, 0,
--
2.38.1
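
[Editorial note: the pitfall the patch addresses can be reproduced in a few
lines of plain C. The sketch below is illustrative only and is not part of
the patch; BIT_ULL() is stubbed locally with the 64-bit shift that the
kernel macro in include/linux/bits.h effectively expands to.]

	/*
	 * Standalone illustration (not part of the patch) of why "1 << i"
	 * is risky when the result feeds a 64-bit mask.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define BIT_ULL(nr)	(1ULL << (nr))	/* userspace stand-in for the kernel macro */

	int main(void)
	{
		int i = 31;	/* any shift count of 31 or more shows the problem */

		/*
		 * "1" is a signed 32-bit int, so "1 << i" is computed in
		 * 32-bit arithmetic before being widened to 64 bits. At
		 * i == 31 the shift already overflows the signed 32-bit
		 * range (common compilers yield INT_MIN, which sign-extends
		 * to 0xffffffff80000000); for i >= 32 the shift itself is
		 * undefined behaviour.
		 */
		uint64_t bad  = (uint64_t)(1 << i);
		uint64_t good = BIT_ULL(i);	/* shift done in 64-bit arithmetic */

		printf("1 << %d     -> %#llx\n", i, (unsigned long long)bad);
		printf("BIT_ULL(%d) -> %#llx\n", i, (unsigned long long)good);
		return 0;
	}

On a typical x86-64 toolchain this prints 0xffffffff80000000 for the first
line and 0x80000000 for the second, i.e. exactly the kind of stray high bits
that must not end up in the counter mask.
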
On Fri, Dec 2, 2022 at 5:52 AM Colin Ian King <colin.i.king@gmail.com> wrote:
>
> The left shift of the 32-bit int constant 1 is evaluated using 32-bit
> arithmetic and the result is then passed as a 64-bit function argument. In
> the case where i is 32 or more, this can lead to an overflow. Avoid this by
> shifting using the BIT_ULL() macro instead.
>
> Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
> Signed-off-by: Colin Ian King <colin.i.king@gmail.com>

Acked-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  arch/x86/events/amd/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
> index d6f3703e4119..4386b10682ce 100644
> --- a/arch/x86/events/amd/core.c
> +++ b/arch/x86/events/amd/core.c
> @@ -1387,7 +1387,7 @@ static int __init amd_core_pmu_init(void)
>  	 * numbered counter following it.
>  	 */
>  	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
> -		even_ctr_mask |= 1 << i;
> +		even_ctr_mask |= BIT_ULL(i);
>
>  	pair_constraint = (struct event_constraint)
>  			  __EVENT_CONSTRAINT(0, even_ctr_mask, 0,
> --
> 2.38.1
>
On 12/2/22 11:36 AM, Ian Rogers wrote:
> On Fri, Dec 2, 2022 at 5:52 AM Colin Ian King <colin.i.king@gmail.com> wrote:
>>
>> The left shift of the 32-bit int constant 1 is evaluated using 32-bit
>> arithmetic and the result is then passed as a 64-bit function argument. In
>> the case where i is 32 or more, this can lead to an overflow. Avoid this by
>> shifting using the BIT_ULL() macro instead.
>>
>> Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
>> Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
>
> Acked-by: Ian Rogers <irogers@google.com>

Acked-by: Kim Phillips <kim.phillips@amd.com>

Thanks,
Kim
The following commit has been merged into the perf/urgent branch of tip:
Commit-ID: 08245672cdc6505550d1a5020603b0a8d4a6dcc7
Gitweb: https://git.kernel.org/tip/08245672cdc6505550d1a5020603b0a8d4a6dcc7
Author: Colin Ian King <colin.i.king@gmail.com>
AuthorDate: Fri, 02 Dec 2022 13:51:49
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 27 Dec 2022 12:44:00 +01:00
perf/x86/amd: fix potential integer overflow on shift of an int
The left shift of the 32-bit int constant 1 is evaluated using 32-bit
arithmetic and the result is then passed as a 64-bit function argument. In
the case where i is 32 or more, this can lead to an overflow. Avoid this by
shifting using the BIT_ULL() macro instead.
Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/20221202135149.1797974-1-colin.i.king@gmail.com
---
arch/x86/events/amd/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index d6f3703..4386b10 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1387,7 +1387,7 @@ static int __init amd_core_pmu_init(void)
 	 * numbered counter following it.
 	 */
 	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-		even_ctr_mask |= 1 << i;
+		even_ctr_mask |= BIT_ULL(i);
 
 	pair_constraint = (struct event_constraint)
 			  __EVENT_CONSTRAINT(0, even_ctr_mask, 0,
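
[Editorial note: the loop above only ever sets even-numbered counter bits.
A small userspace sketch, assuming a hypothetical num_counters of 6 (the
real value depends on the CPU's PMU), shows the mask the loop builds.]

	/*
	 * Sketch of the mask construction above, run in userspace.
	 * num_counters is assumed to be 6 purely for illustration.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define BIT_ULL(nr)	(1ULL << (nr))	/* stand-in for the kernel macro */

	int main(void)
	{
		uint64_t even_ctr_mask = 0;
		int num_counters = 6;	/* assumption for illustration only */
		int i;

		/* every even counter pairs with the odd-numbered counter after it */
		for (i = 0; i < num_counters - 1; i += 2)
			even_ctr_mask |= BIT_ULL(i);

		/* prints 0x15: bits 0, 2 and 4 set */
		printf("even_ctr_mask = %#llx\n", (unsigned long long)even_ctr_mask);
		return 0;
	}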