From nobody Mon Dec 15 19:01:14 2025
From: Dapeng Mi
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim,
	Ian Rogers, Adrian Hunter, Alexander Shishkin, Andi Kleen,
	Eranian Stephane
Cc: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Dapeng Mi, Zide Chen, Falcon Thomas, Xudong Hao, Dapeng Mi,
	kernel test robot
Subject: [Patch v9 02/12] perf/x86: Fix NULL event access and potential PEBS record loss
Date: Wed, 29 Oct 2025 18:21:26 +0800
Message-Id: <20251029102136.61364-3-dapeng1.mi@linux.intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20251029102136.61364-1-dapeng1.mi@linux.intel.com>
References: <20251029102136.61364-1-dapeng1.mi@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

When intel_pmu_drain_pebs_icl() is called to drain PEBS records, perf_event_overflow()
could be called to process the last PEBS record. perf_event_overflow() may
trigger the interrupt throttle and stop all events of the group, as the
call chain below shows:

  perf_event_overflow()
    -> __perf_event_overflow()
      -> __perf_event_account_interrupt()
        -> perf_event_throttle_group()
          -> perf_event_throttle()
            -> event->pmu->stop()
              -> x86_pmu_stop()

The side effect of stopping the events is that all the corresponding event
pointers in the cpuc->events[] array are cleared to NULL.

Assume there are two PEBS events (event a and event b) in a group. When
intel_pmu_drain_pebs_icl() calls perf_event_overflow() to process the last
PEBS record of event a, the interrupt throttle is triggered and the
pointers of both event a and event b are cleared to NULL. Then, when
intel_pmu_drain_pebs_icl() tries to process the last PEBS record of
event b, it dereferences a NULL pointer.

To avoid this issue, move the clearing of cpuc->events[] from
x86_pmu_stop() to x86_pmu_del(). This is safe since cpuc->active_mask or
cpuc->pebs_enabled is always checked before the event pointer is read
from cpuc->events[].
Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-lkp/202507042103.a15d2923-lkp@intel.com
Fixes: 9734e25fbf5a ("perf: Fix the throttle logic for a group")
Suggested-by: Peter Zijlstra
Signed-off-by: Dapeng Mi
---
 arch/x86/events/core.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 745caa6c15a3..74479f9d6eed 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1344,6 +1344,7 @@ static void x86_pmu_enable(struct pmu *pmu)
 			hwc->state |= PERF_HES_ARCH;
 
 			x86_pmu_stop(event, PERF_EF_UPDATE);
+			cpuc->events[hwc->idx] = NULL;
 		}
 
 		/*
@@ -1365,6 +1366,7 @@ static void x86_pmu_enable(struct pmu *pmu)
 			 * if cpuc->enabled = 0, then no wrmsr as
 			 * per x86_pmu_enable_event()
 			 */
+			cpuc->events[hwc->idx] = event;
 			x86_pmu_start(event, PERF_EF_RELOAD);
 		}
 		cpuc->n_added = 0;
@@ -1531,7 +1533,6 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 
 	event->hw.state = 0;
 
-	cpuc->events[idx] = event;
 	__set_bit(idx, cpuc->active_mask);
 	static_call(x86_pmu_enable)(event);
 	perf_event_update_userpage(event);
@@ -1610,7 +1611,6 @@ void x86_pmu_stop(struct perf_event *event, int flags)
 	if (test_bit(hwc->idx, cpuc->active_mask)) {
 		static_call(x86_pmu_disable)(event);
 		__clear_bit(hwc->idx, cpuc->active_mask);
-		cpuc->events[hwc->idx] = NULL;
 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
 		hwc->state |= PERF_HES_STOPPED;
 	}
@@ -1648,6 +1648,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	 * Not a TXN, therefore cleanup properly.
 	 */
 	x86_pmu_stop(event, PERF_EF_UPDATE);
+	cpuc->events[event->hw.idx] = NULL;
 
 	for (i = 0; i < cpuc->n_events; i++) {
 		if (event == cpuc->event_list[i])
-- 
2.34.1