From: Li RongQing <lirongqing@baidu.com>
Replace list_for_each_entry_rcu() with list_for_each_entry_srcu()
when traversing the encl->mm_list protected by SRCU. This ensures
proper synchronization annotation and avoids potential lockdep
warnings about incorrect RCU usage.
The list is protected by encl->srcu, not RCU, so the SRCU-specific
iterator with srcu_read_lock_held() annotation is required.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
arch/x86/kernel/cpu/sgx/encl.c | 12 ++++++++----
arch/x86/kernel/cpu/sgx/main.c | 3 ++-
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index cf149b9..3c488a0 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -822,7 +822,8 @@ static struct sgx_encl_mm *sgx_encl_find_mm(struct sgx_encl *encl,
idx = srcu_read_lock(&encl->srcu);
- list_for_each_entry_rcu(tmp, &encl->mm_list, list) {
+ list_for_each_entry_srcu(tmp, &encl->mm_list, list,
+ srcu_read_lock_held(&encl->srcu)) {
if (tmp->mm == mm) {
encl_mm = tmp;
break;
@@ -933,7 +934,8 @@ const cpumask_t *sgx_encl_cpumask(struct sgx_encl *encl)
idx = srcu_read_lock(&encl->srcu);
- list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+ list_for_each_entry_srcu(encl_mm, &encl->mm_list, list,
+ srcu_read_lock_held(&encl->srcu)) {
if (!mmget_not_zero(encl_mm->mm))
continue;
@@ -1018,7 +1020,8 @@ static struct mem_cgroup *sgx_encl_get_mem_cgroup(struct sgx_encl *encl)
*/
idx = srcu_read_lock(&encl->srcu);
- list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+ list_for_each_entry_srcu(encl_mm, &encl->mm_list, list,
+ srcu_read_lock_held(&encl->srcu)) {
if (!mmget_not_zero(encl_mm->mm))
continue;
@@ -1212,7 +1215,8 @@ void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
idx = srcu_read_lock(&encl->srcu);
- list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+ list_for_each_entry_srcu(encl_mm, &encl->mm_list, list,
+ srcu_read_lock_held(&encl->srcu)) {
if (!mmget_not_zero(encl_mm->mm))
continue;
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index dc73194..ead0405 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -120,7 +120,8 @@ static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
idx = srcu_read_lock(&encl->srcu);
- list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+ list_for_each_entry_srcu(encl_mm, &encl->mm_list, list,
+ srcu_read_lock_held(&encl->srcu)) {
if (!mmget_not_zero(encl_mm->mm))
continue;
--
2.9.4
On 2/4/26 17:53, lirongqing wrote:
> Replace list_for_each_entry_rcu() with list_for_each_entry_srcu()
> when traversing the encl->mm_list protected by SRCU. This ensures
> proper synchronization annotation and avoids potential lockdep
> warnings about incorrect RCU usage.

Does lockdep trip on this today?

> The list is protected by encl->srcu, not RCU, so the SRCU-specific
> iterator with srcu_read_lock_held() annotation is required.

From a quick look, list_for_each_entry_rcu() still seems *really* common
under SRCU. It also looks like list_for_each_entry_srcu() is a relatively
recent (2020) addition to the kernel.

So, this wasn't a bug when the SGX code went in, but started causing a
problem at some point? Did lockdep add some RCU warnings or something
that made this necessary?

The patch seems logical and all. I just feel like I'm missing the bigger
picture.
> On 2/4/26 17:53, lirongqing wrote:
> > Replace list_for_each_entry_rcu() with list_for_each_entry_srcu() when
> > traversing the encl->mm_list protected by SRCU. This ensures proper
> > synchronization annotation and avoids potential lockdep warnings about
> > incorrect RCU usage.
>
> Does lockdep trip on this today?
>
> > The list is protected by encl->srcu, not RCU, so the SRCU-specific
> > iterator with srcu_read_lock_held() annotation is required.
>
> From a quick look, list_for_each_entry_rcu() still seems *really* common
> under SRCU. It also looks like list_for_each_entry_srcu() is a relatively recent
> (2020) addition to the kernel.
>
> So, this wasn't a bug when the SGX code went in, but started causing a
> problem at some point? Did lockdep add some RCU warnings or something
> that made this necessary?
>
> The patch seems logical and all. I just feel like I'm missing the bigger picture.
It seems this commit added the check:
commit 28875945ba98d1b47a8a706812b6494d165bb0a0
Author: Joel Fernandes (Google) <joel@joelfernandes.org>
Date: Tue Jul 16 18:12:22 2019 -0400
rcu: Add support for consolidated-RCU reader checking
This commit adds RCU-reader checks to list_for_each_entry_rcu() and
hlist_for_each_entry_rcu(). These checks are optional, and are indicated
by a lockdep expression passed to a new optional argument to these two
macros. If this optional lockdep expression is omitted, these two macros
act as before, checking for an RCU read-side critical section.
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
[ paulmck: Update to eliminate return within macro and update comment. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
And there have been several similar fixes for the same pattern:
d681107 nvme-multipath: fix suspicious RCU usage warning
5dd18f0 nvme/multipath: Fix RCU list traversal to use SRCU primitive
6d1c699 nvme/host: Fix RCU list traversal to use SRCU primitive
6a0c617 KVM: eventfd: Fix false positive RCU usage warning
df9a30f kvm: mmu: page_track: Fix RCU list API usage
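For reference, the two iterator forms differ only in the lockdep expression; a sketch of the call shapes (kernel context assumed, not a standalone-buildable snippet):

```c
/* RCU flavor: without the optional cond argument, lockdep checks for an
 * ordinary rcu_read_lock() critical section, which the SGX readers do
 * not hold -- hence the false-positive "suspicious RCU usage" splat. */
list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
        /* ... */
}

/* SRCU flavor: the lockdep expression is a mandatory fourth argument,
 * so readers inside srcu_read_lock(&encl->srcu) are annotated as the
 * legitimate protection for the traversal. */
list_for_each_entry_srcu(encl_mm, &encl->mm_list, list,
                         srcu_read_lock_held(&encl->srcu)) {
        /* ... */
}
```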
-Li, RongQing