From: Chuang Wang
Cc: Chuang Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Simon Horman, Stanislav Fomichev, Kuniyuki Iwashima,
 Samiullah Khawaja, Hangbin Liu, Krishna Kumar, Neal Cardwell,
 Willem de Bruijn, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net v2] net: reduce RFS/ARFS flow updates by checking LLC affinity
Date: Tue, 14 Apr 2026 11:59:20 +0800
Message-ID: <20260414035931.45692-1-nashuiliang@gmail.com>

The current implementation of rps_record_sock_flow() updates the flow
table every time a socket is processed on a different CPU. In high-load
scenarios, especially with Accelerated RFS (ARFS), this triggers
frequent flow-steering updates via ndo_rx_flow_steer.

For drivers like mlx5 that implement hardware flow steering, these
constant updates lead to significant contention on internal driver
locks (e.g., arfs_lock). This contention often becomes a performance
bottleneck that outweighs the steering benefits.

This patch introduces a cache-aware update strategy: the flow record is
only updated if the flow migrates across Last Level Cache (LLC)
boundaries. This minimizes expensive hardware reconfigurations while
preserving cache locality for the application.

A new sysctl, net.core.rps_feat_llc_affinity, is added to toggle this
feature.
Performance Test Results:

The patch was tested in a K8s environment (AMD CPU 128*2, 16-core Pod
with CPU pinning, mlx5 NIC) using brpc[1] echo_server and rpc_press.

rpc_press commands:

  for i in {1..8}; do
      ./rpc_press -proto=./echo.proto -method=example.EchoService.Echo \
          -server=:8000 -input='{"message":"hello"}' -qps=0 \
          -thread_num=512 -connection_type=pooled &
  done

Frequency of mlx5e_rx_flow_steer, monitored via funccount[2]
(/usr/share/bcc/tools/funccount -i 1 mlx5e_rx_flow_steer):

  Before: ~335,000 counts/sec
  After:  ~23,000 counts/sec (reduced by ~93%)

System metrics (after enabling rps_feat_llc_affinity):

  CPU Utilization: 38% -> 32%
  CPU PSI (Pressure Stall Information): 20% -> 10%

These results demonstrate that filtering updates by LLC affinity
significantly reduces driver lock contention and improves overall CPU
efficiency under heavy network load.

[1] https://github.com/apache/brpc/
[2] https://github.com/iovisor/bcc/blob/master/tools/funccount.py

Signed-off-by: Chuang Wang
---
v1 -> v2: add rps_feat_llc_affinity; add brpc tests

 include/net/rps.h          | 18 ++--------
 net/core/dev.c             | 72 ++++++++++++++++++++++++++++++++++++++
 net/core/sysctl_net_core.c | 34 ++++++++++++++++++
 3 files changed, 108 insertions(+), 16 deletions(-)

diff --git a/include/net/rps.h b/include/net/rps.h
index e33c6a2fa8bb..37bbb7009c36 100644
--- a/include/net/rps.h
+++ b/include/net/rps.h
@@ -12,6 +12,7 @@
 
 extern struct static_key_false rps_needed;
 extern struct static_key_false rfs_needed;
+extern struct static_key_false rps_feat_llc_affinity;
 
 /*
  * This structure holds an RPS map which can be of variable length. The
@@ -55,22 +56,7 @@ struct rps_sock_flow_table {
 
 #define RPS_NO_CPU 0xffff
 
-static inline void rps_record_sock_flow(rps_tag_ptr tag_ptr, u32 hash)
-{
-	unsigned int index = hash & rps_tag_to_mask(tag_ptr);
-	u32 val = hash & ~net_hotdata.rps_cpu_mask;
-	struct rps_sock_flow_table *table;
-
-	/* We only give a hint, preemption can change CPU under us */
-	val |= raw_smp_processor_id();
-
-	table = rps_tag_to_table(tag_ptr);
-	/* The following WRITE_ONCE() is paired with the READ_ONCE()
-	 * here, and another one in get_rps_cpu().
-	 */
-	if (READ_ONCE(table[index].ent) != val)
-		WRITE_ONCE(table[index].ent, val);
-}
+void rps_record_sock_flow(rps_tag_ptr tag_ptr, u32 hash);
 
 static inline void _sock_rps_record_flow_hash(__u32 hash)
 {
diff --git a/net/core/dev.c b/net/core/dev.c
index 203dc36aaed5..630a7f21d8de 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4964,6 +4964,8 @@ struct static_key_false rps_needed __read_mostly;
 EXPORT_SYMBOL(rps_needed);
 struct static_key_false rfs_needed __read_mostly;
 EXPORT_SYMBOL(rfs_needed);
+struct static_key_false rps_feat_llc_affinity __read_mostly;
+EXPORT_SYMBOL(rps_feat_llc_affinity);
 
 static u32 rfs_slot(u32 hash, rps_tag_ptr tag_ptr)
 {
@@ -5175,6 +5177,76 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 	return cpu;
 }
 
+/**
+ * rps_record_cond - Determine if RPS flow table should be updated
+ * @old_val: Previous flow record value
+ * @new_val: Target flow record value
+ *
+ * Returns true if the record needs an update.
+ */
+static inline bool rps_record_cond(u32 old_val, u32 new_val)
+{
+	u32 old_cpu = old_val & net_hotdata.rps_cpu_mask;
+	u32 new_cpu = new_val & net_hotdata.rps_cpu_mask;
+
+	if (old_val == new_val)
+		return false;
+
+	/*
+	 * RPS LLC Affinity Feature:
+	 * Reduce RFS/ARFS flow updates by checking LLC affinity.
+	 *
+	 * Frequent flow table updates can trigger constant hardware steering
+	 * reconfigurations (e.g., ndo_rx_flow_steer), leading to significant
+	 * contention on driver internal locks (like mlx5's arfs_lock).
+	 *
+	 * This strategy only updates the flow record if it migrates across
+	 * LLC boundaries. This minimizes expensive hardware updates while
+	 * preserving cache locality for the application.
+	 */
+	if (static_branch_unlikely(&rps_feat_llc_affinity)) {
+		/* Force update if the recorded CPU is invalid or has gone offline */
+		if (old_cpu >= nr_cpu_ids || !cpu_active(old_cpu))
+			return true;
+
+		/*
+		 * Force an update if the current task is no longer permitted
+		 * to run on the old_cpu.
+		 */
+		if (!cpumask_test_cpu(old_cpu, current->cpus_ptr))
+			return true;
+
+		/*
+		 * If CPUs do not share a cache, allow the update to prevent
+		 * expensive remote memory accesses and cache misses.
+		 */
+		if (!cpus_share_cache(old_cpu, new_cpu))
+			return true;
+
+		return false;
+	}
+
+	return true;
+}
+
+void rps_record_sock_flow(rps_tag_ptr tag_ptr, u32 hash)
+{
+	unsigned int index = hash & rps_tag_to_mask(tag_ptr);
+	u32 val = hash & ~net_hotdata.rps_cpu_mask;
+	struct rps_sock_flow_table *table;
+
+	/* We only give a hint, preemption can change CPU under us */
+	val |= raw_smp_processor_id();
+
+	table = rps_tag_to_table(tag_ptr);
+	/* The following WRITE_ONCE() is paired with the READ_ONCE()
+	 * here, and another one in get_rps_cpu().
+	 */
+	if (rps_record_cond(READ_ONCE(table[index].ent), val))
+		WRITE_ONCE(table[index].ent, val);
+}
+EXPORT_SYMBOL(rps_record_sock_flow);
+
 #ifdef CONFIG_RFS_ACCEL
 
 /**
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index 502705e04649..dbc99aea7bb0 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -210,6 +210,32 @@ static int rps_sock_flow_sysctl(const struct ctl_table *table, int write,
 	kvfree_rcu_mightsleep(tofree);
 	return ret;
 }
+
+static int rps_feat_llc_affinity_sysctl(const struct ctl_table *table, int write,
+					void *buffer, size_t *lenp, loff_t *ppos)
+{
+	u8 curr_state;
+	int ret;
+	const struct ctl_table tmp = {
+		.data = &curr_state,
+		.maxlen = sizeof(curr_state),
+		.mode = table->mode,
+		.extra1 = table->extra1,
+		.extra2 = table->extra2
+	};
+
+	curr_state = static_branch_unlikely(&rps_feat_llc_affinity) ? 1 : 0;
+
+	ret = proc_dou8vec_minmax(&tmp, write, buffer, lenp, ppos);
+	if (write && ret == 0) {
+		if (curr_state && !static_branch_unlikely(&rps_feat_llc_affinity))
+			static_branch_enable(&rps_feat_llc_affinity);
+		else if (!curr_state && static_branch_unlikely(&rps_feat_llc_affinity))
+			static_branch_disable(&rps_feat_llc_affinity);
+	}
+
+	return ret;
+}
 #endif /* CONFIG_RPS */
 
 #ifdef CONFIG_NET_FLOW_LIMIT
@@ -531,6 +557,14 @@ static struct ctl_table net_core_table[] = {
 		.mode = 0644,
 		.proc_handler = rps_sock_flow_sysctl
 	},
+	{
+		.procname = "rps_feat_llc_affinity",
+		.maxlen = sizeof(u8),
+		.mode = 0644,
+		.proc_handler = rps_feat_llc_affinity_sysctl,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE
+	},
 #endif
 #ifdef CONFIG_NET_FLOW_LIMIT
 	{
-- 
2.47.3