From nobody Wed Sep 17 19:54:49 2025
Date: Tue, 12 Aug 2025 17:58:30 +0000
In-Reply-To: <20250812175848.512446-1-kuniyu@google.com>
References: <20250812175848.512446-1-kuniyu@google.com>
Message-ID: <20250812175848.512446-13-kuniyu@google.com>
X-Mailer: git-send-email 2.51.0.rc0.205.g4a044479a3-goog
X-Mailing-List: mptcp@lists.linux.dev
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Subject: [PATCH v3 net-next 12/12] net-memcg: Decouple controlled memcg from
 global protocol memory accounting.
From: Kuniyuki Iwashima
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Neal Cardwell ,
 Paolo Abeni , Willem de Bruijn , Matthieu Baerts , Mat Martineau ,
 Johannes Weiner , Michal Hocko , Roman Gushchin , Shakeel Butt ,
 Andrew Morton , Michal Koutný , Tejun Heo
Cc: Simon Horman , Geliang Tang , Muchun Song , Mina Almasry ,
 Kuniyuki Iwashima , Kuniyuki Iwashima , netdev@vger.kernel.org,
 mptcp@lists.linux.dev, cgroups@vger.kernel.org, linux-mm@kvack.org

Some protocols (e.g., TCP, UDP) implement memory accounting for socket
buffers and charge memory to per-protocol global counters pointed to by
sk->sk_prot->memory_allocated.

When running under a non-root cgroup, this memory is also charged to
the memcg as "sock" in memory.stat.

Even when a memcg controls memory usage, sockets of such protocols are
still subject to the global limits (e.g., /proc/sys/net/ipv4/tcp_mem).

This makes it difficult to accurately estimate and configure
appropriate global limits, especially in multi-tenant environments.

If all workloads were guaranteed to be controlled under memcg, the
issue could be worked around by setting tcp_mem[0~2] to UINT_MAX.

In reality, this assumption does not always hold, and processes that
belong to the root cgroup or opt out of memcg can consume memory up to
the global limit, becoming a noisy neighbour.

Let's decouple memcg from the global per-protocol memory accounting if
it has a finite memory.max (!= "max").
We still charge memory to both the memcg and the per-protocol counters
if memcg has "max" in memory.max, because TCP allows only 10% of
physical memory by default.  This simplifies memcg configuration while
keeping the global limits within a reasonable range.

If mem_cgroup_sk_isolated(sk) returns true, the per-protocol memory
accounting is skipped.

In inet_csk_accept(), we need to reclaim counts that are already
charged for child sockets because we do not allocate sk->sk_memcg
until accept().

Note that trace_sock_exceed_buf_limit() will always show 0 as
accounted for the isolated sockets, but this can be obtained via
memory.stat.

Tested with a script that creates local socket pairs and send()s a
bunch of data without recv()ing.

Setup:

  # mkdir /sys/fs/cgroup/test
  # echo $$ >> /sys/fs/cgroup/test/cgroup.procs
  # sysctl -q net.ipv4.tcp_mem="1000 1000 1000"

Without setting memory.max:

  # prlimit -n=524288:524288 bash -c "python3 pressure.py" &
  # cat /sys/fs/cgroup/test/memory.stat | grep sock
  sock 22642688

  # ss -tn | head -n 5
  State Recv-Q Send-Q Local Address:Port  Peer Address:Port
  ESTAB 2000   0          127.0.0.1:34479     127.0.0.1:53188
  ESTAB 2000   0          127.0.0.1:34479     127.0.0.1:49972
  ESTAB 2000   0          127.0.0.1:34479     127.0.0.1:53868
  ESTAB 2000   0          127.0.0.1:34479     127.0.0.1:53554

  # nstat | grep Pressure || echo no pressure
  TcpExtTCPMemoryPressures        1                  0.0

With memory.max:

  # echo $((64 * 1024 ** 3)) > /sys/fs/cgroup/test/memory.max
  # prlimit -n=524288:524288 bash -c "python3 pressure.py" &
  # cat /sys/fs/cgroup/test/memory.stat | grep sock
  sock 2757468160

  # ss -tn | head -n 5
  State Recv-Q Send-Q Local Address:Port  Peer Address:Port
  ESTAB 111000 0          127.0.0.1:36019     127.0.0.1:49026
  ESTAB 110000 0          127.0.0.1:36019     127.0.0.1:45630
  ESTAB 110000 0          127.0.0.1:36019     127.0.0.1:44870
  ESTAB 111000 0          127.0.0.1:36019     127.0.0.1:45274

  # nstat | grep Pressure || echo no pressure
  no pressure

Signed-off-by: Kuniyuki Iwashima
---
v3:
  * Fix build failure for kTLS

v2:
  * Add sk_should_enter_memory_pressure() for
    tcp_enter_memory_pressure() calls not in core
  * Update example in changelog
---
 include/net/proto_memory.h      | 15 ++++++--
 include/net/tcp.h               | 10 ++++--
 net/core/sock.c                 | 64 ++++++++++++++++++++++-----------
 net/ipv4/inet_connection_sock.c | 18 ++++++++--
 net/ipv4/tcp.c                  |  3 +-
 net/ipv4/tcp_output.c           | 10 ++++--
 net/mptcp/protocol.c            |  4 ++-
 net/tls/tls_device.c            |  4 ++-
 8 files changed, 94 insertions(+), 34 deletions(-)

diff --git a/include/net/proto_memory.h b/include/net/proto_memory.h
index 8e91a8fa31b5..8e8432b13515 100644
--- a/include/net/proto_memory.h
+++ b/include/net/proto_memory.h
@@ -31,13 +31,22 @@ static inline bool sk_under_memory_pressure(const struct sock *sk)
 	if (!sk->sk_prot->memory_pressure)
 		return false;
 
-	if (mem_cgroup_sk_enabled(sk) &&
-	    mem_cgroup_sk_under_memory_pressure(sk))
-		return true;
+	if (mem_cgroup_sk_enabled(sk)) {
+		if (mem_cgroup_sk_under_memory_pressure(sk))
+			return true;
+
+		if (mem_cgroup_sk_isolated(sk))
+			return false;
+	}
 
 	return !!READ_ONCE(*sk->sk_prot->memory_pressure);
 }
 
+static inline bool sk_should_enter_memory_pressure(struct sock *sk)
+{
+	return !mem_cgroup_sk_enabled(sk) || !mem_cgroup_sk_isolated(sk);
+}
+
 static inline long proto_memory_allocated(const struct proto *prot)
 {
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 2936b8175950..0191a4585bba 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -275,9 +275,13 @@ extern unsigned long tcp_memory_pressure;
 /* optimized version of sk_under_memory_pressure() for TCP sockets */
 static inline bool tcp_under_memory_pressure(const struct sock *sk)
 {
-	if (mem_cgroup_sk_enabled(sk) &&
-	    mem_cgroup_sk_under_memory_pressure(sk))
-		return true;
+	if (mem_cgroup_sk_enabled(sk)) {
+		if (mem_cgroup_sk_under_memory_pressure(sk))
+			return true;
+
+		if (mem_cgroup_sk_isolated(sk))
+			return false;
+	}
 
 	return READ_ONCE(tcp_memory_pressure);
 }
diff --git a/net/core/sock.c b/net/core/sock.c
index ab6953d295df..755540215570 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1046,17 +1046,21 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
 	if (!charged)
 		return -ENOMEM;
 
-	/* pre-charge to forward_alloc */
-	sk_memory_allocated_add(sk, pages);
-	allocated = sk_memory_allocated(sk);
-	/* If the system goes into memory pressure with this
-	 * precharge, give up and return error.
-	 */
-	if (allocated > sk_prot_mem_limits(sk, 1)) {
-		sk_memory_allocated_sub(sk, pages);
-		mem_cgroup_sk_uncharge(sk, pages);
-		return -ENOMEM;
+	if (!mem_cgroup_sk_isolated(sk)) {
+		/* pre-charge to forward_alloc */
+		sk_memory_allocated_add(sk, pages);
+		allocated = sk_memory_allocated(sk);
+
+		/* If the system goes into memory pressure with this
+		 * precharge, give up and return error.
+		 */
+		if (allocated > sk_prot_mem_limits(sk, 1)) {
+			sk_memory_allocated_sub(sk, pages);
+			mem_cgroup_sk_uncharge(sk, pages);
+			return -ENOMEM;
+		}
 	}
+
 	sk_forward_alloc_add(sk, pages << PAGE_SHIFT);
 
 	WRITE_ONCE(sk->sk_reserved_mem,
@@ -3153,8 +3157,11 @@ bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
 	if (likely(skb_page_frag_refill(32U, pfrag, sk->sk_allocation)))
 		return true;
 
-	sk_enter_memory_pressure(sk);
+	if (sk_should_enter_memory_pressure(sk))
+		sk_enter_memory_pressure(sk);
+
 	sk_stream_moderate_sndbuf(sk);
+
 	return false;
 }
 EXPORT_SYMBOL(sk_page_frag_refill);
@@ -3267,18 +3274,30 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
 	bool memcg_enabled = false, charged = false;
 	struct proto *prot = sk->sk_prot;
-	long allocated;
-
-	sk_memory_allocated_add(sk, amt);
-	allocated = sk_memory_allocated(sk);
+	long allocated = 0;
 
 	if (mem_cgroup_sk_enabled(sk)) {
+		bool isolated = mem_cgroup_sk_isolated(sk);
+
 		memcg_enabled = true;
 		charged = mem_cgroup_sk_charge(sk, amt, gfp_memcg_charge());
-		if (!charged)
+
+		if (isolated && charged)
+			return 1;
+
+		if (!charged) {
+			if (!isolated) {
+				sk_memory_allocated_add(sk, amt);
+				allocated = sk_memory_allocated(sk);
+			}
+
 			goto suppress_allocation;
+		}
 	}
 
+	sk_memory_allocated_add(sk, amt);
+	allocated = sk_memory_allocated(sk);
+
 	/* Under limit. */
 	if (allocated <= sk_prot_mem_limits(sk, 0)) {
 		sk_leave_memory_pressure(sk);
@@ -3357,7 +3376,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 
 	trace_sock_exceed_buf_limit(sk, prot, allocated, kind);
 
-	sk_memory_allocated_sub(sk, amt);
+	if (allocated)
+		sk_memory_allocated_sub(sk, amt);
 
 	if (charged)
 		mem_cgroup_sk_uncharge(sk, amt);
@@ -3396,11 +3416,15 @@ EXPORT_SYMBOL(__sk_mem_schedule);
  */
 void __sk_mem_reduce_allocated(struct sock *sk, int amount)
 {
-	sk_memory_allocated_sub(sk, amount);
-
-	if (mem_cgroup_sk_enabled(sk))
+	if (mem_cgroup_sk_enabled(sk)) {
 		mem_cgroup_sk_uncharge(sk, amount);
 
+		if (mem_cgroup_sk_isolated(sk))
+			return;
+	}
+
+	sk_memory_allocated_sub(sk, amount);
+
 	if (sk_under_global_memory_pressure(sk) &&
 	    (sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)))
 		sk_leave_memory_pressure(sk);
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 0ef1eacd539d..9d56085f7f54 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 #if IS_ENABLED(CONFIG_IPV6)
 /* match_sk*_wildcard == true: IPV6_ADDR_ANY equals to any IPv6 addresses
@@ -710,7 +711,6 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 
 	if (mem_cgroup_sockets_enabled) {
 		gfp_t gfp = GFP_KERNEL | __GFP_NOFAIL;
-		int amt = 0;
 
 		/* atomically get the memory usage, set and charge the
 		 * newsk->sk_memcg.
@@ -719,15 +719,27 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 
 		mem_cgroup_sk_alloc(newsk);
 		if (mem_cgroup_from_sk(newsk)) {
+			int amt;
+
 			/* The socket has not been accepted yet, no need
 			 * to look at newsk->sk_wmem_queued.
 			 */
 			amt = sk_mem_pages(newsk->sk_forward_alloc +
 					   atomic_read(&newsk->sk_rmem_alloc));
+			if (amt) {
+				/* This amt is already charged globally to
+				 * sk_prot->memory_allocated due to lack of
+				 * sk_memcg until accept(), thus we need to
+				 * reclaim it here if newsk is isolated.
+				 */
+				if (mem_cgroup_sk_isolated(newsk))
+					sk_memory_allocated_sub(newsk, amt);
+
+				mem_cgroup_sk_charge(newsk, amt, gfp);
+			}
 		}
 
-		if (amt)
-			mem_cgroup_sk_charge(newsk, amt, gfp);
 		kmem_cache_charge(newsk, gfp);
 
 		release_sock(newsk);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 71a956fbfc55..dcbd49e2f8af 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -908,7 +908,8 @@ struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
 		}
 		__kfree_skb(skb);
 	} else {
-		sk->sk_prot->enter_memory_pressure(sk);
+		if (sk_should_enter_memory_pressure(sk))
+			tcp_enter_memory_pressure(sk);
 		sk_stream_moderate_sndbuf(sk);
 	}
 	return NULL;
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index dfbac0876d96..f7aa86661219 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3574,12 +3574,18 @@ void sk_forced_mem_schedule(struct sock *sk, int size)
 	delta = size - sk->sk_forward_alloc;
 	if (delta <= 0)
 		return;
+
 	amt = sk_mem_pages(delta);
 	sk_forward_alloc_add(sk, amt << PAGE_SHIFT);
-	sk_memory_allocated_add(sk, amt);
 
-	if (mem_cgroup_sk_enabled(sk))
+	if (mem_cgroup_sk_enabled(sk)) {
 		mem_cgroup_sk_charge(sk, amt, gfp_memcg_charge() | __GFP_NOFAIL);
+
+		if (mem_cgroup_sk_isolated(sk))
+			return;
+	}
+
+	sk_memory_allocated_add(sk, amt);
 }
 
 /* Send a FIN. The caller locks the socket for us.
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 9a287b75c1b3..1a4089b05a16 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #if IS_ENABLED(CONFIG_MPTCP_IPV6)
 #include
@@ -1016,8 +1017,9 @@ static void mptcp_enter_memory_pressure(struct sock *sk)
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 
-		if (first)
+		if (first && sk_should_enter_memory_pressure(sk))
 			tcp_enter_memory_pressure(ssk);
+
 		sk_stream_moderate_sndbuf(ssk);
 
 		first = false;
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index f672a62a9a52..6696ef837116 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -371,7 +372,8 @@ static int tls_do_allocation(struct sock *sk,
 	if (!offload_ctx->open_record) {
 		if (unlikely(!skb_page_frag_refill(prepend_size, pfrag,
 						   sk->sk_allocation))) {
-			READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
+			if (sk_should_enter_memory_pressure(sk))
+				READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
 			sk_stream_moderate_sndbuf(sk);
 			return -ENOMEM;
 		}
-- 
2.51.0.rc0.205.g4a044479a3-goog