Date: Mon, 19 Jun 2023 13:38:21 -0600
Message-Id: <20230619193821.2710944-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1] mm/mglru: make memcg_lru->lock irq safe
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>, syzbot+87c490fd2be656269b6a@syzkaller.appspotmail.com

lru_gen_rotate_memcg() can be called in softirq context if
memory.soft_limit_in_bytes is set. This requires memcg_lru->lock to be
irq safe. This problem only affects memcg v1.
Reported-by: syzbot+87c490fd2be656269b6a@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=87c490fd2be656269b6a
Fixes: e4dde56cd208 ("mm: multi-gen LRU: per-node lru_gen_folio lists")
Signed-off-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Yosry Ahmed
---
 mm/vmscan.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 45d17c7cc555..27f90896f789 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4759,10 +4759,11 @@ static void lru_gen_rotate_memcg(struct lruvec *lruvec, int op)
 {
 	int seg;
 	int old, new;
+	unsigned long flags;
 	int bin = get_random_u32_below(MEMCG_NR_BINS);
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
-	spin_lock(&pgdat->memcg_lru.lock);
+	spin_lock_irqsave(&pgdat->memcg_lru.lock, flags);
 
 	VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
 
@@ -4797,7 +4798,7 @@ static void lru_gen_rotate_memcg(struct lruvec *lruvec, int op)
 	if (!pgdat->memcg_lru.nr_memcgs[old] && old == get_memcg_gen(pgdat->memcg_lru.seq))
 		WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
 
-	spin_unlock(&pgdat->memcg_lru.lock);
+	spin_unlock_irqrestore(&pgdat->memcg_lru.lock, flags);
 }
 
 void lru_gen_online_memcg(struct mem_cgroup *memcg)
@@ -4810,7 +4811,7 @@ void lru_gen_online_memcg(struct mem_cgroup *memcg)
 		struct pglist_data *pgdat = NODE_DATA(nid);
 		struct lruvec *lruvec = get_lruvec(memcg, nid);
 
-		spin_lock(&pgdat->memcg_lru.lock);
+		spin_lock_irq(&pgdat->memcg_lru.lock);
 
 		VM_WARN_ON_ONCE(!hlist_nulls_unhashed(&lruvec->lrugen.list));
 
@@ -4821,7 +4822,7 @@ void lru_gen_online_memcg(struct mem_cgroup *memcg)
 
 		lruvec->lrugen.gen = gen;
 
-		spin_unlock(&pgdat->memcg_lru.lock);
+		spin_unlock_irq(&pgdat->memcg_lru.lock);
 	}
 }
 
@@ -4845,7 +4846,7 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
 		struct pglist_data *pgdat = NODE_DATA(nid);
 		struct lruvec *lruvec = get_lruvec(memcg, nid);
 
-		spin_lock(&pgdat->memcg_lru.lock);
+		spin_lock_irq(&pgdat->memcg_lru.lock);
 
 		VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
 
@@ -4857,7 +4858,7 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
 		if (!pgdat->memcg_lru.nr_memcgs[gen] && gen == get_memcg_gen(pgdat->memcg_lru.seq))
 			WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
 
-		spin_unlock(&pgdat->memcg_lru.lock);
+		spin_unlock_irq(&pgdat->memcg_lru.lock);
 	}
 }
 
-- 
2.41.0.185.g7c58973941-goog