From nobody Fri Apr 17 06:13:57 2026
From: Ravi Jonnalagadda
To: sj@kernel.org, damon@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, bijan311@gmail.com,
	ajayjoshi@micron.com, honggyu.kim@sk.com, yunjeong.mun@sk.com,
	Ravi Jonnalagadda
Subject: [RFC PATCH v3 1/4] mm/damon/sysfs: set goal_tuner after scheme creation
Date: Mon, 23 Feb 2026 12:32:29 +0000
Message-ID: <20260223123232.12851-2-ravis.opensrc@gmail.com>
In-Reply-To: <20260223123232.12851-1-ravis.opensrc@gmail.com>
References: <20260223123232.12851-1-ravis.opensrc@gmail.com>

damon_new_scheme() always sets quota.goal_tuner to CONSIST (the default)
regardless of what was passed in the quota struct, so the goal_tuner
setting made via sysfs was silently ignored.

The comment in damon_new_scheme() says "quota.goals and .goal_tuner
should be separately set by caller", but the sysfs code wasn't doing
this.  Add an explicit assignment of goal_tuner after damon_new_scheme()
returns so the user's setting is actually applied.

Without this fix, setting goal_tuner to "temporal" via sysfs has no
effect: the scheme always uses the CONSIST (feedback-loop) tuner, which
overshoots when the goal is reached instead of stopping immediately.
Signed-off-by: Ravi Jonnalagadda
Reviewed-by: SeongJae Park
---
 mm/damon/sysfs-schemes.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
index bbea908074bb..fe2e3b2db9e1 100644
--- a/mm/damon/sysfs-schemes.c
+++ b/mm/damon/sysfs-schemes.c
@@ -2809,6 +2809,9 @@ static struct damos *damon_sysfs_mk_scheme(
 	if (!scheme)
 		return NULL;
 
+	/* Set goal_tuner after damon_new_scheme() as it defaults to CONSIST */
+	scheme->quota.goal_tuner = sysfs_quotas->goal_tuner;
+
 	err = damos_sysfs_add_quota_score(sysfs_quotas->goals, &scheme->quota);
 	if (err) {
 		damon_destroy_scheme(scheme);
-- 
2.43.0

From nobody Fri Apr 17 06:13:57 2026
From: Ravi Jonnalagadda
To: sj@kernel.org, damon@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, bijan311@gmail.com,
	ajayjoshi@micron.com, honggyu.kim@sk.com, yunjeong.mun@sk.com,
	Ravi Jonnalagadda
Subject: [RFC PATCH v3 2/4] mm/damon: fix esz=0 quota bypass allowing unlimited migration
Date: Mon, 23 Feb 2026 12:32:30 +0000
Message-ID: <20260223123232.12851-3-ravis.opensrc@gmail.com>
In-Reply-To: <20260223123232.12851-1-ravis.opensrc@gmail.com>
References: <20260223123232.12851-1-ravis.opensrc@gmail.com>

When the TEMPORAL goal tuner sets esz_bp=0 to signal that a goal has
been achieved, the quota check was not actually stopping migration.
The condition was:

	if (quota->esz && quota->charged_sz >= quota->esz)

When esz=0, this evaluates to (false && ...) = false, so the continue
is never executed and migration proceeds without limit.  Change the
logic to:

	if (!quota->esz || quota->charged_sz >= quota->esz)

Now when esz=0, (!0 = true) causes the continue to execute, properly
stopping migration when the goal is achieved.

This is critical for the TEMPORAL tuner to work correctly: without this
fix, setting esz=0 has no effect and migration continues until all hot
memory is moved, overshooting the target goal.

Signed-off-by: Ravi Jonnalagadda
---
 mm/damon/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 614f1f08eee9..b438355ab54a 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2394,8 +2394,8 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 		if (!s->wmarks.activated)
 			continue;
 
-		/* Check the quota */
-		if (quota->esz && quota->charged_sz >= quota->esz)
+		/* Check the quota: skip if esz=0 (goal achieved) or exhausted */
+		if (!quota->esz || quota->charged_sz >= quota->esz)
 			continue;
 
 		if (damos_skip_charged_region(t, r, s, c->min_region_sz))
-- 
2.43.0
From nobody Fri Apr 17 06:13:57 2026
From: Ravi Jonnalagadda
To: sj@kernel.org, damon@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, bijan311@gmail.com,
	ajayjoshi@micron.com, honggyu.kim@sk.com, yunjeong.mun@sk.com,
	Ravi Jonnalagadda
Subject: [RFC PATCH v3 3/4] mm/damon: add node_eligible_mem_bp and node_ineligible_mem_bp goal metrics
Date: Mon, 23 Feb 2026 12:32:31 +0000
Message-ID: <20260223123232.12851-4-ravis.opensrc@gmail.com>
In-Reply-To: <20260223123232.12851-1-ravis.opensrc@gmail.com>
References: <20260223123232.12851-1-ravis.opensrc@gmail.com>

Add new quota goal metrics for memory tiering that track the
distribution of scheme-eligible (hot) memory across NUMA nodes:

- DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP: ratio of hot memory on a node
- DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP: ratio of hot memory NOT on a node

These complementary metrics enable push-pull migration schemes that
maintain a target hot memory distribution.  For example, to keep 30% of
hot memory on CXL node 1:

- PUSH scheme (DRAM→CXL): node_eligible_mem_bp, nid=1, target=3000
  Activates when node 1 has less than 30% of the hot memory
- PULL scheme (CXL→DRAM): node_ineligible_mem_bp, nid=1, target=7000
  Activates when node 1 has more than 30% of the hot memory

Together with the TEMPORAL goal tuner, the two schemes converge to an
equilibrium at the target distribution.

The metrics use the detected eligible bytes per node, calculated by
summing the sizes of the regions that match the scheme's access pattern
(size, nr_accesses, age) on each NUMA node.
Suggested-by: SeongJae Park
Signed-off-by: Ravi Jonnalagadda
---
 include/linux/damon.h    |   6 ++
 mm/damon/core.c          | 123 ++++++++++++++++++++++++++++++++++++++-
 mm/damon/sysfs-schemes.c |  10 ++++
 3 files changed, 137 insertions(+), 2 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index ee2d0879c292..6df716533fbf 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -191,6 +191,8 @@ enum damos_action {
  * @DAMOS_QUOTA_NODE_MEM_FREE_BP:	MemFree ratio of a node.
  * @DAMOS_QUOTA_NODE_MEMCG_USED_BP:	MemUsed ratio of a node for a cgroup.
  * @DAMOS_QUOTA_NODE_MEMCG_FREE_BP:	MemFree ratio of a node for a cgroup.
+ * @DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP:	Scheme-eligible memory ratio of a node.
+ * @DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP:	Scheme-ineligible memory ratio of a node.
  * @DAMOS_QUOTA_ACTIVE_MEM_BP:	Active to total LRU memory ratio.
  * @DAMOS_QUOTA_INACTIVE_MEM_BP:	Inactive to total LRU memory ratio.
  * @NR_DAMOS_QUOTA_GOAL_METRICS:	Number of DAMOS quota goal metrics.
@@ -204,6 +206,8 @@ enum damos_quota_goal_metric {
 	DAMOS_QUOTA_NODE_MEM_FREE_BP,
 	DAMOS_QUOTA_NODE_MEMCG_USED_BP,
 	DAMOS_QUOTA_NODE_MEMCG_FREE_BP,
+	DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP,
+	DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP,
 	DAMOS_QUOTA_ACTIVE_MEM_BP,
 	DAMOS_QUOTA_INACTIVE_MEM_BP,
 	NR_DAMOS_QUOTA_GOAL_METRICS,
@@ -555,6 +559,7 @@ struct damos_migrate_dests {
  * @ops_filters:	ops layer handling &struct damos_filter objects list.
  * @last_applied:	Last @action applied ops-managing entity.
  * @stat:	Statistics of this scheme.
+ * @eligible_bytes_per_node:	Scheme-eligible bytes per NUMA node.
 * @max_nr_snapshots:	Upper limit of nr_snapshots stat.
 * @list:	List head for siblings.
 *
@@ -644,6 +649,7 @@ struct damos {
 	struct list_head ops_filters;
 	void *last_applied;
 	struct damos_stat stat;
+	unsigned long eligible_bytes_per_node[MAX_NUMNODES];
 	unsigned long max_nr_snapshots;
 	struct list_head list;
 };
diff --git a/mm/damon/core.c b/mm/damon/core.c
index b438355ab54a..3e1cb850f067 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2544,6 +2544,111 @@ static unsigned long damos_get_node_memcg_used_bp(
 }
 #endif
 
+#ifdef CONFIG_NUMA
+/*
+ * damos_scheme_uses_eligible_metrics() - Check if scheme uses eligible metrics.
+ * @s:	The scheme
+ *
+ * Returns true if any quota goal uses node_eligible_mem_bp or
+ * node_ineligible_mem_bp metrics, which require eligible bytes calculation.
+ */
+static bool damos_scheme_uses_eligible_metrics(struct damos *s)
+{
+	struct damos_quota_goal *goal;
+	struct damos_quota *quota = &s->quota;
+
+	damos_for_each_quota_goal(goal, quota) {
+		if (goal->metric == DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP ||
+		    goal->metric == DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP)
+			return true;
+	}
+	return false;
+}
+
+/*
+ * damos_calc_eligible_bytes_per_node() - Calculate eligible bytes per node.
+ * @c:	The DAMON context
+ * @s:	The scheme
+ *
+ * Calculates scheme-eligible bytes per NUMA node based on access pattern
+ * matching.  A region is eligible if it matches the scheme's access pattern
+ * (size, nr_accesses, age).
+ */
+static void damos_calc_eligible_bytes_per_node(struct damon_ctx *c,
+		struct damos *s)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+	phys_addr_t paddr;
+	int nid;
+
+	memset(s->eligible_bytes_per_node, 0,
+			sizeof(s->eligible_bytes_per_node));
+
+	damon_for_each_target(t, c) {
+		damon_for_each_region(r, t) {
+			if (!__damos_valid_target(r, s))
+				continue;
+			paddr = (phys_addr_t)r->ar.start * c->addr_unit;
+			nid = pfn_to_nid(PHYS_PFN(paddr));
+			if (nid >= 0 && nid < MAX_NUMNODES)
+				s->eligible_bytes_per_node[nid] +=
+					damon_sz_region(r) * c->addr_unit;
+		}
+	}
+}
+
+static unsigned long damos_get_node_eligible_mem_bp(struct damos *s, int nid)
+{
+	unsigned long total_eligible = 0;
+	unsigned long node_eligible;
+	int n;
+
+	if (nid < 0 || nid >= MAX_NUMNODES)
+		return 0;
+
+	for_each_online_node(n)
+		total_eligible += s->eligible_bytes_per_node[n];
+
+	if (!total_eligible)
+		return 0;
+
+	node_eligible = s->eligible_bytes_per_node[nid];
+
+	return mult_frac(node_eligible, 10000, total_eligible);
+}
+
+static unsigned long damos_get_node_ineligible_mem_bp(struct damos *s, int nid)
+{
+	unsigned long eligible_bp = damos_get_node_eligible_mem_bp(s, nid);
+
+	if (eligible_bp == 0)
+		return 10000;
+
+	return 10000 - eligible_bp;
+}
+#else
+static bool damos_scheme_uses_eligible_metrics(struct damos *s)
+{
+	return false;
+}
+
+static void damos_calc_eligible_bytes_per_node(struct damon_ctx *c,
+		struct damos *s)
+{
+}
+
+static unsigned long damos_get_node_eligible_mem_bp(struct damos *s, int nid)
+{
+	return 0;
+}
+
+static unsigned long damos_get_node_ineligible_mem_bp(struct damos *s, int nid)
+{
+	return 0;
+}
+#endif
+
 /*
  * Returns LRU-active or inactive memory to total LRU memory size ratio.
  */
@@ -2562,7 +2667,8 @@ static unsigned int damos_get_in_active_mem_bp(bool active_ratio)
 	return mult_frac(inactive, 10000, total);
 }
 
-static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal)
+static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal,
+		struct damos *s)
 {
 	u64 now_psi_total;
 
@@ -2584,6 +2690,14 @@ static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal)
 	case DAMOS_QUOTA_NODE_MEMCG_FREE_BP:
 		goal->current_value = damos_get_node_memcg_used_bp(goal);
 		break;
+	case DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP:
+		goal->current_value = damos_get_node_eligible_mem_bp(s,
+				goal->nid);
+		break;
+	case DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP:
+		goal->current_value = damos_get_node_ineligible_mem_bp(s,
+				goal->nid);
+		break;
 	case DAMOS_QUOTA_ACTIVE_MEM_BP:
 	case DAMOS_QUOTA_INACTIVE_MEM_BP:
 		goal->current_value = damos_get_in_active_mem_bp(
@@ -2597,11 +2711,12 @@ static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal)
 /* Return the highest score since it makes schemes least aggressive */
 static unsigned long damos_quota_score(struct damos_quota *quota)
 {
+	struct damos *s = container_of(quota, struct damos, quota);
 	struct damos_quota_goal *goal;
 	unsigned long highest_score = 0;
 
 	damos_for_each_quota_goal(goal, quota) {
-		damos_set_quota_goal_current_value(goal);
+		damos_set_quota_goal_current_value(goal, s);
 		highest_score = max(highest_score,
 				mult_frac(goal->current_value,
 					10000, goal->target_value));
@@ -2693,6 +2808,10 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
 	if (!quota->ms && !quota->sz && list_empty(&quota->goals))
 		return;
 
+	/* Calculate eligible bytes per node for quota goal metrics */
+	if (damos_scheme_uses_eligible_metrics(s))
+		damos_calc_eligible_bytes_per_node(c, s);
+
 	/* First charge window */
 	if (!quota->total_charged_sz && !quota->charged_from) {
 		quota->charged_from = jiffies;
diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
index fe2e3b2db9e1..232b33f5cbfb 100644
--- a/mm/damon/sysfs-schemes.c
+++ b/mm/damon/sysfs-schemes.c
@@ -1079,6 +1079,14 @@ struct damos_sysfs_qgoal_metric_name damos_sysfs_qgoal_metric_names[] = {
 		.metric = DAMOS_QUOTA_NODE_MEMCG_FREE_BP,
 		.name = "node_memcg_free_bp",
 	},
+	{
+		.metric = DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP,
+		.name = "node_eligible_mem_bp",
+	},
+	{
+		.metric = DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP,
+		.name = "node_ineligible_mem_bp",
+	},
 	{
 		.metric = DAMOS_QUOTA_ACTIVE_MEM_BP,
 		.name = "active_mem_bp",
@@ -2669,6 +2677,8 @@ static int damos_sysfs_add_quota_score(
 		break;
 	case DAMOS_QUOTA_NODE_MEM_USED_BP:
 	case DAMOS_QUOTA_NODE_MEM_FREE_BP:
+	case DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP:
+	case DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP:
 		goal->nid = sysfs_goal->nid;
 		break;
 	case DAMOS_QUOTA_NODE_MEMCG_USED_BP:
-- 
2.43.0
From nobody Fri Apr 17 06:13:57 2026
From: Ravi Jonnalagadda
To: sj@kernel.org, damon@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, bijan311@gmail.com,
	ajayjoshi@micron.com, honggyu.kim@sk.com, yunjeong.mun@sk.com,
	Ravi Jonnalagadda
Subject: [RFC PATCH v3 4/4] mm/damon: add PA-mode cache for eligible memory detection lag
Date: Mon, 23 Feb 2026 12:32:32 +0000
Message-ID: <20260223123232.12851-5-ravis.opensrc@gmail.com>
In-Reply-To: <20260223123232.12851-1-ravis.opensrc@gmail.com>
References: <20260223123232.12851-1-ravis.opensrc@gmail.com>
In PA-mode, DAMON needs time to re-detect hot memory at new physical
addresses after migration. This causes the goal metrics to temporarily
show incorrect values until detection catches up.

Add an eligible cache mechanism to compensate for this detection lag:

- Track migration deltas per node using a rolling window that
  automatically expires old data
- Use direction-aware adjustment: for target nodes (receiving memory),
  use max(detected, predicted) to ensure migrated memory is counted
  even before detection catches up; for source nodes (losing memory),
  use predicted values when detection shows unreliable low values
- Maintain the zero-sum property across nodes to preserve total
  eligible memory
- Include a cooldown mechanism to keep the cache active while detection
  stabilizes after migration stops
- Add time-based expiry to clear stale cache data when no migration
  occurs for a configured period

The cache uses max_eligible tracking to handle detection oscillation,
prioritizing peak observed values over potentially stale snapshots. A
threshold check prevents quota oscillation when detection swings
between zero and small values.

Signed-off-by: Ravi Jonnalagadda
---
 include/linux/damon.h    |  45 +++++
 mm/damon/core.c          | 421 +++++++++++++++++++++++++++++++++++----
 mm/damon/sysfs-schemes.c |  30 +++
 3 files changed, 460 insertions(+), 36 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 6df716533fbf..230f95910aab 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -541,6 +541,49 @@ struct damos_migrate_dests {
 	size_t nr_dests;
 };
 
+#define DAMOS_ELIGIBLE_CACHE_SLOTS 4
+#define DAMOS_ELIGIBLE_CACHE_COOLDOWN 3
+#define DAMOS_ELIGIBLE_CACHE_TIMEOUT_MS 10000	/* 10 seconds */
+
+/**
+ * struct damos_eligible_cache - Cache for bridging detection lag after migration.
+ * @base_eligible:	Snapshot of eligible_bytes_per_node at cache creation.
+ * @max_eligible:	Maximum detected eligible seen while cache active.
+ * @migration_delta:	Net bytes migrated TO each node per slot (negative = FROM).
+ * @current_slot:	Current slot index in the rolling window.
+ * @cooldown:		Intervals remaining before cache can deactivate.
+ * @active:		True if cache has recent migration data.
+ * @last_migration_jiffies:	Timestamp of last migration for time-based expiry.
+ *
+ * For PA-mode migration, DAMON needs time to re-detect hot memory at new
+ * physical addresses after migration. This cache tracks recent migrations
+ * using a rolling window, allowing goal metric calculation to account for
+ * detection lag. The rolling window automatically expires old migrations
+ * after DAMOS_ELIGIBLE_CACHE_SLOTS intervals.
+ *
+ * The cache maintains zero-sum property: bytes subtracted from source node
+ * equal bytes added to target node, preserving total eligible memory.
+ *
+ * max_eligible tracks the highest detected eligible value seen while the cache
+ * is active. This provides a fallback when both base_eligible and current
+ * detection are 0 due to detection oscillation timing.
+ *
+ * Time-based expiry: The cache clears all slots and deactivates if no migration
+ * occurs for DAMOS_ELIGIBLE_CACHE_TIMEOUT_MS milliseconds. This prevents stale
+ * delta data from persisting indefinitely across test runs or after migration
+ * completes.
+ */
+struct damos_eligible_cache {
+	unsigned long base_eligible[MAX_NUMNODES];
+	unsigned long max_eligible[MAX_NUMNODES];
+	long migration_delta[DAMOS_ELIGIBLE_CACHE_SLOTS][MAX_NUMNODES];
+	unsigned int current_slot;
+	unsigned int cooldown;
+	bool active;
+	unsigned long last_migration_jiffies;
+	unsigned long timeout_ms;	/* Configurable timeout, 0 = use default */
+};
+
 /**
  * struct damos - Represents a Data Access Monitoring-based Operation Scheme.
  * @pattern: Access pattern of target regions.
@@ -560,6 +603,7 @@ struct damos_migrate_dests {
 * @last_applied:	Last @action applied ops-managing entity.
 * @stat:		Statistics of this scheme.
 * @eligible_bytes_per_node:	Scheme-eligible bytes per NUMA node.
+ * @eligible_cache:	Cache for detection lag compensation.
 * @max_nr_snapshots:	Upper limit of nr_snapshots stat.
 * @list:		List head for siblings.
 *
@@ -650,6 +694,7 @@ struct damos {
 	void *last_applied;
 	struct damos_stat stat;
 	unsigned long eligible_bytes_per_node[MAX_NUMNODES];
+	struct damos_eligible_cache eligible_cache;
 	unsigned long max_nr_snapshots;
 	struct list_head list;
 };
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 3e1cb850f067..4d39b5da2865 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -488,6 +488,8 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
 	scheme->migrate_dests = (struct damos_migrate_dests){};
 	scheme->target_nid = target_nid;
 
+	memset(&scheme->eligible_cache, 0, sizeof(scheme->eligible_cache));
+
 	return scheme;
 }
 
@@ -2311,6 +2313,11 @@ static void damos_walk_cancel(struct damon_ctx *ctx)
 	mutex_unlock(&ctx->walk_control_lock);
 }
 
+/* Forward declarations for eligible cache management */
+static bool damos_scheme_uses_eligible_metrics(struct damos *s);
+static void damos_update_eligible_cache(struct damos *s, int target_nid,
+		int source_nid, unsigned long sz_applied);
+
 static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 		struct damon_region *r, struct damos *s)
 {
@@ -2375,6 +2382,19 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 	if (s->action != DAMOS_STAT)
 		r->age = 0;
 
+	/*
+	 * Update eligible cache for migration actions. The source_nid is
+	 * derived from the region's physical address before migration.
+	 */
+	if (sz_applied > 0 && damos_scheme_uses_eligible_metrics(s) &&
+	    (s->action == DAMOS_MIGRATE_HOT || s->action == DAMOS_MIGRATE_COLD)) {
+		phys_addr_t paddr = (phys_addr_t)r->ar.start * c->addr_unit;
+		int source_nid = pfn_to_nid(PHYS_PFN(paddr));
+
+		damos_update_eligible_cache(s, s->target_nid, source_nid,
+				sz_applied);
+	}
+
 update_stat:
 	damos_update_stat(s, sz, sz_applied, sz_ops_filter_passed);
 }
@@ -2530,21 +2550,7 @@ static unsigned long damos_get_node_memcg_used_bp(
 	numerator = i.totalram - used_pages;
 	return mult_frac(numerator, 10000, i.totalram);
 }
-#else
-static __kernel_ulong_t damos_get_node_mem_bp(
-		struct damos_quota_goal *goal)
-{
-	return 0;
-}
-
-static unsigned long damos_get_node_memcg_used_bp(
-		struct damos_quota_goal *goal)
-{
-	return 0;
-}
-#endif
 
-#ifdef CONFIG_NUMA
 /*
 * damos_scheme_uses_eligible_metrics() - Check if scheme uses eligible metrics.
 * @s: The scheme
@@ -2565,6 +2571,10 @@ static bool damos_scheme_uses_eligible_metrics(struct damos *s)
 	return false;
 }
 
+/* Forward declarations for cache-adjusted eligible calculations */
+static long damos_get_total_delta(struct damos *s, int nid);
+static unsigned long damos_get_effective_eligible(struct damos *s, int nid);
+
 /*
 * damos_calc_eligible_bytes_per_node() - Calculate eligible bytes per node.
 * @c: The DAMON context
@@ -2572,7 +2582,8 @@ static bool damos_scheme_uses_eligible_metrics(struct damos *s)
 *
 * Calculates scheme-eligible bytes per NUMA node based on access pattern
 * matching. A region is eligible if it matches the scheme's access pattern
- * (size, nr_accesses, age).
+ * (size, nr_accesses, age). This does NOT apply address filters - it shows
+ * all memory that matches access patterns regardless of source/target nodes.
 */
 static void damos_calc_eligible_bytes_per_node(struct damon_ctx *c,
 		struct damos *s)
@@ -2587,6 +2598,7 @@ static void damos_calc_eligible_bytes_per_node(struct damon_ctx *c,
 
 	damon_for_each_target(t, c) {
 		damon_for_each_region(r, t) {
+			/* Only check access pattern, NOT address filters */
 			if (!__damos_valid_target(r, s))
 				continue;
 			paddr = (phys_addr_t)r->ar.start * c->addr_unit;
@@ -2596,38 +2608,352 @@ static void damos_calc_eligible_bytes_per_node(struct damon_ctx *c,
 					damon_sz_region(r) * c->addr_unit;
 		}
 	}
+
+	/*
+	 * Update max_eligible tracking when cache is active. This captures
+	 * peak detection values during migration window.
+	 */
+	if (s->eligible_cache.active) {
+		for_each_online_node(nid) {
+			if (s->eligible_bytes_per_node[nid] >
+					s->eligible_cache.max_eligible[nid])
+				s->eligible_cache.max_eligible[nid] =
+					s->eligible_bytes_per_node[nid];
+		}
+	}
+}
+
+static void damos_refresh_cache_base(struct damos *s)
+{
+	int nid;
+
+	for_each_online_node(nid) {
+		s->eligible_cache.base_eligible[nid] =
+			s->eligible_bytes_per_node[nid];
+		s->eligible_cache.max_eligible[nid] = 0;
+	}
+}
+
+static long damos_get_total_delta(struct damos *s, int nid)
+{
+	long total = 0;
+	int slot;
+
+	for (slot = 0; slot < DAMOS_ELIGIBLE_CACHE_SLOTS; slot++)
+		total += s->eligible_cache.migration_delta[slot][nid];
+
+	return total;
+}
+
+/*
+ * damos_get_effective_eligible() - Get cache-adjusted eligible bytes.
+ * @s: The scheme
+ * @nid: Node ID
+ *
+ * Returns eligible bytes adjusted for detection lag. Uses direction-aware
+ * logic: max() for nodes that received memory (target), min() for nodes
+ * that lost memory (source). This prevents both over-counting and under-
+ * counting while preserving the total across all nodes.
+ */
+static unsigned long damos_get_effective_eligible(struct damos *s, int nid)
+{
+	unsigned long detected, predicted, base;
+	long delta;
+
+	if (nid < 0 || nid >= MAX_NUMNODES)
+		return 0;
+
+	detected = s->eligible_bytes_per_node[nid];
+
+	/* Cache inactive - use detection directly */
+	if (!s->eligible_cache.active)
+		return detected;
+
+	delta = damos_get_total_delta(s, nid);
+
+	/* No migration involving this node */
+	if (delta == 0)
+		return detected;
+
+	base = s->eligible_cache.base_eligible[nid];
+
+	if (delta > 0) {
+		/* Target node: memory added, detection lagging behind reality */
+		predicted = base + delta;
+		return max(detected, predicted);
+	} else {
+		/*
+		 * Source node: memory removed, detection may show stale values.
+		 * Use base_eligible (snapshot at cache activation) for the
+		 * prediction to maintain zero-sum property with target nodes.
+		 *
+		 * Note: We intentionally do NOT use max_seen here because it
+		 * would break zero-sum. max_seen captures the highest detection
+		 * which may include memory that has since been migrated away.
+		 * Using it would prevent source reduction, making cur_value
+		 * unable to reach the goal.
+		 */
+		unsigned long removed = (unsigned long)(-delta);
+		unsigned long max_seen = s->eligible_cache.max_eligible[nid];
+
+		/*
+		 * Use base as the prediction anchor. If base is 0 (cache just
+		 * activated), fall back to detected as a reasonable starting
+		 * point.
+		 */
+		if (base == 0 && detected > 0)
+			base = detected;
+
+		predicted = (removed > base) ? 0 : base - removed;
+
+		/*
+		 * If detected is 0 or significantly below predicted, detection
+		 * is at an oscillation trough due to PA-mode sampling noise.
+		 * Trust the prediction rather than the unreliable low detected
+		 * value. Also use max_seen as a sanity check - if detected is
+		 * below max_seen but above predicted, detection is recovering
+		 * and we should trust it.
+		 */
+		if (detected == 0)
+			return predicted;
+
+		/*
+		 * If detected dropped significantly below what we've seen,
+		 * it's likely oscillation. Use predicted to smooth it out.
+		 */
+		if (max_seen > 0 && detected < max_seen / 4 && predicted > detected)
+			return predicted;
+
+		/*
+		 * If detected has grown significantly beyond base, new hot
+		 * memory has appeared since cache activation. The cache
+		 * snapshot is stale, so trust detection over the stale
+		 * prediction. This prevents grossly underestimating source
+		 * memory when the workload creates new hot regions.
+		 */
+		if (detected > base * 2)
+			return detected;
+
+		return min(detected, predicted);
+	}
+}
+
+/*
+ * damos_get_total_effective_eligible() - Sum effective eligible across nodes.
+ * @s: The scheme
+ *
+ * Used as denominator for goal metrics. Zero-sum property of cache ensures
+ * this equals the true total of hot memory.
+ */
+static unsigned long damos_get_total_effective_eligible(struct damos *s)
+{
+	unsigned long total = 0;
+	int nid;
+
+	for_each_online_node(nid)
+		total += damos_get_effective_eligible(s, nid);
+
+	return total;
+}
 
 static unsigned long damos_get_node_eligible_mem_bp(struct damos *s, int nid)
 {
-	unsigned long total_eligible = 0;
+	unsigned long total_eligible;
 	unsigned long node_eligible;
-	int n;
 
 	if (nid < 0 || nid >= MAX_NUMNODES)
 		return 0;
 
-	for_each_online_node(n)
-		total_eligible += s->eligible_bytes_per_node[n];
+	/* Use effective eligible which compensates for detection lag */
+	total_eligible = damos_get_total_effective_eligible(s);
 
+	/*
+	 * If no eligible memory anywhere, return 0. The caller
+	 * (damos_set_quota_goal_current_value) should skip updating
+	 * cur_value when total_eligible=0 to avoid incorrect adjustments.
+	 */
 	if (!total_eligible)
 		return 0;
 
-	node_eligible = s->eligible_bytes_per_node[nid];
+	node_eligible = damos_get_effective_eligible(s, nid);
 
 	return mult_frac(node_eligible, 10000, total_eligible);
 }
 
+/*
+ * damos_get_node_ineligible_mem_bp() - Get ineligible memory ratio for a node.
+ * @s: The DAMOS scheme.
+ * @nid: The NUMA node ID.
+ *
+ * Calculate what percentage of total scheme-eligible (hot) memory is NOT on
+ * the specified node. For PUSH schemes migrating from N0 to N1, this metric
+ * with nid=0 represents "what % of hot memory has been pushed away from N0".
+ * Uses cache-adjusted effective eligible bytes to compensate for detection lag.
+ *
+ * Returns: Basis points (0-10000) of total eligible memory NOT on this node.
+ *	    Returns 10000 if eligible_bp=0 (all hot memory elsewhere or no data).
+ *	    Note: Caller should skip using this when total_eligible=0.
+ */
 static unsigned long damos_get_node_ineligible_mem_bp(struct damos *s, int nid)
 {
 	unsigned long eligible_bp = damos_get_node_eligible_mem_bp(s, nid);
 
+	/*
+	 * When eligible_bp=0, either:
+	 * - total_eligible=0: caller should skip (detection failed)
+	 * - total_eligible>0: all hot memory is on other nodes (100% migrated)
+	 */
 	if (eligible_bp == 0)
 		return 10000;
 
 	return 10000 - eligible_bp;
 }
+
+/*
+ * damos_update_eligible_cache() - Track migration for goal metric adjustment.
+ * @s: The scheme
+ * @target_nid: Destination node
+ * @source_nid: Source node (derived from region)
+ * @sz_applied: Bytes successfully migrated
+ *
+ * Updates the rolling window cache when migration happens. The delta is
+ * zero-sum: bytes subtracted from source equal bytes added to target.
+ */
+static void damos_update_eligible_cache(struct damos *s, int target_nid,
+		int source_nid, unsigned long sz_applied)
+{
+	unsigned int slot;
+	bool was_inactive;
+
+	if (sz_applied == 0 || source_nid == target_nid)
+		return;
+
+	was_inactive = !s->eligible_cache.active;
+
+	/* First migration after cache inactive? Take fresh base snapshot */
+	if (was_inactive)
+		damos_refresh_cache_base(s);
+
+	slot = s->eligible_cache.current_slot;
+
+	/* Update migration delta (zero-sum) */
+	if (source_nid >= 0 && source_nid < MAX_NUMNODES)
+		s->eligible_cache.migration_delta[slot][source_nid] -= sz_applied;
+
+	if (target_nid >= 0 && target_nid < MAX_NUMNODES)
+		s->eligible_cache.migration_delta[slot][target_nid] += sz_applied;
+
+	s->eligible_cache.active = true;
+	/* Reset cooldown on every migration to allow detection to stabilize */
+	s->eligible_cache.cooldown = DAMOS_ELIGIBLE_CACHE_COOLDOWN;
+	/* Track timestamp for time-based expiry */
+	s->eligible_cache.last_migration_jiffies = jiffies;
+}
+
+/*
+ * damos_advance_cache_window() - Advance rolling window at interval boundary.
+ * @s: The scheme
+ *
+ * Called at end of apply interval. Advances to next slot, clearing old data.
+ * Uses time-based expiry: if no migration for the configured timeout (or
+ * default DAMOS_ELIGIBLE_CACHE_TIMEOUT_MS), clear all slots and deactivate
+ * cache to prevent stale data accumulation.
+ */
+static void damos_advance_cache_window(struct damos *s)
+{
+	unsigned int next_slot;
+	unsigned long timeout_ms;
+	int nid, slot;
+	bool has_delta = false;
+	bool timeout_expired;
+
+	if (!s->eligible_cache.active)
+		return;
+
+	/* Advance to next slot */
+	next_slot = (s->eligible_cache.current_slot + 1) % DAMOS_ELIGIBLE_CACHE_SLOTS;
+
+	/*
+	 * Time-based expiry: if no migration for timeout period, clear ALL
+	 * slots and deactivate cache. This prevents stale delta data from
+	 * persisting indefinitely when migration has stopped.
+	 *
+	 * Only check timeout when cache has been used (last_migration_jiffies != 0).
+	 * When last_migration_jiffies is 0 (initial state), the timeout check
+	 * would always be true since jiffies is typically much larger, causing
+	 * immediate cache expiry before any migration can happen.
+	 *
+	 * Use configurable timeout if set, otherwise use default.
+	 */
+	timeout_ms = s->eligible_cache.timeout_ms;
+	if (!timeout_ms)
+		timeout_ms = DAMOS_ELIGIBLE_CACHE_TIMEOUT_MS;
+
+	timeout_expired = s->eligible_cache.last_migration_jiffies &&
+		time_after(jiffies,
+			   s->eligible_cache.last_migration_jiffies +
+			   msecs_to_jiffies(timeout_ms));
+
+	if (timeout_expired) {
+		/* Clear all slots */
+		for (slot = 0; slot < DAMOS_ELIGIBLE_CACHE_SLOTS; slot++)
+			memset(s->eligible_cache.migration_delta[slot], 0,
+			       sizeof(s->eligible_cache.migration_delta[slot]));
+		s->eligible_cache.active = false;
+		s->eligible_cache.cooldown = 0;
+		damos_refresh_cache_base(s);
+		return;
+	}
+
+	/*
+	 * Normal operation: only clear slot when cooldown expired.
+	 * During cooldown, preserve deltas for accurate compensation
+	 * while detection stabilizes.
+	 */
+	if (s->eligible_cache.cooldown == 0) {
+		memset(s->eligible_cache.migration_delta[next_slot], 0,
+		       sizeof(s->eligible_cache.migration_delta[next_slot]));
+	}
+
+	s->eligible_cache.current_slot = next_slot;
+
+	/* Check if any delta remains in any slot */
+	for (slot = 0; slot < DAMOS_ELIGIBLE_CACHE_SLOTS && !has_delta; slot++) {
+		for_each_online_node(nid) {
+			if (s->eligible_cache.migration_delta[slot][nid] != 0) {
+				has_delta = true;
+				break;
+			}
+		}
+	}
+
+	/*
+	 * Deactivate only when no recent migrations AND cooldown expired.
+	 * Cooldown keeps cache active after migration stops, giving detection
+	 * time to stabilize at the new physical addresses.
+	 */
+	if (!has_delta) {
+		if (s->eligible_cache.cooldown > 0) {
+			s->eligible_cache.cooldown--;
+		} else {
+			s->eligible_cache.active = false;
+			damos_refresh_cache_base(s);
+		}
+	}
+}
 #else
+static __kernel_ulong_t damos_get_node_mem_bp(
+		struct damos_quota_goal *goal)
+{
+	return 0;
+}
+
+static unsigned long damos_get_node_memcg_used_bp(
+		struct damos_quota_goal *goal)
+{
+	return 0;
+}
+
 static bool damos_scheme_uses_eligible_metrics(struct damos *s)
 {
 	return false;
@@ -2647,6 +2973,15 @@ static unsigned long damos_get_node_ineligible_mem_bp(struct damos *s, int nid)
 {
 	return 0;
 }
+
+static void damos_update_eligible_cache(struct damos *s, int target_nid,
+		int source_nid, unsigned long sz_applied)
+{
+}
+
+static void damos_advance_cache_window(struct damos *s)
+{
+}
 #endif
 
 /*
@@ -2691,12 +3026,21 @@ static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal,
 		goal->current_value = damos_get_node_memcg_used_bp(goal);
 		break;
 	case DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP:
-		goal->current_value = damos_get_node_eligible_mem_bp(s,
-				goal->nid);
-		break;
 	case DAMOS_QUOTA_NODE_INELIGIBLE_MEM_BP:
-		goal->current_value = damos_get_node_ineligible_mem_bp(s,
-				goal->nid);
+		/*
+		 * Only update cur_value when we have valid detection data.
+		 * When detection fails (total_eligible=0), keep the previous
+		 * cur_value so auto-tuning continues based on last known state
+		 * rather than making incorrect adjustments based on no data.
+		 */
+		if (damos_get_total_effective_eligible(s)) {
+			if (goal->metric == DAMOS_QUOTA_NODE_ELIGIBLE_MEM_BP)
+				goal->current_value = damos_get_node_eligible_mem_bp(
+						s, goal->nid);
+			else
+				goal->current_value = damos_get_node_ineligible_mem_bp(
+						s, goal->nid);
+		}
 		break;
 	case DAMOS_QUOTA_ACTIVE_MEM_BP:
 	case DAMOS_QUOTA_INACTIVE_MEM_BP:
@@ -2709,9 +3053,9 @@ static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal,
 }
 
 /* Return the highest score since it makes schemes least aggressive */
-static unsigned long damos_quota_score(struct damos_quota *quota)
+static unsigned long damos_quota_score(struct damos_quota *quota,
+		struct damos *s)
 {
-	struct damos *s = container_of(quota, struct damos, quota);
 	struct damos_quota_goal *goal;
 	unsigned long highest_score = 0;
 
@@ -2725,17 +3069,19 @@ static unsigned long damos_quota_score(struct damos_quota *quota)
 	return highest_score;
 }
 
-static void damos_goal_tune_esz_bp_consist(struct damos_quota *quota)
+static void damos_goal_tune_esz_bp_consist(struct damos_quota *quota,
+		struct damos *s)
 {
-	unsigned long score = damos_quota_score(quota);
+	unsigned long score = damos_quota_score(quota, s);
 
 	quota->esz_bp = damon_feed_loop_next_input(
 			max(quota->esz_bp, 10000UL), score);
 }
 
-static void damos_goal_tune_esz_bp_temporal(struct damos_quota *quota)
+static void damos_goal_tune_esz_bp_temporal(struct damos_quota *quota,
+		struct damos *s)
 {
-	unsigned long score = damos_quota_score(quota);
+	unsigned long score = damos_quota_score(quota, s);
 
 	if (score >= 10000)
 		quota->esz_bp = 0;
@@ -2748,7 +3094,8 @@ static void damos_goal_tune_esz_bp_temporal(struct damos_quota *quota)
 /*
 * Called only if quota->ms, or quota->sz are set, or quota->goals is not empty
 */
-static void damos_set_effective_quota(struct damos_quota *quota)
+static void damos_set_effective_quota(struct damos_quota *quota,
+		struct damos *s)
 {
 	unsigned long throughput;
 	unsigned long esz = ULONG_MAX;
@@ -2760,9 +3107,9 @@ static void damos_set_effective_quota(struct damos_quota *quota)
 
 	if (!list_empty(&quota->goals)) {
 		if (quota->goal_tuner == DAMOS_QUOTA_GOAL_TUNER_CONSIST)
-			damos_goal_tune_esz_bp_consist(quota);
+			damos_goal_tune_esz_bp_consist(quota, s);
 		else if (quota->goal_tuner == DAMOS_QUOTA_GOAL_TUNER_TEMPORAL)
-			damos_goal_tune_esz_bp_temporal(quota);
+			damos_goal_tune_esz_bp_temporal(quota, s);
 		esz = quota->esz_bp / 10000;
 	}
 
@@ -2815,7 +3162,7 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
 	/* First charge window */
 	if (!quota->total_charged_sz && !quota->charged_from) {
 		quota->charged_from = jiffies;
-		damos_set_effective_quota(quota);
+		damos_set_effective_quota(quota, s);
 	}
 
 	/* New charge window starts */
@@ -2833,7 +3180,9 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
 		quota->charged_sz = 0;
 		if (trace_damos_esz_enabled())
 			cached_esz = quota->esz;
-		damos_set_effective_quota(quota);
+		damos_set_effective_quota(quota, s);
+		/* Advance cache window at end of apply interval */
+		damos_advance_cache_window(s);
 		if (trace_damos_esz_enabled() && quota->esz != cached_esz)
 			damos_trace_esz(c, s, quota);
 	}
diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
index 232b33f5cbfb..bf68c3157c19 100644
--- a/mm/damon/sysfs-schemes.c
+++ b/mm/damon/sysfs-schemes.c
@@ -1501,6 +1501,7 @@ struct damon_sysfs_quotas {
 	unsigned long reset_interval_ms;
 	unsigned long effective_sz;	/* Effective size quota in bytes */
 	enum damos_quota_goal_tuner goal_tuner;
+	unsigned long effective_bytes_cache_timeout_ms;
 };
 
 static struct damon_sysfs_quotas *damon_sysfs_quotas_alloc(void)
@@ -1675,6 +1676,27 @@ static ssize_t goal_tuner_store(struct kobject *kobj,
 	return -EINVAL;
 }
 
+static ssize_t effective_bytes_cache_timeout_ms_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	struct damon_sysfs_quotas *quotas = container_of(kobj,
+			struct damon_sysfs_quotas, kobj);
+
+	return sysfs_emit(buf, "%lu\n", quotas->effective_bytes_cache_timeout_ms);
+}
+
+static ssize_t effective_bytes_cache_timeout_ms_store(struct kobject *kobj,
+		struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	struct damon_sysfs_quotas *quotas = container_of(kobj,
+			struct damon_sysfs_quotas, kobj);
+	int err = kstrtoul(buf, 0, &quotas->effective_bytes_cache_timeout_ms);
+
+	if (err)
+		return -EINVAL;
+	return count;
+}
+
 static void damon_sysfs_quotas_release(struct kobject *kobj)
 {
 	kfree(container_of(kobj, struct damon_sysfs_quotas, kobj));
@@ -1695,12 +1717,16 @@ static struct kobj_attribute damon_sysfs_quotas_effective_bytes_attr =
 static struct kobj_attribute damon_sysfs_quotas_goal_tuner_attr =
 		__ATTR_RW_MODE(goal_tuner, 0600);
 
+static struct kobj_attribute damon_sysfs_quotas_cache_timeout_ms_attr =
+		__ATTR_RW_MODE(effective_bytes_cache_timeout_ms, 0600);
+
 static struct attribute *damon_sysfs_quotas_attrs[] = {
 	&damon_sysfs_quotas_ms_attr.attr,
 	&damon_sysfs_quotas_sz_attr.attr,
 	&damon_sysfs_quotas_reset_interval_ms_attr.attr,
 	&damon_sysfs_quotas_effective_bytes_attr.attr,
 	&damon_sysfs_quotas_goal_tuner_attr.attr,
+	&damon_sysfs_quotas_cache_timeout_ms_attr.attr,
 	NULL,
 };
ATTRIBUTE_GROUPS(damon_sysfs_quotas);
@@ -2822,6 +2848,10 @@ static struct damos *damon_sysfs_mk_scheme(
 	/* Set goal_tuner after damon_new_scheme() as it defaults to CONSIST */
 	scheme->quota.goal_tuner = sysfs_quotas->goal_tuner;
 
+	/* Set cache timeout, use default if 0 */
+	scheme->eligible_cache.timeout_ms =
+		sysfs_quotas->effective_bytes_cache_timeout_ms;
+
 	err = damos_sysfs_add_quota_score(sysfs_quotas->goals, &scheme->quota);
 	if (err) {
 		damon_destroy_scheme(scheme);
-- 
2.43.0
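
[Editor's illustration, not part of the patch.] The rolling-window, zero-sum
delta tracking and the direction-aware adjustment described in the commit
message can be modeled in a few lines of plain userspace C. This is a
hypothetical, simplified sketch (two nodes, no cooldown or time-based expiry,
hand-rolled max/min) that mirrors the patch's damos_get_total_delta(),
damos_update_eligible_cache(), and the core of damos_get_effective_eligible();
names and constants echo the patch but the code below is not the kernel code:

```c
#include <string.h>

/* Mirror the patch's window size; NODES stands in for MAX_NUMNODES. */
#define SLOTS 4
#define NODES 2

static long delta[SLOTS][NODES];	/* net bytes migrated TO each node, per slot */
static unsigned int cur_slot;

/* Zero-sum update: bytes leave the source node and land on the target. */
static void record_migration(int src, int dst, long bytes)
{
	delta[cur_slot][src] -= bytes;
	delta[cur_slot][dst] += bytes;
}

/* Sum the whole window; old slots drop out as the window advances over them. */
static long total_delta(int nid)
{
	long sum = 0;

	for (int s = 0; s < SLOTS; s++)
		sum += delta[s][nid];
	return sum;
}

/* Advance the rolling window, clearing the slot it wraps onto. */
static void advance_window(void)
{
	cur_slot = (cur_slot + 1) % SLOTS;
	memset(delta[cur_slot], 0, sizeof(delta[cur_slot]));
}

/*
 * Direction-aware adjustment: a target node (d > 0) takes
 * max(detected, base + d) so migrated-in memory counts before detection
 * catches up; a source node (d < 0) takes min(detected, base - removed)
 * so stale detection does not overstate it. Totals stay zero-sum.
 */
static unsigned long effective_eligible(unsigned long detected,
					unsigned long base, long d)
{
	if (d == 0)
		return detected;
	if (d > 0) {
		unsigned long predicted = base + (unsigned long)d;

		return detected > predicted ? detected : predicted;
	} else {
		unsigned long removed = (unsigned long)(-d);
		unsigned long predicted = removed > base ? 0 : base - removed;

		if (detected == 0)
			return predicted;	/* oscillation trough */
		return detected < predicted ? detected : predicted;
	}
}
```

With bases of 300 (node 0) and 50 (node 1) and a 100-byte migration from node 0
to node 1, stale detection (300/50) is adjusted to 200/150: the per-node views
shift while the 350-byte total is preserved, which is the zero-sum property the
patch relies on for its goal-metric denominator.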