From: Jia Zhu
To: dhowells@redhat.com
Cc: linux-cachefs@redhat.com, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Jia Zhu, Xin Yin, Jingbo Xu
Subject: [PATCH V4 3/5] cachefiles: resend an open request if the read request's object is closed
Date: Wed, 11 Jan 2023 13:25:13 +0800
Message-Id: <20230111052515.53941-4-zhujia.zj@bytedance.com>
X-Mailer: git-send-email 2.37.1 (Apple Git-137.1)
In-Reply-To: <20230111052515.53941-1-zhujia.zj@bytedance.com>
References: <20230111052515.53941-1-zhujia.zj@bytedance.com>

When an anonymous fd is closed by the user daemon and a new read request
for the file comes up, the anonymous fd should be reopened to handle that
read request rather than failing it directly.

1. Introduce a reopening state for objects that are closed but have
   inflight/subsequent read requests.
2. No longer flush READ requests; flush only CLOSE requests when the
   anonymous fd is closed.
3. Enqueue the reopen work to a workqueue, so that the user daemon gets
   out of the daemon_read context and handles the reopen request smoothly.
   Otherwise, the user daemon would send a reopen request and then wait
   for itself to process that request.

Signed-off-by: Jia Zhu
Reviewed-by: Xin Yin
Reviewed-by: Jingbo Xu
---
 fs/cachefiles/internal.h |  3 ++
 fs/cachefiles/ondemand.c | 98 ++++++++++++++++++++++++++++------------
 2 files changed, 72 insertions(+), 29 deletions(-)

diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index beaf3a8785ce..2ed836d4169e 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -47,9 +47,11 @@ struct cachefiles_volume {
 enum cachefiles_object_state {
 	CACHEFILES_ONDEMAND_OBJSTATE_close, /* Anonymous fd closed by daemon or initial state */
 	CACHEFILES_ONDEMAND_OBJSTATE_open, /* Anonymous fd associated with object is available */
+	CACHEFILES_ONDEMAND_OBJSTATE_reopening, /* Object that was closed and is being reopened. */
 };
 
 struct cachefiles_ondemand_info {
+	struct work_struct	work;
 	int			ondemand_id;
 	enum cachefiles_object_state state;
 	struct cachefiles_object *object;
@@ -323,6 +325,7 @@ cachefiles_ondemand_set_object_##_state(struct cachefiles_object *object) \
 
 CACHEFILES_OBJECT_STATE_FUNCS(open);
 CACHEFILES_OBJECT_STATE_FUNCS(close);
+CACHEFILES_OBJECT_STATE_FUNCS(reopening);
 #else
 static inline ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 6e47667c6690..8e7f8c152a5b 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -18,14 +18,10 @@ static int cachefiles_ondemand_fd_release(struct inode *inode,
 	info->ondemand_id = CACHEFILES_ONDEMAND_ID_CLOSED;
 	cachefiles_ondemand_set_object_close(object);
 
-	/*
-	 * Flush all pending READ requests since their completion depends on
-	 * anon_fd.
-	 */
-	xas_for_each(&xas, req, ULONG_MAX) {
+	/* Only flush CACHEFILES_REQ_NEW marked req to avoid race with daemon_read */
+	xas_for_each_marked(&xas, req, ULONG_MAX, CACHEFILES_REQ_NEW) {
 		if (req->msg.object_id == object_id &&
-		    req->msg.opcode == CACHEFILES_OP_READ) {
-			req->error = -EIO;
+		    req->msg.opcode == CACHEFILES_OP_CLOSE) {
 			complete(&req->done);
 			xas_store(&xas, NULL);
 		}
@@ -179,6 +175,7 @@ int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
 	trace_cachefiles_ondemand_copen(req->object, id, size);
 
 	cachefiles_ondemand_set_object_open(req->object);
+	wake_up_all(&cache->daemon_pollwq);
 
 out:
 	complete(&req->done);
@@ -222,7 +219,6 @@ static int cachefiles_ondemand_get_fd(struct cachefiles_req *req)
 
 	load = (void *)req->msg.data;
 	load->fd = fd;
-	req->msg.object_id = object_id;
 	object->private->ondemand_id = object_id;
 
 	cachefiles_get_unbind_pincount(cache);
@@ -238,6 +234,43 @@ static int cachefiles_ondemand_get_fd(struct cachefiles_req *req)
 	return ret;
 }
 
+static void ondemand_object_worker(struct work_struct *work)
+{
+	struct cachefiles_object *object =
+		((struct cachefiles_ondemand_info *)work)->object;
+
+	cachefiles_ondemand_init_object(object);
+}
+
+/*
+ * If there are any inflight or subsequent READ requests on the
+ * closed object, reopen it.
+ * Skip read requests whose related object is reopening.
+ */
+static struct cachefiles_req *cachefiles_ondemand_select_req(struct xa_state *xas,
+							     unsigned long xa_max)
+{
+	struct cachefiles_req *req;
+	struct cachefiles_object *object;
+	struct cachefiles_ondemand_info *info;
+
+	xas_for_each_marked(xas, req, xa_max, CACHEFILES_REQ_NEW) {
+		if (req->msg.opcode != CACHEFILES_OP_READ)
+			return req;
+		object = req->object;
+		info = object->private;
+		if (cachefiles_ondemand_object_is_close(object)) {
+			cachefiles_ondemand_set_object_reopening(object);
+			queue_work(fscache_wq, &info->work);
+			continue;
+		} else if (cachefiles_ondemand_object_is_reopening(object)) {
+			continue;
+		}
+		return req;
+	}
+	return NULL;
+}
+
 ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
 {
@@ -248,16 +281,16 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 	int ret = 0;
 	XA_STATE(xas, &cache->reqs, cache->req_id_next);
 
+	xa_lock(&cache->reqs);
 	/*
 	 * Cyclically search for a request that has not ever been processed,
 	 * to prevent requests from being processed repeatedly, and make
 	 * request distribution fair.
 	 */
-	xa_lock(&cache->reqs);
-	req = xas_find_marked(&xas, UINT_MAX, CACHEFILES_REQ_NEW);
+	req = cachefiles_ondemand_select_req(&xas, ULONG_MAX);
 	if (!req && cache->req_id_next > 0) {
 		xas_set(&xas, 0);
-		req = xas_find_marked(&xas, cache->req_id_next - 1, CACHEFILES_REQ_NEW);
+		req = cachefiles_ondemand_select_req(&xas, cache->req_id_next - 1);
 	}
 	if (!req) {
 		xa_unlock(&cache->reqs);
@@ -277,14 +310,18 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 	xa_unlock(&cache->reqs);
 
 	id = xas.xa_index;
-	msg->msg_id = id;
 
 	if (msg->opcode == CACHEFILES_OP_OPEN) {
 		ret = cachefiles_ondemand_get_fd(req);
-		if (ret)
+		if (ret) {
+			cachefiles_ondemand_set_object_close(req->object);
 			goto error;
+		}
 	}
 
+	msg->msg_id = id;
+	msg->object_id = req->object->private->ondemand_id;
+
 	if (copy_to_user(_buffer, msg, n) != 0) {
 		ret = -EFAULT;
 		goto err_put_fd;
 	}
@@ -317,19 +354,23 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 					void *private)
 {
 	struct cachefiles_cache *cache = object->volume->cache;
-	struct cachefiles_req *req;
+	struct cachefiles_req *req = NULL;
 	XA_STATE(xas, &cache->reqs, 0);
 	int ret;
 
 	if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags))
 		return 0;
 
-	if (test_bit(CACHEFILES_DEAD, &cache->flags))
-		return -EIO;
+	if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
+		ret = -EIO;
+		goto out;
+	}
 
 	req = kzalloc(sizeof(*req) + data_len, GFP_KERNEL);
-	if (!req)
-		return -ENOMEM;
+	if (!req) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	req->object = object;
 	init_completion(&req->done);
@@ -367,7 +408,7 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 	/* coupled with the barrier in cachefiles_flush_reqs() */
 	smp_mb();
 
-	if (opcode != CACHEFILES_OP_OPEN &&
+	if (opcode == CACHEFILES_OP_CLOSE &&
 	    !cachefiles_ondemand_object_is_open(object)) {
 		WARN_ON_ONCE(object->private->ondemand_id == 0);
 		xas_unlock(&xas);
@@ -392,7 +433,15 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 	wake_up_all(&cache->daemon_pollwq);
 	wait_for_completion(&req->done);
 	ret = req->error;
+	kfree(req);
+	return ret;
 out:
+	/* Reset the object to close state in error handling path.
+	 * If error occurs after creating the anonymous fd,
+	 * cachefiles_ondemand_fd_release() will set object to close.
+	 */
+	if (opcode == CACHEFILES_OP_OPEN)
+		cachefiles_ondemand_set_object_close(object);
 	kfree(req);
 	return ret;
 }
@@ -439,7 +488,6 @@ static int cachefiles_ondemand_init_close_req(struct cachefiles_req *req,
 	if (!cachefiles_ondemand_object_is_open(object))
 		return -ENOENT;
 
-	req->msg.object_id = object->private->ondemand_id;
 	trace_cachefiles_ondemand_close(object, &req->msg);
 	return 0;
 }
@@ -455,16 +503,7 @@ static int cachefiles_ondemand_init_read_req(struct cachefiles_req *req,
 	struct cachefiles_object *object = req->object;
 	struct cachefiles_read *load = (void *)req->msg.data;
 	struct cachefiles_read_ctx *read_ctx = private;
-	int object_id = object->private->ondemand_id;
-
-	/* Stop enqueuing requests when daemon has closed anon_fd. */
-	if (!cachefiles_ondemand_object_is_open(object)) {
-		WARN_ON_ONCE(object_id == 0);
-		pr_info_once("READ: anonymous fd closed prematurely.\n");
-		return -EIO;
-	}
 
-	req->msg.object_id = object_id;
 	load->off = read_ctx->off;
 	load->len = read_ctx->len;
 	trace_cachefiles_ondemand_read(object, &req->msg, load);
@@ -513,6 +552,7 @@ int cachefiles_ondemand_init_obj_info(struct cachefiles_object *object,
 		return -ENOMEM;
 
 	object->private->object = object;
+	INIT_WORK(&object->private->work, ondemand_object_worker);
 	return 0;
 }
 
-- 
2.20.1
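
To make the request-selection flow above concrete, the following is a
minimal user-space sketch of the close -> reopening -> open object
lifecycle this patch introduces. All names in it (obj_state,
reopen_worker, select_read_req) are illustrative stand-ins rather than
the real cachefiles symbols, and the reopen is shown synchronously,
whereas the kernel defers it via queue_work(fscache_wq, &info->work)
and only marks the object open once the daemon replies with copen.

/*
 * Hedged sketch, not kernel code: a user-space model of the
 * close -> reopening -> open lifecycle added by this patch.
 */
#include <stdio.h>

enum obj_state { OBJ_CLOSE, OBJ_OPEN, OBJ_REOPENING };

struct object {
	enum obj_state state;
};

/*
 * Stands in for ondemand_object_worker(): resend the OPEN request
 * outside the daemon_read context. Here the object flips to open
 * immediately; in the kernel this happens only after the daemon
 * answers the reopen request with a copen.
 */
static void reopen_worker(struct object *obj)
{
	obj->state = OBJ_OPEN;
}

/*
 * Stands in for the READ branch of cachefiles_ondemand_select_req():
 * a READ against a closed object triggers a reopen and is skipped for
 * now; READs against a reopening object stay queued; only READs
 * against an open object are handed to the daemon.
 */
static int select_read_req(struct object *obj)
{
	switch (obj->state) {
	case OBJ_CLOSE:
		obj->state = OBJ_REOPENING;
		reopen_worker(obj);	/* kernel: queue_work(fscache_wq, ...) */
		return 0;
	case OBJ_REOPENING:
		return 0;
	case OBJ_OPEN:
		return 1;
	}
	return 0;
}

int main(void)
{
	struct object obj = { .state = OBJ_CLOSE };

	/* First pass: object closed, READ is deferred while reopen runs. */
	printf("deliver=%d\n", select_read_req(&obj));
	/* Second pass: reopen finished, READ is handed to the daemon. */
	printf("deliver=%d\n", select_read_req(&obj));
	return 0;
}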