From nobody Mon Apr 13 06:39:20 2026
From: Lorenzo Stoakes
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand,
    Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro, Lorenzo Stoakes
Subject: [PATCH v5 1/4] fs/proc/kcore: avoid bounce buffer for ktext data
Date: Wed, 22 Mar 2023 14:17:32 +0000

Commit df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
introduced the use of a bounce buffer to retrieve kernel text data for
/proc/kcore in order to avoid failures arising from hardened user copies
enabled by CONFIG_HARDENED_USERCOPY in check_kernel_text_object().
We can avoid doing this if, instead of copy_to_user(), we use
_copy_to_user(), which bypasses the hardening check. This is more
efficient than using a bounce buffer and simplifies the code.

We do so as part of an overall effort to eliminate bounce buffer usage in
the function, with an eye to converting it to an iterator read.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: David Hildenbrand
---
 fs/proc/kcore.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 71157ee35c1a..556f310d6aa4 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -541,19 +541,12 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMEMMAP:
 		case KCORE_TEXT:
 			/*
-			 * Using bounce buffer to bypass the
-			 * hardened user copy kernel text checks.
+			 * We use _copy_to_user() to bypass usermode hardening
+			 * which would otherwise prevent this operation.
 			 */
-			if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
-				if (clear_user(buffer, tsz)) {
-					ret = -EFAULT;
-					goto out;
-				}
-			} else {
-				if (copy_to_user(buffer, buf, tsz)) {
-					ret = -EFAULT;
-					goto out;
-				}
+			if (_copy_to_user(buffer, (char *)start, tsz)) {
+				ret = -EFAULT;
+				goto out;
 			}
 			break;
 		default:
-- 
2.39.2
From: Lorenzo Stoakes
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand,
    Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro, Lorenzo Stoakes
Subject: [PATCH v5 2/4] fs/proc/kcore: convert read_kcore() to read_kcore_iter()
Date: Wed, 22 Mar 2023 14:17:33 +0000

For the time being we still use a bounce buffer for vread(); however, in
the next patch we will convert this to interact directly with the iterator
and eliminate the bounce buffer altogether.
Signed-off-by: Lorenzo Stoakes
Reviewed-by: David Hildenbrand
---
 fs/proc/kcore.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 556f310d6aa4..08b795fd80b4 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -24,7 +24,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -308,9 +308,12 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 }
 
 static ssize_t
-read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
+read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct file *file = iocb->ki_filp;
 	char *buf = file->private_data;
+	loff_t *fpos = &iocb->ki_pos;
+
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -318,6 +321,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	size_t tsz;
 	int nphdr;
 	unsigned long start;
+	size_t buflen = iov_iter_count(iter);
 	size_t orig_buflen = buflen;
 	int ret = 0;
 
@@ -356,12 +360,11 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	};
 
 	tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
-	if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
+	if (copy_to_iter((char *)&ehdr + *fpos, tsz, iter) != tsz) {
 		ret = -EFAULT;
 		goto out;
 	}
 
-	buffer += tsz;
 	buflen -= tsz;
 	*fpos += tsz;
 }
@@ -398,15 +401,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	}
 
 	tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
-	if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
-			 tsz)) {
+	if (copy_to_iter((char *)phdrs + *fpos - phdrs_offset, tsz,
+			 iter) != tsz) {
 		kfree(phdrs);
 		ret = -EFAULT;
 		goto out;
 	}
 	kfree(phdrs);
 
-	buffer += tsz;
 	buflen -= tsz;
 	*fpos += tsz;
 }
@@ -448,14 +450,13 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			  min(vmcoreinfo_size, notes_len - i));
 
 	tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
-	if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
+	if (copy_to_iter(notes + *fpos - notes_offset, tsz, iter) != tsz) {
 		kfree(notes);
 		ret = -EFAULT;
 		goto out;
 	}
 	kfree(notes);
 
-	buffer += tsz;
 	buflen -= tsz;
 	*fpos += tsz;
 }
@@ -497,7 +498,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		}
 
 		if (!m) {
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -508,14 +509,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMALLOC:
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_user(buffer, buf, tsz)) {
+			if (copy_to_iter(buf, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		case KCORE_USER:
 			/* User page is handled prior to normal kernel page: */
-			if (copy_to_user(buffer, (char *)start, tsz)) {
+			if (copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -531,7 +532,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			 */
 			if (!page || PageOffline(page) ||
 			    is_page_hwpoison(page) || !pfn_is_ram(pfn)) {
-				if (clear_user(buffer, tsz)) {
+				if (iov_iter_zero(tsz, iter) != tsz) {
 					ret = -EFAULT;
 					goto out;
 				}
@@ -541,17 +542,17 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMEMMAP:
 		case KCORE_TEXT:
 			/*
-			 * We use _copy_to_user() to bypass usermode hardening
+			 * We use _copy_to_iter() to bypass usermode hardening
 			 * which would otherwise prevent this operation.
 			 */
-			if (_copy_to_user(buffer, (char *)start, tsz)) {
+			if (_copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		default:
 			pr_warn_once("Unhandled KCORE type: %d\n", m->type);
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -559,7 +560,6 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 skip:
 		buflen -= tsz;
 		*fpos += tsz;
-		buffer += tsz;
 		start += tsz;
 		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
 	}
@@ -603,7 +603,7 @@ static int release_kcore(struct inode *inode, struct file *file)
 }
 
 static const struct proc_ops kcore_proc_ops = {
-	.proc_read	= read_kcore,
+	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
 	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
-- 
2.39.2
From: Lorenzo Stoakes
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Andrew Morton
Cc: Baoquan He,
    Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin,
    Jiri Olsa, Jens Axboe, Alexander Viro, Lorenzo Stoakes
Subject: [PATCH v5 3/4] iov_iter: add copy_page_to_iter_nofault()
Date: Wed, 22 Mar 2023 14:17:34 +0000
Message-Id: <50d2f757ab570dbb84e44eb84e25bd9780842d5f.1679494218.git.lstoakes@gmail.com>

Provide a means to copy a page to user space from an iterator, aborting if
a page fault would occur. This supports compound pages, but may be passed
a tail page with an offset extending further into the compound page, so we
cannot pass a folio.

This allows the function to be called from atomic context and to _try_ to
access user pages if they are faulted in, aborting if not.

The function does not use _copy_to_iter(), so as to avoid the
might_fault() annotation; this is similar to copy_page_from_iter_atomic().

This is being added so that an iterable form of vread() can be implemented
while holding spinlocks.
Signed-off-by: Lorenzo Stoakes
---
 include/linux/uio.h |  2 ++
 lib/iov_iter.c      | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 27e3fd942960..29eb18bb6feb 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -173,6 +173,8 @@ static inline size_t copy_folio_to_iter(struct folio *folio, size_t offset,
 {
 	return copy_page_to_iter(&folio->page, offset, bytes, i);
 }
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
+				 size_t bytes, struct iov_iter *i);
 
 static __always_inline __must_check
 size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 274014e4eafe..b286cfea4bee 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -734,6 +734,42 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
 }
 EXPORT_SYMBOL(copy_page_to_iter);
 
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t bytes,
+				 struct iov_iter *i)
+{
+	size_t res = 0;
+
+	if (!page_copy_sane(page, offset, bytes))
+		return 0;
+	if (WARN_ON_ONCE(i->data_source))
+		return 0;
+	if (unlikely(iov_iter_is_pipe(i)))
+		return copy_page_to_iter_pipe(page, offset, bytes, i);
+	page += offset / PAGE_SIZE; // first subpage
+	offset %= PAGE_SIZE;
+	while (1) {
+		void *kaddr = kmap_local_page(page);
+		size_t n = min(bytes, (size_t)PAGE_SIZE - offset);
+
+		iterate_and_advance(i, n, base, len, off,
+			copy_to_user_nofault(base, kaddr + offset + off, len),
+			memcpy(base, kaddr + offset + off, len)
+		)
+		kunmap_local(kaddr);
+		res += n;
+		bytes -= n;
+		if (!bytes || !n)
+			break;
+		offset += n;
+		if (offset == PAGE_SIZE) {
+			page++;
+			offset = 0;
+		}
+	}
+	return res;
+}
+EXPORT_SYMBOL(copy_page_to_iter_nofault);
+
 size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
 			   struct iov_iter *i)
 {
-- 
2.39.2
From: Lorenzo Stoakes
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand,
    Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro, Lorenzo Stoakes
Subject: [PATCH v5 4/4] mm: vmalloc: convert vread() to vread_iter()
Date: Wed, 22 Mar 2023 14:17:35 +0000
Message-Id: <4ad3db8c059806737548cd5309f4c8b3e04ec235.1679494218.git.lstoakes@gmail.com>

Having previously laid the foundation for converting vread() to an
iterator function, pull the trigger and do so.

This patch attempts to provide minimal refactoring and to reflect the
existing logic as best we can; for example, we continue to zero portions
of memory not read, as before.
Overall, there should be no functional difference other than a performance
improvement in /proc/kcore access to vmalloc regions.

Now that we have eliminated the need for a bounce buffer in
read_kcore_iter(), we dispense with it, and try to write to user memory
optimistically but with faults disabled via copy_page_to_iter_nofault().
We already have preemption disabled by holding a spin lock. If this fails,
we fault in and retry a single time. This is a conservative approach
intended to avoid spinning on vread_iter() if we repeatedly encounter
issues reading from it.

Additionally, we must account for the fact that a copy may fail at any
point (most likely because a fault cannot be serviced); in that case we
exit indicating fewer bytes retrieved than expected.

Signed-off-by: Lorenzo Stoakes
---
 fs/proc/kcore.c         |  37 +++----
 include/linux/vmalloc.h |   3 +-
 mm/nommu.c              |  10 +-
 mm/vmalloc.c            | 234 +++++++++++++++++++++++++---------------
 4 files changed, 169 insertions(+), 115 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 08b795fd80b4..177226cbb8ea 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -307,13 +307,9 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 	*i = ALIGN(*i + descsz, 4);
 }
 
-static ssize_t
-read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
-	struct file *file = iocb->ki_filp;
-	char *buf = file->private_data;
 	loff_t *fpos = &iocb->ki_pos;
-
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -507,13 +503,23 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 
 		switch (m->type) {
 		case KCORE_VMALLOC:
-			vread(buf, (char *)start, tsz);
-			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_iter(buf, tsz, iter) != tsz) {
-				ret = -EFAULT;
-				goto out;
+		{
+			const char *src = (char *)start;
+			size_t read;
+
+			read = vread_iter(iter, src, tsz);
+			if (read != tsz) {
+				size_t rem = tsz - read;
+
+				/* Fault in and retry once. */
+				if (fault_in_iov_iter_writeable(iter, rem) ||
+				    vread_iter(iter, src + read, rem) != rem) {
+					ret = -EFAULT;
+					goto out;
+				}
 			}
 			break;
+		}
 		case KCORE_USER:
 			/* User page is handled prior to normal kernel page: */
 			if (copy_to_iter((char *)start, tsz, iter) != tsz) {
@@ -582,10 +588,6 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
-	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!filp->private_data)
-		return -ENOMEM;
-
 	if (kcore_need_update)
 		kcore_update_ram();
 	if (i_size_read(inode) != proc_root_kcore->size) {
@@ -596,16 +598,9 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int release_kcore(struct inode *inode, struct file *file)
-{
-	kfree(file->private_data);
-	return 0;
-}
-
 static const struct proc_ops kcore_proc_ops = {
 	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
-	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
 };
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..461aa5637f65 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -9,6 +9,7 @@
 #include	/* pgprot_t */
 #include
 #include
+#include
 
 #include
 
@@ -251,7 +252,7 @@ static inline void set_vm_flush_reset_perms(void *addr)
 #endif
 
 /* for /proc/kcore */
-extern long vread(char *buf, char *addr, unsigned long count);
+extern long vread_iter(struct iov_iter *iter, const char *addr, size_t count);
 
 /*
  * Internals.  Don't use..
diff --git a/mm/nommu.c b/mm/nommu.c
index 57ba243c6a37..f670d9979a26 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -36,6 +36,7 @@
 #include
 
 #include
+#include
 #include
 #include
 #include
@@ -198,14 +199,13 @@ unsigned long vmalloc_to_pfn(const void *addr)
 }
 EXPORT_SYMBOL(vmalloc_to_pfn);
 
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 {
 	/* Don't allow overflow */
-	if ((unsigned long) buf + count < count)
-		count = -(unsigned long) buf;
+	if ((unsigned long) addr + count < count)
+		count = -(unsigned long) addr;
 
-	memcpy(buf, addr, count);
-	return count;
+	return copy_to_iter(addr, count, iter);
 }
 
 /*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..629cd87bb403 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -37,7 +37,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -3442,62 +3441,96 @@ void *vmalloc_32_user(unsigned long size)
 EXPORT_SYMBOL(vmalloc_32_user);
 
 /*
- * small helper routine , copy contents to buf from addr.
- * If the page is not present, fill zero.
+ * Atomically zero bytes in the iterator.
+ *
+ * Returns the number of zeroed bytes.
  */
+size_t zero_iter(struct iov_iter *iter, size_t count)
+{
+	size_t remains = count;
+
+	while (remains > 0) {
+		size_t num, copied;
+
+		num = remains < PAGE_SIZE ? remains : PAGE_SIZE;
+		copied = copy_page_to_iter_nofault(ZERO_PAGE(0), 0, num, iter);
+		remains -= copied;
+
+		if (copied < num)
+			break;
+	}
 
-static int aligned_vread(char *buf, char *addr, unsigned long count)
+	return count - remains;
+}
+
+/*
+ * small helper routine, copy contents to iter from addr.
+ * If the page is not present, fill zero.
+ *
+ * Returns the number of copied bytes.
+ */
+static size_t aligned_vread_iter(struct iov_iter *iter,
+				 const char *addr, size_t count)
 {
-	struct page *p;
-	int copied = 0;
+	size_t remains = count;
+	struct page *page;
 
-	while (count) {
+	while (remains > 0) {
 		unsigned long offset, length;
+		size_t copied = 0;
 
 		offset = offset_in_page(addr);
 		length = PAGE_SIZE - offset;
-		if (length > count)
-			length = count;
-		p = vmalloc_to_page(addr);
+		if (length > remains)
+			length = remains;
+		page = vmalloc_to_page(addr);
 		/*
-		 * To do safe access to this _mapped_ area, we need
-		 * lock. But adding lock here means that we need to add
-		 * overhead of vmalloc()/vfree() calls for this _debug_
-		 * interface, rarely used. Instead of that, we'll use
-		 * kmap() and get small overhead in this access function.
+		 * To do safe access to this _mapped_ area, we need lock. But
+		 * adding lock here means that we need to add overhead of
+		 * vmalloc()/vfree() calls for this _debug_ interface, rarely
+		 * used. Instead of that, we'll use a local mapping via
+		 * copy_page_to_iter_nofault() and accept a small overhead in
+		 * this access function.
 		 */
-		if (p) {
-			/* We can expect USER0 is not used -- see vread() */
-			void *map = kmap_atomic(p);
-			memcpy(buf, map + offset, length);
-			kunmap_atomic(map);
-		} else
-			memset(buf, 0, length);
+		if (page)
+			copied = copy_page_to_iter_nofault(page, offset,
+							   length, iter);
+		else
+			copied = zero_iter(iter, length);
 
-		addr += length;
-		buf += length;
-		copied += length;
-		count -= length;
+		addr += copied;
+		remains -= copied;
+
+		if (copied != length)
+			break;
 	}
-	return copied;
+
+	return count - remains;
 }
 
-static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
+/*
+ * Read from a vm_map_ram region of memory.
+ *
+ * Returns the number of copied bytes.
+ */
+static size_t vmap_ram_vread_iter(struct iov_iter *iter, const char *addr,
+				  size_t count, unsigned long flags)
 {
 	char *start;
 	struct vmap_block *vb;
 	unsigned long offset;
-	unsigned int rs, re, n;
+	unsigned int rs, re;
+	size_t remains, n;
 
 	/*
 	 * If it's area created by vm_map_ram() interface directly, but
 	 * not further subdividing and delegating management to vmap_block,
 	 * handle it here.
 	 */
-	if (!(flags & VMAP_BLOCK)) {
-		aligned_vread(buf, addr, count);
-		return;
-	}
+	if (!(flags & VMAP_BLOCK))
+		return aligned_vread_iter(iter, addr, count);
+
+	remains = count;
 
 	/*
 	 * Area is split into regions and tracked with vmap_block, read out
@@ -3505,50 +3538,64 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
 	 */
 	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
 	if (!vb)
-		goto finished;
+		goto finished_zero;
 
 	spin_lock(&vb->lock);
 	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
 		spin_unlock(&vb->lock);
-		goto finished;
+		goto finished_zero;
 	}
+
 	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
-		if (!count)
-			break;
+		size_t copied;
+
+		if (remains == 0)
+			goto finished;
+
 		start = vmap_block_vaddr(vb->va->va_start, rs);
-		while (addr < start) {
-			if (count == 0)
-				goto unlock;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
+
+		if (addr < start) {
+			size_t to_zero = min_t(size_t, start - addr, remains);
+			size_t zeroed = zero_iter(iter, to_zero);
+
+			addr += zeroed;
+			remains -= zeroed;
+
+			if (remains == 0 || zeroed != to_zero)
+				goto finished;
 		}
+
 		/*it could start reading from the middle of used region*/
 		offset = offset_in_page(addr);
 		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
-		if (n > count)
-			n = count;
-		aligned_vread(buf, start+offset, n);
+		if (n > remains)
+			n = remains;
+
+		copied = aligned_vread_iter(iter, start + offset, n);
 
-		buf += n;
-		addr += n;
-		count -= n;
+		addr += copied;
+		remains -= copied;
+
+		if (copied != n)
+ goto finished; } -unlock: + spin_unlock(&vb->lock); =20 -finished: +finished_zero: /* zero-fill the left dirty or free regions */ - if (count) - memset(buf, 0, count); + return count - remains + zero_iter(iter, remains); +finished: + /* We couldn't copy/zero everything */ + spin_unlock(&vb->lock); + return count - remains; } =20 /** - * vread() - read vmalloc area in a safe way. - * @buf: buffer for reading data - * @addr: vm address. - * @count: number of bytes to be read. + * vread_iter() - read vmalloc area in a safe way to an iterator. + * @iter: the iterator to which data should be written. + * @addr: vm address. + * @count: number of bytes to be read. * * This function checks that addr is a valid vmalloc'ed area, and * copy data from that area to a given buffer. If the given memory range @@ -3568,13 +3615,12 @@ static void vmap_ram_vread(char *buf, char *addr, i= nt count, unsigned long flags * (same number as @count) or %0 if [addr...addr+count) doesn't * include any intersection with valid vmalloc area */ -long vread(char *buf, char *addr, unsigned long count) +long vread_iter(struct iov_iter *iter, const char *addr, size_t count) { struct vmap_area *va; struct vm_struct *vm; - char *vaddr, *buf_start =3D buf; - unsigned long buflen =3D count; - unsigned long n, size, flags; + char *vaddr; + size_t n, size, flags, remains; =20 addr =3D kasan_reset_tag(addr); =20 @@ -3582,18 +3628,22 @@ long vread(char *buf, char *addr, unsigned long cou= nt) if ((unsigned long) addr + count < count) count =3D -(unsigned long) addr; =20 + remains =3D count; + spin_lock(&vmap_area_lock); va =3D find_vmap_area_exceed_addr((unsigned long)addr); if (!va) - goto finished; + goto finished_zero; =20 /* no intersects with alive vmap_area */ - if ((unsigned long)addr + count <=3D va->va_start) - goto finished; + if ((unsigned long)addr + remains <=3D va->va_start) + goto finished_zero; =20 list_for_each_entry_from(va, &vmap_area_list, list) { - if (!count) - break; + size_t copied; 
+ + if (remains =3D=3D 0) + goto finished; =20 vm =3D va->vm; flags =3D va->flags & VMAP_FLAGS_MASK; @@ -3608,6 +3658,7 @@ long vread(char *buf, char *addr, unsigned long count) =20 if (vm && (vm->flags & VM_UNINITIALIZED)) continue; + /* Pair with smp_wmb() in clear_vm_uninitialized_flag() */ smp_rmb(); =20 @@ -3616,38 +3667,45 @@ long vread(char *buf, char *addr, unsigned long cou= nt) =20 if (addr >=3D vaddr + size) continue; - while (addr < vaddr) { - if (count =3D=3D 0) + + if (addr < vaddr) { + size_t to_zero =3D min_t(size_t, vaddr - addr, remains); + size_t zeroed =3D zero_iter(iter, to_zero); + + addr +=3D zeroed; + remains -=3D zeroed; + + if (remains =3D=3D 0 || zeroed !=3D to_zero) goto finished; - *buf =3D '\0'; - buf++; - addr++; - count--; } + n =3D vaddr + size - addr; - if (n > count) - n =3D count; + if (n > remains) + n =3D remains; =20 if (flags & VMAP_RAM) - vmap_ram_vread(buf, addr, n, flags); + copied =3D vmap_ram_vread_iter(iter, addr, n, flags); else if (!(vm->flags & VM_IOREMAP)) - aligned_vread(buf, addr, n); + copied =3D aligned_vread_iter(iter, addr, n); else /* IOREMAP area is treated as memory hole */ - memset(buf, 0, n); - buf +=3D n; - addr +=3D n; - count -=3D n; + copied =3D zero_iter(iter, n); + + addr +=3D copied; + remains -=3D copied; + + if (copied !=3D n) + goto finished; } -finished: - spin_unlock(&vmap_area_lock); =20 - if (buf =3D=3D buf_start) - return 0; +finished_zero: + spin_unlock(&vmap_area_lock); /* zero-fill memory holes */ - if (buf !=3D buf_start + buflen) - memset(buf, 0, buflen - (buf - buf_start)); + return count - remains + zero_iter(iter, remains); +finished: + /* Nothing remains, or We couldn't copy/zero everything. */ + spin_unlock(&vmap_area_lock); =20 - return buflen; + return count - remains; } =20 /** --=20 2.39.2