From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ben Ronallo, Chuck Lever, Jeff Layton
Subject: [PATCH 6.0 096/862] NFSD: Protect against send buffer overflow in NFSv3 READDIR
Date: Wed, 19 Oct 2022 10:23:03 +0200
Message-Id: <20221019083254.182334043@linuxfoundation.org>
In-Reply-To: <20221019083249.951566199@linuxfoundation.org>
References: <20221019083249.951566199@linuxfoundation.org>

From: Chuck Lever

commit 640f87c190e0d1b2a0fcb2ecf6d2cd53b1c41991 upstream.

Since before the git era, NFSD has conserved the number of pages
held by each nfsd thread by combining the RPC receive and send
buffers into a single array of pages. This works because there are
no cases where an operation needs a large RPC Call message and a
large RPC Reply message at the same time.

Once an RPC Call has been received, svc_process() updates
svc_rqst::rq_res to describe the part of rq_pages that can be used
for constructing the Reply. This means that the send buffer (rq_res)
shrinks when the received RPC record containing the RPC Call is
large.

A client can force this shrinkage on TCP by sending a correctly-
formed RPC Call header contained in an RPC record that is
excessively large. The full maximum payload size cannot be
constructed in that case.

Thanks to Aleksi Illikainen and Kari Hulkko for uncovering this
issue.
Reported-by: Ben Ronallo
Cc:
Signed-off-by: Chuck Lever
Reviewed-by: Jeff Layton
Signed-off-by: Chuck Lever
Signed-off-by: Greg Kroah-Hartman
---
 fs/nfsd/nfs3proc.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--- a/fs/nfsd/nfs3proc.c
+++ b/fs/nfsd/nfs3proc.c
@@ -563,13 +563,14 @@ static void nfsd3_init_dirlist_pages(str
 {
 	struct xdr_buf *buf = &resp->dirlist;
 	struct xdr_stream *xdr = &resp->xdr;
-
-	count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp));
+	unsigned int sendbuf = min_t(unsigned int, rqstp->rq_res.buflen,
+				     svc_max_payload(rqstp));
 
 	memset(buf, 0, sizeof(*buf));
 
 	/* Reserve room for the NULL ptr & eof flag (-2 words) */
-	buf->buflen = count - XDR_UNIT * 2;
+	buf->buflen = clamp(count, (u32)(XDR_UNIT * 2), sendbuf);
+	buf->buflen -= XDR_UNIT * 2;
 	buf->pages = rqstp->rq_next_page;
 	rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;