From: Petr Tesarik
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, Greg Kroah-Hartman,
	Petr Tesarik, iommu@lists.linux.dev (open list:DMA MAPPING HELPERS),
	linux-kernel@vger.kernel.org (open list), patchwork@huawei.com
Cc: Wangkefeng, Roberto Sassu, petr@tesarici.cz, Petr Tesarik,
	miaoxie@huawei.com, weiyongjun1@huawei.com, guohanjun@huawei.com,
	huawei.libin@huawei.com, yuehaibing@huawei.com, johnny.chenyi@huawei.com,
	leijitang@huawei.com, ming.fu@huawei.com, zhujianwei7@huawei.com,
	linuxarm@huawei.com, stable@vger.kernel.org, Rick Edgecombe
Subject: [PATCH v2 1/1] swiotlb: do not free decrypted pages if dynamic
Date: Thu, 2 Nov 2023 10:36:49 +0100
Message-Id: <20231102071821.431-2-petrtesarik@huaweicloud.com>
In-Reply-To: <20231102071821.431-1-petrtesarik@huaweicloud.com>
References: <20231102071821.431-1-petrtesarik@huaweicloud.com>
X-Mailer: git-send-email 2.34.1

Fix these two error paths:

1. When set_memory_decrypted() fails, pages may be left fully or
   partially decrypted.

2. Decrypted pages may be freed if swiotlb_alloc_tlb() determines
   that the physical address is too high.

To fix the first issue, call set_memory_encrypted() on the allocated
region after a failed decryption attempt. If that also fails, leak the
pages.
To fix the second issue, check that the TLB physical address is below
the requested limit before decrypting. Let the caller differentiate
between an unsuitable physical address (=> retry from a lower zone)
and allocation failures (=> no point in retrying).

Cc: stable@vger.kernel.org
Cc: Rick Edgecombe
Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Signed-off-by: Petr Tesarik
---
 kernel/dma/swiotlb.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dff067bd56b1..0e1632f75421 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -558,29 +558,40 @@ void __init swiotlb_exit(void)
  * alloc_dma_pages() - allocate pages to be used for DMA
  * @gfp:	GFP flags for the allocation.
  * @bytes:	Size of the buffer.
+ * @phys_limit:	Maximum allowed physical address of the buffer.
  *
  * Allocate pages from the buddy allocator. If successful, make the allocated
  * pages decrypted that they can be used for DMA.
  *
- * Return: Decrypted pages, or %NULL on failure.
+ * Return: Decrypted pages, %NULL on allocation failure, or ERR_PTR(-EAGAIN)
+ *	if the allocated physical address was above @phys_limit.
  */
-static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
+static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
 {
 	unsigned int order = get_order(bytes);
 	struct page *page;
+	phys_addr_t paddr;
 	void *vaddr;
 
 	page = alloc_pages(gfp, order);
 	if (!page)
 		return NULL;
 
-	vaddr = page_address(page);
+	paddr = page_to_phys(page);
+	if (paddr + bytes - 1 > phys_limit) {
+		__free_pages(page, order);
+		return ERR_PTR(-EAGAIN);
+	}
+
+	vaddr = phys_to_virt(paddr);
 	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
 		goto error;
 	return page;
 
 error:
-	__free_pages(page, order);
+	/* Intentional leak if pages cannot be encrypted again. */
+	if (!set_memory_encrypted((unsigned long)vaddr, PFN_UP(bytes)))
+		__free_pages(page, order);
 	return NULL;
 }
 
@@ -618,11 +629,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	else if (phys_limit <= DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA32;
 
-	while ((page = alloc_dma_pages(gfp, bytes)) &&
-	       page_to_phys(page) + bytes - 1 > phys_limit) {
-		/* allocated, but too high */
-		__free_pages(page, get_order(bytes));
-
+	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
 		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
 		    phys_limit < DMA_BIT_MASK(64) &&
 		    !(gfp & (__GFP_DMA32 | __GFP_DMA)))
-- 
2.34.1