From: Claire Chang
To: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
    Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
    jgross@suse.com, Christoph Hellwig, Marek Szyprowski
Cc: benh@kernel.crashing.org, paulus@samba.org,
    "list@263.net:IOMMU DRIVERS", sstabellini@kernel.org, Robin Murphy,
    grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding,
    mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
    Greg KH, Saravana Kannan, "Rafael J. Wysocki",
    heikki.krogerus@linux.intel.com, Andy Shevchenko, Randy Dunlap,
    Dan Williams, Bartosz Golaszewski, linux-devicetree, lkml,
    linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
    Nicolas Boichat, Jim Quinlan, Claire Chang
Subject: [PATCH v4 09/14] swiotlb: Refactor swiotlb_tbl_{map,unmap}_single
Date: Tue, 9 Feb 2021 14:21:26 +0800
Message-Id: <20210209062131.2300005-10-tientzu@chromium.org>
In-Reply-To: <20210209062131.2300005-1-tientzu@chromium.org>
References: <20210209062131.2300005-1-tientzu@chromium.org>

Refactor swiotlb_tbl_{map,unmap}_single to make the code reusable for
dev_swiotlb_{alloc,free}.

Signed-off-by: Claire Chang
---
 kernel/dma/swiotlb.c | 116 ++++++++++++++++++++++++++-----------------
 1 file changed, 71 insertions(+), 45 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 6fdebde8fb1f..f64cbe6e84cc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -509,14 +509,12 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
-		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+static int swiotlb_tbl_find_free_region(struct device *hwdev,
+					dma_addr_t tbl_dma_addr,
+					size_t alloc_size, unsigned long attrs)
 {
 	struct swiotlb *swiotlb = get_swiotlb(hwdev);
-	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
 	unsigned long flags;
-	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
 	int i;
 	unsigned long mask;
@@ -531,15 +529,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 #endif
 	panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
 
-	if (mem_encrypt_active())
-		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
-
-	if (mapping_size > alloc_size) {
-		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
-			      mapping_size, alloc_size);
-		return (phys_addr_t)DMA_MAPPING_ERROR;
-	}
-
 	mask = dma_get_seg_boundary(hwdev);
 
 	tbl_dma_addr &= mask;
@@ -601,7 +590,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 			swiotlb->list[i] = 0;
 		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && swiotlb->list[i]; i--)
 			swiotlb->list[i] = ++count;
-		tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
 
 		/*
 		 * Update the indices to avoid searching in the next
@@ -624,45 +612,20 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
 	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
 		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
 			 alloc_size, swiotlb->nslabs, tmp_io_tlb_used);
-	return (phys_addr_t)DMA_MAPPING_ERROR;
+	return -ENOMEM;
+
 found:
 	swiotlb->used += nslots;
 	spin_unlock_irqrestore(&swiotlb->lock, flags);
 
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory. Then we sync the buffer if
-	 * needed.
-	 */
-	for (i = 0; i < nslots; i++)
-		swiotlb->orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
-	return tlb_addr;
+	return index;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, size_t alloc_size,
-			      enum dma_data_direction dir, unsigned long attrs)
+static void swiotlb_tbl_release_region(struct device *hwdev, int index, size_t size)
 {
 	struct swiotlb *swiotlb = get_swiotlb(hwdev);
 	unsigned long flags;
-	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = swiotlb->orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (orig_addr != INVALID_PHYS_ADDR &&
-	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 
 	/*
 	 * Return the buffer to the free list by setting the corresponding
@@ -694,6 +657,69 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&swiotlb->lock, flags);
 }
 
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+				   size_t mapping_size, size_t alloc_size,
+				   enum dma_data_direction dir,
+				   unsigned long attrs)
+{
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, swiotlb->start);
+	phys_addr_t tlb_addr;
+	unsigned int nslots;
+	int i, index;
+
+	if (mem_encrypt_active())
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
+
+	if (mapping_size > alloc_size) {
+		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+			      mapping_size, alloc_size);
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	index = swiotlb_tbl_find_free_region(hwdev, tbl_dma_addr, alloc_size, attrs);
+	if (index < 0)
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+
+	tlb_addr = swiotlb->start + (index << IO_TLB_SHIFT);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory. Then we sync the buffer if
+	 * needed.
+	 */
+	nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	for (i = 0; i < nslots; i++)
+		swiotlb->orig_addr[index + i] = orig_addr + (i << IO_TLB_SHIFT);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+
+	return tlb_addr;
+}
+
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+			      size_t mapping_size, size_t alloc_size,
+			      enum dma_data_direction dir, unsigned long attrs)
+{
+	struct swiotlb *swiotlb = get_swiotlb(hwdev);
+	int index = (tlb_addr - swiotlb->start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = swiotlb->orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (orig_addr != INVALID_PHYS_ADDR &&
+	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	swiotlb_tbl_release_region(hwdev, index, alloc_size);
+}
+
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
-- 
2.30.0.478.g8a0d178c01-goog