From: Niklas Schnelle <schnelle@linux.ibm.com>
Date: Fri, 25 Aug 2023 12:11:21 +0200
Subject: [PATCH v12 6/6] iommu/dma: Use a large flush queue and timeout
 for shadow_on_flush
Message-Id: <20230825-dma_iommu-v12-6-4134455994a7@linux.ibm.com>
References: <20230825-dma_iommu-v12-0-4134455994a7@linux.ibm.com>
In-Reply-To: <20230825-dma_iommu-v12-0-4134455994a7@linux.ibm.com>
To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
 Robin Murphy, Jason Gunthorpe
Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
 Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
 Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
 Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
 Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
 Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
 Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
 Niklas Schnelle, Jonathan Corbet, linux-s390@vger.kernel.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 iommu@lists.linux.dev, asahi@lists.linux.dev,
 linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
 linux-mediatek@lists.infradead.org, linux-sunxi@lists.linux.dev,
 linux-tegra@vger.kernel.org, linux-doc@vger.kernel.org
X-Mailer: b4 0.12.3

Flush queues currently use a fixed compile-time size of 256 entries.
This being a power of 2 allows the compiler to use shift and mask
instead of more expensive modulo operations. With per-CPU flush queues,
larger queue sizes would hit per-CPU allocation limits; with a single
flush queue, however, these limits do not apply. Single queues are also
particularly suitable for virtualized environments with expensive IOTLB
flushes, so they benefit especially from larger queues and thus fewer
flushes.

To this end, re-order struct iova_fq so we can use a dynamic array, and
introduce the flush queue size and timeout as new options in the
iommu_dma_options struct. So as not to lose the shift-and-mask
optimization, use a power of 2 for the length and apply the mask
explicitly instead of letting the compiler optimize the modulo. A large
queue size and a 1 second timeout are then set for the shadow-on-flush
case set by s390 paged memory guests.
This then brings performance on par with the previous s390-specific
DMA API implementation.

Acked-by: Robin Murphy
Reviewed-by: Matthew Rosato # s390
Signed-off-by: Niklas Schnelle
---
 drivers/iommu/dma-iommu.c | 50 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 09660b0af130..9d9a5aefd53d 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,8 @@ enum iommu_dma_queue_type {
 
 struct iommu_dma_options {
 	enum iommu_dma_queue_type qt;
+	size_t fq_size;
+	unsigned int fq_timeout;
 };
 
 struct iommu_dma_cookie {
@@ -98,10 +100,12 @@ static int __init iommu_dma_forcedac_setup(char *str)
 early_param("iommu.forcedac", iommu_dma_forcedac_setup);
 
 /* Number of entries per flush queue */
-#define IOVA_FQ_SIZE	256
+#define IOVA_DEFAULT_FQ_SIZE	256
+#define IOVA_SINGLE_FQ_SIZE	32768
 
 /* Timeout (in ms) after which entries are flushed from the queue */
-#define IOVA_FQ_TIMEOUT	10
+#define IOVA_DEFAULT_FQ_TIMEOUT	10
+#define IOVA_SINGLE_FQ_TIMEOUT	1000
 
 /* Flush queue entry for deferred flushing */
 struct iova_fq_entry {
@@ -113,18 +117,19 @@ struct iova_fq_entry {
 
 /* Per-CPU flush queue structure */
 struct iova_fq {
-	struct iova_fq_entry entries[IOVA_FQ_SIZE];
-	unsigned int head, tail;
 	spinlock_t lock;
+	unsigned int head, tail;
+	unsigned int mod_mask;
+	struct iova_fq_entry entries[];
 };
 
 #define fq_ring_for_each(i, fq) \
-	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE)
+	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) & (fq)->mod_mask)
 
 static inline bool fq_full(struct iova_fq *fq)
 {
 	assert_spin_locked(&fq->lock);
-	return (((fq->tail + 1) % IOVA_FQ_SIZE) == fq->head);
+	return (((fq->tail + 1) & fq->mod_mask) == fq->head);
 }
 
 static inline unsigned int fq_ring_add(struct iova_fq *fq)
@@ -133,7 +138,7 @@ static inline unsigned int fq_ring_add(struct iova_fq *fq)
 
 	assert_spin_locked(&fq->lock);
 
-	fq->tail = (idx + 1) % IOVA_FQ_SIZE;
+	fq->tail = (idx + 1) & fq->mod_mask;
 
 	return idx;
 }
@@ -155,7 +160,7 @@ static void fq_ring_free_locked(struct iommu_dma_cookie *cookie, struct iova_fq
 			       fq->entries[idx].iova_pfn,
 			       fq->entries[idx].pages);
 
-		fq->head = (fq->head + 1) % IOVA_FQ_SIZE;
+		fq->head = (fq->head + 1) & fq->mod_mask;
 	}
 }
 
@@ -240,7 +245,7 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
 	if (!atomic_read(&cookie->fq_timer_on) &&
 	    !atomic_xchg(&cookie->fq_timer_on, 1))
 		mod_timer(&cookie->fq_timer,
-			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
+			  jiffies + msecs_to_jiffies(cookie->options.fq_timeout));
 }
 
 static void iommu_dma_free_fq_single(struct iova_fq *fq)
@@ -279,27 +284,29 @@ static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
 		iommu_dma_free_fq_percpu(cookie->percpu_fq);
 }
 
-static void iommu_dma_init_one_fq(struct iova_fq *fq)
+static void iommu_dma_init_one_fq(struct iova_fq *fq, size_t fq_size)
 {
 	int i;
 
 	fq->head = 0;
 	fq->tail = 0;
+	fq->mod_mask = fq_size - 1;
 
 	spin_lock_init(&fq->lock);
 
-	for (i = 0; i < IOVA_FQ_SIZE; i++)
+	for (i = 0; i < fq_size; i++)
 		INIT_LIST_HEAD(&fq->entries[i].freelist);
 }
 
 static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
 {
+	size_t fq_size = cookie->options.fq_size;
 	struct iova_fq *queue;
 
-	queue = vmalloc(sizeof(*queue));
+	queue = vmalloc(struct_size(queue, entries, fq_size));
 	if (!queue)
 		return -ENOMEM;
-	iommu_dma_init_one_fq(queue);
+	iommu_dma_init_one_fq(queue, fq_size);
 	cookie->single_fq = queue;
 
 	return 0;
@@ -307,15 +314,17 @@ static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
 
 static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie)
 {
+	size_t fq_size = cookie->options.fq_size;
 	struct iova_fq __percpu *queue;
 	int cpu;
 
-	queue = alloc_percpu(struct iova_fq);
+	queue = __alloc_percpu(struct_size(queue, entries, fq_size),
+			       __alignof__(*queue));
 	if (!queue)
 		return -ENOMEM;
 
 	for_each_possible_cpu(cpu)
-		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu));
+		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu), fq_size);
 	cookie->percpu_fq = queue;
 	return 0;
 }
@@ -635,11 +644,16 @@ static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 static void iommu_dma_init_options(struct iommu_dma_options *options,
 				   struct device *dev)
 {
-	/* Shadowing IOTLB flushes do better with a single queue */
-	if (dev->iommu->shadow_on_flush)
+	/* Shadowing IOTLB flushes do better with a single large queue */
+	if (dev->iommu->shadow_on_flush) {
 		options->qt = IOMMU_DMA_OPTS_SINGLE_QUEUE;
-	else
+		options->fq_timeout = IOVA_SINGLE_FQ_TIMEOUT;
+		options->fq_size = IOVA_SINGLE_FQ_SIZE;
+	} else {
 		options->qt = IOMMU_DMA_OPTS_PER_CPU_QUEUE;
+		options->fq_size = IOVA_DEFAULT_FQ_SIZE;
+		options->fq_timeout = IOVA_DEFAULT_FQ_TIMEOUT;
+	}
 }
 
 /**

-- 
2.39.2
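
As a closing illustration for readers less familiar with the two
techniques the commit message leans on, here is a minimal stand-alone
sketch in plain user-space C. It is not part of the patch: the names
ring and ring_alloc are invented for this example, and the kernel code
uses struct_size() with vmalloc()/__alloc_percpu() plus a spinlock
rather than the bare calloc() shown here. The point is that for a
power-of-2 queue length, masking an index with (size - 1) wraps it
exactly like (index % size) while avoiding a division, and that a
trailing flexible array member lets one allocation carry a
runtime-chosen number of entries.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct ring {
	unsigned int head, tail;
	unsigned int mod_mask;	/* size - 1, with size a power of 2 */
	int entries[];		/* flexible array member, sized at alloc time */
};

static struct ring *ring_alloc(unsigned int size)
{
	struct ring *r;

	/* The mask trick only works for power-of-2 sizes. */
	assert(size != 0 && (size & (size - 1)) == 0);
	/* One allocation holds the header plus 'size' entries. */
	r = calloc(1, sizeof(*r) + size * sizeof(r->entries[0]));
	if (r)
		r->mod_mask = size - 1;
	return r;
}

static int ring_full(const struct ring *r)
{
	/* Same shape as fq_full(): advancing tail would reach head. */
	return ((r->tail + 1) & r->mod_mask) == r->head;
}

int main(void)
{
	struct ring *r = ring_alloc(8);

	if (!r)
		return 1;
	while (!ring_full(r)) {
		r->entries[r->tail] = (int)r->tail;
		/* (tail + 1) & mod_mask wraps like (tail + 1) % 8 */
		r->tail = (r->tail + 1) & r->mod_mask;
	}
	printf("filled %u of 8 slots\n",
	       (r->tail - r->head) & r->mod_mask);
	free(r);
	return 0;
}

Compiled with e.g. "cc -Wall ring.c", this fills seven of eight slots,
since one slot stays free to distinguish a full ring from an empty one,
mirroring the check in fq_full().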