From: Roman Skakun
To: Christoph Hellwig
Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini,
    xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
    linux-kernel@vger.kernel.org, Oleksandr Tyshchenko, Oleksandr Andrushchenko,
    Volodymyr Babchuk, Andrii Anisov, Roman Skakun
Subject: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses
Date: Fri, 16 Jul 2021 11:39:34 +0300
Message-Id: <20210716083934.154992-1-rm.skakun@gmail.com>
In-Reply-To: <20210715170011.GA17324@lst.de>
References: <20210715170011.GA17324@lst.de>

From: Roman Skakun

Fix the incorrect conversion from cpu_addr to a page address when the
virtual address was allocated from the vmalloc range. In that case
virt_to_page() cannot translate the address and returns a bogus
struct page pointer. Detect such addresses and obtain the page with
vmalloc_to_page() instead.

Signed-off-by: Roman Skakun
Reviewed-by: Andrii Anisov
---
Hi, Christoph!

This is the updated patch, revised according to your and Stefano's
suggestions.
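For context (this sketch is not part of the patch, and page_lookup_demo() is an
invented name): virt_to_page() only performs linear-map arithmetic, while
vmalloc_to_page() walks the kernel page tables, so only the latter yields the
real struct page behind a vmalloc address. A minimal illustration:

/*
 * Illustration only: look up the struct page behind a linear-map
 * (kmalloc) buffer and a vmalloc buffer.  Passing the vmalloc address
 * to virt_to_page() would yield a bogus page pointer.
 */
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void page_lookup_demo(void)
{
	void *lin = kmalloc(PAGE_SIZE, GFP_KERNEL);	/* linear-map address */
	void *vml = vmalloc(PAGE_SIZE);			/* vmalloc-range address */

	if (lin)
		pr_info("kmalloc buffer: page %p\n", virt_to_page(lin));
	if (vml)
		pr_info("vmalloc buffer: page %p\n", vmalloc_to_page(vml));

	kfree(lin);
	vfree(vml);
}

The patch below adds the same vmalloc-aware lookup as a shared helper,
cpu_addr_to_page(), and uses it in the three files listed in the diffstat.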
 drivers/xen/swiotlb-xen.c   |  7 +------
 include/linux/dma-map-ops.h |  2 ++
 kernel/dma/ops_helpers.c    | 16 ++++++++++++++--
 3 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 92ee6eea30cd..c2f612a10a95 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -337,7 +337,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
-	struct page *page;
+	struct page *page = cpu_addr_to_page(vaddr);
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -349,11 +349,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (is_vmalloc_addr(vaddr))
-		page = vmalloc_to_page(vaddr);
-	else
-		page = virt_to_page(vaddr);
-
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
 	    TestClearPageXenRemapped(page))
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index a5f89fc4d6df..ce0edb0bb603 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -226,6 +226,8 @@ struct page *dma_alloc_from_pool(struct device *dev, size_t size,
 		bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t));
 bool dma_free_from_pool(struct device *dev, void *start, size_t size);
 
+struct page *cpu_addr_to_page(void *cpu_addr);
+
 #ifdef CONFIG_ARCH_HAS_DMA_COHERENCE_H
 #include <asm/dma-coherence.h>
 #elif defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..472e861750d3 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -5,6 +5,17 @@
  */
 #include <linux/dma-map-ops.h>
 
+/*
+ * This helper converts a virtual address to a page address.
+ */
+struct page *cpu_addr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	else
+		return virt_to_page(cpu_addr);
+}
+
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
@@ -12,7 +23,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = cpu_addr_to_page(cpu_addr);
 	int ret;
 
 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -32,6 +43,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	unsigned long off = vma->vm_pgoff;
+	struct page *page = cpu_addr_to_page(cpu_addr);
 	int ret = -ENXIO;
 
 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
@@ -43,7 +55,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return -ENXIO;
 
 	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(page) + vma->vm_pgoff,
 			user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
 	return -ENXIO;
-- 
2.27.0
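A usage note, not part of the patch: the dma_common_mmap() change matters when a
driver forwards a userspace mmap() request for a coherent buffer through
dma_mmap_coherent(). The fragment below is hypothetical (example_dev,
example_mmap and their fields are invented for illustration):

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-device state, for illustration only. */
struct example_dev {
	struct device *dev;
	void *cpu_addr;		/* from dma_alloc_coherent(); may live in vmalloc space */
	dma_addr_t dma_handle;
	size_t size;
};

static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct example_dev *ed = file->private_data;

	/*
	 * For DMA ops that wire their .mmap callback to the common helper
	 * (xen-swiotlb does), this ends up in dma_common_mmap(), where
	 * cpu_addr_to_page() now copes with vmalloc-backed buffers.
	 */
	return dma_mmap_coherent(ed->dev, vma, ed->cpu_addr, ed->dma_handle,
				 ed->size);
}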