From nobody Wed Apr 1 11:13:14 2026
X-Mailing-List: linux-kernel@vger.kernel.org
Date: Mon, 30 Mar 2026 14:50:42 +0000
In-Reply-To: <20260330145043.1586623-1-smostafa@google.com>
Mime-Version: 1.0
References: <20260330145043.1586623-1-smostafa@google.com>
X-Mailer: git-send-email 2.53.0.1185.g05d4b7b318-goog
Message-ID: <20260330145043.1586623-5-smostafa@google.com>
Subject: [RFC PATCH v2 4/5] dma-mapping: Refactor memory encryption usage
From: Mostafa Saleh
To: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: robin.murphy@arm.com, m.szyprowski@samsung.com, will@kernel.org,
 maz@kernel.org, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 jiri@resnulli.us, jgg@ziepe.ca, aneesh.kumar@kernel.org, Mostafa Saleh
Content-Type: text/plain; charset="utf-8"

At the moment, dma-direct deals with memory encryption in two cases:
- Pre-decrypted restricted DMA pools
- Arch code, through force_dma_unencrypted()

In the first case, the memory is owned by the pool and its decryption is
not managed by dma-direct. However, dma-direct must still be aware of it
in order to use the appropriate phys_to_dma* helper and page table prot.

In the second case, it is the job of dma-direct to manage the decryption
of the allocated memory.

There have been bugs in this code due to wrong or missing checks, and
more use cases for memory decryption are coming, so we need more robust
checks that abstract the core logic. Introduce some local helpers:

- dma_external_decryption(): the pages are decrypted but managed
  externally
- dma_owns_decryption(): the pages need to be decrypted and managed by
  dma-direct
- is_dma_decrypted(): check whether memory is decrypted

Note that this patch is not a pure no-op: it contains some subtle
changes, which are in fact theoretical bug fixes, in dma_direct_mmap()
and dma_direct_alloc(), where the wrong prot might otherwise be used
for remap.
Signed-off-by: Mostafa Saleh
---
 kernel/dma/direct.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a4260689bcc8..1078e1b38a34 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -23,10 +23,27 @@
  */
 u64 zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
 
+/* Memory is decrypted and managed externally. */
+static inline bool dma_external_decryption(struct device *dev)
+{
+	return is_swiotlb_for_alloc(dev);
+}
+
+/* Memory needs to be decrypted by the dma-direct layer. */
+static inline bool dma_owns_decryption(struct device *dev)
+{
+	return force_dma_unencrypted(dev) && !dma_external_decryption(dev);
+}
+
+static inline bool is_dma_decrypted(struct device *dev)
+{
+	return force_dma_unencrypted(dev) || dma_external_decryption(dev);
+}
+
 static inline dma_addr_t phys_to_dma_direct(struct device *dev,
 		phys_addr_t phys)
 {
-	if (force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
+	if (is_dma_decrypted(dev))
 		return phys_to_dma_unencrypted(dev, phys);
 	return phys_to_dma(dev, phys);
 }
@@ -79,7 +96,7 @@ bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 
 static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
 {
-	if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
+	if (!dma_owns_decryption(dev))
 		return 0;
 	return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
 }
@@ -88,7 +105,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
 {
 	int ret;
 
-	if (!force_dma_unencrypted(dev) || is_swiotlb_for_alloc(dev))
+	if (!dma_owns_decryption(dev))
 		return 0;
 	ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
 	if (ret)
@@ -203,7 +220,7 @@ static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
 void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-	bool allow_highmem = !force_dma_unencrypted(dev);
+	bool allow_highmem = !dma_owns_decryption(dev);
 	bool remap = false, set_uncached = false;
 	struct page *page;
 	void *ret;
@@ -213,7 +230,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
+	    !is_dma_decrypted(dev))
 		return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
 
 	if (!dev_is_dma_coherent(dev)) {
@@ -247,7 +264,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * Remapping or decrypting memory may block, allocate the memory from
 	 * the atomic pools instead if we aren't allowed block.
 	 */
-	if ((remap || force_dma_unencrypted(dev)) &&
+	if ((remap || dma_owns_decryption(dev)) &&
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
@@ -272,7 +289,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	if (remap) {
 		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
 
-		if (force_dma_unencrypted(dev))
+		if (is_dma_decrypted(dev))
 			prot = pgprot_decrypted(prot);
 
 		/* remove any dirty cache lines on the kernel alias */
@@ -314,7 +331,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
+	    !is_dma_decrypted(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
@@ -362,7 +379,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 
-	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+	if (dma_owns_decryption(dev) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp, false);
@@ -530,7 +547,7 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 	int ret = -ENXIO;
 
 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
-	if (force_dma_unencrypted(dev))
+	if (is_dma_decrypted(dev))
 		vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
-- 
2.53.0.1185.g05d4b7b318-goog