From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH v3 3/7] amd-vi: set IOMMU page table levels based on guest reported paddr width
Date: Fri, 15 Dec 2023 15:18:28 +0100
Message-ID: <20231215141832.9492-4-roger.pau@citrix.com>
In-Reply-To: <20231215141832.9492-1-roger.pau@citrix.com>
References: <20231215141832.9492-1-roger.pau@citrix.com>

However, take into account the minimum number of levels required by unity
maps when setting the page table levels.

The previous setting of the page table levels for PV guests based on the
highest RAM address was bogus, as there can be other non-RAM regions past
the highest RAM address that need to be mapped, for example device MMIO.

For HVM we also take amd_iommu_min_paging_mode into account; however, if
unity maps require more than 4 levels, attempting to add those will
currently fail at the p2m level, as 4 levels is the maximum supported.

Fixes: 0700c962ac2d ('Add AMD IOMMU support into hypervisor')
Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
Changes since v2:
 - Use the renamed domain_max_paddr_bits().

Changes since v1:
 - Use paging_max_paddr_bits() instead of hardcoding
   DEFAULT_DOMAIN_ADDRESS_WIDTH.
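As a rough illustration of how a guest's reported physical address width
translates into an AMD-Vi page table level count: this is a standalone
sketch under the assumption that each level resolves 9 address bits on top
of the 4 KiB page offset; sketch_paging_mode() is a hypothetical helper for
illustration only, not Xen's amd_iommu_get_paging_mode().

/*
 * Standalone sketch (not Xen code): each AMD-Vi page table level resolves
 * 9 address bits above the 12-bit offset of a 4 KiB page, so the number of
 * levels needed for a given physical address width is roughly
 * ceil((paddr_bits - 12) / 9), with a hardware limit of 6 levels.
 */
#include <stdio.h>

static int sketch_paging_mode(unsigned int paddr_bits)
{
    unsigned int frame_bits, levels;

    if ( paddr_bits < 13 )
        return 1;                               /* one level covers up to 2 MiB */

    frame_bits = paddr_bits - 12;               /* bits above the page offset */
    levels = (frame_bits + 8) / 9;              /* 9 bits resolved per level */

    return levels <= 6 ? (int)levels : -1;      /* > 6 levels: cannot map it all */
}

int main(void)
{
    /* 39-bit width (the 512G boundary) -> 3 levels, 48-bit
     * (DEFAULT_DOMAIN_ADDRESS_WIDTH) -> 4 levels, 52-bit -> 5 levels. */
    printf("%d %d %d\n", sketch_paging_mode(39),
           sketch_paging_mode(48), sketch_paging_mode(52));
    return 0;
}

With 48 address bits this yields 4 levels, matching the value that was
previously used unilaterally for HVM guests.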
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 6bc73dc21052..cc3e2ccd5ed7 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -359,21 +359,17 @@ int __read_mostly amd_iommu_min_paging_mode = 1;
 static int cf_check amd_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    int pglvl = amd_iommu_get_paging_mode(
+        PFN_DOWN(1UL << domain_max_paddr_bits(d)));
+
+    if ( pglvl < 0 )
+        return pglvl;
 
     /*
-     * Choose the number of levels for the IOMMU page tables.
-     * - PV needs 3 or 4, depending on whether there is RAM (including hotplug
-     *   RAM) above the 512G boundary.
-     * - HVM could in principle use 3 or 4 depending on how much guest
-     *   physical address space we give it, but this isn't known yet so use 4
-     *   unilaterally.
-     * - Unity maps may require an even higher number.
+     * Choose the number of levels for the IOMMU page tables, taking into
+     * account unity maps.
      */
-    hd->arch.amd.paging_mode = max(amd_iommu_get_paging_mode(
-        is_hvm_domain(d)
-        ? 1UL << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT)
-        : get_upper_mfn_bound() + 1),
-        amd_iommu_min_paging_mode);
+    hd->arch.amd.paging_mode = max(pglvl, amd_iommu_min_paging_mode);
 
     return 0;
 }
-- 
2.43.0