From: "Teddy Astie" <teddy.astie@vates.tech>
Subject: [XEN RFC PATCH v4 1/5] docs/designs: Add a design document for PV-IOMMU
To: xen-devel@lists.xenproject.org
Cc: "Teddy Astie", "Andrew Cooper", "Jan Beulich", "Julien Grall", "Stefano Stabellini"
Message-Id: <787ca634b46c582dad04ab1cc93c840c4f739fa7.1730718102.git.teddy.astie@vates.tech>
Date: Mon, 04 Nov 2024 14:28:38 +0000

Some operating systems want to use the IOMMU to implement various features
(e.g. VFIO) or DMA protection. This patch introduces a proposal for IOMMU
paravirtualization for Dom0.
Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changes in v4:
* added init and remote_op commands
---
 docs/designs/pv-iommu.md | 116 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 116 insertions(+)
 create mode 100644 docs/designs/pv-iommu.md

diff --git a/docs/designs/pv-iommu.md b/docs/designs/pv-iommu.md
new file mode 100644
index 0000000000..7df9fa0b94
--- /dev/null
+++ b/docs/designs/pv-iommu.md
@@ -0,0 +1,116 @@

# IOMMU paravirtualization for Dom0

Status: Experimental

# Background

By default, Xen only uses the IOMMU for itself, either to make device address
space coherent with guest address space (x86 HVM/PVH) or to prevent devices
from doing DMA outside their expected memory regions, including the hypervisor
(x86 PV).

A limitation is that guests (especially privileged ones) may want to use IOMMU
hardware in order to implement features such as DMA protection and VFIO [1],
as IOMMU functionality is currently not available outside of the hypervisor.

[1] VFIO - "Virtual Function I/O" - https://www.kernel.org/doc/html/latest/driver-api/vfio.html

# Design

The operating system may want to have access to various IOMMU features such as
context management and DMA remapping. We can create a new hypercall that allows
the guest to have access to a new paravirtualized IOMMU interface.

This feature is only meant to be available for Dom0: DomUs may have emulated
devices that are not hardware and cannot be managed on the Xen side, so we
can't rely on the hardware IOMMU to enforce DMA remapping for them.

This interface is exposed under the `iommu_op` hypercall.

In addition, Xen domains are modified in order to allow the existence of
several IOMMU contexts, including a default one that implements the default
behavior (e.g. hardware-assisted paging) and can't be modified by the guest.
DomUs cannot have additional contexts, and therefore act as if they only had
the default context.

Each IOMMU context within a Xen domain is identified using a domain-specific
context number that is used in the Xen IOMMU subsystem and the hypercall
interface.

The number of IOMMU contexts a domain can use is specified by either the
toolstack or the domain itself.

# IOMMU operations

## Initialize PV-IOMMU

Initialize PV-IOMMU for the domain.
It can only be called once.

## Alloc context

Create a new IOMMU context for the guest and return the context number to the
guest.
Fail if the IOMMU context limit of the guest is reached.

A flag can be specified to create an identity mapping.

## Free context

Destroy an IOMMU context created previously.
It is not possible to free the default context.

Reattach the context's devices to the default context if specified by the
guest.

Fail if there is a device in the context and the reattach-to-default flag is
not specified.

## Reattach device

Reattach a device to another IOMMU context (including the default one).
The target IOMMU context number must be valid and the context allocated.

The guest needs to specify the PCI SBDF of a device it has access to.

## Map/unmap page

Map/unmap a page on a context.
The guest needs to specify a gfn and the target dfn to map.

Refuse to create the mapping if one already exists for the same dfn.

## Lookup page

Get the gfn mapped by a specific dfn.

## Remote command

Make a PV-IOMMU operation on behalf of another domain.
Especially useful for implementing IOMMU emulation (e.g. using QEMU)
or initializing PV-IOMMU with enforced limits.
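The exact hypercall ABI is introduced by a later patch in this series and is
not shown here. Purely as an illustration of how the operations listed above
could be grouped under a single `iommu_op` hypercall, here is a minimal
sketch; every name and field in it is an assumption for illustration, not the
proposed interface:

```c
/* Illustrative sketch only -- all names and fields below are assumptions,
 * not the ABI introduced by this series. */
#include <stdint.h>

#define IOMMU_OP_INIT        0  /* Initialize PV-IOMMU (once per domain) */
#define IOMMU_OP_ALLOC_CTX   1  /* Allocate a context, returns ctx_no */
#define IOMMU_OP_FREE_CTX    2  /* Free a context (not the default one) */
#define IOMMU_OP_REATTACH    3  /* Move a device to another context */
#define IOMMU_OP_MAP_PAGE    4  /* Map a gfn at a dfn in a context */
#define IOMMU_OP_UNMAP_PAGE  5  /* Remove a dfn mapping from a context */
#define IOMMU_OP_LOOKUP_PAGE 6  /* Query the gfn mapped at a dfn */
#define IOMMU_OP_REMOTE      7  /* Perform an operation for another domain */

struct iommu_op {
    uint16_t op;        /* one of IOMMU_OP_* */
    uint16_t flags;     /* e.g. identity-map on alloc, reattach-to-default on free */
    int32_t  status;    /* result, filled in by the hypervisor */
    union {
        struct { uint16_t ctx_no; } alloc_ctx, free_ctx;
        struct { uint32_t sbdf; uint16_t ctx_no; } reattach;
        struct { uint64_t gfn, dfn; uint16_t ctx_no; } map_page, unmap_page, lookup_page;
        struct { uint16_t domid; /* nested operation follows */ } remote;
    };
};
```

Batching (see the implementation considerations below) would presumably pass
an array of such structures in a single hypercall rather than one at a time.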
# Implementation considerations

## Hypercall batching

In order to prevent unneeded hypercalls and IOMMU flushing, it is advisable to
be able to batch some critical IOMMU operations (e.g. map/unmap multiple
pages).

## Hardware without IOMMU support

The operating system needs to be aware of the PV-IOMMU capability, and of
whether it is able to create contexts. However, some operating systems may
critically fail if they are not able to create a new IOMMU context, which is
what happens when no IOMMU hardware is available.

The hypercall interface needs a way to advertise the ability to create and
manage IOMMU contexts, including the number of contexts the guest is able to
use. Using this information, Dom0 may decide whether or not to use the
PV-IOMMU interface.

## Page pool for contexts

In order to prevent unexpected starvation of hypervisor memory by a buggy
Dom0, we can preallocate the pages the contexts will use and make map/unmap
use these pages instead of allocating them dynamically (a minimal sketch of
this idea follows).
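A minimal sketch of the preallocated page-pool idea, assuming a fixed pool
size chosen at domain creation (all names and sizes are illustrative, not part
of this proposal's code):

```c
/* Minimal sketch of a fixed, preallocated page pool for IOMMU page tables.
 * The pool size and all names are illustrative assumptions. */
#include <stddef.h>

#define CTX_POOL_PAGES 256              /* chosen when the domain is created */

struct ctx_page_pool {
    void  *pages[CTX_POOL_PAGES];       /* pages reserved up front */
    size_t used;                        /* how many are currently handed out */
};

/* Hand out a preallocated page; never allocate dynamically, so a buggy or
 * malicious Dom0 cannot grow the hypervisor's memory usage. A real pool
 * would also track freed pages; omitted here for brevity. */
static void *ctx_pool_get(struct ctx_page_pool *p)
{
    return p->used < CTX_POOL_PAGES ? p->pages[p->used++] : NULL;
}
```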
-- 
2.45.2


Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From: "Teddy Astie" <teddy.astie@vates.tech>
Subject: [XEN RFC PATCH v4 2/5] docs/designs: Add a design document for IOMMU subsystem redesign
To: xen-devel@lists.xenproject.org
Cc: "Teddy Astie", "Andrew Cooper", "Jan Beulich", "Julien Grall", "Stefano Stabellini"
Date: Mon, 04 Nov 2024 14:28:38 +0000
The current IOMMU subsystem has some limitations that make PV-IOMMU
practically impossible. One of them is the assumption that each domain is
bound to a single "IOMMU domain", which also causes complications with the
quarantine implementation.

Moreover, the current IOMMU subsystem is not entirely well-defined; for
instance, the behavior of map_page greatly differs between ARM SMMUv3 and x86
VT-d/AMD-Vi. On ARM, it can modify the domain page table, while on x86 it may
be forbidden (e.g. using HAP with PVH), or only modify the device's point of
view (e.g. using PV).

The goal of this redesign is to define the behavior and interface of the IOMMU
subsystem more explicitly, while allowing PV-IOMMU to be effectively
implemented.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changed in V2:
* nit s/dettach/detach/

Changed in v4:
* updated for iommu_context locking changes
---
 docs/designs/iommu-contexts.md | 403 +++++++++++++++++++++++++++++++++
 1 file changed, 403 insertions(+)
 create mode 100644 docs/designs/iommu-contexts.md

diff --git a/docs/designs/iommu-contexts.md b/docs/designs/iommu-contexts.md
new file mode 100644
index 0000000000..9d6fb95549
--- /dev/null
+++ b/docs/designs/iommu-contexts.md
@@ -0,0 +1,403 @@

# IOMMU context management in Xen

Status: Experimental
Revision: 0

# Background

The design for *IOMMU paravirtualization for Dom0* [1] explains that some
guests may want to access IOMMU features. In order to implement this in Xen,
several adjustments need to be made to the IOMMU subsystem.

The *hardware IOMMU domain* is currently implemented on a per-domain basis,
such that each domain has one specific *hardware IOMMU domain*. This design
aims to allow a single Xen domain to manage several "IOMMU contexts", and to
allow some domains (e.g. Dom0 [1]) to modify their IOMMU contexts.

In addition, the quarantine feature can be refactored to use IOMMU contexts,
reducing the complexity of platform-specific implementations and ensuring more
consistency across platforms.

# IOMMU context

We define an "IOMMU context" as being a *hardware IOMMU domain*, named a
context to avoid confusion with Xen domains.
It represents some hardware-specific data structure that contains mappings
from a device frame number to a machine frame number (e.g. using a page table)
that can be applied to a device using IOMMU hardware.

This structure is bound to a Xen domain, but a Xen domain may have several
IOMMU contexts. These contexts may be modifiable using the interface defined
in [1], aside from some specific cases (e.g. modifying the default context).

This is implemented in Xen as a new structure that will hold context-specific
data.
```c
struct iommu_context {
    u16 id; /* Context id (0 means default context) */
    struct list_head devices;

    struct arch_iommu_context arch;

    bool opaque; /* context can't be modified nor accessed (e.g HAP) */
};
```

A context is identified by a number that is domain-specific and may be used by
IOMMU users such as PV-IOMMU by the guest.

struct arch_iommu_context is split from struct arch_iommu

```c
struct arch_iommu_context
{
    spinlock_t pgtables_lock;
    struct page_list_head pgtables;

    union {
        /* Intel VT-d */
        struct {
            uint64_t pgd_maddr; /* io page directory machine address */
            domid_t *didmap; /* per-iommu DID */
            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the context uses */
        } vtd;
        /* AMD IOMMU */
        struct {
            struct page_info *root_table;
        } amd;
    };
};

struct arch_iommu
{
    spinlock_t mapping_lock; /* io page table lock */
    struct {
        struct page_list_head list;
        spinlock_t lock;
    } pgtables;

    struct list_head identity_maps;

    union {
        /* Intel VT-d */
        struct {
            /* no more context-specific values */
            unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
        } vtd;
        /* AMD IOMMU */
        struct {
            unsigned int paging_mode;
            struct guest_iommu *g_iommu;
        } amd;
    };
};
```

IOMMU context information is now carried by iommu_context rather than being
integrated into struct arch_iommu.

# Xen domain IOMMU structure

`struct domain_iommu` is modified to allow multiple contexts to exist within a
single Xen domain:

```c
struct iommu_context_list {
    uint16_t count; /* Context count excluding default context */

    /* if count > 0 */

    uint64_t *bitmap; /* bitmap of context allocation */
    struct iommu_context *map; /* Map of contexts */
};

struct domain_iommu {
    /* ... */

    struct iommu_context default_ctx;
    struct iommu_context_list other_contexts;

    /* ... */
}
```

default_ctx is a special context with id=0 that holds the page table mapping
the entire domain, which basically preserves the previous behavior. All
devices are expected to be bound to this context during initialization.

Along with this default context that always exists, we use a pool of contexts
with a fixed size chosen at domain initialization, where contexts can be
allocated (if possible) and have an id matching their position in the map
(considering that id != 0).
These contexts may be used by IOMMU context users such as PV-IOMMU or the
quarantine domain (DomIO).

# Platform independent context management interface

A new platform-independent interface is introduced in the Xen hypervisor to
allow IOMMU context users to create and manage contexts within domains.

```c
/* Direct context access functions (not supposed to be used directly) */
struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no);
void iommu_put_context(struct iommu_context *ctx);

/* Flag for default context initialization */
#define IOMMU_CONTEXT_INIT_default (1 << 0)

/* Flag for quarantine contexts (scratch page, DMA Abort mode, ...) */
#define IOMMU_CONTEXT_INIT_quarantine (1 << 1)

int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags);

/* Flag to specify that devices will need to be reattached to default domain */
#define IOMMU_TEARDOWN_REATTACH_DEFAULT (1 << 0)

/*
 * Flag to specify that the context needs to be destroyed preemptively
 * (multiple calls to iommu_context_teardown will be required)
 */
#define IOMMU_TEARDOWN_PREEMPT (1 << 1)

int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);

/* Allocate a new context, uses CONTEXT_INIT flags */
int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags);

/* Free a context, uses CONTEXT_TEARDOWN flags */
int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags);

/* Move a device from one context to another, including between different domains. */
int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
                           device_t *dev, u16 ctx_no);

/* Add a device to a context for first initialization */
int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);

/* Remove a device from a context, effectively removing it from the IOMMU. */
int iommu_detach_context(struct domain *d, device_t *dev);
```

This interface relies on a new interface with the IOMMU drivers to implement
these features.

Some existing functions gain a new parameter to specify which context the
operation applies to:
- iommu_map (iommu_legacy_map untouched)
- iommu_unmap (iommu_legacy_unmap untouched)
- iommu_lookup_page
- iommu_iotlb_flush

These functions will modify the iommu_context structure to accommodate the
operations applied; they will be used to replace some operations previously
made in the IOMMU driver.

# IOMMU platform_ops interface changes

The IOMMU driver needs to expose a way to create and manage IOMMU contexts.
The approach taken here is to modify the interface to allow specifying an
IOMMU context on operations and, at the same time, to simplify the interface
by relying more on platform-independent IOMMU code.

Added functions in iommu_ops:

```c
/* Initialize a context (creating page tables, allocating hardware, structures, ...) */
int (*context_init)(struct domain *d, struct iommu_context *ctx,
                    u32 flags);
/* Destroy a context, assumes no device is bound to the context. */
int (*context_teardown)(struct domain *d, struct iommu_context *ctx,
                        u32 flags);
/* Put a device in a context (assumes the device is not attached to another context) */
int (*attach)(struct domain *d, device_t *dev,
              struct iommu_context *ctx);
/* Remove a device from a context, and from the IOMMU. */
int (*detach)(struct domain *d, device_t *dev,
              struct iommu_context *prev_ctx);
/* Move the device from a context to another, including if the new context is in
   another domain. d corresponds to the target domain. */
int (*reattach)(struct domain *d, device_t *dev,
                struct iommu_context *prev_ctx,
                struct iommu_context *ctx);

#ifdef CONFIG_HAS_PCI
/* Specific interface for phantom function devices. */
int (*add_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
                 struct iommu_context *ctx);
int (*remove_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
                    struct iommu_context *ctx);
#endif

/* Changes in existing ops to use a specified iommu_context. */
int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
                             unsigned int flags,
                             unsigned int *flush_flags,
                             struct iommu_context *ctx);
int __must_check (*unmap_page)(struct domain *d, dfn_t dfn,
                               unsigned int order,
                               unsigned int *flush_flags,
                               struct iommu_context *ctx);
int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                unsigned int *flags,
                                struct iommu_context *ctx);

int __must_check (*iotlb_flush)(struct domain *d,
                                struct iommu_context *ctx, dfn_t dfn,
                                unsigned long page_count,
                                unsigned int flush_flags);

void (*clear_root_pgtable)(struct domain *d, struct iommu_context *ctx);
```

These functions are redundant with existing functions; therefore, the
following functions are replaced with new equivalents:
- quarantine_init : platform-independent code and IOMMU_CONTEXT_INIT_quarantine flag
- add_device : attach and add_devfn (phantom)
- assign_device : attach and add_devfn (phantom)
- remove_device : detach and remove_devfn (phantom)
- reassign_device : reattach

There are some functional differences with the previous functions; the
following should be handled by platform-independent/arch-specific code instead
of the IOMMU driver:
- identity mappings (unity mappings and rmrr)
- device list in context and domain
- domain of a device
- quarantine

The idea behind this is to implement IOMMU context features while simplifying
IOMMU driver implementations and ensuring more consistency between IOMMU
drivers.

## Phantom function handling

PCI devices may use additional devfns to do DMA operations. In order to
support such devices, an interface is added to map specific device functions
without implying that the device is mapped to a new context (which could cause
duplicates in Xen data structures).

The add_devfn and remove_devfn functions allow mapping an IOMMU context on a
specific devfn of a PCI device, without altering platform-independent data
structures.

It is important for the reattach operation to take care of these devices, in
order to prevent a device from being partially reattached to the new context
(see XSA-449 [2]), by using an all-or-nothing approach for reattaching such
devices.
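A simplified sketch of that all-or-nothing walk, modeled on the
iommu_reattach_phantom() helper added later in this series (the exact code in
the implementation patch differs slightly):

```c
/* Illustrative sketch: walk the phantom devfns of a device and attach each
 * of them to ctx; on failure the caller is expected to roll everything back
 * so the device is never left split across two contexts. */
static int reattach_phantom_functions(struct domain *d, struct pci_dev *pdev,
                                      struct iommu_context *ctx)
{
    uint8_t devfn = pdev->devfn;
    int ret = 0;

    while ( pdev->phantom_stride )
    {
        devfn += pdev->phantom_stride;
        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
            break;                      /* walked past the device's slot */

        ret = iommu_call(dom_iommu(d)->platform_ops, add_devfn, d, pdev,
                         devfn, ctx);
        if ( ret )
            break;                      /* caller rolls back devfns added so far */
    }

    return ret;
}
```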
# Quarantine refactoring using IOMMU contexts

The quarantine mechanism can be entirely reimplemented using IOMMU contexts,
making it simpler and more consistent between platforms.

Quarantine is currently only supported on x86 platforms and works by creating
a single *hardware IOMMU domain* per quarantined device. All the quarantine
logic is then implemented in a platform-specific fashion, while actually
implementing the same concepts:

The *hardware IOMMU context* data structures for quarantine are currently
stored in the device structure itself (using arch_pci_dev), and the IOMMU
driver needs to care about whether we are dealing with quarantine operations
or regular operations (often handled using macros such as QUARANTINE_SKIP or
DEVICE_PGTABLE).

The page table that will apply to the quarantined device is created taking
reserved device regions into account, adding mappings to a scratch page if
enabled (quarantine=scratch-page).

A new approach is to allow the quarantine domain (DomIO) to manage IOMMU
contexts, and to implement all the quarantine logic using IOMMU contexts.

That way, the quarantine implementation can be platform-independent and thus
more consistent between platforms. It also allows quarantine to work with
other IOMMU implementations without having to implement platform-specific
behavior. Moreover, quarantine operations can be implemented using regular
context operations instead of relying on driver-specific code.

The quarantine implementation can be summarised as:

```c
int iommu_quarantine_dev_init(device_t *dev)
{
    int ret;
    u16 ctx_no;

    if ( !iommu_quarantine )
        return -EINVAL;

    ret = iommu_context_alloc(dom_io, &ctx_no, IOMMU_CONTEXT_INIT_quarantine);

    if ( ret )
        return ret;

    /** TODO: Setup scratch page, mappings... */

    ret = iommu_reattach_context(dev->domain, dom_io, dev, ctx_no);

    if ( ret )
    {
        ASSERT(!iommu_context_free(dom_io, ctx_no, 0));
        return ret;
    }

    return ret;
}
```

# Platform-specific considerations

## Reference counters on target pages

When mapping a guest page onto an IOMMU context, we need to make sure that
this page is not reused for something else while it is actually referenced by
an IOMMU context. One way of doing this is to increment the reference counter
of each target page we map (excluding reserved regions), and to decrement it
when the mapping isn't used anymore.

One consideration is destroying a context while it still has existing
mappings. We can walk through the entire page table and decrement the
reference counter of all mappings. All of that assumes that there is no
reserved region mapped (which should be the case as a requirement of teardown,
or as a consequence of the REATTACH_DEFAULT flag).

Another consideration is that the "cleanup mappings" operation may take a lot
of time depending on the complexity of the page table. Making the teardown
operation preemptible allows the hypercall to be preempted if needed, also
preventing a malicious guest from stalling a CPU in a teardown operation with
a specially crafted IOMMU context (e.g. with several 1G superpages).

## Limit the amount of pages IOMMU contexts can use

In order to prevent a (potentially malicious) guest from causing too many
allocations in Xen, we can enforce limits on the memory the IOMMU subsystem
can use for IOMMU contexts. A possible implementation is to preallocate a
reasonably large chunk of memory and split it into pages for use by the IOMMU
subsystem, only for non-default IOMMU contexts (e.g. the PV-IOMMU interface);
if this limit is exceeded, some operations may fail from the guest side. These
limitations shouldn't impact "usual" operations of the IOMMU subsystem (e.g.
default context initialization).

## x86 Architecture

TODO

### Intel VT-d

VT-d uses a DID to tag the *IOMMU domain* applied to a device and assumes that
all entries with the same DID use the same page table (i.e. the same IOMMU
context). Under certain circumstances (e.g. a DRHD with a DID limit below
16 bits), the *DID* is transparently converted into a DRHD-specific DID using
a map managed internally.

The current implementation of the code reuses the Xen domain_id as the DID.
However, with multiple IOMMU contexts per domain, we can't use the domain_id
for all contexts (otherwise, different page tables would be mapped with the
same DID).
The following strategy is used (a minimal sketch follows the list):
- on the default context, reuse the domain_id (the default context is unique
  per domain)
- on non-default contexts, use an id allocated in the pseudo_domid map
  (currently used by quarantine), which is a DID outside of the Xen domain_id
  range
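As a minimal sketch of this strategy (the helper below is hypothetical, not
the actual VT-d driver code):

```c
/* Illustrative sketch of the DID selection strategy above; the allocator
 * name is an assumption. */
static domid_t context_did(const struct domain *d,
                           const struct iommu_context *ctx)
{
    if ( ctx->id == 0 )
        /* Default context: keep reusing the Xen domain_id as before. */
        return d->domain_id;

    /* Non-default context: use an identifier allocated outside the Xen
     * domain_id range, as the quarantine code already does (pseudo_domid). */
    return pseudo_domid_alloc(ctx); /* hypothetical allocator */
}
```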
### AMD-Vi

TODO

## Device-tree platforms

### SMMU and SMMUv3

TODO

* * *

[1] See pv-iommu.md

[2] pci: phantom functions assigned to incorrect contexts
https://xenbits.xen.org/xsa/advisory-449.html
-- 
2.45.2


Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech

From: "Teddy Astie" <teddy.astie@vates.tech>
Subject: [XEN RFC PATCH v4 3/5] IOMMU: Introduce redesigned IOMMU subsystem
To: xen-devel@lists.xenproject.org
Cc: "Teddy Astie", "Jan Beulich", "Andrew Cooper", Roger Pau Monné, "Julien Grall",
    "Stefano Stabellini", "Lukasz Hawrylko", "Daniel P. Smith", Mateusz Mówka
Message-Id: <648b935db05782d672c5b422c0e3ee63c5d70a89.1730718102.git.teddy.astie@vates.tech>
Date: Mon, 04 Nov 2024 14:28:40 +0000

Based on docs/designs/iommu-contexts.md, implement the redesigned IOMMU
subsystem.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changed in V2:
* cleanup some unneeded includes
* fix dangling devices in context on detach

Changed in V3:
* add unlocked _iommu_lookup_page
* iommu_check_context+iommu_get_context -> iommu_get_context and check for NULL
* prevent IOMMU operations on dying contexts

Changed in V4:
* changed context lock logic : iommu_get_context -> iommu_get_context+iommu_put_context
* added no-dma mode (see cover letter)
* use new initialization logic
---
 xen/arch/x86/domain.c                |   2 +-
 xen/arch/x86/mm/p2m-ept.c            |   2 +-
 xen/arch/x86/pv/dom0_build.c         |   4 +-
 xen/arch/x86/tboot.c                 |   4 +-
 xen/common/memory.c                  |   4 +-
 xen/drivers/passthrough/Makefile     |   3 +
 xen/drivers/passthrough/context.c    | 711 +++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu.c      | 396 ++++++---------
 xen/drivers/passthrough/pci.c        | 117 +----
 xen/drivers/passthrough/quarantine.c |  49 ++
 xen/include/xen/iommu.h              | 117 ++++-
 xen/include/xen/pci.h                |   3 +
 12 files changed, 1032 insertions(+), 380 deletions(-)
 create mode 100644 xen/drivers/passthrough/context.c
 create mode 100644 xen/drivers/passthrough/quarantine.c

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 89aad7e897..abd9c79274 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2391,7 +2391,7 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        ret = iommu_free_pgtables(d, iommu_default_context(d));
         if ( ret )
             return ret;
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 21728397f9..5ddeefb826 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -974,7 +974,7 @@ out:
         rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
                                (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                : 0));
+                                                : 0), 0);
     else if ( need_iommu_pt_sync(d) )
         rc = iommu_flags ?
             iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 262edb6bf2..a6685b6b44 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -76,7 +76,7 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
          * iommu_memory_setup() ended up mapping them.
          */
         if ( need_iommu_pt_sync(d) &&
-             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, 0, flush_flags) )
+             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, 0, flush_flags, 0) )
             BUG();
 
         /* Read-only mapping + PGC_allocated + page-table page.
*/ @@ -127,7 +127,7 @@ static void __init iommu_memory_setup(struct domain *d,= const char *what, =20 while ( (rc =3D iommu_map(d, _dfn(mfn_x(mfn)), mfn, nr, IOMMUF_readable | IOMMUF_writable | IOMMUF_pre= empt, - flush_flags)) > 0 ) + flush_flags, 0)) > 0 ) { mfn =3D mfn_add(mfn, rc); nr -=3D rc; diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c index d5db60d335..25a5a66412 100644 --- a/xen/arch/x86/tboot.c +++ b/xen/arch/x86/tboot.c @@ -218,9 +218,9 @@ static void tboot_gen_domain_integrity(const uint8_t ke= y[TB_KEY_SIZE], =20 if ( is_iommu_enabled(d) && is_vtd ) { - const struct domain_iommu *dio =3D dom_iommu(d); + struct domain_iommu *dio =3D dom_iommu(d); =20 - update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr, + update_iommu_mac(&ctx, iommu_default_context(d)->arch.vtd.pgd_= maddr, agaw_to_level(dio->arch.vtd.agaw)); } } diff --git a/xen/common/memory.c b/xen/common/memory.c index a6f2f6d1b3..acf305bcd0 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -926,7 +926,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_= add_to_physmap *xatp, this_cpu(iommu_dont_flush_iotlb) =3D 0; =20 ret =3D iommu_iotlb_flush(d, _dfn(xatp->idx - done), done, - IOMMU_FLUSHF_modified); + IOMMU_FLUSHF_modified, 0); if ( unlikely(ret) && rc >=3D 0 ) rc =3D ret; =20 @@ -940,7 +940,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_= add_to_physmap *xatp, put_page(pages[i]); =20 ret =3D iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done, - IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified= ); + IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified= , 0); if ( unlikely(ret) && rc >=3D 0 ) rc =3D ret; } diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Mak= efile index a1621540b7..69327080ab 100644 --- a/xen/drivers/passthrough/Makefile +++ b/xen/drivers/passthrough/Makefile @@ -4,6 +4,9 @@ obj-$(CONFIG_X86) +=3D x86/ obj-$(CONFIG_ARM) +=3D arm/ =20 obj-y +=3D iommu.o +obj-y +=3D context.o +obj-y +=3D quarantine.o + obj-$(CONFIG_HAS_PCI) +=3D pci.o obj-$(CONFIG_HAS_DEVICE_TREE) +=3D device_tree.o obj-$(CONFIG_HAS_PCI) +=3D ats.o diff --git a/xen/drivers/passthrough/context.c b/xen/drivers/passthrough/co= ntext.c new file mode 100644 index 0000000000..edf660b617 --- /dev/null +++ b/xen/drivers/passthrough/context.c @@ -0,0 +1,711 @@ +/* + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License f= or + * more details. + * + * You should have received a copy of the GNU General Public License along= with + * this program; If not, see . + */ + +#include +#include +#include +#include +#include +#include + +bool iommu_check_context(struct domain *d, u16 ctx_no) { + struct domain_iommu *hd =3D dom_iommu(d); + + if (ctx_no =3D=3D 0) + return 1; /* Default context always exist. 
*/ + + if ((ctx_no - 1) >=3D hd->other_contexts.count) + return 0; /* out of bounds */ + + return test_bit(ctx_no - 1, hd->other_contexts.bitmap); +} + +struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no) { + struct domain_iommu *hd =3D dom_iommu(d); + struct iommu_context *ctx; + + if ( !iommu_check_context(d, ctx_no) ) + return NULL; + + if (ctx_no =3D=3D 0) + ctx =3D &hd->default_ctx; + else + ctx =3D &hd->other_contexts.map[ctx_no - 1]; + + rspin_lock(&ctx->lock); + /* Check if the context is still valid at this point */ + if ( unlikely(!iommu_check_context(d, ctx_no)) ) + { + /* Context has been destroyed in between */ + rspin_unlock(&ctx->lock); + return NULL; + } + + return ctx; +} + +void iommu_put_context(struct iommu_context *ctx) +{ + rspin_unlock(&ctx->lock); +} + +static unsigned int mapping_order(const struct domain_iommu *hd, + dfn_t dfn, mfn_t mfn, unsigned long nr) +{ + unsigned long res =3D dfn_x(dfn) | mfn_x(mfn); + unsigned long sizes =3D hd->platform_ops->page_sizes; + unsigned int bit =3D ffsl(sizes) - 1, order =3D 0; + + ASSERT(bit =3D=3D PAGE_SHIFT); + + while ( (sizes =3D (sizes >> bit) & ~1) ) + { + unsigned long mask; + + bit =3D ffsl(sizes) - 1; + mask =3D (1UL << bit) - 1; + if ( nr <=3D mask || (res & mask) ) + break; + order +=3D bit; + nr >>=3D bit; + res >>=3D bit; + } + + return order; +} + +static long _iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0, + unsigned long page_count, unsigned int flags, + unsigned int *flush_flags, struct iommu_context *ct= x) +{ + struct domain_iommu *hd =3D dom_iommu(d); + unsigned long i; + unsigned int order, j =3D 0; + int rc =3D 0; + + if ( !is_iommu_enabled(d) ) + return 0; + + ASSERT(!IOMMUF_order(flags)); + + for ( i =3D 0; i < page_count; i +=3D 1UL << order ) + { + dfn_t dfn =3D dfn_add(dfn0, i); + mfn_t mfn =3D mfn_add(mfn0, i); + + order =3D mapping_order(hd, dfn, mfn, page_count - i); + + if ( (flags & IOMMUF_preempt) && + ((!(++j & 0xfff) && general_preempt_check()) || + i > LONG_MAX - (1UL << order)) ) + return i; + + rc =3D iommu_call(hd->platform_ops, map_page, d, dfn, mfn, + flags | IOMMUF_order(order), flush_flags, ctx); + + if ( likely(!rc) ) + continue; + + if ( !d->is_shutting_down && printk_ratelimit() ) + printk(XENLOG_ERR + "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" fa= iled: %d\n", + d->domain_id, dfn_x(dfn), mfn_x(mfn), rc); + + /* while statement to satisfy __must_check */ + while ( iommu_unmap(d, dfn0, i, 0, flush_flags, ctx->id) ) + break; + + if ( !ctx->id && !is_hardware_domain(d) ) + domain_crash(d); + + break; + } + + /* + * Something went wrong so, if we were dealing with more than a single + * page, flush everything and clear flush flags. 
+ */ + if ( page_count > 1 && unlikely(rc) && + !iommu_iotlb_flush_all(d, *flush_flags) ) + *flush_flags =3D 0; + + return rc; +} + +long iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0, + unsigned long page_count, unsigned int flags, + unsigned int *flush_flags, u16 ctx_no) +{ + struct iommu_context *ctx; + long ret; + + if ( !(ctx =3D iommu_get_context(d, ctx_no)) ) + return -ENOENT; + + ret =3D _iommu_map(d, dfn0, mfn0, page_count, flags, flush_flags, ctx); + + iommu_put_context(ctx); + + return ret; +} + +int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn, + unsigned long page_count, unsigned int flags) +{ + struct iommu_context *ctx; + unsigned int flush_flags =3D 0; + int rc =3D 0; + + ASSERT(!(flags & IOMMUF_preempt)); + + if ( dom_iommu(d)->no_dma ) + return 0; + + ctx =3D iommu_get_context(d, 0); + + if ( !ctx->opaque ) + { + rc =3D iommu_map(d, dfn, mfn, page_count, flags, &flush_flags, 0); + + if ( !this_cpu(iommu_dont_flush_iotlb) && !rc ) + rc =3D iommu_iotlb_flush(d, dfn, page_count, flush_flags, 0); + } + + iommu_put_context(ctx); + + return rc; +} + +static long _iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_= count, + unsigned int flags, unsigned int *flush_flags, + struct iommu_context *ctx) +{ + struct domain_iommu *hd =3D dom_iommu(d); + unsigned long i; + unsigned int order, j =3D 0; + int rc =3D 0; + + if ( !is_iommu_enabled(d) ) + return 0; + + ASSERT(!(flags & ~IOMMUF_preempt)); + + for ( i =3D 0; i < page_count; i +=3D 1UL << order ) + { + dfn_t dfn =3D dfn_add(dfn0, i); + int err; + + order =3D mapping_order(hd, dfn, _mfn(0), page_count - i); + + if ( (flags & IOMMUF_preempt) && + ((!(++j & 0xfff) && general_preempt_check()) || + i > LONG_MAX - (1UL << order)) ) + return i; + + err =3D iommu_call(hd->platform_ops, unmap_page, d, dfn, + flags | IOMMUF_order(order), flush_flags, + ctx); + + if ( likely(!err) ) + continue; + + if ( !d->is_shutting_down && printk_ratelimit() ) + printk(XENLOG_ERR + "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n", + d->domain_id, dfn_x(dfn), err); + + if ( !rc ) + rc =3D err; + + if ( !ctx->id && !is_hardware_domain(d) ) + { + domain_crash(d); + break; + } + } + + /* + * Something went wrong so, if we were dealing with more than a single + * page, flush everything and clear flush flags. 
+ */ + if ( page_count > 1 && unlikely(rc) && + !iommu_iotlb_flush_all(d, *flush_flags) ) + *flush_flags =3D 0; + + return rc; +} + +long iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count, + unsigned int flags, unsigned int *flush_flags, + u16 ctx_no) +{ + struct iommu_context *ctx; + long ret; + + if ( !(ctx =3D iommu_get_context(d, ctx_no)) ) + return -ENOENT; + + ret =3D _iommu_unmap(d, dfn0, page_count, flags, flush_flags, ctx); + + iommu_put_context(ctx); + + return ret; +} + +int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_cou= nt) +{ + unsigned int flush_flags =3D 0; + struct iommu_context *ctx; + int rc; + + if ( dom_iommu(d)->no_dma ) + return 0; + + ctx =3D iommu_get_context(d, 0); + + if ( ctx->opaque ) + return 0; + + rc =3D iommu_unmap(d, dfn, page_count, 0, &flush_flags, 0); + + if ( !this_cpu(iommu_dont_flush_iotlb) && !rc ) + rc =3D iommu_iotlb_flush(d, dfn, page_count, flush_flags, 0); + + iommu_put_context(ctx); + + return rc; +} + +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn, + unsigned int *flags, u16 ctx_no) +{ + struct domain_iommu *hd =3D dom_iommu(d); + struct iommu_context *ctx; + int ret =3D 0; + + if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page ) + return -EOPNOTSUPP; + + if ( !(ctx =3D iommu_get_context(d, ctx_no)) ) + return -ENOENT; + + ret =3D iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags, = ctx); + + iommu_put_context(ctx); + return ret; +} + +int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_coun= t, + unsigned int flush_flags, u16 ctx_no) +{ + struct domain_iommu *hd =3D dom_iommu(d); + struct iommu_context *ctx; + int rc; + + if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush || + !page_count || !flush_flags ) + return 0; + + if ( dfn_eq(dfn, INVALID_DFN) ) + return -EINVAL; + + if ( !(ctx =3D iommu_get_context(d, ctx_no)) ) + return -ENOENT; + + rc =3D iommu_call(hd->platform_ops, iotlb_flush, d, ctx, dfn, page_cou= nt, + flush_flags); + if ( unlikely(rc) ) + { + if ( !d->is_shutting_down && printk_ratelimit() ) + printk(XENLOG_ERR + "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", pag= e count %lu flags %x\n", + d->domain_id, rc, dfn_x(dfn), page_count, flush_flags); + + if ( !ctx->id && !is_hardware_domain(d) ) + domain_crash(d); + } + + iommu_put_context(ctx); + + return rc; +} + +int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ct= x_no, + u32 flags) +{ + if ( !dom_iommu(d)->platform_ops->context_init ) + return -ENOSYS; + + INIT_LIST_HEAD(&ctx->devices); + ctx->id =3D ctx_no; + ctx->dying =3D false; + ctx->opaque =3D false; /* assume opaque by default */ + + return iommu_call(dom_iommu(d)->platform_ops, context_init, d, ctx, fl= ags); +} + +int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags) +{ + unsigned int i; + int ret; + struct domain_iommu *hd =3D dom_iommu(d); + struct iommu_context *ctx; + + do { + i =3D find_first_zero_bit(hd->other_contexts.bitmap, hd->other_con= texts.count); + + if ( i >=3D hd->other_contexts.count ) + return -ENOSPC; + + ctx =3D &hd->other_contexts.map[i]; + + /* Try to lock the mutex, can fail on concurrent accesses */ + if ( !rspin_trylock(&ctx->lock) ) + continue; + + /* We can now set it as used, we keep the lock for initialization.= */ + set_bit(i, hd->other_contexts.bitmap); + } while (0); + + *ctx_no =3D i + 1; + + ret =3D iommu_context_init(d, ctx, *ctx_no, flags); + + if ( ret ) + clear_bit(*ctx_no, hd->other_contexts.bitmap); + + iommu_put_context(ctx); + 
return ret; +} + +/** + * Attach dev phantom functions to ctx, override any existing + * mapped context. + */ +static int iommu_reattach_phantom(struct domain *d, device_t *dev, + struct iommu_context *ctx) +{ + int ret =3D 0; + uint8_t devfn =3D dev->devfn; + struct domain_iommu *hd =3D dom_iommu(d); + + while ( dev->phantom_stride ) + { + devfn +=3D dev->phantom_stride; + + if ( PCI_SLOT(devfn) !=3D PCI_SLOT(dev->devfn) ) + break; + + ret =3D iommu_call(hd->platform_ops, add_devfn, d, dev, devfn, ctx= ); + + if ( ret ) + break; + } + + return ret; +} + +/** + * Detach all device phantom functions. + */ +static int iommu_detach_phantom(struct domain *d, device_t *dev) +{ + int ret =3D 0; + uint8_t devfn =3D dev->devfn; + struct domain_iommu *hd =3D dom_iommu(d); + + while ( dev->phantom_stride ) + { + devfn +=3D dev->phantom_stride; + + if ( PCI_SLOT(devfn) !=3D PCI_SLOT(dev->devfn) ) + break; + + ret =3D iommu_call(hd->platform_ops, remove_devfn, d, dev, devfn); + + if ( ret ) + break; + } + + return ret; +} + +int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no) +{ + struct iommu_context *ctx =3D NULL; + int ret, rc; + + if ( !(ctx =3D iommu_get_context(d, ctx_no)) ) + { + ret =3D -ENOENT; + goto unlock; + } + + pcidevs_lock(); + + if ( ctx->dying ) + { + ret =3D -EINVAL; + goto unlock; + } + + ret =3D iommu_call(dom_iommu(d)->platform_ops, attach, d, dev, ctx); + + if ( ret ) + goto unlock; + + /* See iommu_reattach_context() */ + rc =3D iommu_reattach_phantom(d, dev, ctx); + + if ( rc ) + { + printk(XENLOG_ERR "IOMMU: Unable to attach %pp phantom functions\n= ", + &dev->sbdf); + + if( iommu_call(dom_iommu(d)->platform_ops, detach, d, dev, ctx) + || iommu_detach_phantom(d, dev) ) + { + printk(XENLOG_ERR "IOMMU: Improperly detached %pp\n", &dev->sb= df); + WARN(); + } + + ret =3D -EIO; + goto unlock; + } + + dev->context =3D ctx_no; + list_add(&dev->context_list, &ctx->devices); + +unlock: + pcidevs_unlock(); + + if ( ctx ) + iommu_put_context(ctx); + + return ret; +} + +int iommu_detach_context(struct domain *d, device_t *dev) +{ + struct iommu_context *ctx; + int ret, rc; + + if ( !dev->domain ) + { + printk(XENLOG_WARNING "IOMMU: Trying to detach a non-attached devi= ce\n"); + WARN(); + return 0; + } + + /* Make sure device is actually in the domain. */ + ASSERT(d =3D=3D dev->domain); + + pcidevs_lock(); + + ctx =3D iommu_get_context(d, dev->context); + ASSERT(ctx); /* device is using an invalid context ? + dev->context invalid ? */ + + ret =3D iommu_call(dom_iommu(d)->platform_ops, detach, d, dev, ctx); + + if ( ret ) + goto unlock; + + rc =3D iommu_detach_phantom(d, dev); + + if ( rc ) + printk(XENLOG_WARNING "IOMMU: " + "Improperly detached device functions (%d)\n", rc); + + list_del(&dev->context_list); + +unlock: + pcidevs_unlock(); + iommu_put_context(ctx); + return ret; +} + +int iommu_reattach_context(struct domain *prev_dom, struct domain *next_do= m, + device_t *dev, u16 ctx_no) +{ + u16 prev_ctx_no; + device_t *ctx_dev; + struct domain_iommu *prev_hd, *next_hd; + struct iommu_context *prev_ctx =3D NULL, *next_ctx =3D NULL; + int ret, rc; + bool same_domain; + + /* Make sure we actually are doing something meaningful */ + BUG_ON(!prev_dom && !next_dom); + + /// TODO: Do such cases exists ? 
+ // /* Platform ops must match */ + // if (dom_iommu(prev_dom)->platform_ops !=3D dom_iommu(next_dom)->pla= tform_ops) + // return -EINVAL; + + if ( !prev_dom ) + return iommu_attach_context(next_dom, dev, ctx_no); + + if ( !next_dom ) + return iommu_detach_context(prev_dom, dev); + + prev_hd =3D dom_iommu(prev_dom); + next_hd =3D dom_iommu(next_dom); + + pcidevs_lock(); + + same_domain =3D prev_dom =3D=3D next_dom; + + prev_ctx_no =3D dev->context; + + if ( !same_domain && (ctx_no =3D=3D prev_ctx_no) ) + { + printk(XENLOG_DEBUG + "IOMMU: Reattaching %pp to same IOMMU context c%hu\n", + &dev, ctx_no); + ret =3D 0; + goto unlock; + } + + if ( !(prev_ctx =3D iommu_get_context(prev_dom, prev_ctx_no)) ) + { + ret =3D -ENOENT; + goto unlock; + } + + if ( !(next_ctx =3D iommu_get_context(next_dom, ctx_no)) ) + { + ret =3D -ENOENT; + goto unlock; + } + + if ( next_ctx->dying ) + { + ret =3D -EINVAL; + goto unlock; + } + + ret =3D iommu_call(prev_hd->platform_ops, reattach, next_dom, dev, pre= v_ctx, + next_ctx); + + if ( ret ) + goto unlock; + + /* + * We need to do special handling for phantom devices as they + * also use some other PCI functions behind the scenes. + */ + rc =3D iommu_reattach_phantom(next_dom, dev, next_ctx); + + if ( rc ) + { + /** + * Device is being partially reattached (we have primary function = and + * maybe some phantom functions attached to next_ctx, some others = to prev_ctx), + * some functions of the device will be attached to next_ctx. + */ + printk(XENLOG_WARNING "IOMMU: " + "Device %pp improperly reattached due to phantom function" + " reattach failure between %dd%dc and %dd%dc (%d)\n", dev, + prev_dom->domain_id, prev_ctx->id, next_dom->domain_id, + next_dom->domain_id, rc); + + /* Try reattaching to previous context, reverting into a consisten= t state. */ + if ( iommu_call(prev_hd->platform_ops, reattach, prev_dom, dev, ne= xt_ctx, + prev_ctx) || iommu_reattach_phantom(prev_dom, dev,= prev_ctx) ) + { + printk(XENLOG_ERR "Unable to reattach %pp back to %dd%dc\n", + &dev->sbdf, prev_dom->domain_id, prev_ctx->id); + + if ( !is_hardware_domain(prev_dom) ) + domain_crash(prev_dom); + + if ( prev_dom !=3D next_dom && !is_hardware_domain(next_dom) ) + domain_crash(next_dom); + + rc =3D -EIO; + } + + ret =3D rc; + goto unlock; + } + + /* Remove device from previous context, and add it to new one. 
*/ + list_for_each_entry(ctx_dev, &prev_ctx->devices, context_list) + { + if ( ctx_dev =3D=3D dev ) + { + list_del(&ctx_dev->context_list); + list_add(&ctx_dev->context_list, &next_ctx->devices); + break; + } + } + + if (!ret) + dev->context =3D ctx_no; /* update device context*/ + +unlock: + pcidevs_unlock(); + + if ( prev_ctx ) + iommu_put_context(prev_ctx); + + if ( next_ctx ) + iommu_put_context(next_ctx); + + return ret; +} + +int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u3= 2 flags) +{ + struct domain_iommu *hd =3D dom_iommu(d); + + if ( !hd->platform_ops->context_teardown ) + return -ENOSYS; + + ctx->dying =3D true; + + /* first reattach devices back to default context if needed */ + if ( flags & IOMMU_TEARDOWN_REATTACH_DEFAULT ) + { + struct pci_dev *device; + list_for_each_entry(device, &ctx->devices, context_list) + iommu_reattach_context(d, d, device, 0); + } + else if (!list_empty(&ctx->devices)) + return -EBUSY; /* there is a device in context */ + + return iommu_call(hd->platform_ops, context_teardown, d, ctx, flags); +} + +int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags) +{ + int ret; + struct domain_iommu *hd =3D dom_iommu(d); + struct iommu_context *ctx; + + if ( ctx_no =3D=3D 0 ) + return -EINVAL; + + if ( !(ctx =3D iommu_get_context(d, ctx_no)) ) + return -ENOENT; + + ret =3D iommu_context_teardown(d, ctx, flags); + + if ( !ret ) + clear_bit(ctx_no - 1, hd->other_contexts.bitmap); + + iommu_put_context(ctx); + return ret; +} diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iomm= u.c index 9e74a1fc72..e109ebe404 100644 --- a/xen/drivers/passthrough/iommu.c +++ b/xen/drivers/passthrough/iommu.c @@ -12,15 +12,18 @@ * this program; If not, see . */ =20 +#include +#include +#include +#include #include +#include #include -#include -#include -#include #include -#include #include -#include +#include +#include +#include =20 #ifdef CONFIG_X86 #include @@ -35,26 +38,11 @@ bool __read_mostly force_iommu; bool __read_mostly iommu_verbose; static bool __read_mostly iommu_crash_disable; =20 -#define IOMMU_quarantine_none 0 /* aka false */ -#define IOMMU_quarantine_basic 1 /* aka true */ -#define IOMMU_quarantine_scratch_page 2 -#ifdef CONFIG_HAS_PCI -uint8_t __read_mostly iommu_quarantine =3D -# if defined(CONFIG_IOMMU_QUARANTINE_NONE) - IOMMU_quarantine_none; -# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC) - IOMMU_quarantine_basic; -# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE) - IOMMU_quarantine_scratch_page; -# endif -#else -# define iommu_quarantine IOMMU_quarantine_none -#endif /* CONFIG_HAS_PCI */ - static bool __hwdom_initdata iommu_hwdom_none; bool __hwdom_initdata iommu_hwdom_strict; bool __read_mostly iommu_hwdom_passthrough; bool __hwdom_initdata iommu_hwdom_inclusive; +bool __read_mostly iommu_hwdom_no_dma =3D false; int8_t __hwdom_initdata iommu_hwdom_reserved =3D -1; =20 #ifndef iommu_hap_pt_share @@ -172,6 +160,8 @@ static int __init cf_check parse_dom0_iommu_param(const= char *s) iommu_hwdom_reserved =3D val; else if ( !cmdline_strcmp(s, "none") ) iommu_hwdom_none =3D true; + else if ( (val =3D parse_boolean("dma", s, ss)) >=3D 0 ) + iommu_hwdom_no_dma =3D !val; else rc =3D -EINVAL; =20 @@ -193,6 +183,98 @@ static void __hwdom_init check_hwdom_reqs(struct domai= n *d) arch_iommu_check_autotranslated_hwdom(d); } =20 +int iommu_domain_pviommu_init(struct domain *d, uint16_t nb_ctx, uint32_t = arena_order) +{ + struct domain_iommu *hd =3D dom_iommu(d); + int rc; + + BUG_ON(nb_ctx =3D=3D 0); /* sanity 
check (prevent underflow) */ + + /* + * hd->other_contexts.count is always reported as 0 during initializat= ion + * preventing misuse of partially initialized IOMMU contexts. + */ + + if ( atomic_cmpxchg(&hd->other_contexts.initialized, 0, 1) =3D=3D 1 ) + return -EACCES; + + if ( (nb_ctx - 1) > 0 ) { + /* Initialize context bitmap */ + size_t i; + + hd->other_contexts.bitmap =3D xzalloc_array(unsigned long, + BITS_TO_LONGS(nb_ctx - 1= )); + + if (!hd->other_contexts.bitmap) + { + rc =3D -ENOMEM; + goto cleanup; + } + + hd->other_contexts.map =3D xzalloc_array(struct iommu_context, nb_= ctx - 1); + + if (!hd->other_contexts.map) + { + rc =3D -ENOMEM; + goto cleanup; + } + + for (i =3D 0; i < (nb_ctx - 1); i++) + rspin_lock_init(&hd->other_contexts.map[i].lock); + } + + rc =3D arch_iommu_pviommu_init(d, nb_ctx, arena_order); + + if ( rc ) + goto cleanup; + + /* Make sure initialization is complete before making it visible to ot= her CPUs. */ + smp_wmb(); + + hd->other_contexts.count =3D nb_ctx - 1; + + printk(XENLOG_INFO "Dom%d uses %lu IOMMU contexts (%llu pages arena)\n= ", + d->domain_id, (unsigned long)nb_ctx, 1llu << arena_order); + + return 0; + +cleanup: + /* TODO: Reset hd->other_contexts.initialized */ + if ( hd->other_contexts.bitmap ) + { + xfree(hd->other_contexts.bitmap); + hd->other_contexts.bitmap =3D NULL; + } + + if ( hd->other_contexts.map ) + { + xfree(hd->other_contexts.map); + hd->other_contexts.bitmap =3D NULL; + } + + return rc; +} + +int iommu_domain_pviommu_teardown(struct domain *d) +{ + struct domain_iommu *hd =3D dom_iommu(d); + int i; + /* FIXME: Potential race condition with remote_op ? */ + + for (i =3D 0; i < hd->other_contexts.count; i++) + WARN_ON(iommu_context_free(d, i, IOMMU_TEARDOWN_REATTACH_DEFAULT) = !=3D ENOENT); + + hd->other_contexts.count =3D 0; + + if ( hd->other_contexts.bitmap ) + xfree(hd->other_contexts.bitmap); + + if ( hd->other_contexts.map ) + xfree(hd->other_contexts.map); + + return 0; +} + int iommu_domain_init(struct domain *d, unsigned int opts) { struct domain_iommu *hd =3D dom_iommu(d); @@ -208,6 +290,8 @@ int iommu_domain_init(struct domain *d, unsigned int op= ts) hd->node =3D NUMA_NO_NODE; #endif =20 + rspin_lock_init(&hd->default_ctx.lock); + ret =3D arch_iommu_domain_init(d); if ( ret ) return ret; @@ -236,6 +320,23 @@ int iommu_domain_init(struct domain *d, unsigned int o= pts) =20 ASSERT(!(hd->need_sync && hd->hap_pt_share)); =20 + if ( hd->no_dma ) + { + /* No-DMA mode is exclusive with HAP and sync_pt. 
*/ + hd->hap_pt_share =3D false; + hd->need_sync =3D false; + } + + hd->allow_pv_iommu =3D true; + + iommu_context_init(d, &hd->default_ctx, 0, IOMMU_CONTEXT_INIT_default); + + rwlock_init(&hd->other_contexts.lock); + hd->other_contexts.initialized =3D (atomic_t)ATOMIC_INIT(0); + hd->other_contexts.count =3D 0; + hd->other_contexts.bitmap =3D NULL; + hd->other_contexts.map =3D NULL; + return 0; } =20 @@ -249,13 +350,12 @@ static void cf_check iommu_dump_page_tables(unsigned = char key) =20 for_each_domain(d) { - if ( is_hardware_domain(d) || !is_iommu_enabled(d) ) + if ( !is_iommu_enabled(d) ) continue; =20 if ( iommu_use_hap_pt(d) ) { printk("%pd sharing page tables\n", d); - continue; } =20 iommu_vcall(dom_iommu(d)->platform_ops, dump_page_tables, d); @@ -274,10 +374,13 @@ void __hwdom_init iommu_hwdom_init(struct domain *d) iommu_vcall(hd->platform_ops, hwdom_init, d); } =20 -static void iommu_teardown(struct domain *d) +void iommu_domain_destroy(struct domain *d) { struct domain_iommu *hd =3D dom_iommu(d); =20 + if ( !is_iommu_enabled(d) ) + return; + /* * During early domain creation failure, we may reach here with the * ops not yet initialized. @@ -286,222 +389,9 @@ static void iommu_teardown(struct domain *d) return; =20 iommu_vcall(hd->platform_ops, teardown, d); -} - -void iommu_domain_destroy(struct domain *d) -{ - if ( !is_iommu_enabled(d) ) - return; - - iommu_teardown(d); =20 arch_iommu_domain_destroy(d); -} - -static unsigned int mapping_order(const struct domain_iommu *hd, - dfn_t dfn, mfn_t mfn, unsigned long nr) -{ - unsigned long res =3D dfn_x(dfn) | mfn_x(mfn); - unsigned long sizes =3D hd->platform_ops->page_sizes; - unsigned int bit =3D ffsl(sizes) - 1, order =3D 0; - - ASSERT(bit =3D=3D PAGE_SHIFT); - - while ( (sizes =3D (sizes >> bit) & ~1) ) - { - unsigned long mask; - - bit =3D ffsl(sizes) - 1; - mask =3D (1UL << bit) - 1; - if ( nr <=3D mask || (res & mask) ) - break; - order +=3D bit; - nr >>=3D bit; - res >>=3D bit; - } - - return order; -} - -long iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0, - unsigned long page_count, unsigned int flags, - unsigned int *flush_flags) -{ - const struct domain_iommu *hd =3D dom_iommu(d); - unsigned long i; - unsigned int order, j =3D 0; - int rc =3D 0; - - if ( !is_iommu_enabled(d) ) - return 0; - - ASSERT(!IOMMUF_order(flags)); - - for ( i =3D 0; i < page_count; i +=3D 1UL << order ) - { - dfn_t dfn =3D dfn_add(dfn0, i); - mfn_t mfn =3D mfn_add(mfn0, i); - - order =3D mapping_order(hd, dfn, mfn, page_count - i); - - if ( (flags & IOMMUF_preempt) && - ((!(++j & 0xfff) && general_preempt_check()) || - i > LONG_MAX - (1UL << order)) ) - return i; - - rc =3D iommu_call(hd->platform_ops, map_page, d, dfn, mfn, - flags | IOMMUF_order(order), flush_flags); - - if ( likely(!rc) ) - continue; - - if ( !d->is_shutting_down && printk_ratelimit() ) - printk(XENLOG_ERR - "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" fa= iled: %d\n", - d->domain_id, dfn_x(dfn), mfn_x(mfn), rc); - - /* while statement to satisfy __must_check */ - while ( iommu_unmap(d, dfn0, i, 0, flush_flags) ) - break; - - if ( !is_hardware_domain(d) ) - domain_crash(d); - - break; - } - - /* - * Something went wrong so, if we were dealing with more than a single - * page, flush everything and clear flush flags. 
- */ - if ( page_count > 1 && unlikely(rc) && - !iommu_iotlb_flush_all(d, *flush_flags) ) - *flush_flags =3D 0; - - return rc; -} - -int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn, - unsigned long page_count, unsigned int flags) -{ - unsigned int flush_flags =3D 0; - int rc; - - ASSERT(!(flags & IOMMUF_preempt)); - rc =3D iommu_map(d, dfn, mfn, page_count, flags, &flush_flags); - - if ( !this_cpu(iommu_dont_flush_iotlb) && !rc ) - rc =3D iommu_iotlb_flush(d, dfn, page_count, flush_flags); - - return rc; -} - -long iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count, - unsigned int flags, unsigned int *flush_flags) -{ - const struct domain_iommu *hd =3D dom_iommu(d); - unsigned long i; - unsigned int order, j =3D 0; - int rc =3D 0; - - if ( !is_iommu_enabled(d) ) - return 0; - - ASSERT(!(flags & ~IOMMUF_preempt)); - - for ( i =3D 0; i < page_count; i +=3D 1UL << order ) - { - dfn_t dfn =3D dfn_add(dfn0, i); - int err; - - order =3D mapping_order(hd, dfn, _mfn(0), page_count - i); - - if ( (flags & IOMMUF_preempt) && - ((!(++j & 0xfff) && general_preempt_check()) || - i > LONG_MAX - (1UL << order)) ) - return i; - - err =3D iommu_call(hd->platform_ops, unmap_page, d, dfn, - flags | IOMMUF_order(order), flush_flags); - - if ( likely(!err) ) - continue; - - if ( !d->is_shutting_down && printk_ratelimit() ) - printk(XENLOG_ERR - "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n", - d->domain_id, dfn_x(dfn), err); - - if ( !rc ) - rc =3D err; - - if ( !is_hardware_domain(d) ) - { - domain_crash(d); - break; - } - } - - /* - * Something went wrong so, if we were dealing with more than a single - * page, flush everything and clear flush flags. - */ - if ( page_count > 1 && unlikely(rc) && - !iommu_iotlb_flush_all(d, *flush_flags) ) - *flush_flags =3D 0; - - return rc; -} - -int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_cou= nt) -{ - unsigned int flush_flags =3D 0; - int rc =3D iommu_unmap(d, dfn, page_count, 0, &flush_flags); - - if ( !this_cpu(iommu_dont_flush_iotlb) && !rc ) - rc =3D iommu_iotlb_flush(d, dfn, page_count, flush_flags); - - return rc; -} - -int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn, - unsigned int *flags) -{ - const struct domain_iommu *hd =3D dom_iommu(d); - - if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page ) - return -EOPNOTSUPP; - - return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags); -} - -int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_coun= t, - unsigned int flush_flags) -{ - const struct domain_iommu *hd =3D dom_iommu(d); - int rc; - - if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush || - !page_count || !flush_flags ) - return 0; - - if ( dfn_eq(dfn, INVALID_DFN) ) - return -EINVAL; - - rc =3D iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count, - flush_flags); - if ( unlikely(rc) ) - { - if ( !d->is_shutting_down && printk_ratelimit() ) - printk(XENLOG_ERR - "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", pag= e count %lu flags %x\n", - d->domain_id, rc, dfn_x(dfn), page_count, flush_flags); - - if ( !is_hardware_domain(d) ) - domain_crash(d); - } - - return rc; + iommu_domain_pviommu_teardown(d); } =20 int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags) @@ -513,7 +403,7 @@ int iommu_iotlb_flush_all(struct domain *d, unsigned in= t flush_flags) !flush_flags ) return 0; =20 - rc =3D iommu_call(hd->platform_ops, iotlb_flush, d, INVALID_DFN, 0, + rc =3D iommu_call(hd->platform_ops, iotlb_flush, d, NULL, 
INVALID_DFN,= 0, flush_flags | IOMMU_FLUSHF_all); if ( unlikely(rc) ) { @@ -529,24 +419,6 @@ int iommu_iotlb_flush_all(struct domain *d, unsigned i= nt flush_flags) return rc; } =20 -int iommu_quarantine_dev_init(device_t *dev) -{ - const struct domain_iommu *hd =3D dom_iommu(dom_io); - - if ( !iommu_quarantine || !hd->platform_ops->quarantine_init ) - return 0; - - return iommu_call(hd->platform_ops, quarantine_init, - dev, iommu_quarantine =3D=3D IOMMU_quarantine_scratc= h_page); -} - -static int __init iommu_quarantine_init(void) -{ - dom_io->options |=3D XEN_DOMCTL_CDF_iommu; - - return iommu_domain_init(dom_io, 0); -} - int __init iommu_setup(void) { int rc =3D -ENODEV; @@ -682,6 +554,16 @@ bool iommu_has_feature(struct domain *d, enum iommu_fe= ature feature) return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features= ); } =20 +uint64_t iommu_get_max_iova(struct domain *d) +{ + struct domain_iommu *hd =3D dom_iommu(d); + + if ( !hd->platform_ops->get_max_iova ) + return 0; + + return iommu_call(hd->platform_ops, get_max_iova, d); +} + #define MAX_EXTRA_RESERVED_RANGES 20 struct extra_reserved_range { unsigned long start; diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index 5a446d3dce..e87f91f0e3 100644 --- a/xen/drivers/passthrough/pci.c +++ b/xen/drivers/passthrough/pci.c @@ -1,6 +1,6 @@ /* * Copyright (C) 2008, Netronome Systems, Inc. - * =20 + * * This program is free software; you can redistribute it and/or modify it * under the terms and conditions of the GNU General Public License, * version 2, as published by the Free Software Foundation. @@ -286,14 +286,14 @@ static void apply_quirks(struct pci_dev *pdev) * Device [8086:2fc0] * Erratum HSE43 * CONFIG_TDP_NOMINAL CSR Implemented at Incorrect Offset - * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-= v3-spec-update.html=20 + * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-= v3-spec-update.html */ { PCI_VENDOR_ID_INTEL, 0x2fc0 }, /* * Devices [8086:6f60,6fa0,6fc0] * Errata BDF2 / BDX2 * PCI BARs in the Home Agent Will Return Non-Zero Values During E= numeration - * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-= v4-spec-update.html=20 + * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-= v4-spec-update.html */ { PCI_VENDOR_ID_INTEL, 0x6f60 }, { PCI_VENDOR_ID_INTEL, 0x6fa0 }, @@ -870,8 +870,8 @@ static int deassign_device(struct domain *d, uint16_t s= eg, uint8_t bus, devfn +=3D pdev->phantom_stride; if ( PCI_SLOT(devfn) !=3D PCI_SLOT(pdev->devfn) ) break; - ret =3D iommu_call(hd->platform_ops, reassign_device, d, target, d= evfn, - pci_to_dev(pdev)); + ret =3D iommu_call(hd->platform_ops, add_devfn, d, pci_to_dev(pdev= ), devfn, + &target->iommu.default_ctx); if ( ret ) goto out; } @@ -880,9 +880,8 @@ static int deassign_device(struct domain *d, uint16_t s= eg, uint8_t bus, vpci_deassign_device(pdev); write_unlock(&d->pci_lock); =20 - devfn =3D pdev->devfn; - ret =3D iommu_call(hd->platform_ops, reassign_device, d, target, devfn, - pci_to_dev(pdev)); + ret =3D iommu_reattach_context(pdev->domain, target, pci_to_dev(pdev),= 0); + if ( ret ) goto out; =20 @@ -890,6 +889,7 @@ static int deassign_device(struct domain *d, uint16_t s= eg, uint8_t bus, pdev->quarantine =3D false; =20 pdev->fault.count =3D 0; + pdev->domain =3D target; =20 write_lock(&target->pci_lock); /* Re-assign back to hardware_domain */ @@ -1139,25 +1139,18 @@ struct setup_hwdom { static void __hwdom_init setup_one_hwdom_device(const struct setup_hwdom *= 
ctxt, struct pci_dev *pdev) { - u8 devfn =3D pdev->devfn; int err; =20 - do { - err =3D ctxt->handler(devfn, pdev); - if ( err ) - { - printk(XENLOG_ERR "setup %pp for d%d failed (%d)\n", - &pdev->sbdf, ctxt->d->domain_id, err); - if ( devfn =3D=3D pdev->devfn ) - return; - } - devfn +=3D pdev->phantom_stride; - } while ( devfn !=3D pdev->devfn && - PCI_SLOT(devfn) =3D=3D PCI_SLOT(pdev->devfn) ); + err =3D ctxt->handler(pdev->devfn, pdev); + + if ( err ) + goto done; =20 write_lock(&ctxt->d->pci_lock); err =3D vpci_assign_device(pdev); write_unlock(&ctxt->d->pci_lock); + +done: if ( err ) printk(XENLOG_ERR "setup of vPCI for d%d failed: %d\n", ctxt->d->domain_id, err); @@ -1329,12 +1322,7 @@ static int cf_check _dump_pci_devices(struct pci_seg= *pseg, void *arg) list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list ) { printk("%pp - ", &pdev->sbdf); -#ifdef CONFIG_X86 - if ( pdev->domain =3D=3D dom_io ) - printk("DomIO:%x", pdev->arch.pseudo_domid); - else -#endif - printk("%pd", pdev->domain); + printk("%pd", pdev->domain); printk(" - node %-3d", (pdev->node !=3D NUMA_NO_NODE) ? pdev->node= : -1); pdev_dump_msi(pdev); printk("\n"); @@ -1361,8 +1349,6 @@ __initcall(setup_dump_pcidevs); static int iommu_add_device(struct pci_dev *pdev) { const struct domain_iommu *hd; - int rc; - unsigned int devfn =3D pdev->devfn; =20 if ( !pdev->domain ) return -EINVAL; @@ -1373,20 +1359,7 @@ static int iommu_add_device(struct pci_dev *pdev) if ( !is_iommu_enabled(pdev->domain) ) return 0; =20 - rc =3D iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev= )); - if ( rc || !pdev->phantom_stride ) - return rc; - - for ( ; ; ) - { - devfn +=3D pdev->phantom_stride; - if ( PCI_SLOT(devfn) !=3D PCI_SLOT(pdev->devfn) ) - return 0; - rc =3D iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(= pdev)); - if ( rc ) - printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n", - &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc); - } + return iommu_attach_context(pdev->domain, pci_to_dev(pdev), 0); } =20 static int iommu_enable_device(struct pci_dev *pdev) @@ -1408,36 +1381,13 @@ static int iommu_enable_device(struct pci_dev *pdev) =20 static int iommu_remove_device(struct pci_dev *pdev) { - const struct domain_iommu *hd; - u8 devfn; - if ( !pdev->domain ) return -EINVAL; =20 - hd =3D dom_iommu(pdev->domain); if ( !is_iommu_enabled(pdev->domain) ) return 0; =20 - for ( devfn =3D pdev->devfn ; pdev->phantom_stride; ) - { - int rc; - - devfn +=3D pdev->phantom_stride; - if ( PCI_SLOT(devfn) !=3D PCI_SLOT(pdev->devfn) ) - break; - rc =3D iommu_call(hd->platform_ops, remove_device, devfn, - pci_to_dev(pdev)); - if ( !rc ) - continue; - - printk(XENLOG_ERR "IOMMU: remove %pp failed (%d)\n", - &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc); - return rc; - } - - devfn =3D pdev->devfn; - - return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(p= dev)); + return iommu_detach_context(pdev->domain, pdev); } =20 static int device_assigned(u16 seg, u8 bus, u8 devfn) @@ -1465,7 +1415,6 @@ static int device_assigned(u16 seg, u8 bus, u8 devfn) /* Caller should hold the pcidevs_lock */ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 = flag) { - const struct domain_iommu *hd =3D dom_iommu(d); struct pci_dev *pdev; int rc =3D 0; =20 @@ -1503,17 +1452,7 @@ static int assign_device(struct domain *d, u16 seg, = u8 bus, u8 devfn, u32 flag) =20 pdev->fault.count =3D 0; =20 - rc =3D iommu_call(hd->platform_ops, assign_device, d, devfn, pci_to_de= v(pdev), - flag); - - while ( 
pdev->phantom_stride && !rc ) - { - devfn +=3D pdev->phantom_stride; - if ( PCI_SLOT(devfn) !=3D PCI_SLOT(pdev->devfn) ) - break; - rc =3D iommu_call(hd->platform_ops, assign_device, d, devfn, - pci_to_dev(pdev), flag); - } + rc =3D iommu_reattach_context(pdev->domain, d, pci_to_dev(pdev), 0); =20 if ( rc ) goto done; @@ -1523,27 +1462,9 @@ static int assign_device(struct domain *d, u16 seg, = u8 bus, u8 devfn, u32 flag) write_unlock(&d->pci_lock); =20 done: - if ( rc ) - { - printk(XENLOG_G_WARNING "%pd: assign %s(%pp) failed (%d)\n", - d, devfn !=3D pdev->devfn ? "phantom function " : "", - &PCI_SBDF(seg, bus, devfn), rc); =20 - if ( devfn !=3D pdev->devfn && deassign_device(d, seg, bus, pdev->= devfn) ) - { - /* - * Device with phantom functions that failed to both assign and - * rollback. Mark the device as broken and crash the target d= omain, - * as the state of the functions at this point is unknown and = Xen - * has no way to assert consistent context assignment among th= em. - */ - pdev->broken =3D true; - if ( !is_hardware_domain(d) && d !=3D dom_io ) - domain_crash(d); - } - } /* The device is assigned to dom_io so mark it as quarantined */ - else if ( d =3D=3D dom_io ) + if ( !rc && d =3D=3D dom_io ) pdev->quarantine =3D true; =20 return rc; diff --git a/xen/drivers/passthrough/quarantine.c b/xen/drivers/passthrough= /quarantine.c new file mode 100644 index 0000000000..b58f136ad8 --- /dev/null +++ b/xen/drivers/passthrough/quarantine.c @@ -0,0 +1,49 @@ +#include +#include +#include + +#ifdef CONFIG_HAS_PCI +uint8_t __read_mostly iommu_quarantine =3D +# if defined(CONFIG_IOMMU_QUARANTINE_NONE) + IOMMU_quarantine_none; +# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC) + IOMMU_quarantine_basic; +# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE) + IOMMU_quarantine_scratch_page; +# endif +#else +# define iommu_quarantine IOMMU_quarantine_none +#endif /* CONFIG_HAS_PCI */ + +int iommu_quarantine_dev_init(device_t *dev) +{ + int ret; + u16 ctx_no; + + if ( !iommu_quarantine ) + return 0; + + ret =3D iommu_context_alloc(dom_io, &ctx_no, IOMMU_CONTEXT_INIT_quaran= tine); + + if ( ret ) + return ret; + + /** TODO: Setup scratch page, mappings... */ + + ret =3D iommu_reattach_context(dev->domain, dom_io, dev, ctx_no); + + if ( ret ) + { + ASSERT(!iommu_context_free(dom_io, ctx_no, 0)); + return ret; + } + + return ret; +} + +int __init iommu_quarantine_init(void) +{ + dom_io->options |=3D XEN_DOMCTL_CDF_iommu; + + return iommu_domain_init(dom_io, 0); +} diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h index 442ae5322d..5ae579ae6a 100644 --- a/xen/include/xen/iommu.h +++ b/xen/include/xen/iommu.h @@ -52,7 +52,11 @@ static inline bool dfn_eq(dfn_t x, dfn_t y) #ifdef CONFIG_HAS_PASSTHROUGH extern bool iommu_enable, iommu_enabled; extern bool force_iommu, iommu_verbose; + /* Boolean except for the specific purposes of drivers/passthrough/iommu.c= . 
*/ +#define IOMMU_quarantine_none 0 /* aka false */ +#define IOMMU_quarantine_basic 1 /* aka true */ +#define IOMMU_quarantine_scratch_page 2 extern uint8_t iommu_quarantine; #else #define iommu_enabled false @@ -106,6 +110,7 @@ extern bool iommu_debug; extern bool amd_iommu_perdev_intremap; =20 extern bool iommu_hwdom_strict, iommu_hwdom_passthrough, iommu_hwdom_inclu= sive; +extern bool iommu_hwdom_no_dma; extern int8_t iommu_hwdom_reserved; =20 extern unsigned int iommu_dev_iotlb_timeout; @@ -161,11 +166,10 @@ enum */ long __must_check iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0, unsigned long page_count, unsigned int flags, - unsigned int *flush_flags); + unsigned int *flush_flags, u16 ctx_no); long __must_check iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count, unsigned int flags, - unsigned int *flush_flags); - + unsigned int *flush_flags, u16 ctx_no); int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn, unsigned long page_count, unsigned int flags); @@ -173,11 +177,12 @@ int __must_check iommu_legacy_unmap(struct domain *d,= dfn_t dfn, unsigned long page_count); =20 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn, - unsigned int *flags); + unsigned int *flags, u16 ctx_no); =20 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count, - unsigned int flush_flags); + unsigned int flush_flags, + u16 ctx_no); int __must_check iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags); =20 @@ -250,20 +255,30 @@ struct page_info; */ typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ct= xt); =20 +struct iommu_context; + struct iommu_ops { unsigned long page_sizes; int (*init)(struct domain *d); void (*hwdom_init)(struct domain *d); - int (*quarantine_init)(device_t *dev, bool scratch_page); - int (*add_device)(uint8_t devfn, device_t *dev); + int (*context_init)(struct domain *d, struct iommu_context *ctx, + u32 flags); + int (*context_teardown)(struct domain *d, struct iommu_context *ctx, + u32 flags); + int (*attach)(struct domain *d, device_t *dev, + struct iommu_context *ctx); + int (*detach)(struct domain *d, device_t *dev, + struct iommu_context *prev_ctx); + int (*reattach)(struct domain *d, device_t *dev, + struct iommu_context *prev_ctx, + struct iommu_context *ctx); + int (*enable_device)(device_t *dev); - int (*remove_device)(uint8_t devfn, device_t *dev); - int (*assign_device)(struct domain *d, uint8_t devfn, device_t *dev, - uint32_t flag); - int (*reassign_device)(struct domain *s, struct domain *t, - uint8_t devfn, device_t *dev); #ifdef CONFIG_HAS_PCI int (*get_device_group_id)(uint16_t seg, uint8_t bus, uint8_t devfn); + int (*add_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn, + struct iommu_context *ctx); + int (*remove_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn); #endif /* HAS_PCI */ =20 void (*teardown)(struct domain *d); @@ -274,12 +289,15 @@ struct iommu_ops { */ int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags, - unsigned int *flush_flags); + unsigned int *flush_flags, + struct iommu_context *ctx); int __must_check (*unmap_page)(struct domain *d, dfn_t dfn, unsigned int order, - unsigned int *flush_flags); + unsigned int *flush_flags, + struct iommu_context *ctx); int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mf= n, - unsigned int *flags); + unsigned int *flags, + struct iommu_context *ctx); =20 #ifdef CONFIG_X86 int (*enable_x2apic)(void); @@ -292,14 
+310,15 @@ struct iommu_ops { int (*setup_hpet_msi)(struct msi_desc *msi_desc); =20 void (*adjust_irq_affinities)(void); - void (*clear_root_pgtable)(struct domain *d); + void (*clear_root_pgtable)(struct domain *d, struct iommu_context *ctx= ); int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *= msg); #endif /* CONFIG_X86 */ =20 int __must_check (*suspend)(void); void (*resume)(void); void (*crash_shutdown)(void); - int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn, + int __must_check (*iotlb_flush)(struct domain *d, + struct iommu_context *ctx, dfn_t dfn, unsigned long page_count, unsigned int flush_flags); int (*get_reserved_device_memory)(iommu_grdm_t *func, void *ctxt); @@ -314,6 +333,8 @@ struct iommu_ops { */ int (*dt_xlate)(device_t *dev, const struct dt_phandle_args *args); #endif + + uint64_t (*get_max_iova)(struct domain *d); }; =20 /* @@ -343,11 +364,39 @@ extern int iommu_get_extra_reserved_device_memory(iom= mu_grdm_t *func, # define iommu_vcall iommu_call #endif =20 +struct iommu_context { + u16 id; /* Context id (0 means default context) */ + rspinlock_t lock; /* context lock */ + + struct list_head devices; + + struct arch_iommu_context arch; + + bool opaque; /* context can't be modified nor accessed (e.g HAP) */ + bool dying; /* the context is tearing down */ +}; + +struct iommu_context_list { + atomic_t initialized; /* has/is context list being initialized ? */ + rwlock_t lock; /* prevent concurrent destruction and access of context= s */ + uint16_t count; /* Context count excluding default context */ + + /* if count > 0 */ + + uint64_t *bitmap; /* bitmap of context allocation */ + struct iommu_context *map; /* Map of contexts */ +}; + + struct domain_iommu { + #ifdef CONFIG_HAS_PASSTHROUGH struct arch_iommu arch; #endif =20 + struct iommu_context default_ctx; + struct iommu_context_list other_contexts; + /* iommu_ops */ const struct iommu_ops *platform_ops; =20 @@ -365,6 +414,12 @@ struct domain_iommu { /* SAF-2-safe enum constant in arithmetic operation */ DECLARE_BITMAP(features, IOMMU_FEAT_count); =20 + /* Do the IOMMU block all DMA on default context (implies !has_pt_shar= e) ? */ + bool no_dma; + + /* Is the domain allowed to use PV-IOMMU ? */ + bool allow_pv_iommu; + /* Does the guest share HAP mapping with the IOMMU? */ bool hap_pt_share; =20 @@ -380,6 +435,7 @@ struct domain_iommu { #define dom_iommu(d) (&(d)->iommu) #define iommu_set_feature(d, f) set_bit(f, dom_iommu(d)->features) #define iommu_clear_feature(d, f) clear_bit(f, dom_iommu(d)->features) +#define iommu_default_context(d) (&dom_iommu(d)->default_ctx) /* does not = lock ! */ =20 /* Are we using the domain P2M table as its IOMMU pagetable? 
*/ #define iommu_use_hap_pt(d) (IS_ENABLED(CONFIG_HVM) && \ @@ -401,10 +457,14 @@ static inline int iommu_do_domctl(struct xen_domctl *= domctl, struct domain *d, } #endif =20 +int iommu_domain_pviommu_init(struct domain *d, uint16_t nb_ctx, uint32_t = arena_order); + int __must_check iommu_suspend(void); void iommu_resume(void); void iommu_crash_shutdown(void); int iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt); + +int __init iommu_quarantine_init(void); int iommu_quarantine_dev_init(device_t *dev); =20 #ifdef CONFIG_HAS_PCI @@ -414,6 +474,27 @@ int iommu_do_pci_domctl(struct xen_domctl *domctl, str= uct domain *d, =20 void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev); =20 +uint64_t iommu_get_max_iova(struct domain *d); + +struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no); +void iommu_put_context(struct iommu_context *ctx); + +#define IOMMU_CONTEXT_INIT_default (1 << 0) +#define IOMMU_CONTEXT_INIT_quarantine (1 << 1) +int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ct= x_no, u32 flags); + +#define IOMMU_TEARDOWN_REATTACH_DEFAULT (1 << 0) +#define IOMMU_TEARDOWN_PREEMPT (1 << 1) +int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u3= 2 flags); + +int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags); +int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags); + +int iommu_reattach_context(struct domain *prev_dom, struct domain *next_do= m, + device_t *dev, u16 ctx_no); +int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no); +int iommu_detach_context(struct domain *d, device_t *dev); + /* * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to * avoid unecessary iotlb_flush in the low level IOMMU code. 
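For illustration, the context-numbering scheme behind the iommu_context_alloc()/iommu_context_free() interface declared above can be modelled in a few lines of self-contained C: context 0 is the always-present default context kept in struct domain_iommu, while contexts 1..N are handed out from the other_contexts bitmap, so bitmap slot i always corresponds to context number i + 1. The capacity and helper names below are simplified stand-ins, not the Xen API.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define NB_OTHER_CTX 64                        /* illustrative fixed capacity */

static uint64_t ctx_bitmap[NB_OTHER_CTX / 64]; /* one bit per non-default context */

/* Allocate a context number; 0 is reserved for the default context. */
static int ctx_alloc(uint16_t *ctx_no)
{
    for ( unsigned int i = 0; i < NB_OTHER_CTX; i++ )
        if ( !(ctx_bitmap[i / 64] & (1ULL << (i % 64))) )
        {
            ctx_bitmap[i / 64] |= 1ULL << (i % 64);
            *ctx_no = i + 1;                   /* bitmap slot i <-> context i + 1 */
            return 0;
        }
    return -ENOSPC;
}

/* Free a previously allocated context number (never the default one). */
static int ctx_free(uint16_t ctx_no)
{
    if ( ctx_no == 0 )
        return -EINVAL;                        /* the default context cannot be freed */
    ctx_bitmap[(ctx_no - 1) / 64] &= ~(1ULL << ((ctx_no - 1) % 64));
    return 0;
}

int main(void)
{
    uint16_t a = 0, b = 0;

    ctx_alloc(&a);                             /* a == 1 */
    ctx_alloc(&b);                             /* b == 2 */
    ctx_free(a);
    ctx_alloc(&a);                             /* slot is reused, a == 1 again */
    printf("a=%u b=%u\n", a, b);
    return 0;
}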
@@ -429,6 +510,8 @@ DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb); extern struct spinlock iommu_pt_cleanup_lock; extern struct page_list_head iommu_pt_cleanup_list; =20 +int arch_iommu_pviommu_init(struct domain *d, uint16_t nb_ctx, uint32_t ar= ena_order); +int arch_iommu_pviommu_teardown(struct domain *d); bool arch_iommu_use_permitted(const struct domain *d); =20 #ifdef CONFIG_X86 diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h index 63e49f0117..d6d4aaa6a5 100644 --- a/xen/include/xen/pci.h +++ b/xen/include/xen/pci.h @@ -97,6 +97,7 @@ struct pci_dev_info { struct pci_dev { struct list_head alldevs_list; struct list_head domain_list; + struct list_head context_list; =20 struct list_head msi_list; =20 @@ -104,6 +105,8 @@ struct pci_dev { =20 struct domain *domain; =20 + uint16_t context; /* IOMMU context number of domain */ + const union { struct { uint8_t devfn; --=20 2.45.2 Teddy Astie | Vates XCP-ng Developer XCP-ng & Xen Orchestra - Vates solutions web: https://vates.tech From nobody Sat Nov 23 23:11:07 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass header.i=teddy.astie@vates.tech; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=vates.tech ARC-Seal: i=1; a=rsa-sha256; t=1730730557; cv=none; d=zohomail.com; s=zohoarc; b=NQnbiCvOTJqQf8Qw5nWqNJle6CLtxRptFb7yqtFe2KXYn0O9AktrPmHQZtM6timoQ5tMLrMKj9YFjkombfDLrCqzWKUL1KvUW8pKdN5DcTHw3xfjb3UI1rxC2n9Vxz0xCKwcMX3SMpK8Iz4NWpfGqTKt/Zk7fHLDsU0kiZp9l4o= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1730730557; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=FUjt+mT8Pwu/BVxhEtSMdShv/mmeEmV9wgs/qrMzUx0=; b=nN2fYkOstjLz8DGdpZ8uCyraNylUa3r0Wfa9GqU6WAPD2RK9d3uk55QIQ+8+mNSbJd6ywJQJ+mDMEm8jRYmASH3DjiVRdOlwQNbryznfG1dJc5domC3U1xMrj7jQQS1heYpCYaS8/HeLB8q1I2+08hltQsvwAjOHuh90jFi3wdI= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass header.i=teddy.astie@vates.tech; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 173073055698810.898310213316222; Mon, 4 Nov 2024 06:29:16 -0800 (PST) Received: from list by lists.xenproject.org with outflank-mailman.830031.1244978 (Exim 4.92) (envelope-from ) id 1t7y4b-0007Yj-Ii; Mon, 04 Nov 2024 14:28:53 +0000 Received: by outflank-mailman (output) from mailman id 830031.1244978; Mon, 04 Nov 2024 14:28:53 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t7y4b-0007Yc-FM; Mon, 04 Nov 2024 14:28:53 +0000 Received: by outflank-mailman (input) for mailman id 830031; Mon, 04 Nov 2024 14:28:51 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with 
esmtp (Exim 4.92) id 1t7y4Z-0006XR-7M for xen-devel@lists.xenproject.org; Mon, 04 Nov 2024 14:28:51 +0000
From: "Teddy Astie"
Subject: [XEN RFC PATCH v4 4/5] VT-d: Port IOMMU driver to new subsystem
To: xen-devel@lists.xenproject.org
Cc: "Teddy Astie" , "Jan Beulich" , "Andrew Cooper" , Roger Pau Monné
Message-Id: <05a4114976be6f72fbaba653d10fe705bb86f8f4.1730718102.git.teddy.astie@vates.tech>
In-Reply-To: References:
X-Report-Abuse: 
=?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.a5238e467f4843fe9cd9f855abfe221b?= X-Mandrill-User: md_30504962 Feedback-ID: 30504962:30504962.20241104:md Date: Mon, 04 Nov 2024 14:28:40 +0000 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @mandrillapp.com) (identity teddy.astie@vates.tech) X-ZM-MESSAGEID: 1730730558360116600 Content-Type: text/plain; charset="utf-8" Port the driver with guidances specified in iommu-contexts.md. Add a arena-based allocator for allocating a fixed chunk of memory and split it into 4k pages for use by the IOMMU contexts. This chunk size is configurable with X86_ARENA_ORDER and dom0-iommu=3Darena-order=3DN. Signed-off-by Teddy Astie --- Changed in V2: * cleanup some unneeded includes * s/dettach/detach/ * don't dump IOMMU context of non-iommu domains (fix crash with DomUs) Changed in v4: * add "no-dma" support * use new locking logic --- xen/arch/x86/include/asm/arena.h | 54 + xen/arch/x86/include/asm/iommu.h | 58 +- xen/arch/x86/include/asm/pci.h | 17 - xen/drivers/passthrough/vtd/Makefile | 2 +- xen/drivers/passthrough/vtd/extern.h | 14 +- xen/drivers/passthrough/vtd/iommu.c | 1478 +++++++++----------------- xen/drivers/passthrough/vtd/quirks.c | 20 +- xen/drivers/passthrough/x86/Makefile | 1 + xen/drivers/passthrough/x86/arena.c | 157 +++ xen/drivers/passthrough/x86/iommu.c | 270 +++-- 10 files changed, 984 insertions(+), 1087 deletions(-) create mode 100644 xen/arch/x86/include/asm/arena.h create mode 100644 xen/drivers/passthrough/x86/arena.c diff --git a/xen/arch/x86/include/asm/arena.h b/xen/arch/x86/include/asm/ar= ena.h new file mode 100644 index 0000000000..7555b100e0 --- /dev/null +++ b/xen/arch/x86/include/asm/arena.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/** + * Simple arena-based page allocator. + */ + +#ifndef __XEN_IOMMU_ARENA_H__ +#define __XEN_IOMMU_ARENA_H__ + +#include "xen/domain.h" +#include "xen/atomic.h" +#include "xen/mm-frame.h" +#include "xen/types.h" + +/** + * struct page_arena: Page arena structure + */ +struct iommu_arena { + /* mfn of the first page of the memory region */ + mfn_t region_start; + /* bitmap of allocations */ + unsigned long *map; + + /* Order of the arena */ + unsigned int order; + + /* Used page count */ + atomic_t used_pages; +}; + +/** + * Initialize a arena using domheap allocator. + * @param [out] arena Arena to allocate + * @param [in] domain domain that has ownership of arena pages + * @param [in] order order of the arena (power of two of the size) + * @param [in] memflags Flags for domheap_alloc_pages() + * @return -ENOMEM on arena allocation error, 0 otherwise + */ +int iommu_arena_initialize(struct iommu_arena *arena, struct domain *domai= n, + unsigned int order, unsigned int memflags); + +/** + * Teardown a arena. 
+ * @param [out] arena arena to allocate + * @param [in] check check for existing allocations + * @return -EBUSY if check is specified + */ +int iommu_arena_teardown(struct iommu_arena *arena, bool check); + +struct page_info *iommu_arena_allocate_page(struct iommu_arena *arena); +bool iommu_arena_free_page(struct iommu_arena *arena, struct page_info *pa= ge); + +#define iommu_arena_size(arena) (1LLU << (arena)->order) + +#endif diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/io= mmu.h index 8dc464fbd3..533bb8d777 100644 --- a/xen/arch/x86/include/asm/iommu.h +++ b/xen/arch/x86/include/asm/iommu.h @@ -2,14 +2,18 @@ #ifndef __ARCH_X86_IOMMU_H__ #define __ARCH_X86_IOMMU_H__ =20 +#include #include #include #include #include +#include #include #include #include =20 +#include "arena.h" + #define DEFAULT_DOMAIN_ADDRESS_WIDTH 48 =20 struct g2m_ioport { @@ -31,27 +35,45 @@ typedef uint64_t daddr_t; #define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn)) #define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr)) =20 -struct arch_iommu +struct arch_iommu_context { - spinlock_t mapping_lock; /* io page table lock */ - struct { - struct page_list_head list; - spinlock_t lock; - } pgtables; - + struct page_list_head pgtables; struct list_head identity_maps; =20 + /* Queue for freeing pages */ + struct page_list_head free_queue; + + /* Is this context reusing domain P2M ? */ + bool hap_context; + union { /* Intel VT-d */ struct { uint64_t pgd_maddr; /* io page directory machine address */ + domid_t *didmap; /* per-iommu DID */ + unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the co= ntext uses */ + uint32_t superpage_progress; /* superpage progress during tear= down */ + } vtd; + /* AMD IOMMU */ + struct { + struct page_info *root_table; + } amd; + }; +}; + +struct arch_iommu +{ + struct iommu_arena pt_arena; /* allocator for non-default contexts */ + + union { + /* Intel VT-d */ + struct { unsigned int agaw; /* adjusted guest address width, 0 is level= 2 30-bit */ - unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the do= main uses */ } vtd; /* AMD IOMMU */ struct { unsigned int paging_mode; - struct page_info *root_table; + struct guest_iommu *g_iommu; } amd; }; }; @@ -109,10 +131,13 @@ static inline void iommu_disable_x2apic(void) iommu_vcall(&iommu_ops, disable_x2apic); } =20 -int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma, - paddr_t base, paddr_t end, +int iommu_identity_mapping(struct domain *d, struct iommu_context *ctx, + p2m_access_t p2ma, paddr_t base, paddr_t end, unsigned int flag); -void iommu_identity_map_teardown(struct domain *d); +void iommu_identity_map_teardown(struct domain *d, struct iommu_context *c= tx); +bool iommu_identity_map_check(struct domain *d, struct iommu_context *ctx, + mfn_t mfn); + =20 extern bool untrusted_msi; =20 @@ -128,14 +153,19 @@ unsigned long *iommu_init_domid(domid_t reserve); domid_t iommu_alloc_domid(unsigned long *map); void iommu_free_domid(domid_t domid, unsigned long *map); =20 -int __must_check iommu_free_pgtables(struct domain *d); +struct iommu_context; +int __must_check iommu_free_pgtables(struct domain *d, struct iommu_contex= t *ctx); struct domain_iommu; struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd, + struct iommu_context *c= tx, uint64_t contig_mask); -void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *p= g); +void iommu_queue_free_pgtable(struct iommu_context *ctx, struct page_info = *pg); =20 /* Check [start, end] unity map range for 
correctness. */ bool iommu_unity_region_ok(const char *prefix, mfn_t start, mfn_t end); +int arch_iommu_context_init(struct domain *d, struct iommu_context *ctx, u= 32 flags); +int arch_iommu_context_teardown(struct domain *d, struct iommu_context *ct= x, u32 flags); +int arch_iommu_flush_free_queue(struct domain *d, struct iommu_context *ct= x); =20 #endif /* !__ARCH_X86_IOMMU_H__ */ /* diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h index fd5480d67d..214c1a0948 100644 --- a/xen/arch/x86/include/asm/pci.h +++ b/xen/arch/x86/include/asm/pci.h @@ -15,23 +15,6 @@ =20 struct arch_pci_dev { vmask_t used_vectors; - /* - * These fields are (de)initialized under pcidevs-lock. Other uses of - * them don't race (de)initialization and hence don't strictly need any - * locking. - */ - union { - /* Subset of struct arch_iommu's fields, to be used in dom_io. */ - struct { - uint64_t pgd_maddr; - } vtd; - struct { - struct page_info *root_table; - } amd; - }; - domid_t pseudo_domid; - mfn_t leaf_mfn; - struct page_list_head pgtables_list; }; =20 int pci_conf_write_intercept(unsigned int seg, unsigned int bdf, diff --git a/xen/drivers/passthrough/vtd/Makefile b/xen/drivers/passthrough= /vtd/Makefile index fde7555fac..81e1f46179 100644 --- a/xen/drivers/passthrough/vtd/Makefile +++ b/xen/drivers/passthrough/vtd/Makefile @@ -5,4 +5,4 @@ obj-y +=3D dmar.o obj-y +=3D utils.o obj-y +=3D qinval.o obj-y +=3D intremap.o -obj-y +=3D quirks.o +obj-y +=3D quirks.o \ No newline at end of file diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough= /vtd/extern.h index 667590ee52..0201ed9dc5 100644 --- a/xen/drivers/passthrough/vtd/extern.h +++ b/xen/drivers/passthrough/vtd/extern.h @@ -80,12 +80,10 @@ uint64_t alloc_pgtable_maddr(unsigned long npages, node= id_t node); void free_pgtable_maddr(u64 maddr); void *map_vtd_domain_page(u64 maddr); void unmap_vtd_domain_page(const void *va); -int domain_context_mapping_one(struct domain *domain, struct vtd_iommu *io= mmu, - uint8_t bus, uint8_t devfn, - const struct pci_dev *pdev, domid_t domid, - paddr_t pgd_maddr, unsigned int mode); -int domain_context_unmap_one(struct domain *domain, struct vtd_iommu *iomm= u, - uint8_t bus, uint8_t devfn); +int apply_context_single(struct domain *domain, struct iommu_context *ctx, + struct vtd_iommu *iommu, uint8_t bus, uint8_t dev= fn); +int unapply_context_single(struct domain *domain, struct vtd_iommu *iommu, + uint8_t bus, uint8_t devfn); int cf_check intel_iommu_get_reserved_device_memory( iommu_grdm_t *func, void *ctxt); =20 @@ -106,8 +104,8 @@ void platform_quirks_init(void); void vtd_ops_preamble_quirk(struct vtd_iommu *iommu); void vtd_ops_postamble_quirk(struct vtd_iommu *iommu); int __must_check me_wifi_quirk(struct domain *domain, uint8_t bus, - uint8_t devfn, domid_t domid, paddr_t pgd_m= addr, - unsigned int mode); + uint8_t devfn, domid_t domid, + unsigned int mode, struct iommu_context *ct= x); void pci_vtd_quirk(const struct pci_dev *); void quirk_iommu_caps(struct vtd_iommu *iommu); =20 diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/= vtd/iommu.c index e13be244c1..5619d323ae 100644 --- a/xen/drivers/passthrough/vtd/iommu.c +++ b/xen/drivers/passthrough/vtd/iommu.c @@ -20,6 +20,7 @@ =20 #include #include +#include #include #include #include @@ -30,12 +31,20 @@ #include #include #include +#include +#include #include +#include +#include +#include +#include #include -#include #include #include #include +#include +#include +#include #include 
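As a companion to the commit message above: the iommu_arena added in asm/arena.h is a single contiguous chunk of 2^order 4k pages fronted by an allocation bitmap, from which non-default contexts later draw their page-table pages. A rough, self-contained model of that allocation discipline (plain malloc in place of Xen's domheap pages, simplified error handling, hypothetical names) could look like this:

#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

struct arena {
    unsigned char *base;   /* start of the contiguous region */
    unsigned long *map;    /* one bit per page, set = page in use */
    unsigned int order;    /* the region holds 1 << order pages */
    unsigned int used;     /* pages currently handed out */
};

static int arena_init(struct arena *a, unsigned int order)
{
    size_t pages = (size_t)1 << order;

    a->base = aligned_alloc(PAGE_SIZE, pages * PAGE_SIZE);
    a->map  = calloc((pages + BITS_PER_LONG - 1) / BITS_PER_LONG,
                     sizeof(unsigned long));
    if ( !a->base || !a->map )
    {
        free(a->base);
        free(a->map);
        return -1;
    }
    a->order = order;
    a->used = 0;
    return 0;
}

/* Hand out one free page, or NULL once the arena is exhausted. */
static void *arena_alloc_page(struct arena *a)
{
    size_t pages = (size_t)1 << a->order;

    for ( size_t i = 0; i < pages; i++ )
        if ( !(a->map[i / BITS_PER_LONG] & (1ul << (i % BITS_PER_LONG))) )
        {
            a->map[i / BITS_PER_LONG] |= 1ul << (i % BITS_PER_LONG);
            a->used++;
            return a->base + i * PAGE_SIZE;
        }
    return NULL;
}

/* Return a page; the caller must pass a pointer obtained from
 * arena_alloc_page(), otherwise the request is refused. */
static bool arena_free_page(struct arena *a, void *page)
{
    size_t i = (size_t)((unsigned char *)page - a->base) / PAGE_SIZE;

    if ( i >= ((size_t)1 << a->order) ||
         !(a->map[i / BITS_PER_LONG] & (1ul << (i % BITS_PER_LONG))) )
        return false;  /* not an arena page, or double free */
    a->map[i / BITS_PER_LONG] &= ~(1ul << (i % BITS_PER_LONG));
    a->used--;
    return true;
}

/* Teardown only succeeds once every page has been returned, mirroring
 * the -EBUSY check of iommu_arena_teardown(). */
static int arena_teardown(struct arena *a, bool check)
{
    if ( check && a->used )
        return -1;
    free(a->base);
    free(a->map);
    memset(a, 0, sizeof(*a));
    return 0;
}

int main(void)
{
    struct arena a;
    void *pg;

    if ( arena_init(&a, 4) )           /* 16 pages of 4k */
        return 1;
    pg = arena_alloc_page(&a);
    arena_free_page(&a, pg);
    return arena_teardown(&a, true);   /* 0: everything was given back */
}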
#include "iommu.h" #include "dmar.h" @@ -46,14 +55,6 @@ #define CONTIG_MASK DMA_PTE_CONTIG_MASK #include =20 -/* dom_io is used as a sentinel for quarantined devices */ -#define QUARANTINE_SKIP(d, pgd_maddr) ((d) =3D=3D dom_io && !(pgd_maddr)) -#define DEVICE_DOMID(d, pdev) ((d) !=3D dom_io ? (d)->domain_id \ - : (pdev)->arch.pseudo_domid) -#define DEVICE_PGTABLE(d, pdev) ((d) !=3D dom_io \ - ? dom_iommu(d)->arch.vtd.pgd_maddr \ - : (pdev)->arch.vtd.pgd_maddr) - bool __read_mostly iommu_igfx =3D true; bool __read_mostly iommu_qinval =3D true; #ifndef iommu_snoop @@ -66,7 +67,6 @@ static unsigned int __ro_after_init min_pt_levels =3D UIN= T_MAX; static struct tasklet vtd_fault_tasklet; =20 static int cf_check setup_hwdom_device(u8 devfn, struct pci_dev *); -static void setup_hwdom_rmrr(struct domain *d); =20 static bool domid_mapping(const struct vtd_iommu *iommu) { @@ -206,26 +206,14 @@ static bool any_pdev_behind_iommu(const struct domain= *d, * clear iommu in iommu_bitmap and clear domain_id in domid_bitmap. */ static void check_cleanup_domid_map(const struct domain *d, + const struct iommu_context *ctx, const struct pci_dev *exclude, struct vtd_iommu *iommu) { - bool found; - - if ( d =3D=3D dom_io ) - return; - - found =3D any_pdev_behind_iommu(d, exclude, iommu); - /* - * Hidden devices are associated with DomXEN but usable by the hardware - * domain. Hence they need considering here as well. - */ - if ( !found && is_hardware_domain(d) ) - found =3D any_pdev_behind_iommu(dom_xen, exclude, iommu); - - if ( !found ) + if ( !any_pdev_behind_iommu(d, exclude, iommu) ) { - clear_bit(iommu->index, dom_iommu(d)->arch.vtd.iommu_bitmap); - cleanup_domid_map(d->domain_id, iommu); + clear_bit(iommu->index, ctx->arch.vtd.iommu_bitmap); + cleanup_domid_map(ctx->arch.vtd.didmap[iommu->index], iommu); } } =20 @@ -312,8 +300,9 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu= , u8 bus) * PTE for the requested address, * - for target =3D=3D 0 the full PTE contents below PADDR_BITS limit. 
*/ -static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr, - unsigned int target, +static uint64_t addr_to_dma_page_maddr(struct domain *domain, + struct iommu_context *ctx, + daddr_t addr, unsigned int target, unsigned int *flush_flags, bool all= oc) { struct domain_iommu *hd =3D dom_iommu(domain); @@ -323,10 +312,9 @@ static uint64_t addr_to_dma_page_maddr(struct domain *= domain, daddr_t addr, u64 pte_maddr =3D 0; =20 addr &=3D (((u64)1) << addr_width) - 1; - ASSERT(spin_is_locked(&hd->arch.mapping_lock)); ASSERT(target || !alloc); =20 - if ( !hd->arch.vtd.pgd_maddr ) + if ( !ctx->arch.vtd.pgd_maddr ) { struct page_info *pg; =20 @@ -334,13 +322,13 @@ static uint64_t addr_to_dma_page_maddr(struct domain = *domain, daddr_t addr, goto out; =20 pte_maddr =3D level; - if ( !(pg =3D iommu_alloc_pgtable(hd, 0)) ) + if ( !(pg =3D iommu_alloc_pgtable(hd, ctx, 0)) ) goto out; =20 - hd->arch.vtd.pgd_maddr =3D page_to_maddr(pg); + ctx->arch.vtd.pgd_maddr =3D page_to_maddr(pg); } =20 - pte_maddr =3D hd->arch.vtd.pgd_maddr; + pte_maddr =3D ctx->arch.vtd.pgd_maddr; parent =3D map_vtd_domain_page(pte_maddr); while ( level > target ) { @@ -376,7 +364,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *d= omain, daddr_t addr, } =20 pte_maddr =3D level - 1; - pg =3D iommu_alloc_pgtable(hd, DMA_PTE_CONTIG_MASK); + pg =3D iommu_alloc_pgtable(hd, ctx, DMA_PTE_CONTIG_MASK); if ( !pg ) break; =20 @@ -428,38 +416,25 @@ static uint64_t addr_to_dma_page_maddr(struct domain = *domain, daddr_t addr, return pte_maddr; } =20 -static paddr_t domain_pgd_maddr(struct domain *d, paddr_t pgd_maddr, - unsigned int nr_pt_levels) +static paddr_t get_context_pgd(struct domain *d, struct iommu_context *ctx, + unsigned int nr_pt_levels) { - struct domain_iommu *hd =3D dom_iommu(d); unsigned int agaw; + paddr_t pgd_maddr =3D ctx->arch.vtd.pgd_maddr; =20 - ASSERT(spin_is_locked(&hd->arch.mapping_lock)); - - if ( pgd_maddr ) - /* nothing */; - else if ( iommu_use_hap_pt(d) ) + if ( !ctx->arch.vtd.pgd_maddr ) { - pagetable_t pgt =3D p2m_get_pagetable(p2m_get_hostp2m(d)); + /* + * Ensure we have pagetables allocated down to the smallest + * level the loop below may need to run to. + */ + addr_to_dma_page_maddr(d, ctx, 0, min_pt_levels, NULL, true); =20 - pgd_maddr =3D pagetable_get_paddr(pgt); + if ( !ctx->arch.vtd.pgd_maddr ) + return 0; } - else - { - if ( !hd->arch.vtd.pgd_maddr ) - { - /* - * Ensure we have pagetables allocated down to the smallest - * level the loop below may need to run to. 
- */ - addr_to_dma_page_maddr(d, 0, min_pt_levels, NULL, true); - - if ( !hd->arch.vtd.pgd_maddr ) - return 0; - } =20 - pgd_maddr =3D hd->arch.vtd.pgd_maddr; - } + pgd_maddr =3D ctx->arch.vtd.pgd_maddr; =20 /* Skip top level(s) of page tables for less-than-maximum level DRHDs.= */ for ( agaw =3D level_to_agaw(4); @@ -727,28 +702,18 @@ static int __must_check iommu_flush_all(void) return rc; } =20 -static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t= dfn, +static int __must_check cf_check iommu_flush_iotlb(struct domain *d, + struct iommu_context *c= tx, + dfn_t dfn, unsigned long page_coun= t, unsigned int flush_flag= s) { - struct domain_iommu *hd =3D dom_iommu(d); struct acpi_drhd_unit *drhd; struct vtd_iommu *iommu; bool flush_dev_iotlb; int iommu_domid; int ret =3D 0; =20 - if ( flush_flags & IOMMU_FLUSHF_all ) - { - dfn =3D INVALID_DFN; - page_count =3D 0; - } - else - { - ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN)); - ASSERT(flush_flags); - } - /* * No need pcideves_lock here because we have flush * when assign/deassign device @@ -759,13 +724,20 @@ static int __must_check cf_check iommu_flush_iotlb(st= ruct domain *d, dfn_t dfn, =20 iommu =3D drhd->iommu; =20 - if ( !test_bit(iommu->index, hd->arch.vtd.iommu_bitmap) ) - continue; + if ( ctx ) + { + if ( !test_bit(iommu->index, ctx->arch.vtd.iommu_bitmap) ) + continue; + + iommu_domid =3D get_iommu_did(ctx->arch.vtd.didmap[iommu->inde= x], iommu, true); + + if ( iommu_domid =3D=3D -1 ) + continue; + } + else + iommu_domid =3D 0; =20 flush_dev_iotlb =3D !!find_ats_dev_drhd(iommu); - iommu_domid =3D get_iommu_did(d->domain_id, iommu, !d->is_dying); - if ( iommu_domid =3D=3D -1 ) - continue; =20 if ( !page_count || (page_count & (page_count - 1)) || dfn_eq(dfn, INVALID_DFN) || !IS_ALIGNED(dfn_x(dfn), page_coun= t) ) @@ -784,10 +756,13 @@ static int __must_check cf_check iommu_flush_iotlb(st= ruct domain *d, dfn_t dfn, ret =3D rc; } =20 + if ( !ret && ctx ) + arch_iommu_flush_free_queue(d, ctx); + return ret; } =20 -static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int= level) +static void queue_free_pt(struct iommu_context *ctx, mfn_t mfn, unsigned i= nt level) { if ( level > 1 ) { @@ -796,13 +771,13 @@ static void queue_free_pt(struct domain_iommu *hd, mf= n_t mfn, unsigned int level =20 for ( i =3D 0; i < PTE_NUM; ++i ) if ( dma_pte_present(pt[i]) && !dma_pte_superpage(pt[i]) ) - queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(pt[i])), + queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(pt[i])), level - 1); =20 unmap_domain_page(pt); } =20 - iommu_queue_free_pgtable(hd, mfn_to_page(mfn)); + iommu_queue_free_pgtable(ctx, mfn_to_page(mfn)); } =20 static int iommu_set_root_entry(struct vtd_iommu *iommu) @@ -1433,11 +1408,6 @@ static int cf_check intel_iommu_domain_init(struct d= omain *d) { struct domain_iommu *hd =3D dom_iommu(d); =20 - hd->arch.vtd.iommu_bitmap =3D xzalloc_array(unsigned long, - BITS_TO_LONGS(nr_iommus)); - if ( !hd->arch.vtd.iommu_bitmap ) - return -ENOMEM; - hd->arch.vtd.agaw =3D width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH); =20 return 0; @@ -1448,7 +1418,7 @@ static void __hwdom_init cf_check intel_iommu_hwdom_i= nit(struct domain *d) struct acpi_drhd_unit *drhd; =20 setup_hwdom_pci_devices(d, setup_hwdom_device); - setup_hwdom_rmrr(d); + /* Make sure workarounds are applied before enabling the IOMMU(s). 
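The flush path above also shows the per-context hardware bookkeeping introduced by this series: each context records which IOMMU units it is live on (arch.vtd.iommu_bitmap) and, per unit, the domain-id the unit knows it by (arch.vtd.didmap). A condensed stand-alone sketch of that lookup, with a fixed unit count and hypothetical names, is:

#include <stdint.h>
#include <stdio.h>

#define NR_IOMMUS   4
#define DID_INVALID UINT16_MAX

/* Per-context view of the hardware: one DID per IOMMU unit, plus a bitmap
 * of the units that actually reference this context. */
struct ctx_hw_state {
    uint16_t didmap[NR_IOMMUS];
    unsigned long iommu_bitmap;
};

/* Flush the IOTLB for one context, touching only the units it uses. */
static void flush_context(const struct ctx_hw_state *ctx)
{
    for ( unsigned int unit = 0; unit < NR_IOMMUS; unit++ )
    {
        if ( !(ctx->iommu_bitmap & (1ul << unit)) )
            continue;                        /* context never mapped on this unit */
        if ( ctx->didmap[unit] == DID_INVALID )
            continue;                        /* no DID allocated on this unit */
        printf("flush unit %u with DID %u\n", unit, ctx->didmap[unit]);
    }
}

int main(void)
{
    struct ctx_hw_state ctx = {
        .didmap = { 7, DID_INVALID, 9, DID_INVALID },
        .iommu_bitmap = (1ul << 0) | (1ul << 2),
    };

    flush_context(&ctx);                     /* units 0 and 2 only */
    return 0;
}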
*/ arch_iommu_hwdom_init(d); =20 @@ -1465,32 +1435,22 @@ static void __hwdom_init cf_check intel_iommu_hwdom= _init(struct domain *d) } } =20 -/* - * This function returns - * - a negative errno value upon error, - * - zero upon success when previously the entry was non-present, or this = isn't - * the "main" request for a device (pdev =3D=3D NULL), or for no-op quar= antining - * assignments, - * - positive (one) upon success when previously the entry was present and= this - * is the "main" request for a device (pdev !=3D NULL). +/** + * Apply a context on a device. + * @param domain Domain of the context + * @param iommu IOMMU hardware to use (must match device iommu) + * @param ctx IOMMU context to apply + * @param devfn PCI device function (may be different to pdev) */ -int domain_context_mapping_one( - struct domain *domain, - struct vtd_iommu *iommu, - uint8_t bus, uint8_t devfn, const struct pci_dev *pdev, - domid_t domid, paddr_t pgd_maddr, unsigned int mode) +int apply_context_single(struct domain *domain, struct iommu_context *ctx, + struct vtd_iommu *iommu, uint8_t bus, uint8_t dev= fn) { - struct domain_iommu *hd =3D dom_iommu(domain); struct context_entry *context, *context_entries, lctxt; - __uint128_t old; + __uint128_t res, old; uint64_t maddr; - uint16_t seg =3D iommu->drhd->segment, prev_did =3D 0; - struct domain *prev_dom =3D NULL; + uint16_t seg =3D iommu->drhd->segment, prev_did =3D 0, did; int rc, ret; - bool flush_dev_iotlb; - - if ( QUARANTINE_SKIP(domain, pgd_maddr) ) - return 0; + bool flush_dev_iotlb, overwrite_entry =3D false; =20 ASSERT(pcidevs_locked()); spin_lock(&iommu->lock); @@ -1499,28 +1459,15 @@ int domain_context_mapping_one( context =3D &context_entries[devfn]; old =3D (lctxt =3D *context).full; =20 - if ( context_present(lctxt) ) - { - domid_t domid; + did =3D ctx->arch.vtd.didmap[iommu->index]; =20 + if ( context_present(*context) ) + { prev_did =3D context_domain_id(lctxt); - domid =3D did_to_domain_id(iommu, prev_did); - if ( domid < DOMID_FIRST_RESERVED ) - prev_dom =3D rcu_lock_domain_by_id(domid); - else if ( pdev ? 
domid =3D=3D pdev->arch.pseudo_domid : domid > DO= MID_MASK ) - prev_dom =3D rcu_lock_domain(dom_io); - if ( !prev_dom ) - { - spin_unlock(&iommu->lock); - unmap_vtd_domain_page(context_entries); - dprintk(XENLOG_DEBUG VTDPREFIX, - "no domain for did %u (nr_dom %u)\n", - prev_did, cap_ndoms(iommu->cap)); - return -ESRCH; - } + overwrite_entry =3D true; } =20 - if ( iommu_hwdom_passthrough && is_hardware_domain(domain) ) + if ( iommu_hwdom_passthrough && is_hardware_domain(domain) && !ctx->id= ) { context_set_translation_type(lctxt, CONTEXT_TT_PASS_THRU); } @@ -1528,16 +1475,10 @@ int domain_context_mapping_one( { paddr_t root; =20 - spin_lock(&hd->arch.mapping_lock); - - root =3D domain_pgd_maddr(domain, pgd_maddr, iommu->nr_pt_levels); + root =3D get_context_pgd(domain, ctx, iommu->nr_pt_levels); if ( !root ) { - spin_unlock(&hd->arch.mapping_lock); - spin_unlock(&iommu->lock); unmap_vtd_domain_page(context_entries); - if ( prev_dom ) - rcu_unlock_domain(prev_dom); return -ENOMEM; } =20 @@ -1546,98 +1487,39 @@ int domain_context_mapping_one( context_set_translation_type(lctxt, CONTEXT_TT_DEV_IOTLB); else context_set_translation_type(lctxt, CONTEXT_TT_MULTI_LEVEL); - - spin_unlock(&hd->arch.mapping_lock); } =20 - rc =3D context_set_domain_id(&lctxt, domid, iommu); + rc =3D context_set_domain_id(&lctxt, did, iommu); if ( rc ) - { - unlock: - spin_unlock(&iommu->lock); - unmap_vtd_domain_page(context_entries); - if ( prev_dom ) - rcu_unlock_domain(prev_dom); - return rc; - } - - if ( !prev_dom ) - { - context_set_address_width(lctxt, level_to_agaw(iommu->nr_pt_levels= )); - context_set_fault_enable(lctxt); - context_set_present(lctxt); - } - else if ( prev_dom =3D=3D domain ) - { - ASSERT(lctxt.full =3D=3D context->full); - rc =3D !!pdev; goto unlock; - } - else - { - ASSERT(context_address_width(lctxt) =3D=3D - level_to_agaw(iommu->nr_pt_levels)); - ASSERT(!context_fault_disable(lctxt)); - } - - if ( cpu_has_cx16 ) - { - __uint128_t res =3D cmpxchg16b(context, &old, &lctxt.full); =20 - /* - * Hardware does not update the context entry behind our backs, - * so the return value should match "old". - */ - if ( res !=3D old ) - { - if ( pdev ) - check_cleanup_domid_map(domain, pdev, iommu); - printk(XENLOG_ERR - "%pp: unexpected context entry %016lx_%016lx (expected = %016lx_%016lx)\n", - &PCI_SBDF(seg, bus, devfn), - (uint64_t)(res >> 64), (uint64_t)res, - (uint64_t)(old >> 64), (uint64_t)old); - rc =3D -EILSEQ; - goto unlock; - } - } - else if ( !prev_dom || !(mode & MAP_WITH_RMRR) ) - { - context_clear_present(*context); - iommu_sync_cache(context, sizeof(*context)); + context_set_address_width(lctxt, level_to_agaw(iommu->nr_pt_levels)); + context_set_fault_enable(lctxt); + context_set_present(lctxt); =20 - write_atomic(&context->hi, lctxt.hi); - /* No barrier should be needed between these two. */ - write_atomic(&context->lo, lctxt.lo); - } - else /* Best effort, updating DID last. */ - { - /* - * By non-atomically updating the context entry's DID field last, - * during a short window in time TLB entries with the old domain = ID - * but the new page tables may be inserted. This could affect I/O - * of other devices using this same (old) domain ID. Such updati= ng - * therefore is not a problem if this was the only device associa= ted - * with the old domain ID. Diverting I/O of any of a dying domai= n's - * devices to the quarantine page tables is intended anyway. 
- */ - if ( !(mode & (MAP_OWNER_DYING | MAP_SINGLE_DEVICE)) ) - printk(XENLOG_WARNING VTDPREFIX - " %pp: reassignment may cause %pd data corruption\n", - &PCI_SBDF(seg, bus, devfn), prev_dom); + res =3D cmpxchg16b(context, &old, &lctxt.full); =20 - write_atomic(&context->lo, lctxt.lo); - /* No barrier should be needed between these two. */ - write_atomic(&context->hi, lctxt.hi); + /* + * Hardware does not update the context entry behind our backs, + * so the return value should match "old". + */ + if ( res !=3D old ) + { + printk(XENLOG_ERR + "%pp: unexpected context entry %016lx_%016lx (expected %01= 6lx_%016lx)\n", + &PCI_SBDF(seg, bus, devfn), + (uint64_t)(res >> 64), (uint64_t)res, + (uint64_t)(old >> 64), (uint64_t)old); + rc =3D -EILSEQ; + goto unlock; } =20 iommu_sync_cache(context, sizeof(struct context_entry)); - spin_unlock(&iommu->lock); =20 rc =3D iommu_flush_context_device(iommu, prev_did, PCI_BDF(bus, devfn), - DMA_CCMD_MASK_NOBIT, !prev_dom); + DMA_CCMD_MASK_NOBIT, !overwrite_entry); flush_dev_iotlb =3D !!find_ats_dev_drhd(iommu); - ret =3D iommu_flush_iotlb_dsi(iommu, prev_did, !prev_dom, flush_dev_io= tlb); + ret =3D iommu_flush_iotlb_dsi(iommu, prev_did, !overwrite_entry, flush= _dev_iotlb); =20 /* * The current logic for returns: @@ -1653,230 +1535,55 @@ int domain_context_mapping_one( if ( rc > 0 ) rc =3D 0; =20 - set_bit(iommu->index, hd->arch.vtd.iommu_bitmap); + set_bit(iommu->index, ctx->arch.vtd.iommu_bitmap); =20 unmap_vtd_domain_page(context_entries); + spin_unlock(&iommu->lock); =20 if ( !seg && !rc ) - rc =3D me_wifi_quirk(domain, bus, devfn, domid, pgd_maddr, mode); - - if ( rc && !(mode & MAP_ERROR_RECOVERY) ) - { - if ( !prev_dom || - /* - * Unmapping here means DEV_TYPE_PCI devices with RMRRs (if s= uch - * exist) would cause problems if such a region was actually - * accessed. - */ - (prev_dom =3D=3D dom_io && !pdev) ) - ret =3D domain_context_unmap_one(domain, iommu, bus, devfn); - else - ret =3D domain_context_mapping_one(prev_dom, iommu, bus, devfn= , pdev, - DEVICE_DOMID(prev_dom, pdev), - DEVICE_PGTABLE(prev_dom, pdev= ), - (mode & MAP_WITH_RMRR) | - MAP_ERROR_RECOVERY) < 0; - - if ( !ret && pdev && pdev->devfn =3D=3D devfn ) - check_cleanup_domid_map(domain, pdev, iommu); - } + rc =3D me_wifi_quirk(domain, bus, devfn, did, 0, ctx); =20 - if ( prev_dom ) - rcu_unlock_domain(prev_dom); + return rc; =20 - return rc ?: pdev && prev_dom; + unlock: + unmap_vtd_domain_page(context_entries); + spin_unlock(&iommu->lock); + return rc; } =20 -static const struct acpi_drhd_unit *domain_context_unmap( - struct domain *d, uint8_t devfn, struct pci_dev *pdev); - -static int domain_context_mapping(struct domain *domain, u8 devfn, - struct pci_dev *pdev) +int apply_context(struct domain *d, struct iommu_context *ctx, + struct pci_dev *pdev, u8 devfn) { const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); - const struct acpi_rmrr_unit *rmrr; - paddr_t pgd_maddr =3D DEVICE_PGTABLE(domain, pdev); - domid_t orig_domid =3D pdev->arch.pseudo_domid; int ret =3D 0; - unsigned int i, mode =3D 0; - uint16_t seg =3D pdev->seg, bdf; - uint8_t bus =3D pdev->bus, secbus; - - /* - * Generally we assume only devices from one node to get assigned to a - * given guest. But even if not, by replacing the prior value here we - * guarantee that at least some basic allocations for the device being - * added will get done against its node. 
Any further allocations for - * this or other devices may be penalized then, but some would also be - * if we left other than NUMA_NO_NODE untouched here. - */ - if ( drhd && drhd->iommu->node !=3D NUMA_NO_NODE ) - dom_iommu(domain)->node =3D drhd->iommu->node; - - ASSERT(pcidevs_locked()); - - for_each_rmrr_device( rmrr, bdf, i ) - { - if ( rmrr->segment !=3D pdev->seg || bdf !=3D pdev->sbdf.bdf ) - continue; =20 - mode |=3D MAP_WITH_RMRR; - break; - } + if ( !drhd ) + return -EINVAL; =20 - if ( domain !=3D pdev->domain && pdev->domain !=3D dom_io ) + if ( pdev->type =3D=3D DEV_TYPE_PCI_HOST_BRIDGE || + pdev->type =3D=3D DEV_TYPE_PCIe_BRIDGE || + pdev->type =3D=3D DEV_TYPE_PCIe2PCI_BRIDGE || + pdev->type =3D=3D DEV_TYPE_LEGACY_PCI_BRIDGE ) { - if ( pdev->domain->is_dying ) - mode |=3D MAP_OWNER_DYING; - else if ( drhd && - !any_pdev_behind_iommu(pdev->domain, pdev, drhd->iommu) = && - !pdev->phantom_stride ) - mode |=3D MAP_SINGLE_DEVICE; + printk(XENLOG_WARNING VTDPREFIX " Ignoring apply_context on PCI br= idge\n"); + return 0; } =20 - switch ( pdev->type ) - { - bool prev_present; - - case DEV_TYPE_PCI_HOST_BRIDGE: - if ( iommu_debug ) - printk(VTDPREFIX "%pd:Hostbridge: skip %pp map\n", - domain, &PCI_SBDF(seg, bus, devfn)); - if ( !is_hardware_domain(domain) ) - return -EPERM; - break; - - case DEV_TYPE_PCIe_BRIDGE: - case DEV_TYPE_PCIe2PCI_BRIDGE: - case DEV_TYPE_LEGACY_PCI_BRIDGE: - break; - - case DEV_TYPE_PCIe_ENDPOINT: - if ( !drhd ) - return -ENODEV; - - if ( iommu_quarantine && orig_domid =3D=3D DOMID_INVALID ) - { - pdev->arch.pseudo_domid =3D - iommu_alloc_domid(drhd->iommu->pseudo_domid_map); - if ( pdev->arch.pseudo_domid =3D=3D DOMID_INVALID ) - return -ENOSPC; - } - - if ( iommu_debug ) - printk(VTDPREFIX "%pd:PCIe: map %pp\n", - domain, &PCI_SBDF(seg, bus, devfn)); - ret =3D domain_context_mapping_one(domain, drhd->iommu, bus, devfn= , pdev, - DEVICE_DOMID(domain, pdev), pgd_m= addr, - mode); - if ( ret > 0 ) - ret =3D 0; - if ( !ret && devfn =3D=3D pdev->devfn && ats_device(pdev, drhd) > = 0 ) - enable_ats_device(pdev, &drhd->iommu->ats_devices); - - break; - - case DEV_TYPE_PCI: - if ( !drhd ) - return -ENODEV; - - if ( iommu_quarantine && orig_domid =3D=3D DOMID_INVALID ) - { - pdev->arch.pseudo_domid =3D - iommu_alloc_domid(drhd->iommu->pseudo_domid_map); - if ( pdev->arch.pseudo_domid =3D=3D DOMID_INVALID ) - return -ENOSPC; - } - - if ( iommu_debug ) - printk(VTDPREFIX "%pd:PCI: map %pp\n", - domain, &PCI_SBDF(seg, bus, devfn)); - - ret =3D domain_context_mapping_one(domain, drhd->iommu, bus, devfn, - pdev, DEVICE_DOMID(domain, pdev), - pgd_maddr, mode); - if ( ret < 0 ) - break; - prev_present =3D ret; - - if ( (ret =3D find_upstream_bridge(seg, &bus, &devfn, &secbus)) < = 1 ) - { - if ( !ret ) - break; - ret =3D -ENXIO; - } - /* - * Strictly speaking if the device is the only one behind this bri= dge - * and the only one with this (secbus,0,0) tuple, it could be allo= wed - * to be re-assigned regardless of RMRR presence. But let's deal = with - * that case only if it is actually found in the wild. Note that - * dealing with this just here would still not render the operation - * secure. - */ - else if ( prev_present && (mode & MAP_WITH_RMRR) && - domain !=3D pdev->domain ) - ret =3D -EOPNOTSUPP; - - /* - * Mapping a bridge should, if anything, pass the struct pci_dev of - * that bridge. Since bridges don't normally get assigned to guest= s, - * their owner would be the wrong one. Pass NULL instead. 
- */ - if ( ret >=3D 0 ) - ret =3D domain_context_mapping_one(domain, drhd->iommu, bus, d= evfn, - NULL, DEVICE_DOMID(domain, pd= ev), - pgd_maddr, mode); - - /* - * Devices behind PCIe-to-PCI/PCIx bridge may generate different - * requester-id. It may originate from devfn=3D0 on the secondary = bus - * behind the bridge. Map that id as well if we didn't already. - * - * Somewhat similar as for bridges, we don't want to pass a struct - * pci_dev here - there may not even exist one for this (secbus,0,= 0) - * tuple. If there is one, without properly working device groups = it - * may again not have the correct owner. - */ - if ( !ret && pdev_type(seg, bus, devfn) =3D=3D DEV_TYPE_PCIe2PCI_B= RIDGE && - (secbus !=3D pdev->bus || pdev->devfn !=3D 0) ) - ret =3D domain_context_mapping_one(domain, drhd->iommu, secbus= , 0, - NULL, DEVICE_DOMID(domain, pd= ev), - pgd_maddr, mode); - - if ( ret ) - { - if ( !prev_present ) - domain_context_unmap(domain, devfn, pdev); - else if ( pdev->domain !=3D domain ) /* Avoid infinite recursi= on. */ - domain_context_mapping(pdev->domain, devfn, pdev); - } + ASSERT(pcidevs_locked()); =20 - break; + ret =3D apply_context_single(d, ctx, drhd->iommu, pdev->bus, devfn); =20 - default: - dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n", - domain, pdev->type, &PCI_SBDF(seg, bus, devfn)); - ret =3D -EINVAL; - break; - } + if ( !ret && ats_device(pdev, drhd) > 0 ) + enable_ats_device(pdev, &drhd->iommu->ats_devices); =20 if ( !ret && devfn =3D=3D pdev->devfn ) pci_vtd_quirk(pdev); =20 - if ( ret && drhd && orig_domid =3D=3D DOMID_INVALID ) - { - iommu_free_domid(pdev->arch.pseudo_domid, - drhd->iommu->pseudo_domid_map); - pdev->arch.pseudo_domid =3D DOMID_INVALID; - } - return ret; } =20 -int domain_context_unmap_one( - struct domain *domain, - struct vtd_iommu *iommu, - uint8_t bus, uint8_t devfn) +int unapply_context_single(struct domain *domain, struct vtd_iommu *iommu, + uint8_t bus, uint8_t devfn) { struct context_entry *context, *context_entries; u64 maddr; @@ -1928,8 +1635,8 @@ int domain_context_unmap_one( unmap_vtd_domain_page(context_entries); =20 if ( !iommu->drhd->segment && !rc ) - rc =3D me_wifi_quirk(domain, bus, devfn, DOMID_INVALID, 0, - UNMAP_ME_PHANTOM_FUNC); + rc =3D me_wifi_quirk(domain, bus, devfn, DOMID_INVALID, UNMAP_ME_P= HANTOM_FUNC, + NULL); =20 if ( rc && !is_hardware_domain(domain) && domain !=3D dom_io ) { @@ -1947,143 +1654,28 @@ int domain_context_unmap_one( return rc; } =20 -static const struct acpi_drhd_unit *domain_context_unmap( - struct domain *domain, - uint8_t devfn, - struct pci_dev *pdev) -{ - const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); - struct vtd_iommu *iommu =3D drhd ? drhd->iommu : NULL; - int ret; - uint16_t seg =3D pdev->seg; - uint8_t bus =3D pdev->bus, tmp_bus, tmp_devfn, secbus; - - switch ( pdev->type ) - { - case DEV_TYPE_PCI_HOST_BRIDGE: - if ( iommu_debug ) - printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n", - domain, &PCI_SBDF(seg, bus, devfn)); - return ERR_PTR(is_hardware_domain(domain) ? 
0 : -EPERM); - - case DEV_TYPE_PCIe_BRIDGE: - case DEV_TYPE_PCIe2PCI_BRIDGE: - case DEV_TYPE_LEGACY_PCI_BRIDGE: - return ERR_PTR(0); - - case DEV_TYPE_PCIe_ENDPOINT: - if ( !iommu ) - return ERR_PTR(-ENODEV); - - if ( iommu_debug ) - printk(VTDPREFIX "%pd:PCIe: unmap %pp\n", - domain, &PCI_SBDF(seg, bus, devfn)); - ret =3D domain_context_unmap_one(domain, iommu, bus, devfn); - if ( !ret && devfn =3D=3D pdev->devfn && ats_device(pdev, drhd) > = 0 ) - disable_ats_device(pdev); - - break; - - case DEV_TYPE_PCI: - if ( !iommu ) - return ERR_PTR(-ENODEV); - - if ( iommu_debug ) - printk(VTDPREFIX "%pd:PCI: unmap %pp\n", - domain, &PCI_SBDF(seg, bus, devfn)); - ret =3D domain_context_unmap_one(domain, iommu, bus, devfn); - if ( ret ) - break; - - tmp_bus =3D bus; - tmp_devfn =3D devfn; - if ( (ret =3D find_upstream_bridge(seg, &tmp_bus, &tmp_devfn, - &secbus)) < 1 ) - { - if ( ret ) - { - ret =3D -ENXIO; - if ( !domain->is_dying && - !is_hardware_domain(domain) && domain !=3D dom_io ) - { - domain_crash(domain); - /* Make upper layers continue in a best effort manner.= */ - ret =3D 0; - } - } - break; - } - - ret =3D domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn= ); - /* PCIe to PCI/PCIx bridge */ - if ( !ret && pdev_type(seg, tmp_bus, tmp_devfn) =3D=3D DEV_TYPE_PC= Ie2PCI_BRIDGE ) - ret =3D domain_context_unmap_one(domain, iommu, secbus, 0); - - break; - - default: - dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n", - domain, pdev->type, &PCI_SBDF(seg, bus, devfn)); - return ERR_PTR(-EINVAL); - } - - if ( !ret && pdev->devfn =3D=3D devfn && - !QUARANTINE_SKIP(domain, pdev->arch.vtd.pgd_maddr) ) - check_cleanup_domid_map(domain, pdev, iommu); - - return drhd; -} - -static void cf_check iommu_clear_root_pgtable(struct domain *d) +static void cf_check iommu_clear_root_pgtable(struct domain *d, struct iom= mu_context *ctx) { - struct domain_iommu *hd =3D dom_iommu(d); - - spin_lock(&hd->arch.mapping_lock); - hd->arch.vtd.pgd_maddr =3D 0; - spin_unlock(&hd->arch.mapping_lock); + ctx->arch.vtd.pgd_maddr =3D 0; } =20 static void cf_check iommu_domain_teardown(struct domain *d) { - struct domain_iommu *hd =3D dom_iommu(d); + struct iommu_context *ctx =3D iommu_default_context(d); const struct acpi_drhd_unit *drhd; =20 if ( list_empty(&acpi_drhd_units) ) return; =20 - iommu_identity_map_teardown(d); - - ASSERT(!hd->arch.vtd.pgd_maddr); + ASSERT(!ctx->arch.vtd.pgd_maddr); =20 for_each_drhd_unit ( drhd ) cleanup_domid_map(d->domain_id, drhd->iommu); - - XFREE(hd->arch.vtd.iommu_bitmap); -} - -static void quarantine_teardown(struct pci_dev *pdev, - const struct acpi_drhd_unit *drhd) -{ - struct domain_iommu *hd =3D dom_iommu(dom_io); - - ASSERT(pcidevs_locked()); - - if ( !pdev->arch.vtd.pgd_maddr ) - return; - - ASSERT(page_list_empty(&hd->arch.pgtables.list)); - page_list_move(&hd->arch.pgtables.list, &pdev->arch.pgtables_list); - while ( iommu_free_pgtables(dom_io) =3D=3D -ERESTART ) - /* nothing */; - pdev->arch.vtd.pgd_maddr =3D 0; - - if ( drhd ) - cleanup_domid_map(pdev->arch.pseudo_domid, drhd->iommu); } =20 static int __must_check cf_check intel_iommu_map_page( struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags, - unsigned int *flush_flags) + unsigned int *flush_flags, struct iommu_context *ctx) { struct domain_iommu *hd =3D dom_iommu(d); struct dma_pte *page, *pte, old, new =3D {}; @@ -2094,33 +1686,24 @@ static int __must_check cf_check intel_iommu_map_pa= ge( ASSERT((hd->platform_ops->page_sizes >> IOMMUF_order(flags)) & PAGE_SIZE_4K); =20 - /* Do nothing if VT-d 
shares EPT page table */ - if ( iommu_use_hap_pt(d) ) - return 0; - - /* Do nothing if hardware domain and iommu supports pass thru. */ - if ( iommu_hwdom_passthrough && is_hardware_domain(d) ) + if ( ctx->opaque ) return 0; =20 - spin_lock(&hd->arch.mapping_lock); - /* * IOMMU mapping request can be safely ignored when the domain is dyin= g. * - * hd->arch.mapping_lock guarantees that d->is_dying will be observed + * hd->lock guarantees that d->is_dying will be observed * before any page tables are freed (see iommu_free_pgtables()) */ if ( d->is_dying ) { - spin_unlock(&hd->arch.mapping_lock); return 0; } =20 - pg_maddr =3D addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), level, flush= _flags, + pg_maddr =3D addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), level, = flush_flags, true); if ( pg_maddr < PAGE_SIZE ) { - spin_unlock(&hd->arch.mapping_lock); return -ENOMEM; } =20 @@ -2141,7 +1724,6 @@ static int __must_check cf_check intel_iommu_map_page( =20 if ( !((old.val ^ new.val) & ~DMA_PTE_CONTIG_MASK) ) { - spin_unlock(&hd->arch.mapping_lock); unmap_vtd_domain_page(page); return 0; } @@ -2170,7 +1752,7 @@ static int __must_check cf_check intel_iommu_map_page( new.val &=3D ~(LEVEL_MASK << level_to_offset_bits(level)); dma_set_pte_superpage(new); =20 - pg_maddr =3D addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), ++level, + pg_maddr =3D addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), ++l= evel, flush_flags, false); BUG_ON(pg_maddr < PAGE_SIZE); =20 @@ -2180,11 +1762,10 @@ static int __must_check cf_check intel_iommu_map_pa= ge( iommu_sync_cache(pte, sizeof(*pte)); =20 *flush_flags |=3D IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all; - iommu_queue_free_pgtable(hd, pg); + iommu_queue_free_pgtable(ctx, pg); perfc_incr(iommu_pt_coalesces); } =20 - spin_unlock(&hd->arch.mapping_lock); unmap_vtd_domain_page(page); =20 *flush_flags |=3D IOMMU_FLUSHF_added; @@ -2193,7 +1774,7 @@ static int __must_check cf_check intel_iommu_map_page( *flush_flags |=3D IOMMU_FLUSHF_modified; =20 if ( IOMMUF_order(flags) && !dma_pte_superpage(old) ) - queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)), + queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(old)), IOMMUF_order(flags) / LEVEL_STRIDE); } =20 @@ -2201,7 +1782,8 @@ static int __must_check cf_check intel_iommu_map_page( } =20 static int __must_check cf_check intel_iommu_unmap_page( - struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_f= lags) + struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_f= lags, + struct iommu_context *ctx) { struct domain_iommu *hd =3D dom_iommu(d); daddr_t addr =3D dfn_to_daddr(dfn); @@ -2215,29 +1797,19 @@ static int __must_check cf_check intel_iommu_unmap_= page( */ ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K); =20 - /* Do nothing if VT-d shares EPT page table */ - if ( iommu_use_hap_pt(d) ) + if ( ctx->opaque ) return 0; =20 - /* Do nothing if hardware domain and iommu supports pass thru. */ - if ( iommu_hwdom_passthrough && is_hardware_domain(d) ) - return 0; - - spin_lock(&hd->arch.mapping_lock); /* get target level pte */ - pg_maddr =3D addr_to_dma_page_maddr(d, addr, level, flush_flags, false= ); + pg_maddr =3D addr_to_dma_page_maddr(d, ctx, addr, level, flush_flags, = false); if ( pg_maddr < PAGE_SIZE ) - { - spin_unlock(&hd->arch.mapping_lock); return pg_maddr ? 
-ENOMEM : 0; - } =20 page =3D map_vtd_domain_page(pg_maddr); pte =3D &page[address_level_offset(addr, level)]; =20 if ( !dma_pte_present(*pte) ) { - spin_unlock(&hd->arch.mapping_lock); unmap_vtd_domain_page(page); return 0; } @@ -2255,7 +1827,7 @@ static int __must_check cf_check intel_iommu_unmap_pa= ge( =20 unmap_vtd_domain_page(page); =20 - pg_maddr =3D addr_to_dma_page_maddr(d, addr, level, flush_flags, f= alse); + pg_maddr =3D addr_to_dma_page_maddr(d, ctx, addr, level, flush_fla= gs, false); BUG_ON(pg_maddr < PAGE_SIZE); =20 page =3D map_vtd_domain_page(pg_maddr); @@ -2264,42 +1836,31 @@ static int __must_check cf_check intel_iommu_unmap_= page( iommu_sync_cache(pte, sizeof(*pte)); =20 *flush_flags |=3D IOMMU_FLUSHF_all; - iommu_queue_free_pgtable(hd, pg); + iommu_queue_free_pgtable(ctx, pg); perfc_incr(iommu_pt_coalesces); } =20 - spin_unlock(&hd->arch.mapping_lock); - unmap_vtd_domain_page(page); =20 *flush_flags |=3D IOMMU_FLUSHF_modified; =20 if ( order && !dma_pte_superpage(old) ) - queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)), + queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(old)), order / LEVEL_STRIDE); =20 return 0; } =20 static int cf_check intel_iommu_lookup_page( - struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags) + struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags, + struct iommu_context *ctx) { - struct domain_iommu *hd =3D dom_iommu(d); uint64_t val; =20 - /* - * If VT-d shares EPT page table or if the domain is the hardware - * domain and iommu_passthrough is set then pass back the dfn. - */ - if ( iommu_use_hap_pt(d) || - (iommu_hwdom_passthrough && is_hardware_domain(d)) ) + if ( ctx->opaque ) return -EOPNOTSUPP; =20 - spin_lock(&hd->arch.mapping_lock); - - val =3D addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0, NULL, false); - - spin_unlock(&hd->arch.mapping_lock); + val =3D addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), 0, NULL, fal= se); =20 if ( val < PAGE_SIZE ) return -ENOENT; @@ -2320,7 +1881,7 @@ static bool __init vtd_ept_page_compatible(const stru= ct vtd_iommu *iommu) =20 /* EPT is not initialised yet, so we must check the capability in * the MSR explicitly rather than use cpu_has_vmx_ept_*() */ - if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) !=3D 0 )=20 + if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) !=3D 0 ) return false; =20 return (ept_has_2mb(ept_cap) && opt_hap_2mb) <=3D @@ -2329,44 +1890,6 @@ static bool __init vtd_ept_page_compatible(const str= uct vtd_iommu *iommu) (cap_sps_1gb(vtd_cap) && iommu_superpages); } =20 -static int cf_check intel_iommu_add_device(u8 devfn, struct pci_dev *pdev) -{ - struct acpi_rmrr_unit *rmrr; - u16 bdf; - int ret, i; - - ASSERT(pcidevs_locked()); - - if ( !pdev->domain ) - return -EINVAL; - - for_each_rmrr_device ( rmrr, bdf, i ) - { - if ( rmrr->segment =3D=3D pdev->seg && bdf =3D=3D PCI_BDF(pdev->bu= s, devfn) ) - { - /* - * iommu_add_device() is only called for the hardware - * domain (see xen/drivers/passthrough/pci.c:pci_add_device()). - * Since RMRRs are always reserved in the e820 map for the har= dware - * domain, there shouldn't be a conflict. 
- */ - ret =3D iommu_identity_mapping(pdev->domain, p2m_access_rw, - rmrr->base_address, rmrr->end_add= ress, - 0); - if ( ret ) - dprintk(XENLOG_ERR VTDPREFIX, "%pd: RMRR mapping failed\n", - pdev->domain); - } - } - - ret =3D domain_context_mapping(pdev->domain, devfn, pdev); - if ( ret ) - dprintk(XENLOG_ERR VTDPREFIX, "%pd: context mapping failed\n", - pdev->domain); - - return ret; -} - static int cf_check intel_iommu_enable_device(struct pci_dev *pdev) { struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev); @@ -2382,49 +1905,16 @@ static int cf_check intel_iommu_enable_device(struc= t pci_dev *pdev) return ret >=3D 0 ? 0 : ret; } =20 -static int cf_check intel_iommu_remove_device(u8 devfn, struct pci_dev *pd= ev) -{ - const struct acpi_drhd_unit *drhd; - struct acpi_rmrr_unit *rmrr; - u16 bdf; - unsigned int i; - - if ( !pdev->domain ) - return -EINVAL; - - drhd =3D domain_context_unmap(pdev->domain, devfn, pdev); - if ( IS_ERR(drhd) ) - return PTR_ERR(drhd); - - for_each_rmrr_device ( rmrr, bdf, i ) - { - if ( rmrr->segment !=3D pdev->seg || bdf !=3D PCI_BDF(pdev->bus, d= evfn) ) - continue; - - /* - * Any flag is nothing to clear these mappings but here - * its always safe and strict to set 0. - */ - iommu_identity_mapping(pdev->domain, p2m_access_x, rmrr->base_addr= ess, - rmrr->end_address, 0); - } - - quarantine_teardown(pdev, drhd); - - if ( drhd ) - { - iommu_free_domid(pdev->arch.pseudo_domid, - drhd->iommu->pseudo_domid_map); - pdev->arch.pseudo_domid =3D DOMID_INVALID; - } - - return 0; -} - static int __hwdom_init cf_check setup_hwdom_device( u8 devfn, struct pci_dev *pdev) { - return domain_context_mapping(pdev->domain, devfn, pdev); + if (pdev->type =3D=3D DEV_TYPE_PCI_HOST_BRIDGE || + pdev->type =3D=3D DEV_TYPE_PCIe_BRIDGE || + pdev->type =3D=3D DEV_TYPE_PCIe2PCI_BRIDGE || + pdev->type =3D=3D DEV_TYPE_LEGACY_PCI_BRIDGE) + return 0; + + return iommu_attach_context(hardware_domain, pdev, 0); } =20 void clear_fault_bits(struct vtd_iommu *iommu) @@ -2518,7 +2008,7 @@ static int __must_check init_vtd_hw(bool resume) =20 /* * Enable queue invalidation - */ =20 + */ for_each_drhd_unit ( drhd ) { iommu =3D drhd->iommu; @@ -2539,7 +2029,7 @@ static int __must_check init_vtd_hw(bool resume) =20 /* * Enable interrupt remapping - */ =20 + */ if ( iommu_intremap !=3D iommu_intremap_off ) { int apic; @@ -2594,34 +2084,53 @@ static int __must_check init_vtd_hw(bool resume) return iommu_flush_all(); } =20 -static void __hwdom_init setup_hwdom_rmrr(struct domain *d) +static struct iommu_state { + uint32_t fectl; +} *__read_mostly iommu_state; + +static void arch_iommu_dump_domain_contexts(struct domain *d) { - struct acpi_rmrr_unit *rmrr; - u16 bdf; - int ret, i; + unsigned int i, iommu_no; + struct pci_dev *pdev; + struct iommu_context *ctx; + struct domain_iommu *hd =3D dom_iommu(d); =20 - pcidevs_lock(); - for_each_rmrr_device ( rmrr, bdf, i ) + printk("d%hu contexts\n", d->domain_id); + + for (i =3D 0; i < (1 + hd->other_contexts.count); ++i) { - /* - * Here means we're add a device to the hardware domain. - * Since RMRRs are always reserved in the e820 map for the hardware - * domain, there shouldn't be a conflict. So its always safe and - * strict to set 0. 
- */ - ret =3D iommu_identity_mapping(d, p2m_access_rw, rmrr->base_addres= s, - rmrr->end_address, 0); - if ( ret ) - dprintk(XENLOG_ERR VTDPREFIX, - "IOMMU: mapping reserved region failed\n"); + if ( (ctx =3D iommu_get_context(d, i)) ) + { + printk(" Context %d (%"PRIx64")\n", i, ctx->arch.vtd.pgd_maddr= ); + + for (iommu_no =3D 0; iommu_no < nr_iommus; iommu_no++) + printk(" IOMMU %hu (used=3D%u; did=3D%hu)\n", iommu_no, + test_bit(iommu_no, ctx->arch.vtd.iommu_bitmap), + ctx->arch.vtd.didmap[iommu_no]); + + list_for_each_entry(pdev, &ctx->devices, context_list) + { + printk(" - %pp\n", &pdev->sbdf); + } + + iommu_put_context(ctx); + } } - pcidevs_unlock(); } =20 -static struct iommu_state { - uint32_t fectl; -} *__read_mostly iommu_state; +static void arch_iommu_dump_contexts(unsigned char key) +{ + struct domain *d; =20 + for_each_domain(d) + if (is_iommu_enabled(d)) { + struct domain_iommu *hd =3D dom_iommu(d); + printk("d%hu arena page usage: %d\n", d->domain_id, + atomic_read(&hd->arch.pt_arena.used_pages)); + + arch_iommu_dump_domain_contexts(d); + } +} static int __init cf_check vtd_setup(void) { struct acpi_drhd_unit *drhd; @@ -2749,6 +2258,7 @@ static int __init cf_check vtd_setup(void) iommu_ops.page_sizes |=3D large_sizes; =20 register_keyhandler('V', vtd_dump_iommu_info, "dump iommu info", 1); + register_keyhandler('X', arch_iommu_dump_contexts, "dump iommu context= s", 1); =20 return 0; =20 @@ -2763,192 +2273,6 @@ static int __init cf_check vtd_setup(void) return ret; } =20 -static int cf_check reassign_device_ownership( - struct domain *source, - struct domain *target, - u8 devfn, struct pci_dev *pdev) -{ - int ret; - - if ( !QUARANTINE_SKIP(target, pdev->arch.vtd.pgd_maddr) ) - { - if ( !has_arch_pdevs(target) ) - vmx_pi_hooks_assign(target); - -#ifdef CONFIG_PV - /* - * Devices assigned to untrusted domains (here assumed to be any d= omU) - * can attempt to send arbitrary LAPIC/MSI messages. We are unprot= ected - * by the root complex unless interrupt remapping is enabled. - */ - if ( !iommu_intremap && !is_hardware_domain(target) && - !is_system_domain(target) ) - untrusted_msi =3D true; -#endif - - ret =3D domain_context_mapping(target, devfn, pdev); - - if ( !ret && pdev->devfn =3D=3D devfn && - !QUARANTINE_SKIP(source, pdev->arch.vtd.pgd_maddr) ) - { - const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_u= nit(pdev); - - if ( drhd ) - check_cleanup_domid_map(source, pdev, drhd->iommu); - } - } - else - { - const struct acpi_drhd_unit *drhd; - - drhd =3D domain_context_unmap(source, devfn, pdev); - ret =3D IS_ERR(drhd) ? PTR_ERR(drhd) : 0; - } - if ( ret ) - { - if ( !has_arch_pdevs(target) ) - vmx_pi_hooks_deassign(target); - return ret; - } - - if ( devfn =3D=3D pdev->devfn && pdev->domain !=3D target ) - { - write_lock(&source->pci_lock); - list_del(&pdev->domain_list); - write_unlock(&source->pci_lock); - - pdev->domain =3D target; - - write_lock(&target->pci_lock); - list_add(&pdev->domain_list, &target->pdev_list); - write_unlock(&target->pci_lock); - } - - if ( !has_arch_pdevs(source) ) - vmx_pi_hooks_deassign(source); - - /* - * If the device belongs to the hardware domain, and it has RMRR, don't - * remove it from the hardware domain, because BIOS may use RMRR at - * booting time. 
- */ - if ( !is_hardware_domain(source) ) - { - const struct acpi_rmrr_unit *rmrr; - u16 bdf; - unsigned int i; - - for_each_rmrr_device( rmrr, bdf, i ) - if ( rmrr->segment =3D=3D pdev->seg && - bdf =3D=3D PCI_BDF(pdev->bus, devfn) ) - { - /* - * Any RMRR flag is always ignored when remove a device, - * but its always safe and strict to set 0. - */ - ret =3D iommu_identity_mapping(source, p2m_access_x, - rmrr->base_address, - rmrr->end_address, 0); - if ( ret && ret !=3D -ENOENT ) - return ret; - } - } - - return 0; -} - -static int cf_check intel_iommu_assign_device( - struct domain *d, u8 devfn, struct pci_dev *pdev, u32 flag) -{ - struct domain *s =3D pdev->domain; - struct acpi_rmrr_unit *rmrr; - int ret =3D 0, i; - u16 bdf, seg; - u8 bus; - - if ( list_empty(&acpi_drhd_units) ) - return -ENODEV; - - seg =3D pdev->seg; - bus =3D pdev->bus; - /* - * In rare cases one given rmrr is shared by multiple devices but - * obviously this would put the security of a system at risk. So - * we would prevent from this sort of device assignment. But this - * can be permitted if user set - * "pci =3D [ 'sbdf, rdm_policy=3Drelaxed' ]" - * - * TODO: in the future we can introduce group device assignment - * interface to make sure devices sharing RMRR are assigned to the - * same domain together. - */ - for_each_rmrr_device( rmrr, bdf, i ) - { - if ( rmrr->segment =3D=3D seg && bdf =3D=3D PCI_BDF(bus, devfn) && - rmrr->scope.devices_cnt > 1 ) - { - bool relaxed =3D flag & XEN_DOMCTL_DEV_RDM_RELAXED; - - printk(XENLOG_GUEST "%s" VTDPREFIX - " It's %s to assign %pp" - " with shared RMRR at %"PRIx64" for %pd.\n", - relaxed ? XENLOG_WARNING : XENLOG_ERR, - relaxed ? "risky" : "disallowed", - &PCI_SBDF(seg, bus, devfn), rmrr->base_address, d); - if ( !relaxed ) - return -EPERM; - } - } - - if ( d =3D=3D dom_io ) - return reassign_device_ownership(s, d, devfn, pdev); - - /* Setup rmrr identity mapping */ - for_each_rmrr_device( rmrr, bdf, i ) - { - if ( rmrr->segment =3D=3D seg && bdf =3D=3D PCI_BDF(bus, devfn) ) - { - ret =3D iommu_identity_mapping(d, p2m_access_rw, rmrr->base_ad= dress, - rmrr->end_address, flag); - if ( ret ) - { - printk(XENLOG_G_ERR VTDPREFIX - "%pd: cannot map reserved region [%"PRIx64",%"PRIx6= 4"]: %d\n", - d, rmrr->base_address, rmrr->end_address, ret); - break; - } - } - } - - if ( !ret ) - ret =3D reassign_device_ownership(s, d, devfn, pdev); - - /* See reassign_device_ownership() for the hwdom aspect. 
*/ - if ( !ret || is_hardware_domain(d) ) - return ret; - - for_each_rmrr_device( rmrr, bdf, i ) - { - if ( rmrr->segment =3D=3D seg && bdf =3D=3D PCI_BDF(bus, devfn) ) - { - int rc =3D iommu_identity_mapping(d, p2m_access_x, - rmrr->base_address, - rmrr->end_address, 0); - - if ( rc && rc !=3D -ENOENT ) - { - printk(XENLOG_ERR VTDPREFIX - "%pd: cannot unmap reserved region [%"PRIx64",%"PRI= x64"]: %d\n", - d, rmrr->base_address, rmrr->end_address, rc); - domain_crash(d); - break; - } - } - } - - return ret; -} - static int cf_check intel_iommu_group_id(u16 seg, u8 bus, u8 devfn) { u8 secbus; @@ -3073,6 +2397,11 @@ static void vtd_dump_page_table_level(paddr_t pt_mad= dr, int level, paddr_t gpa, if ( level < 1 ) return; =20 + if (pt_maddr =3D=3D 0) { + printk(" (empty)\n"); + return; + } + pt_vaddr =3D map_vtd_domain_page(pt_maddr); =20 next_level =3D level - 1; @@ -3103,158 +2432,374 @@ static void vtd_dump_page_table_level(paddr_t pt_= maddr, int level, paddr_t gpa, =20 static void cf_check vtd_dump_page_tables(struct domain *d) { - const struct domain_iommu *hd =3D dom_iommu(d); + struct domain_iommu *hd =3D dom_iommu(d); + unsigned int i; =20 - printk(VTDPREFIX" %pd table has %d levels\n", d, + printk(VTDPREFIX " %pd table has %d levels\n", d, agaw_to_level(hd->arch.vtd.agaw)); - vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr, - agaw_to_level(hd->arch.vtd.agaw), 0, 0); + + for (i =3D 1; i < (1 + hd->other_contexts.count); ++i) + { + struct iommu_context *ctx =3D iommu_get_context(d, i); + + printk(VTDPREFIX " %pd context %d: %s\n", d, i, + ctx ? "allocated" : "non-allocated"); + + if (ctx) + { + vtd_dump_page_table_level(ctx->arch.vtd.pgd_maddr, + agaw_to_level(hd->arch.vtd.agaw), 0,= 0); + iommu_put_context(ctx); + } + } } =20 -static int fill_qpt(struct dma_pte *this, unsigned int level, - struct page_info *pgs[6]) +static int intel_iommu_context_init(struct domain *d, struct iommu_context= *ctx, u32 flags) { - struct domain_iommu *hd =3D dom_iommu(dom_io); - unsigned int i; - int rc =3D 0; + struct acpi_drhd_unit *drhd; + + ctx->arch.vtd.didmap =3D xzalloc_array(u16, nr_iommus); =20 - for ( i =3D 0; !rc && i < PTE_NUM; ++i ) + if ( !ctx->arch.vtd.didmap ) + return -ENOMEM; + + ctx->arch.vtd.iommu_bitmap =3D xzalloc_array(unsigned long, + BITS_TO_LONGS(nr_iommus)); + if ( !ctx->arch.vtd.iommu_bitmap ) + return -ENOMEM; + + ctx->arch.vtd.superpage_progress =3D 0; + + if ( flags & IOMMU_CONTEXT_INIT_default ) { - struct dma_pte *pte =3D &this[i], *next; + ctx->arch.vtd.pgd_maddr =3D 0; =20 - if ( !dma_pte_present(*pte) ) + /* + * Context is considered "opaque" (non-managed) in these cases : + * - HAP is enabled, in this case, the pagetable is not managed b= y the + * IOMMU code, thus opaque + * - IOMMU is in passthrough which means that there is no actual = pagetable + * + * If no-dma mode is specified, it's always non-opaque as the page= table is + * always managed regardless of the rest. + */ + ctx->arch.hap_context =3D !iommu_hwdom_no_dma && (iommu_use_hap_pt= (d) || iommu_hwdom_passthrough); + + ctx->opaque =3D ctx->arch.hap_context; + + /* Populate context DID map using domain id. */ + for_each_drhd_unit(drhd) { - if ( !pgs[level] ) - { - /* - * The pgtable allocator is fine for the leaf page, as wel= l as - * page table pages, and the resulting allocations are alw= ays - * zeroed. 
- */ - pgs[level] =3D iommu_alloc_pgtable(hd, 0); - if ( !pgs[level] ) - { - rc =3D -ENOMEM; - break; - } - - if ( level ) - { - next =3D map_vtd_domain_page(page_to_maddr(pgs[level])= ); - rc =3D fill_qpt(next, level - 1, pgs); - unmap_vtd_domain_page(next); - } - } + ctx->arch.vtd.didmap[drhd->iommu->index] =3D + convert_domid(drhd->iommu, d->domain_id); + } + } + else + { + /* Populate context DID map using pseudo DIDs */ + for_each_drhd_unit(drhd) + { + ctx->arch.vtd.didmap[drhd->iommu->index] =3D + iommu_alloc_domid(drhd->iommu->pseudo_domid_map); + } + } =20 - dma_set_pte_addr(*pte, page_to_maddr(pgs[level])); - dma_set_pte_readable(*pte); - dma_set_pte_writable(*pte); + if ( !ctx->opaque ) + /* Create initial context page */ + addr_to_dma_page_maddr(d, ctx, 0, min_pt_levels, NULL, true); + + return arch_iommu_context_init(d, ctx, flags); +} + +static int intel_iommu_cleanup_pte(uint64_t pte_maddr, bool preempt) +{ + size_t i; + struct dma_pte *pte =3D map_vtd_domain_page(pte_maddr); + + for (i =3D 0; i < (1 << PAGETABLE_ORDER); ++i) + if ( dma_pte_present(pte[i]) ) + { + /* Remove the reference of the target mapping (if needed) */ + mfn_t mfn =3D maddr_to_mfn(dma_pte_addr(pte[i])); + + if ( mfn_valid(mfn) ) + put_page(mfn_to_page(mfn)); + + if ( preempt ) + dma_clear_pte(pte[i]); } - else if ( level && !dma_pte_superpage(*pte) ) + + unmap_vtd_domain_page(pte); + + return 0; +} + +/** + * Cleanup logic : + * Walk through the entire page table, progressively removing mappings if = preempt. + * + * Return values : + * - Report preemption with -ERESTART. + * - Report empty pte/pgd with 0. + * + * When preempted during superpage operation, store state in vtd.superpage= _progress. + */ + +static int intel_iommu_cleanup_superpage(struct iommu_context *ctx, + unsigned int page_order, uint64_= t pte_maddr, + bool preempt) +{ + size_t i =3D 0, page_count =3D 1 << page_order; + struct page_info *page =3D maddr_to_page(pte_maddr); + + if ( preempt ) + i =3D ctx->arch.vtd.superpage_progress; + + for (; i < page_count; page++) + { + put_page(page); + + if ( preempt && (i & 0xff) && general_preempt_check() ) { - next =3D map_vtd_domain_page(dma_pte_addr(*pte)); - rc =3D fill_qpt(next, level - 1, pgs); - unmap_vtd_domain_page(next); + ctx->arch.vtd.superpage_progress =3D i + 1; + return -ERESTART; } } =20 - return rc; + if ( preempt ) + ctx->arch.vtd.superpage_progress =3D 0; + + return 0; } =20 -static int cf_check intel_iommu_quarantine_init(struct pci_dev *pdev, - bool scratch_page) +static int intel_iommu_cleanup_mappings(struct iommu_context *ctx, + unsigned int nr_pt_levels, uint64= _t pgd_maddr, + bool preempt) { - struct domain_iommu *hd =3D dom_iommu(dom_io); - struct page_info *pg; - unsigned int agaw =3D hd->arch.vtd.agaw; - unsigned int level =3D agaw_to_level(agaw); - const struct acpi_drhd_unit *drhd; - const struct acpi_rmrr_unit *rmrr; - unsigned int i, bdf; - bool rmrr_found =3D false; + size_t i; int rc; + struct dma_pte *pgd; =20 - ASSERT(pcidevs_locked()); - ASSERT(!hd->arch.vtd.pgd_maddr); - ASSERT(page_list_empty(&hd->arch.pgtables.list)); + if ( ctx->opaque ) + /* don't touch opaque contexts */ + return 0; + + pgd =3D map_vtd_domain_page(pgd_maddr); =20 - if ( pdev->arch.vtd.pgd_maddr ) + for (i =3D 0; i < (1 << PAGETABLE_ORDER); ++i) { - clear_domain_page(pdev->arch.leaf_mfn); - return 0; + if ( dma_pte_present(pgd[i]) ) + { + uint64_t pte_maddr =3D dma_pte_addr(pgd[i]); + + if ( dma_pte_superpage(pgd[i]) ) + rc =3D intel_iommu_cleanup_superpage(ctx, nr_pt_levels * S= 
UPERPAGE_ORDER, + pte_maddr, preempt); + else if ( nr_pt_levels > 2 ) + /* Next level is not PTE */ + rc =3D intel_iommu_cleanup_mappings(ctx, nr_pt_levels - 1, + pte_maddr, preempt); + else + rc =3D intel_iommu_cleanup_pte(pte_maddr, preempt); + + if ( preempt && !rc ) + /* Fold pgd (no more mappings in it) */ + dma_clear_pte(pgd[i]); + else if ( preempt && (rc =3D=3D -ERESTART || general_preempt_c= heck()) ) + { + unmap_vtd_domain_page(pgd); + return -ERESTART; + } + } } =20 - drhd =3D acpi_find_matched_drhd_unit(pdev); - if ( !drhd ) - return -ENODEV; + unmap_vtd_domain_page(pgd); =20 - pg =3D iommu_alloc_pgtable(hd, 0); - if ( !pg ) - return -ENOMEM; + return 0; +} =20 - rc =3D context_set_domain_id(NULL, pdev->arch.pseudo_domid, drhd->iomm= u); +static int intel_iommu_context_teardown(struct domain *d, struct iommu_con= text *ctx, u32 flags) +{ + struct acpi_drhd_unit *drhd; + pcidevs_lock(); =20 - /* Transiently install the root into DomIO, for iommu_identity_mapping= (). */ - hd->arch.vtd.pgd_maddr =3D page_to_maddr(pg); + // Cleanup mappings + if ( intel_iommu_cleanup_mappings(ctx, agaw_to_level(d->iommu.arch.vtd= .agaw), + ctx->arch.vtd.pgd_maddr, + flags & IOMMUF_preempt) < 0 ) + { + pcidevs_unlock(); + return -ERESTART; + } =20 - for_each_rmrr_device ( rmrr, bdf, i ) + if (ctx->arch.vtd.didmap) { - if ( rc ) - break; + for_each_drhd_unit(drhd) + { + iommu_free_domid(ctx->arch.vtd.didmap[drhd->iommu->index], + drhd->iommu->pseudo_domid_map); + } + + xfree(ctx->arch.vtd.didmap); + } =20 - if ( rmrr->segment =3D=3D pdev->seg && bdf =3D=3D pdev->sbdf.bdf ) + pcidevs_unlock(); + return arch_iommu_context_teardown(d, ctx, flags); +} + +static int intel_iommu_dev_rmrr(struct domain *d, struct pci_dev *pdev, + struct iommu_context *ctx, bool unmap) +{ + struct acpi_rmrr_unit *rmrr; + u16 bdf; + int ret, i; + + for_each_rmrr_device(rmrr, bdf, i) + { + if ( PCI_SBDF(rmrr->segment, bdf).sbdf =3D=3D pdev->sbdf.sbdf ) { - rmrr_found =3D true; - - rc =3D iommu_identity_mapping(dom_io, p2m_access_rw, - rmrr->base_address, rmrr->end_addr= ess, - 0); - if ( rc ) - printk(XENLOG_ERR VTDPREFIX - "%pp: RMRR quarantine mapping failed\n", - &pdev->sbdf); + ret =3D iommu_identity_mapping(d, ctx, + unmap ? 
p2m_access_x : p2m_access= _rw, + rmrr->base_address, rmrr->end_add= ress, + 0); + + if ( ret < 0 ) + return ret; } } =20 - iommu_identity_map_teardown(dom_io); - hd->arch.vtd.pgd_maddr =3D 0; - pdev->arch.vtd.pgd_maddr =3D page_to_maddr(pg); + return 0; +} =20 - if ( !rc && scratch_page ) +static int intel_iommu_attach(struct domain *d, struct pci_dev *pdev, + struct iommu_context *ctx) +{ + int ret; + const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); + + if (!pdev || !drhd) + return -EINVAL; + + if ( !ctx->opaque || ctx->arch.hap_context ) { - struct dma_pte *root; - struct page_info *pgs[6] =3D {}; + ret =3D intel_iommu_dev_rmrr(d, pdev, ctx, false); + + if ( ret ) + return ret; + } + + ret =3D apply_context(d, ctx, pdev, pdev->devfn); + + if ( ret ) + return ret; + + pci_vtd_quirk(pdev); + + return ret; +} + +static int intel_iommu_detach(struct domain *d, struct pci_dev *pdev, + struct iommu_context *prev_ctx) +{ + int ret; + const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); + + if (!pdev || !drhd) + return -EINVAL; + + ret =3D unapply_context_single(d, drhd->iommu, pdev->bus, pdev->devfn); + + if ( ret ) + return ret; + + if ( !prev_ctx->opaque || prev_ctx->arch.hap_context ) + WARN_ON(intel_iommu_dev_rmrr(d, pdev, prev_ctx, true)); + + check_cleanup_domid_map(d, prev_ctx, NULL, drhd->iommu); + + return ret; +} =20 - root =3D map_vtd_domain_page(pdev->arch.vtd.pgd_maddr); - rc =3D fill_qpt(root, level - 1, pgs); - unmap_vtd_domain_page(root); +static int intel_iommu_reattach(struct domain *d, struct pci_dev *pdev, + struct iommu_context *prev_ctx, + struct iommu_context *ctx) +{ + int ret; + const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); + + if (!pdev || !drhd) + return -EINVAL; =20 - pdev->arch.leaf_mfn =3D page_to_mfn(pgs[0]); + if ( !ctx->opaque || ctx->arch.hap_context ) + { + ret =3D intel_iommu_dev_rmrr(d, pdev, ctx, false); + + if ( ret ) + return ret; } =20 - page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list); + ret =3D apply_context_single(d, ctx, drhd->iommu, pdev->bus, pdev->dev= fn); + + if ( ret ) + return ret; =20 - if ( rc || (!scratch_page && !rmrr_found) ) - quarantine_teardown(pdev, drhd); + if ( !prev_ctx->opaque || prev_ctx->arch.hap_context ) + WARN_ON(intel_iommu_dev_rmrr(d, pdev, prev_ctx, true)); =20 - return rc; + /* We are overwriting an entry, cleanup previous domid if needed. 
*/ + check_cleanup_domid_map(d, prev_ctx, pdev, drhd->iommu); + + pci_vtd_quirk(pdev); + + return ret; +} + +static int intel_iommu_add_devfn(struct domain *d, struct pci_dev *pdev, + u16 devfn, struct iommu_context *ctx) +{ + const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); + + if (!pdev || !drhd) + return -EINVAL; + + return apply_context(d, ctx, pdev, devfn); +} + +static int intel_iommu_remove_devfn(struct domain *d, struct pci_dev *pdev, + u16 devfn) +{ + const struct acpi_drhd_unit *drhd =3D acpi_find_matched_drhd_unit(pdev= ); + + if (!pdev || !drhd) + return -EINVAL; + + return unapply_context_single(d, drhd->iommu, pdev->bus, devfn); +} + +static uint64_t intel_iommu_get_max_iova(struct domain *d) +{ + struct domain_iommu *hd =3D dom_iommu(d); + + return (1LLU << agaw_to_width(hd->arch.vtd.agaw)) - 1; } =20 static const struct iommu_ops __initconst_cf_clobber vtd_ops =3D { .page_sizes =3D PAGE_SIZE_4K, .init =3D intel_iommu_domain_init, .hwdom_init =3D intel_iommu_hwdom_init, - .quarantine_init =3D intel_iommu_quarantine_init, - .add_device =3D intel_iommu_add_device, + .context_init =3D intel_iommu_context_init, + .context_teardown =3D intel_iommu_context_teardown, + .attach =3D intel_iommu_attach, + .detach =3D intel_iommu_detach, + .reattach =3D intel_iommu_reattach, + .add_devfn =3D intel_iommu_add_devfn, + .remove_devfn =3D intel_iommu_remove_devfn, .enable_device =3D intel_iommu_enable_device, - .remove_device =3D intel_iommu_remove_device, - .assign_device =3D intel_iommu_assign_device, .teardown =3D iommu_domain_teardown, .clear_root_pgtable =3D iommu_clear_root_pgtable, .map_page =3D intel_iommu_map_page, .unmap_page =3D intel_iommu_unmap_page, .lookup_page =3D intel_iommu_lookup_page, - .reassign_device =3D reassign_device_ownership, .get_device_group_id =3D intel_iommu_group_id, .enable_x2apic =3D intel_iommu_enable_eim, .disable_x2apic =3D intel_iommu_disable_eim, @@ -3269,6 +2814,7 @@ static const struct iommu_ops __initconst_cf_clobber = vtd_ops =3D { .iotlb_flush =3D iommu_flush_iotlb, .get_reserved_device_memory =3D intel_iommu_get_reserved_device_memory, .dump_page_tables =3D vtd_dump_page_tables, + .get_max_iova =3D intel_iommu_get_max_iova, }; =20 const struct iommu_init_ops __initconstrel intel_iommu_init_ops =3D { diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough= /vtd/quirks.c index 950dcd56ef..568a1a06d5 100644 --- a/xen/drivers/passthrough/vtd/quirks.c +++ b/xen/drivers/passthrough/vtd/quirks.c @@ -408,9 +408,8 @@ void __init platform_quirks_init(void) =20 static int __must_check map_me_phantom_function(struct domain *domain, unsigned int dev, - domid_t domid, - paddr_t pgd_maddr, - unsigned int mode) + unsigned int mode, + struct iommu_context *ctx) { struct acpi_drhd_unit *drhd; struct pci_dev *pdev; @@ -422,18 +421,17 @@ static int __must_check map_me_phantom_function(struc= t domain *domain, =20 /* map or unmap ME phantom function */ if ( !(mode & UNMAP_ME_PHANTOM_FUNC) ) - rc =3D domain_context_mapping_one(domain, drhd->iommu, 0, - PCI_DEVFN(dev, 7), NULL, - domid, pgd_maddr, mode); + rc =3D apply_context_single(domain, ctx, drhd->iommu, 0, + PCI_DEVFN(dev, 7)); else - rc =3D domain_context_unmap_one(domain, drhd->iommu, 0, - PCI_DEVFN(dev, 7)); + rc =3D unapply_context_single(domain, drhd->iommu, 0, PCI_DEVFN(de= v, 7)); =20 return rc; } =20 int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn, - domid_t domid, paddr_t pgd_maddr, unsigned int mode) + domid_t domid, unsigned int mode, + 
struct iommu_context *ctx) { u32 id; int rc =3D 0; @@ -457,7 +455,7 @@ int me_wifi_quirk(struct domain *domain, uint8_t bus, u= int8_t devfn, case 0x423b8086: case 0x423c8086: case 0x423d8086: - rc =3D map_me_phantom_function(domain, 3, domid, pgd_maddr= , mode); + rc =3D map_me_phantom_function(domain, 3, mode, ctx); break; default: break; @@ -483,7 +481,7 @@ int me_wifi_quirk(struct domain *domain, uint8_t bus, u= int8_t devfn, case 0x42388086: /* Puma Peak */ case 0x422b8086: case 0x422c8086: - rc =3D map_me_phantom_function(domain, 22, domid, pgd_madd= r, mode); + rc =3D map_me_phantom_function(domain, 22, mode, ctx); break; default: break; diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough= /x86/Makefile index 75b2885336..1614f3d284 100644 --- a/xen/drivers/passthrough/x86/Makefile +++ b/xen/drivers/passthrough/x86/Makefile @@ -1,2 +1,3 @@ obj-y +=3D iommu.o +obj-y +=3D arena.o obj-$(CONFIG_HVM) +=3D hvm.o diff --git a/xen/drivers/passthrough/x86/arena.c b/xen/drivers/passthrough/= x86/arena.c new file mode 100644 index 0000000000..984bc4d643 --- /dev/null +++ b/xen/drivers/passthrough/x86/arena.c @@ -0,0 +1,157 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/** + * Simple arena-based page allocator. + * + * Allocate a large block using alloc_domheam_pages and allocate single pa= ges + * using iommu_arena_allocate_page and iommu_arena_free_page functions. + * + * Concurrent {allocate/free}_page is thread-safe + * iommu_arena_teardown during {allocate/free}_page is not thread-safe. + * + * Written by Teddy Astie + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +/* Maximum of scan tries if the bit found not available */ +#define ARENA_TSL_MAX_TRIES 5 + +int iommu_arena_initialize(struct iommu_arena *arena, struct domain *d, + unsigned int order, unsigned int memflags) +{ + struct page_info *page; + + /* TODO: Maybe allocate differently ? */ + page =3D alloc_domheap_pages(d, order, memflags); + + if ( !page ) + return -ENOMEM; + + arena->map =3D xzalloc_array(unsigned long, BITS_TO_LONGS(1LLU << orde= r)); + arena->order =3D order; + arena->region_start =3D page_to_mfn(page); + + _atomic_set(&arena->used_pages, 0); + bitmap_zero(arena->map, iommu_arena_size(arena)); + + printk(XENLOG_DEBUG "IOMMU: Allocated arena (%llu pages, start=3D%"PRI= _mfn")\n", + iommu_arena_size(arena), mfn_x(arena->region_start)); + return 0; +} + +int iommu_arena_teardown(struct iommu_arena *arena, bool check) +{ + BUG_ON(mfn_x(arena->region_start) =3D=3D 0); + + /* Check for allocations if check is specified */ + if ( check && (atomic_read(&arena->used_pages) > 0) ) + return -EBUSY; + + free_domheap_pages(mfn_to_page(arena->region_start), arena->order); + + arena->region_start =3D _mfn(0); + _atomic_set(&arena->used_pages, 0); + xfree(arena->map); + arena->map =3D NULL; + + return 0; +} + +struct page_info *iommu_arena_allocate_page(struct iommu_arena *arena) +{ + unsigned int index; + unsigned int tsl_tries =3D 0; + + BUG_ON(mfn_x(arena->region_start) =3D=3D 0); + + if ( atomic_read(&arena->used_pages) =3D=3D iommu_arena_size(arena) ) + /* All pages used */ + return NULL; + + do + { + index =3D find_first_zero_bit(arena->map, iommu_arena_size(arena)); + + if ( index >=3D iommu_arena_size(arena) ) + /* No more free pages */ + return NULL; + + /* + * While there shouldn't be a lot of retries in practice, this loop + * *may* run indefinetly if the found bit is never free due to bei= ng + * overwriten by another CPU core right after. 
Add a safeguard for + * such very rare cases. + */ + tsl_tries++; + + if ( unlikely(tsl_tries =3D=3D ARENA_TSL_MAX_TRIES) ) + { + printk(XENLOG_ERR "ARENA: Too many TSL retries !"); + return NULL; + } + + /* Make sure that the bit we found is still free */ + } while ( test_and_set_bit(index, arena->map) ); + + atomic_inc(&arena->used_pages); + + return mfn_to_page(mfn_add(arena->region_start, index)); +} + +bool iommu_arena_free_page(struct iommu_arena *arena, struct page_info *pa= ge) +{ + unsigned long index; + mfn_t frame; + + if ( !page ) + { + printk(XENLOG_WARNING "IOMMU: Trying to free NULL page"); + WARN(); + return false; + } + + frame =3D page_to_mfn(page); + + /* Check if page belongs to our arena */ + if ( (mfn_x(frame) < mfn_x(arena->region_start)) + || (mfn_x(frame) >=3D (mfn_x(arena->region_start) + iommu_arena_si= ze(arena))) ) + { + printk(XENLOG_WARNING + "IOMMU: Trying to free outside arena region [mfn=3D%"PRI_mf= n"]", + mfn_x(frame)); + WARN(); + return false; + } + + index =3D mfn_x(frame) - mfn_x(arena->region_start); + + /* Sanity check in case of underflow. */ + ASSERT(index < iommu_arena_size(arena)); + + if ( !test_and_clear_bit(index, arena->map) ) + { + /* + * Bit was free during our arena_free_page, which means that + * either this page was never allocated, or we are in a double-free + * situation. + */ + printk(XENLOG_WARNING + "IOMMU: Freeing non-allocated region (double-free?) [mfn=3D= %"PRI_mfn"]", + mfn_x(frame)); + WARN(); + return false; + } + + atomic_dec(&arena->used_pages); + + return true; +} \ No newline at end of file diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/= x86/iommu.c index 8b1e0596b8..849f57c1ce 100644 --- a/xen/drivers/passthrough/x86/iommu.c +++ b/xen/drivers/passthrough/x86/iommu.c @@ -12,6 +12,12 @@ * this program; If not, see . 
*/ =20 +#include +#include +#include +#include +#include +#include #include #include #include @@ -28,6 +34,10 @@ #include #include #include +#include +#include +#include +#include =20 const struct iommu_init_ops *__initdata iommu_init_ops; struct iommu_ops __ro_after_init iommu_ops; @@ -183,19 +193,66 @@ void __hwdom_init arch_iommu_check_autotranslated_hwd= om(struct domain *d) panic("PVH hardware domain iommu must be set in 'strict' mode\n"); } =20 -int arch_iommu_domain_init(struct domain *d) +int arch_iommu_context_init(struct domain *d, struct iommu_context *ctx, u= 32 flags) +{ + INIT_PAGE_LIST_HEAD(&ctx->arch.pgtables); + INIT_PAGE_LIST_HEAD(&ctx->arch.free_queue); + INIT_LIST_HEAD(&ctx->arch.identity_maps); + + return 0; +} + +int arch_iommu_context_teardown(struct domain *d, struct iommu_context *ct= x, u32 flags) +{ + /* Cleanup all page tables */ + while ( iommu_free_pgtables(d, ctx) =3D=3D -ERESTART ) + /* nothing */; + + return 0; +} + +int arch_iommu_flush_free_queue(struct domain *d, struct iommu_context *ct= x) +{ + struct page_info *pg; + struct domain_iommu *hd =3D dom_iommu(d); + + while ( (pg =3D page_list_remove_head(&ctx->arch.free_queue)) ) + iommu_arena_free_page(&hd->arch.pt_arena, pg); + + return 0; +} + +int arch_iommu_pviommu_init(struct domain *d, uint16_t nb_ctx, uint32_t ar= ena_order) +{ + struct domain_iommu *hd =3D dom_iommu(d); + + if ( arena_order =3D=3D 0 ) + return 0; + + return iommu_arena_initialize(&hd->arch.pt_arena, NULL, arena_order, 0= ); +} + +int arch_iommu_pviommu_teardown(struct domain *d) { struct domain_iommu *hd =3D dom_iommu(d); =20 - spin_lock_init(&hd->arch.mapping_lock); + if ( iommu_arena_teardown(&hd->arch.pt_arena, true) ) + { + printk(XENLOG_WARNING "IOMMU Arena used while being destroyed\n"); + WARN(); =20 - INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list); - spin_lock_init(&hd->arch.pgtables.lock); - INIT_LIST_HEAD(&hd->arch.identity_maps); + /* Teardown anyway */ + iommu_arena_teardown(&hd->arch.pt_arena, false); + } =20 return 0; } =20 +int arch_iommu_domain_init(struct domain *d) +{ + return 0; +} + void arch_iommu_domain_destroy(struct domain *d) { /* @@ -203,8 +260,9 @@ void arch_iommu_domain_destroy(struct domain *d) * domain is destroyed. Note that arch_iommu_domain_destroy() is * called unconditionally, so pgtables may be uninitialized. 
*/ - ASSERT(!dom_iommu(d)->platform_ops || - page_list_empty(&dom_iommu(d)->arch.pgtables.list)); + struct domain_iommu *hd =3D dom_iommu(d); + + ASSERT(!hd->platform_ops); } =20 struct identity_map { @@ -214,32 +272,104 @@ struct identity_map { unsigned int count; }; =20 -int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma, - paddr_t base, paddr_t end, +static int unmap_identity_region(struct domain *d, struct iommu_context *c= tx, + unsigned int base_pfn, unsigned int end_p= fn) +{ + int ret =3D 0; + + if ( ctx->arch.hap_context ) + { + this_cpu(iommu_dont_flush_iotlb) =3D true; + while ( base_pfn < end_pfn ) + { + if ( p2m_remove_identity_entry(d, base_pfn) ) + ret =3D -ENXIO; + + base_pfn++; + } + this_cpu(iommu_dont_flush_iotlb) =3D false; + } + else + { + size_t page_count =3D end_pfn - base_pfn + 1; + unsigned int flush_flags; + + ret =3D iommu_unmap(d, _dfn(base_pfn), page_count, 0, &flush_flags, + ctx->id); + + if ( ret ) + return ret; + + ret =3D iommu_iotlb_flush(d, _dfn(base_pfn), page_count, + flush_flags, ctx->id); + } + + return ret; +} + +static int map_identity_region(struct domain *d, struct iommu_context *ctx, + unsigned int base_pfn, unsigned int end_pfn, + p2m_access_t p2ma, unsigned int flag) +{ + int ret =3D 0; + unsigned int flush_flags =3D 0; + size_t page_count =3D end_pfn - base_pfn + 1; + + if ( ctx->arch.hap_context ) + { + this_cpu(iommu_dont_flush_iotlb) =3D true; + while ( base_pfn < end_pfn ) + { + ret =3D p2m_add_identity_entry(d, base_pfn, p2ma, flag); + + if ( ret ) + { + this_cpu(iommu_dont_flush_iotlb) =3D false; + return ret; + } + + base_pfn++; + } + this_cpu(iommu_dont_flush_iotlb) =3D false; + } + else + { + ret =3D iommu_map(d, _dfn(base_pfn), _mfn(base_pfn), page_count, + p2m_access_to_iommu_flags(p2ma), &flush_flags, + ctx->id); + + if ( ret ) + return ret; + } + + ret =3D iommu_iotlb_flush(d, _dfn(base_pfn), page_count, flush_flags, + ctx->id); + + return ret; +} + +/* p2m_access_x removes the mapping */ +int iommu_identity_mapping(struct domain *d, struct iommu_context *ctx, + p2m_access_t p2ma, paddr_t base, paddr_t end, unsigned int flag) { unsigned long base_pfn =3D base >> PAGE_SHIFT_4K; unsigned long end_pfn =3D PAGE_ALIGN_4K(end) >> PAGE_SHIFT_4K; struct identity_map *map; - struct domain_iommu *hd =3D dom_iommu(d); + int ret =3D 0; =20 ASSERT(pcidevs_locked()); ASSERT(base < end); =20 - /* - * No need to acquire hd->arch.mapping_lock: Both insertion and removal - * get done while holding pcidevs_lock. 
- */ - list_for_each_entry( map, &hd->arch.identity_maps, list ) + list_for_each_entry( map, &ctx->arch.identity_maps, list ) { if ( map->base =3D=3D base && map->end =3D=3D end ) { - int ret =3D 0; - if ( p2ma !=3D p2m_access_x ) { if ( map->access !=3D p2ma ) return -EADDRINUSE; + ++map->count; return 0; } @@ -247,12 +377,9 @@ int iommu_identity_mapping(struct domain *d, p2m_acces= s_t p2ma, if ( --map->count ) return 0; =20 - while ( base_pfn < end_pfn ) - { - if ( clear_identity_p2m_entry(d, base_pfn) ) - ret =3D -ENXIO; - base_pfn++; - } + printk("Unmapping [%"PRI_mfn"x:%"PRI_mfn"] for d%dc%d\n", base= _pfn, end_pfn, + d->domain_id, ctx->id); + ret =3D unmap_identity_region(d, ctx, base_pfn, end_pfn); =20 list_del(&map->list); xfree(map); @@ -271,47 +398,43 @@ int iommu_identity_mapping(struct domain *d, p2m_acce= ss_t p2ma, if ( !map ) return -ENOMEM; =20 - map->base =3D base; - map->end =3D end; - map->access =3D p2ma; - map->count =3D 1; - - /* - * Insert into list ahead of mapping, so the range can be found when - * trying to clean up. - */ - list_add_tail(&map->list, &hd->arch.identity_maps); + printk("Mapping [%"PRI_mfn"x:%"PRI_mfn"] for d%dc%d\n", base_pfn, end_= pfn, + d->domain_id, ctx->id); + ret =3D map_identity_region(d, ctx, base_pfn, end_pfn, p2ma, flag); =20 - for ( ; base_pfn < end_pfn; ++base_pfn ) + if ( ret ) { - int err =3D set_identity_p2m_entry(d, base_pfn, p2ma, flag); - - if ( !err ) - continue; - - if ( (map->base >> PAGE_SHIFT_4K) =3D=3D base_pfn ) - { - list_del(&map->list); - xfree(map); - } - return err; + xfree(map); + return ret; } =20 return 0; } =20 -void iommu_identity_map_teardown(struct domain *d) +void iommu_identity_map_teardown(struct domain *d, struct iommu_context *c= tx) { - struct domain_iommu *hd =3D dom_iommu(d); struct identity_map *map, *tmp; =20 - list_for_each_entry_safe ( map, tmp, &hd->arch.identity_maps, list ) + list_for_each_entry_safe ( map, tmp, &ctx->arch.identity_maps, list ) { list_del(&map->list); xfree(map); } } =20 +bool iommu_identity_map_check(struct domain *d, struct iommu_context *ctx, + mfn_t mfn) +{ + struct identity_map *map; + uint64_t addr =3D pfn_to_paddr(mfn_x(mfn)); + + list_for_each_entry ( map, &ctx->arch.identity_maps, list ) + if (addr >=3D map->base && addr < map->end) + return true; + + return false; +} + static int __hwdom_init cf_check map_subtract(unsigned long s, unsigned lo= ng e, void *data) { @@ -369,7 +492,7 @@ static int __hwdom_init cf_check identity_map(unsigned = long s, unsigned long e, if ( iomem_access_permitted(d, s, s) ) { rc =3D iommu_map(d, _dfn(s), _mfn(s), 1, perms, - &info->flush_flags); + &info->flush_flags, 0); if ( rc < 0 ) break; /* Must map a frame at least, which is what we request for= . 
*/ @@ -379,7 +502,7 @@ static int __hwdom_init cf_check identity_map(unsigned = long s, unsigned long e, s++; } while ( (rc =3D iommu_map(d, _dfn(s), _mfn(s), e - s + 1, - perms, &info->flush_flags)) > 0 ) + perms, &info->flush_flags, 0)) > 0 ) { s +=3D rc; process_pending_softirqs(); @@ -408,6 +531,10 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain = *d) if ( iommu_hwdom_reserved =3D=3D -1 ) iommu_hwdom_reserved =3D 1; =20 + if ( iommu_hwdom_no_dma ) + /* Skip special mappings with no-dma mode */ + return; + if ( iommu_hwdom_inclusive ) { printk(XENLOG_WARNING @@ -545,7 +672,6 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *= d) =20 void arch_pci_init_pdev(struct pci_dev *pdev) { - pdev->arch.pseudo_domid =3D DOMID_INVALID; } =20 unsigned long *__init iommu_init_domid(domid_t reserve) @@ -576,8 +702,6 @@ domid_t iommu_alloc_domid(unsigned long *map) static unsigned int start; unsigned int idx =3D find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, = start); =20 - ASSERT(pcidevs_locked()); - if ( idx >=3D UINT16_MAX - DOMID_MASK ) idx =3D find_first_zero_bit(map, UINT16_MAX - DOMID_MASK); if ( idx >=3D UINT16_MAX - DOMID_MASK ) @@ -603,7 +727,7 @@ void iommu_free_domid(domid_t domid, unsigned long *map) BUG(); } =20 -int iommu_free_pgtables(struct domain *d) +int iommu_free_pgtables(struct domain *d, struct iommu_context *ctx) { struct domain_iommu *hd =3D dom_iommu(d); struct page_info *pg; @@ -612,18 +736,18 @@ int iommu_free_pgtables(struct domain *d) if ( !is_iommu_enabled(d) ) return 0; =20 - /* After this barrier, no new IOMMU mappings can be inserted. */ - spin_barrier(&hd->arch.mapping_lock); - /* * Pages will be moved to the free list below. So we want to * clear the root page-table to avoid any potential use after-free. */ - iommu_vcall(hd->platform_ops, clear_root_pgtable, d); + iommu_vcall(hd->platform_ops, clear_root_pgtable, d, ctx); =20 - while ( (pg =3D page_list_remove_head(&hd->arch.pgtables.list)) ) + while ( (pg =3D page_list_remove_head(&ctx->arch.pgtables)) ) { - free_domheap_page(pg); + if (ctx->id =3D=3D 0) + free_domheap_page(pg); + else + iommu_arena_free_page(&hd->arch.pt_arena, pg); =20 if ( !(++done & 0xff) && general_preempt_check() ) return -ERESTART; @@ -633,6 +757,7 @@ int iommu_free_pgtables(struct domain *d) } =20 struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd, + struct iommu_context *ctx, uint64_t contig_mask) { unsigned int memflags =3D 0; @@ -644,7 +769,11 @@ struct page_info *iommu_alloc_pgtable(struct domain_io= mmu *hd, memflags =3D MEMF_node(hd->node); #endif =20 - pg =3D alloc_domheap_page(NULL, memflags); + if (ctx->id =3D=3D 0) + pg =3D alloc_domheap_page(NULL, memflags); + else + pg =3D iommu_arena_allocate_page(&hd->arch.pt_arena); + if ( !pg ) return NULL; =20 @@ -677,9 +806,7 @@ struct page_info *iommu_alloc_pgtable(struct domain_iom= mu *hd, =20 unmap_domain_page(p); =20 - spin_lock(&hd->arch.pgtables.lock); - page_list_add(pg, &hd->arch.pgtables.list); - spin_unlock(&hd->arch.pgtables.lock); + page_list_add(pg, &ctx->arch.pgtables); =20 return pg; } @@ -718,17 +845,20 @@ static void cf_check free_queued_pgtables(void *arg) } } =20 -void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *p= g) +void iommu_queue_free_pgtable(struct iommu_context *ctx, struct page_info = *pg) { unsigned int cpu =3D smp_processor_id(); =20 - spin_lock(&hd->arch.pgtables.lock); - page_list_del(pg, &hd->arch.pgtables.list); - spin_unlock(&hd->arch.pgtables.lock); + page_list_del(pg, &ctx->arch.pgtables); =20 - 
page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
+    if ( !ctx->id )
+    {
+        page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
=20
-    tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+        tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+    }
+    else
+        page_list_add_tail(pg, &ctx->arch.free_queue);
 }
=20
 static int cf_check cpu_callback(
--=20
2.45.2



Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech

From nobody Sat Nov 23 23:11:07 2024
From: "Teddy Astie"
Subject: [XEN RFC PATCH v4 5/5] xen/public: Introduce PV-IOMMU hypercall interface
To: xen-devel@lists.xenproject.org
Cc: "Teddy Astie" , "Andrew Cooper" , "Jan Beulich" , "Julien Grall" , "Stefano Stabellini"
Message-Id: <78b44f9f800b8f786835ecebdaf2d6ce7366f3da.1730718102.git.teddy.astie@vates.tech>
Date: Mon, 04 Nov 2024 14:28:39 +0000
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Introduce a new pv interface to manage the underlying IOMMU and manage contexts and devices.
This interface allows Dom0 to create new IOMMU contexts and to add IOMMU
mappings expressed from the guest's point of view. It does not allow
creating mappings that target another domain's memory.

Signed-off-by: Teddy Astie
---
Changed in V2:
* formatting

Changed in V3:
* prevent IOMMU operations on dying contexts

Changed in V4:
* redesigned hypercall interface [1]
* added remote_cmd and init logic

[1] https://lore.kernel.org/all/fdfa32c9-c177-4d05-891a-138f9b663f19@vates.tech/
---
 xen/common/Makefile           |   1 +
 xen/common/pv-iommu.c         | 539 ++++++++++++++++++++++++++++++++++
 xen/include/hypercall-defs.c  |   6 +
 xen/include/public/pv-iommu.h | 341 +++++++++++++++++++++
 xen/include/public/xen.h      |   1 +
 5 files changed, 888 insertions(+)
 create mode 100644 xen/common/pv-iommu.c
 create mode 100644 xen/include/public/pv-iommu.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index fc52e0857d..9d642ef635 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -58,6 +58,7 @@ obj-y +=3D wait.o
 obj-bin-y +=3D warning.init.o
 obj-$(CONFIG_XENOPROF) +=3D xenoprof.o
 obj-y +=3D xmalloc_tlsf.o
+obj-y +=3D pv-iommu.o
=20
 obj-bin-$(CONFIG_X86) +=3D $(foreach n,decompress bunzip2 unxz unlzma lzo unlzo unlz4 unzstd earlycpio,$(n).init.o)
=20
diff --git a/xen/common/pv-iommu.c b/xen/common/pv-iommu.c
new file mode 100644
index 0000000000..9c7d04b4c7
--- /dev/null
+++ b/xen/common/pv-iommu.c
@@ -0,0 +1,539 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/pv_iommu.c
+ *
+ * PV-IOMMU hypercall interface.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define PVIOMMU_PREFIX "[PV-IOMMU] "
+
+static int get_paged_frame(struct domain *d, gfn_t gfn, mfn_t *mfn,
+                           struct page_info **page, bool readonly)
+{
+    int ret =3D 0;
+    p2m_type_t p2mt =3D p2m_invalid;
+
+    #ifdef CONFIG_X86
+    p2m_query_t query =3D P2M_ALLOC;
+
+    if ( !readonly )
+        query |=3D P2M_UNSHARE;
+
+    *mfn =3D get_gfn_type(d, gfn_x(gfn), &p2mt, query);
+    #else
+    *mfn =3D p2m_lookup(d, gfn, &p2mt);
+    #endif
+
+    if ( mfn_eq(*mfn, INVALID_MFN) )
+    {
+        /* No mapping ?
*/ + printk(XENLOG_G_WARNING PVIOMMU_PREFIX + "Trying to map to non-backed page frame (gfn=3D%"PRI_gfn + " p2mt=3D%d d%d)\n", gfn_x(gfn), p2mt, d->domain_id); + + ret =3D -ENOENT; + } + else if ( p2m_is_any_ram(p2mt) && mfn_valid(*mfn) ) + { + *page =3D get_page_from_mfn(*mfn, d); + ret =3D 0; + } + else if ( p2m_is_mmio(p2mt) || + iomem_access_permitted(d, mfn_x(*mfn),mfn_x(*mfn)) ) + { + *page =3D NULL; + ret =3D 0; + } + else + { + printk(XENLOG_G_WARNING PVIOMMU_PREFIX + "Unexpected p2mt %d (d%d gfn=3D%"PRI_gfn" mfn=3D%"PRI_mfn")= \n", + p2mt, d->domain_id, gfn_x(gfn), mfn_x(*mfn)); + + ret =3D -EPERM; + } + + put_gfn(d, gfn_x(gfn)); + return ret; +} + +static bool can_use_iommu_check(struct domain *d) +{ + if ( !is_iommu_enabled(d) ) + { + printk(XENLOG_G_WARNING PVIOMMU_PREFIX + "IOMMU disabled for this domain\n"); + return false; + } + + if ( !dom_iommu(d)->allow_pv_iommu ) + { + printk(XENLOG_G_WARNING PVIOMMU_PREFIX + "PV-IOMMU disabled for this domain\n"); + return false; + } + + return true; +} + +static long capabilities_op(struct pv_iommu_capabilities *cap, struct doma= in *d) +{ + cap->max_ctx_no =3D d->iommu.other_contexts.count; + cap->max_iova_addr =3D iommu_get_max_iova(d); + + cap->max_pasid =3D 0; /* TODO */ + cap->cap_flags =3D 0; + + if ( !dom_iommu(d)->no_dma ) + cap->cap_flags |=3D IOMMUCAP_default_identity; + + cap->pgsize_mask =3D PAGE_SIZE_4K; + + return 0; +} + +static long init_op(struct pv_iommu_init *init, struct domain *d) +{ + if (init->max_ctx_no =3D=3D UINT32_MAX) + return -E2BIG; + + return iommu_domain_pviommu_init(d, init->max_ctx_no + 1, init->arena_= order); +} + +static long alloc_context_op(struct pv_iommu_alloc *alloc, struct domain *= d) +{ + u16 ctx_no =3D 0; + int status =3D 0; + + status =3D iommu_context_alloc(d, &ctx_no, 0); + + if ( status ) + return status; + + printk(XENLOG_G_INFO PVIOMMU_PREFIX + "Created IOMMU context %hu in d%d\n", ctx_no, d->domain_id); + + alloc->ctx_no =3D ctx_no; + return 0; +} + +static long free_context_op(struct pv_iommu_free *free, struct domain *d) +{ + int flags =3D IOMMU_TEARDOWN_PREEMPT; + + if ( !free->ctx_no ) + return -EINVAL; + + if ( free->free_flags & IOMMU_FREE_reattach_default ) + flags |=3D IOMMU_TEARDOWN_REATTACH_DEFAULT; + + return iommu_context_free(d, free->ctx_no, flags); +} + +static long reattach_device_op(struct pv_iommu_reattach_device *reattach, + struct domain *d) +{ + int ret; + device_t *pdev; + struct physdev_pci_device dev =3D reattach->dev; + + pcidevs_lock(); + pdev =3D pci_get_pdev(d, PCI_SBDF(dev.seg, dev.bus, dev.devfn)); + + if ( !pdev ) + { + pcidevs_unlock(); + return -ENOENT; + } + + ret =3D iommu_reattach_context(d, d, pdev, reattach->ctx_no); + + pcidevs_unlock(); + return ret; +} + +static long map_pages_op(struct pv_iommu_map_pages *map, struct domain *d) +{ + struct iommu_context *ctx; + int ret =3D 0, flush_ret; + struct page_info *page =3D NULL; + mfn_t mfn, mfn_lookup; + unsigned int flags =3D 0, flush_flags =3D 0; + size_t i =3D 0; + dfn_t dfn0 =3D _dfn(map->dfn); /* original map->dfn */ + + if ( !map->ctx_no || !(ctx =3D iommu_get_context(d, map->ctx_no)) ) + return -EINVAL; + + if ( map->map_flags & IOMMU_MAP_readable ) + flags |=3D IOMMUF_readable; + + if ( map->map_flags & IOMMU_MAP_writeable ) + flags |=3D IOMMUF_writable; + + for (i =3D 0; i < map->nr_pages; i++) + { + gfn_t gfn =3D _gfn(map->gfn + i); + dfn_t dfn =3D _dfn(map->dfn + i); + +#ifdef CONFIG_X86 + if ( iommu_identity_map_check(d, ctx, _mfn(map->dfn)) ) + { + ret =3D -EADDRNOTAVAIL; + break; + } 
+#endif + + ret =3D get_paged_frame(d, gfn, &mfn, &page, 0); + + if ( ret ) + break; + + /* Check for conflict with existing mappings */ + if ( !iommu_lookup_page(d, dfn, &mfn_lookup, &flags, map->ctx_no) ) + { + if ( page ) + put_page(page); + + ret =3D -EADDRINUSE; + break; + } + + ret =3D iommu_map(d, dfn, mfn, 1, flags, &flush_flags, map->ctx_no= ); + + if ( ret ) + { + if ( page ) + put_page(page); + + break; + } + + map->mapped++; + + if ( (i & 0xff) && hypercall_preempt_check() ) + { + i++; + + map->gfn +=3D i; + map->dfn +=3D i; + map->nr_pages -=3D i; + + ret =3D -ERESTART; + break; + } + } + + flush_ret =3D iommu_iotlb_flush(d, dfn0, i, flush_flags, map->ctx_no); + + iommu_put_context(ctx); + + if ( flush_ret ) + printk(XENLOG_G_WARNING PVIOMMU_PREFIX + "Flush operation failed for d%dc%d (%d)\n", d->domain_id, + ctx->id, flush_ret); + + return ret; +} + +static long unmap_pages_op(struct pv_iommu_unmap_pages *unmap, struct doma= in *d) +{ + struct iommu_context *ctx; + mfn_t mfn; + int ret =3D 0, flush_ret; + unsigned int flags, flush_flags =3D 0; + size_t i =3D 0; + dfn_t dfn0 =3D _dfn(unmap->dfn); /* original unmap->dfn */ + + if ( !unmap->ctx_no || !(ctx =3D iommu_get_context(d, unmap->ctx_no)) ) + return -EINVAL; + + for (i =3D 0; i < unmap->nr_pages; i++) + { + dfn_t dfn =3D _dfn(unmap->dfn + i); + +#ifdef CONFIG_X86 + if ( iommu_identity_map_check(d, ctx, _mfn(unmap->dfn)) ) + { + ret =3D -EADDRNOTAVAIL; + break; + } +#endif + + /* Check if there is a valid mapping for this domain */ + if ( iommu_lookup_page(d, dfn, &mfn, &flags, unmap->ctx_no) ) { + ret =3D -ENOENT; + break; + } + + ret =3D iommu_unmap(d, dfn, 1, 0, &flush_flags, unmap->ctx_no); + + if ( ret ) + break; + + unmap->unmapped++; + + /* Decrement reference counter (if needed) */ + if ( mfn_valid(mfn) ) + put_page(mfn_to_page(mfn)); + + if ( (i & 0xff) && hypercall_preempt_check() ) + { + i++; + + unmap->dfn +=3D i; + unmap->nr_pages -=3D i; + + ret =3D -ERESTART; + break; + } + } + + flush_ret =3D iommu_iotlb_flush(d, dfn0, i, flush_flags, unmap->ctx_no= ); + + iommu_put_context(ctx); + + if ( flush_ret ) + printk(XENLOG_WARNING PVIOMMU_PREFIX + "Flush operation failed for d%dc%d (%d)\n", d->domain_id, + ctx->id, flush_ret); + + return ret; +} + +static long do_iommu_subop(int subop, XEN_GUEST_HANDLE_PARAM(void) arg, + struct domain *d, bool remote); + +static long remote_cmd_op(struct pv_iommu_remote_cmd *remote_cmd, + struct domain *current_domain) +{ + long ret =3D 0; + struct domain *d; + + /* TODO: use a better permission logic */ + if ( !is_hardware_domain(current_domain) ) + return -EPERM; + + d =3D get_domain_by_id(remote_cmd->domid); + + if ( !d ) + return -ENOENT; + + ret =3D do_iommu_subop(remote_cmd->subop, remote_cmd->arg, d, true); + + put_domain(d); + + return ret; +} + +static long do_iommu_subop(int subop, XEN_GUEST_HANDLE_PARAM(void) arg, + struct domain *d, bool remote) +{ + long ret =3D 0; + + switch ( subop ) + { + case IOMMU_noop: + break; + + case IOMMU_query_capabilities: + { + struct pv_iommu_capabilities cap; + + ret =3D capabilities_op(&cap, d); + + if ( unlikely(copy_to_guest(arg, &cap, 1)) ) + ret =3D -EFAULT; + + break; + } + + case IOMMU_init: + { + struct pv_iommu_init init; + + if ( unlikely(copy_from_guest(&init, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D init_op(&init, d); + } + + case IOMMU_alloc_context: + { + struct pv_iommu_alloc alloc; + + if ( unlikely(copy_from_guest(&alloc, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D 
alloc_context_op(&alloc, d); + + if ( unlikely(copy_to_guest(arg, &alloc, 1)) ) + ret =3D -EFAULT; + + break; + } + + case IOMMU_free_context: + { + struct pv_iommu_free free; + + if ( unlikely(copy_from_guest(&free, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D free_context_op(&free, d); + break; + } + + case IOMMU_reattach_device: + { + struct pv_iommu_reattach_device reattach; + + if ( unlikely(copy_from_guest(&reattach, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D reattach_device_op(&reattach, d); + break; + } + + case IOMMU_map_pages: + { + struct pv_iommu_map_pages map; + + if ( unlikely(copy_from_guest(&map, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D map_pages_op(&map, d); + + if ( unlikely(copy_to_guest(arg, &map, 1)) ) + ret =3D -EFAULT; + + break; + } + + case IOMMU_unmap_pages: + { + struct pv_iommu_unmap_pages unmap; + + if ( unlikely(copy_from_guest(&unmap, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D unmap_pages_op(&unmap, d); + + if ( unlikely(copy_to_guest(arg, &unmap, 1)) ) + ret =3D -EFAULT; + + break; + } + + case IOMMU_remote_cmd: + { + struct pv_iommu_remote_cmd remote_cmd; + + if ( remote ) + { + /* Prevent remote_cmd from being called recursively */ + ret =3D -EINVAL; + break; + } + + if ( unlikely(copy_from_guest(&remote_cmd, arg, 1)) ) + { + ret =3D -EFAULT; + break; + } + + ret =3D remote_cmd_op(&remote_cmd, d); + break; + } + + /* + * TODO + */ + case IOMMU_alloc_nested: + { + ret =3D -EOPNOTSUPP; + break; + } + + case IOMMU_flush_nested: + { + ret =3D -EOPNOTSUPP; + break; + } + + case IOMMU_attach_pasid: + { + ret =3D -EOPNOTSUPP; + break; + } + + case IOMMU_detach_pasid: + { + ret =3D -EOPNOTSUPP; + break; + } + + default: + return -EOPNOTSUPP; + } + + return ret; +} + +long do_iommu_op(unsigned int subop, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + long ret =3D 0; + + if ( !can_use_iommu_check(current->domain) ) + return -ENODEV; + + ret =3D do_iommu_subop(subop, arg, current->domain, false); + + if ( ret =3D=3D -ERESTART ) + return hypercall_create_continuation(__HYPERVISOR_iommu_op, "ih", = subop, arg); + + return ret; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/hypercall-defs.c b/xen/include/hypercall-defs.c index 47c093acc8..59d7c02f55 100644 --- a/xen/include/hypercall-defs.c +++ b/xen/include/hypercall-defs.c @@ -209,6 +209,9 @@ hypfs_op(unsigned int cmd, const char *arg1, unsigned l= ong arg2, void *arg3, uns #ifdef CONFIG_X86 xenpmu_op(unsigned int op, xen_pmu_params_t *arg) #endif +#ifdef CONFIG_HAS_PASSTHROUGH +iommu_op(unsigned int subop, void *arg) +#endif =20 #ifdef CONFIG_PV caller: pv64 @@ -295,5 +298,8 @@ mca do do - = - - #ifndef CONFIG_PV_SHIM_EXCLUSIVE paging_domctl_cont do do do do - #endif +#ifdef CONFIG_HAS_PASSTHROUGH +iommu_op do do do do - +#endif =20 #endif /* !CPPCHECK */ diff --git a/xen/include/public/pv-iommu.h b/xen/include/public/pv-iommu.h new file mode 100644 index 0000000000..c14b8435c9 --- /dev/null +++ b/xen/include/public/pv-iommu.h @@ -0,0 +1,341 @@ +/* SPDX-License-Identifier: MIT */ +/** + * pv-iommu.h + * + * Paravirtualized IOMMU driver interface. 
+ * + * Copyright (c) 2024 Teddy Astie + */ + +#ifndef __XEN_PUBLIC_PV_IOMMU_H__ +#define __XEN_PUBLIC_PV_IOMMU_H__ + +#include "xen.h" +#include "physdev.h" + +#ifndef uint64_aligned_t +#define uint64_aligned_t uint64_t +#endif + +#define IOMMU_DEFAULT_CONTEXT (0) + +enum { + /* Basic cmd */ + IOMMU_noop =3D 0, + IOMMU_query_capabilities, + IOMMU_init, + IOMMU_alloc_context, + IOMMU_free_context, + IOMMU_reattach_device, + IOMMU_map_pages, + IOMMU_unmap_pages, + IOMMU_remote_cmd, + + /* Extended cmd */ + IOMMU_alloc_nested, /* if IOMMUCAP_nested */ + IOMMU_flush_nested, /* if IOMMUCAP_nested */ + IOMMU_attach_pasid, /* if IOMMUCAP_pasid */ + IOMMU_detach_pasid, /* if IOMMUCAP_pasid */ +}; + +/** + * Indicate if the default context is a identity mapping to domain memory. + * If not defined, default context blocks all DMA to domain memory. + */ +#define IOMMUCAP_default_identity (1 << 0) + +/** + * IOMMU_MAP_cache support. + */ +#define IOMMUCAP_cache (1 << 1) + +/** + * Support for IOMMU_alloc_nested. + */ +#define IOMMUCAP_nested (1 << 2) + +/** + * Support for IOMMU_attach_pasid and IOMMU_detach_pasid and pasid paramet= er in + * reattach_context. + */ +#define IOMMUCAP_pasid (1 << 3) + +/** + * Support for IOMMU_ALLOC_identity + */ +#define IOMMUCAP_identity (1 << 4) + +/** + * IOMMU_query_capabilities + * Query PV-IOMMU capabilities for this domain. + */ +struct pv_iommu_capabilities { + /* + * OUT: Maximum device address (iova) that the guest can use for mappi= ngs. + */ + uint64_aligned_t max_iova_addr; + + /* OUT: IOMMU capabilities flags */ + uint32_t cap_flags; + + /* OUT: Mask of all supported page sizes. */ + uint32_t pgsize_mask; + + /* OUT: Maximum pasid (if IOMMUCAP_pasid) */ + uint32_t max_pasid; + + /* OUT: Maximum number of IOMMU context this domain can use. */ + uint16_t max_ctx_no; +}; +typedef struct pv_iommu_capabilities pv_iommu_capabilities_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_capabilities_t); + +/** + * IOMMU_init + * Initialize PV-IOMMU for this domain. + * + * Fails with -EACCESS if PV-IOMMU is already initialized. + */ +struct pv_iommu_init { + /* IN: Maximum number of IOMMU context this domain can use. */ + uint32_t max_ctx_no; + + /* IN: Arena size in pages (in power of two) */ + uint32_t arena_order; +}; +typedef struct pv_iommu_init pv_iommu_init_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_init_t); + +/** + * Create a 1:1 identity mapped context to domain memory + * (needs IOMMUCAP_identity). + */ +#define IOMMU_ALLOC_identity (1 << 0) + +/** + * IOMMU_alloc_context + * Allocate an IOMMU context. + * Fails with -ENOSPC if no context number is available. + */ +struct pv_iommu_alloc { + /* OUT: allocated IOMMU context number */ + uint16_t ctx_no; + + /* IN: allocation flags */ + uint32_t alloc_flags; +}; +typedef struct pv_iommu_alloc pv_iommu_alloc_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_alloc_t); + +/** + * Move all devices to default context before freeing the context. + */ +#define IOMMU_FREE_reattach_default (1 << 0) + +/** + * IOMMU_free_context + * Destroy a IOMMU context. + * + * If IOMMU_FREE_reattach_default is specified, move all context devices to + * default context before destroying this context. + * + * If there are devices in the context and IOMMU_FREE_reattach_default is = not + * specified, fail with -EBUSY. + * + * The default context can't be destroyed. 
+ */ +struct pv_iommu_free { + /* IN: IOMMU context number to free */ + uint16_t ctx_no; + + /* IN: Free operation specific flags */ + uint32_t free_flags; +}; +typedef struct pv_iommu_free pv_iommu_free_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_free_t); + +/* Device has read access */ +#define IOMMU_MAP_readable (1 << 0) + +/* Device has write access */ +#define IOMMU_MAP_writeable (1 << 1) + +/* Enforce DMA coherency */ +#define IOMMU_MAP_cache (1 << 2) + +/** + * IOMMU_map_pages + * Map pages on a IOMMU context. + * + * pgsize must be supported by pgsize_mask. + * Fails with -EINVAL if mapping on top of another mapping. + * Report actually mapped page count in mapped field (regardless of failur= e). + */ +struct pv_iommu_map_pages { + /* IN: IOMMU context number */ + uint16_t ctx_no; + + /* IN: Guest frame number */ + uint64_aligned_t gfn; + + /* IN: Device frame number */ + uint64_aligned_t dfn; + + /* IN: Map flags */ + uint32_t map_flags; + + /* IN: Size of pages to map */ + uint32_t pgsize; + + /* IN: Number of pages to map */ + uint32_t nr_pages; + + /* OUT: Number of pages actually mapped */ + uint32_t mapped; +}; +typedef struct pv_iommu_map_pages pv_iommu_map_pages_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_map_pages_t); + +/** + * IOMMU_unmap_pages + * Unmap pages on a IOMMU context. + * + * pgsize must be supported by pgsize_mask. + * Report actually unmapped page count in mapped field (regardless of fail= ure). + * Fails with -ENOENT when attempting to unmap a page without any mapping + */ +struct pv_iommu_unmap_pages { + /* IN: IOMMU context number */ + uint16_t ctx_no; + + /* IN: Device frame number */ + uint64_aligned_t dfn; + + /* IN: Size of pages to unmap */ + uint32_t pgsize; + + /* IN: Number of pages to unmap */ + uint32_t nr_pages; + + /* OUT: Number of pages actually unmapped */ + uint32_t unmapped; +}; +typedef struct pv_iommu_unmap_pages pv_iommu_unmap_pages_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_unmap_pages_t); + +/** + * IOMMU_reattach_device + * Reattach a device to another IOMMU context. + * Fails with -ENODEV if no such device exist. + */ +struct pv_iommu_reattach_device { + /* IN: Target IOMMU context number */ + uint16_t ctx_no; + + /* IN: Physical device to move */ + struct physdev_pci_device dev; + + /* IN: PASID of the device (if IOMMUCAP_pasid) */ + uint32_t pasid; +}; +typedef struct pv_iommu_reattach_device pv_iommu_reattach_device_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_reattach_device_t); + + +/** + * IOMMU_remote_cmd + * Do a PV-IOMMU operation on another domain. + * Current domain needs to be allowed to act on the target domain, otherwi= se + * fails with -EPERM. + */ +struct pv_iommu_remote_cmd { + /* IN: Target domain to do the subop on */ + uint16_t domid; + + /* IN: Command to do on target domain. */ + uint16_t subop; + + /* INOUT: Command argument from current domain memory */ + XEN_GUEST_HANDLE(void) arg; +}; +typedef struct pv_iommu_remote_cmd pv_iommu_remote_cmd_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_remote_cmd_t); + +/** + * IOMMU_alloc_nested + * Create a nested IOMMU context (needs IOMMUCAP_nested). + * + * This context uses a platform-specific page table from domain address sp= ace + * specified in pgtable_gfn and use it for nested translations. + * + * Explicit flushes needs to be submited with IOMMU_flush_nested on + * modification of the nested pagetable to ensure coherency between IOTLB = and + * nested page table. + * + * This context can be destroyed using IOMMU_free_context. + * This context cannot be modified using map_pages, unmap_pages. 
+ */ +struct pv_iommu_alloc_nested { + /* OUT: allocated IOMMU context number */ + uint16_t ctx_no; + + /* IN: guest frame number of the nested page table */ + uint64_aligned_t pgtable_gfn; + + /* IN: nested mode flags */ + uint64_aligned_t nested_flags; +}; +typedef struct pv_iommu_alloc_nested pv_iommu_alloc_nested_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_alloc_nested_t); + +/** + * IOMMU_flush_nested (needs IOMMUCAP_nested) + * Flush the IOTLB for nested translation. + */ +struct pv_iommu_flush_nested { + /* TODO */ +}; +typedef struct pv_iommu_flush_nested pv_iommu_flush_nested_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_flush_nested_t); + +/** + * IOMMU_attach_pasid (needs IOMMUCAP_pasid) + * Attach a new device-with-pasid to a IOMMU context. + * If a matching device-with-pasid already exists (globally), + * fail with -EEXIST. + * If pasid is 0, fails with -EINVAL. + * If physical device doesn't exist in domain, fail with -ENOENT. + */ +struct pv_iommu_attach_pasid { + /* IN: IOMMU context to add the device-with-pasid in */ + uint16_t ctx_no; + + /* IN: Physical device */ + struct physdev_pci_device dev; + + /* IN: pasid of the device to attach */ + uint32_t pasid; +}; +typedef struct pv_iommu_attach_pasid pv_iommu_attach_pasid_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_attach_pasid_t); + +/** + * IOMMU_detach_pasid (needs IOMMUCAP_pasid) + * detach a device-with-pasid. + * If the device-with-pasid doesn't exist or belong to the domain, + * fail with -ENOENT. + * If pasid is 0, fails with -EINVAL. + */ +struct pv_iommu_detach_pasid { + /* IN: Physical device */ + struct physdev_pci_device dev; + + /* pasid of the device to detach */ + uint32_t pasid; +}; +typedef struct pv_iommu_detach_pasid pv_iommu_detach_pasid_t; +DEFINE_XEN_GUEST_HANDLE(pv_iommu_detach_pasid_t); + +/* long do_iommu_op(int subop, XEN_GUEST_HANDLE_PARAM(void) arg) */ + +#endif \ No newline at end of file diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h index b47d48d0e2..28ab815ebc 100644 --- a/xen/include/public/xen.h +++ b/xen/include/public/xen.h @@ -118,6 +118,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t); #define __HYPERVISOR_xenpmu_op 40 #define __HYPERVISOR_dm_op 41 #define __HYPERVISOR_hypfs_op 42 +#define __HYPERVISOR_iommu_op 43 =20 /* Architecture-specific hypercall definitions. */ #define __HYPERVISOR_arch_0 48 --=20 2.45.2 Teddy Astie | Vates XCP-ng Developer XCP-ng & Xen Orchestra - Vates solutions web: https://vates.tech
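
For readers who want to exercise the proposed interface, the following is a
rough guest-side sketch of the intended call flow: query capabilities,
initialize PV-IOMMU, allocate a context, then map one guest page into it.
The xen_hypercall2() wrapper and the include paths are placeholders for
guest-specific hypercall plumbing and are not part of this series; only the
subop numbers, structures and __HYPERVISOR_iommu_op come from the patch
above.

/*
 * Hypothetical guest-side sketch, not part of this series.
 * xen_hypercall2() and the header paths below are assumed placeholders.
 */
#include <stdint.h>
#include <xen/xen.h>          /* __HYPERVISOR_iommu_op (placeholder path) */
#include <xen/pv-iommu.h>     /* pv_iommu_* structures (placeholder path) */

extern long xen_hypercall2(unsigned int op, unsigned int subop, void *arg);

static long pv_iommu_op(unsigned int subop, void *arg)
{
    return xen_hypercall2(__HYPERVISOR_iommu_op, subop, arg);
}

static long pv_iommu_setup_and_map(uint64_t gfn, uint64_t dfn)
{
    struct pv_iommu_capabilities cap = { 0 };
    struct pv_iommu_init init = { 0 };
    struct pv_iommu_alloc alloc = { 0 };
    struct pv_iommu_map_pages map = { 0 };
    long rc;

    /* Discover limits: context count, supported page sizes, max iova. */
    rc = pv_iommu_op(IOMMU_query_capabilities, &cap);
    if ( rc )
        return rc;
    if ( !(cap.pgsize_mask & 4096) )
        return -1;                      /* 4K pages not offered */

    /* Initialize PV-IOMMU with all advertised contexts and an example
     * 2^12-page arena for hypervisor-side page table allocations. */
    init.max_ctx_no = cap.max_ctx_no;
    init.arena_order = 12;
    rc = pv_iommu_op(IOMMU_init, &init);
    if ( rc )
        return rc;

    /* Allocate a fresh context; its number comes back in alloc.ctx_no. */
    rc = pv_iommu_op(IOMMU_alloc_context, &alloc);
    if ( rc )
        return rc;

    /* Map one guest frame (gfn) at device address dfn, read/write. */
    map.ctx_no    = alloc.ctx_no;
    map.gfn       = gfn;
    map.dfn       = dfn;
    map.map_flags = IOMMU_MAP_readable | IOMMU_MAP_writeable;
    map.pgsize    = 4096;
    map.nr_pages  = 1;
    rc = pv_iommu_op(IOMMU_map_pages, &map);

    return rc ? rc : (map.mapped == 1 ? 0 : -1);
}

A device would then be moved into the new context with IOMMU_reattach_device
before issuing DMA through these mappings.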