From: Vincent Donnefort <vdonnefort@google.com>
Date: Fri, 9 May 2025 14:16:57 +0100
Subject: [PATCH v4 01/10] KVM: arm64: Handle huge mappings for np-guest CMOs
Message-ID: <20250509131706.2336138-2-vdonnefort@google.com>
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. But they also rely on the fixmap, which can only map
a single PAGE_SIZE page at a time. With the upcoming stage-2 huge mappings
for pKVM np-guests, those callbacks will be given a size > PAGE_SIZE.

Loop the CMOs on a PAGE_SIZE basis until the whole range is done.
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 31173c694695..23544928a637 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,28 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	WARN_ON(!PAGE_ALIGNED(size));
+
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	WARN_ON(!PAGE_ALIGNED(size));
+
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
-- 
2.49.0.1015.ga840276032-goog

From: Vincent Donnefort <vdonnefort@google.com>
Date: Fri, 9 May 2025 14:16:58 +0100
Subject: [PATCH v4 02/10] KVM: arm64: Introduce for_each_hyp_page
Message-ID: <20250509131706.2336138-3-vdonnefort@google.com>
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

Add a helper to iterate over the hypervisor vmemmap.
This will be particularly handy with the introduction of huge mapping
support for the np-guest stage-2.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index eb0c2ebd1743..676a0a13c741 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -96,24 +96,24 @@ static inline struct hyp_page *hyp_phys_to_page(phys_addr_t phys)
 #define hyp_page_to_virt(page)	__hyp_va(hyp_page_to_phys(page))
 #define hyp_page_to_pool(page)	(((struct hyp_page *)page)->pool)
 
-static inline enum pkvm_page_state get_host_state(phys_addr_t phys)
+static inline enum pkvm_page_state get_host_state(struct hyp_page *p)
 {
-	return (enum pkvm_page_state)hyp_phys_to_page(phys)->__host_state;
+	return (enum pkvm_page_state)p->__host_state;
 }
 
-static inline void set_host_state(phys_addr_t phys, enum pkvm_page_state state)
+static inline void set_host_state(struct hyp_page *p, enum pkvm_page_state state)
 {
-	hyp_phys_to_page(phys)->__host_state = state;
+	p->__host_state = state;
 }
 
-static inline enum pkvm_page_state get_hyp_state(phys_addr_t phys)
+static inline enum pkvm_page_state get_hyp_state(struct hyp_page *p)
 {
-	return hyp_phys_to_page(phys)->__hyp_state_comp ^ PKVM_PAGE_STATE_MASK;
+	return p->__hyp_state_comp ^ PKVM_PAGE_STATE_MASK;
 }
 
-static inline void set_hyp_state(phys_addr_t phys, enum pkvm_page_state state)
+static inline void set_hyp_state(struct hyp_page *p, enum pkvm_page_state state)
 {
-	hyp_phys_to_page(phys)->__hyp_state_comp = state ^ PKVM_PAGE_STATE_MASK;
+	p->__hyp_state_comp = state ^ PKVM_PAGE_STATE_MASK;
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 23544928a637..4d269210dae0 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -60,6 +60,9 @@ static void hyp_unlock_component(void)
 	hyp_spin_unlock(&pkvm_pgd_lock);
 }
 
+#define for_each_hyp_page(start, size, page)				\
+	for (page = hyp_phys_to_page(start); page < hyp_phys_to_page((start) + (size)); page++)
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
@@ -481,7 +484,8 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 		return -EAGAIN;
 
 	if (pte) {
-		WARN_ON(addr_is_memory(addr) && get_host_state(addr) != PKVM_NOPAGE);
+		WARN_ON(addr_is_memory(addr) &&
+			get_host_state(hyp_phys_to_page(addr)) != PKVM_NOPAGE);
 		return -EPERM;
 	}
 
@@ -507,10 +511,10 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 
 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = addr + size;
+	struct hyp_page *page;
 
-	for (; addr < end; addr += PAGE_SIZE)
-		set_host_state(addr, state);
+	for_each_hyp_page(addr, size, page)
+		set_host_state(page, state);
 }
 
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
@@ -632,16 +636,17 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int __host_check_page_state_range(u64 addr, u64 size,
 					 enum pkvm_page_state state)
 {
-	u64 end = addr + size;
+	struct hyp_page *page;
 	int ret;
 
-	ret = check_range_allowed_memory(addr, end);
+	ret = check_range_allowed_memory(addr, addr + size);
 	if (ret)
 		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	for (; addr < end; addr += PAGE_SIZE) {
-		if (get_host_state(addr) != state)
+
+	for_each_hyp_page(addr, size, page) {
+		if (get_host_state(page) != state)
 			return -EPERM;
 	}
 
@@ -651,7 +656,7 @@ static int __host_check_page_state_range(u64 addr, u64 size,
 static int __host_set_page_state_range(u64 addr, u64 size,
 				       enum pkvm_page_state state)
 {
-	if (get_host_state(addr) == PKVM_NOPAGE) {
+	if (get_host_state(hyp_phys_to_page(addr)) == PKVM_NOPAGE) {
 		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
 		if (ret)
@@ -665,18 +670,18 @@ static int __host_set_page_state_range(u64 addr, u64 size,
 
 static void __hyp_set_page_state_range(phys_addr_t phys, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = phys + size;
+	struct hyp_page *page;
 
-	for (; phys < end; phys += PAGE_SIZE)
-		set_hyp_state(phys, state);
+	for_each_hyp_page(phys, size, page)
+		set_hyp_state(page, state);
 }
 
 static int __hyp_check_page_state_range(phys_addr_t phys, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = phys + size;
+	struct hyp_page *page;
 
-	for (; phys < end; phys += PAGE_SIZE) {
-		if (get_hyp_state(phys) != state)
+	for_each_hyp_page(phys, size, page) {
+		if (get_hyp_state(page) != state)
 			return -EPERM;
 	}
 
@@ -927,7 +932,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 		goto unlock;
 
 	page = hyp_phys_to_page(phys);
-	switch (get_host_state(phys)) {
+	switch (get_host_state(page)) {
 	case PKVM_PAGE_OWNED:
 		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
 		break;
@@ -979,9 +984,9 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa
 	if (WARN_ON(ret))
 		return ret;
 
-	if (get_host_state(phys) != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
 	page = hyp_phys_to_page(phys);
+	if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
 	if (WARN_ON(!page->host_share_guest_count))
 		return -EINVAL;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 6d513a4b3763..c19860fc8183 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -190,6 +190,7 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 				     enum kvm_pgtable_walk_flags visit)
 {
 	enum pkvm_page_state state;
+	struct hyp_page *page;
 	phys_addr_t phys;
 
 	if (!kvm_pte_valid(ctx->old))
@@ -202,6 +203,8 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	if (!addr_is_memory(phys))
 		return -EINVAL;
 
+	page = hyp_phys_to_page(phys);
+
 	/*
 	 * Adjust the host stage-2 mappings to match the ownership attributes
 	 * configured in the hypervisor stage-1, and make sure to propagate them
@@ -210,15 +213,15 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(ctx->old));
 	switch (state) {
 	case PKVM_PAGE_OWNED:
-		set_hyp_state(phys, PKVM_PAGE_OWNED);
+		set_hyp_state(page, PKVM_PAGE_OWNED);
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		set_hyp_state(phys, PKVM_PAGE_SHARED_OWNED);
-		set_host_state(phys, PKVM_PAGE_SHARED_BORROWED);
+		set_hyp_state(page, PKVM_PAGE_SHARED_OWNED);
+		set_host_state(page, PKVM_PAGE_SHARED_BORROWED);
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		set_hyp_state(phys, PKVM_PAGE_SHARED_BORROWED);
-		set_host_state(phys, PKVM_PAGE_SHARED_OWNED);
+		set_hyp_state(page, PKVM_PAGE_SHARED_BORROWED);
+		set_host_state(page, PKVM_PAGE_SHARED_OWNED);
 		break;
 	default:
 		return -EINVAL;
-- 
2.49.0.1015.ga840276032-goog

From: Vincent Donnefort <vdonnefort@google.com>
Date: Fri, 9 May 2025 14:16:59 +0100
Subject: [PATCH v4 03/10] KVM: arm64: Add a range to __pkvm_host_share_guest()
Message-ID: <20250509131706.2336138-4-vdonnefort@google.com>
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
In preparation for supporting stage-2 huge mappings for np-guest. Add a nr_pages argument to the __pkvm_host_share_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a 4K-pages system). Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 26016eb9323f..47aa7b01114f 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 59db9606e6e1..4d3d215955c3 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -245,7 +245,8 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4); struct pkvm_hyp_vcpu *hyp_vcpu; int ret =3D -EINVAL; =20 @@ -260,7 +261,7 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) if (ret) goto out; =20 - ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); + ret =3D __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot); out: cpu_reg(host_ctxt, 1) =3D ret; } 
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 4d269210dae0..f0f7c6f83e57 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -696,10 +696,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_p= te_t pte, u64 addr) return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); } =20 -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr, u64 size, enum pkvm_page_state state) { - struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct check_walk_data d =3D { .desired =3D state, .get_page_state =3D guest_get_page_state, @@ -908,48 +907,81 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } =20 -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, = u64 *size) +{ + if (nr_pages =3D=3D 1) { + *size =3D PAGE_SIZE; + return 0; + } + + /* We solely support PMD_SIZE huge-pages */ + if (nr_pages !=3D (1 << (PMD_SHIFT - PAGE_SHIFT))) + return -EINVAL; + + if (!IS_ALIGNED(phys | ipa, PMD_SIZE)) + return -EINVAL; + + *size =3D PMD_SIZE; + return 0; +} + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot) { struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); u64 phys =3D hyp_pfn_to_phys(pfn); u64 ipa =3D hyp_pfn_to_phys(gfn); struct hyp_page *page; + u64 size; int ret; =20 if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D __guest_check_transition_size(phys, ipa, nr_pages, &size); + if (ret) + return ret; + + ret =3D check_range_allowed_memory(phys, phys + size); if (ret) return ret; =20 host_lock_component(); guest_lock_component(vm); =20 - ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + 
ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE); if (ret) goto unlock; =20 - page =3D hyp_phys_to_page(phys); - switch (get_host_state(page)) { - case PKVM_PAGE_OWNED: - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); - break; - case PKVM_PAGE_SHARED_OWNED: - if (page->host_share_guest_count) - break; - /* Only host to np-guest multi-sharing is tolerated */ - fallthrough; - default: - ret =3D -EPERM; - goto unlock; + for_each_hyp_page(phys, size, page) { + switch (get_host_state(page)) { + case PKVM_PAGE_OWNED: + continue; + case PKVM_PAGE_SHARED_OWNED: + if (page->host_share_guest_count =3D=3D U32_MAX) { + ret =3D -EBUSY; + goto unlock; + } + + /* Only host to np-guest multi-sharing is tolerated */ + if (page->host_share_guest_count) + continue; + + fallthrough; + default: + ret =3D -EPERM; + goto unlock; + } + } + + for_each_hyp_page(phys, size, page) { + set_host_state(page, PKVM_PAGE_SHARED_OWNED); + page->host_share_guest_count++; } =20 - WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys, pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), &vcpu->vcpu.arch.pkvm_memcache, 0)); - page->host_share_guest_count++; =20 unlock: guest_unlock_component(vm); @@ -1170,6 +1202,9 @@ static void assert_page_state(void) struct pkvm_hyp_vcpu *vcpu =3D &selftest_vcpu; u64 phys =3D hyp_virt_to_phys(virt); u64 ipa[2] =3D { selftest_ipa(), selftest_ipa() + PAGE_SIZE }; + struct pkvm_hyp_vm *vm; + + vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); =20 host_lock_component(); WARN_ON(__host_check_page_state_range(phys, size, selftest_state.host)); @@ -1180,8 +1215,8 @@ static void assert_page_state(void) hyp_unlock_component(); =20 guest_lock_component(&selftest_vm); - WARN_ON(__guest_check_page_state_range(vcpu, ipa[0], size, selftest_state= .guest[0])); - WARN_ON(__guest_check_page_state_range(vcpu, ipa[1], size, selftest_state= .guest[1])); + 
WARN_ON(__guest_check_page_state_range(vm, ipa[0], size, selftest_state.g= uest[0])); + WARN_ON(__guest_check_page_state_range(vm, ipa[1], size, selftest_state.g= uest[1])); guest_unlock_component(&selftest_vm); } =20 @@ -1219,7 +1254,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1); assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 selftest_state.host =3D PKVM_PAGE_OWNED; @@ -1238,7 +1273,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size); @@ -1250,7 +1285,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 hyp_unpin_shared_mem(virt, virt + size); @@ -1269,7 +1304,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn); assert_transition_res(-EPERM, 
__pkvm_host_unshare_hyp, pfn); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); =20 @@ -1280,8 +1315,8 @@ void pkvm_ownership_selftest(void *base) =20 selftest_state.host =3D PKVM_PAGE_SHARED_OWNED; selftest_state.guest[0] =3D PKVM_PAGE_SHARED_BORROWED; - assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn, vcpu, prot); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot= ); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn); @@ -1290,7 +1325,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); =20 selftest_state.guest[1] =3D PKVM_PAGE_SHARED_BORROWED; - assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn + 1, vcpu, pro= t); + assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn + 1, 1, vcpu, = prot); WARN_ON(hyp_virt_to_page(virt)->host_share_guest_count !=3D 2); =20 selftest_state.guest[0] =3D PKVM_NOPAGE; diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 83a737484046..0285e2cd2e7f 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -347,7 +347,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u6= 4 addr, u64 size, return -EINVAL; =20 lockdep_assert_held_write(&kvm->mmu_lock); - ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot); 
	if (ret) {
		/* Is the gfn already mapped due to a racing vCPU? */
		if (ret == -EPERM)
--
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
Date: Fri, 9 May 2025 14:17:00 +0100
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Message-ID: <20250509131706.2336138-5-vdonnefort@google.com>
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
References: <20250509131706.2336138-1-vdonnefort@google.com>
Subject: [PATCH v4 04/10] KVM: arm64: Add a range to __pkvm_host_unshare_guest()

In preparation for supporting stage-2 huge mappings for np-guest. Add a
nr_pages argument to the __pkvm_host_unshare_guest hypercall. This range
supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a
4K-pages system).
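As a rough illustration of the bookkeeping this hypercall performs, the sketch below walks every page of an unshared range, drops one share reference per page, and returns a page to the host-owned state once its count reaches zero. The struct layout, state names, and helper name are simplified assumptions for illustration, not the kernel's actual `struct hyp_page` or `for_each_hyp_page()`.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define NR_PAGES	512	/* PMD_SIZE / PAGE_SIZE with 4K pages */

/* Simplified stand-ins for the hyp page-state tracking. */
enum page_state { PAGE_OWNED, PAGE_SHARED_OWNED };

struct hyp_page {
	enum page_state state;
	unsigned int share_count;
};

static struct hyp_page pages[NR_PAGES];

/*
 * Drop one share on every page of the range; a page whose count hits
 * zero goes back to the owned state, in the spirit of the per-page
 * loop that __pkvm_host_unshare_guest() gains in this patch.
 */
static void unshare_range(uint64_t first_pfn, uint64_t size)
{
	uint64_t nr = size >> PAGE_SHIFT;

	for (uint64_t i = first_pfn; i < first_pfn + nr; i++) {
		pages[i].share_count--;
		if (!pages[i].share_count)
			pages[i].state = PAGE_OWNED;
	}
}
```

With a range of NR_PAGES pages each shared once, a single PMD-sized call flips the whole block back to owned, where the page-granular hypercall would have needed 512 round trips.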
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 47aa7b01114f..19671edbe18f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
			    enum kvm_pgtable_prot prot);
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 4d3d215955c3..5c03bd1db873 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -270,6 +270,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 {
	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
	struct pkvm_hyp_vm *hyp_vm;
	int ret = -EINVAL;

@@ -280,7 +281,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
	if (!hyp_vm)
		goto out;

-	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+	ret = __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm);
	put_pkvm_hyp_vm(hyp_vm);
 out:
	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f0f7c6f83e57..ae9a91a21a61 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -990,7 +990,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
	return ret;
 }

-static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa, u64 size)
 {
	enum pkvm_page_state state;
	struct hyp_page *page;
@@ -1004,7 +1004,7 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
		return ret;
	if (!kvm_pte_valid(pte))
		return -ENOENT;
-	if (level != KVM_PGTABLE_LAST_LEVEL)
+	if (kvm_granule_size(level) != size)
		return -E2BIG;

	state = guest_get_page_state(pte, ipa);
@@ -1012,43 +1012,50 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
		return -EPERM;

	phys = kvm_pte_to_phys(pte);
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = check_range_allowed_memory(phys, phys + size);
	if (WARN_ON(ret))
		return ret;

-	page = hyp_phys_to_page(phys);
-	if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
-	if (WARN_ON(!page->host_share_guest_count))
-		return -EINVAL;
+	for_each_hyp_page(phys, size, page) {
+		if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
+			return -EPERM;
+		if (WARN_ON(!page->host_share_guest_count))
+			return -EINVAL;
+	}

	*__phys = phys;

	return 0;
 }

-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
	u64 ipa = hyp_pfn_to_phys(gfn);
	struct hyp_page *page;
-	u64 phys;
+	u64 size, phys;
	int ret;

+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
	host_lock_component();
	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
	if (ret)
		goto unlock;

-	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size);
	if (ret)
		goto unlock;

-	page = hyp_phys_to_page(phys);
-	page->host_share_guest_count--;
-	if (!page->host_share_guest_count)
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+	for_each_hyp_page(phys, size, page) {
+		/* __check_host_shared_guest() protects against underflow */
+		page->host_share_guest_count--;
+		if (!page->host_share_guest_count)
+			set_host_state(page, PKVM_PAGE_OWNED);
+	}

 unlock:
	guest_unlock_component(vm);
@@ -1068,7 +1075,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
	host_lock_component();
	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);

	guest_unlock_component(vm);
	host_unlock_component();
@@ -1255,7 +1262,7 @@ void pkvm_ownership_selftest(void *base)
	assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1);
	assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);
	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);

	selftest_state.host = PKVM_PAGE_OWNED;
	selftest_state.hyp = PKVM_NOPAGE;
@@ -1263,7 +1270,7 @@ void pkvm_ownership_selftest(void *base)
	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
	assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn);
	assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);
	assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);

	selftest_state.host = PKVM_PAGE_SHARED_OWNED;
@@ -1274,7 +1281,7 @@ void pkvm_ownership_selftest(void *base)
	assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);

	assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size);
	assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size);
@@ -1286,7 +1293,7 @@ void pkvm_ownership_selftest(void *base)
	assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);

	hyp_unpin_shared_mem(virt, virt + size);
	assert_page_state();
@@ -1305,7 +1312,7 @@ void pkvm_ownership_selftest(void *base)
	assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn);
	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);
	assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);

	selftest_state.host = PKVM_PAGE_OWNED;
@@ -1329,11 +1336,11 @@ void pkvm_ownership_selftest(void *base)
	WARN_ON(hyp_virt_to_page(virt)->host_share_guest_count != 2);

	selftest_state.guest[0] = PKVM_NOPAGE;
-	assert_transition_res(0, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(0, __pkvm_host_unshare_guest, gfn, 1, vm);

	selftest_state.guest[1] = PKVM_NOPAGE;
	selftest_state.host = PKVM_PAGE_OWNED;
-	assert_transition_res(0, __pkvm_host_unshare_guest, gfn + 1, vm);
+	assert_transition_res(0, __pkvm_host_unshare_guest, gfn + 1, 1, vm);

	selftest_state.host = PKVM_NOPAGE;
	selftest_state.hyp = PKVM_PAGE_OWNED;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 0285e2cd2e7f..f77c5157a8d7 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -371,7 +371,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)

	lockdep_assert_held_write(&kvm->mmu_lock);
	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
		if (WARN_ON(ret))
			break;
		rb_erase(&mapping->node, &pgt->pkvm_mappings);
--
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
Date: Fri, 9 May 2025 14:17:01 +0100
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Message-ID: <20250509131706.2336138-6-vdonnefort@google.com>
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
References: <20250509131706.2336138-1-vdonnefort@google.com>
Subject: [PATCH v4 05/10] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()

In preparation for supporting stage-2 huge mappings for np-guest. Add a
nr_pages argument to the __pkvm_host_wrprotect_guest hypercall. This
range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512
on a 4K-pages system).
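The "1 or PMD_SIZE / PAGE_SIZE" constraint on nr_pages can be sketched as a standalone check. The constants below assume 4K pages, and the helper name is illustrative only; the kernel's actual helper is __guest_check_transition_size(), whose exact semantics (including the alignment rule assumed here) may differ.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)	/* 4KiB */
#define PMD_SIZE	(1ULL << 21)		/* 2MiB block with 4K pages */
#ifndef EINVAL
#define EINVAL		22
#endif

/*
 * Hypothetical sketch of validating a hypercall range: accept only a
 * single page or a full PMD block (PMD_SIZE / PAGE_SIZE, i.e. 512
 * pages), and require the IPA to be aligned to the resulting size.
 */
static int check_range(uint64_t ipa, uint64_t nr_pages, uint64_t *size)
{
	uint64_t sz = nr_pages << PAGE_SHIFT;

	if (sz != PAGE_SIZE && sz != PMD_SIZE)
		return -EINVAL;

	if (ipa & (sz - 1))
		return -EINVAL;

	*size = sz;
	return 0;
}
```

Rejecting every other nr_pages keeps the hypercall interface simple: the stage-2 walk only ever operates on exactly one page-table entry, either a last-level page or a PMD-level block.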
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 19671edbe18f..64d4f3bf6269 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);

 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5c03bd1db873..fa7e2421d359 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -310,6 +310,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
	struct pkvm_hyp_vm *hyp_vm;
	int ret = -EINVAL;

@@ -320,7 +321,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
	if (!hyp_vm)
		goto out;

-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
	put_pkvm_hyp_vm(hyp_vm);
 out:
	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index ae9a91a21a61..887848408e1b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1064,7 +1064,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
	return ret;
 }

-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
	u64 phys;
	int ret;
@@ -1075,7 +1075,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
	host_lock_component();
	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);

	guest_unlock_component(vm);
	host_unlock_component();
@@ -1095,7 +1095,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
	if (prot & ~KVM_PGTABLE_PROT_RWX)
		return -EINVAL;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
	guest_lock_component(vm);
	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
	guest_unlock_component(vm);
@@ -1103,17 +1103,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
	return ret;
 }

-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
	int ret;

	if (pkvm_hyp_vm_is_protected(vm))
		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
	guest_unlock_component(vm);

	return ret;
@@ -1127,7 +1131,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
	if (pkvm_hyp_vm_is_protected(vm))
		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
	guest_lock_component(vm);
	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
	guest_unlock_component(vm);
@@ -1143,7 +1147,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
	if (pkvm_hyp_vm_is_protected(vm))
		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
	guest_lock_component(vm);
	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index f77c5157a8d7..daab4a00790a 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -390,7 +390,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)

	lockdep_assert_held(&kvm->mmu_lock);
	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
		if (WARN_ON(ret))
			break;
	}
--
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
Date: Fri, 9 May 2025 14:17:02 +0100
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Message-ID: <20250509131706.2336138-7-vdonnefort@google.com>
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
References: <20250509131706.2336138-1-vdonnefort@google.com>
Subject: [PATCH v4 06/10] KVM: arm64: Add a range to __pkvm_host_test_clear_young_guest()

In preparation for supporting stage-2 huge mappings for np-guest.
Add a nr_pages argument to the __pkvm_host_test_clear_young_guest
hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE
(that is 512 on a 4K-pages system).

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 64d4f3bf6269..5f9d56754e39 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);

 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index fa7e2421d359..8e8848de4d47 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -331,7 +331,8 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 {
	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(bool, mkold, host_ctxt, 4);
	struct pkvm_hyp_vm *hyp_vm;
	int ret = -EINVAL;

@@ -342,7 +343,7 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
	if (!hyp_vm)
		goto out;

-	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	ret = __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm);
	put_pkvm_hyp_vm(hyp_vm);
 out:
	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 887848408e1b..78fb9cea2034 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1123,17 +1123,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
	return ret;
 }

-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
	int ret;

	if (pkvm_hyp_vm_is_protected(vm))
		return -EPERM;

-	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold);
	guest_unlock_component(vm);

	return ret;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index daab4a00790a..057874bbe3e1 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -420,7 +420,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
	lockdep_assert_held(&kvm->mmu_lock);
	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   mkold);
+					   1, mkold);

	return young;
}
--
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
9 May 2025 13:17:25 +0000 (UTC)
Date: Fri, 9 May 2025 14:17:03 +0100
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250509131706.2336138-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.49.0.1015.ga840276032-goog
Message-ID:
<20250509131706.2336138-8-vdonnefort@google.com>
Subject: [PATCH v4 07/10] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, let's
convert pgt.pkvm_mappings to an interval tree.

No functional change intended.

Suggested-by: Vincent Donnefort
Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..1b43bcd2a679 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  */
 struct kvm_pgtable {
 	union {
-		struct rb_root pkvm_mappings;
+		struct rb_root_cached pkvm_mappings;
 		struct {
 			u32 ia_bits;
 			s8 start_level;
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index d91bfcf2db56..da75d41c948c 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -173,6 +173,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 057874bbe3e1..6febddbec69e 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -5,6 +5,7 @@
  */
 
 #include <linux/init.h>
+#include <linux/interval_tree_generic.h>
 #include <linux/kmemleak.h>
 #include <linux/kvm_host.h>
 #include <linux/memblock.h>
@@ -256,80 +257,63 @@ static int __init finalize_pkvm(void)
 }
 device_initcall_sync(finalize_pkvm);
 
-static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 {
-	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
-	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
-
-	if (a->gfn < b->gfn)
-		return -1;
-	if (a->gfn > b->gfn)
-		return 1;
-	return 0;
+	return m->gfn * PAGE_SIZE;
 }
 
-static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	struct rb_node *node = root->rb_node, *prev = NULL;
-	struct pkvm_mapping *mapping;
-
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		if (mapping->gfn == gfn)
-			return node;
-		prev = node;
-		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
-	}
-
-	return prev;
+	return (m->gfn + 1) * PAGE_SIZE - 1;
 }
 
-/*
- * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
- * of __map inline.
- */
+INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
+		     __pkvm_mapping_start, __pkvm_mapping_end, static,
+		     pkvm_mapping);
+
 #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)	\
-	for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings,	\
-							     ((__start) >> PAGE_SHIFT));	\
+	for (struct pkvm_mapping *__tmp = pkvm_mapping_iter_first(&(__pgt)->pkvm_mappings,	\
+								  __start, __end - 1);	\
 	     __tmp && ({							\
-			__map = rb_entry(__tmp, struct pkvm_mapping, node);	\
-			__tmp = rb_next(__tmp);					\
+			__map = __tmp;						\
+			__tmp = pkvm_mapping_iter_next(__map, __start, __end - 1);	\
			true;							\
		       });							\
-	     )									\
-		if (__map->gfn < ((__start) >> PAGE_SHIFT))			\
-			continue;						\
-		else if (__map->gfn >= ((__end) >> PAGE_SHIFT))			\
-			break;							\
-		else
+	     )
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			     struct kvm_pgtable_mm_ops *mm_ops)
 {
-	pgt->pkvm_mappings = RB_ROOT;
+	pgt->pkvm_mappings = RB_ROOT_CACHED;
 	pgt->mmu = mmu;
 
 	return 0;
 }
 
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 end)
 {
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
 	pkvm_handle_t handle = kvm->arch.pkvm.handle;
 	struct pkvm_mapping *mapping;
-	struct rb_node *node;
+	int ret;
 
 	if (!handle)
-		return;
+		return 0;
 
-	node = rb_first(&pgt->pkvm_mappings);
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
-		node = rb_next(node);
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		if (WARN_ON(ret))
+			return ret;
+		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
 		kfree(mapping);
 	}
+
+	return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
 }
 
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -357,28 +341,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
-	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
 }
 
 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
-	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
-	pkvm_handle_t handle = kvm->arch.pkvm.handle;
-	struct pkvm_mapping *mapping;
-	int ret = 0;
+	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock);
 
-	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
-		if (WARN_ON(ret))
-			break;
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-		kfree(mapping);
-	}
-
-	return ret;
+	return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }
 
 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
-- 
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 871FD2957CD
	for ; Fri, 9 May 2025 13:17:27 +0000 (UTC)
Date: Fri, 9 May 2025 14:17:04 +0100
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250509131706.2336138-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.49.0.1015.ga840276032-goog
Message-ID: <20250509131706.2336138-9-vdonnefort@google.com>
Subject: [PATCH v4 08/10] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages member to pkvm_mappings so that EL1 can track the size of the
stage-2 mapping.

Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index da75d41c948c..ea58282f59bb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -173,6 +173,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 6febddbec69e..0e30f16149d5 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -264,7 +264,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -301,7 +301,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 		return 0;
 
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -331,16 +332,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}
 
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
@@ -362,7 +379,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -377,7 +395,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);
 
 	return 0;
 }
@@ -392,7 +411,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);
 
 	return young;
 }
-- 
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 46AB2295D84
	for ; Fri, 9 May 2025 13:17:29 +0000 (UTC)
Date: Fri, 9 May 2025 14:17:05 +0100
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250509131706.2336138-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.49.0.1015.ga840276032-goog
Message-ID: <20250509131706.2336138-10-vdonnefort@google.com>
Subject: [PATCH v4 09/10] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"

Now that np-guest hypercalls with a range are supported, let the
hypervisor install block mappings whenever the Stage-1 allows it, that
is when backed by either Hugetlbfs or THPs. The size of those block
mappings is limited to PMD_SIZE.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 78fb9cea2034..97e0fea9db4e 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -167,7 +167,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
				      enum kvm_pgtable_prot prot)
 {
-	return true;
+	return false;
 }
 
 static void *guest_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 754f2fe0cc67..7c8be22e81f9 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1537,7 +1537,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	 * logging_active is guaranteed to never be true for VM_PFNMAP
	 * memslots.
	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1547,7 +1547,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
-		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+		if (!is_protected_kvm_enabled() &&
+		    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
 			break;
 		fallthrough;
 #endif
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 0e30f16149d5..9504169fb37f 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -328,7 +328,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	u64 pfn = phys >> PAGE_SHIFT;
 	int ret;
 
-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-- 
2.49.0.1015.ga840276032-goog

From nobody Wed Dec 17 08:56:12 2025
Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate
requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5E245295DAA
	for ; Fri, 9 May 2025 13:17:31 +0000 (UTC)
Date: Fri, 9 May 2025 14:17:06 +0100
In-Reply-To: <20250509131706.2336138-1-vdonnefort@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250509131706.2336138-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.49.0.1015.ga840276032-goog
Message-ID: <20250509131706.2336138-11-vdonnefort@google.com>
Subject: [PATCH v4 10/10] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"

With the introduction of stage-2 huge mappings in the pKVM hypervisor,
guest-page CMOs must now cover PMD_SIZE ranges. The fixmap only supports
PAGE_SIZE, and iterating over a huge page one PAGE_SIZE chunk at a time
is time consuming (mostly due to the TLBI in hyp_fixmap_unmap()), which
is a problem for EL2 latency.

Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map()/hyp_fixblock_unmap())
to improve guest-page CMOs when stage-2 huge mappings are installed.

On a Pixel6, the iterative solution resulted in a latency of ~700us,
while the PMD_SIZE fixmap reduces it to ~100us.

Because of the horrendous private range allocation that would be
necessary, this is disabled on systems using 64KiB pages.
Suggested-by: Quentin Perret
Signed-off-by: Vincent Donnefort
Signed-off-by: Quentin Perret

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PHYS_INVALID	(-1ULL)
 
+#define KVM_PTE_TYPE		BIT(1)
+#define KVM_PTE_TYPE_BLOCK	0
+#define KVM_PTE_TYPE_PAGE	1
+#define KVM_PTE_TYPE_TABLE	1
+
 #define KVM_PTE_LEAF_ATTR_LO	GENMASK(11, 2)
 
 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..b0c72bc2d5ba 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@
 extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;
 
-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys);
+void hyp_fixblock_unmap(void);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 97e0fea9db4e..9f3ffa4e0690 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -220,16 +220,52 @@ static void guest_s2_put_page(void *addr)
 	hyp_put_page(&current_vm->pool, addr);
 }
 
+static void *__fixmap_guest_page(void *va, size_t *size)
+{
+	if (IS_ALIGNED(*size, PMD_SIZE)) {
+		void *addr = hyp_fixblock_map(__hyp_pa(va));
+
+		if (addr)
+			return addr;
+
+		*size = PAGE_SIZE;
+	}
+
+	if (IS_ALIGNED(*size, PAGE_SIZE))
+		return hyp_fixmap_map(__hyp_pa(va));
+
+	WARN_ON(1);
+
+	return NULL;
+}
+
+static void __fixunmap_guest_page(size_t size)
+{
+	switch (size) {
+	case PAGE_SIZE:
+		hyp_fixmap_unmap();
+		break;
+	case PMD_SIZE:
+		hyp_fixblock_unmap();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
 static void clean_dcache_guest_page(void *va, size_t size)
 {
 	WARN_ON(!PAGE_ALIGNED(size));
 
 	while (size) {
-		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					  PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__clean_dcache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }
 
@@ -238,11 +274,14 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 	WARN_ON(!PAGE_ALIGNED(size));
 
 	while (size) {
-		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					       PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__invalidate_icache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index f41c7440b34b..e3b1bece8504 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -229,9 +229,8 @@ int hyp_map_vectors(void)
 	return 0;
 }
 
-void *hyp_fixmap_map(phys_addr_t phys)
+static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
 {
-	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
 	kvm_pte_t pte, *ptep = slot->ptep;
 
 	pte = *ptep;
@@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
 	return (void *)slot->addr;
 }
 
+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
+}
+
 static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 {
 	kvm_pte_t *ptep = slot->ptep;
 	u64 addr = slot->addr;
+	u32 level;
+
+	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
+		level = KVM_PGTABLE_LAST_LEVEL;
+	else
+		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */
 
 	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
 
@@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
	 */
	dsb(ishst);
-	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
+	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
	dsb(ish);
	isb();
 }
@@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
 static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
				   enum kvm_pgtable_walk_flags visit)
 {
-	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
+	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;
 
-	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
+	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
 		return -EINVAL;
 
 	slot->addr = ctx->addr;
@@ -296,13 +306,73 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
 	struct kvm_pgtable_walker walker = {
 		.cb	= __create_fixmap_slot_cb,
 		.flags	= KVM_PGTABLE_WALK_LEAF,
-		.arg	= (void *)cpu,
+		.arg	= (void *)per_cpu_ptr(&fixmap_slots, cpu),
 	};
 
 	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
 }
 
-int hyp_create_pcpu_fixmap(void)
+#ifndef CONFIG_ARM64_64K_PAGES
+static struct hyp_fixmap_slot hyp_fixblock_slot;
+static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
+
+void *hyp_fixblock_map(phys_addr_t phys)
+{
+	hyp_spin_lock(&hyp_fixblock_lock);
+	return fixmap_map_slot(&hyp_fixblock_slot, phys);
+}
+
+void hyp_fixblock_unmap(void)
+{
+	fixmap_clear_slot(&hyp_fixblock_slot);
+	hyp_spin_unlock(&hyp_fixblock_lock);
+}
+
+static int create_fixblock(void)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= __create_fixmap_slot_cb,
+		.flags	=
KVM_PGTABLE_WALK_LEAF, + .arg =3D (void *)&hyp_fixblock_slot, + }; + unsigned long addr; + phys_addr_t phys; + int ret, i; + + /* Find a RAM phys address, PMD aligned */ + for (i =3D 0; i < hyp_memblock_nr; i++) { + phys =3D ALIGN(hyp_memory[i].base, PMD_SIZE); + if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size)) + break; + } + + if (i >=3D hyp_memblock_nr) + return -EINVAL; + + hyp_spin_lock(&pkvm_pgd_lock); + addr =3D ALIGN(__io_map_base, PMD_SIZE); + ret =3D __pkvm_alloc_private_va_range(addr, PMD_SIZE); + if (ret) + goto unlock; + + ret =3D kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP= ); + if (ret) + goto unlock; + + ret =3D kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker); + +unlock: + hyp_spin_unlock(&pkvm_pgd_lock); + + return ret; +} +#else +void hyp_fixblock_unmap(void) { WARN_ON(1); } +void *hyp_fixblock_map(phys_addr_t phys) { return NULL; } +static int create_fixblock(void) { return 0; } +#endif + +int hyp_create_fixmap(void) { unsigned long addr, i; int ret; @@ -322,7 +392,7 @@ int hyp_create_pcpu_fixmap(void) return ret; } =20 - return 0; + return create_fixblock(); } =20 int hyp_create_idmap(u32 hyp_va_bits) diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setu= p.c index c19860fc8183..a48d3f5a5afb 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -312,7 +312,7 @@ void __noreturn __pkvm_init_finalise(void) if (ret) goto out; =20 - ret =3D hyp_create_pcpu_fixmap(); + ret =3D hyp_create_fixmap(); if (ret) goto out; =20 diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index df5cc74a7dd0..c351b4abd5db 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -11,12 +11,6 @@ #include #include =20 - -#define KVM_PTE_TYPE BIT(1) -#define KVM_PTE_TYPE_BLOCK 0 -#define KVM_PTE_TYPE_PAGE 1 -#define KVM_PTE_TYPE_TABLE 1 - struct kvm_pgtable_walk_data { struct kvm_pgtable_walker *walker; =20 --=20 
2.49.0.1015.ga840276032-goog