From nobody Wed Dec 17 14:20:25 2025
Date: Fri, 18 Aug 2023 16:34:51 -0700
Message-ID: <20230818233451.3615464-1-srutherford@google.com>
X-Mailer: git-send-email 2.42.0.rc1.204.g551eb34607-goog
Subject: [PATCH] x86/sev: Make early_set_memory_decrypted() calls page aligned
From: Steve Rutherford <srutherford@google.com>
To: Borislav Petkov, Thomas Gleixner
Cc: Paolo Bonzini, Wanpeng Li, Vitaly Kuznetsov, Ingo Molnar, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, David.Kaplan@amd.com, jacobhxu@google.com,
    patelsvishal@google.com, bhillier@google.com, Steve Rutherford
X-Mailing-List: linux-kernel@vger.kernel.org

early_set_memory_decrypted() assumes its parameters are page aligned.
Non-page-aligned calls result in additional pages being marked as
decrypted via the encryption status hypercall, which results in
consistent corruption of pages during live migration. Live migration
requires accurate encryption status information to avoid migrating pages
from the wrong perspective.
Fixes: 4716276184ec ("X86/KVM: Decrypt shared per-cpu variables when SEV is active")
Signed-off-by: Steve Rutherford <srutherford@google.com>
Reviewed-by: Pankaj Gupta
---
 arch/x86/kernel/kvm.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 6a36db4f79fd..a0c072d3103c 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -419,7 +419,14 @@ static u64 kvm_steal_clock(int cpu)
 
 static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
 {
-	early_set_memory_decrypted((unsigned long) ptr, size);
+	/*
+	 * early_set_memory_decrypted() requires page aligned parameters, but
+	 * this function needs to handle ptrs offset into a page.
+	 */
+	unsigned long start = PAGE_ALIGN_DOWN((unsigned long) ptr);
+	unsigned long end = (unsigned long) ptr + size;
+
+	early_set_memory_decrypted(start, end - start);
 }
 
 /*
@@ -438,6 +445,11 @@ static void __init sev_map_percpu_data(void)
 		return;
 
 	for_each_possible_cpu(cpu) {
+		/*
+		 * Calling __set_percpu_decrypted() for each per-cpu variable is
+		 * inefficient, since it may decrypt the same page multiple times.
+		 * That said, it avoids the need for more complicated logic.
+		 */
 		__set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(apf_reason));
 		__set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(steal_time));
 		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(kvm_apic_eoi));
-- 
2.42.0.rc1.204.g551eb34607-goog