From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Arnd Bergmann, Naoya Horiguchi, Miaohe Lin,
	x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
	"J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
	Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov",
	luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
	ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
	ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret,
	tabba@google.com, Michael Roth, mhocko@suse.com, wei.w.wang@intel.com
Subject: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to create restricted user memory
Date: Fri, 2 Dec 2022 14:13:39 +0800
Message-Id: <20221202061347.1070246-2-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

From: "Kirill A. Shutemov"

Introduce the 'memfd_restricted' system call, with the ability to create
memory areas that are restricted from userspace access through ordinary
MMU operations (e.g.
read/write/mmap). The memory content is expected to be used through the
new in-kernel interface by a third kernel module.

memfd_restricted() is useful for scenarios where a file descriptor (fd)
can be used as an interface into mm, but userspace's ability to operate
on the fd needs to be restricted. Initially it is designed to provide
protections for KVM encrypted guest memory.

Normally KVM uses memfd memory by mmapping the memfd into KVM userspace
(e.g. QEMU) and then using the mmapped virtual address to set up the
mapping in the KVM secondary page table (e.g. EPT). With confidential
computing technologies like Intel TDX, the memfd memory may be encrypted
with a special key for a special software domain (e.g. a KVM guest) and
is not expected to be directly accessed by userspace. Precisely,
userspace access to such encrypted memory may lead to a host crash, so
it should be prevented.

memfd_restricted() provides the semantics required for KVM guest
encrypted memory support: an fd created with memfd_restricted() is going
to be used as the source of guest memory in a confidential computing
environment, and KVM can directly interact with core-mm without the need
to expose the memory content into KVM userspace.

KVM userspace is still in charge of the lifecycle of the fd. It should
pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to
obtain the physical memory page and then uses it to populate the KVM
secondary page table entries.

The userspace restricted memfd can be fallocate-ed or hole-punched from
userspace. When hole-punched, KVM is notified through the
invalidate_start()/invalidate_end() callbacks and gets the chance to
remove any mapped entries of the range in the secondary page tables.

A machine check can happen for memory pages in the restricted memfd;
instead of routing this directly to userspace, we call the error()
callback that KVM registered. KVM then gets the chance to handle it
correctly.
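The flow above can be sketched from the userspace side. This is a hypothetical illustration, not part of the patch: glibc provides no wrapper for the new call, so it is invoked by syscall number (451 on x86, per this patch); the `memfd_restricted()` wrapper and `create_guest_memory_fd()` helper names are made up for the example.

```c
/*
 * Hypothetical userspace sketch: invoke the new system call by number,
 * since glibc provides no wrapper. 451 is the x86 number wired up by
 * this patch; flags must currently be 0.
 */
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_memfd_restricted
#define __NR_memfd_restricted 451	/* x86 number added by this patch */
#endif

static long memfd_restricted(unsigned int flags)
{
	return syscall(__NR_memfd_restricted, flags);
}

/*
 * Create a restricted memfd for use as a guest memory source. Returns
 * the fd, or -1 with errno set (ENOSYS on kernels without this patch,
 * EINVAL for nonzero flags). The fd would then be handed to KVM rather
 * than mmap()ed: read/write/mmap on it are rejected by design.
 */
static long create_guest_memory_fd(void)
{
	return memfd_restricted(0);
}
```

Note that on a kernel without this series the call simply fails with ENOSYS, so a VMM can probe for the feature at startup.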
memfd_restricted() itself is implemented as a shim layer on top of real
memory file systems (currently tmpfs). Pages in restrictedmem are marked
as unmovable and unevictable; this is required for the current
confidential usage, but it might be changed in the future.

By default memfd_restricted() prevents userspace read, write and mmap.
By defining a new bit in the 'flags', it can be extended to support
other restricted semantics in the future.

The system call is currently wired up for the x86 arch.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

A reviewed-by/acked-by is still expected from the mm people.
---
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 include/linux/restrictedmem.h          |  71 ++++++
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/unistd.h      |   5 +-
 include/uapi/linux/magic.h             |   1 +
 kernel/sys_ni.c                        |   3 +
 mm/Kconfig                             |   4 +
 mm/Makefile                            |   1 +
 mm/memory-failure.c                    |   3 +
 mm/restrictedmem.c                     | 318 +++++++++++++++++++++++++
 11 files changed, 408 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/restrictedmem.h
 create mode 100644 mm/restrictedmem.c

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..dc70ba90247e 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
 448	i386	process_mrelease	sys_process_mrelease
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	i386	memfd_restricted	sys_memfd_restricted
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..06516abc8318 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node
	sys_set_mempolicy_home_node
+451	common	memfd_restricted	sys_memfd_restricted
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h
new file mode 100644
index 000000000000..c2700c5daa43
--- /dev/null
+++ b/include/linux/restrictedmem.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_RESTRICTEDMEM_H
+
+#include
+#include
+#include
+
+struct restrictedmem_notifier;
+
+struct restrictedmem_notifier_ops {
+	void (*invalidate_start)(struct restrictedmem_notifier *notifier,
+				 pgoff_t start, pgoff_t end);
+	void (*invalidate_end)(struct restrictedmem_notifier *notifier,
+			       pgoff_t start, pgoff_t end);
+	void (*error)(struct restrictedmem_notifier *notifier,
+		      pgoff_t start, pgoff_t end);
+};
+
+struct restrictedmem_notifier {
+	struct list_head list;
+	const struct restrictedmem_notifier_ops *ops;
+};
+
+#ifdef CONFIG_RESTRICTEDMEM
+
+void restrictedmem_register_notifier(struct file *file,
+				     struct restrictedmem_notifier *notifier);
+void restrictedmem_unregister_notifier(struct file *file,
+				       struct restrictedmem_notifier *notifier);
+
+int restrictedmem_get_page(struct file *file, pgoff_t offset,
+			   struct page **pagep, int *order);
+
+static inline bool file_is_restrictedmem(struct file *file)
+{
+	return file->f_inode->i_sb->s_magic == RESTRICTEDMEM_MAGIC;
+}
+
+void restrictedmem_error_page(struct page *page, struct address_space *mapping);
+
+#else
+
+static inline void restrictedmem_register_notifier(struct file *file,
+				     struct restrictedmem_notifier *notifier)
+{
+}
+
+static inline void restrictedmem_unregister_notifier(struct file *file,
+				       struct restrictedmem_notifier *notifier)
+{
+}
+
+static inline int restrictedmem_get_page(struct file *file, pgoff_t offset,
+					 struct page **pagep, int *order)
+{
+	return -1;
+}
+
+static inline bool file_is_restrictedmem(struct file *file)
+{
+	return false;
+}
+
+static inline void restrictedmem_error_page(struct page *page,
+					    struct address_space *mapping)
+{
+}
+
+#endif /* CONFIG_RESTRICTEDMEM */
+
+#endif /* _LINUX_RESTRICTEDMEM_H */
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index a34b0f9a9972..f9e9e0c820c5 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1056,6 +1056,7 @@ asmlinkage long sys_memfd_secret(unsigned int flags);
 asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
 					    unsigned long home_node,
 					    unsigned long flags);
+asmlinkage long sys_memfd_restricted(unsigned int flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..e93cd35e46d0 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#define __NR_memfd_restricted 451
+__SYSCALL(__NR_memfd_restricted, sys_memfd_restricted)
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index 6325d1d0e90f..8aa38324b90a 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -101,5 +101,6 @@
 #define DMA_BUF_MAGIC		0x444d4142	/* "DMAB" */
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
+#define RESTRICTEDMEM_MAGIC	0x5245534d	/* "RESM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..7c4a32cbd2e7 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free);
 /* memfd_secret */
 COND_SYSCALL(memfd_secret);
 
+/* memfd_restricted */
+COND_SYSCALL(memfd_restricted);
+
 /*
  * Architecture specific
  * weak syscall entries.
  */
diff --git a/mm/Kconfig b/mm/Kconfig
index 57e1d8c5b505..06b0e1d6b8c1 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1076,6 +1076,10 @@ config IO_MAPPING
 config SECRETMEM
 	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
 
+config RESTRICTEDMEM
+	bool
+	depends on TMPFS
+
 config ANON_VMA_NAME
 	bool "Anonymous VMA name support"
 	depends on PROC_FS && ADVISE_SYSCALLS && MMU
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e29..bcbb0edf9ba1 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -121,6 +121,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_SECRETMEM) += secretmem.o
+obj-$(CONFIG_RESTRICTEDMEM) += restrictedmem.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 145bb561ddb3..f91b444e471e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include
 #include "swap.h"
 #include "internal.h"
 #include "ras/ras_event.h"
@@ -940,6 +941,8 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 		goto out;
 	}
 
+	restrictedmem_error_page(p, mapping);
+
 	/*
 	 * The shmem page is kept in page cache instead of truncating
 	 * so is expected to have an extra refcount after error-handling.
diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c
new file mode 100644
index 000000000000..56953c204e5c
--- /dev/null
+++ b/mm/restrictedmem.c
@@ -0,0 +1,318 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "linux/sbitmap.h"
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct restrictedmem_data {
+	struct mutex lock;
+	struct file *memfd;
+	struct list_head notifiers;
+};
+
+static void restrictedmem_invalidate_start(struct restrictedmem_data *data,
+					   pgoff_t start, pgoff_t end)
+{
+	struct restrictedmem_notifier *notifier;
+
+	mutex_lock(&data->lock);
+	list_for_each_entry(notifier, &data->notifiers, list) {
+		notifier->ops->invalidate_start(notifier, start, end);
+	}
+	mutex_unlock(&data->lock);
+}
+
+static void restrictedmem_invalidate_end(struct restrictedmem_data *data,
+					 pgoff_t start, pgoff_t end)
+{
+	struct restrictedmem_notifier *notifier;
+
+	mutex_lock(&data->lock);
+	list_for_each_entry(notifier, &data->notifiers, list) {
+		notifier->ops->invalidate_end(notifier, start, end);
+	}
+	mutex_unlock(&data->lock);
+}
+
+static void restrictedmem_notifier_error(struct restrictedmem_data *data,
+					 pgoff_t start, pgoff_t end)
+{
+	struct restrictedmem_notifier *notifier;
+
+	mutex_lock(&data->lock);
+	list_for_each_entry(notifier, &data->notifiers, list) {
+		notifier->ops->error(notifier, start, end);
+	}
+	mutex_unlock(&data->lock);
+}
+
+static int restrictedmem_release(struct inode *inode, struct file *file)
+{
+	struct restrictedmem_data *data = inode->i_mapping->private_data;
+
+	fput(data->memfd);
+	kfree(data);
+	return 0;
+}
+
+static long restrictedmem_punch_hole(struct restrictedmem_data *data, int mode,
+				     loff_t offset, loff_t len)
+{
+	int ret;
+	pgoff_t start, end;
+	struct file *memfd = data->memfd;
+
+	if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
+		return -EINVAL;
+
+	start = offset >> PAGE_SHIFT;
+	end = (offset + len) >> PAGE_SHIFT;
+
+	restrictedmem_invalidate_start(data, start, end);
+	ret =
 memfd->f_op->fallocate(memfd, mode, offset, len);
+	restrictedmem_invalidate_end(data, start, end);
+
+	return ret;
+}
+
+static long restrictedmem_fallocate(struct file *file, int mode,
+				    loff_t offset, loff_t len)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+	struct file *memfd = data->memfd;
+
+	if (mode & FALLOC_FL_PUNCH_HOLE)
+		return restrictedmem_punch_hole(data, mode, offset, len);
+
+	return memfd->f_op->fallocate(memfd, mode, offset, len);
+}
+
+static const struct file_operations restrictedmem_fops = {
+	.release = restrictedmem_release,
+	.fallocate = restrictedmem_fallocate,
+};
+
+static int restrictedmem_getattr(struct user_namespace *mnt_userns,
+				 const struct path *path, struct kstat *stat,
+				 u32 request_mask, unsigned int query_flags)
+{
+	struct inode *inode = d_inode(path->dentry);
+	struct restrictedmem_data *data = inode->i_mapping->private_data;
+	struct file *memfd = data->memfd;
+
+	return memfd->f_inode->i_op->getattr(mnt_userns, path, stat,
+					     request_mask, query_flags);
+}
+
+static int restrictedmem_setattr(struct user_namespace *mnt_userns,
+				 struct dentry *dentry, struct iattr *attr)
+{
+	struct inode *inode = d_inode(dentry);
+	struct restrictedmem_data *data = inode->i_mapping->private_data;
+	struct file *memfd = data->memfd;
+	int ret;
+
+	if (attr->ia_valid & ATTR_SIZE) {
+		if (memfd->f_inode->i_size)
+			return -EPERM;
+
+		if (!PAGE_ALIGNED(attr->ia_size))
+			return -EINVAL;
+	}
+
+	ret = memfd->f_inode->i_op->setattr(mnt_userns,
+					    file_dentry(memfd), attr);
+	return ret;
+}
+
+static const struct inode_operations restrictedmem_iops = {
+	.getattr = restrictedmem_getattr,
+	.setattr = restrictedmem_setattr,
+};
+
+static int restrictedmem_init_fs_context(struct fs_context *fc)
+{
+	if (!init_pseudo(fc, RESTRICTEDMEM_MAGIC))
+		return -ENOMEM;
+
+	fc->s_iflags |= SB_I_NOEXEC;
+	return 0;
+}
+
+static struct file_system_type restrictedmem_fs = {
+	.owner = THIS_MODULE,
+	.name = "memfd:restrictedmem",
+	.init_fs_context = restrictedmem_init_fs_context,
+	.kill_sb = kill_anon_super,
+};
+
+static struct vfsmount *restrictedmem_mnt;
+
+static __init int restrictedmem_init(void)
+{
+	restrictedmem_mnt = kern_mount(&restrictedmem_fs);
+	if (IS_ERR(restrictedmem_mnt))
+		return PTR_ERR(restrictedmem_mnt);
+	return 0;
+}
+fs_initcall(restrictedmem_init);
+
+static struct file *restrictedmem_file_create(struct file *memfd)
+{
+	struct restrictedmem_data *data;
+	struct address_space *mapping;
+	struct inode *inode;
+	struct file *file;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+
+	data->memfd = memfd;
+	mutex_init(&data->lock);
+	INIT_LIST_HEAD(&data->notifiers);
+
+	inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb);
+	if (IS_ERR(inode)) {
+		kfree(data);
+		return ERR_CAST(inode);
+	}
+
+	inode->i_mode |= S_IFREG;
+	inode->i_op = &restrictedmem_iops;
+	inode->i_mapping->private_data = data;
+
+	file = alloc_file_pseudo(inode, restrictedmem_mnt,
+				 "restrictedmem", O_RDWR,
+				 &restrictedmem_fops);
+	if (IS_ERR(file)) {
+		iput(inode);
+		kfree(data);
+		return ERR_CAST(file);
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	/*
+	 * These pages are currently unmovable so don't place them into movable
+	 * pageblocks (e.g. CMA and ZONE_MOVABLE).
+	 */
+	mapping = memfd->f_mapping;
+	mapping_set_unevictable(mapping);
+	mapping_set_gfp_mask(mapping,
+			     mapping_gfp_mask(mapping) & ~__GFP_MOVABLE);
+
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_restricted, unsigned int, flags)
+{
+	struct file *file, *restricted_file;
+	int fd, err;
+
+	if (flags)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(0);
+	if (fd < 0)
+		return fd;
+
+	file = shmem_file_setup("memfd:restrictedmem", 0, VM_NORESERVE);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_fd;
+	}
+	file->f_mode |= FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE;
+	file->f_flags |= O_LARGEFILE;
+
+	restricted_file = restrictedmem_file_create(file);
+	if (IS_ERR(restricted_file)) {
+		err = PTR_ERR(restricted_file);
+		fput(file);
+		goto err_fd;
+	}
+
+	fd_install(fd, restricted_file);
+	return fd;
+err_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+void restrictedmem_register_notifier(struct file *file,
+				     struct restrictedmem_notifier *notifier)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+
+	mutex_lock(&data->lock);
+	list_add(&notifier->list, &data->notifiers);
+	mutex_unlock(&data->lock);
+}
+EXPORT_SYMBOL_GPL(restrictedmem_register_notifier);
+
+void restrictedmem_unregister_notifier(struct file *file,
+				       struct restrictedmem_notifier *notifier)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+
+	mutex_lock(&data->lock);
+	list_del(&notifier->list);
+	mutex_unlock(&data->lock);
+}
+EXPORT_SYMBOL_GPL(restrictedmem_unregister_notifier);
+
+int restrictedmem_get_page(struct file *file, pgoff_t offset,
+			   struct page **pagep, int *order)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+	struct file *memfd = data->memfd;
+	struct folio *folio;
+	struct page *page;
+	int ret;
+
+	ret = shmem_get_folio(file_inode(memfd), offset, &folio, SGP_WRITE);
+	if (ret)
+		return ret;
+
+	page = folio_file_page(folio, offset);
+	*pagep = page;
+	if (order)
+		*order =
 thp_order(compound_head(page));
+
+	SetPageUptodate(page);
+	unlock_page(page);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restrictedmem_get_page);
+
+void restrictedmem_error_page(struct page *page, struct address_space *mapping)
+{
+	struct super_block *sb = restrictedmem_mnt->mnt_sb;
+	struct inode *inode, *next;
+
+	if (!shmem_mapping(mapping))
+		return;
+
+	spin_lock(&sb->s_inode_list_lock);
+	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
+		struct restrictedmem_data *data = inode->i_mapping->private_data;
+		struct file *memfd = data->memfd;
+
+		if (memfd->f_mapping == mapping) {
+			pgoff_t start, end;
+
+			spin_unlock(&sb->s_inode_list_lock);
+
+			start = page->index;
+			end = start + thp_nr_pages(page);
+			restrictedmem_notifier_error(data, start, end);
+			return;
+		}
+	}
+	spin_unlock(&sb->s_inode_list_lock);
+}
-- 
2.25.1

From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
	qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Arnd Bergmann, Naoya Horiguchi, Miaohe Lin,
	x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
	"J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
	Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov",
	luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
	ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
	ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret,
	tabba@google.com, Michael Roth, mhocko@suse.com, wei.w.wang@intel.com
Subject: [PATCH v10 2/9] KVM: Introduce per-page memory attributes
Date: Fri, 2 Dec 2022 14:13:40 +0800
Message-Id: <20221202061347.1070246-3-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

In confidential computing usages, whether a page is private or shared is
necessary information for KVM to perform operations like page fault
handling, page zapping, etc. There are other potential use cases for
per-page memory attributes, e.g. to make memory read-only (or no-exec,
or exec-only, etc.) without having to modify memslots.

Introduce two ioctls (advertised by KVM_CAP_MEMORY_ATTRIBUTES) to allow
userspace to operate on the per-page memory attributes.
  - KVM_SET_MEMORY_ATTRIBUTES to set the per-page memory attributes for
    a guest memory range.
  - KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES to return the KVM supported
    memory attributes.

KVM internally uses an xarray to store the per-page memory attributes.

Suggested-by: Sean Christopherson
Signed-off-by: Chao Peng
Link: https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com/
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

A reviewed-by/acked-by is still expected from the mm people.
---
 Documentation/virt/kvm/api.rst | 63 ++++++++++++++++++++++++++++
 arch/x86/kvm/Kconfig           |  1 +
 include/linux/kvm_host.h       |  3 ++
 include/uapi/linux/kvm.h       | 17 ++++++++
 virt/kvm/Kconfig               |  3 ++
 virt/kvm/kvm_main.c            | 76 ++++++++++++++++++++++++++++++++++
 6 files changed, 163 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 5617bc4f899f..bb2f709c0900 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5952,6 +5952,59 @@ delivery must be provided via the "reg_aen" struct.
 The "pad" and "reserved" fields may be used for future extensions and should
 be set to 0s by userspace.
 
+4.138 KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES
+-----------------------------------------
+
+:Capability: KVM_CAP_MEMORY_ATTRIBUTES
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: u64 memory attributes bitmask (out)
+:Returns: 0 on success, <0 on error
+
+Returns the supported memory attributes bitmask. Supported memory attributes
+will have the corresponding bits set in the u64 memory attributes bitmask.
+
+The following memory attributes are defined::
+
+  #define KVM_MEMORY_ATTRIBUTE_READ              (1ULL << 0)
+  #define KVM_MEMORY_ATTRIBUTE_WRITE             (1ULL << 1)
+  #define KVM_MEMORY_ATTRIBUTE_EXECUTE           (1ULL << 2)
+  #define KVM_MEMORY_ATTRIBUTE_PRIVATE           (1ULL << 3)
+
+4.139 KVM_SET_MEMORY_ATTRIBUTES
+-------------------------------
+
+:Capability: KVM_CAP_MEMORY_ATTRIBUTES
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_memory_attributes (in/out)
+:Returns: 0 on success, <0 on error
+
+Sets memory attributes for pages in a guest memory range.
+Parameters are specified via the following structure::
+
+  struct kvm_memory_attributes {
+	__u64 address;
+	__u64 size;
+	__u64 attributes;
+	__u64 flags;
+  };
+
+The user sets the per-page memory attributes for the guest memory range
+indicated by address/size, and in return KVM adjusts address and size to
+reflect which pages of the memory range have been successfully set to the
+attributes. If the call returns 0, "address" is updated to the last
+successful address + 1 and "size" is updated to the remaining address size
+that has not been set successfully. The user should check the return value
+as well as the size to decide whether the operation succeeded for the whole
+range or not. The user may want to retry the operation with the returned
+address/size if the previous range was only partially successful.
+
+Both address and size should be page aligned, and the supported attributes
+can be retrieved with KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES.
+
+The "flags" field may be used for future extensions and should be set to 0s.
+
 5. The kvm_run structure
 ========================
 
@@ -8270,6 +8323,16 @@ structure. When getting the Modified Change Topology
 Report value, the attr->addr must point to a byte where the value will be
 stored or retrieved from.
 
+8.40 KVM_CAP_MEMORY_ATTRIBUTES
+------------------------------
+
+:Capability: KVM_CAP_MEMORY_ATTRIBUTES
+:Architectures: x86
+:Type: vm
+
+This capability indicates KVM supports per-page memory attributes and the
+KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES/KVM_SET_MEMORY_ATTRIBUTES ioctls are
+available.
+
 9.
 Known KVM API problems
 =========================
 
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fbeaa9ddef59..a8e379a3afee 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -49,6 +49,7 @@ config KVM
 	select SRCU
 	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
+	select HAVE_KVM_MEMORY_ATTRIBUTES
 	help
 	  Support hosting fully virtualized guest machines using hardware
 	  virtualization extensions.  You will need a fairly recent
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8f874a964313..a784e2b06625 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -800,6 +800,9 @@ struct kvm {
 
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 	struct notifier_block pm_notifier;
+#endif
+#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES
+	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
 };
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 64dfe9c07c87..5d0941acb5bb 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1182,6 +1182,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224
+#define KVM_CAP_MEMORY_ATTRIBUTES 225
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -2238,4 +2239,20 @@ struct kvm_s390_zpci_op {
 /* flags for kvm_s390_zpci_op->u.reg_aen.flags */
 #define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0)
 
+/* Available with KVM_CAP_MEMORY_ATTRIBUTES */
+#define KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES	_IOR(KVMIO, 0xd2, __u64)
+#define KVM_SET_MEMORY_ATTRIBUTES		_IOWR(KVMIO, 0xd3, struct kvm_memory_attributes)
+
+struct kvm_memory_attributes {
+	__u64 address;
+	__u64 size;
+	__u64 attributes;
+	__u64 flags;
+};
+
+#define KVM_MEMORY_ATTRIBUTE_READ              (1ULL << 0)
+#define KVM_MEMORY_ATTRIBUTE_WRITE             (1ULL << 1)
+#define KVM_MEMORY_ATTRIBUTE_EXECUTE           (1ULL << 2)
+#define KVM_MEMORY_ATTRIBUTE_PRIVATE           (1ULL << 3)
+
 #endif
/* __LINUX_KVM_H */ diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 800f9470e36b..effdea5dd4f0 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -19,6 +19,9 @@ config HAVE_KVM_IRQ_ROUTING config HAVE_KVM_DIRTY_RING bool =20 +config HAVE_KVM_MEMORY_ATTRIBUTES + bool + # Only strongly ordered architectures can select this, as it doesn't # put any explicit constraint on userspace ordering. They can also # select the _ACQ_REL version. diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1782c4555d94..7f0f5e9f2406 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1150,6 +1150,9 @@ static struct kvm *kvm_create_vm(unsigned long type, = const char *fdname) spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); xa_init(&kvm->vcpu_array); +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + xa_init(&kvm->mem_attr_array); +#endif =20 INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -1323,6 +1326,9 @@ static void kvm_destroy_vm(struct kvm *kvm) kvm_free_memslots(kvm, &kvm->__memslots[i][0]); kvm_free_memslots(kvm, &kvm->__memslots[i][1]); } +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + xa_destroy(&kvm->mem_attr_array); +#endif cleanup_srcu_struct(&kvm->irq_srcu); cleanup_srcu_struct(&kvm->srcu); kvm_arch_free_vm(kvm); @@ -2323,6 +2329,49 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *= kvm, } #endif /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */ =20 +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES +static u64 kvm_supported_mem_attributes(struct kvm *kvm) +{ + return 0; +} + +static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, + struct kvm_memory_attributes *attrs) +{ + gfn_t start, end; + unsigned long i; + void *entry; + u64 supported_attrs =3D kvm_supported_mem_attributes(kvm); + + /* flags is currently not used. 
*/ + if (attrs->flags) + return -EINVAL; + if (attrs->attributes & ~supported_attrs) + return -EINVAL; + if (attrs->size =3D=3D 0 || attrs->address + attrs->size < attrs->address) + return -EINVAL; + if (!PAGE_ALIGNED(attrs->address) || !PAGE_ALIGNED(attrs->size)) + return -EINVAL; + + start =3D attrs->address >> PAGE_SHIFT; + end =3D (attrs->address + attrs->size - 1 + PAGE_SIZE) >> PAGE_SHIFT; + + entry =3D attrs->attributes ? xa_mk_value(attrs->attributes) : NULL; + + mutex_lock(&kvm->lock); + for (i =3D start; i < end; i++) + if (xa_err(xa_store(&kvm->mem_attr_array, i, entry, + GFP_KERNEL_ACCOUNT))) + break; + mutex_unlock(&kvm->lock); + + attrs->address =3D i << PAGE_SHIFT; + attrs->size =3D (end - i) << PAGE_SHIFT; + + return 0; +} +#endif /* CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES */ + struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn) { return __gfn_to_memslot(kvm_memslots(kvm), gfn); @@ -4459,6 +4508,9 @@ static long kvm_vm_ioctl_check_extension_generic(stru= ct kvm *kvm, long arg) #ifdef CONFIG_HAVE_KVM_MSI case KVM_CAP_SIGNAL_MSI: #endif +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + case KVM_CAP_MEMORY_ATTRIBUTES: +#endif #ifdef CONFIG_HAVE_KVM_IRQFD case KVM_CAP_IRQFD: case KVM_CAP_IRQFD_RESAMPLE: @@ -4804,6 +4856,30 @@ static long kvm_vm_ioctl(struct file *filp, break; } #endif /* CONFIG_HAVE_KVM_IRQ_ROUTING */ +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + case KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES: { + u64 attrs =3D kvm_supported_mem_attributes(kvm); + + r =3D -EFAULT; + if (copy_to_user(argp, &attrs, sizeof(attrs))) + goto out; + r =3D 0; + break; + } + case KVM_SET_MEMORY_ATTRIBUTES: { + struct kvm_memory_attributes attrs; + + r =3D -EFAULT; + if (copy_from_user(&attrs, argp, sizeof(attrs))) + goto out; + + r =3D kvm_vm_ioctl_set_mem_attributes(kvm, &attrs); + + if (!r && copy_to_user(argp, &attrs, sizeof(attrs))) + r =3D -EFAULT; + break; + } +#endif /* CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES */ case KVM_CREATE_DEVICE: { struct kvm_create_device cd; 
-- 
2.25.1

From: Chao Peng
Subject: [PATCH v10 3/9] KVM: Extend the memslot to support fd-based private memory
Date: Fri, 2 Dec 2022 14:13:41 +0800
Message-Id: <20221202061347.1070246-4-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

In memory encryption usage, guest memory may be encrypted with a special
key and can be accessed only by the guest itself.
We call such memory private memory. Allowing userspace to access guest
private memory has little value and can sometimes cause problems; this
new KVM memslot extension therefore allows guest private memory to be
provided through a restrictedmem-backed file descriptor (fd), with
userspace access to the memory in the fd restricted.

The extension, indicated by the new flag KVM_MEM_PRIVATE, adds two
additional memslot fields, restricted_fd/restricted_offset, which let
userspace instruct KVM to provide guest memory through restricted_fd:
'guest_phys_addr' is mapped at restricted_offset within restricted_fd,
and the size is 'memory_size'.

The extended memslot can still have a userspace_addr (hva). When used,
a single memslot can maintain both private memory through restricted_fd
and shared memory through userspace_addr. Whether the private or shared
part is visible to the guest is maintained by other KVM code.

A restrictedmem_notifier field is also added to the memslot structure
so that restricted_fd's backing store can notify KVM of memory changes;
KVM can then invalidate its page table entries or handle memory errors.

Together with this change, a new config HAVE_KVM_RESTRICTED_MEM is
added; right now it is selected on X86_64 only.

To ease future maintenance, a binary-compatible alias struct
kvm_user_mem_region is used internally to handle both the normal and
the '_ext' variants.

Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

Reviewed-by/Acked-by is still expected from the mm people.
---
 Documentation/virt/kvm/api.rst | 40 ++++++++++++++++++++++-----
 arch/x86/kvm/Kconfig           |  2 ++
 arch/x86/kvm/x86.c             |  2 +-
 include/linux/kvm_host.h       |  8 ++++--
 include/uapi/linux/kvm.h       | 28 +++++++++++++++++++
 virt/kvm/Kconfig               |  3 +++
 virt/kvm/kvm_main.c            | 49 ++++++++++++++++++++++++++++------
 7 files changed, 114 insertions(+), 18 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index bb2f709c0900..99352170c130 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1319,7 +1319,7 @@ yet and must be cleared on entry.
 :Capability: KVM_CAP_USER_MEMORY
 :Architectures: all
 :Type: vm ioctl
-:Parameters: struct kvm_userspace_memory_region (in)
+:Parameters: struct kvm_userspace_memory_region(_ext) (in)
 :Returns: 0 on success, -1 on error

 ::

@@ -1332,9 +1332,18 @@ yet and must be cleared on entry.
 	__u64 userspace_addr; /* start of the userspace allocated memory */
   };

+  struct kvm_userspace_memory_region_ext {
+	struct kvm_userspace_memory_region region;
+	__u64 restricted_offset;
+	__u32 restricted_fd;
+	__u32 pad1;
+	__u64 pad2[14];
+  };
+
   /* for kvm_memory_region::flags */
   #define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)
   #define KVM_MEM_READONLY	(1UL << 1)
+  #define KVM_MEM_PRIVATE	(1UL << 2)

 This ioctl allows the user to create, modify or delete a guest physical
 memory slot.  Bits 0-15 of "slot" specify the slot id and this value
@@ -1365,12 +1374,29 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
 be identical.  This allows large pages in the guest to be backed by large
 pages in the host.

-The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and
-KVM_MEM_READONLY.  The former can be set to instruct KVM to keep track of
-writes to memory within the slot.  See KVM_GET_DIRTY_LOG ioctl to know how to
-use it.  The latter can be set, if KVM_CAP_READONLY_MEM capability allows it,
-to make a new slot read-only.  In this case, writes to this memory will be
-posted to userspace as KVM_EXIT_MMIO exits.
+The kvm_userspace_memory_region_ext struct includes all fields of the
+kvm_userspace_memory_region struct and adds additional fields for some
+other features. See the description of the flags field below for more
+information. It is recommended to use kvm_userspace_memory_region_ext in
+new userspace code.
+
+The flags field supports the following flags:
+
+- KVM_MEM_LOG_DIRTY_PAGES to instruct KVM to keep track of writes to memory
+  within the slot. For more details, see the KVM_GET_DIRTY_LOG ioctl.
+
+- KVM_MEM_READONLY, if KVM_CAP_READONLY_MEM allows, to make a new slot
+  read-only. In this case, writes to this memory will be posted to userspace
+  as KVM_EXIT_MMIO exits.
+
+- KVM_MEM_PRIVATE, if KVM_MEMORY_ATTRIBUTE_PRIVATE is supported (see the
+  KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES ioctl), to indicate that a new slot has
+  private memory backed by a file descriptor (fd) whose userspace access may
+  be restricted. Userspace should use restricted_fd/restricted_offset in the
+  kvm_userspace_memory_region_ext to instruct KVM to provide private memory
+  to the guest. Userspace must not map the same host physical address
+  indicated by restricted_fd/restricted_offset to different guest physical
+  addresses within multiple memslots; failure to do so may result in
+  undefined behavior.

 When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of
 the memory region are automatically reflected into the guest.  For example, an
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index a8e379a3afee..690cb21010e7 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -50,6 +50,8 @@ config KVM
 	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select HAVE_KVM_MEMORY_ATTRIBUTES
+	select HAVE_KVM_RESTRICTED_MEM if X86_64
+	select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM
 	help
 	  Support hosting fully virtualized guest machines using hardware
 	  virtualization extensions.  You will need a fairly recent
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7f850dfb4086..9a07380f8d3c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12224,7 +12224,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 	}

 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		struct kvm_userspace_memory_region m;
+		struct kvm_user_mem_region m;

 		m.slot = id | (i << 16);
 		m.flags = 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a784e2b06625..02347e386ea2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -44,6 +44,7 @@

 #include
 #include
+#include

 #ifndef KVM_MAX_VCPU_IDS
 #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS
@@ -585,6 +586,9 @@ struct kvm_memory_slot {
 	u32 flags;
 	short id;
 	u16 as_id;
+	struct file *restricted_file;
+	loff_t restricted_offset;
+	struct restrictedmem_notifier notifier;
 };

 static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot)
@@ -1123,9 +1127,9 @@ enum kvm_mr_change {
 };

 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem);
+			  const struct kvm_user_mem_region *mem);
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem);
+			    const struct kvm_user_mem_region *mem);
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5d0941acb5bb..13bff963b8b0 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -103,6 +103,33 @@ struct kvm_userspace_memory_region {
 	__u64 userspace_addr; /* start of the userspace allocated memory */
 };

+struct kvm_userspace_memory_region_ext {
+	struct kvm_userspace_memory_region region;
+	__u64 restricted_offset;
+	__u32 restricted_fd;
+	__u32 pad1;
+	__u64 pad2[14];
+};
+
+#ifdef __KERNEL__
+/*
+ * kvm_user_mem_region is a kernel-only alias of kvm_userspace_memory_region_ext
+ * that "unpacks" kvm_userspace_memory_region so that KVM can directly access
+ * all fields from the top-level "extended" region.
+ */
+struct kvm_user_mem_region {
+	__u32 slot;
+	__u32 flags;
+	__u64 guest_phys_addr;
+	__u64 memory_size;
+	__u64 userspace_addr;
+	__u64 restricted_offset;
+	__u32 restricted_fd;
+	__u32 pad1;
+	__u64 pad2[14];
+};
+#endif
+
 /*
  * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace,
  * other bits are reserved for kvm internal use which are defined in
@@ -110,6 +137,7 @@ struct kvm_userspace_memory_region {
  */
 #define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)
 #define KVM_MEM_READONLY	(1UL << 1)
+#define KVM_MEM_PRIVATE		(1UL << 2)

 /* for KVM_IRQ_LINE */
 struct kvm_irq_level {
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index effdea5dd4f0..d605545d6dd1 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -89,3 +89,6 @@ config KVM_XFER_TO_GUEST_WORK

 config HAVE_KVM_PM_NOTIFIER
 	bool
+
+config HAVE_KVM_RESTRICTED_MEM
+	bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7f0f5e9f2406..b882eb2c76a2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1532,7 +1532,7 @@ static void kvm_replace_memslot(struct kvm *kvm,
 	}
 }

-static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
+static int check_memory_region_flags(const struct kvm_user_mem_region *mem)
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;

@@ -1934,7 +1934,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
  * Must be called holding kvm->slots_lock for write.
  */
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem)
+			    const struct kvm_user_mem_region *mem)
 {
 	struct kvm_memory_slot *old, *new;
 	struct kvm_memslots *slots;
@@ -2038,7 +2038,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);

 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem)
+			  const struct kvm_user_mem_region *mem)
 {
 	int r;

@@ -2050,7 +2050,7 @@ int kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(kvm_set_memory_region);

 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-					  struct kvm_userspace_memory_region *mem)
+					  struct kvm_user_mem_region *mem)
 {
 	if ((u16)mem->slot >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
@@ -4698,6 +4698,33 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm)
 	return fd;
 }

+#define SANITY_CHECK_MEM_REGION_FIELD(field)					\
+do {										\
+	BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) !=		\
+		     offsetof(struct kvm_userspace_memory_region, field));	\
+	BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) !=		\
+		     sizeof_field(struct kvm_userspace_memory_region, field));	\
+} while (0)
+
+#define SANITY_CHECK_MEM_REGION_EXT_FIELD(field)				\
+do {										\
+	BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) !=		\
+		     offsetof(struct kvm_userspace_memory_region_ext, field));	\
+	BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) !=		\
+		     sizeof_field(struct kvm_userspace_memory_region_ext, field)); \
+} while (0)
+
+static void kvm_sanity_check_user_mem_region_alias(void)
+{
+	SANITY_CHECK_MEM_REGION_FIELD(slot);
+	SANITY_CHECK_MEM_REGION_FIELD(flags);
+	SANITY_CHECK_MEM_REGION_FIELD(guest_phys_addr);
+	SANITY_CHECK_MEM_REGION_FIELD(memory_size);
+	SANITY_CHECK_MEM_REGION_FIELD(userspace_addr);
+
+	SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_offset);
+	SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_fd);
+}
+
 static long kvm_vm_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -4721,14 +4748,20 @@ static long kvm_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_SET_USER_MEMORY_REGION: {
-		struct kvm_userspace_memory_region kvm_userspace_mem;
+		struct kvm_user_mem_region mem;
+		unsigned long size = sizeof(struct kvm_userspace_memory_region);
+
+		kvm_sanity_check_user_mem_region_alias();

 		r = -EFAULT;
-		if (copy_from_user(&kvm_userspace_mem, argp,
-				   sizeof(kvm_userspace_mem)))
+		if (copy_from_user(&mem, argp, size))
+			goto out;
+
+		r = -EINVAL;
+		if (mem.flags & KVM_MEM_PRIVATE)
 			goto out;

-		r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem);
+		r = kvm_vm_ioctl_set_memory_region(kvm, &mem);
 		break;
 	}
 	case KVM_GET_DIRTY_LOG: {
-- 
2.25.1

From: Chao Peng
Subject: [PATCH v10 4/9] KVM: Add KVM_EXIT_MEMORY_FAULT exit
Date: Fri, 2 Dec 2022 14:13:42 +0800
Message-Id: <20221202061347.1070246-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

This new KVM exit allows userspace to handle memory-related errors. It
indicates that an error happened in KVM at the guest memory range
[gpa, gpa+size). The 'flags' field carries additional information to
help userspace handle the error.
Currently bit 0 is defined as 'private memory', where '1' indicates that
the error happened due to a private memory access and '0' indicates that
it happened due to a shared memory access.

When private memory is enabled, this new exit will be used by KVM to exit
to userspace for shared <-> private memory conversion in memory encryption
usage. In such usage, there are typically two kinds of memory conversion:

- explicit conversion: happens when the guest explicitly calls into KVM to
  map a range (as private or shared); KVM then exits to userspace to
  perform the map/unmap operations.
- implicit conversion: happens in the KVM page fault handler, where KVM
  exits to userspace for an implicit conversion when the page is in a
  different state than requested (private or shared).

Suggested-by: Sean Christopherson
Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

Reviewed-by/Acked-by is still expected from the mm people.
---
 Documentation/virt/kvm/api.rst | 22 ++++++++++++++++++++++
 include/uapi/linux/kvm.h       |  8 ++++++++
 2 files changed, 30 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 99352170c130..d9edb14ce30b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6634,6 +6634,28 @@ array field represents return values. The userspace should update the return
 values of SBI call before resuming the VCPU. For more details on RISC-V SBI
 spec refer, https://github.com/riscv/riscv-sbi-doc.

+::
+
+		/* KVM_EXIT_MEMORY_FAULT */
+		struct {
+  #define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 0)
+			__u64 flags;
+			__u64 gpa;
+			__u64 size;
+		} memory;
+
+If the exit reason is KVM_EXIT_MEMORY_FAULT, it indicates that the VCPU has
+encountered a memory error which is not handled by the KVM kernel module and
+which userspace may choose to handle. The 'flags' field indicates the memory
+properties of the exit.
+
+ - KVM_MEMORY_EXIT_FLAG_PRIVATE - indicates the memory error is caused by a
+   private memory access when the bit is set; otherwise the memory error is
+   caused by a shared memory access.
+
+'gpa' and 'size' indicate the memory range at which the error occurs.
+Userspace may handle the error and return to KVM to retry the previous
+memory access.
+
 ::

 		/* KVM_EXIT_NOTIFY */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 13bff963b8b0..c7e9d375a902 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -300,6 +300,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_RISCV_SBI        35
 #define KVM_EXIT_RISCV_CSR        36
 #define KVM_EXIT_NOTIFY           37
+#define KVM_EXIT_MEMORY_FAULT     38

 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -541,6 +542,13 @@ struct kvm_run {
 #define KVM_NOTIFY_CONTEXT_INVALID	(1 << 0)
 			__u32 flags;
 		} notify;
+		/* KVM_EXIT_MEMORY_FAULT */
+		struct {
+#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 0)
+			__u64 flags;
+			__u64 gpa;
+			__u64 size;
+		} memory;
 		/* Fix the size of the union.
*/ char padding[256]; }; --=20 2.25.1 From nobody Sun Apr 28 12:22:13 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass header.i=@intel.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=linux.intel.com ARC-Seal: i=1; a=rsa-sha256; t=1669961987; cv=none; d=zohomail.com; s=zohoarc; b=kBPOqzD+EbiIkhkeWQ8FXFkYlIwYK1oNbzy96C13UIN/MDkqjEUNjnMti2Xq9UYYB9XzHEoFQpaiA+i1gUuwdkFHD8unrosinkMqCLeP5dZ2DHLKlbeD2ltqrqRW6meDMnv8UeAKG736gDVd6m1C9XOyjvWd0wD6M1oranjSjGE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1669961987; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=Dq7aaqrADLkgD07Cyv5Y6vzScXo2Gft3dqFO6/czcno=; b=L8FG1svG5v+Z6pRCLTQSpHE4pAlgTVLm8DC73Op4oyFQUY2rWdywuyPkQzXwfqvi9acHEC+YMf9iNhlHwRIa2TfIE70F+l7UxYKJpCAPF6AFP8+rmYY1lQOVJQ1FWsAvQ21dnEyjxbYTowPAKD2IbuH/0eUH041/TatNJfn2tjA= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass header.i=@intel.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1669961987004701.7719451063757; Thu, 1 Dec 2022 22:19:47 -0800 (PST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p0zOH-0004KN-Uv; Fri, 02 Dec 2022 01:19:18 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p0zOG-0004Jh-NW for qemu-devel@nongnu.org; Fri, 02 Dec 2022 01:19:16 
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: [PATCH v10 5/9] KVM: Use gfn instead of hva for mmu_notifier_retry
Date: Fri, 2 Dec 2022 14:13:43 +0800
Message-Id: <20221202061347.1070246-6-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

Currently, in the mmu_notifier invalidate path, an hva range is recorded and
then checked against by mmu_notifier_retry_hva() in the page
fault handling path. However, for the soon-to-be-introduced private memory, a
page fault may not have an hva associated with it, so checking the gfn (gpa)
makes more sense. For existing hva-based shared memory, the gfn is expected to
work as well. The only downside is that when multiple gfns alias a single hva,
the current algorithm of checking multiple ranges could result in a much
larger range being rejected. Such aliasing should be uncommon, so the impact
is expected to be small.

Suggested-by: Sean Christopherson
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

reviewed-by/acked-by is still expected from the mm people.
---
 arch/x86/kvm/mmu/mmu.c   |  8 +++++---
 include/linux/kvm_host.h | 33 +++++++++++++++++++++------------
 virt/kvm/kvm_main.c      | 32 +++++++++++++++++++++++---------
 3 files changed, 49 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4736d7849c60..e2c70b5afa3e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4259,7 +4259,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 		return true;

 	return fault->slot &&
-	       mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+	       mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn);
 }

 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -6098,7 +6098,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)

 	write_lock(&kvm->mmu_lock);

-	kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
+	kvm_mmu_invalidate_begin(kvm);
+
+	kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);

 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);

@@ -6112,7 +6114,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
 						   gfn_end - gfn_start);

-	kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
+	kvm_mmu_invalidate_end(kvm);

 	write_unlock(&kvm->mmu_lock);
 }
diff --git a/include/linux/kvm_host.h
b/include/linux/kvm_host.h
index 02347e386ea2..3d69484d2704 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -787,8 +787,8 @@ struct kvm {
 	struct mmu_notifier mmu_notifier;
 	unsigned long mmu_invalidate_seq;
 	long mmu_invalidate_in_progress;
-	unsigned long mmu_invalidate_range_start;
-	unsigned long mmu_invalidate_range_end;
+	gfn_t mmu_invalidate_range_start;
+	gfn_t mmu_invalidate_range_end;
 #endif
 	struct list_head devices;
 	u64 manual_dirty_log_protect;
@@ -1389,10 +1389,9 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 #endif

-void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
-			      unsigned long end);
-void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
-			    unsigned long end);
+void kvm_mmu_invalidate_begin(struct kvm *kvm);
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end);
+void kvm_mmu_invalidate_end(struct kvm *kvm);

 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
@@ -1963,9 +1962,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq)
 	return 0;
 }

-static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
+static inline int mmu_invalidate_retry_gfn(struct kvm *kvm,
 					   unsigned long mmu_seq,
-					   unsigned long hva)
+					   gfn_t gfn)
 {
 	lockdep_assert_held(&kvm->mmu_lock);
 	/*
@@ -1974,10 +1973,20 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm,
 	 * that might be being invalidated. Note that it may include some false
 	 * positives, due to shortcuts when handing concurrent invalidations.
 	 */
-	if (unlikely(kvm->mmu_invalidate_in_progress) &&
-	    hva >= kvm->mmu_invalidate_range_start &&
-	    hva < kvm->mmu_invalidate_range_end)
-		return 1;
+	if (unlikely(kvm->mmu_invalidate_in_progress)) {
+		/*
+		 * Dropping mmu_lock after bumping mmu_invalidate_in_progress
+		 * but before updating the range is a KVM bug.
+		 */
+		if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA ||
+				 kvm->mmu_invalidate_range_end == INVALID_GPA))
+			return 1;
+
+		if (gfn >= kvm->mmu_invalidate_range_start &&
+		    gfn < kvm->mmu_invalidate_range_end)
+			return 1;
+	}
+
 	if (kvm->mmu_invalidate_seq != mmu_seq)
 		return 1;
 	return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b882eb2c76a2..ad55dfbc75d7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -540,9 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,

 typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);

-typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
-			     unsigned long end);
-
+typedef void (*on_lock_fn_t)(struct kvm *kvm);
 typedef void (*on_unlock_fn_t)(struct kvm *kvm);

 struct kvm_hva_range {
@@ -628,7 +626,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 				locked = true;
 				KVM_MMU_LOCK(kvm);
 				if (!IS_KVM_NULL_FN(range->on_lock))
-					range->on_lock(kvm, range->start, range->end);
+					range->on_lock(kvm);
+
 				if (IS_KVM_NULL_FN(range->handler))
 					break;
 			}
@@ -715,8 +714,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
 }

-void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
-			      unsigned long end)
+void kvm_mmu_invalidate_begin(struct kvm *kvm)
 {
 	/*
 	 * The count increase must become visible at unlock time as no
@@ -724,6 +722,17 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
 	 * count is also read inside the mmu_lock critical section.
 	 */
 	kvm->mmu_invalidate_in_progress++;
+
+	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
+		kvm->mmu_invalidate_range_start = INVALID_GPA;
+		kvm->mmu_invalidate_range_end = INVALID_GPA;
+	}
+}
+
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
+
 	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
 		kvm->mmu_invalidate_range_start = start;
 		kvm->mmu_invalidate_range_end = end;
@@ -744,6 +753,12 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
 	}
 }

+static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	kvm_mmu_invalidate_range_add(kvm, range->start, range->end);
+	return kvm_unmap_gfn_range(kvm, range);
+}
+
 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
@@ -752,7 +767,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 		.start		= range->start,
 		.end		= range->end,
 		.pte		= __pte(0),
-		.handler	= kvm_unmap_gfn_range,
+		.handler	= kvm_mmu_unmap_gfn_range,
 		.on_lock	= kvm_mmu_invalidate_begin,
 		.on_unlock	= kvm_arch_guest_memory_reclaimed,
 		.flush_on_ret	= true,
@@ -791,8 +806,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	return 0;
 }

-void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start,
-			    unsigned long end)
+void kvm_mmu_invalidate_end(struct kvm *kvm)
 {
 	/*
 	 * This sequence increase will notify the kvm page fault that
-- 
2.25.1
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: [PATCH v10 6/9] KVM: Unmap existing mappings when change the memory attributes
Date: Fri, 2 Dec 2022 14:13:44 +0800
Message-Id: <20221202061347.1070246-7-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

Unmap the existing guest mappings when the memory attribute is changed
between shared and private.
This is needed because shared pages and private pages come from different
backends; unmapping the existing ones gives the page fault handler a chance
to re-populate the mappings according to the new attribute. Only
architectures with private memory support need this, and such architectures
are expected to override the weak kvm_arch_has_private_mem().

Also, between the memory attribute change and the unmapping, a page fault
may occur in the same memory range and could leave the page in an incorrect
state, so invoke the kvm_mmu_invalidate_* helpers to make the page fault
handler retry during this time frame.

Signed-off-by: Chao Peng

reviewed-by/acked-by is still expected from the mm people.
---
 include/linux/kvm_host.h |   7 +-
 virt/kvm/kvm_main.c      | 168 ++++++++++++++++++++++++++-------------
 2 files changed, 116 insertions(+), 59 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3d69484d2704..3331c0c92838 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -255,7 +255,6 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif

-#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
 struct kvm_gfn_range {
 	struct kvm_memory_slot *slot;
 	gfn_t start;
@@ -264,6 +263,8 @@ struct kvm_gfn_range {
 	bool may_block;
 };
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
+
+#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
@@ -785,11 +786,12 @@ struct kvm {

 #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
 	struct mmu_notifier mmu_notifier;
+#endif
 	unsigned long mmu_invalidate_seq;
 	long mmu_invalidate_in_progress;
 	gfn_t mmu_invalidate_range_start;
 	gfn_t mmu_invalidate_range_end;
-#endif
+
 	struct list_head devices;
 	u64 manual_dirty_log_protect;
 	struct
dentry *debugfs_dentry;
@@ -1480,6 +1482,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_post_init_vm(struct kvm *kvm);
 void kvm_arch_pre_destroy_vm(struct kvm *kvm);
 int kvm_arch_create_vm_debugfs(struct kvm *kvm);
+bool kvm_arch_has_private_mem(struct kvm *kvm);

 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ad55dfbc75d7..4e1e1e113bf0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -520,6 +520,62 @@ void kvm_destroy_vcpus(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_destroy_vcpus);

+void kvm_mmu_invalidate_begin(struct kvm *kvm)
+{
+	/*
+	 * The count increase must become visible at unlock time as no
+	 * spte can be established without taking the mmu_lock and
+	 * count is also read inside the mmu_lock critical section.
+	 */
+	kvm->mmu_invalidate_in_progress++;
+
+	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
+		kvm->mmu_invalidate_range_start = INVALID_GPA;
+		kvm->mmu_invalidate_range_end = INVALID_GPA;
+	}
+}
+
+void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
+
+	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
+		kvm->mmu_invalidate_range_start = start;
+		kvm->mmu_invalidate_range_end = end;
+	} else {
+		/*
+		 * Fully tracking multiple concurrent ranges has diminishing
+		 * returns. Keep things simple and just find the minimal range
+		 * which includes the current and new ranges. As there won't be
+		 * enough information to subtract a range after its invalidate
+		 * completes, any ranges invalidated concurrently will
+		 * accumulate and persist until all outstanding invalidates
+		 * complete.
+		 */
+		kvm->mmu_invalidate_range_start =
+			min(kvm->mmu_invalidate_range_start, start);
+		kvm->mmu_invalidate_range_end =
+			max(kvm->mmu_invalidate_range_end, end);
+	}
+}
+
+void kvm_mmu_invalidate_end(struct kvm *kvm)
+{
+	/*
+	 * This sequence increase will notify the kvm page fault that
+	 * the page that is going to be mapped in the spte could have
+	 * been freed.
+	 */
+	kvm->mmu_invalidate_seq++;
+	smp_wmb();
+	/*
+	 * The above sequence increase must be visible before the
+	 * below count decrease, which is ensured by the smp_wmb above
+	 * in conjunction with the smp_rmb in mmu_invalidate_retry().
+	 */
+	kvm->mmu_invalidate_in_progress--;
+}
+
 #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
 static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
 {
@@ -714,45 +770,6 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
 }

-void kvm_mmu_invalidate_begin(struct kvm *kvm)
-{
-	/*
-	 * The count increase must become visible at unlock time as no
-	 * spte can be established without taking the mmu_lock and
-	 * count is also read inside the mmu_lock critical section.
-	 */
-	kvm->mmu_invalidate_in_progress++;
-
-	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
-		kvm->mmu_invalidate_range_start = INVALID_GPA;
-		kvm->mmu_invalidate_range_end = INVALID_GPA;
-	}
-}
-
-void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
-{
-	WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
-
-	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
-		kvm->mmu_invalidate_range_start = start;
-		kvm->mmu_invalidate_range_end = end;
-	} else {
-		/*
-		 * Fully tracking multiple concurrent ranges has diminishing
-		 * returns. Keep things simple and just find the minimal range
-		 * which includes the current and new ranges.
As there won't be - * enough information to subtract a range after its invalidate - * completes, any ranges invalidated concurrently will - * accumulate and persist until all outstanding invalidates - * complete. - */ - kvm->mmu_invalidate_range_start =3D - min(kvm->mmu_invalidate_range_start, start); - kvm->mmu_invalidate_range_end =3D - max(kvm->mmu_invalidate_range_end, end); - } -} - static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range = *range) { kvm_mmu_invalidate_range_add(kvm, range->start, range->end); @@ -806,23 +823,6 @@ static int kvm_mmu_notifier_invalidate_range_start(str= uct mmu_notifier *mn, return 0; } =20 -void kvm_mmu_invalidate_end(struct kvm *kvm) -{ - /* - * This sequence increase will notify the kvm page fault that - * the page that is going to be mapped in the spte could have - * been freed. - */ - kvm->mmu_invalidate_seq++; - smp_wmb(); - /* - * The above sequence increase must be visible before the - * below count decrease, which is ensured by the smp_wmb above - * in conjunction with the smp_rmb in mmu_invalidate_retry(). 
-	 */
-	kvm->mmu_invalidate_in_progress--;
-}
-
 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
@@ -1140,6 +1140,11 @@ int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm)
 	return 0;
 }

+bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
+{
+	return false;
+}
+
 static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
 {
 	struct kvm *kvm = kvm_arch_alloc_vm();
@@ -2349,15 +2354,47 @@ static u64 kvm_supported_mem_attributes(struct kvm *kvm)
 	return 0;
 }

+static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_gfn_range gfn_range;
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
+	struct kvm_memslot_iter iter;
+	int i;
+	int r = 0;
+
+	gfn_range.pte = __pte(0);
+	gfn_range.may_block = true;
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			slot = iter.slot;
+			gfn_range.start = max(start, slot->base_gfn);
+			gfn_range.end = min(end, slot->base_gfn + slot->npages);
+			if (gfn_range.start >= gfn_range.end)
+				continue;
+			gfn_range.slot = slot;
+
+			r |= kvm_unmap_gfn_range(kvm, &gfn_range);
+		}
+	}
+
+	if (r)
+		kvm_flush_remote_tlbs(kvm);
+}
+
 static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm,
 					   struct kvm_memory_attributes *attrs)
 {
 	gfn_t start, end;
 	unsigned long i;
 	void *entry;
+	int idx;
 	u64 supported_attrs = kvm_supported_mem_attributes(kvm);

-	/* flags is currently not used. */
+	/* 'flags' is currently not used. */
 	if (attrs->flags)
 		return -EINVAL;
 	if (attrs->attributes & ~supported_attrs)
@@ -2372,6 +2409,13 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm,

 	entry = attrs->attributes ?
 		xa_mk_value(attrs->attributes) : NULL;

+	if (kvm_arch_has_private_mem(kvm)) {
+		KVM_MMU_LOCK(kvm);
+		kvm_mmu_invalidate_begin(kvm);
+		kvm_mmu_invalidate_range_add(kvm, start, end);
+		KVM_MMU_UNLOCK(kvm);
+	}
+
 	mutex_lock(&kvm->lock);
 	for (i = start; i < end; i++)
 		if (xa_err(xa_store(&kvm->mem_attr_array, i, entry,
@@ -2379,6 +2423,16 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm,
 			break;
 	mutex_unlock(&kvm->lock);

+	if (kvm_arch_has_private_mem(kvm)) {
+		idx = srcu_read_lock(&kvm->srcu);
+		KVM_MMU_LOCK(kvm);
+		if (i > start)
+			kvm_unmap_mem_range(kvm, start, i);
+		kvm_mmu_invalidate_end(kvm);
+		KVM_MMU_UNLOCK(kvm);
+		srcu_read_unlock(&kvm->srcu, idx);
+	}
+
 	attrs->address = i << PAGE_SHIFT;
 	attrs->size = (end - i) << PAGE_SHIFT;

-- 
2.25.1
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: [PATCH v10 7/9] KVM: Update lpage info when private/shared memory are mixed
Date: Fri, 2 Dec 2022 14:13:45 +0800
Message-Id: <20221202061347.1070246-8-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
A large page with mixed private/shared subpages can't be mapped as a large
page, since its private and shared subpages come from different memory
backends and may also be treated differently by the architecture.

When private and shared memory are mixed within a large page, the current
lpage_info is not sufficient to decide whether the page can be mapped as a
large page, and additional private/shared mixed information is needed.

Tracking this 'mixed' information with the current 'count'-like
disallow_lpage is a bit of a challenge, so reserve a bit in 'disallow_lpage'
to indicate that a large page has mixed private/shared subpages, and update
this 'mixed' bit whenever the memory attribute is changed between private
and shared.

Signed-off-by: Chao Peng

reviewed-by/acked-by is still expected from the mm people.
---
 arch/x86/include/asm/kvm_host.h |   8 ++
 arch/x86/kvm/mmu/mmu.c          | 134 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c              |   2 +
 include/linux/kvm_host.h        |  19 +++++
 virt/kvm/kvm_main.c             |   9 ++-
 5 files changed, 169 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 283cbb83d6ae..7772ab37ac89 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -38,6 +38,7 @@
 #include 

 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
+#define __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES

 #define KVM_MAX_VCPUS 1024

@@ -1011,6 +1012,13 @@ struct kvm_vcpu_arch {
 #endif
 };

+/*
+ * Use a bit in disallow_lpage to indicate private/shared pages mixed at the
+ * level. The remaining bits are used as a reference count.
+ */
+#define KVM_LPAGE_PRIVATE_SHARED_MIXED	(1U << 31)
+#define KVM_LPAGE_COUNT_MAX		((1U << 31) - 1)
+
 struct kvm_lpage_info {
 	int disallow_lpage;
 };
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e2c70b5afa3e..2190fd8c95c0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -763,11 +763,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot,
 {
 	struct kvm_lpage_info *linfo;
 	int i;
+	int disallow_count;

 	for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
+
+		disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX;
+		WARN_ON(disallow_count + count < 0 ||
+			disallow_count > KVM_LPAGE_COUNT_MAX - count);
+
 		linfo->disallow_lpage += count;
-		WARN_ON(linfo->disallow_lpage < 0);
 	}
 }

@@ -6986,3 +6991,130 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 	if (kvm->arch.nx_huge_page_recovery_thread)
 		kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
 }
+
+static bool linfo_is_mixed(struct kvm_lpage_info *linfo)
+{
+	return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED;
+}
+
+static void linfo_set_mixed(gfn_t gfn, struct kvm_memory_slot *slot,
+			    int level, bool mixed)
+{
+	struct kvm_lpage_info *linfo = lpage_info_slot(gfn, slot, level);
+
+	if (mixed)
+		linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED;
+	else
+		linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED;
+}
+
+static bool is_expected_attr_entry(void *entry, unsigned long expected_attrs)
+{
+	bool expect_private = expected_attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE;
+
+	if (xa_to_value(entry) & KVM_MEMORY_ATTRIBUTE_PRIVATE) {
+		if (!expect_private)
+			return false;
+	} else if (expect_private)
+		return false;
+
+	return true;
+}
+
+static bool mem_attrs_mixed_2m(struct kvm *kvm, unsigned long attrs,
+			       gfn_t start, gfn_t end)
+{
+	XA_STATE(xas, &kvm->mem_attr_array, start);
+	gfn_t gfn = start;
+	void *entry;
+	bool mixed = false;
+
+	rcu_read_lock();
+	entry = xas_load(&xas);
+	while (gfn < end) {
+		if (xas_retry(&xas, entry))
+			continue;
+
+		KVM_BUG_ON(gfn != xas.xa_index, kvm);
+
+		if (!is_expected_attr_entry(entry, attrs)) {
+			mixed = true;
+			break;
+		}
+
+		entry = xas_next(&xas);
+		gfn++;
+	}
+
+	rcu_read_unlock();
+	return mixed;
+}
+
+static bool mem_attrs_mixed(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    int level, unsigned long attrs,
+			    gfn_t start, gfn_t end)
+{
+	unsigned long gfn;
+
+	if (level == PG_LEVEL_2M)
+		return mem_attrs_mixed_2m(kvm, attrs, start, end);
+
+	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1))
+		if (linfo_is_mixed(lpage_info_slot(gfn, slot, level - 1)) ||
+		    !is_expected_attr_entry(xa_load(&kvm->mem_attr_array, gfn),
+					    attrs))
+			return true;
+	return false;
+}
+
+static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
+						  struct kvm_memory_slot *slot,
+						  unsigned long attrs,
+						  gfn_t start, gfn_t end)
+{
+	unsigned long pages, mask;
+	gfn_t gfn, gfn_end, first, last;
+	int level;
+	bool mixed;
+
+	/*
+	 * The sequence matters here: we set the higher level basing on the
+	 * lower level's scanning result.
+	 */
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		pages = KVM_PAGES_PER_HPAGE(level);
+		mask = ~(pages - 1);
+		first = start & mask;
+		last = (end - 1) & mask;
+
+		/*
+		 * We only need to scan the head and tail page, for middle pages
+		 * we know they will not be mixed.
+		 */
+		gfn = max(first, slot->base_gfn);
+		gfn_end = min(first + pages, slot->base_gfn + slot->npages);
+		mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end);
+		linfo_set_mixed(gfn, slot, level, mixed);
+
+		if (first == last)
+			return;
+
+		for (gfn = first + pages; gfn < last; gfn += pages)
+			linfo_set_mixed(gfn, slot, level, false);
+
+		gfn = last;
+		gfn_end = min(last + pages, slot->base_gfn + slot->npages);
+		mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end);
+		linfo_set_mixed(gfn, slot, level, mixed);
+	}
+}
+
+void kvm_arch_set_memory_attributes(struct kvm *kvm,
+				    struct kvm_memory_slot *slot,
+				    unsigned long attrs,
+				    gfn_t start, gfn_t end)
+{
+	if (kvm_slot_can_be_private(slot))
+		kvm_update_lpage_private_shared_mixed(kvm, slot, attrs,
+						      start, end);
+}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a07380f8d3c..5aefcff614d2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12362,6 +12362,8 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm,
 	if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
 		linfo[lpages - 1].disallow_lpage = 1;
 	ugfn = slot->userspace_addr >> PAGE_SHIFT;
+	if (kvm_slot_can_be_private(slot))
+		ugfn |= slot->restricted_offset >> PAGE_SHIFT;
 	/*
 	 * If the gfn and userspace address are not aligned wrt each
 	 * other, disable large page support for this slot.
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3331c0c92838..25099c94e770 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -592,6 +592,11 @@ struct kvm_memory_slot {
 	struct restrictedmem_notifier notifier;
 };

+static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
+{
+	return slot && (slot->flags & KVM_MEM_PRIVATE);
+}
+
 static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot)
 {
 	return slot->flags & KVM_MEM_LOG_DIRTY_PAGES;
@@ -2316,4 +2321,18 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES  65536

+#ifdef __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES
+void kvm_arch_set_memory_attributes(struct kvm *kvm,
+				    struct kvm_memory_slot *slot,
+				    unsigned long attrs,
+				    gfn_t start, gfn_t end);
+#else
+static inline void kvm_arch_set_memory_attributes(struct kvm *kvm,
+					struct kvm_memory_slot *slot,
+					unsigned long attrs,
+					gfn_t start, gfn_t end)
+{
+}
+#endif /* __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES */
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4e1e1e113bf0..e107afea32f0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2354,7 +2354,8 @@ static u64 kvm_supported_mem_attributes(struct kvm *kvm)
 	return 0;
 }

-static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end)
+static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end,
+				unsigned long attrs)
 {
 	struct kvm_gfn_range gfn_range;
 	struct kvm_memory_slot *slot;
@@ -2378,6 +2379,10 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end)
 		gfn_range.slot = slot;

 		r |= kvm_unmap_gfn_range(kvm, &gfn_range);
+
+		kvm_arch_set_memory_attributes(kvm, slot, attrs,
+					       gfn_range.start,
+					       gfn_range.end);
 	}
 }

@@ -2427,7 +2432,7 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm,
 	idx = srcu_read_lock(&kvm->srcu);
 	KVM_MMU_LOCK(kvm);
 	if (i > start)
-		kvm_unmap_mem_range(kvm, start, i);
+		kvm_unmap_mem_range(kvm, start, i, attrs->attributes);
 	kvm_mmu_invalidate_end(kvm);
 	KVM_MMU_UNLOCK(kvm);
 	srcu_read_unlock(&kvm->srcu, idx);
-- 
2.25.1
From nobody Sun Apr 28 12:22:13 2024
From: Chao Peng
Subject: [PATCH v10 8/9] KVM: Handle page fault for private memory
Date: Fri, 2 Dec 2022 14:13:46 +0800
Message-Id: <20221202061347.1070246-9-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

A KVM_MEM_PRIVATE memslot can include both fd-based private memory and
hva-based shared memory. Architecture code (like TDX code) can tell
whether the ongoing fault is private or not.
This patch adds an 'is_private' field to kvm_page_fault to indicate
this, and architecture code is expected to set it.

The page fault handling for such a memslot differs depending on
whether the fault is private or shared. KVM checks whether
'is_private' matches the host's view of the page (maintained in
mem_attr_array):
  - For a successful match, the private pfn is obtained with
    restrictedmem_get_page() and the shared pfn is obtained with the
    existing get_user_pages().
  - For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to
    userspace. Userspace can then convert the memory between
    private/shared in the host's view and retry the fault.

Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

reviewed-by/acked-by is still expected from the mm people.
---
 arch/x86/kvm/mmu/mmu.c          | 63 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/mmu_internal.h | 14 +++++++-
 arch/x86/kvm/mmu/mmutrace.h     |  1 +
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 include/linux/kvm_host.h        | 30 ++++++++++++++++
 5 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2190fd8c95c0..b1953ebc012e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3058,7 +3058,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,

 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level)
+			      int max_level, bool is_private)
 {
 	struct kvm_lpage_info *linfo;
 	int host_level;
@@ -3070,6 +3070,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}

+	if (is_private)
+		return max_level;
+
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;

@@ -3098,7 +3101,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * level, which will be used to do precise, accurate accounting.
 	 */
 	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						     fault->gfn, fault->max_level);
+						     fault->gfn, fault->max_level,
+						     fault->is_private);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;

@@ -4178,6 +4182,49 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
 }

+static inline u8 order_to_level(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static int kvm_do_memory_fault_exit(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault)
+{
+	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
+	if (fault->is_private)
+		vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE;
+	else
+		vcpu->run->memory.flags = 0;
+	vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT;
+	vcpu->run->memory.size = PAGE_SIZE;
+	return RET_PF_USER;
+}
+
+static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
+				   struct kvm_page_fault *fault)
+{
+	int order;
+	struct kvm_memory_slot *slot = fault->slot;
+
+	if (!kvm_slot_can_be_private(slot))
+		return kvm_do_memory_fault_exit(vcpu, fault);
+
+	if (kvm_restricted_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order))
+		return RET_PF_RETRY;
+
+	fault->max_level = min(order_to_level(order), fault->max_level);
+	fault->map_writable = !(slot->flags & KVM_MEM_READONLY);
+	return RET_PF_CONTINUE;
+}
+
 static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -4210,6 +4257,12 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		return RET_PF_EMULATE;
 	}

+	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn))
+		return kvm_do_memory_fault_exit(vcpu, fault);
+
+	if (fault->is_private)
+		return kvm_faultin_pfn_private(vcpu, fault);
+
 	async = false;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
 					  fault->write, &fault->map_writable,
@@ -5599,6 +5652,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 		return -EIO;
 	}

+	if (r == RET_PF_USER)
+		return 0;
+
 	if (r < 0)
 		return r;
 	if (r != RET_PF_EMULATE)
@@ -6452,7 +6508,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 	 */
 	if (sp->role.direct &&
 	    sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
-						       PG_LEVEL_NUM)) {
+						       PG_LEVEL_NUM,
+						       false)) {
 		kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);

 		if (kvm_available_flush_tlb_with_range())
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index dbaf6755c5a7..5ccf08183b00 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -189,6 +189,7 @@ struct kvm_page_fault {

 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
+	const bool is_private;
 	const bool nx_huge_page_workaround_enabled;

 	/*
@@ -237,6 +238,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
  * RET_PF_RETRY: let CPU fault again on the address.
  * RET_PF_EMULATE: mmio page fault, emulate the instruction directly.
  * RET_PF_INVALID: the spte is invalid, let the real page fault path update it.
+ * RET_PF_USER: need to exit to userspace to handle this fault.
  * RET_PF_FIXED: The faulting entry has been fixed.
  * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU.
  *
@@ -253,6 +255,7 @@ enum {
 	RET_PF_RETRY,
 	RET_PF_EMULATE,
 	RET_PF_INVALID,
+	RET_PF_USER,
 	RET_PF_FIXED,
 	RET_PF_SPURIOUS,
 };
@@ -310,7 +313,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,

 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level);
+			      int max_level, bool is_private);
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);

@@ -319,4 +322,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);

+#ifndef CONFIG_HAVE_KVM_RESTRICTED_MEM
+static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
+					gfn_t gfn, kvm_pfn_t *pfn, int *order)
+{
+	WARN_ON_ONCE(1);
+	return -EOPNOTSUPP;
+}
+#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index ae86820cef69..2d7555381955 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -58,6 +58,7 @@ TRACE_DEFINE_ENUM(RET_PF_CONTINUE);
 TRACE_DEFINE_ENUM(RET_PF_RETRY);
 TRACE_DEFINE_ENUM(RET_PF_EMULATE);
 TRACE_DEFINE_ENUM(RET_PF_INVALID);
+TRACE_DEFINE_ENUM(RET_PF_USER);
 TRACE_DEFINE_ENUM(RET_PF_FIXED);
 TRACE_DEFINE_ENUM(RET_PF_SPURIOUS);

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 771210ce5181..8ba1a4afc546 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1768,7 +1768,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 			continue;

 		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
-							      iter.gfn, PG_LEVEL_NUM);
+							      iter.gfn, PG_LEVEL_NUM, false);
 		if (max_mapping_level < iter.level)
 			continue;

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 25099c94e770..153842bb33df 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2335,4 +2335,34 @@ static inline void kvm_arch_set_memory_attributes(struct kvm *kvm,
 }
 #endif /* __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES */

+#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES
+static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
+{
+	return xa_to_value(xa_load(&kvm->mem_attr_array, gfn)) &
+	       KVM_MEMORY_ATTRIBUTE_PRIVATE;
+}
+#else
+static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
+{
+	return false;
+}
+#endif /* CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES */
+
+#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM
+static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
+					gfn_t gfn, kvm_pfn_t *pfn, int *order)
+{
+	int ret;
+	struct page *page;
+	pgoff_t index = gfn - slot->base_gfn +
+			(slot->restricted_offset >> PAGE_SHIFT);
+
+	ret = restrictedmem_get_page(slot->restricted_file, index,
+				     &page, order);
+	*pfn = page_to_pfn(page);
+	return ret;
+}
+#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
 #endif
-- 
2.25.1
From nobody Sun Apr 28 12:22:13 2024
From: Chao Peng
Subject: [PATCH v10 9/9] KVM: Enable and expose KVM_MEM_PRIVATE
Date: Fri, 2 Dec 2022 14:13:47 +0800
Message-Id: <20221202061347.1070246-10-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

Register/unregister the private memslot with the fd-based memory
backing store restrictedmem, and implement the callbacks for
restrictedmem_notifier:
  - invalidate_start()/invalidate_end() to zap the existing memory
    mappings in the KVM page table.
  - error() to request KVM_REQ_MEMORY_MCE and later exit to userspace
    with KVM_EXIT_SHUTDOWN.

Expose KVM_MEM_PRIVATE for memslots and KVM_MEMORY_ATTRIBUTE_PRIVATE
for KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES to userspace; both are
controlled by kvm_arch_has_private_mem(), which should be implemented
by architecture code.

Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

reviewed-by/acked-by is still expected from the mm people.
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/x86.c              |  13 +++
 include/linux/kvm_host.h        |   3 +
 virt/kvm/kvm_main.c             | 179 +++++++++++++++++++++++++++++++-
 4 files changed, 191 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7772ab37ac89..27ef31133352 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -114,6 +114,7 @@
 	KVM_ARCH_REQ_FLAGS(31, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_HV_TLB_FLUSH \
 	KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_MEMORY_MCE		KVM_ARCH_REQ(33)

 #define CR0_RESERVED_BITS                               \
 	(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5aefcff614d2..c67e22f3e2ee 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6587,6 +6587,13 @@ int kvm_arch_pm_notifier(struct kvm *kvm, unsigned long state)
 }
 #endif /* CONFIG_HAVE_KVM_PM_NOTIFIER */

+#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM
+void kvm_arch_memory_mce(struct kvm *kvm)
+{
+	kvm_make_all_cpus_request(kvm, KVM_REQ_MEMORY_MCE);
+}
+#endif
+
 static int kvm_vm_ioctl_get_clock(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_clock_data data = { 0 };
@@ -10357,6 +10364,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)

 		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
 			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
+
+		if (kvm_check_request(KVM_REQ_MEMORY_MCE, vcpu)) {
+			vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
+			r = 0;
+			goto out;
+		}
 	}

 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 153842bb33df..f032d878e034 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -590,6 +590,7 @@ struct kvm_memory_slot {
 	struct file *restricted_file;
 	loff_t restricted_offset;
 	struct restrictedmem_notifier notifier;
+	struct kvm *kvm;
 };

 static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
@@ -2363,6 +2364,8 @@ static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
 	*pfn = page_to_pfn(page);
 	return ret;
 }
+
+void kvm_arch_memory_mce(struct kvm *kvm);
 #endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */

 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e107afea32f0..ac835fc77273 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -936,6 +936,121 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)

 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */

+#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM
+static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
+					 pgoff_t start, pgoff_t end,
+					 gfn_t *gfn_start, gfn_t *gfn_end)
+{
+	unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;
+
+	if (start > base_pgoff)
+		*gfn_start = slot->base_gfn + start - base_pgoff;
+	else
+		*gfn_start = slot->base_gfn;
+
+	if (end < base_pgoff + slot->npages)
+		*gfn_end = slot->base_gfn + end - base_pgoff;
+	else
+		*gfn_end = slot->base_gfn + slot->npages;
+
+	if (*gfn_start >= *gfn_end)
+		return false;
+
+	return true;
+}
+
+static void kvm_restrictedmem_invalidate_begin(struct restrictedmem_notifier *notifier,
+					       pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	struct kvm *kvm = slot->kvm;
+	gfn_t gfn_start, gfn_end;
+	struct kvm_gfn_range gfn_range;
+	int idx;
+
+	if (!restrictedmem_range_is_valid(slot, start, end,
+					  &gfn_start, &gfn_end))
+		return;
+
+	gfn_range.start = gfn_start;
+	gfn_range.end = gfn_end;
+	gfn_range.slot = slot;
+	gfn_range.pte = __pte(0);
+	gfn_range.may_block = true;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	KVM_MMU_LOCK(kvm);
+
+	kvm_mmu_invalidate_begin(kvm);
+	kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end);
+	if (kvm_unmap_gfn_range(kvm, &gfn_range))
+		kvm_flush_remote_tlbs(kvm);
+
+	KVM_MMU_UNLOCK(kvm);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
+static void kvm_restrictedmem_invalidate_end(struct restrictedmem_notifier *notifier,
+					     pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	struct kvm *kvm = slot->kvm;
+	gfn_t gfn_start, gfn_end;
+
+	if (!restrictedmem_range_is_valid(slot, start, end,
+					  &gfn_start, &gfn_end))
+		return;
+
+	KVM_MMU_LOCK(kvm);
+	kvm_mmu_invalidate_end(kvm);
+	KVM_MMU_UNLOCK(kvm);
+}
+
+static void kvm_restrictedmem_error(struct restrictedmem_notifier *notifier,
+				    pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	kvm_arch_memory_mce(slot->kvm);
+}
+
+static struct restrictedmem_notifier_ops kvm_restrictedmem_notifier_ops = {
+	.invalidate_start = kvm_restrictedmem_invalidate_begin,
+	.invalidate_end = kvm_restrictedmem_invalidate_end,
+	.error = kvm_restrictedmem_error,
+};
+
+static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot)
+{
+	slot->notifier.ops = &kvm_restrictedmem_notifier_ops;
+	restrictedmem_register_notifier(slot->restricted_file, &slot->notifier);
+}
+
+static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot)
+{
+	restrictedmem_unregister_notifier(slot->restricted_file,
+					  &slot->notifier);
+}
+
+#else /* !CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
+static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot)
+{
+	WARN_ON_ONCE(1);
+}
+
+static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot)
+{
+	WARN_ON_ONCE(1);
+}
+
+#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 static int kvm_pm_notifier_call(struct notifier_block *bl,
 				unsigned long state,
@@ -980,6 +1095,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 /* This does not remove the slot from struct kvm_memslots data structures */
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
+	if (slot->flags & KVM_MEM_PRIVATE) {
+		kvm_restrictedmem_unregister(slot);
+		fput(slot->restricted_file);
+	}
+
 	kvm_destroy_dirty_bitmap(slot);

 	kvm_arch_free_memslot(kvm, slot);
@@ -1551,10 +1671,14 @@ static void kvm_replace_memslot(struct kvm *kvm,
 	}
 }

-static int check_memory_region_flags(const struct kvm_user_mem_region *mem)
+static int check_memory_region_flags(struct kvm *kvm,
+				     const struct kvm_user_mem_region *mem)
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;

+	if (kvm_arch_has_private_mem(kvm))
+		valid_flags |= KVM_MEM_PRIVATE;
+
 #ifdef __KVM_HAVE_READONLY_MEM
 	valid_flags |= KVM_MEM_READONLY;
 #endif
@@ -1630,6 +1754,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 {
 	int r;

+	if (change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+		kvm_restrictedmem_register(new);
+
 	/*
 	 * If dirty logging is disabled, nullify the bitmap; the old bitmap
 	 * will be freed on "commit".  If logging is enabled in both old and
@@ -1658,6 +1785,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
 		kvm_destroy_dirty_bitmap(new);

+	if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+		kvm_restrictedmem_unregister(new);
+
 	return r;
 }

@@ -1963,7 +2093,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	int as_id, id;
 	int r;

-	r = check_memory_region_flags(mem);
+	r = check_memory_region_flags(kvm, mem);
 	if (r)
 		return r;

@@ -1982,6 +2112,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    !access_ok((void __user *)(unsigned long)mem->userspace_addr,
 			mem->memory_size))
 		return -EINVAL;
+	if (mem->flags & KVM_MEM_PRIVATE &&
+	    (mem->restricted_offset & (PAGE_SIZE - 1) ||
+	     mem->restricted_offset > U64_MAX - mem->memory_size))
+		return -EINVAL;
 	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
@@ -2020,6 +2154,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
 			return -EINVAL;
 	} else { /* Modify an existing slot. */
+		/* Private memslots are immutable, they can only be deleted.
*/ + if (mem->flags & KVM_MEM_PRIVATE) + return -EINVAL; if ((mem->userspace_addr !=3D old->userspace_addr) || (npages !=3D old->npages) || ((mem->flags ^ old->flags) & KVM_MEM_READONLY)) @@ -2048,10 +2185,28 @@ int __kvm_set_memory_region(struct kvm *kvm, new->npages =3D npages; new->flags =3D mem->flags; new->userspace_addr =3D mem->userspace_addr; + if (mem->flags & KVM_MEM_PRIVATE) { + new->restricted_file =3D fget(mem->restricted_fd); + if (!new->restricted_file || + !file_is_restrictedmem(new->restricted_file)) { + r =3D -EINVAL; + goto out; + } + new->restricted_offset =3D mem->restricted_offset; + } + + new->kvm =3D kvm; =20 r =3D kvm_set_memslot(kvm, old, new, change); if (r) - kfree(new); + goto out; + + return 0; + +out: + if (new->restricted_file) + fput(new->restricted_file); + kfree(new); return r; } EXPORT_SYMBOL_GPL(__kvm_set_memory_region); @@ -2351,6 +2506,8 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *k= vm, #ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES static u64 kvm_supported_mem_attributes(struct kvm *kvm) { + if (kvm_arch_has_private_mem(kvm)) + return KVM_MEMORY_ATTRIBUTE_PRIVATE; return 0; } =20 @@ -4822,16 +4979,28 @@ static long kvm_vm_ioctl(struct file *filp, } case KVM_SET_USER_MEMORY_REGION: { struct kvm_user_mem_region mem; - unsigned long size =3D sizeof(struct kvm_userspace_memory_region); + unsigned int flags_offset =3D offsetof(typeof(mem), flags); + unsigned long size; + u32 flags; =20 kvm_sanity_check_user_mem_region_alias(); =20 + memset(&mem, 0, sizeof(mem)); + r =3D -EFAULT; + if (get_user(flags, (u32 __user *)(argp + flags_offset))) + goto out; + + if (flags & KVM_MEM_PRIVATE) + size =3D sizeof(struct kvm_userspace_memory_region_ext); + else + size =3D sizeof(struct kvm_userspace_memory_region); + if (copy_from_user(&mem, argp, size)) goto out; =20 r =3D -EINVAL; - if (mem.flags & KVM_MEM_PRIVATE) + if ((flags ^ mem.flags) & KVM_MEM_PRIVATE) goto out; =20 r =3D kvm_vm_ioctl_set_memory_region(kvm, &mem); --=20 2.25.1