From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, linux-s390@vger.kernel.org, frankja@linux.ibm.com, borntraeger@de.ibm.com, seiden@linux.ibm.com, nsg@linux.ibm.com, nrb@linux.ibm.com, david@redhat.com, hca@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gor@linux.ibm.com, schlameuss@linux.ibm.com
Subject: [PATCH v3 1/4] s390: remove unneeded includes
Date: Thu, 22 May 2025 15:22:56 +0200
Message-ID: <20250522132259.167708-2-imbrenda@linux.ibm.com>
In-Reply-To: <20250522132259.167708-1-imbrenda@linux.ibm.com>
References: <20250522132259.167708-1-imbrenda@linux.ibm.com>

Many files don't need to include asm/tlb.h or asm/gmap.h. On the other
hand, asm/tlb.h does need to include asm/gmap.h.

Remove all unneeded includes so that asm/tlb.h is not directly used by
s390 arch code anymore.

Remove asm/gmap.h from a few other files as well, so that now only KVM
code, mm/gmap.c, and asm/tlb.h include it.

Signed-off-by: Claudio Imbrenda
Reviewed-by: Christoph Schlameuss
Reviewed-by: Steffen Eiden
---
 arch/s390/include/asm/tlb.h | 1 +
 arch/s390/include/asm/uv.h  | 1 -
 arch/s390/kvm/intercept.c   | 1 +
 arch/s390/mm/fault.c        | 1 -
 arch/s390/mm/gmap.c         | 1 -
 arch/s390/mm/init.c         | 1 -
 arch/s390/mm/pgalloc.c      | 2 --
 arch/s390/mm/pgtable.c      | 1 -
 8 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index f20601995bb0..56d5f9e0eb2e 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -36,6 +36,7 @@ static inline bool __tlb_remove_folio_pages(struct mmu_gather *tlb,
 
 #include
 #include
+#include
 
 /*
  * Release the page cache reference for a pte removed by
diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index 46fb0ef6f984..eeb2db4783e6 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -16,7 +16,6 @@
 #include
 #include
 #include
-#include
 #include
 
 #define UVC_CC_OK 0
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index a06a000f196c..b4834bd4d216 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #include "kvm-s390.h"
 #include "gaccess.h"
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index da84ff6770de..3829521450dd 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -40,7 +40,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index a94bd4870c65..4869555ff403 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -24,7 +24,6 @@
 #include
 #include
 #include
-#include
 
 /*
  * The address is saved in a radix tree directly; NULL would be ambiguous,
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index afa085e8186c..074bf4fb4ce2 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -40,7 +40,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index e3a6f8ae156c..ddab36875370 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -12,8 +12,6 @@
 #include
 #include
 #include
-#include
-#include
 #include
 
 unsigned long *crst_table_alloc(struct mm_struct *mm)
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 9901934284ec..7df70cd8f739 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -20,7 +20,6 @@
 #include
 #include
 
-#include
 #include
 #include
 #include
-- 
2.49.0
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, linux-s390@vger.kernel.org, frankja@linux.ibm.com, borntraeger@de.ibm.com, seiden@linux.ibm.com, nsg@linux.ibm.com, nrb@linux.ibm.com, david@redhat.com, hca@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gor@linux.ibm.com, schlameuss@linux.ibm.com
Subject: [PATCH v3 2/4] KVM: s390: remove unneeded srcu lock
Date: Thu, 22 May 2025 15:22:57 +0200
Message-ID: <20250522132259.167708-3-imbrenda@linux.ibm.com>
In-Reply-To: <20250522132259.167708-1-imbrenda@linux.ibm.com>
References: <20250522132259.167708-1-imbrenda@linux.ibm.com>

All paths leading to handle_essa() already hold the kvm->srcu.

Remove unneeded srcu locking from handle_essa().

Add lockdep assertion to make sure we will always be holding kvm->srcu
when entering handle_essa().

Signed-off-by: Claudio Imbrenda
Reviewed-by: Nina Schoetterl-Glausch
Reviewed-by: Christian Borntraeger
Reviewed-by: Christoph Schlameuss
Reviewed-by: Steffen Eiden
---
 arch/s390/kvm/priv.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 1a49b89706f8..9253c70897a8 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -1248,6 +1248,8 @@ static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
 
 static int handle_essa(struct kvm_vcpu *vcpu)
 {
+	lockdep_assert_held(&vcpu->kvm->srcu);
+
 	/* entries expected to be 1FF */
 	int entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
 	unsigned long *cbrlo;
@@ -1297,12 +1299,8 @@ static int handle_essa(struct kvm_vcpu *vcpu)
 		/* Retry the ESSA instruction */
 		kvm_s390_retry_instr(vcpu);
 	} else {
-		int srcu_idx;
-
 		mmap_read_lock(vcpu->kvm->mm);
-		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
 		i = __do_essa(vcpu, orc);
-		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
 		mmap_read_unlock(vcpu->kvm->mm);
 		if (i < 0)
 			return i;
-- 
2.49.0
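
As an aside, the locking contract this patch relies on can be sketched as
follows. This is a simplified illustration, not code from the series: only the
SRCU/lockdep primitives, kvm_handle_sie_intercept() and handle_essa() are real
names; the two wrapper functions and their names are invented for illustration.

static int run_intercept_with_srcu(struct kvm_vcpu *vcpu)
{
	int idx, rc;

	/* the vcpu run loop enters the SRCU read side once ... */
	idx = srcu_read_lock(&vcpu->kvm->srcu);
	/* ... and every path that can reach handle_essa() runs inside it */
	rc = kvm_handle_sie_intercept(vcpu);
	srcu_read_unlock(&vcpu->kvm->srcu, idx);
	return rc;
}

static int handle_essa_sketch(struct kvm_vcpu *vcpu)
{
	/* document and enforce the caller's lock instead of nesting a second read lock */
	lockdep_assert_held(&vcpu->kvm->srcu);

	/* ... ESSA processing, under mmap_read_lock(vcpu->kvm->mm) only ... */
	return 0;
}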
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, linux-s390@vger.kernel.org, frankja@linux.ibm.com, borntraeger@de.ibm.com, seiden@linux.ibm.com, nsg@linux.ibm.com, nrb@linux.ibm.com, david@redhat.com, hca@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gor@linux.ibm.com, schlameuss@linux.ibm.com
Subject: [PATCH v3 3/4] KVM: s390: refactor and split some gmap helpers
Date: Thu, 22 May 2025 15:22:58 +0200
Message-ID: <20250522132259.167708-4-imbrenda@linux.ibm.com>
In-Reply-To: <20250522132259.167708-1-imbrenda@linux.ibm.com>
References: <20250522132259.167708-1-imbrenda@linux.ibm.com>

Refactor some gmap functions; move the implementation into a separate
file with only helper functions. The new helper functions work on vm
addresses, leaving all gmap logic in the gmap functions, which mostly
become just wrappers.

The whole gmap handling is going to be moved inside KVM soon, but the
helper functions need to touch core mm functions, and thus need to stay
in the core of kernel.

Signed-off-by: Claudio Imbrenda
Reviewed-by: Christoph Schlameuss
Reviewed-by: Steffen Eiden
---
 MAINTAINERS                          |   2 +
 arch/s390/include/asm/gmap.h         |   2 -
 arch/s390/include/asm/gmap_helpers.h |  15 ++
 arch/s390/kvm/diag.c                 |  13 +-
 arch/s390/kvm/kvm-s390.c             |   5 +-
 arch/s390/mm/Makefile                |   2 +
 arch/s390/mm/gmap.c                  | 157 +------------------
 arch/s390/mm/gmap_helpers.c          | 223 +++++++++++++++++++++++++++
 8 files changed, 259 insertions(+), 160 deletions(-)
 create mode 100644 arch/s390/include/asm/gmap_helpers.h
 create mode 100644 arch/s390/mm/gmap_helpers.c

diff --git a/MAINTAINERS b/MAINTAINERS
index f21f1dabb5fe..b0a8fb5a254c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13093,12 +13093,14 @@ S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git
 F:	Documentation/virt/kvm/s390*
 F:	arch/s390/include/asm/gmap.h
+F:	arch/s390/include/asm/gmap_helpers.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/include/uapi/asm/kvm*
 F:	arch/s390/include/uapi/asm/uvdevice.h
 F:	arch/s390/kernel/uv.c
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
+F:	arch/s390/mm/gmap_helpers.c
 F:	drivers/s390/char/uvdevice.c
 F:	tools/testing/selftests/drivers/s390x/uvdevice/
 F:	tools/testing/selftests/kvm/*/s390/
diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 9f2814d0e1e9..66c5808fd011 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -110,7 +110,6 @@ int gmap_map_segment(struct gmap *gmap, unsigned long from,
 int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len);
 unsigned long __gmap_translate(struct gmap *, unsigned long gaddr);
 int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr);
-void gmap_discard(struct gmap *, unsigned long from, unsigned long to);
 void __gmap_zap(struct gmap *, unsigned long gaddr);
 void gmap_unlink(struct mm_struct *, unsigned long *table, unsigned long vmaddr);
 
@@ -134,7 +133,6 @@ int gmap_protect_one(struct gmap *gmap, unsigned long gaddr, int prot, unsigned
 
 void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
 			     unsigned long gaddr, unsigned long vmaddr);
-int s390_disable_cow_sharing(void);
 int s390_replace_asce(struct gmap *gmap);
 void s390_uv_destroy_pfns(unsigned long count, unsigned long *pfns);
 int __s390_uv_destroy_range(struct mm_struct *mm, unsigned long start,
diff --git a/arch/s390/include/asm/gmap_helpers.h b/arch/s390/include/asm/gmap_helpers.h
new file mode 100644
index 000000000000..5356446a61c4
--- /dev/null
+++ b/arch/s390/include/asm/gmap_helpers.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Helper functions for KVM guest address space mapping code
+ *
+ * Copyright IBM Corp.
2025 + */ + +#ifndef _ASM_S390_GMAP_HELPERS_H +#define _ASM_S390_GMAP_HELPERS_H + +void gmap_helper_zap_one_page(struct mm_struct *mm, unsigned long vmaddr); +void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsig= ned long end); +int gmap_helper_disable_cow_sharing(void); + +#endif /* _ASM_S390_GMAP_HELPERS_H */ diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c index 74f73141f9b9..5faa5af56d9a 100644 --- a/arch/s390/kvm/diag.c +++ b/arch/s390/kvm/diag.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include "kvm-s390.h" #include "trace.h" @@ -32,12 +33,13 @@ static int diag_release_pages(struct kvm_vcpu *vcpu) =20 VCPU_EVENT(vcpu, 5, "diag release pages %lX %lX", start, end); =20 + mmap_read_lock(vcpu->kvm->mm); /* * We checked for start >=3D end above, so lets check for the * fast path (no prefix swap page involved) */ if (end <=3D prefix || start >=3D prefix + 2 * PAGE_SIZE) { - gmap_discard(vcpu->arch.gmap, start, end); + gmap_helper_discard(vcpu->kvm->mm, start, end); } else { /* * This is slow path. gmap_discard will check for start @@ -45,13 +47,14 @@ static int diag_release_pages(struct kvm_vcpu *vcpu) * prefix and let gmap_discard make some of these calls * NOPs. */ - gmap_discard(vcpu->arch.gmap, start, prefix); + gmap_helper_discard(vcpu->kvm->mm, start, prefix); if (start <=3D prefix) - gmap_discard(vcpu->arch.gmap, 0, PAGE_SIZE); + gmap_helper_discard(vcpu->kvm->mm, 0, PAGE_SIZE); if (end > prefix + PAGE_SIZE) - gmap_discard(vcpu->arch.gmap, PAGE_SIZE, 2 * PAGE_SIZE); - gmap_discard(vcpu->arch.gmap, prefix + 2 * PAGE_SIZE, end); + gmap_helper_discard(vcpu->kvm->mm, PAGE_SIZE, 2 * PAGE_SIZE); + gmap_helper_discard(vcpu->kvm->mm, prefix + 2 * PAGE_SIZE, end); } + mmap_read_unlock(vcpu->kvm->mm); return 0; } =20 diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 3f3175193fd7..10cfc047525d 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -40,6 +40,7 @@ #include #include #include +#include #include #include #include @@ -2674,7 +2675,9 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct= kvm_pv_cmd *cmd) if (r) break; =20 - r =3D s390_disable_cow_sharing(); + mmap_write_lock(kvm->mm); + r =3D gmap_helper_disable_cow_sharing(); + mmap_write_unlock(kvm->mm); if (r) break; =20 diff --git a/arch/s390/mm/Makefile b/arch/s390/mm/Makefile index 9726b91fe7e4..bd0401cc7ca5 100644 --- a/arch/s390/mm/Makefile +++ b/arch/s390/mm/Makefile @@ -12,3 +12,5 @@ obj-$(CONFIG_HUGETLB_PAGE) +=3D hugetlbpage.o obj-$(CONFIG_PTDUMP) +=3D dump_pagetables.o obj-$(CONFIG_PGSTE) +=3D gmap.o obj-$(CONFIG_PFAULT) +=3D pfault.o + +obj-$(subst m,y,$(CONFIG_KVM)) +=3D gmap_helpers.o diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index 4869555ff403..e75bdb2f9be4 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include =20 @@ -619,63 +620,20 @@ EXPORT_SYMBOL(__gmap_link); */ void __gmap_zap(struct gmap *gmap, unsigned long gaddr) { - struct vm_area_struct *vma; unsigned long vmaddr; - spinlock_t *ptl; - pte_t *ptep; + + mmap_assert_locked(gmap->mm); =20 /* Find the vm address for the guest address */ vmaddr =3D (unsigned long) radix_tree_lookup(&gmap->guest_to_host, gaddr >> PMD_SHIFT); if (vmaddr) { vmaddr |=3D gaddr & ~PMD_MASK; - - vma =3D vma_lookup(gmap->mm, vmaddr); - if (!vma || is_vm_hugetlb_page(vma)) - return; - - /* Get pointer to the page table entry */ - ptep =3D get_locked_pte(gmap->mm, vmaddr, &ptl); - if (likely(ptep)) { - 
ptep_zap_unused(gmap->mm, vmaddr, ptep, 0); - pte_unmap_unlock(ptep, ptl); - } + gmap_helper_zap_one_page(gmap->mm, vmaddr); } } EXPORT_SYMBOL_GPL(__gmap_zap); =20 -void gmap_discard(struct gmap *gmap, unsigned long from, unsigned long to) -{ - unsigned long gaddr, vmaddr, size; - struct vm_area_struct *vma; - - mmap_read_lock(gmap->mm); - for (gaddr =3D from; gaddr < to; - gaddr =3D (gaddr + PMD_SIZE) & PMD_MASK) { - /* Find the vm address for the guest address */ - vmaddr =3D (unsigned long) - radix_tree_lookup(&gmap->guest_to_host, - gaddr >> PMD_SHIFT); - if (!vmaddr) - continue; - vmaddr |=3D gaddr & ~PMD_MASK; - /* Find vma in the parent mm */ - vma =3D find_vma(gmap->mm, vmaddr); - if (!vma) - continue; - /* - * We do not discard pages that are backed by - * hugetlbfs, so we don't have to refault them. - */ - if (is_vm_hugetlb_page(vma)) - continue; - size =3D min(to - gaddr, PMD_SIZE - (gaddr & ~PMD_MASK)); - zap_page_range_single(vma, vmaddr, size, NULL); - } - mmap_read_unlock(gmap->mm); -} -EXPORT_SYMBOL_GPL(gmap_discard); - static LIST_HEAD(gmap_notifier_list); static DEFINE_SPINLOCK(gmap_notifier_lock); =20 @@ -2295,111 +2253,6 @@ static const struct mm_walk_ops find_zeropage_ops = =3D { .walk_lock =3D PGWALK_WRLOCK, }; =20 -/* - * Unshare all shared zeropages, replacing them by anonymous pages. Note t= hat - * we cannot simply zap all shared zeropages, because this could later - * trigger unexpected userfaultfd missing events. - * - * This must be called after mm->context.allow_cow_sharing was - * set to 0, to avoid future mappings of shared zeropages. - * - * mm contracts with s390, that even if mm were to remove a page table, - * and racing with walk_page_range_vma() calling pte_offset_map_lock() - * would fail, it will never insert a page table containing empty zero - * pages once mm_forbids_zeropage(mm) i.e. - * mm->context.allow_cow_sharing is set to 0. - */ -static int __s390_unshare_zeropages(struct mm_struct *mm) -{ - struct vm_area_struct *vma; - VMA_ITERATOR(vmi, mm, 0); - unsigned long addr; - vm_fault_t fault; - int rc; - - for_each_vma(vmi, vma) { - /* - * We could only look at COW mappings, but it's more future - * proof to catch unexpected zeropages in other mappings and - * fail. - */ - if ((vma->vm_flags & VM_PFNMAP) || is_vm_hugetlb_page(vma)) - continue; - addr =3D vma->vm_start; - -retry: - rc =3D walk_page_range_vma(vma, addr, vma->vm_end, - &find_zeropage_ops, &addr); - if (rc < 0) - return rc; - else if (!rc) - continue; - - /* addr was updated by find_zeropage_pte_entry() */ - fault =3D handle_mm_fault(vma, addr, - FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE, - NULL); - if (fault & VM_FAULT_OOM) - return -ENOMEM; - /* - * See break_ksm(): even after handle_mm_fault() returned 0, we - * must start the lookup from the current address, because - * handle_mm_fault() may back out if there's any difficulty. - * - * VM_FAULT_SIGBUS and VM_FAULT_SIGSEGV are unexpected but - * maybe they could trigger in the future on concurrent - * truncation. In that case, the shared zeropage would be gone - * and we can simply retry and make progress. - */ - cond_resched(); - goto retry; - } - - return 0; -} - -static int __s390_disable_cow_sharing(struct mm_struct *mm) -{ - int rc; - - if (!mm->context.allow_cow_sharing) - return 0; - - mm->context.allow_cow_sharing =3D 0; - - /* Replace all shared zeropages by anonymous pages. */ - rc =3D __s390_unshare_zeropages(mm); - /* - * Make sure to disable KSM (if enabled for the whole process or - * individual VMAs). 
Note that nothing currently hinders user space - * from re-enabling it. - */ - if (!rc) - rc =3D ksm_disable(mm); - if (rc) - mm->context.allow_cow_sharing =3D 1; - return rc; -} - -/* - * Disable most COW-sharing of memory pages for the whole process: - * (1) Disable KSM and unmerge/unshare any KSM pages. - * (2) Disallow shared zeropages and unshare any zerpages that are mapped. - * - * Not that we currently don't bother with COW-shared pages that are shared - * with parent/child processes due to fork(). - */ -int s390_disable_cow_sharing(void) -{ - int rc; - - mmap_write_lock(current->mm); - rc =3D __s390_disable_cow_sharing(current->mm); - mmap_write_unlock(current->mm); - return rc; -} -EXPORT_SYMBOL_GPL(s390_disable_cow_sharing); - /* * Enable storage key handling from now on and initialize the storage * keys with the default key. @@ -2467,7 +2320,7 @@ int s390_enable_skey(void) goto out_up; =20 mm->context.uses_skeys =3D 1; - rc =3D __s390_disable_cow_sharing(mm); + rc =3D gmap_helper_disable_cow_sharing(); if (rc) { mm->context.uses_skeys =3D 0; goto out_up; diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c new file mode 100644 index 000000000000..763656dafa16 --- /dev/null +++ b/arch/s390/mm/gmap_helpers.c @@ -0,0 +1,223 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Helper functions for KVM guest address space mapping code + * + * Copyright IBM Corp. 2007, 2025 + */ +#include +#include +#include +#include +#include +#include +#include + +/** + * ptep_zap_swap_entry() - discard a swap entry. + * @mm: the mm + * @entry: the swap entry that needs to be zapped + * + * Discards the given swap entry. If the swap entry was an actual swap + * entry (and not a migration entry, for example), the actual swapped + * page is also discarded from swap. + */ +static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry) +{ + if (!non_swap_entry(entry)) + dec_mm_counter(mm, MM_SWAPENTS); + else if (is_migration_entry(entry)) + dec_mm_counter(mm, mm_counter(pfn_swap_entry_folio(entry))); + free_swap_and_cache(entry); +} + +/** + * gmap_helper_zap_one_page() - discard a page if it was swapped. + * @mm: the mm + * @vmaddr: the userspace virtual address that needs to be discarded + * + * If the given address maps to a swap entry, discard it. + * + * Context: needs to be called while holding the mmap lock. + */ +void gmap_helper_zap_one_page(struct mm_struct *mm, unsigned long vmaddr) +{ + struct vm_area_struct *vma; + spinlock_t *ptl; + pte_t *ptep; + + mmap_assert_locked(mm); + + /* Find the vm address for the guest address */ + vma =3D vma_lookup(mm, vmaddr); + if (!vma || is_vm_hugetlb_page(vma)) + return; + + /* Get pointer to the page table entry */ + ptep =3D get_locked_pte(mm, vmaddr, &ptl); + if (unlikely(!ptep)) + return; + if (pte_swap(*ptep)) + ptep_zap_swap_entry(mm, pte_to_swp_entry(*ptep)); + pte_unmap_unlock(ptep, ptl); +} +EXPORT_SYMBOL_GPL(gmap_helper_zap_one_page); + +/** + * gmap_helper_discard() - discard user pages in the given range + * @mm: the mm + * @vmaddr: starting userspace address + * @end: end address (first address outside the range) + * + * All userpace pages in the range @vamddr (inclusive) to @end (exclusive)= are + * discarded and unmapped. + * + * Context: needs to be called while holding the mmap lock. 
+ */ +void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsig= ned long end) +{ + struct vm_area_struct *vma; + unsigned long next; + + mmap_assert_locked(mm); + + while (vmaddr < end) { + vma =3D find_vma_intersection(mm, vmaddr, end); + if (!vma) + break; + vmaddr =3D max(vmaddr, vma->vm_start); + next =3D min(end, vma->vm_end); + if (!is_vm_hugetlb_page(vma)) + zap_page_range_single(vma, vmaddr, next - vmaddr, NULL); + vmaddr =3D next; + } +} +EXPORT_SYMBOL_GPL(gmap_helper_discard); + +static int find_zeropage_pte_entry(pte_t *pte, unsigned long addr, + unsigned long end, struct mm_walk *walk) +{ + unsigned long *found_addr =3D walk->private; + + /* Return 1 of the page is a zeropage. */ + if (is_zero_pfn(pte_pfn(*pte))) { + /* + * Shared zeropage in e.g., a FS DAX mapping? We cannot do the + * right thing and likely don't care: FAULT_FLAG_UNSHARE + * currently only works in COW mappings, which is also where + * mm_forbids_zeropage() is checked. + */ + if (!is_cow_mapping(walk->vma->vm_flags)) + return -EFAULT; + + *found_addr =3D addr; + return 1; + } + return 0; +} + +static const struct mm_walk_ops find_zeropage_ops =3D { + .pte_entry =3D find_zeropage_pte_entry, + .walk_lock =3D PGWALK_WRLOCK, +}; + +/** __gmap_helper_unshare_zeropages() - unshare all shared zeropages + * @mm: the mm whose zero pages are to be unshared + * + * Unshare all shared zeropages, replacing them by anonymous pages. Note t= hat + * we cannot simply zap all shared zeropages, because this could later + * trigger unexpected userfaultfd missing events. + * + * This must be called after mm->context.allow_cow_sharing was + * set to 0, to avoid future mappings of shared zeropages. + * + * mm contracts with s390, that even if mm were to remove a page table, + * and racing with walk_page_range_vma() calling pte_offset_map_lock() + * would fail, it will never insert a page table containing empty zero + * pages once mm_forbids_zeropage(mm) i.e. + * mm->context.allow_cow_sharing is set to 0. + */ +static int __gmap_helper_unshare_zeropages(struct mm_struct *mm) +{ + struct vm_area_struct *vma; + VMA_ITERATOR(vmi, mm, 0); + unsigned long addr; + vm_fault_t fault; + int rc; + + for_each_vma(vmi, vma) { + /* + * We could only look at COW mappings, but it's more future + * proof to catch unexpected zeropages in other mappings and + * fail. + */ + if ((vma->vm_flags & VM_PFNMAP) || is_vm_hugetlb_page(vma)) + continue; + addr =3D vma->vm_start; + +retry: + rc =3D walk_page_range_vma(vma, addr, vma->vm_end, + &find_zeropage_ops, &addr); + if (rc < 0) + return rc; + else if (!rc) + continue; + + /* addr was updated by find_zeropage_pte_entry() */ + fault =3D handle_mm_fault(vma, addr, + FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE, + NULL); + if (fault & VM_FAULT_OOM) + return -ENOMEM; + /* + * See break_ksm(): even after handle_mm_fault() returned 0, we + * must start the lookup from the current address, because + * handle_mm_fault() may back out if there's any difficulty. + * + * VM_FAULT_SIGBUS and VM_FAULT_SIGSEGV are unexpected but + * maybe they could trigger in the future on concurrent + * truncation. In that case, the shared zeropage would be gone + * and we can simply retry and make progress. + */ + cond_resched(); + goto retry; + } + + return 0; +} + +/** + * gmap_helper_disable_cow_sharing() - disable all COW sharing + * + * Disable most COW-sharing of memory pages for the whole process: + * (1) Disable KSM and unmerge/unshare any KSM pages. 
+ * (2) Disallow shared zeropages and unshare any zerpages that are mapped.
+ *
+ * Not that we currently don't bother with COW-shared pages that are shared
+ * with parent/child processes due to fork().
+ */
+int gmap_helper_disable_cow_sharing(void)
+{
+	struct mm_struct *mm = current->mm;
+	int rc;
+
+	mmap_assert_write_locked(mm);
+
+	if (!mm->context.allow_cow_sharing)
+		return 0;
+
+	mm->context.allow_cow_sharing = 0;
+
+	/* Replace all shared zeropages by anonymous pages. */
+	rc = __gmap_helper_unshare_zeropages(mm);
+	/*
+	 * Make sure to disable KSM (if enabled for the whole process or
+	 * individual VMAs). Note that nothing currently hinders user space
+	 * from re-enabling it.
+	 */
+	if (!rc)
+		rc = ksm_disable(mm);
+	if (rc)
+		mm->context.allow_cow_sharing = 1;
+	return rc;
+}
+EXPORT_SYMBOL_GPL(gmap_helper_disable_cow_sharing);
-- 
2.49.0
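
A minimal usage sketch of the helpers introduced above. The two wrapper
functions and their names are invented for illustration; only the
gmap_helper_*() calls and their locking requirements come from the patch:
gmap_helper_discard() works on userspace addresses of the given mm and needs
the mmap lock held, while gmap_helper_disable_cow_sharing() operates on
current->mm and needs the mmap lock held for writing, as the mmap_assert_*()
calls in gmap_helpers.c document.

#include <linux/mm.h>
#include <asm/gmap_helpers.h>

/* discard the backing of a guest memory range, given its host (vm) addresses */
static void discard_vm_range(struct mm_struct *mm, unsigned long vmaddr, unsigned long end)
{
	mmap_read_lock(mm);
	gmap_helper_discard(mm, vmaddr, end);
	mmap_read_unlock(mm);
}

/* forbid COW sharing (KSM, shared zeropages) for the current process */
static int forbid_cow_sharing(void)
{
	int rc;

	mmap_write_lock(current->mm);
	rc = gmap_helper_disable_cow_sharing();
	mmap_write_unlock(current->mm);
	return rc;
}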
From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, linux-s390@vger.kernel.org, frankja@linux.ibm.com, borntraeger@de.ibm.com, seiden@linux.ibm.com, nsg@linux.ibm.com, nrb@linux.ibm.com, david@redhat.com, hca@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, gor@linux.ibm.com, schlameuss@linux.ibm.com
Subject: [PATCH v3 4/4] KVM: s390: simplify and move pv code
Date: Thu, 22 May 2025 15:22:59 +0200
Message-ID: <20250522132259.167708-5-imbrenda@linux.ibm.com>
In-Reply-To: <20250522132259.167708-1-imbrenda@linux.ibm.com>
References: <20250522132259.167708-1-imbrenda@linux.ibm.com>

All functions in kvm/gmap.c fit better in kvm/pv.c instead.
Move and rename them appropriately, then delete the now empty kvm/gmap.c
and kvm/gmap.h.

Signed-off-by: Claudio Imbrenda
Reviewed-by: Nina Schoetterl-Glausch
Reviewed-by: Christoph Schlameuss
Reviewed-by: Steffen Eiden
---
 arch/s390/kernel/uv.c     |  12 ++--
 arch/s390/kvm/Makefile    |   2 +-
 arch/s390/kvm/gaccess.c   |   3 +-
 arch/s390/kvm/gmap-vsie.c |   1 -
 arch/s390/kvm/gmap.c      | 121 --------------------------------------
 arch/s390/kvm/gmap.h      |  39 ------------
 arch/s390/kvm/intercept.c |  10 +---
 arch/s390/kvm/kvm-s390.c  |   5 +-
 arch/s390/kvm/kvm-s390.h  |  42 +++++++++++++
 arch/s390/kvm/pv.c        |  61 ++++++++++++++++++-
 arch/s390/kvm/vsie.c      |  19 +++++-
 11 files changed, 133 insertions(+), 182 deletions(-)
 delete mode 100644 arch/s390/kvm/gmap.c
 delete mode 100644 arch/s390/kvm/gmap.h

diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 9a5d5be8acf4..644c110287c4 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -135,7 +135,7 @@ int uv_destroy_folio(struct folio *folio)
 {
 	int rc;
 
-	/* See gmap_make_secure(): large folios cannot be secure */
+	/* Large folios cannot be secure */
 	if (unlikely(folio_test_large(folio)))
 		return 0;
 
@@ -184,7 +184,7 @@ int uv_convert_from_secure_folio(struct folio *folio)
 {
 	int rc;
 
-	/* See gmap_make_secure(): large folios cannot be secure */
+	/* Large folios cannot be secure */
 	if (unlikely(folio_test_large(folio)))
 		return 0;
 
@@ -403,15 +403,15 @@ EXPORT_SYMBOL_GPL(make_hva_secure);
 
 /*
  * To be called with the folio locked or with an extra reference! This will
- * prevent gmap_make_secure from touching the folio concurrently. Having 2
- * parallel arch_make_folio_accessible is fine, as the UV calls will become a
- * no-op if the folio is already exported.
+ * prevent kvm_s390_pv_make_secure() from touching the folio concurrently.
+ * Having 2 parallel arch_make_folio_accessible is fine, as the UV calls will
+ * become a no-op if the folio is already exported.
  */
 int arch_make_folio_accessible(struct folio *folio)
 {
 	int rc = 0;
 
-	/* See gmap_make_secure(): large folios cannot be secure */
+	/* Large folios cannot be secure */
 	if (unlikely(folio_test_large(folio)))
 		return 0;
 
diff --git a/arch/s390/kvm/Makefile b/arch/s390/kvm/Makefile
index f0ffe874adc2..9a723c48b05a 100644
--- a/arch/s390/kvm/Makefile
+++ b/arch/s390/kvm/Makefile
@@ -8,7 +8,7 @@ include $(srctree)/virt/kvm/Makefile.kvm
 ccflags-y := -Ivirt/kvm -Iarch/s390/kvm
 
 kvm-y += kvm-s390.o intercept.o interrupt.o priv.o sigp.o
-kvm-y += diag.o gaccess.o guestdbg.o vsie.o pv.o gmap.o gmap-vsie.o
+kvm-y += diag.o gaccess.o guestdbg.o vsie.o pv.o gmap-vsie.o
 
 kvm-$(CONFIG_VFIO_PCI_ZDEV_KVM) += pci.o
 obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index f6fded15633a..e23670e1949c 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -16,9 +16,10 @@
 #include
 #include
 #include "kvm-s390.h"
-#include "gmap.h"
 #include "gaccess.h"
 
+#define GMAP_SHADOW_FAKE_TABLE 1ULL
+
 /*
  * vaddress union in order to easily decode a virtual address into its
  * region first index, region second index etc. parts.
diff --git a/arch/s390/kvm/gmap-vsie.c b/arch/s390/kvm/gmap-vsie.c index a6d1dbb04c97..56ef153eb8fe 100644 --- a/arch/s390/kvm/gmap-vsie.c +++ b/arch/s390/kvm/gmap-vsie.c @@ -22,7 +22,6 @@ #include =20 #include "kvm-s390.h" -#include "gmap.h" =20 /** * gmap_find_shadow - find a specific asce in the list of shadow tables diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c deleted file mode 100644 index 6d8944d1b4a0..000000000000 --- a/arch/s390/kvm/gmap.c +++ /dev/null @@ -1,121 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Guest memory management for KVM/s390 - * - * Copyright IBM Corp. 2008, 2020, 2024 - * - * Author(s): Claudio Imbrenda - * Martin Schwidefsky - * David Hildenbrand - * Janosch Frank - */ - -#include -#include -#include -#include -#include - -#include -#include -#include - -#include "gmap.h" - -/** - * gmap_make_secure() - make one guest page secure - * @gmap: the guest gmap - * @gaddr: the guest address that needs to be made secure - * @uvcb: the UVCB specifying which operation needs to be performed - * - * Context: needs to be called with kvm->srcu held. - * Return: 0 on success, < 0 in case of error. - */ -int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb) -{ - struct kvm *kvm =3D gmap->private; - unsigned long vmaddr; - - lockdep_assert_held(&kvm->srcu); - - vmaddr =3D gfn_to_hva(kvm, gpa_to_gfn(gaddr)); - if (kvm_is_error_hva(vmaddr)) - return -EFAULT; - return make_hva_secure(gmap->mm, vmaddr, uvcb); -} - -int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr) -{ - struct uv_cb_cts uvcb =3D { - .header.cmd =3D UVC_CMD_CONV_TO_SEC_STOR, - .header.len =3D sizeof(uvcb), - .guest_handle =3D gmap->guest_handle, - .gaddr =3D gaddr, - }; - - return gmap_make_secure(gmap, gaddr, &uvcb); -} - -/** - * __gmap_destroy_page() - Destroy a guest page. - * @gmap: the gmap of the guest - * @page: the page to destroy - * - * An attempt will be made to destroy the given guest page. If the attempt - * fails, an attempt is made to export the page. If both attempts fail, an - * appropriate error is returned. - * - * Context: must be called holding the mm lock for gmap->mm - */ -static int __gmap_destroy_page(struct gmap *gmap, struct page *page) -{ - struct folio *folio =3D page_folio(page); - int rc; - - /* - * See gmap_make_secure(): large folios cannot be secure. Small - * folio implies FW_LEVEL_PTE. - */ - if (folio_test_large(folio)) - return -EFAULT; - - rc =3D uv_destroy_folio(folio); - /* - * Fault handlers can race; it is possible that two CPUs will fault - * on the same secure page. One CPU can destroy the page, reboot, - * re-enter secure mode and import it, while the second CPU was - * stuck at the beginning of the handler. At some point the second - * CPU will be able to progress, and it will not be able to destroy - * the page. In that case we do not want to terminate the process, - * we instead try to export the page. - */ - if (rc) - rc =3D uv_convert_from_secure_folio(folio); - - return rc; -} - -/** - * gmap_destroy_page() - Destroy a guest page. - * @gmap: the gmap of the guest - * @gaddr: the guest address to destroy - * - * An attempt will be made to destroy the given guest page. If the attempt - * fails, an attempt is made to export the page. If both attempts fail, an - * appropriate error is returned. - * - * Context: may sleep. 
- */ -int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr) -{ - struct page *page; - int rc =3D 0; - - mmap_read_lock(gmap->mm); - page =3D gfn_to_page(gmap->private, gpa_to_gfn(gaddr)); - if (page) - rc =3D __gmap_destroy_page(gmap, page); - kvm_release_page_clean(page); - mmap_read_unlock(gmap->mm); - return rc; -} diff --git a/arch/s390/kvm/gmap.h b/arch/s390/kvm/gmap.h deleted file mode 100644 index c8f031c9ea5f..000000000000 --- a/arch/s390/kvm/gmap.h +++ /dev/null @@ -1,39 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * KVM guest address space mapping code - * - * Copyright IBM Corp. 2007, 2016, 2025 - * Author(s): Martin Schwidefsky - * Claudio Imbrenda - */ - -#ifndef ARCH_KVM_S390_GMAP_H -#define ARCH_KVM_S390_GMAP_H - -#define GMAP_SHADOW_FAKE_TABLE 1ULL - -int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb); -int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr); -int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr); -struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce, int edat= _level); - -/** - * gmap_shadow_valid - check if a shadow guest address space matches the - * given properties and is still valid - * @sg: pointer to the shadow guest address space structure - * @asce: ASCE for which the shadow table is requested - * @edat_level: edat level to be used for the shadow translation - * - * Returns 1 if the gmap shadow is still valid and matches the given - * properties, the caller can continue using it. Returns 0 otherwise, the - * caller has to request a new shadow gmap in this case. - * - */ -static inline int gmap_shadow_valid(struct gmap *sg, unsigned long asce, i= nt edat_level) -{ - if (sg->removed) - return 0; - return sg->orig_asce =3D=3D asce && sg->edat_level =3D=3D edat_level; -} - -#endif diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c index b4834bd4d216..c7908950c1f4 100644 --- a/arch/s390/kvm/intercept.c +++ b/arch/s390/kvm/intercept.c @@ -16,13 +16,11 @@ #include #include #include -#include =20 #include "kvm-s390.h" #include "gaccess.h" #include "trace.h" #include "trace-s390.h" -#include "gmap.h" =20 u8 kvm_s390_get_ilen(struct kvm_vcpu *vcpu) { @@ -546,7 +544,7 @@ static int handle_pv_uvc(struct kvm_vcpu *vcpu) guest_uvcb->header.cmd); return 0; } - rc =3D gmap_make_secure(vcpu->arch.gmap, uvcb.gaddr, &uvcb); + rc =3D kvm_s390_pv_make_secure(vcpu->kvm, uvcb.gaddr, &uvcb); /* * If the unpin did not succeed, the guest will exit again for the UVC * and we will retry the unpin. @@ -654,10 +652,8 @@ int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu) break; case ICPT_PV_PREF: rc =3D 0; - gmap_convert_to_secure(vcpu->arch.gmap, - kvm_s390_get_prefix(vcpu)); - gmap_convert_to_secure(vcpu->arch.gmap, - kvm_s390_get_prefix(vcpu) + PAGE_SIZE); + kvm_s390_pv_convert_to_secure(vcpu->kvm, kvm_s390_get_prefix(vcpu)); + kvm_s390_pv_convert_to_secure(vcpu->kvm, kvm_s390_get_prefix(vcpu) + PAG= E_SIZE); break; default: return -EOPNOTSUPP; diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 10cfc047525d..d5ad10791c25 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -53,7 +53,6 @@ #include "kvm-s390.h" #include "gaccess.h" #include "pci.h" -#include "gmap.h" =20 #define CREATE_TRACE_POINTS #include "trace.h" @@ -4976,7 +4975,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu= *vcpu) * previous protected guest. The old pages need to be destroyed * so the new guest can use them. 
*/ - if (gmap_destroy_page(vcpu->arch.gmap, gaddr)) { + if (kvm_s390_pv_destroy_page(vcpu->kvm, gaddr)) { /* * Either KVM messed up the secure guest mapping or the * same page is mapped into multiple secure guests. @@ -4998,7 +4997,7 @@ static int vcpu_post_run_handle_fault(struct kvm_vcpu= *vcpu) * guest has not been imported yet. Try to import the page into * the protected guest. */ - rc =3D gmap_convert_to_secure(vcpu->arch.gmap, gaddr); + rc =3D kvm_s390_pv_convert_to_secure(vcpu->kvm, gaddr); if (rc =3D=3D -EINVAL) send_sig(SIGSEGV, current, 0); if (rc !=3D -ENXIO) diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h index 8d3bbb2dd8d2..c44fe0c3a097 100644 --- a/arch/s390/kvm/kvm-s390.h +++ b/arch/s390/kvm/kvm-s390.h @@ -308,6 +308,9 @@ int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void _= _user *buff_user, u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc); int kvm_s390_pv_dump_complete(struct kvm *kvm, void __user *buff_user, u16 *rc, u16 *rrc); +int kvm_s390_pv_destroy_page(struct kvm *kvm, unsigned long gaddr); +int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr); +int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uv= cb); =20 static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm) { @@ -319,6 +322,41 @@ static inline u64 kvm_s390_pv_cpu_get_handle(struct kv= m_vcpu *vcpu) return vcpu->arch.pv.handle; } =20 +/** + * __kvm_s390_pv_destroy_page() - Destroy a guest page. + * @page: the page to destroy + * + * An attempt will be made to destroy the given guest page. If the attempt + * fails, an attempt is made to export the page. If both attempts fail, an + * appropriate error is returned. + * + * Context: must be called holding the mm lock for gmap->mm + */ +static inline int __kvm_s390_pv_destroy_page(struct page *page) +{ + struct folio *folio =3D page_folio(page); + int rc; + + /* Large folios cannot be secure. Small folio implies FW_LEVEL_PTE. */ + if (folio_test_large(folio)) + return -EFAULT; + + rc =3D uv_destroy_folio(folio); + /* + * Fault handlers can race; it is possible that two CPUs will fault + * on the same secure page. One CPU can destroy the page, reboot, + * re-enter secure mode and import it, while the second CPU was + * stuck at the beginning of the handler. At some point the second + * CPU will be able to progress, and it will not be able to destroy + * the page. In that case we do not want to terminate the process, + * we instead try to export the page. 
+ */ + if (rc) + rc =3D uv_convert_from_secure_folio(folio); + + return rc; +} + /* implemented in interrupt.c */ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu); void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu); @@ -398,6 +436,10 @@ void kvm_s390_vsie_gmap_notifier(struct gmap *gmap, un= signed long start, unsigned long end); void kvm_s390_vsie_init(struct kvm *kvm); void kvm_s390_vsie_destroy(struct kvm *kvm); +int gmap_shadow_valid(struct gmap *sg, unsigned long asce, int edat_level); + +/* implemented in gmap-vsie.c */ +struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce, int edat= _level); =20 /* implemented in sigp.c */ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu); diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c index 22c012aa5206..14c330ec8ceb 100644 --- a/arch/s390/kvm/pv.c +++ b/arch/s390/kvm/pv.c @@ -17,7 +17,6 @@ #include #include #include "kvm-s390.h" -#include "gmap.h" =20 bool kvm_s390_pv_is_protected(struct kvm *kvm) { @@ -33,6 +32,64 @@ bool kvm_s390_pv_cpu_is_protected(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL_GPL(kvm_s390_pv_cpu_is_protected); =20 +/** + * kvm_s390_pv_make_secure() - make one guest page secure + * @kvm: the guest + * @gaddr: the guest address that needs to be made secure + * @uvcb: the UVCB specifying which operation needs to be performed + * + * Context: needs to be called with kvm->srcu held. + * Return: 0 on success, < 0 in case of error. + */ +int kvm_s390_pv_make_secure(struct kvm *kvm, unsigned long gaddr, void *uv= cb) +{ + unsigned long vmaddr; + + lockdep_assert_held(&kvm->srcu); + + vmaddr =3D gfn_to_hva(kvm, gpa_to_gfn(gaddr)); + if (kvm_is_error_hva(vmaddr)) + return -EFAULT; + return make_hva_secure(kvm->mm, vmaddr, uvcb); +} + +int kvm_s390_pv_convert_to_secure(struct kvm *kvm, unsigned long gaddr) +{ + struct uv_cb_cts uvcb =3D { + .header.cmd =3D UVC_CMD_CONV_TO_SEC_STOR, + .header.len =3D sizeof(uvcb), + .guest_handle =3D kvm_s390_pv_get_handle(kvm), + .gaddr =3D gaddr, + }; + + return kvm_s390_pv_make_secure(kvm, gaddr, &uvcb); +} + +/** + * kvm_s390_pv_destroy_page() - Destroy a guest page. + * @kvm: the guest + * @gaddr: the guest address to destroy + * + * An attempt will be made to destroy the given guest page. If the attempt + * fails, an attempt is made to export the page. If both attempts fail, an + * appropriate error is returned. + * + * Context: may sleep. 
+ */ +int kvm_s390_pv_destroy_page(struct kvm *kvm, unsigned long gaddr) +{ + struct page *page; + int rc =3D 0; + + mmap_read_lock(kvm->mm); + page =3D gfn_to_page(kvm, gpa_to_gfn(gaddr)); + if (page) + rc =3D __kvm_s390_pv_destroy_page(page); + kvm_release_page_clean(page); + mmap_read_unlock(kvm->mm); + return rc; +} + /** * struct pv_vm_to_be_destroyed - Represents a protected VM that needs to * be destroyed @@ -638,7 +695,7 @@ static int unpack_one(struct kvm *kvm, unsigned long ad= dr, u64 tweak, .tweak[0] =3D tweak, .tweak[1] =3D offset, }; - int ret =3D gmap_make_secure(kvm->arch.gmap, addr, &uvcb); + int ret =3D kvm_s390_pv_make_secure(kvm, addr, &uvcb); unsigned long vmaddr; bool unlocked; =20 diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c index a78df3a4f353..13a9661d2b28 100644 --- a/arch/s390/kvm/vsie.c +++ b/arch/s390/kvm/vsie.c @@ -23,7 +23,6 @@ #include #include "kvm-s390.h" #include "gaccess.h" -#include "gmap.h" =20 enum vsie_page_flags { VSIE_PAGE_IN_USE =3D 0, @@ -68,6 +67,24 @@ struct vsie_page { __u8 fac[S390_ARCH_FAC_LIST_SIZE_BYTE]; /* 0x0800 */ }; =20 +/** + * gmap_shadow_valid() - check if a shadow guest address space matches the + * given properties and is still valid + * @sg: pointer to the shadow guest address space structure + * @asce: ASCE for which the shadow table is requested + * @edat_level: edat level to be used for the shadow translation + * + * Returns 1 if the gmap shadow is still valid and matches the given + * properties, the caller can continue using it. Returns 0 otherwise; the + * caller has to request a new shadow gmap in this case. + */ +int gmap_shadow_valid(struct gmap *sg, unsigned long asce, int edat_level) +{ + if (sg->removed) + return 0; + return sg->orig_asce =3D=3D asce && sg->edat_level =3D=3D edat_level; +} + /* trigger a validity icpt for the given scb */ static int set_validity_icpt(struct kvm_s390_sie_block *scb, __u16 reason_code) --=20 2.49.0
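
A rough sketch of how the relocated entry points are meant to be used after
this patch. The wrapper function below is invented for illustration; the
kvm_s390_pv_*() names, the kvm->srcu requirement of kvm_s390_pv_make_secure()
(and thus of kvm_s390_pv_convert_to_secure()), and the fact that
kvm_s390_pv_destroy_page() takes the mmap read lock itself all come from the
patch above.

/* import a guest page into the protected VM, or destroy it on reuse */
static int pv_import_or_destroy(struct kvm_vcpu *vcpu, unsigned long gaddr, bool destroy)
{
	/* kvm_s390_pv_make_secure() documents that kvm->srcu must be held */
	lockdep_assert_held(&vcpu->kvm->srcu);

	if (destroy)
		return kvm_s390_pv_destroy_page(vcpu->kvm, gaddr);	/* locks kvm->mm internally */
	return kvm_s390_pv_convert_to_secure(vcpu->kvm, gaddr);
}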