Subject: [PATCH 1/2] x86/time: change initiation of the calibration timer
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné
References: <35443b5a-1410-7099-a937-e9f537bbe989@suse.com>
Date: Fri, 29 Jan 2021 17:19:55 +0100

Setting the timer a second (EPOCH) into the future at a random point
during boot (prior to bringing up APs and prior to launching Dom0) does
not yield predictable results: the timer may expire while we're still
bringing up APs (too early) or when Dom0 is already booting (too late).

Instead, invoke the timer handler function explicitly at a predictable
point in time, once we've established the rendezvous function to use
(and hence also once all APs are online). Through the raising and
handling of TIMER_SOFTIRQ, this then also has the effect of arming the
timer.
Signed-off-by: Jan Beulich

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -854,9 +854,7 @@ static void resume_platform_timer(void)
 
 static void __init reset_platform_timer(void)
 {
-    /* Deactivate any timers running */
     kill_timer(&plt_overflow_timer);
-    kill_timer(&calibration_timer);
 
     /* Reset counters and stamps */
     spin_lock_irq(&platform_timer_lock);
@@ -1956,19 +1954,13 @@ static void __init reset_percpu_time(voi
     t->stamp.master_stime = t->stamp.local_stime;
 }
 
-static void __init try_platform_timer_tail(bool late)
+static void __init try_platform_timer_tail(void)
 {
     init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
     plt_overflow(NULL);
 
     platform_timer_stamp = plt_stamp64;
     stime_platform_stamp = NOW();
-
-    if ( !late )
-        init_percpu_time();
-
-    init_timer(&calibration_timer, time_calibration, NULL, 0);
-    set_timer(&calibration_timer, NOW() + EPOCH);
 }
 
 /* Late init function, after all cpus have booted */
@@ -2009,10 +2001,13 @@ static int __init verify_tsc_reliability
             time_calibration_rendezvous_fn = time_calibration_nop_rendezvous;
 
             /* Finish platform timer switch. */
-            try_platform_timer_tail(true);
+            try_platform_timer_tail();
 
             printk("Switched to Platform timer %s TSC\n",
                    freq_string(plt_src.frequency));
+
+            time_calibration(NULL);
+
             return 0;
         }
     }
@@ -2033,6 +2028,8 @@ static int __init verify_tsc_reliability
          !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
         time_calibration_rendezvous_fn = time_calibration_tsc_rendezvous;
 
+    time_calibration(NULL);
+
     return 0;
 }
 __initcall(verify_tsc_reliability);
@@ -2048,7 +2045,11 @@ int __init init_xen_time(void)
     do_settime(get_wallclock_time(), 0, NOW());
 
     /* Finish platform timer initialization. */
-    try_platform_timer_tail(false);
+    try_platform_timer_tail();
+
+    init_percpu_time();
+
+    init_timer(&calibration_timer, time_calibration, NULL, 0);
 
     /*
     * Setup space to track per-socket TSC_ADJUST values.
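As a toy illustration of the arming scheme the patch switches to — the
handler is invoked explicitly once at a predictable point, and each run
re-arms its own timer — consider the following sketch. All names here
(toy_timer, fake_now, the EPOCH value) are hypothetical stand-ins, not
Xen's real implementation:

```c
/* Toy model (hypothetical names, not Xen code) of a self-re-arming
 * periodic handler: one explicit call starts the periodic cycle. */
#include <stdint.h>

#define EPOCH 1000              /* stand-in for Xen's one-second EPOCH */

struct toy_timer {
    int64_t expires;
};

struct toy_timer calibration_timer;
int64_t fake_now;               /* stand-in for NOW() */

void set_timer(struct toy_timer *t, int64_t expires)
{
    t->expires = expires;
}

/* The handler does its work, then re-arms its own timer; hence a single
 * explicit invocation at a predictable point is enough to get the
 * periodic calibration going. */
void time_calibration(void *unused)
{
    (void)unused;
    /* ... the calibration rendezvous would run here ... */
    set_timer(&calibration_timer, fake_now + EPOCH);
}
```

With this shape, the caller controls exactly when the first run happens
(e.g. once all APs are online), instead of depending on when a timer
armed earlier during boot happens to fire.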
Subject: [PATCH RFC 2/2] x86/time: don't move TSC backwards in time_calibration_tsc_rendezvous()
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Claudemir Todo Bom
References: <35443b5a-1410-7099-a937-e9f537bbe989@suse.com>
Date: Fri, 29 Jan 2021 17:20:55 +0100

While doing this for small amounts may be okay, the unconditional use of
CPU0's value here has been found to be a problem when the boot time TSC
of the BSP was behind that of all APs by more than a second. In
particular, because get_s_time_fixed() produces insane output when the
calculated delta is negative, we can't allow this to happen.

On the first iteration, have all other CPUs sort out the highest TSC
value any one of them has read. On the second iteration, if that maximum
is higher than CPU0's, update CPU0's recorded value from that taken in
the first iteration, along with the system time. Use the resulting value
on the last iteration to write everyone's TSCs.

Reported-by: Claudemir Todo Bom
Signed-off-by: Jan Beulich
---
Since CPU0 reads its TSC last on the first iteration, if TSCs were
perfectly synced there shouldn't ever be a need to update. However, even
on the TSC-reliable system I first tested this on (using "tsc=skewed" to
get this rendezvous function into use in the first place), updates by up
to several thousand clocks did happen. I wonder whether this points at
some problem with the approach that I'm not (yet) seeing.

Considering the sufficiently modern CPU it's using, I suspect the system
wouldn't even need to turn off TSC_RELIABLE if only there wasn't the
boot time skew. Hence another approach might be to fix this boot time
skew.
Of course, to recognize whether the TSCs then still aren't in sync, we'd
need to run tsc_check_reliability() sufficiently long after that
adjustment. The above and the desire to have the change tested by the
reporter are the reasons for the RFC.

As per the comment ahead of it, the original purpose of the function was
to deal with TSCs halted in deep C states. While this probably explains
why only forward moves were ever expected, I don't see how this could
have been reliable in case CPU0 was deep-sleeping for a sufficiently
long time. My only guess here is a hidden assumption of CPU0 never being
idle for long enough.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1658,7 +1658,7 @@ struct calibration_rendezvous {
     cpumask_t cpu_calibration_map;
     atomic_t semaphore;
     s_time_t master_stime;
-    u64 master_tsc_stamp;
+    uint64_t master_tsc_stamp, max_tsc_stamp;
 };
 
 static void
@@ -1696,6 +1696,21 @@ static void time_calibration_tsc_rendezv
                 r->master_stime = read_platform_stime(NULL);
                 r->master_tsc_stamp = rdtsc_ordered();
             }
+            else if ( r->master_tsc_stamp < r->max_tsc_stamp )
+            {
+                /*
+                 * We want to avoid moving the TSC backwards for any CPU.
+                 * Use the largest value observed anywhere on the first
+                 * iteration and bump up our previously recorded system
+                 * time accordingly.
+                 */
+                uint64_t delta = r->max_tsc_stamp - r->master_tsc_stamp;
+
+                r->master_stime += scale_delta(delta,
+                                               &this_cpu(cpu_time).tsc_scale);
+                r->master_tsc_stamp = r->max_tsc_stamp;
+            }
+
             atomic_inc(&r->semaphore);
 
             if ( i == 0 )
@@ -1711,6 +1726,17 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) < total_cpus )
                 cpu_relax();
 
+            if ( _r )
+            {
+                uint64_t tsc = rdtsc_ordered(), cur;
+
+                while ( tsc > (cur = r->max_tsc_stamp) )
+                    if ( cmpxchg(&r->max_tsc_stamp, cur, tsc) == cur )
+                        break;
+
+                _r = NULL;
+            }
+
             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
 
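The max_tsc_stamp update in the hunk above is a classic lock-free
"record the maximum" loop: read the current maximum, and only publish
your value if it is larger, retrying when another CPU raced in between.
A minimal sketch of the same idiom, using C11 atomics in place of Xen's
cmpxchg() (record_max_tsc and max_tsc_stamp are illustrative names, not
Xen code):

```c
/* Lock-free "keep the maximum" idiom, sketched with C11 atomics.
 * atomic_compare_exchange_weak() plays the role of Xen's cmpxchg(). */
#include <stdatomic.h>
#include <stdint.h>

_Atomic uint64_t max_tsc_stamp;

/* Publish tsc as the new maximum unless a larger value is already
 * recorded.  On a failed exchange, cur is refreshed with the value
 * another CPU stored, and the loop retries only while tsc still wins. */
void record_max_tsc(uint64_t tsc)
{
    uint64_t cur = atomic_load(&max_tsc_stamp);

    while ( tsc > cur )
        if ( atomic_compare_exchange_weak(&max_tsc_stamp, &cur, tsc) )
            break;
}
```

The loop terminates because each iteration either succeeds or observes a
strictly larger current value; once the recorded maximum reaches tsc, the
while condition fails and smaller values are never written.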