From: "Wu, Jiaxin"
To: devel@edk2.groups.io
Cc: Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann, Rahul Kumar
Subject: [edk2-devel] [PATCH v1 7/7] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib
Date: Fri, 3 Nov 2023 23:30:12 +0800
Message-Id: <20231103153012.3704-8-jiaxin.wu@intel.com>
In-Reply-To: <20231103153012.3704-1-jiaxin.wu@intel.com>
References: <20231103153012.3704-1-jiaxin.wu@intel.com>
The SmmCpuSyncLib library class defines the SMM CPU sync flow, which is
aligned with the existing SMM CPU driver's sync behavior. This patch
consumes the SmmCpuSyncLib instance directly. With this change, the SMM
CPU sync flow/logic can be customized with a different implementation
for any purpose, e.g. performance tuning or handling a specific
register.

Change-Id: Id034de47b85743c125f0d76420947e0dd9e69518
Cc: Eric Dong
Cc: Ray Ni
Cc: Zeng Star
Cc: Gerd Hoffmann
Cc: Rahul Kumar
Signed-off-by: Jiaxin Wu
---
 UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c        | 256 +++++----------------
 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h   |   6 +-
 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf |   1 +
 3 files changed, 49 insertions(+), 214 deletions(-)

diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index 5a42a5dd12..a30b2aa234 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -27,122 +27,10 @@ MM_COMPLETION  mSmmStartupThisApToken;
 //
 // Processor specified by mPackageFirstThreadIndex[PackageIndex] will do the package-scope register check.
 //
 UINT32  *mPackageFirstThreadIndex = NULL;
 
-/**
-  Performs an atomic compare exchange operation to get semaphore.
-  The compare exchange operation must be performed using
-  MP safe mechanisms.
-
-  @param      Sem        IN:  32-bit unsigned integer
-                         OUT: original integer - 1
-  @return     Original integer - 1
-
-**/
-UINT32
-WaitForSemaphore (
-  IN OUT volatile UINT32  *Sem
-  )
-{
-  UINT32  Value;
-
-  for ( ; ;) {
-    Value = *Sem;
-    if ((Value != 0) &&
-        (InterlockedCompareExchange32 (
-           (UINT32 *)Sem,
-           Value,
-           Value - 1
-           ) == Value))
-    {
-      break;
-    }
-
-    CpuPause ();
-  }
-
-  return Value - 1;
-}
-
-/**
-  Performs an atomic compare exchange operation to release semaphore.
-  The compare exchange operation must be performed using
-  MP safe mechanisms.
-
-  @param      Sem        IN:  32-bit unsigned integer
-                         OUT: original integer + 1
-  @return     Original integer + 1
-
-**/
-UINT32
-ReleaseSemaphore (
-  IN OUT volatile UINT32  *Sem
-  )
-{
-  UINT32  Value;
-
-  do {
-    Value = *Sem;
-  } while (Value + 1 != 0 &&
-           InterlockedCompareExchange32 (
-             (UINT32 *)Sem,
-             Value,
-             Value + 1
-             ) != Value);
-
-  return Value + 1;
-}
-
-/**
-  Performs an atomic compare exchange operation to lock semaphore.
-  The compare exchange operation must be performed using
-  MP safe mechanisms.
-
-  @param      Sem        IN:  32-bit unsigned integer
-                         OUT: -1
-  @return     Original integer
-
-**/
-UINT32
-LockdownSemaphore (
-  IN OUT volatile UINT32  *Sem
-  )
-{
-  UINT32  Value;
-
-  do {
-    Value = *Sem;
-  } while (InterlockedCompareExchange32 (
-             (UINT32 *)Sem,
-             Value,
-             (UINT32)-1
-             ) != Value);
-
-  return Value;
-}
-
-/**
-  Used for BSP to wait all APs.
-  Wait all APs to performs an atomic compare exchange operation to release semaphore.
-
-  @param   NumberOfAPs      AP number
-
-**/
-VOID
-WaitForAllAPs (
-  IN UINTN  NumberOfAPs
-  )
-{
-  UINTN  BspIndex;
-
-  BspIndex = mSmmMpSyncData->BspIndex;
-  while (NumberOfAPs-- > 0) {
-    WaitForSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
-  }
-}
-
 /**
   Used for BSP to release all APs.
   Performs an atomic compare exchange operation to release semaphore
   for each AP.
 
@@ -154,57 +42,15 @@ ReleaseAllAPs (
 {
   UINTN  Index;
 
   for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
     if (IsPresentAp (Index)) {
-      ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
+      SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SmmCpuSyncCtx, Index, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
     }
   }
 }
 
-/**
-  Used for BSP to release one AP.
-
-  @param   ApSem     IN:  32-bit unsigned integer
-                     OUT: original integer + 1
-**/
-VOID
-ReleaseOneAp (
-  IN OUT volatile UINT32  *ApSem
-  )
-{
-  ReleaseSemaphore (ApSem);
-}
-
-/**
-  Used for AP to wait BSP.
-
-  @param   ApSem     IN:  32-bit unsigned integer
-                     OUT: original integer 0
-**/
-VOID
-WaitForBsp (
-  IN OUT volatile UINT32  *ApSem
-  )
-{
-  WaitForSemaphore (ApSem);
-}
-
-/**
-  Used for AP to release BSP.
-
-  @param   BspSem    IN:  32-bit unsigned integer
-                     OUT: original integer + 1
-**/
-VOID
-ReleaseBsp (
-  IN OUT volatile UINT32  *BspSem
-  )
-{
-  ReleaseSemaphore (BspSem);
-}
-
 /**
   Check whether the index of CPU perform the package level register
   programming during System Management Mode initialization.
 
   The index of Processor specified by mPackageFirstThreadIndex[PackageIndex]
@@ -292,35 +138,35 @@ AllCpusInSmmExceptBlockedDisabled (
 
   BlockedCount  = 0;
   DisabledCount = 0;
 
   //
-  // Check to make sure mSmmMpSyncData->Counter is valid and not locked.
+  // Check to make sure the CPU arrival count is valid and not locked.
   //
-  ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
+  ASSERT (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SmmCpuSyncCtx) <= mNumberOfCpus);
 
   //
   // Check whether all CPUs in SMM.
   //
-  if (*mSmmMpSyncData->Counter == mNumberOfCpus) {
+  if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SmmCpuSyncCtx) == mNumberOfCpus) {
     return TRUE;
   }
 
   //
   // Check for the Blocked & Disabled Exceptions Case.
   //
   GetSmmDelayedBlockedDisabledCount (NULL, &BlockedCount, &DisabledCount);
 
   //
-  // *mSmmMpSyncData->Counter might be updated by all APs concurrently. The value
+  // The CPU arrival count might be updated by all APs concurrently. The value
   // can be dynamic changed. If some Aps enter the SMI after the BlockedCount &
-  // DisabledCount check, then the *mSmmMpSyncData->Counter will be increased, thus
-  // leading the *mSmmMpSyncData->Counter + BlockedCount + DisabledCount > mNumberOfCpus.
+  // DisabledCount check, then the CPU arrival count will be increased, thus
+  // leading the retrieved CPU arrival count + BlockedCount + DisabledCount > mNumberOfCpus.
   // since the BlockedCount & DisabledCount are local variable, it's ok here only for
   // the checking of all CPUs In Smm.
   //
-  if (*mSmmMpSyncData->Counter + BlockedCount + DisabledCount >= mNumberOfCpus) {
+  if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SmmCpuSyncCtx) + BlockedCount + DisabledCount >= mNumberOfCpus) {
     return TRUE;
   }
 
   return FALSE;
 }
@@ -396,11 +242,11 @@ SmmWaitForApArrival (
   PERF_FUNCTION_BEGIN ();
 
   DelayedCount = 0;
   BlockedCount = 0;
 
-  ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
+  ASSERT (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SmmCpuSyncCtx) <= mNumberOfCpus);
 
   LmceEn     = FALSE;
   LmceSignal = FALSE;
   if (mMachineCheckSupported) {
     LmceEn     = IsLmceOsEnabled ();
@@ -447,11 +293,11 @@ SmmWaitForApArrival (
   //  d) We don't add code to check SMI disabling status to skip sending IPI to SMI disabled APs, because:
   //     - In traditional flow, SMI disabling is discouraged.
   //     - In relaxed flow, CheckApArrival() will check SMI disabling status before calling this function.
   //     In both cases, adding SMI-disabling checking code increases overhead.
   //
-  if (*mSmmMpSyncData->Counter < mNumberOfCpus) {
+  if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SmmCpuSyncCtx) < mNumberOfCpus) {
     //
     // Send SMI IPIs to bring outside processors in
     //
     for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
       if (!(*(mSmmMpSyncData->CpuData[Index].Present)) && (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId != INVALID_APIC_ID)) {
@@ -548,11 +394,11 @@ WaitForAllAPsNotBusy (
   Check whether it is an present AP.
 
   @param   CpuIndex      The AP index which calls this function.
 
   @retval  TRUE   It's a present AP.
-  @retval  TRUE   This is not an AP or it is not present.
+  @retval  FALSE  This is not an AP or it is not present.
 
 **/
 BOOLEAN
 IsPresentAp (
   IN UINTN  CpuIndex
@@ -659,30 +505,30 @@ BSPHandler (
     // Wait for APs to arrive
     //
     SmmWaitForApArrival ();
 
     //
-    // Lock the counter down and retrieve the number of APs
+    // Lock the door for late-coming CPU check-in and retrieve the arrived number of APs
     //
     *mSmmMpSyncData->AllCpusInSync = TRUE;
-    ApCount                        = LockdownSemaphore (mSmmMpSyncData->Counter) - 1;
+    ApCount                        = SmmCpuSyncLockDoor (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex) - 1;
 
     //
     // Wait for all APs:
    // 1. Make sure all Aps have set the Present.
    // 2. Get ready for programming MTRRs.
     //
-    WaitForAllAPs (ApCount);
+    SmmCpuSyncWaitForAllAPs (mSmmMpSyncData->SmmCpuSyncCtx, ApCount, CpuIndex);
 
     if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
       //
       // Signal all APs it's time for backup MTRRs
       //
       ReleaseAllAPs ();
 
       //
-      // WaitForBsp() may wait for ever if an AP happens to enter SMM at
+      // SmmCpuSyncWaitForBsp() may wait for ever if an AP happens to enter SMM at
       // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
       // to a large enough value to avoid this situation.
       // Note: For HT capable CPUs, threads within a core share the same set of MTRRs.
       // We do the backup first and then set MTRR to avoid race condition for threads
       // in the same core.
@@ -690,28 +536,28 @@ BSPHandler (
       MtrrGetAllMtrrs (&Mtrrs);
 
       //
       // Wait for all APs to complete their MTRR saving
       //
-      WaitForAllAPs (ApCount);
+      SmmCpuSyncWaitForAllAPs (mSmmMpSyncData->SmmCpuSyncCtx, ApCount, CpuIndex);
 
       //
       // Let all processors program SMM MTRRs together
       //
       ReleaseAllAPs ();
 
       //
-      // WaitForBsp() may wait for ever if an AP happens to enter SMM at
+      // SmmCpuSyncWaitForBsp() may wait for ever if an AP happens to enter SMM at
       // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
       // to a large enough value to avoid this situation.
       //
       ReplaceOSMtrrs (CpuIndex);
 
       //
       // Wait for all APs to complete their MTRR programming
       //
-      WaitForAllAPs (ApCount);
+      SmmCpuSyncWaitForAllAPs (mSmmMpSyncData->SmmCpuSyncCtx, ApCount, CpuIndex);
     }
   }
 
   //
   // The BUSY lock is initialized to Acquired state
@@ -743,14 +589,14 @@ BSPHandler (
   // make those APs to exit SMI synchronously. APs which arrive later will be excluded and
   // will run through freely.
   //
   if ((SyncMode != SmmCpuSyncModeTradition) && !SmmCpuFeaturesNeedConfigureMtrrs ()) {
     //
-    // Lock the counter down and retrieve the number of APs
+    // Lock the door for late-coming CPU check-in and retrieve the arrived number of APs
     //
     *mSmmMpSyncData->AllCpusInSync = TRUE;
-    ApCount                        = LockdownSemaphore (mSmmMpSyncData->Counter) - 1;
+    ApCount                        = SmmCpuSyncLockDoor (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex) - 1;
     //
     // Make sure all APs have their Present flag set
     //
     while (TRUE) {
       PresentCount = 0;
@@ -774,11 +620,11 @@ BSPHandler (
 
   if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
     //
     // Wait for all APs to complete their pending tasks
     //
-    WaitForAllAPs (ApCount);
+    SmmCpuSyncWaitForAllAPs (mSmmMpSyncData->SmmCpuSyncCtx, ApCount, CpuIndex);
 
     //
     // Signal APs to restore MTRRs
     //
     ReleaseAllAPs ();
@@ -790,11 +636,11 @@ BSPHandler (
     MtrrSetAllMtrrs (&Mtrrs);
 
     //
     // Wait for all APs to complete MTRR programming
     //
-    WaitForAllAPs (ApCount);
+    SmmCpuSyncWaitForAllAPs (mSmmMpSyncData->SmmCpuSyncCtx, ApCount, CpuIndex);
 
     //
     // Signal APs to Reset states/semaphore for this processor
     //
     ReleaseAllAPs ();
@@ -818,11 +664,11 @@ BSPHandler (
 
   //
   // Gather APs to exit SMM synchronously. Note the Present flag is cleared by now but
   // WaitForAllAps does not depend on the Present flag.
   //
-  WaitForAllAPs (ApCount);
+  SmmCpuSyncWaitForAllAPs (mSmmMpSyncData->SmmCpuSyncCtx, ApCount, CpuIndex);
 
   //
   // At this point, all APs should have exited from APHandler().
   // Migrate the SMM MP performance logging to standard SMM performance logging.
   // Any SMM MP performance logging after this point will be migrated in next SMI.
@@ -844,11 +690,11 @@ BSPHandler (
   }
 
   //
   // Allow APs to check in from this point on
   //
-  *mSmmMpSyncData->Counter                  = 0;
+  SmmCpuSyncContextReset (mSmmMpSyncData->SmmCpuSyncCtx);
   *mSmmMpSyncData->AllCpusInSync            = FALSE;
   mSmmMpSyncData->AllApArrivedWithException = FALSE;
 
   PERF_FUNCTION_END ();
 }
@@ -914,21 +760,21 @@ APHandler (
 
       if (!(*mSmmMpSyncData->InsideSmm)) {
         //
         // Give up since BSP is unable to enter SMM
         // and signal the completion of this AP
-        // Reduce the mSmmMpSyncData->Counter!
+        // Reduce the CPU arrival count!
         //
-        WaitForSemaphore (mSmmMpSyncData->Counter);
+        SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex);
         return;
       }
     } else {
       //
       // Don't know BSP index. Give up without sending IPI to BSP.
-      // Reduce the mSmmMpSyncData->Counter!
+      // Reduce the CPU arrival count!
       //
-      WaitForSemaphore (mSmmMpSyncData->Counter);
+      SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex);
       return;
     }
   }
 
   //
@@ -946,50 +792,50 @@ APHandler (
     //
     // Notify BSP of arrival at this point
     // 1. Set the Present.
     // 2. Get ready for programming MTRRs.
     //
-    ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+    SmmCpuSyncReleaseBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
   }
 
   if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
     //
     // Wait for the signal from BSP to backup MTRRs
     //
-    WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+    SmmCpuSyncWaitForBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Backup OS MTRRs
     //
     MtrrGetAllMtrrs (&Mtrrs);
 
     //
     // Signal BSP the completion of this AP
     //
-    ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+    SmmCpuSyncReleaseBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Wait for BSP's signal to program MTRRs
     //
-    WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+    SmmCpuSyncWaitForBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Replace OS MTRRs with SMI MTRRs
     //
     ReplaceOSMtrrs (CpuIndex);
 
     //
     // Signal BSP the completion of this AP
     //
-    ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+    SmmCpuSyncReleaseBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
   }
 
   while (TRUE) {
     //
     // Wait for something to happen
     //
-    WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+    SmmCpuSyncWaitForBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Check if BSP wants to exit SMM
     //
     if (!(*mSmmMpSyncData->InsideSmm)) {
@@ -1025,43 +871,43 @@ APHandler (
 
   if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
     //
     // Notify BSP the readiness of this AP to program MTRRs
     //
-    ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+    SmmCpuSyncReleaseBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Wait for the signal from BSP to program MTRRs
     //
-    WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+    SmmCpuSyncWaitForBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Restore OS MTRRs
     //
     SmmCpuFeaturesReenableSmrr ();
     MtrrSetAllMtrrs (&Mtrrs);
 
     //
     // Notify BSP the readiness of this AP to Reset states/semaphore for this processor
     //
-    ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+    SmmCpuSyncReleaseBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 
     //
     // Wait for the signal from BSP to Reset states/semaphore for this processor
     //
-    WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+    SmmCpuSyncWaitForBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
   }
 
   //
   // Reset states/semaphore for this processor
   //
   *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
 
   //
   // Notify BSP the readiness of this AP to exit SMM
   //
-  ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+  SmmCpuSyncReleaseBsp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, BspIndex);
 }
 
 /**
   Checks whether the input token is the current used token.
 
@@ -1325,11 +1171,11 @@ InternalSmmStartupThisAp (
   mSmmMpSyncData->CpuData[CpuIndex].Status = CpuStatus;
   if (mSmmMpSyncData->CpuData[CpuIndex].Status != NULL) {
     *mSmmMpSyncData->CpuData[CpuIndex].Status = EFI_NOT_READY;
   }
 
-  ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+  SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
 
   if (Token == NULL) {
     AcquireSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
     ReleaseSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
   }
@@ -1454,11 +1300,11 @@ InternalSmmStartupAllAPs (
 
       //
       // Decrease the count to mark this processor(AP or BSP) as finished.
       //
       if (ProcToken != NULL) {
-        WaitForSemaphore (&ProcToken->RunningApCount);
+        InterlockedDecrement (&ProcToken->RunningApCount);
       }
     }
   }
 
   ReleaseAllAPs ();
@@ -1729,14 +1575,14 @@ SmiRendezvous (
     //
     goto Exit;
   } else {
     //
     // Signal presence of this processor
-    // mSmmMpSyncData->Counter is increased here!
-    // "ReleaseSemaphore (mSmmMpSyncData->Counter) == 0" means BSP has already ended the synchronization.
+    // CPU check in here!
+    // "SmmCpuSyncCheckInCpu (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex) <= 0" means BSP has already ended the synchronization.
     //
-    if (ReleaseSemaphore (mSmmMpSyncData->Counter) == 0) {
+    if (SmmCpuSyncCheckInCpu (mSmmMpSyncData->SmmCpuSyncCtx, CpuIndex) <= 0) {
       //
       // BSP has already ended the synchronization, so QUIT!!!
       // Existing AP is too late now to enter SMI since BSP has already ended the synchronization!!!
       //
 
@@ -1828,12 +1674,10 @@ SmiRendezvous (
     } else {
       APHandler (CpuIndex, ValidSmi, mSmmMpSyncData->EffectiveSyncMode);
     }
   }
 
-  ASSERT (*mSmmMpSyncData->CpuData[CpuIndex].Run == 0);
-
   //
   // Wait for BSP's signal to exit SMI
   //
   while (*mSmmMpSyncData->AllCpusInSync) {
     CpuPause ();
@@ -1949,12 +1793,10 @@ InitializeSmmCpuSemaphores (
   SemaphoreBlock = AllocatePages (Pages);
   ASSERT (SemaphoreBlock != NULL);
   ZeroMem (SemaphoreBlock, TotalSize);
 
   SemaphoreAddr                                   = (UINTN)SemaphoreBlock;
-  mSmmCpuSemaphores.SemaphoreGlobal.Counter       = (UINT32 *)SemaphoreAddr;
-  SemaphoreAddr                                  += SemaphoreSize;
   mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm     = (BOOLEAN *)SemaphoreAddr;
   SemaphoreAddr                                  += SemaphoreSize;
   mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync = (BOOLEAN *)SemaphoreAddr;
   SemaphoreAddr                                  += SemaphoreSize;
   mSmmCpuSemaphores.SemaphoreGlobal.PFLock        = (SPIN_LOCK *)SemaphoreAddr;
@@ -1964,12 +1806,10 @@ InitializeSmmCpuSemaphores (
   SemaphoreAddr                          += SemaphoreSize;
 
   SemaphoreAddr                           = (UINTN)SemaphoreBlock + GlobalSemaphoresSize;
   mSmmCpuSemaphores.SemaphoreCpu.Busy     = (SPIN_LOCK *)SemaphoreAddr;
   SemaphoreAddr                          += ProcessorCount * SemaphoreSize;
-  mSmmCpuSemaphores.SemaphoreCpu.Run      = (UINT32 *)SemaphoreAddr;
-  SemaphoreAddr                          += ProcessorCount * SemaphoreSize;
   mSmmCpuSemaphores.SemaphoreCpu.Present  = (BOOLEAN *)SemaphoreAddr;
 
   mPFLock                       = mSmmCpuSemaphores.SemaphoreGlobal.PFLock;
   mConfigSmmCodeAccessCheckLock = mSmmCpuSemaphores.SemaphoreGlobal.CodeAccessCheckLock;
 
@@ -2003,32 +1843,28 @@ InitializeMpSyncData (
     mSmmMpSyncData->BspIndex = (UINT32)-1;
   }
 
   mSmmMpSyncData->EffectiveSyncMode = mCpuSmmSyncMode;
 
-  mSmmMpSyncData->Counter       = mSmmCpuSemaphores.SemaphoreGlobal.Counter;
+  mSmmMpSyncData->SmmCpuSyncCtx = SmmCpuSyncContextInit (gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus);
   mSmmMpSyncData->InsideSmm     = mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm;
   mSmmMpSyncData->AllCpusInSync = mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync;
   ASSERT (
-    mSmmMpSyncData->Counter != NULL && mSmmMpSyncData->InsideSmm != NULL &&
+    mSmmMpSyncData->SmmCpuSyncCtx != NULL && mSmmMpSyncData->InsideSmm != NULL &&
     mSmmMpSyncData->AllCpusInSync != NULL
     );
-  *mSmmMpSyncData->Counter       = 0;
   *mSmmMpSyncData->InsideSmm     = FALSE;
   *mSmmMpSyncData->AllCpusInSync = FALSE;
 
   mSmmMpSyncData->AllApArrivedWithException = FALSE;
 
   for (CpuIndex = 0; CpuIndex < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; CpuIndex++) {
     mSmmMpSyncData->CpuData[CpuIndex].Busy =
       (SPIN_LOCK *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Busy + mSemaphoreSize * CpuIndex);
-    mSmmMpSyncData->CpuData[CpuIndex].Run =
-      (UINT32 *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Run + mSemaphoreSize * CpuIndex);
     mSmmMpSyncData->CpuData[CpuIndex].Present =
       (BOOLEAN *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Present + mSemaphoreSize * CpuIndex);
     *(mSmmMpSyncData->CpuData[CpuIndex].Busy)    = 0;
-    *(mSmmMpSyncData->CpuData[CpuIndex].Run)     = 0;
     *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
   }
 }
 
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index 654935dc76..b7bb937fbb 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -52,10 +52,11 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
 #include
 #include
 #include
 #include
 #include
+#include
 
 #include
 #include
 
 #include
@@ -403,11 +404,10 @@ SmmRelocationSemaphoreComplete (
 ///
 typedef struct {
   SPIN_LOCK                     *Busy;
   volatile EFI_AP_PROCEDURE2    Procedure;
   volatile VOID                 *Parameter;
-  volatile UINT32               *Run;
   volatile BOOLEAN              *Present;
   PROCEDURE_TOKEN               *Token;
   EFI_STATUS                    *Status;
 } SMM_CPU_DATA_BLOCK;
 
@@ -421,29 +421,28 @@ typedef struct {
   //
   // Pointer to an array. The array should be located immediately after this structure
   // so that UC cache-ability can be set together.
   //
   SMM_CPU_DATA_BLOCK            *CpuData;
-  volatile UINT32               *Counter;
   volatile UINT32               BspIndex;
   volatile BOOLEAN              *InsideSmm;
   volatile BOOLEAN              *AllCpusInSync;
   volatile SMM_CPU_SYNC_MODE    EffectiveSyncMode;
   volatile BOOLEAN              SwitchBsp;
   volatile BOOLEAN              *CandidateBsp;
   volatile BOOLEAN              AllApArrivedWithException;
   EFI_AP_PROCEDURE              StartupProcedure;
   VOID                          *StartupProcArgs;
+  VOID                          *SmmCpuSyncCtx;
 } SMM_DISPATCHER_MP_SYNC_DATA;
 
 #define SMM_PSD_OFFSET 0xfb00
 
 ///
 /// All global semaphores' pointer
 ///
 typedef struct {
-  volatile UINT32     *Counter;
   volatile BOOLEAN    *InsideSmm;
   volatile BOOLEAN    *AllCpusInSync;
   SPIN_LOCK           *PFLock;
   SPIN_LOCK           *CodeAccessCheckLock;
 } SMM_CPU_SEMAPHORE_GLOBAL;
@@ -451,11 +450,10 @@ typedef struct {
 ///
 /// All semaphores for each processor
 ///
 typedef struct {
   SPIN_LOCK           *Busy;
-  volatile UINT32     *Run;
   volatile BOOLEAN    *Present;
   SPIN_LOCK           *Token;
 } SMM_CPU_SEMAPHORE_CPU;
 
 ///
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index 5d52ed7d13..e92b8c747d 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -101,10 +101,11 @@
   SmmCpuFeaturesLib
   PeCoffGetEntryPointLib
   PerformanceLib
   CpuPageTableLib
   MmSaveStateLib
+  SmmCpuSyncLib
 
 [Protocols]
   gEfiSmmAccess2ProtocolGuid         ## CONSUMES
   gEfiMpServiceProtocolGuid          ## CONSUMES
   gEfiSmmConfigurationProtocolGuid   ## PRODUCES
--
2.16.2.windows.1