[PULL 00/50] ppc queue

Cédric Le Goater posted 50 patches 3 months, 3 weeks ago
git fetch https://github.com/patchew-project/qemu tags/patchew/20250721162233.686837-1-clg@redhat.com
Maintainers: Nicholas Piggin <npiggin@gmail.com>, "Frédéric Barrat" <fbarrat@linux.ibm.com>, Daniel Henrique Barboza <danielhb413@gmail.com>, Harsh Prateek Bora <harshpb@linux.ibm.com>
[PULL 00/50] ppc queue
Posted by Cédric Le Goater 3 months, 3 weeks ago
The following changes since commit e82989544e38062beeeaad88c175afbeed0400f8:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2025-07-18 14:10:02 -0400)

are available in the Git repository at:

  https://github.com/legoater/qemu/ tags/pull-ppc-20250721

for you to fetch changes up to df3614b7983e0629b0d422259968985ca0117bfa:

  ppc/xive2: Enable lower level contexts on VP push (2025-07-21 08:03:53 +0200)

----------------------------------------------------------------
ppc/xive queue:

* Various bug fixes, particularly around lost interrupts.
* Major group interrupt work, in particular around redistributing
  interrupts. Upstream group support is not in a complete or usable
  state as it is.
* Significant context push/pull improvements; pool and phys context
  handling in particular was quite incomplete beyond the trivial OPAL
  case that pushes at boot.
* Improved tracing and checking for unimplemented and guest error
  situations (see the tracing example below).
* Support for various other missing features.
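
The new checks use QEMU's tracing infrastructure; as a minimal sketch
(the machine type and options here are illustrative, not taken from
this series), the XIVE trace events can be enabled with a trace
pattern on the command line:

  # enable all XIVE trace points while booting a powernv10 machine
  qemu-system-ppc64 -M powernv10 -nographic -trace 'xive*'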

----------------------------------------------------------------
Glenn Miles (12):
      ppc/xive2: Fix calculation of END queue sizes
      ppc/xive2: Use fair irq target search algorithm
      ppc/xive2: Fix irq preempted by lower priority group irq
      ppc/xive2: Fix treatment of PIPR in CPPR update
      pnv/xive2: Support ESB Escalation
      ppc/xive2: add interrupt priority configuration flags
      ppc/xive2: Support redistribution of group interrupts
      ppc/xive: Add more interrupt notification tracing
      ppc/xive2: Improve pool regs variable name
      ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
      ppc/xive2: Redistribute group interrupt precluded by CPPR update
      ppc/xive2: redistribute irqs for pool and phys ctx pull

Michael Kowal (4):
      ppc/xive2: Remote VSDs need to match on forwarding address
      ppc/xive2: Reset Generation Flipped bit on END Cache Watch
      pnv/xive2: Print value in invalid register write logging
      pnv/xive2: Permit valid writes to VC/PC Flush Control registers

Nicholas Piggin (34):
      ppc/xive: Fix xive trace event output
      ppc/xive: Report access size in XIVE TM operation error logs
      ppc/xive2: fix context push calculation of IPB priority
      ppc/xive: Fix PHYS NSR ring matching
      ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
      ppc/xive2: Set CPPR delivery should account for group priority
      ppc/xive: tctx_notify should clear the precluded interrupt
      ppc/xive: Explicitly zero NSR after accepting
      ppc/xive: Move NSR decoding into helper functions
      ppc/xive: Fix pulling pool and phys contexts
      pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
      ppc/xive: Change presenter .match_nvt to match not present
      ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
      ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
      ppc/xive: Fix high prio group interrupt being preempted by low prio VP
      ppc/xive: Split xive recompute from IPB function
      ppc/xive: tctx signaling registers rework
      ppc/xive: tctx_accept only lower irq line if an interrupt was presented
      ppc/xive: Add xive_tctx_pipr_set() helper function
      ppc/xive2: split tctx presentation processing from set CPPR
      ppc/xive2: Consolidate presentation processing in context push
      ppc/xive2: Avoid needless interrupt re-check on CPPR set
      ppc/xive: Assert group interrupts were redistributed
      ppc/xive2: implement NVP context save restore for POOL ring
      ppc/xive2: Prevent pulling of pool context losing phys interrupt
      ppc/xive: Redistribute phys after pulling of pool context
      ppc/xive: Check TIMA operations validity
      ppc/xive2: Implement pool context push TIMA op
      ppc/xive2: redistribute group interrupts on context push
      ppc/xive2: Implement set_os_pending TIMA op
      ppc/xive2: Implement POOL LGS push TIMA op
      ppc/xive2: Implement PHYS ring VP push TIMA op
      ppc/xive: Split need_resend into restore_nvp
      ppc/xive2: Enable lower level contexts on VP push

 hw/intc/pnv_xive2_regs.h    |   1 +
 include/hw/ppc/xive.h       |  66 +++-
 include/hw/ppc/xive2.h      |  22 +-
 include/hw/ppc/xive2_regs.h |  22 +-
 hw/intc/pnv_xive.c          |  16 +-
 hw/intc/pnv_xive2.c         | 140 ++++++---
 hw/intc/spapr_xive.c        |  18 +-
 hw/intc/xive.c              | 555 ++++++++++++++++++++++------------
 hw/intc/xive2.c             | 717 +++++++++++++++++++++++++++++++++-----------
 hw/ppc/pnv.c                |  48 +--
 hw/ppc/spapr.c              |  21 +-
 hw/intc/trace-events        |  12 +-
 12 files changed, 1146 insertions(+), 492 deletions(-)
Re: [PULL 00/50] ppc queue
Posted by Stefan Hajnoczi 3 months, 3 weeks ago
Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/10.1 for any user-visible changes.
Re: [PULL 00/50] ppc queue
Posted by Michael Tokarev 3 months, 3 weeks ago
21.07.2025 19:21, Cédric Le Goater wrote:

> ----------------------------------------------------------------
> ppc/xive queue:
> 
> * Various bug fixes around lost interrupts particularly.
> * Major group interrupt work, in particular around redistributing
>    interrupts. Upstream group support is not in a complete or usable
>    state as it is.
> * Significant context push/pull improvements, particularly pool and
>    phys context handling was quite incomplete beyond trivial OPAL
>    case that pushes at boot.
> * Improved tracing and checking for unimp and guest error situations.
> * Various other missing feature support.

Is there anything in there which should be picked up for
stable qemu branches?

Thanks,

/mjt

> ----------------------------------------------------------------
> Glenn Miles (12):
>        ppc/xive2: Fix calculation of END queue sizes
>        ppc/xive2: Use fair irq target search algorithm
>        ppc/xive2: Fix irq preempted by lower priority group irq
>        ppc/xive2: Fix treatment of PIPR in CPPR update
>        pnv/xive2: Support ESB Escalation
>        ppc/xive2: add interrupt priority configuration flags
>        ppc/xive2: Support redistribution of group interrupts
>        ppc/xive: Add more interrupt notification tracing
>        ppc/xive2: Improve pool regs variable name
>        ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
>        ppc/xive2: Redistribute group interrupt precluded by CPPR update
>        ppc/xive2: redistribute irqs for pool and phys ctx pull
> 
> Michael Kowal (4):
>        ppc/xive2: Remote VSDs need to match on forwarding address
>        ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>        pnv/xive2: Print value in invalid register write logging
>        pnv/xive2: Permit valid writes to VC/PC Flush Control registers
> 
> Nicholas Piggin (34):
>        ppc/xive: Fix xive trace event output
>        ppc/xive: Report access size in XIVE TM operation error logs
>        ppc/xive2: fix context push calculation of IPB priority
>        ppc/xive: Fix PHYS NSR ring matching
>        ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
>        ppc/xive2: Set CPPR delivery should account for group priority
>        ppc/xive: tctx_notify should clear the precluded interrupt
>        ppc/xive: Explicitly zero NSR after accepting
>        ppc/xive: Move NSR decoding into helper functions
>        ppc/xive: Fix pulling pool and phys contexts
>        pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
>        ppc/xive: Change presenter .match_nvt to match not present
>        ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
>        ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
>        ppc/xive: Fix high prio group interrupt being preempted by low prio VP
>        ppc/xive: Split xive recompute from IPB function
>        ppc/xive: tctx signaling registers rework
>        ppc/xive: tctx_accept only lower irq line if an interrupt was presented
>        ppc/xive: Add xive_tctx_pipr_set() helper function
>        ppc/xive2: split tctx presentation processing from set CPPR
>        ppc/xive2: Consolidate presentation processing in context push
>        ppc/xive2: Avoid needless interrupt re-check on CPPR set
>        ppc/xive: Assert group interrupts were redistributed
>        ppc/xive2: implement NVP context save restore for POOL ring
>        ppc/xive2: Prevent pulling of pool context losing phys interrupt
>        ppc/xive: Redistribute phys after pulling of pool context
>        ppc/xive: Check TIMA operations validity
>        ppc/xive2: Implement pool context push TIMA op
>        ppc/xive2: redistribute group interrupts on context push
>        ppc/xive2: Implement set_os_pending TIMA op
>        ppc/xive2: Implement POOL LGS push TIMA op
>        ppc/xive2: Implement PHYS ring VP push TIMA op
>        ppc/xive: Split need_resend into restore_nvp
>        ppc/xive2: Enable lower level contexts on VP push

Re: [PULL 00/50] ppc queue
Posted by Cédric Le Goater 3 months, 3 weeks ago
+ Glenn, Michael, Caleb, Gautam

On 7/22/25 13:44, Michael Tokarev wrote:
> 21.07.2025 19:21, Cédric Le Goater wrote:
> 
>> ----------------------------------------------------------------
>> ppc/xive queue:
>>
>> * Various bug fixes around lost interrupts particularly.
>> * Major group interrupt work, in particular around redistributing
>>    interrupts. Upstream group support is not in a complete or usable
>>    state as it is.
>> * Significant context push/pull improvements, particularly pool and
>>    phys context handling was quite incomplete beyond trivial OPAL
>>    case that pushes at boot.
>> * Improved tracing and checking for unimp and guest error situations.
>> * Various other missing feature support.
> 
> Is there anything in there which should be picked up for
> stable qemu branches?

Maybe the IBM simulation team can say?
I think this would also require some testing before applying.

Which stable branch are you targeting? 7.2 to 10.0?


Thanks,

C.

Re: [PULL 00/50] ppc queue
Posted by Michael Tokarev 3 months, 3 weeks ago
On 22.07.2025 16:37, Cédric Le Goater wrote:
> + Glenn, Michael, Caleb, Gautam
> 
> On 7/22/25 13:44, Michael Tokarev wrote:
>> 21.07.2025 19:21, Cédric Le Goater wrote:
>>
>>> ----------------------------------------------------------------
>>> ppc/xive queue:
>>>
>>> * Various bug fixes around lost interrupts particularly.
>>> * Major group interrupt work, in particular around redistributing
>>>    interrupts. Upstream group support is not in a complete or usable
>>>    state as it is.
>>> * Significant context push/pull improvements, particularly pool and
>>>    phys context handling was quite incomplete beyond trivial OPAL
>>>    case that pushes at boot.
>>> * Improved tracing and checking for unimp and guest error situations.
>>> * Various other missing feature support.
>>
>> Is there anything in there which should be picked up for
>> stable qemu branches?
> 
> May be the IBM simulation team can say ?
> I think this would also require some testing before applying.
> 
> Which stable branch are you targeting ? 7.2 to 10.0 ?

There are currently 2 active stable branches, 7.2 and 10.0.
Both are supposed to be long-term maintenance.  I think 7.2
can be left behind already.

Thanks,

/mjt

Re: [PULL 00/50] ppc queue
Posted by Miles Glenn 3 months, 1 week ago
On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
> On 22.07.2025 16:37, Cédric Le Goater wrote:
> > + Glenn, Michael, Caleb, Gautam
> > 
> > On 7/22/25 13:44, Michael Tokarev wrote:
> > > 21.07.2025 19:21, Cédric Le Goater wrote:
> > > 
> > > > ----------------------------------------------------------------
> > > > ppc/xive queue:
> > > > 
> > > > * Various bug fixes around lost interrupts particularly.
> > > > * Major group interrupt work, in particular around redistributing
> > > >    interrupts. Upstream group support is not in a complete or usable
> > > >    state as it is.
> > > > * Significant context push/pull improvements, particularly pool and
> > > >    phys context handling was quite incomplete beyond trivial OPAL
> > > >    case that pushes at boot.
> > > > * Improved tracing and checking for unimp and guest error situations.
> > > > * Various other missing feature support.
> > > 
> > > Is there anything in there which should be picked up for
> > > stable qemu branches?
> > 
> > May be the IBM simulation team can say ?
> > I think this would also require some testing before applying.
> > 
> > Which stable branch are you targeting ? 7.2 to 10.0 ?
> 
> There are currently 2 active stable branches, 7.2 and 10.0.
> Both are supposed to be long-term maintenance.  I think 7.2
> can be left behind already.
> 
> Thanks,
> 
> /mjt

Michael T.,

All of the XIVE fixes/changes originating from myself were made in an
effort to get PowerVM firmware running on PowerNV with minimal testing
of OPAL firmware.  However, even with those fixes, running PowerVM on
PowerNV is still pretty unstable.  While backporting these fixes would
likely increase the stability of running PowerVM on PowerNV, I do think
it could pose significant risk to the stability of running OPAL on
PowerNV.  With that in mind, I think it's probably best if we did not
backport any of my own XIVE changes.

Nick, can you respond regarding the changes you made?

Thanks,

Glenn

Re: [PULL 00/50] ppc queue
Posted by Michael Tokarev 3 months, 1 week ago
On 05.08.2025 19:26, Miles Glenn wrote:
> On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
...
>> There are currently 2 active stable branches, 7.2 and 10.0.
>> Both are supposed to be long-term maintenance.  I think 7.2
>> can be left behind already.
>>
>> Thanks,
>>
>> /mjt
> 
> Michael T.,
> 
> All of the XIVE fixes/changes originating from myself were made in an
> effort to get PowerVM firmware running on PowerNV with minimal testing
> of OPAL firmware.  However, even with those fixes, running PowerVM on
> PowerNV is still pretty unstable.  While backporting these fixes would
> likely increase the stability of running PowerVM on PowerNV, I do think
> it could pose significant risk to the stability of running OPAL on
> PowerNV.  With that in mind, I think it's probably best if we did not
> backport any of my own XIVE changes.

My view on this - having in mind that 10.0 will most likely be a
long-term support branch - is that we can pick the PowerVM changes,
and if a breakage with the case you mentioned is found (which will
hopefully be the same breakage as with the master branch), we can
pick fixes for those too.

Especially as we have more time now, after the release of 10.1 and
before the next stable series.

So to me, breakage in a stable series is not a good thing, but we can
fix it there as well - so there is a balance to strike between known
bugs, possible breakage and future fixes.

But it's definitely your call, you know this area much better.

Thanks,

/mjt
Re: [PULL 00/50] ppc queue
Posted by Cédric Le Goater 3 months, 1 week ago
On 8/5/25 18:33, Michael Tokarev wrote:
> On 05.08.2025 19:26, Miles Glenn wrote:
>> On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
> ...
>>> There are currently 2 active stable branches, 7.2 and 10.0.
>>> Both are supposed to be long-term maintenance.  I think 7.2
>>> can be left behind already.
>>>
>>> Thanks,
>>>
>>> /mjt
>>
>> Michael T.,
>>
>> All of the XIVE fixes/changes originating from myself were made in an
>> effort to get PowerVM firmware running on PowerNV with minimal testing
>> of OPAL firmware.  However, even with those fixes, running PowerVM on
>> PowerNV is still pretty unstable.  While backporting these fixes would
>> likely increase the stability of running PowerVM on PowerNV, I do think
>> it could pose significant risk to the stability of running OPAL on
>> PowerNV.  With that in mind, I think it's probably best if we did not
>> backport any of my own XIVE changes.
> 
> My view on this, - having in mind 10.0 most likely will be a long-term
> support branch - we can pick the PowerVM changes, and if a breakage with
> the case you mentioned is found (which will be the same breakage as with
> master branch, hopefully), we can pick fixes for these too.
> 
> Especially as we have more time now after release of 10.1 and before the
> next stable series.
> 
> So to me, breakage in stable series is not a good thing, but we can as
> well fix it there, - so there might be some balance between known bugs,
> possible breakage and future fixes.

We have a large set of functional tests for powernv, even checking
emulated nested virtualization IIRC. I still have some scripts running
16-socket powernv machines with a bunch of PCI devices to stress
emulation a bit more.
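
For a rough idea of how to exercise those from a QEMU build tree, a
sketch assuming the per-target functional-test make goals:

  # run the ppc64 functional tests, which include powernv boot tests
  make check-functional-ppc64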

The upstream target is OPAL firmware, not PowerVM. Patches for PowerVM
may be proposed later, if deemed appropriate by the IBM simulation team.

Cheers,

C.


> But it's definitely your call, you know this area much better.
> 
> Thanks,
> 
> /mjt
> 


Re: [PULL 00/50] ppc queue
Posted by Cédric Le Goater 3 months, 1 week ago
On 8/5/25 18:26, Miles Glenn wrote:
> On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
>> On 22.07.2025 16:37, Cédric Le Goater wrote:
>>> + Glenn, Michael, Caleb, Gautam
>>>
>>> On 7/22/25 13:44, Michael Tokarev wrote:
>>>> 21.07.2025 19:21, Cédric Le Goater wrote:
>>>>
>>>>> ----------------------------------------------------------------
>>>>> ppc/xive queue:
>>>>>
>>>>> * Various bug fixes around lost interrupts particularly.
>>>>> * Major group interrupt work, in particular around redistributing
>>>>>     interrupts. Upstream group support is not in a complete or usable
>>>>>     state as it is.
>>>>> * Significant context push/pull improvements, particularly pool and
>>>>>     phys context handling was quite incomplete beyond trivial OPAL
>>>>>     case that pushes at boot.
>>>>> * Improved tracing and checking for unimp and guest error situations.
>>>>> * Various other missing feature support.
>>>>
>>>> Is there anything in there which should be picked up for
>>>> stable qemu branches?
>>>
>>> May be the IBM simulation team can say ?
>>> I think this would also require some testing before applying.
>>>
>>> Which stable branch are you targeting ? 7.2 to 10.0 ?
>>
>> There are currently 2 active stable branches, 7.2 and 10.0.
>> Both are supposed to be long-term maintenance.  I think 7.2
>> can be left behind already.
>>
>> Thanks,
>>
>> /mjt
> 
> Michael T.,
> 
> All of the XIVE fixes/changes originating from myself were made in an
> effort to get PowerVM firmware running on PowerNV with minimal testing
> of OPAL firmware.  However, even with those fixes, running PowerVM on
> PowerNV is still pretty unstable.  While backporting these fixes would
> likely increase the stability of running PowerVM on PowerNV, I do think
> it could pose significant risk to the stability of running OPAL on
> PowerNV.  With that in mind, I think it's probably best if we did not
> backport any of my own XIVE changes.

These seem interesting to have:

ppc/xive2: Fix treatment of PIPR in CPPR update
ppc/xive2: Fix irq preempted by lower priority group irq
ppc/xive: Fix PHYS NSR ring matching
ppc/xive2: fix context push calculation of IPB priority
ppc/xive2: Remote VSDs need to match on forwarding address
ppc/xive2: Fix calculation of END queue sizes
ppc/xive: Report access size in XIVE TM operation error logs
ppc/xive: Fix xive trace event output

?

Thanks,

C.


Re: [PULL 00/50] ppc queue
Posted by Miles Glenn 3 months, 1 week ago
On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> On 8/5/25 18:26, Miles Glenn wrote:
> > On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
> > > On 22.07.2025 16:37, Cédric Le Goater wrote:
> > > > + Glenn, Michael, Caleb, Gautam
> > > > 
> > > > On 7/22/25 13:44, Michael Tokarev wrote:
> > > > > 21.07.2025 19:21, Cédric Le Goater wrote:
> > > > > 
> > > > > > ----------------------------------------------------------------
> > > > > > ppc/xive queue:
> > > > > > 
> > > > > > * Various bug fixes around lost interrupts particularly.
> > > > > > * Major group interrupt work, in particular around redistributing
> > > > > >     interrupts. Upstream group support is not in a complete or usable
> > > > > >     state as it is.
> > > > > > * Significant context push/pull improvements, particularly pool and
> > > > > >     phys context handling was quite incomplete beyond trivial OPAL
> > > > > >     case that pushes at boot.
> > > > > > * Improved tracing and checking for unimp and guest error situations.
> > > > > > * Various other missing feature support.
> > > > > 
> > > > > Is there anything in there which should be picked up for
> > > > > stable qemu branches?
> > > > 
> > > > May be the IBM simulation team can say ?
> > > > I think this would also require some testing before applying.
> > > > 
> > > > Which stable branch are you targeting ? 7.2 to 10.0 ?
> > > 
> > > There are currently 2 active stable branches, 7.2 and 10.0.
> > > Both are supposed to be long-term maintenance.  I think 7.2
> > > can be left behind already.
> > > 
> > > Thanks,
> > > 
> > > /mjt
> > 
> > Michael T.,
> > 
> > All of the XIVE fixes/changes originating from myself were made in an
> > effort to get PowerVM firmware running on PowerNV with minimal testing
> > of OPAL firmware.  However, even with those fixes, running PowerVM on
> > PowerNV is still pretty unstable.  While backporting these fixes would
> > likely increase the stability of running PowerVM on PowerNV, I do think
> > it could pose significant risk to the stability of running OPAL on
> > PowerNV.  With that in mind, I think it's probably best if we did not
> > backport any of my own XIVE changes.
> 
> These seem to be interesting to have :
> 
> ppc/xive2: Fix treatment of PIPR in CPPR update
> ppc/xive2: Fix irq preempted by lower priority group irq
> ppc/xive: Fix PHYS NSR ring matching
> ppc/xive2: fix context push calculation of IPB priority
> ppc/xive2: Remote VSDs need to match on forwarding address
> ppc/xive2: Fix calculation of END queue sizes
> ppc/xive: Report access size in XIVE TM operation error logs
> ppc/xive: Fix xive trace event output
> 
> ?
> 
> Thanks,
> 
> C.
> 

I'm still not sure that the benefit is worth the effort, but I
certainly don't have a problem with them being backported if someone
has the desire and the time to do it.

Thanks,

Glenn

Re: [PULL 00/50] ppc queue
Posted by Michael Tokarev 3 months, 1 week ago
On 06.08.2025 23:46, Miles Glenn wrote:
> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
...
>> These seem to be interesting to have :
>>
>> ppc/xive2: Fix treatment of PIPR in CPPR update
>> ppc/xive2: Fix irq preempted by lower priority group irq
>> ppc/xive: Fix PHYS NSR ring matching
>> ppc/xive2: fix context push calculation of IPB priority
>> ppc/xive2: Remote VSDs need to match on forwarding address
>> ppc/xive2: Fix calculation of END queue sizes
>> ppc/xive: Report access size in XIVE TM operation error logs
>> ppc/xive: Fix xive trace event output
> 
> I'm still not sure that the benefit is worth the effort, but I
> certainly don't have a problem with them being backported if someone
> has the desire and the time to do it.

I mentioned already that the 10.0 series will (hopefully) be an LTS
series.  At the very least, it is what we'll have in the upcoming
debian stable release (trixie), which will be stable for the next 2
years.  Whether it is important to have working Power* support in
debian, I don't know.

All the mentioned patches applied cleanly to the 10.0 branch (in
reverse order, from bottom to top), so there's no effort needed to
back-port them.  And the result passes at least the standard qemu
testsuite.  So it looks like everything works as intended.
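
For illustration, a backport of that kind could look like the sketch
below (the branch and commit references are placeholders, not the
actual hashes):

  # start from the stable branch and pick each fix oldest-first;
  # -x records the upstream commit id in the new commit message
  git checkout -b xive-backports origin/stable-10.0
  git cherry-pick -x <sha-of-each-listed-fix>
  # then run at least the standard test suite
  make check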

Please keep qemu-stable@ in Cc for other fixes which you think are
of interest for older/stable series of qemu.

Thanks,

/mjt

Re: [PULL 00/50] ppc queue
Posted by Miles Glenn 3 months, 1 week ago
On Fri, 2025-08-08 at 09:07 +0300, Michael Tokarev wrote:
> On 06.08.2025 23:46, Miles Glenn wrote:
> > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> ...
> > > These seem to be interesting to have :
> > > 
> > > ppc/xive2: Fix treatment of PIPR in CPPR update
> > > ppc/xive2: Fix irq preempted by lower priority group irq
> > > ppc/xive: Fix PHYS NSR ring matching
> > > ppc/xive2: fix context push calculation of IPB priority
> > > ppc/xive2: Remote VSDs need to match on forwarding address
> > > ppc/xive2: Fix calculation of END queue sizes
> > > ppc/xive: Report access size in XIVE TM operation error logs
> > > ppc/xive: Fix xive trace event output
> > 
> > I'm still not sure that the benefit is worth the effort, but I
> > certainly don't have a problem with them being backported if someone
> > has the desire and the time to do it.
> 
> I mentioned already that 10.0 series will (hopefully) be LTS series.
> At the very least, it is what we'll have in the upcoming debian
> stable release (trixie), which will be stable for the next 2 years.
> Whenever this is important to have working Power* support in debian -
> I don't know.
> 
> All the mentioned patches applied to 10.0 branch cleanly (in the
> reverse order, from bottom to top), so there's no effort needed
> to back-port them.  And the result passes at least the standard
> qemu testsuite.  So it looks like everything works as intended.
> 
> Please keep qemu-stable@ in Cc for other fixes which you think are
> of interest for older/stable series of qemu.
> 
> Thanks,
> 
> /mjt

Will do, and thanks for doing the backporting, Michael!

-Glenn


Re: [PULL 00/50] ppc queue
Posted by Cédric Le Goater 3 months, 1 week ago
On 8/8/25 08:07, Michael Tokarev wrote:
> On 06.08.2025 23:46, Miles Glenn wrote:
>> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> ...
>>> These seem to be interesting to have :
>>>
>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>> ppc/xive2: Fix irq preempted by lower priority group irq

I added:

   ppc/xive2: Reset Generation Flipped bit on END Cache Watch

>>> ppc/xive: Fix PHYS NSR ring matching
>>> ppc/xive2: fix context push calculation of IPB priority
>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>> ppc/xive2: Fix calculation of END queue sizes
>>> ppc/xive: Report access size in XIVE TM operation error logs
>>> ppc/xive: Fix xive trace event output
>>
>> I'm still not sure that the benefit is worth the effort, but I
>> certainly don't have a problem with them being backported if someone
>> has the desire and the time to do it.
> 
> I mentioned already that 10.0 series will (hopefully) be LTS series.
> At the very least, it is what we'll have in the upcoming debian
> stable release (trixie), which will be stable for the next 2 years.
> Whenever this is important to have working Power* support in debian -
> I don't know.
> 
> All the mentioned patches applied to 10.0 branch cleanly (in the
> reverse order, from bottom to top), so there's no effort needed
> to back-port them.  And the result passes at least the standard
> qemu testsuite.  So it looks like everything works as intended.


Ubuntu 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
kernel on a PowerNV10 system defined as:

   Architecture:             ppc64le
     Byte Order:             Little Endian
   CPU(s):                   16
     On-line CPU(s) list:    0-15
   Model name:               POWER10, altivec supported
     Model:                  2.0 (pvr 0080 1200)
     Thread(s) per core:     4
     Core(s) per socket:     2
     Socket(s):              2
     Frequency boost:        enabled
     CPU(s) scaling MHz:     76%
     CPU max MHz:            3800.0000
     CPU min MHz:            2000.0000
   Caches (sum of all):
     L1d:                    128 KiB (4 instances)
     L1i:                    128 KiB (4 instances)
   NUMA:
     NUMA node(s):           2
     NUMA node0 CPU(s):      0-7
     NUMA node1 CPU(s):      8-15

with devices:

   0000:00:00.0 PCI bridge: IBM Device 0652
   0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
   0001:00:00.0 PCI bridge: IBM Device 0652
   0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
   0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
   0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
   0002:00:00.0 PCI bridge: IBM Device 0652
   ...

A rhel9 nested guest boots too.

Poweroff and reboot are fine.



Michael,

I would say ship it.


Glenn, Gautam,

It would be nice to get rid of these messages.

   [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
   [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
   [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
   CPU 0100 Backtrace:
    S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
    S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
    S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
    S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
    S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
    --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
   [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled


Is it a modeling issue?


Thanks,

C.

Re: [PULL 00/50] ppc queue
Posted by Gautam Menghani 2 months, 3 weeks ago
On Fri, Aug 08, 2025 at 10:17:24AM +0200, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
> > On 06.08.2025 23:46, Miles Glenn wrote:
> > > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> > ...
> > > > These seem to be interesting to have :
> > > > 
> > > > ppc/xive2: Fix treatment of PIPR in CPPR update
> > > > ppc/xive2: Fix irq preempted by lower priority group irq
> 
> I added :
> 
>   ppc/xive2: Reset Generation Flipped bit on END Cache Watch
> 
> > > > ppc/xive: Fix PHYS NSR ring matching
> > > > ppc/xive2: fix context push calculation of IPB priority
> > > > ppc/xive2: Remote VSDs need to match on forwarding address
> > > > ppc/xive2: Fix calculation of END queue sizes
> > > > ppc/xive: Report access size in XIVE TM operation error logs
> > > > ppc/xive: Fix xive trace event output
> > > 
> > > I'm still not sure that the benefit is worth the effort, but I
> > > certainly don't have a problem with them being backported if someone
> > > has the desire and the time to do it.
> > 
> > I mentioned already that 10.0 series will (hopefully) be LTS series.
> > At the very least, it is what we'll have in the upcoming debian
> > stable release (trixie), which will be stable for the next 2 years.
> > Whenever this is important to have working Power* support in debian -
> > I don't know.
> > 
> > All the mentioned patches applied to 10.0 branch cleanly (in the
> > reverse order, from bottom to top), so there's no effort needed
> > to back-port them.  And the result passes at least the standard
> > qemu testsuite.  So it looks like everything works as intended.
> 
> 
> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
> kernel on a PowerNV10 system defined as :
> 
>   Architecture:             ppc64le
>     Byte Order:             Little Endian
>   CPU(s):                   16
>     On-line CPU(s) list:    0-15
>   Model name:               POWER10, altivec supported
>     Model:                  2.0 (pvr 0080 1200)
>     Thread(s) per core:     4
>     Core(s) per socket:     2
>     Socket(s):              2
>     Frequency boost:        enabled
>     CPU(s) scaling MHz:     76%
>     CPU max MHz:            3800.0000
>     CPU min MHz:            2000.0000
>   Caches (sum of all):
>     L1d:                    128 KiB (4 instances)
>     L1i:                    128 KiB (4 instances)
>   NUMA:
>     NUMA node(s):           2
>     NUMA node0 CPU(s):      0-7
>     NUMA node1 CPU(s):      8-15
> 
> with devices :
> 
>   0000:00:00.0 PCI bridge: IBM Device 0652
>   0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>   0001:00:00.0 PCI bridge: IBM Device 0652
>   0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>   0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>   0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>   0002:00:00.0 PCI bridge: IBM Device 0652
>   ...
> 
> A rhel9 nested guest boots too.
> 
> Poweroff and reboot are fine.
> 
> 
> 
> Michael,
> 
> I would say ship it.
> 
> 
> Glenn, Gautam,
> 
> It would nice to get rid of these messages.
>   [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>   [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>   [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>   CPU 0100 Backtrace:
>    S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>    S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>    S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>    S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>    S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>    --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>   [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
> 

Hi Cedric,

I'm not able to repro this with the latest QEMU master (commit
5836af078321).

My command line is:

$ cat run.sh
#!/bin/bash

./build/qemu-system-ppc64 \
	-smp 16,sockets=2,cores=2,threads=4 \
	-kernel vmlinux \
	-initrd initrd.img \
	-append 'root=LABEL=cloudimg-rootfs ro console=hvc0 earlyprintk' \
	-drive file=/home/gautam/images/noble-server-cloudimg-ppc64el.img,format=qcow2,if=none,id=drive0,cache=none -device nvme,bus=pcie.0,addr=0x0,drive=drive0,serial=1234 \
	-M powernv10  -netdev user,id=net0,hostfwd=tcp::2223-:22 -device e1000e,netdev=net0,bus=pcie.1 -nographic


Can you please share your command line with which you got the above
warnings?

Thanks,
Gautam

> 
> Is it a modeling issue ?
> 
> 
> Thanks,
> 
> C.
> 
> 
> 
> 
Re: [PULL 00/50] ppc queue
Posted by Cédric Le Goater 2 months, 2 weeks ago
Hello,

On 8/19/25 14:56, Gautam Menghani wrote:
> On Fri, Aug 08, 2025 at 10:17:24AM +0200, Cédric Le Goater wrote:
>> On 8/8/25 08:07, Michael Tokarev wrote:
>>> On 06.08.2025 23:46, Miles Glenn wrote:
>>>> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
>>> ...
>>>>> These seem to be interesting to have :
>>>>>
>>>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>>>> ppc/xive2: Fix irq preempted by lower priority group irq
>>
>> I added :
>>
>>    ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>>
>>>>> ppc/xive: Fix PHYS NSR ring matching
>>>>> ppc/xive2: fix context push calculation of IPB priority
>>>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>>>> ppc/xive2: Fix calculation of END queue sizes
>>>>> ppc/xive: Report access size in XIVE TM operation error logs
>>>>> ppc/xive: Fix xive trace event output
>>>>
>>>> I'm still not sure that the benefit is worth the effort, but I
>>>> certainly don't have a problem with them being backported if someone
>>>> has the desire and the time to do it.
>>>
>>> I mentioned already that 10.0 series will (hopefully) be LTS series.
>>> At the very least, it is what we'll have in the upcoming debian
>>> stable release (trixie), which will be stable for the next 2 years.
>>> Whenever this is important to have working Power* support in debian -
>>> I don't know.
>>>
>>> All the mentioned patches applied to 10.0 branch cleanly (in the
>>> reverse order, from bottom to top), so there's no effort needed
>>> to back-port them.  And the result passes at least the standard
>>> qemu testsuite.  So it looks like everything works as intended.
>>
>>
>> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
>> kernel on a PowerNV10 system defined as :
>>
>>    Architecture:             ppc64le
>>      Byte Order:             Little Endian
>>    CPU(s):                   16
>>      On-line CPU(s) list:    0-15
>>    Model name:               POWER10, altivec supported
>>      Model:                  2.0 (pvr 0080 1200)
>>      Thread(s) per core:     4
>>      Core(s) per socket:     2
>>      Socket(s):              2
>>      Frequency boost:        enabled
>>      CPU(s) scaling MHz:     76%
>>      CPU max MHz:            3800.0000
>>      CPU min MHz:            2000.0000
>>    Caches (sum of all):
>>      L1d:                    128 KiB (4 instances)
>>      L1i:                    128 KiB (4 instances)
>>    NUMA:
>>      NUMA node(s):           2
>>      NUMA node0 CPU(s):      0-7
>>      NUMA node1 CPU(s):      8-15
>>
>> with devices :
>>
>>    0000:00:00.0 PCI bridge: IBM Device 0652
>>    0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>>    0001:00:00.0 PCI bridge: IBM Device 0652
>>    0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>>    0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>>    0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>>    0002:00:00.0 PCI bridge: IBM Device 0652
>>    ...
>>
>> A rhel9 nested guest boots too.
>>
>> Poweroff and reboot are fine.
>>
>>
>>
>> Michael,
>>
>> I would say ship it.
>>
>>
>> Glenn, Gautam,
>>
>> It would nice to get rid of these messages.
>>    [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>>    [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>>    [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>>    CPU 0100 Backtrace:
>>     S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>>     S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>>     S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>>     S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>>     S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>>     --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>>    [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
>>
> 
> Hi Cedric,
> 
> I'm not able to repro this with the latest QEMU master (commit
> 5836af078321).
> 
> My command line is:
> 
> $ cat run.sh
> #!/bin/bash
> 
> ./build/qemu-system-ppc64 \
> 	-smp 16,sockets=2,cores=2,threads=4 \
> 	-kernel vmlinux \
> 	-initrd initrd.img \
> 	-append 'root=LABEL=cloudimg-rootfs ro console=hvc0 earlyprintk' \
> 	-drive file=/home/gautam/images/noble-server-cloudimg-ppc64el.img,format=qcow2,if=none,id=drive0,format=qcow2,cache=none -device nvme,bus=pcie.0,addr=0x0,drive=drive0,serial=1234 \
> 	-M powernv10  -netdev user,id=net0,hostfwd=tcp::2223-:22 -device e1000e,netdev=net0,bus=pcie.1 -nographic
> 
> 
> Can you please share your command line with which you got the above
> warnings?

It's the same as yours.

The issue seems to be with OPAL. OPAL v7.1-106-g785a5e307 (shipped with QEMU)
is OK. Latest OPAL v7.1-133-gd365a01a0996 is not.
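
Since the powernv machine's bundled skiboot can be overridden with
-bios, the two OPAL builds can be compared directly; a sketch (the
image path is illustrative):

  # boot the same machine with a locally built OPAL/skiboot image
  qemu-system-ppc64 -M powernv10 -nographic -bios ~/skiboot/skiboot.lid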


Thanks,

C.


Re: [PULL 00/50] ppc queue
Posted by Mike Kowal 3 months ago
On 8/8/2025 3:17 AM, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
>> On 06.08.2025 23:46, Miles Glenn wrote:
>>> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
>> ...
>>>> These seem to be interesting to have :
>>>>
>>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>>> ppc/xive2: Fix irq preempted by lower priority group irq
>
> I added :
>
>   ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>
>>>> ppc/xive: Fix PHYS NSR ring matching
>>>> ppc/xive2: fix context push calculation of IPB priority
>>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>>> ppc/xive2: Fix calculation of END queue sizes
>>>> ppc/xive: Report access size in XIVE TM operation error logs
>>>> ppc/xive: Fix xive trace event output
>>>
>>> I'm still not sure that the benefit is worth the effort, but I
>>> certainly don't have a problem with them being backported if someone
>>> has the desire and the time to do it.
>>
>> I mentioned already that 10.0 series will (hopefully) be LTS series.
>> At the very least, it is what we'll have in the upcoming debian
>> stable release (trixie), which will be stable for the next 2 years.
>> Whenever this is important to have working Power* support in debian -
>> I don't know.
>>
>> All the mentioned patches applied to 10.0 branch cleanly (in the
>> reverse order, from bottom to top), so there's no effort needed
>> to back-port them.  And the result passes at least the standard
>> qemu testsuite.  So it looks like everything works as intended.
>
>
> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
> kernel on a PowerNV10 system defined as :
>
>   Architecture:             ppc64le
>     Byte Order:             Little Endian
>   CPU(s):                   16
>     On-line CPU(s) list:    0-15
>   Model name:               POWER10, altivec supported
>     Model:                  2.0 (pvr 0080 1200)
>     Thread(s) per core:     4
>     Core(s) per socket:     2
>     Socket(s):              2
>     Frequency boost:        enabled
>     CPU(s) scaling MHz:     76%
>     CPU max MHz:            3800.0000
>     CPU min MHz:            2000.0000
>   Caches (sum of all):
>     L1d:                    128 KiB (4 instances)
>     L1i:                    128 KiB (4 instances)
>   NUMA:
>     NUMA node(s):           2
>     NUMA node0 CPU(s):      0-7
>     NUMA node1 CPU(s):      8-15
>
> with devices :
>
>   0000:00:00.0 PCI bridge: IBM Device 0652
>   0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM 
> Express Controller (rev 02)
>   0001:00:00.0 PCI bridge: IBM Device 0652
>   0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>   0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host 
> Controller (rev 03)
>   0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit 
> Network Connection
>   0002:00:00.0 PCI bridge: IBM Device 0652
>   ...
>
> A rhel9 nested guest boots too.
>
> Poweroff and reboot are fine.
>
>
>
> Michael,
>
> I would say ship it.
>
>
> Glenn, Gautam,
>
> It would nice to get rid of these messages.
>     [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>   [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>   [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already 
> enabled
>   CPU 0100 Backtrace:
>    S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>    S: 0000000032413ad0 R: 000000003008427c .xive2_tima_enable_phys+0x40
>    S: 0000000032413b50 R: 0000000030087430 
> .__xive_reset.constprop.0.isra.0+0x520
>    S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>    S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>    --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>   [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already 
> enabled
>
>
> Is it a modeling issue ?

I do not think it is a modeling issue.  We do not get any warning or
error messages when booting Linux on PowerVM.  Note that "[PATCH 43/50]
ppc/xive: Check TIMA operations validity" added some warning logs.  The
problem is that the context is 'hardware owned' since it is already
pushed/enabled.

MAK

>
>
> Thanks,
>
> C.
>
>
>
>

Re: [PULL 00/50] ppc queue
Posted by Miles Glenn 3 months, 1 week ago
On Fri, 2025-08-08 at 10:17 +0200, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
> > On 06.08.2025 23:46, Miles Glenn wrote:
> > > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> > ...
> > > > These seem to be interesting to have :
> > > > 
> > > > ppc/xive2: Fix treatment of PIPR in CPPR update
> > > > ppc/xive2: Fix irq preempted by lower priority group irq
> 
> I added :
> 
>    ppc/xive2: Reset Generation Flipped bit on END Cache Watch
> 
> > > > ppc/xive: Fix PHYS NSR ring matching
> > > > ppc/xive2: fix context push calculation of IPB priority
> > > > ppc/xive2: Remote VSDs need to match on forwarding address
> > > > ppc/xive2: Fix calculation of END queue sizes
> > > > ppc/xive: Report access size in XIVE TM operation error logs
> > > > ppc/xive: Fix xive trace event output
> > > 
> > > I'm still not sure that the benefit is worth the effort, but I
> > > certainly don't have a problem with them being backported if someone
> > > has the desire and the time to do it.
> > 
> > I mentioned already that 10.0 series will (hopefully) be LTS series.
> > At the very least, it is what we'll have in the upcoming debian
> > stable release (trixie), which will be stable for the next 2 years.
> > Whenever this is important to have working Power* support in debian -
> > I don't know.
> > 
> > All the mentioned patches applied to 10.0 branch cleanly (in the
> > reverse order, from bottom to top), so there's no effort needed
> > to back-port them.  And the result passes at least the standard
> > qemu testsuite.  So it looks like everything works as intended.
> 
> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
> kernel on a PowerNV10 system defined as :
> 
>    Architecture:             ppc64le
>      Byte Order:             Little Endian
>    CPU(s):                   16
>      On-line CPU(s) list:    0-15
>    Model name:               POWER10, altivec supported
>      Model:                  2.0 (pvr 0080 1200)
>      Thread(s) per core:     4
>      Core(s) per socket:     2
>      Socket(s):              2
>      Frequency boost:        enabled
>      CPU(s) scaling MHz:     76%
>      CPU max MHz:            3800.0000
>      CPU min MHz:            2000.0000
>    Caches (sum of all):
>      L1d:                    128 KiB (4 instances)
>      L1i:                    128 KiB (4 instances)
>    NUMA:
>      NUMA node(s):           2
>      NUMA node0 CPU(s):      0-7
>      NUMA node1 CPU(s):      8-15
> 
> with devices :
> 
>    0000:00:00.0 PCI bridge: IBM Device 0652
>    0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>    0001:00:00.0 PCI bridge: IBM Device 0652
>    0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>    0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>    0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>    0002:00:00.0 PCI bridge: IBM Device 0652
>    ...
> 
> A rhel9 nested guest boots too.
> 
> Poweroff and reboot are fine.
> 
> 
> 
> Michael,
> 
> I would say ship it.
> 
> 
> Glenn, Gautam,
> 
> It would nice to get rid of these messages.
>    
>    [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>    [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>    [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>    CPU 0100 Backtrace:
>     S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>     S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>     S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>     S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>     S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>     --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>    [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
> 
> 
> Is it a modeling issue ?
> 
> 
> Thanks,
> 
> C.
> 
> 
> 
> 

Thank you, Cédric!

I'm not sure what's causing that error message.  I'm assuming it wasn't
there before now, which would probably mean that something (the model?)
is enabling the PHYS CAMs at initialization or realization where we
didn't before.

Mike Kowal, is that the expected behavior?  Can you take a look when
you have a chance?

Thanks,

Glenn