[PATCH] x86/HVM: limit upcall vector related verbosity
Posted by Jan Beulich 5 months, 1 week ago
Avoid logging all-identical messages for every vCPU, but make sure to
log unusual events like the vector differing from vCPU 0's (note that
the respective condition also makes sure vCPU 0 itself will have the
vector setting logged), or it changing after it was once set. (Arguably
a downside is that some vCPU not having its vector set would no longer
be recognizable from the logs. But I think that's tolerable as
sufficiently unlikely outside of people actively fiddling with related
code.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4129,7 +4129,10 @@ static int hvmop_set_evtchn_upcall_vecto
     if ( (v = domain_vcpu(d, op.vcpu)) == NULL )
         return -ENOENT;
 
-    printk(XENLOG_G_INFO "%pv: upcall vector %02x\n", v, op.vector);
+    if ( op.vector != d->vcpu[0]->arch.hvm.evtchn_upcall_vector ||
+         (v->arch.hvm.evtchn_upcall_vector &&
+          op.vector != v->arch.hvm.evtchn_upcall_vector) )
+        printk(XENLOG_G_INFO "%pv: upcall vector %02x\n", v, op.vector);
 
     v->arch.hvm.evtchn_upcall_vector = op.vector;
     hvm_assert_evtchn_irq(v);
Re: [PATCH] x86/HVM: limit upcall vector related verbosity
Posted by Roger Pau Monné 5 months, 1 week ago
On Thu, Nov 23, 2023 at 11:25:34AM +0100, Jan Beulich wrote:
> Avoid logging all-identical messages for every vCPU, but make sure to
> log unusual events like the vector differing from vCPU 0's (note that
> the respective condition also makes sure vCPU 0 itself will have the
> vector setting logged), or it changing after it was once set. (Arguably
> a downside is that some vCPU not having its vector set would no longer
> be recognizable from the logs. But I think that's tolerable as
> sufficiently unlikely outside of people actively fiddling with related
> code.)

Maybe we could consider printing unconditionally for debug builds?

Thanks, Roger.
Re: [PATCH] x86/HVM: limit upcall vector related verbosity
Posted by Jan Beulich 5 months, 1 week ago
On 23.11.2023 11:47, Roger Pau Monné wrote:
> On Thu, Nov 23, 2023 at 11:25:34AM +0100, Jan Beulich wrote:
>> Avoid logging all-identical messages for every vCPU, but make sure to
>> log unusual events like the vector differing from vCPU 0's (note that
>> the respective condition also makes sure vCPU 0 itself will have the
>> vector setting logged), or it changing after it was once set. (Arguably
>> a downside is that some vCPU not having its vector set would no longer
>> be recognizable from the logs. But I think that's tolerable as
>> sufficiently unlikely outside of people actively fiddling with related
>> code.)
> 
> Maybe we could consider printing unconditionally for debug builds?

Indeed I was considering that, but it's primarily debug builds where the
unnecessary redundancy is annoying me. (After all I work with debug builds
much more than with release ones.)

Jan

Re: [PATCH] x86/HVM: limit upcall vector related verbosity
Posted by Roger Pau Monné 5 months, 1 week ago
On Thu, Nov 23, 2023 at 11:50:41AM +0100, Jan Beulich wrote:
> On 23.11.2023 11:47, Roger Pau Monné wrote:
> > On Thu, Nov 23, 2023 at 11:25:34AM +0100, Jan Beulich wrote:
> >> Avoid logging all-identical messages for every vCPU, but make sure to
> >> log unusual events like the vector differing from vCPU 0's (note that
> >> the respective condition also makes sure vCPU 0 itself will have the
> >> vector setting logged), or it changing after it was once set. (Arguably
> >> a downside is that some vCPU not having its vector set would no longer
> >> be recognizable from the logs. But I think that's tolerable as
> >> sufficiently unlikely outside of people actively fiddling with related
> >> code.)
> > 
> > Maybe we could consider printing unconditionally for debug builds?
> 
> Indeed I was considering that, but it's primarily debug builds where the
> unnecessary redundancy is annoying me. (After all I work with debug builds
> much more than with release ones.)

I did find the message useful when doing guest bringup in the past, in
order to know the guest was correctly setting up the vector callbacks.

I guess there are other ways to figure that out, or the message could
be added when people are doing bringup themselves.

I find the save/restore messages during domain create much more
unhelpful and annoying than this.

Thanks, Roger.

Re: [PATCH] x86/HVM: limit upcall vector related verbosity
Posted by Andrew Cooper 5 months, 1 week ago
On 23/11/2023 11:01 am, Roger Pau Monné wrote:
> On Thu, Nov 23, 2023 at 11:50:41AM +0100, Jan Beulich wrote:
>> On 23.11.2023 11:47, Roger Pau Monné wrote:
>>> On Thu, Nov 23, 2023 at 11:25:34AM +0100, Jan Beulich wrote:
>>>> Avoid logging all-identical messages for every vCPU, but make sure to
>>>> log unusual events like the vector differing from vCPU 0's (note that
>>>> the respective condition also makes sure vCPU 0 itself will have the
>>>> vector setting logged), or it changing after it was once set. (Arguably
>>>> a downside is that some vCPU not having its vector set would no longer
>>>> be recognizable from the logs. But I think that's tolerable as
>>>> sufficiently unlikely outside of people actively fiddling with related
>>>> code.)
>>> Maybe we could consider printing unconditionally for debug builds?
>> Indeed I was considering that, but it's primarily debug builds where the
>> unnecessary redundancy is annoying me. (After all I work with debug builds
>> much more than with release ones.)
> I did find the message useful when doing guest bringup in the past, in
> order to know the guest was correctly setting up the vector callbacks.
>
> I guess there are other ways to figure that out, or the message could
> be added when people are doing bringup themselves.
>
> I find the save/restore messages during domain create much more
> unhelpful and annoying than this.

+1  I delete them in every debugging branch...

As to this message, we ought to do something.

However, the reason it's on every vCPU is that on Windows there's no
ability to request the same vector on every CPU, and it often does end
up slightly different.

I think the logic added is fine, although it could do with a small
comment explaining why.  Debugging of weird bugs by dmesg isn't helpful
anyway - things like "did it have a vector upcall" should be answered by
tools in dom0.

~Andrew

Re: [PATCH] x86/HVM: limit upcall vector related verbosity
Posted by Jan Beulich 5 months, 1 week ago
On 23.11.2023 12:01, Roger Pau Monné wrote:
> On Thu, Nov 23, 2023 at 11:50:41AM +0100, Jan Beulich wrote:
>> On 23.11.2023 11:47, Roger Pau Monné wrote:
>>> On Thu, Nov 23, 2023 at 11:25:34AM +0100, Jan Beulich wrote:
>>>> Avoid logging all-identical messages for every vCPU, but make sure to
>>>> log unusual events like the vector differing from vCPU 0's (note that
>>>> the respective condition also makes sure vCPU 0 itself will have the
>>>> vector setting logged), or it changing after it was once set. (Arguably
>>>> a downside is that some vCPU not having its vector set would no longer
>>>> be recognizable from the logs. But I think that's tolerable as
>>>> sufficiently unlikely outside of people actively fiddling with related
>>>> code.)
>>>
>>> Maybe we could consider printing unconditionally for debug builds?
>>
>> Indeed I was considering that, but it's primarily debug builds where the
>> unnecessary redundancy is annoying me. (After all I work with debug builds
>> much more than with release ones.)
> 
> I did find the message useful when doing guest bringup in the past, in
> order to know the guest was correctly setting up the vector callbacks.
> 
> I guess there are other ways to figure that out, or the message could
> be added when people are doing bringup themselves.
> 
> I find the save/restore messages during domain create much more
> unhelpful and annoying than this.

Yeah, these too are ugly. But going through this save/restore cycle when
starting a guest is supposed to go away anyway, according to Andrew. Plus
the primary annoyance here is for PVH Dom0 with vCPU count not restricted,
and there's no save/restore involved there. The typical HVM guest, otoh,
wouldn't have as many vCPU-s ...

Jan