[PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"

Posted by Roger Pau Monne 5 days, 15 hours ago
This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
the current memory target for PV guests is still fetched from
start_info->nr_pages, which matches exactly what the toolstack sets the
initial memory target to.

Using get_num_physpages() is possible on PV also, but would require extra
adjustments to take into account the ISA hole and the PFN at 0, which is
not considered usable memory despite being populated.  Instead of
carrying those extra adjustments switch back to the previous code.  That
leaves Linux with a difference in how the current memory target is
obtained for HVM vs PV, but that's better than adding extra logic just
for PV.

Also, for HVM the target is not (and has never been) accurately calculated,
as in that case part of what starts as guest memory is reused by hvmloader
and possibly other firmware to store ACPI tables and similar firmware
information, thus the memory is no longer being reported as RAM in the
memory map.

Reported-by: James Dingwall <james@dingwall.me.uk>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 drivers/xen/balloon.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 49c3f9926394..e799650f6c8c 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -724,6 +724,7 @@ static int __init balloon_add_regions(void)
 static int __init balloon_init(void)
 {
 	struct task_struct *task;
+	unsigned long current_pages;
 	int rc;
 
 	if (!xen_domain())
@@ -731,12 +732,15 @@ static int __init balloon_init(void)
 
 	pr_info("Initialising balloon driver\n");
 
-	if (xen_released_pages >= get_num_physpages()) {
+	current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
+	                                : get_num_physpages();
+
+	if (xen_released_pages >= current_pages) {
 		WARN(1, "Released pages underflow current target");
 		return -ERANGE;
 	}
 
-	balloon_stats.current_pages = get_num_physpages() - xen_released_pages;
+	balloon_stats.current_pages = current_pages - xen_released_pages;
 	balloon_stats.target_pages  = balloon_stats.current_pages;
 	balloon_stats.balloon_low   = 0;
 	balloon_stats.balloon_high  = 0;
-- 
2.51.0


Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by James Dingwall 4 days, 18 hours ago
On Tue, Jan 20, 2026 at 03:06:47PM +0100, Roger Pau Monne wrote:
> This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
> the current memory target for PV guests is still fetched from
> start_info->nr_pages, which matches exactly what the toolstack sets the
> initial memory target to.
> 
> Using get_num_physpages() is possible on PV also, but needs adjusting to
> take into account the ISA hole and the PFN at 0 not considered usable
> memory despite being populated, and hence would need extra adjustments.
> Instead of carrying those extra adjustments switch back to the previous
> code.  That leaves Linux with a difference in how current memory target is
> obtained for HVM vs PV, but that's better than adding extra logic just for
> PV.
> 
> Also, for HVM the target is not (and has never been) accurately calculated,
> as in that case part of what starts as guest memory is reused by hvmloader
> and possibly other firmware to store ACPI tables and similar firmware
> information, thus the memory is no longer being reported as RAM in the
> memory map.
> 
> Reported-by: James Dingwall <james@dingwall.me.uk>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  drivers/xen/balloon.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 49c3f9926394..e799650f6c8c 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -724,6 +724,7 @@ static int __init balloon_add_regions(void)
>  static int __init balloon_init(void)
>  {
>  	struct task_struct *task;
> +	unsigned long current_pages;
>  	int rc;
>  
>  	if (!xen_domain())
> @@ -731,12 +732,15 @@ static int __init balloon_init(void)
>  
>  	pr_info("Initialising balloon driver\n");
>  
> -	if (xen_released_pages >= get_num_physpages()) {
> +	current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
> +	                                : get_num_physpages();
> +
> +	if (xen_released_pages >= current_pages) {
>  		WARN(1, "Released pages underflow current target");
>  		return -ERANGE;
>  	}
>  
> -	balloon_stats.current_pages = get_num_physpages() - xen_released_pages;
> +	balloon_stats.current_pages = current_pages - xen_released_pages;
>  	balloon_stats.target_pages  = balloon_stats.current_pages;
>  	balloon_stats.balloon_low   = 0;
>  	balloon_stats.balloon_high  = 0;
> -- 
> 2.51.0
> 

Thank you Roger, I tested this patch on the system which originally showed
the error and the pci passthrough now works as expected.

Regards,
James
Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Jason Andryuk 5 days, 9 hours ago
On 2026-01-20 09:06, Roger Pau Monne wrote:
> This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
> the current memory target for PV guests is still fetched from
> start_info->nr_pages, which matches exactly what the toolstack sets the
> initial memory target to.
> 
> Using get_num_physpages() is possible on PV also, but needs adjusting to
> take into account the ISA hole and the PFN at 0 not considered usable
> memory despite being populated, and hence would need extra adjustments.
> Instead of carrying those extra adjustments switch back to the previous
> code.  That leaves Linux with a difference in how current memory target is
> obtained for HVM vs PV, but that's better than adding extra logic just for
> PV.
> 
> Also, for HVM the target is not (and has never been) accurately calculated,
> as in that case part of what starts as guest memory is reused by hvmloader
> and possibly other firmware to store ACPI tables and similar firmware
> information, thus the memory is no longer being reported as RAM in the
> memory map.
> 
> Reported-by: James Dingwall <james@dingwall.me.uk>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>

Thanks,
Jason

Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Roger Pau Monné 4 days, 18 hours ago
On Tue, Jan 20, 2026 at 03:10:06PM -0500, Jason Andryuk wrote:
> On 2026-01-20 09:06, Roger Pau Monne wrote:
> > This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
> > the current memory target for PV guests is still fetched from
> > start_info->nr_pages, which matches exactly what the toolstack sets the
> > initial memory target to.
> > 
> > Using get_num_physpages() is possible on PV also, but needs adjusting to
> > take into account the ISA hole and the PFN at 0 not considered usable
> > memory depite being populated, and hence would need extra adjustments.
> > Instead of carrying those extra adjustments switch back to the previous
> > code.  That leaves Linux with a difference in how current memory target is
> > obtained for HVM vs PV, but that's better than adding extra logic just for
> > PV.
> > 
> > Also, for HVM the target is not (and has never been) accurately calculated,
> > as in that case part of what starts as guest memory is reused by hvmloader
> > and possibly other firmware to store ACPI tables and similar firmware
> > information, thus the memory is no longer being reported as RAM in the
> > memory map.
> > 
> > Reported-by: James Dingwall <james@dingwall.me.uk>
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>

Thanks.

I've been considering what we discussed and as a separate follow up we
might want to attempt to switch to using `XENMEM_current_reservation`
for dom0?  It might make the accounting in PVH dom0 better, as that's
what the toolstack uses to set the xenstore target when initializing
dom0 values.

Regards, Roger.
Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Jason Andryuk 4 days, 12 hours ago
On 2026-01-21 06:17, Roger Pau Monné wrote:
> On Tue, Jan 20, 2026 at 03:10:06PM -0500, Jason Andryuk wrote:
>> On 2026-01-20 09:06, Roger Pau Monne wrote:
>>> This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
>>> the current memory target for PV guests is still fetched from
>>> start_info->nr_pages, which matches exactly what the toolstack sets the
>>> initial memory target to.
>>>
>>> Using get_num_physpages() is possible on PV also, but needs adjusting to
>>> take into account the ISA hole and the PFN at 0 not considered usable
>>> memory despite being populated, and hence would need extra adjustments.
>>> Instead of carrying those extra adjustments switch back to the previous
>>> code.  That leaves Linux with a difference in how current memory target is
>>> obtained for HVM vs PV, but that's better than adding extra logic just for
>>> PV.
>>>
>>> Also, for HVM the target is not (and has never been) accurately calculated,
>>> as in that case part of what starts as guest memory is reused by hvmloader
>>> and possibly other firmware to store ACPI tables and similar firmware
>>> information, thus the memory is no longer being reported as RAM in the
>>> memory map.
>>>
>>> Reported-by: James Dingwall <james@dingwall.me.uk>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
> 
> Thanks.
> 
> I've been considering what we discussed and as a separate follow up we
> might want to attempt to switch to using `XENMEM_current_reservation`
> for dom0?  It might make the accounting in PVH dom0 better, as that's
> what the toolstack uses to set the xenstore target when initializing
> dom0 values.

Yes, I thought that could be a follow on.  I've attached what I have 
tested, but it is based on a branch pre-dating xen_released_pages. 
xenmem_current_reservation with PVH dom0 seemed good without the 
xen_released_pages adjustment, but I don't know what that would be for a 
PVH dom0.

Regards,
Jason
Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Roger Pau Monné 4 days, 11 hours ago
On Wed, Jan 21, 2026 at 12:21:33PM -0500, Jason Andryuk wrote:
> On 2026-01-21 06:17, Roger Pau Monné wrote:
> > On Tue, Jan 20, 2026 at 03:10:06PM -0500, Jason Andryuk wrote:
> > > On 2026-01-20 09:06, Roger Pau Monne wrote:
> > > > This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
> > > > the current memory target for PV guests is still fetched from
> > > > start_info->nr_pages, which matches exactly what the toolstack sets the
> > > > initial memory target to.
> > > > 
> > > > Using get_num_physpages() is possible on PV also, but needs adjusting to
> > > > take into account the ISA hole and the PFN at 0 not considered usable
> > > > memory despite being populated, and hence would need extra adjustments.
> > > > Instead of carrying those extra adjustments switch back to the previous
> > > > code.  That leaves Linux with a difference in how current memory target is
> > > > obtained for HVM vs PV, but that's better than adding extra logic just for
> > > > PV.
> > > > 
> > > > Also, for HVM the target is not (and has never been) accurately calculated,
> > > > as in that case part of what starts as guest memory is reused by hvmloader
> > > > and possibly other firmware to store ACPI tables and similar firmware
> > > > information, thus the memory is no longer being reported as RAM in the
> > > > memory map.
> > > > 
> > > > Reported-by: James Dingwall <james@dingwall.me.uk>
> > > > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > > 
> > > Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
> > 
> > Thanks.
> > 
> > I've been considering what we discussed and as a separate follow up we
> > might want to attempt to switch to using `XENMEM_current_reservation`
> > for dom0?  It might make the accounting in PVH dom0 better, as that's
> > what the toolstack uses to set the xenstore target when initializing
> > dom0 values.
> 
> Yes, I thought that could be a follow on.  I've attached what I have tested,
> but it is based on a branch pre-dating xen_released_pages.
> xenmem_current_reservation with PVH dom0 seemed good without the
> xen_released_pages adjustment, but I don't know what that would be for a PVH
> dom0.
> 
> Regards,
> Jason

> From 8b628ad0ebe52c30e31298e868f2f5187f2f52da Mon Sep 17 00:00:00 2001
> From: Jason Andryuk <jason.andryuk@amd.com>
> Date: Fri, 7 Nov 2025 16:44:53 -0500
> Subject: [PATCH] xen/balloon: Initialize dom0 with XENMEM_current_reservation
> 
> The balloon driver bases its action off the memory/target and
> memory/static-max xenstore keys.  These are set by the toolstack and
> match the domain's hypervisor allocated pages - domain_tot_pages().
> 
> However, PVH and HVM domains query get_num_physpages() for the initial
> balloon driver current and target pages.  get_num_physpages() is different
> from domain_tot_pages(), and has been observed to be way off in a PVH dom0.
> Here a PVH dom0 is assigned 917503 pages (3.5GB), but
> get_num_physpages() is 7312455:
> xen:balloon: pages curr 7312455 target 7312455
> xen:balloon: current_reservation 917503
> 
> While Xen truncated the PVH dom0 memory map's RAM, Linux undoes that
> operation and restores RAM regions in pvh_reserve_extra_memory().
> 
> Use XENMEM_current_reservation to initialize the balloon driver current
> and target pages.  This is the hypervisor value, and matches what the
> toolstack writes.  This avoids any ballooning from the initial
> allocation.  While domUs are affected, the implications of the change
> are unclear - only make the change for dom0.
> 
> Signed-off-by: Jason Andryuk <jason.andryuk@amd.com>
> ---
>  drivers/xen/balloon.c          | 9 ++++++---
>  drivers/xen/mem-reservation.c  | 8 ++++++++
>  include/xen/interface/memory.h | 5 +++++
>  include/xen/mem-reservation.h  | 2 ++
>  4 files changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 528395133b4f..fa6cbe6ce2ca 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -713,10 +713,13 @@ static int __init balloon_init(void)
>  
>  #ifdef CONFIG_XEN_PV
>  	balloon_stats.current_pages = xen_pv_domain()
> -		? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
> -		: get_num_physpages();
> +		? min(xen_start_info->nr_pages - xen_released_pages, max_pfn) :
> +		xen_initial_domain() ? xenmem_current_reservation() :
> +				       get_num_physpages();
>  #else
> -	balloon_stats.current_pages = get_num_physpages();
> +	balloon_stats.current_pages =
> +		xen_initial_domain() ? xenmem_current_reservation() :
> +				       get_num_physpages();
>  #endif
>  	balloon_stats.target_pages  = balloon_stats.current_pages;
>  	balloon_stats.balloon_low   = 0;
> diff --git a/drivers/xen/mem-reservation.c b/drivers/xen/mem-reservation.c
> index 24648836e0d4..40c5c40d34fe 100644
> --- a/drivers/xen/mem-reservation.c
> +++ b/drivers/xen/mem-reservation.c
> @@ -113,3 +113,11 @@ int xenmem_reservation_decrease(int count, xen_pfn_t *frames)
>  	return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
>  }
>  EXPORT_SYMBOL_GPL(xenmem_reservation_decrease);
> +
> +long xenmem_current_reservation(void)
> +{
> +	struct xen_memory_domain domain = { .domid = DOMID_SELF };
> +
> +	return HYPERVISOR_memory_op(XENMEM_current_reservation, &domain);
> +}
> +EXPORT_SYMBOL_GPL(xenmem_current_reservation);
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index 1a371a825c55..72619a75fed2 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -104,6 +104,11 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
>   */
>  #define XENMEM_maximum_ram_page     2
>  
> +struct xen_memory_domain {
> +    /* [IN] Domain information is being queried for. */
> +    domid_t domid;
> +};

Other callers that would use xen_memory_domain just pass a pointer to
a domid_t, I think you could simplify the patch to the diff below,
which sits on top of the patch here:

I haven't tested it yet to see whether that's OK to do on PV, I would
think PV and PVH would be the same here, since the setting of the
xenstore target value is based on the return of
XENMEM_current_reservation for both.

Thanks, Roger.
---
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index e799650f6c8c..c592d7bae8c3 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -724,7 +724,8 @@ static int __init balloon_add_regions(void)
 static int __init balloon_init(void)
 {
 	struct task_struct *task;
-	unsigned long current_pages;
+	unsigned long current_pages = 0;
+	domid_t domid = DOMID_SELF;
 	int rc;
 
 	if (!xen_domain())
@@ -732,8 +733,13 @@ static int __init balloon_init(void)
 
 	pr_info("Initialising balloon driver\n");
 
-	current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
-	                                : get_num_physpages();
+	if (xen_initial_domain())
+		current_pages = HYPERVISOR_memory_op(XENMEM_current_reservation,
+		                                     &domid);
+	if (current_pages <= 0)
+		current_pages =
+		    xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
+	                            : get_num_physpages();
 
 	if (xen_released_pages >= current_pages) {
 		WARN(1, "Released pages underflow current target");
Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Jason Andryuk 3 days, 15 hours ago
On 2026-01-21 12:49, Roger Pau Monné wrote:
> On Wed, Jan 21, 2026 at 12:21:33PM -0500, Jason Andryuk wrote:
>> On 2026-01-21 06:17, Roger Pau Monné wrote:
>>> On Tue, Jan 20, 2026 at 03:10:06PM -0500, Jason Andryuk wrote:
>>>> On 2026-01-20 09:06, Roger Pau Monne wrote:
>>>>> This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
>>>>> the current memory target for PV guests is still fetched from
>>>>> start_info->nr_pages, which matches exactly what the toolstack sets the
>>>>> initial memory target to.
>>>>>
>>>>> Using get_num_physpages() is possible on PV also, but needs adjusting to
>>>>> take into account the ISA hole and the PFN at 0 not considered usable
>>>>> memory despite being populated, and hence would need extra adjustments.
>>>>> Instead of carrying those extra adjustments switch back to the previous
>>>>> code.  That leaves Linux with a difference in how current memory target is
>>>>> obtained for HVM vs PV, but that's better than adding extra logic just for
>>>>> PV.
>>>>>
>>>>> Also, for HVM the target is not (and has never been) accurately calculated,
>>>>> as in that case part of what starts as guest memory is reused by hvmloader
>>>>> and possibly other firmware to store ACPI tables and similar firmware
>>>>> information, thus the memory is no longer being reported as RAM in the
>>>>> memory map.
>>>>>
>>>>> Reported-by: James Dingwall <james@dingwall.me.uk>
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>>
>>>> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
>>>
>>> Thanks.
>>>
>>> I've been considering what we discussed and as a separate follow up we
>>> might want to attempt to switch to using `XENMEM_current_reservation`
>>> for dom0?  It might make the accounting in PVH dom0 better, as that's
>>> what the toolstack uses to set the xenstore target when initializing
>>> dom0 values.
>>
>> Yes, I thought that could be a follow on.  I've attached what I have tested,
>> but it is based on a branch pre-dating xen_released_pages.
>> xenmem_current_reservation with PVH dom0 seemed good without the
>> xen_released_pages adjustment, but I don't know what that would be for a PVH
>> dom0.
>>
>> Regards,
>> Jason
> 
>>  From 8b628ad0ebe52c30e31298e868f2f5187f2f52da Mon Sep 17 00:00:00 2001
>> From: Jason Andryuk <jason.andryuk@amd.com>
>> Date: Fri, 7 Nov 2025 16:44:53 -0500
>> Subject: [PATCH] xen/balloon: Initialize dom0 with XENMEM_current_reservation
>>
>> The balloon driver bases its action off the memory/target and
>> memory/static-max xenstore keys.  These are set by the toolstack and
>> match the domain's hypervisor allocated pages - domain_tot_pages().
>>
>> However, PVH and HVM domains query get_num_physpages() for the initial
>> balloon driver current and target pages.  get_num_physpages() is different
>> from domain_tot_pages(), and has been observed to be way off in a PVH dom0.
>> Here a PVH dom0 is assigned 917503 pages (3.5GB), but
>> get_num_physpages() is 7312455:
>> xen:balloon: pages curr 7312455 target 7312455
>> xen:balloon: current_reservation 917503
>>
>> While Xen truncated the PVH dom0 memory map's RAM, Linux undoes that
>> operation and restores RAM regions in pvh_reserve_extra_memory().
>>
>> Use XENMEM_current_reservation to initialize the balloon driver current
>> and target pages.  This is the hypervisor value, and matches what the
>> toolstack writes.  This avoids any ballooning from the initial
>> allocation.  While domUs are affected, the implications of the change
>> are unclear - only make the change for dom0.
>>
>> Signed-off-by: Jason Andryuk <jason.andryuk@amd.com>
>> ---
>>   drivers/xen/balloon.c          | 9 ++++++---
>>   drivers/xen/mem-reservation.c  | 8 ++++++++
>>   include/xen/interface/memory.h | 5 +++++
>>   include/xen/mem-reservation.h  | 2 ++
>>   4 files changed, 21 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
>> index 528395133b4f..fa6cbe6ce2ca 100644
>> --- a/drivers/xen/balloon.c
>> +++ b/drivers/xen/balloon.c
>> @@ -713,10 +713,13 @@ static int __init balloon_init(void)
>>   
>>   #ifdef CONFIG_XEN_PV
>>   	balloon_stats.current_pages = xen_pv_domain()
>> -		? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
>> -		: get_num_physpages();
>> +		? min(xen_start_info->nr_pages - xen_released_pages, max_pfn) :
>> +		xen_initial_domain() ? xenmem_current_reservation() :
>> +				       get_num_physpages();
>>   #else
>> -	balloon_stats.current_pages = get_num_physpages();
>> +	balloon_stats.current_pages =
>> +		xen_initial_domain() ? xenmem_current_reservation() :
>> +				       get_num_physpages();
>>   #endif
>>   	balloon_stats.target_pages  = balloon_stats.current_pages;
>>   	balloon_stats.balloon_low   = 0;
>> diff --git a/drivers/xen/mem-reservation.c b/drivers/xen/mem-reservation.c
>> index 24648836e0d4..40c5c40d34fe 100644
>> --- a/drivers/xen/mem-reservation.c
>> +++ b/drivers/xen/mem-reservation.c
>> @@ -113,3 +113,11 @@ int xenmem_reservation_decrease(int count, xen_pfn_t *frames)
>>   	return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
>>   }
>>   EXPORT_SYMBOL_GPL(xenmem_reservation_decrease);
>> +
>> +long xenmem_current_reservation(void)
>> +{
>> +	struct xen_memory_domain domain = { .domid = DOMID_SELF };
>> +
>> +	return HYPERVISOR_memory_op(XENMEM_current_reservation, &domain);
>> +}
>> +EXPORT_SYMBOL_GPL(xenmem_current_reservation);
>> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
>> index 1a371a825c55..72619a75fed2 100644
>> --- a/include/xen/interface/memory.h
>> +++ b/include/xen/interface/memory.h
>> @@ -104,6 +104,11 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
>>    */
>>   #define XENMEM_maximum_ram_page     2
>>   
>> +struct xen_memory_domain {
>> +    /* [IN] Domain information is being queried for. */
>> +    domid_t domid;
>> +};
> 
> Other callers that would use xen_memory_domain just pass a pointer to
> a domid_t, I think you could simplify the patch to the diff below,
> which sits on top of the patch here:

Ah, yes, xen_memory_domain is just wrapping a domid.

> 
> I haven't tested it yet to see whether that's OK to do on PV, I would
> think PV and PVH would be the same here, since the setting of the
> xenstore target value is based on the return of
> XENMEM_current_reservation for both.

On a system with 32GB and dom0=pvh dom0_mem=7G:

[    0.295201] xen:balloon: current_pages: 1835007 get_num_physpages 
8220126 xen_released_pages 6385120
[    0.295201] ------------[ cut here ]------------
[    0.295201] Released pages underflow current target

8220126 - 6385120 = 1835006

And also for PV:

[    1.406923] xen:balloon: current_pages: 1835008 get_num_physpages 
8220127 xen_released_pages 6385120
[    1.406928] ------------[ cut here ]------------
[    1.406931] Released pages underflow current target


So we don't want to subtract xen_released_pages for dom0.  Is 
xen_released_pages expected to be non-zero for a domU?

IIRC, for a domU, xl writes the xenstore nodes as the ~build time memory 
value, which doesn't include video ram.  Later QEMU populates the 
videoram, which increases current reservation.  Then the two values 
don't match when the domU initializes the balloon values.

Regards,
Jason

Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Roger Pau Monné 3 days, 12 hours ago
On Thu, Jan 22, 2026 at 09:40:01AM -0500, Jason Andryuk wrote:
> On 2026-01-21 12:49, Roger Pau Monné wrote:
> > I haven't tested it yet to see whether that's OK to do on PV, I would
> > think PV and PVH would be the same here, since the setting of the
> > xenstore target value is based on the return of
> > XENMEM_current_reservation for both.
> 
> On a system with 32GB and dom0=pvh dom0_mem=7G:
> 
> [    0.295201] xen:balloon: current_pages: 1835007 get_num_physpages 8220126
> xen_released_pages 6385120
> [    0.295201] ------------[ cut here ]------------
> [    0.295201] Released pages underflow current target
> 
> 8220126 - 6385120 = 1835006
> 
> And also for PV:
> 
> [    1.406923] xen:balloon: current_pages: 1835008 get_num_physpages 8220127
> xen_released_pages 6385120
> [    1.406928] ------------[ cut here ]------------
> [    1.406931] Released pages underflow current target
> 
> 
> So we don't want to subtract xen_released_pages for dom0.  Is
> xen_released_pages expected to be non-zero for a domU?

Oh, yes.  In fact I think the patch here is wrong for PV dom0, as it
shouldn't subtract xen_released_pages from xen_start_info->nr_pages.
I will need to send v2.

> IIRC, for a domU, xl writes the xenstore nodes as the ~build time memory
> value, which doesn't include video ram.  Later QEMU populates the videoram,
> which increases current reservation.  Then the two values don't match when
> the domU initializes the balloon values.

Yeah, the modifications done to the physmap by QEMU skew the target,
so what's in xenstore doesn't match what `XENMEM_current_reservation`
returns.  However, this is very hard to fix.  We could attempt to make
the toolstack write the xenstore node based on the return of
XENMEM_current_reservation once QEMU has started.  Sadly a domU would
have no way to know whether the xenstore value accounts for the QEMU
consumed memory or not.  We would need to introduce a new target
xenstore node, which is equally messy.

Thanks for the testing.

Roger.
Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Jason Andryuk 3 days, 12 hours ago
On 2026-01-22 11:57, Roger Pau Monné wrote:
> On Thu, Jan 22, 2026 at 09:40:01AM -0500, Jason Andryuk wrote:
>> On 2026-01-21 12:49, Roger Pau Monné wrote:
>>> I haven't tested it yet to see whether that's OK to do on PV, I would
>>> think PV and PVH would be the same here, since the setting of the
>>> xenstore target value is based on the return of
>>> XENMEM_current_reservation for both.
>>
>> On a system with 32GB and dom0=pvh dom0_mem=7G:
>>
>> [    0.295201] xen:balloon: current_pages: 1835007 get_num_physpages 8220126
>> xen_released_pages 6385120
>> [    0.295201] ------------[ cut here ]------------
>> [    0.295201] Released pages underflow current target
>>
>> 8220126 - 6385120 = 1835006
>>
>> And also for PV:
>>
>> [    1.406923] xen:balloon: current_pages: 1835008 get_num_physpages 8220127
>> xen_released_pages 6385120
>> [    1.406928] ------------[ cut here ]------------
>> [    1.406931] Released pages underflow current target
>>
>>
>> So we don't want to subtract xen_released_pages for dom0.  Is
>> xen_released_pages expected to be non-zero for a domU?
> 
> Oh, yes.  In fact I think the patch here is wrong for PV dom0, as it
> shouldn't subtract xen_released_pages from xen_start_info->nr_pages.
> I will need to send v2.

To be clear, the numbers and warning are from the follow on 
current_reservation patch.

Regards,
Jason

Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Roger Pau Monné 3 days, 12 hours ago
On Thu, Jan 22, 2026 at 12:03:47PM -0500, Jason Andryuk wrote:
> On 2026-01-22 11:57, Roger Pau Monné wrote:
> > On Thu, Jan 22, 2026 at 09:40:01AM -0500, Jason Andryuk wrote:
> > > On 2026-01-21 12:49, Roger Pau Monné wrote:
> > > > I haven't tested it yet to see whether that's OK to do on PV, I would
> > > > think PV and PVH would be the same here, since the setting of the
> > > > xenstore target value is based on the return of
> > > > XENMEM_current_reservation for both.
> > > 
> > > On a system with 32GB and dom0=pvh dom0_mem=7G:
> > > 
> > > [    0.295201] xen:balloon: current_pages: 1835007 get_num_physpages 8220126
> > > xen_released_pages 6385120
> > > [    0.295201] ------------[ cut here ]------------
> > > [    0.295201] Released pages underflow current target
> > > 
> > > 8220126 - 6385120 = 1835006
> > > 
> > > And also for PV:
> > > 
> > > [    1.406923] xen:balloon: current_pages: 1835008 get_num_physpages 8220127
> > > xen_released_pages 6385120
> > > [    1.406928] ------------[ cut here ]------------
> > > [    1.406931] Released pages underflow current target
> > > 
> > > 
> > > So we don't want to subtract xen_released_pages for dom0.  Is
> > > xen_released_pages expected to be non-zero for a domU?
> > 
> > Oh, yes.  In fact I think the patch here is wrong for PV dom0, as it
> > shouldn't subtract xen_released_pages from xen_start_info->nr_pages.
> > I will need to send v2.
> 
> To be clear, the numbers and warning are from the follow on
> current_reservation patch.

Yes, but I think it's also bad to use xen_start_info->nr_pages -
xen_released_pages (my current proposal).  xen_released_pages had a
different meaning on PV, which I kind of stole for the unpopulated
alloc work.  It was originally meant to signal pages that are freed
during the boot process when it's not possible to relocate them; see
xen_set_identity_and_release_chunk.  Unpopulated alloc doesn't free
the pages it uses, because they are already free to start with.

I think I need to introduce a new counter that accounts for pages
consumed by unpopulated alloc strictly, and not re-use the
xen_released_pages counter.

All this memory accounting is far more complex than it should be
sadly.

Thanks, Roger.

Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Jan Beulich 3 days, 22 hours ago
On 21.01.2026 18:49, Roger Pau Monné wrote:
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -724,7 +724,8 @@ static int __init balloon_add_regions(void)
>  static int __init balloon_init(void)
>  {
>  	struct task_struct *task;
> -	unsigned long current_pages;
> +	unsigned long current_pages = 0;

With this, ...

> @@ -732,8 +733,13 @@ static int __init balloon_init(void)
>  
>  	pr_info("Initialising balloon driver\n");
>  
> -	current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
> -	                                : get_num_physpages();
> +	if (xen_initial_domain())
> +		current_pages = HYPERVISOR_memory_op(XENMEM_current_reservation,
> +		                                     &domid);
> +	if (current_pages <= 0)

... why <= ? Gives the impression you may mean to cover HYPERVISOR_memory_op()
returning an error, yet that'll require a signed long variable then.

Jan

> +		current_pages =
> +		    xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
> +	                            : get_num_physpages();
>  
>  	if (xen_released_pages >= current_pages) {
>  		WARN(1, "Released pages underflow current target");
> 


Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Roger Pau Monné 3 days, 21 hours ago
On Thu, Jan 22, 2026 at 08:17:22AM +0100, Jan Beulich wrote:
> On 21.01.2026 18:49, Roger Pau Monné wrote:
> > --- a/drivers/xen/balloon.c
> > +++ b/drivers/xen/balloon.c
> > @@ -724,7 +724,8 @@ static int __init balloon_add_regions(void)
> >  static int __init balloon_init(void)
> >  {
> >  	struct task_struct *task;
> > -	unsigned long current_pages;
> > +	unsigned long current_pages = 0;
> 
> With this, ...
> 
> > @@ -732,8 +733,13 @@ static int __init balloon_init(void)
> >  
> >  	pr_info("Initialising balloon driver\n");
> >  
> > -	current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn)
> > -	                                : get_num_physpages();
> > +	if (xen_initial_domain())
> > +		current_pages = HYPERVISOR_memory_op(XENMEM_current_reservation,
> > +		                                     &domid);
> > +	if (current_pages <= 0)
> 
> ... why <= ? Gives the impression you may mean to cover HYPERVISOR_memory_op()
> returning an error, yet that'll require a signed long variable then.

Oh, indeed, current_pages should be signed long.  This was an untested
mash-up of a slightly different patch, where I didn't realize that
current_pages was unsigned.

Thanks, Roger.

Re: [PATCH] Partial revert "x86/xen: fix balloon target initialization for PVH dom0"
Posted by Roger Pau Monné 5 days, 14 hours ago
On Tue, Jan 20, 2026 at 03:06:47PM +0100, Roger Pau Monne wrote:
> This partially reverts commit 87af633689ce16ddb166c80f32b120e50b1295de so
> the current memory target for PV guests is still fetched from
> start_info->nr_pages, which matches exactly what the toolstack sets the
> initial memory target to.
> 
> Using get_num_physpages() is possible on PV also, but needs adjusting to
> take into account the ISA hole and the PFN at 0 not considered usable
> memory despite being populated, and hence would need extra adjustments.
> Instead of carrying those extra adjustments switch back to the previous
> code.  That leaves Linux with a difference in how current memory target is
> obtained for HVM vs PV, but that's better than adding extra logic just for
> PV.
> 
> Also, for HVM the target is not (and has never been) accurately calculated,
> as in that case part of what starts as guest memory is reused by hvmloader
> and possibly other firmware to store ACPI tables and similar firmware
> information, thus the memory is no longer being reported as RAM in the
> memory map.
> 

While kind of obvious, I guess this needs a:

Fixes: 87af633689ce ("x86/xen: fix balloon target initialization for PVH dom0")

So it doesn't fall through the cracks for backports.

Thanks, Roger.