In allocate_global_asid(), 'global_asid_available' cannot be zero,
because use_global_asid() has already checked it before calling. The
'!global_asid_available' test therefore guarantees the warning in
allocate_global_asid() can never trigger; invert the condition so the
warning fires when the bitmap search fails despite ASIDs being
reported as available.
Fixes: d504d1247e36 ("x86/mm: Add global ASID allocation helper functions")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
arch/x86/mm/tlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index cad4a8eae2d8..e9eda296fb0e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -319,7 +319,7 @@ static u16 allocate_global_asid(void)
asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE, last_global_asid);
- if (asid >= MAX_ASID_AVAILABLE && !global_asid_available) {
+ if (asid >= MAX_ASID_AVAILABLE && global_asid_available) {
/* This should never happen. */
VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
global_asid_available);
--
2.31.1
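For context, the reason 'global_asid_available' is known to be non-zero
inside allocate_global_asid() is the check in its caller. Below is a
condensed sketch of use_global_asid() as introduced by d504d1247e36,
abbreviated for illustration rather than copied verbatim from the
kernel source:

```
static void use_global_asid(struct mm_struct *mm)
{
	u16 asid;

	guard(raw_spinlock_irqsave)(&global_asid_lock);

	/*
	 * Bail out early when the global ASID space is exhausted. Because
	 * of this check, allocate_global_asid() below always runs with
	 * global_asid_available != 0, so its old warning condition
	 * '... && !global_asid_available' could never evaluate to true.
	 */
	if (!global_asid_available) {
		VM_WARN_ONCE(1, "Ran out of global ASIDs\n");
		return;
	}

	asid = allocate_global_asid();
	if (!asid)
		return;

	assign_mm_global_asid(mm, asid);
}
```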
On Sat, 2025-03-29 at 21:05 +0800, Hou Wenlong wrote:
> In allocate_global_asid(), 'global_asid_available' cannot be zero,
> because use_global_asid() has already checked it before calling. The
> '!global_asid_available' test therefore guarantees the warning in
> allocate_global_asid() can never trigger; invert the condition so the
> warning fires when the bitmap search fails despite ASIDs being
> reported as available.
>
> Fixes: d504d1247e36 ("x86/mm: Add global ASID allocation helper functions")
> Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Good catch.
Reviewed-by: Rik van Riel <riel@surriel.com>
Looking at allocate_global_asid() again, I wonder if
that needs to be turned back into a loop.
What if we have no global asids available, and then
an asid gets freed that is smaller than the value
of last_global_asid?
--
All Rights Reversed.
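For reference, the free and reset paths that this scenario depends on
look roughly like the sketch below. The structure follows
d504d1247e36, but this is a paraphrase, and free_global_asid() is a
simplified stand-in for the real free path, not its actual name:

```
static void reset_global_asid_space(void)
{
	lockdep_assert_held(&global_asid_lock);

	/* Flush so that previously freed ASIDs are safe to reuse. */
	invlpgb_flush_all_nonglobals();

	/* Return the freed ASIDs to the allocatable pool... */
	bitmap_andnot(global_asid_used, global_asid_used,
		      global_asid_freed, MAX_ASID_AVAILABLE);
	bitmap_clear(global_asid_freed, 0, MAX_ASID_AVAILABLE);

	/* ...and restart allocation from the bottom of the space. */
	last_global_asid = TLB_NR_DYN_ASIDS;
}

/* Simplified stand-in for the real free path. */
static void free_global_asid(u16 asid)
{
	guard(raw_spinlock_irqsave)(&global_asid_lock);

	/*
	 * The freed ASID is only parked in global_asid_freed; its bit in
	 * global_asid_used stays set until reset_global_asid_space() runs.
	 * A freed ASID below last_global_asid is therefore invisible to
	 * find_next_zero_bit(global_asid_used, ...) in the allocator.
	 */
	__set_bit(asid, global_asid_freed);
	global_asid_available++;
}
```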
On Sun, Mar 30, 2025 at 01:33:44AM +0800, Rik van Riel wrote:
> On Sat, 2025-03-29 at 21:05 +0800, Hou Wenlong wrote:
> > In allocate_global_asid(), 'global_asid_available' cannot be zero,
> > because use_global_asid() has already checked it before calling. The
> > '!global_asid_available' test therefore guarantees the warning in
> > allocate_global_asid() can never trigger; invert the condition so the
> > warning fires when the bitmap search fails despite ASIDs being
> > reported as available.
> >
> > Fixes: d504d1247e36 ("x86/mm: Add global ASID allocation helper functions")
> > Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
>
> Good catch.
>
> Reviewed-by: Rik van Riel <riel@surriel.com>
>
>
> Looking at allocate_global_asid() again, I wonder if
> that needs to be turned back into a loop.
>
> What if we have no global asids available, and then
> an asid gets freed that is smaller than the value
> of last_global_asid?
Uh, I think this can be easily triggered in the current code. Moreover,
reset_global_asid_space() is only called when 'last_global_asid' is
'MAX_ASID_AVAILABLE-1', which means freed ASIDs are returned to the
available pool only in that case. So if the 'global_asid_used' bitmap is
full, but 'last_global_asid' is not 'MAX_ASID_AVAILABLE-1', the
allocation can fail as well. So I think the allocation could look like
this:
```
index 0f86c3140fdc..7f4bcb0e3d8c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -312,17 +312,27 @@ static u16 allocate_global_asid(void)
lockdep_assert_held(&global_asid_lock);
- /* The previous allocation hit the edge of available address space */
- if (last_global_asid >= MAX_ASID_AVAILABLE - 1)
- reset_global_asid_space();
-
+ /* Search the range [last_global_asid, MAX_ASID_AVAILABLE-1]. */
asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE, last_global_asid);
-
- if (asid >= MAX_ASID_AVAILABLE && global_asid_available) {
- /* This should never happen. */
- VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
- global_asid_available);
- return 0;
+ if (asid >= MAX_ASID_AVAILABLE) {
+ /* Search the range [TLB_NR_DYN_ASIDS, last_global_asid-1]. */
+ asid = find_next_zero_bit(global_asid_used, last_global_asid, TLB_NR_DYN_ASIDS);
+ if (asid >= last_global_asid) {
+ /*
+ * The 'global_asid_used' bitmap is full, so merge the
+ * 'global_asid_freed' bitmap and search from the
+ * beginning again.
+ */
+ reset_global_asid_space();
+ asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE,
+ last_global_asid);
+ if (asid >= MAX_ASID_AVAILABLE && global_asid_available) {
+ /* This should never happen. */
+ VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
+ global_asid_available);
+ return 0;
+ }
+ }
}
```
> --
> All Rights Reversed.
On Mon, Mar 31, 2025 at 01:12:34PM +0800, Hou Wenlong wrote:
> On Sun, Mar 30, 2025 at 01:33:44AM +0800, Rik van Riel wrote:
> > On Sat, 2025-03-29 at 21:05 +0800, Hou Wenlong wrote:
> > > In allocate_global_asid(), 'global_asid_available' cannot be zero,
> > > because use_global_asid() has already checked it before calling. The
> > > '!global_asid_available' test therefore guarantees the warning in
> > > allocate_global_asid() can never trigger; invert the condition so the
> > > warning fires when the bitmap search fails despite ASIDs being
> > > reported as available.
> > >
> > > Fixes: d504d1247e36 ("x86/mm: Add global ASID allocation helper functions")
> > > Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
> >
> > Good catch.
> >
> > Reviewed-by: Rik van Riel <riel@surriel.com>
> >
> >
> > Looking at allocate_global_asid() again, I wonder if
> > that needs to be turned back into a loop.
> >
> > What if we have no global asids available, and then
> > an asid gets freed that is smaller than the value
> > of last_global_asid?
>
> Uh, I think this can be easily triggered in the current code. Moreover,
> reset_global_asid_space() is only called when 'last_global_asid' is
> 'MAX_ASID_AVAILABLE-1', which means freed ASIDs are returned to the
> available pool only in that case. So if the 'global_asid_used' bitmap is
> full, but 'last_global_asid' is not 'MAX_ASID_AVAILABLE-1', the
> allocation can fail as well. So I think the allocation could look like
> this:
> ```
> index 0f86c3140fdc..7f4bcb0e3d8c 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -312,17 +312,27 @@ static u16 allocate_global_asid(void)
>
> lockdep_assert_held(&global_asid_lock);
>
> - /* The previous allocation hit the edge of available address space */
> - if (last_global_asid >= MAX_ASID_AVAILABLE - 1)
> - reset_global_asid_space();
> -
> + /* Search the range [last_global_asid, MAX_ASID_AVAILABLE-1]. */
> asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE, last_global_asid);
> -
> - if (asid >= MAX_ASID_AVAILABLE && global_asid_available) {
> - /* This should never happen. */
> - VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
> - global_asid_available);
> - return 0;
> + if (asid >= MAX_ASID_AVAILABLE) {
> + /* Search the range [TLB_NR_DYN_ASIDS, last_global_asid-1]. */
> + asid = find_next_zero_bit(global_asid_used, last_global_asid, TLB_NR_DYN_ASIDS);
Oh no, this case can't actually happen, since we always search starting
from 'last_global_asid'. Even if there are free ASIDs smaller than
'last_global_asid', they will still be sitting in the
'global_asid_freed' bitmap, not cleared in 'global_asid_used'.
Therefore, we need to call reset_global_asid_space() directly here. I've
looked back at your patchset, and it seems the original implementation
before v6 did exactly this; I'm not sure why that was changed.
Additionally, I am wondering whether it would be acceptable to remove
the 'global_asid_freed' bitmap entirely and clear the bit in the
'global_asid_used' bitmap directly when an ASID is freed. We could then
either search the whole range [TLB_NR_DYN_ASIDS, MAX_ASID_AVAILABLE-1],
or search [last_global_asid, MAX_ASID_AVAILABLE-1] first and then wrap
around to [TLB_NR_DYN_ASIDS, last_global_asid-1]. This might simplify
the code.
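A rough sketch of what that alternative allocator could look like. This
is hypothetical code, and it deliberately ignores the question of when
to flush the TLB before an ASID is reused, which is exactly what the
'global_asid_freed' bitmap plus reset_global_asid_space() currently
answer:

```
static u16 allocate_global_asid(void)
{
	u16 asid;

	lockdep_assert_held(&global_asid_lock);

	/* Search [last_global_asid, MAX_ASID_AVAILABLE - 1] first... */
	asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE,
				  last_global_asid);
	if (asid >= MAX_ASID_AVAILABLE) {
		/* ...then wrap to [TLB_NR_DYN_ASIDS, last_global_asid - 1]. */
		asid = find_next_zero_bit(global_asid_used, last_global_asid,
					  TLB_NR_DYN_ASIDS);
		if (asid >= last_global_asid) {
			/* Should never happen: the caller checked the counter. */
			VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
				     global_asid_available);
			return 0;
		}
	}

	__set_bit(asid, global_asid_used);
	last_global_asid = asid;
	global_asid_available--;

	return asid;
}
```

With frees clearing the bit in 'global_asid_used' directly, the used
bitmap is full exactly when 'global_asid_available' is zero, so the two
searches together cover every free ASID without a separate freed bitmap.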
--
Thanks!
> + if (asid >= last_global_asid) {
> + /*
> + * The 'global_asid_used' bitmap is full, so merge the
> + * 'global_asid_freed' bitmap and search from the
> + * beginning again.
> + */
> + reset_global_asid_space();
> + asid = find_next_zero_bit(global_asid_used, MAX_ASID_AVAILABLE,
> + last_global_asid);
> + if (asid >= MAX_ASID_AVAILABLE && global_asid_available) {
> + /* This should never happen. */
> + VM_WARN_ONCE(1, "Unable to allocate global ASID despite %d available\n",
> + global_asid_available);
> + return 0;
> + }
> + }
> }
> ```
>
> > --
> > All Rights Reversed.