[PATCH] xarray: rename xa_lock/xa_unlock to xa_enter/xa_leave
Posted by Alice Ryhl 2 months ago
Functions such as __xa_store() may temporarily unlock the internal
spinlock if allocation is necessary. This means that code such as

	xa_lock(xa);
	__xa_store(xa, idx1, ptr1, GFP_KERNEL);
	__xa_store(xa, idx2, ptr2, GFP_KERNEL);
	xa_unlock(xa);

does not prevent another thread from seeing the first store without
seeing the second store. Even if GFP_ATOMIC is used, this can still
happen if the reader uses xa_load() without taking the xa_lock. This
is not the behavior you would expect from a lock, and we should not
subvert the reader's expectations.
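
For example, a reader that does not take the lock can observe the
intermediate state (p1 and p2 being hypothetical local variables of
the reader):

	p1 = xa_load(xa, idx1);	/* may already return ptr1 ... */
	p2 = xa_load(xa, idx2);	/* ... while the second store is not yet visible */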

Thus, rename xa_lock/xa_unlock to xa_enter/xa_leave. Users of the XArray
will have fewer implicit expectations about how functions with these
names behave, which encourages them to check the documentation. The
documentation is amended with additional notes about these caveats.

The previous example becomes:

	xa_enter(xa);
	__xa_store(xa, idx1, ptr1, GFP_KERNEL);
	__xa_store(xa, idx2, ptr2, GFP_KERNEL);
	xa_leave(xa);
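
If the two stores must appear atomic to readers that also take the
lock, one possible pattern (a sketch; error unwinding is elided) is to
reserve both slots up front so that __xa_store() never needs to
allocate and therefore never drops the lock.  Lockless readers can
still observe the intermediate state:

	/* Reserving the slots means __xa_store() will not allocate. */
	if (xa_reserve(xa, idx1, GFP_KERNEL))
		return -ENOMEM;
	if (xa_reserve(xa, idx2, GFP_KERNEL))
		return -ENOMEM;
	xa_enter(xa);
	__xa_store(xa, idx1, ptr1, GFP_ATOMIC);	/* lock is never dropped */
	__xa_store(xa, idx2, ptr2, GFP_ATOMIC);
	xa_leave(xa);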

Existing users of these functions will be updated to the new names in
follow-up patches. The old names will be deleted later to avoid
conflicts with in-flight code that still uses xa_lock().

The idea to rename these functions came up during a discussion at the
Linux Plumbers Conference 2024. I was working on a Rust API for using
the XArray from Rust code, and was dissatisfied with how confusing that
API was, for the same reasons as outlined above.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 Documentation/core-api/xarray.rst |  75 ++++++++++++-------
 include/linux/xarray.h            | 148 +++++++++++++++++++++++---------------
 2 files changed, 141 insertions(+), 82 deletions(-)

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index 77e0ece2b1d6..2f3546cc2db2 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -200,7 +200,7 @@ Takes RCU read lock:
  * xa_extract()
  * xa_get_mark()
 
-Takes xa_lock internally:
+Internally calls xa_enter() to take the spinlock:
  * xa_store()
  * xa_store_bh()
  * xa_store_irq()
@@ -224,7 +224,7 @@ Takes xa_lock internally:
  * xa_set_mark()
  * xa_clear_mark()
 
-Assumes xa_lock held on entry:
+Caller must have called xa_enter():
  * __xa_store()
  * __xa_insert()
  * __xa_erase()
@@ -233,14 +233,41 @@ Assumes xa_lock held on entry:
  * __xa_set_mark()
  * __xa_clear_mark()
 
+Variants of xa_enter and xa_leave:
+ * xa_enter()
+ * xa_tryenter()
+ * xa_enter_bh()
+ * xa_enter_irq()
+ * xa_enter_irqsave()
+ * xa_enter_nested()
+ * xa_enter_bh_nested()
+ * xa_enter_irq_nested()
+ * xa_enter_irqsave_nested()
+ * xa_leave()
+ * xa_leave_bh()
+ * xa_leave_irq()
+ * xa_leave_irqrestore()
+
+The xa_enter() and xa_leave() functions correspond to spin_lock() and
+spin_unlock() on the internal spinlock.  Be aware that functions such as
+__xa_store() may temporarily unlock the internal spinlock to allocate memory.
+Because of that, if you make several calls to __xa_store() within a single
+xa_enter()/xa_leave() pair, other users of the XArray may see the first
+store without seeing the second store.  The xa_enter() function is
+deliberately not called xa_lock() to emphasize this distinction.
+
-If you want to take advantage of the lock to protect the data structures
-that you are storing in the XArray, you can call xa_lock()
-before calling xa_load(), then take a reference count on the
-object you have found before calling xa_unlock().  This will
-prevent stores from removing the object from the array between looking
-up the object and incrementing the refcount.  You can also use RCU to
-avoid dereferencing freed memory, but an explanation of that is beyond
-the scope of this document.
+If you want to take advantage of the lock to protect the data stored in the
+XArray, you can use xa_enter() and xa_leave() to enter and leave the critical
+region of the internal spinlock.  For example, enter the critical region with
+xa_enter(), look up a value with xa_load(), increment the refcount, and then
+call xa_leave().  This will prevent stores from removing the object from the
+array between looking up the object and incrementing the refcount.
+
+Instead of xa_enter(), you can also use RCU to avoid dereferencing freed
+memory, but an explanation of that is beyond the scope of this document.
+
+Interrupts
+----------
 
 The XArray does not disable interrupts or softirqs while modifying
 the array.  It is safe to read the XArray from interrupt or softirq
@@ -258,21 +285,21 @@ context and then erase them in softirq context, you can do that this way::
     {
         int err;
 
-        xa_lock_bh(&foo->array);
+        xa_enter_bh(&foo->array);
         err = xa_err(__xa_store(&foo->array, index, entry, GFP_KERNEL));
         if (!err)
             foo->count++;
-        xa_unlock_bh(&foo->array);
+        xa_leave_bh(&foo->array);
         return err;
     }
 
     /* foo_erase() is only called from softirq context */
     void foo_erase(struct foo *foo, unsigned long index)
     {
-        xa_lock(&foo->array);
+        xa_enter(&foo->array);
         __xa_erase(&foo->array, index);
         foo->count--;
-        xa_unlock(&foo->array);
+        xa_leave(&foo->array);
     }
 
 If you are going to modify the XArray from interrupt or softirq context,
@@ -280,12 +307,12 @@ you need to initialise the array using xa_init_flags(), passing
 ``XA_FLAGS_LOCK_IRQ`` or ``XA_FLAGS_LOCK_BH``.
 
 The above example also shows a common pattern of wanting to extend the
-coverage of the xa_lock on the store side to protect some statistics
-associated with the array.
+coverage of the internal spinlock on the store side to protect some
+statistics associated with the array.
 
 Sharing the XArray with interrupt context is also possible, either
-using xa_lock_irqsave() in both the interrupt handler and process
-context, or xa_lock_irq() in process context and xa_lock()
+using xa_enter_irqsave() in both the interrupt handler and process
+context, or xa_enter_irq() in process context and xa_enter()
 in the interrupt handler.  Some of the more common patterns have helper
 functions such as xa_store_bh(), xa_store_irq(),
 xa_erase_bh(), xa_erase_irq(), xa_cmpxchg_bh()
@@ -293,8 +320,8 @@ and xa_cmpxchg_irq().
 
 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy.  That does
-not entitle you to use functions like __xa_erase() without taking
-the xa_lock; the xa_lock is used for lockdep validation and will be used
+not entitle you to use functions like __xa_erase() without calling
+xa_enter(); the XArray lock is used for lockdep validation and will be used
 for other purposes in the future.
 
 The __xa_set_mark() and __xa_clear_mark() functions are also
@@ -308,8 +335,8 @@ Advanced API
 The advanced API offers more flexibility and better performance at the
 cost of an interface which can be harder to use and has fewer safeguards.
 No locking is done for you by the advanced API, and you are required
-to use the xa_lock while modifying the array.  You can choose whether
-to use the xa_lock or the RCU lock while doing read-only operations on
+to use xa_enter() while modifying the array.  You can choose whether
+to use xa_enter() or the RCU lock while doing read-only operations on
 the array.  You can mix advanced and normal operations on the same array;
 indeed the normal API is implemented in terms of the advanced API.  The
 advanced API is only available to modules with a GPL-compatible license.
@@ -320,8 +347,8 @@ This macro initialises the xa_state ready to start walking around the
 XArray.  It is used as a cursor to maintain the position in the XArray
 and let you compose various operations together without having to restart
 from the top every time.  The contents of the xa_state are protected by
-the rcu_read_lock() or the xas_lock().  If you need to drop whichever of
-those locks is protecting your state and tree, you must call xas_pause()
+the rcu_read_lock() or xas_enter().  If you need to drop whichever
+of those locks is protecting your state and tree, you must call xas_pause()
 so that future calls do not rely on the parts of the state which were
 left unprotected.
 
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 0b618ec04115..dde10de4e6bf 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -532,29 +532,48 @@ static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
 	for (index = 0, entry = xa_find(xa, &index, ULONG_MAX, filter); \
 	     entry; entry = xa_find_after(xa, &index, ULONG_MAX, filter))
 
-#define xa_trylock(xa)		spin_trylock(&(xa)->xa_lock)
-#define xa_lock(xa)		spin_lock(&(xa)->xa_lock)
-#define xa_unlock(xa)		spin_unlock(&(xa)->xa_lock)
-#define xa_lock_bh(xa)		spin_lock_bh(&(xa)->xa_lock)
-#define xa_unlock_bh(xa)	spin_unlock_bh(&(xa)->xa_lock)
-#define xa_lock_irq(xa)		spin_lock_irq(&(xa)->xa_lock)
-#define xa_unlock_irq(xa)	spin_unlock_irq(&(xa)->xa_lock)
-#define xa_lock_irqsave(xa, flags) \
+#define xa_tryenter(xa)		spin_trylock(&(xa)->xa_lock)
+#define xa_enter(xa)		spin_lock(&(xa)->xa_lock)
+#define xa_leave(xa)		spin_unlock(&(xa)->xa_lock)
+#define xa_enter_bh(xa)		spin_lock_bh(&(xa)->xa_lock)
+#define xa_leave_bh(xa)		spin_unlock_bh(&(xa)->xa_lock)
+#define xa_enter_irq(xa)	spin_lock_irq(&(xa)->xa_lock)
+#define xa_leave_irq(xa)	spin_unlock_irq(&(xa)->xa_lock)
+#define xa_enter_irqsave(xa, flags) \
 				spin_lock_irqsave(&(xa)->xa_lock, flags)
-#define xa_unlock_irqrestore(xa, flags) \
+#define xa_leave_irqrestore(xa, flags) \
 				spin_unlock_irqrestore(&(xa)->xa_lock, flags)
-#define xa_lock_nested(xa, subclass) \
+#define xa_enter_nested(xa, subclass) \
 				spin_lock_nested(&(xa)->xa_lock, subclass)
-#define xa_lock_bh_nested(xa, subclass) \
+#define xa_enter_bh_nested(xa, subclass) \
 				spin_lock_bh_nested(&(xa)->xa_lock, subclass)
-#define xa_lock_irq_nested(xa, subclass) \
+#define xa_enter_irq_nested(xa, subclass) \
 				spin_lock_irq_nested(&(xa)->xa_lock, subclass)
-#define xa_lock_irqsave_nested(xa, flags, subclass) \
+#define xa_enter_irqsave_nested(xa, flags, subclass) \
 		spin_lock_irqsave_nested(&(xa)->xa_lock, flags, subclass)
 
+/*
+ * These names are deprecated. Please use xa_enter instead of xa_lock, and
+ * xa_leave instead of xa_unlock.
+ */
+#define xa_trylock(xa)			xa_tryenter(xa)
+#define xa_lock(xa)			xa_enter(xa)
+#define xa_unlock(xa)			xa_leave(xa)
+#define xa_lock_bh(xa)			xa_enter_bh(xa)
+#define xa_unlock_bh(xa)		xa_leave_bh(xa)
+#define xa_lock_irq(xa)			xa_enter_irq(xa)
+#define xa_unlock_irq(xa)		xa_leave_irq(xa)
+#define xa_lock_irqsave(xa, flags)	xa_enter_irqsave(xa, flags)
+#define xa_unlock_irqrestore(xa, flags) xa_leave_irqsave(xa, flags)
+#define xa_lock_nested(xa, subclass)	xa_enter_nested(xa, subclass)
+#define xa_lock_bh_nested(xa, subclass) xa_enter_bh_nested(xa, subclass)
+#define xa_lock_irq_nested(xa, subclass) xa_enter_irq_nested(xa, subclass)
+#define xa_lock_irqsave_nested(xa, flags, subclass) \
+		xa_enter_irqsave_nested(xa, flags, subclass)
+
 /*
- * Versions of the normal API which require the caller to hold the
- * xa_lock.  If the GFP flags allow it, they will drop the lock to
+ * Versions of the normal API which require the caller to have called
+ * xa_enter().  If the GFP flags allow it, they will drop the lock to
  * allocate memory, then reacquire it afterwards.  These functions
  * may also re-enable interrupts if the XArray flags indicate the
  * locking should be interrupt safe.
@@ -592,9 +611,9 @@ static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	curr = __xa_store(xa, index, entry, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return curr;
 }
@@ -619,9 +638,9 @@ static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	curr = __xa_store(xa, index, entry, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return curr;
 }
@@ -643,9 +662,9 @@ static inline void *xa_erase_bh(struct xarray *xa, unsigned long index)
 {
 	void *entry;
 
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	entry = __xa_erase(xa, index);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return entry;
 }
@@ -667,9 +686,9 @@ static inline void *xa_erase_irq(struct xarray *xa, unsigned long index)
 {
 	void *entry;
 
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	entry = __xa_erase(xa, index);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return entry;
 }
@@ -695,9 +714,9 @@ static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return curr;
 }
@@ -723,9 +742,9 @@ static inline void *xa_cmpxchg_bh(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return curr;
 }
@@ -751,9 +770,9 @@ static inline void *xa_cmpxchg_irq(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return curr;
 }
@@ -781,9 +800,9 @@ static inline int __must_check xa_insert(struct xarray *xa,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	err = __xa_insert(xa, index, entry, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return err;
 }
@@ -811,9 +830,9 @@ static inline int __must_check xa_insert_bh(struct xarray *xa,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	err = __xa_insert(xa, index, entry, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return err;
 }
@@ -841,9 +860,9 @@ static inline int __must_check xa_insert_irq(struct xarray *xa,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	err = __xa_insert(xa, index, entry, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return err;
 }
@@ -874,9 +893,9 @@ static inline __must_check int xa_alloc(struct xarray *xa, u32 *id,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	err = __xa_alloc(xa, id, entry, limit, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return err;
 }
@@ -907,9 +926,9 @@ static inline int __must_check xa_alloc_bh(struct xarray *xa, u32 *id,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	err = __xa_alloc(xa, id, entry, limit, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return err;
 }
@@ -940,9 +959,9 @@ static inline int __must_check xa_alloc_irq(struct xarray *xa, u32 *id,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	err = __xa_alloc(xa, id, entry, limit, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return err;
 }
@@ -977,9 +996,9 @@ static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return err;
 }
@@ -1014,9 +1033,9 @@ static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return err;
 }
@@ -1051,9 +1070,9 @@ static inline int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return err;
 }
@@ -1408,17 +1427,30 @@ struct xa_state {
 			(1U << (order % XA_CHUNK_SHIFT)) - 1)
 
 #define xas_marked(xas, mark)	xa_marked((xas)->xa, (mark))
-#define xas_trylock(xas)	xa_trylock((xas)->xa)
-#define xas_lock(xas)		xa_lock((xas)->xa)
-#define xas_unlock(xas)		xa_unlock((xas)->xa)
-#define xas_lock_bh(xas)	xa_lock_bh((xas)->xa)
-#define xas_unlock_bh(xas)	xa_unlock_bh((xas)->xa)
-#define xas_lock_irq(xas)	xa_lock_irq((xas)->xa)
-#define xas_unlock_irq(xas)	xa_unlock_irq((xas)->xa)
-#define xas_lock_irqsave(xas, flags) \
-				xa_lock_irqsave((xas)->xa, flags)
-#define xas_unlock_irqrestore(xas, flags) \
-				xa_unlock_irqrestore((xas)->xa, flags)
+#define xas_tryenter(xas)	xa_tryenter((xas)->xa)
+#define xas_enter(xas)		xa_enter((xas)->xa)
+#define xas_leave(xas)		xa_leave((xas)->xa)
+#define xas_enter_bh(xas)	xa_enter_bh((xas)->xa)
+#define xas_leave_bh(xas)	xa_leave_bh((xas)->xa)
+#define xas_enter_irq(xas)	xa_enter_irq((xas)->xa)
+#define xas_leave_irq(xas)	xa_leave_irq((xas)->xa)
+#define xas_enter_irqsave(xas, flags) xa_enter_irqsave((xas)->xa, flags)
+#define xas_leave_irqrestore(xas, flags) xa_leave_irqrestore((xas)->xa, flags)
+
+
+/*
+ * These names are deprecated. Please use xas_enter instead of xas_lock, and
+ * xas_leave instead of xas_unlock.
+ */
+#define xas_trylock(xas)			xas_tryenter(xas)
+#define xas_lock(xas)				xas_enter(xas)
+#define xas_unlock(xas)				xas_leave(xas)
+#define xas_lock_bh(xas)			xas_enter_bh(xas)
+#define xas_unlock_bh(xas)			xas_leave_bh(xas)
+#define xas_lock_irq(xas)			xas_enter_irq(xas)
+#define xas_unlock_irq(xas)			xas_leave_irq(xas)
+#define xas_lock_irqsave(xas, flags)		xas_enter_irqsave(xas, flags)
+#define xas_unlock_irqrestore(xas, flags)	xas_leave_irqsave(xas, flags)
 
 /**
  * xas_error() - Return an errno stored in the xa_state.

---
base-commit: 98f7e32f20d28ec452afb208f9cffc08448a2652
change-id: 20240921-xa_enter_leave-b11552c3caa2

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>
Re: [PATCH] xarray: rename xa_lock/xa_unlock to xa_enter/xa_leave
Posted by kernel test robot 2 months ago
Hi Alice,

kernel test robot noticed the following build errors:

[auto build test ERROR on 98f7e32f20d28ec452afb208f9cffc08448a2652]

url:    https://github.com/intel-lab-lkp/linux/commits/Alice-Ryhl/xarray-rename-xa_lock-xa_unlock-to-xa_enter-xa_leave/20240923-184045
base:   98f7e32f20d28ec452afb208f9cffc08448a2652
patch link:    https://lore.kernel.org/r/20240923-xa_enter_leave-v1-1-6ff365e8520a%40google.com
patch subject: [PATCH] xarray: rename xa_lock/xa_unlock to xa_enter/xa_leave
config: x86_64-defconfig (https://download.01.org/0day-ci/archive/20240924/202409240026.7kkshSxM-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240924/202409240026.7kkshSxM-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409240026.7kkshSxM-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/list_lru.h:14,
                    from include/linux/fs.h:13,
                    from mm/page-writeback.c:19:
   mm/page-writeback.c: In function '__folio_mark_dirty':
>> include/linux/xarray.h:567:41: error: implicit declaration of function 'xa_leave_irqsave'; did you mean 'xa_lock_irqsave'? [-Werror=implicit-function-declaration]
     567 | #define xa_unlock_irqrestore(xa, flags) xa_leave_irqsave(xa, flags)
         |                                         ^~~~~~~~~~~~~~~~
   mm/page-writeback.c:2801:9: note: in expansion of macro 'xa_unlock_irqrestore'
    2801 |         xa_unlock_irqrestore(&mapping->i_pages, flags);
         |         ^~~~~~~~~~~~~~~~~~~~
   mm/page-writeback.c: In function '__folio_start_writeback':
>> include/linux/xarray.h:1453:49: error: implicit declaration of function 'xas_leave_irqsave'; did you mean 'xas_lock_irqsave'? [-Werror=implicit-function-declaration]
    1453 | #define xas_unlock_irqrestore(xas, flags)       xas_leave_irqsave(xas, flags)
         |                                                 ^~~~~~~~~~~~~~~~~
   mm/page-writeback.c:3155:17: note: in expansion of macro 'xas_unlock_irqrestore'
    3155 |                 xas_unlock_irqrestore(&xas, flags);
         |                 ^~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +567 include/linux/xarray.h

   426	
   427	/**
   428	 * xa_for_each_range() - Iterate over a portion of an XArray.
   429	 * @xa: XArray.
   430	 * @index: Index of @entry.
   431	 * @entry: Entry retrieved from array.
   432	 * @start: First index to retrieve from array.
   433	 * @last: Last index to retrieve from array.
   434	 *
   435	 * During the iteration, @entry will have the value of the entry stored
   436	 * in @xa at @index.  You may modify @index during the iteration if you
   437	 * want to skip or reprocess indices.  It is safe to modify the array
   438	 * during the iteration.  At the end of the iteration, @entry will be set
   439	 * to NULL and @index will have a value less than or equal to max.
   440	 *
   441	 * xa_for_each_range() is O(n.log(n)) while xas_for_each() is O(n).  You have
   442	 * to handle your own locking with xas_for_each(), and if you have to unlock
   443	 * after each iteration, it will also end up being O(n.log(n)).
   444	 * xa_for_each_range() will spin if it hits a retry entry; if you intend to
   445	 * see retry entries, you should use the xas_for_each() iterator instead.
   446	 * The xas_for_each() iterator will expand into more inline code than
   447	 * xa_for_each_range().
   448	 *
   449	 * Context: Any context.  Takes and releases the RCU lock.
   450	 */
   451	#define xa_for_each_range(xa, index, entry, start, last)		\
   452		for (index = start,						\
   453		     entry = xa_find(xa, &index, last, XA_PRESENT);		\
   454		     entry;							\
   455		     entry = xa_find_after(xa, &index, last, XA_PRESENT))
   456	
   457	/**
   458	 * xa_for_each_start() - Iterate over a portion of an XArray.
   459	 * @xa: XArray.
   460	 * @index: Index of @entry.
   461	 * @entry: Entry retrieved from array.
   462	 * @start: First index to retrieve from array.
   463	 *
   464	 * During the iteration, @entry will have the value of the entry stored
   465	 * in @xa at @index.  You may modify @index during the iteration if you
   466	 * want to skip or reprocess indices.  It is safe to modify the array
   467	 * during the iteration.  At the end of the iteration, @entry will be set
   468	 * to NULL and @index will have a value less than or equal to max.
   469	 *
   470	 * xa_for_each_start() is O(n.log(n)) while xas_for_each() is O(n).  You have
   471	 * to handle your own locking with xas_for_each(), and if you have to unlock
   472	 * after each iteration, it will also end up being O(n.log(n)).
   473	 * xa_for_each_start() will spin if it hits a retry entry; if you intend to
   474	 * see retry entries, you should use the xas_for_each() iterator instead.
   475	 * The xas_for_each() iterator will expand into more inline code than
   476	 * xa_for_each_start().
   477	 *
   478	 * Context: Any context.  Takes and releases the RCU lock.
   479	 */
   480	#define xa_for_each_start(xa, index, entry, start) \
   481		xa_for_each_range(xa, index, entry, start, ULONG_MAX)
   482	
   483	/**
   484	 * xa_for_each() - Iterate over present entries in an XArray.
   485	 * @xa: XArray.
   486	 * @index: Index of @entry.
   487	 * @entry: Entry retrieved from array.
   488	 *
   489	 * During the iteration, @entry will have the value of the entry stored
   490	 * in @xa at @index.  You may modify @index during the iteration if you want
   491	 * to skip or reprocess indices.  It is safe to modify the array during the
   492	 * iteration.  At the end of the iteration, @entry will be set to NULL and
   493	 * @index will have a value less than or equal to max.
   494	 *
   495	 * xa_for_each() is O(n.log(n)) while xas_for_each() is O(n).  You have
   496	 * to handle your own locking with xas_for_each(), and if you have to unlock
   497	 * after each iteration, it will also end up being O(n.log(n)).  xa_for_each()
   498	 * will spin if it hits a retry entry; if you intend to see retry entries,
   499	 * you should use the xas_for_each() iterator instead.  The xas_for_each()
   500	 * iterator will expand into more inline code than xa_for_each().
   501	 *
   502	 * Context: Any context.  Takes and releases the RCU lock.
   503	 */
   504	#define xa_for_each(xa, index, entry) \
   505		xa_for_each_start(xa, index, entry, 0)
   506	
   507	/**
   508	 * xa_for_each_marked() - Iterate over marked entries in an XArray.
   509	 * @xa: XArray.
   510	 * @index: Index of @entry.
   511	 * @entry: Entry retrieved from array.
   512	 * @filter: Selection criterion.
   513	 *
   514	 * During the iteration, @entry will have the value of the entry stored
   515	 * in @xa at @index.  The iteration will skip all entries in the array
   516	 * which do not match @filter.  You may modify @index during the iteration
   517	 * if you want to skip or reprocess indices.  It is safe to modify the array
   518	 * during the iteration.  At the end of the iteration, @entry will be set to
   519	 * NULL and @index will have a value less than or equal to max.
   520	 *
   521	 * xa_for_each_marked() is O(n.log(n)) while xas_for_each_marked() is O(n).
   522	 * You have to handle your own locking with xas_for_each(), and if you have
   523	 * to unlock after each iteration, it will also end up being O(n.log(n)).
   524	 * xa_for_each_marked() will spin if it hits a retry entry; if you intend to
   525	 * see retry entries, you should use the xas_for_each_marked() iterator
   526	 * instead.  The xas_for_each_marked() iterator will expand into more inline
   527	 * code than xa_for_each_marked().
   528	 *
   529	 * Context: Any context.  Takes and releases the RCU lock.
   530	 */
   531	#define xa_for_each_marked(xa, index, entry, filter) \
   532		for (index = 0, entry = xa_find(xa, &index, ULONG_MAX, filter); \
   533		     entry; entry = xa_find_after(xa, &index, ULONG_MAX, filter))
   534	
   535	#define xa_tryenter(xa)		spin_trylock(&(xa)->xa_lock)
   536	#define xa_enter(xa)		spin_lock(&(xa)->xa_lock)
   537	#define xa_leave(xa)		spin_unlock(&(xa)->xa_lock)
   538	#define xa_enter_bh(xa)		spin_lock_bh(&(xa)->xa_lock)
   539	#define xa_leave_bh(xa)		spin_unlock_bh(&(xa)->xa_lock)
   540	#define xa_enter_irq(xa)	spin_lock_irq(&(xa)->xa_lock)
   541	#define xa_leave_irq(xa)	spin_unlock_irq(&(xa)->xa_lock)
   542	#define xa_enter_irqsave(xa, flags) \
   543					spin_lock_irqsave(&(xa)->xa_lock, flags)
   544	#define xa_leave_irqrestore(xa, flags) \
   545					spin_unlock_irqrestore(&(xa)->xa_lock, flags)
   546	#define xa_enter_nested(xa, subclass) \
   547					spin_lock_nested(&(xa)->xa_lock, subclass)
   548	#define xa_enter_bh_nested(xa, subclass) \
   549					spin_lock_bh_nested(&(xa)->xa_lock, subclass)
   550	#define xa_enter_irq_nested(xa, subclass) \
   551					spin_lock_irq_nested(&(xa)->xa_lock, subclass)
   552	#define xa_enter_irqsave_nested(xa, flags, subclass) \
   553			spin_lock_irqsave_nested(&(xa)->xa_lock, flags, subclass)
   554	
   555	/*
   556	 * These names are deprecated. Please use xa_enter instead of xa_lock, and
   557	 * xa_leave instead of xa_unlock.
   558	 */
   559	#define xa_trylock(xa)			xa_tryenter(xa)
   560	#define xa_lock(xa)			xa_enter(xa)
   561	#define xa_unlock(xa)			xa_leave(xa)
   562	#define xa_lock_bh(xa)			xa_enter_bh(xa)
   563	#define xa_unlock_bh(xa)		xa_leave_bh(xa)
   564	#define xa_lock_irq(xa)			xa_enter_irq(xa)
   565	#define xa_unlock_irq(xa)		xa_leave_irq(xa)
   566	#define xa_lock_irqsave(xa, flags)	xa_enter_irqsave(xa, flags)
 > 567	#define xa_unlock_irqrestore(xa, flags) xa_leave_irqsave(xa, flags)
   568	#define xa_lock_nested(xa, subclass)	xa_enter_nested(xa, subclass)
   569	#define xa_lock_bh_nested(xa, subclass) xa_enter_bh_nested(xa, subclass)
   570	#define xa_lock_irq_nested(xa, subclass) xa_enter_irq_nested(xa, subclass)
   571	#define xa_lock_irqsave_nested(xa, flags, subclass) \
   572			xa_enter_irqsave_nested(xa, flags, subclass)
   573	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Re: [PATCH] xarray: rename xa_lock/xa_unlock to xa_enter/xa_leave
Posted by kernel test robot 2 months ago
Hi Alice,

kernel test robot noticed the following build errors:

[auto build test ERROR on 98f7e32f20d28ec452afb208f9cffc08448a2652]

url:    https://github.com/intel-lab-lkp/linux/commits/Alice-Ryhl/xarray-rename-xa_lock-xa_unlock-to-xa_enter-xa_leave/20240923-184045
base:   98f7e32f20d28ec452afb208f9cffc08448a2652
patch link:    https://lore.kernel.org/r/20240923-xa_enter_leave-v1-1-6ff365e8520a%40google.com
patch subject: [PATCH] xarray: rename xa_lock/xa_unlock to xa_enter/xa_leave
config: x86_64-allnoconfig (https://download.01.org/0day-ci/archive/20240923/202409232343.7o1tQrIx-lkp@intel.com/config)
compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240923/202409232343.7o1tQrIx-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409232343.7o1tQrIx-lkp@intel.com/

All errors (new ones prefixed by >>):

>> lib/idr.c:453:2: error: call to undeclared function 'xas_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     453 |         xas_unlock_irqrestore(&xas, flags);
         |         ^
   include/linux/xarray.h:1453:43: note: expanded from macro 'xas_unlock_irqrestore'
    1453 | #define xas_unlock_irqrestore(xas, flags)       xas_leave_irqsave(xas, flags)
         |                                                 ^
   lib/idr.c:521:2: error: call to undeclared function 'xas_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     521 |         xas_unlock_irqrestore(&xas, flags);
         |         ^
   include/linux/xarray.h:1453:43: note: expanded from macro 'xas_unlock_irqrestore'
    1453 | #define xas_unlock_irqrestore(xas, flags)       xas_leave_irqsave(xas, flags)
         |                                                 ^
   lib/idr.c:553:2: error: call to undeclared function 'xas_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     553 |         xas_unlock_irqrestore(&xas, flags);
         |         ^
   include/linux/xarray.h:1453:43: note: expanded from macro 'xas_unlock_irqrestore'
    1453 | #define xas_unlock_irqrestore(xas, flags)       xas_leave_irqsave(xas, flags)
         |                                                 ^
   3 errors generated.
--
>> lib/xarray.c:2256:2: error: call to undeclared function 'xas_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    2256 |         xas_unlock_irqrestore(&xas, flags);
         |         ^
   include/linux/xarray.h:1453:43: note: expanded from macro 'xas_unlock_irqrestore'
    1453 | #define xas_unlock_irqrestore(xas, flags)       xas_leave_irqsave(xas, flags)
         |                                                 ^
   1 error generated.
--
>> mm/page-writeback.c:2801:2: error: call to undeclared function 'xa_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    2801 |         xa_unlock_irqrestore(&mapping->i_pages, flags);
         |         ^
   include/linux/xarray.h:567:41: note: expanded from macro 'xa_unlock_irqrestore'
     567 | #define xa_unlock_irqrestore(xa, flags) xa_leave_irqsave(xa, flags)
         |                                         ^
   mm/page-writeback.c:3100:3: error: call to undeclared function 'xa_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    3100 |                 xa_unlock_irqrestore(&mapping->i_pages, flags);
         |                 ^
   include/linux/xarray.h:567:41: note: expanded from macro 'xa_unlock_irqrestore'
     567 | #define xa_unlock_irqrestore(xa, flags) xa_leave_irqsave(xa, flags)
         |                                         ^
>> mm/page-writeback.c:3155:3: error: call to undeclared function 'xas_leave_irqsave'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    3155 |                 xas_unlock_irqrestore(&xas, flags);
         |                 ^
   include/linux/xarray.h:1453:43: note: expanded from macro 'xas_unlock_irqrestore'
    1453 | #define xas_unlock_irqrestore(xas, flags)       xas_leave_irqsave(xas, flags)
         |                                                 ^
   3 errors generated.


vim +/xas_leave_irqsave +453 lib/idr.c

5806f07cd2c329 Jeff Mahoney            2006-06-26  307  
56083ab17e0075 Randy Dunlap            2010-10-26  308  /**
56083ab17e0075 Randy Dunlap            2010-10-26  309   * DOC: IDA description
72dba584b695d8 Tejun Heo               2007-06-14  310   *
0a835c4f090af2 Matthew Wilcox          2016-12-20  311   * The IDA is an ID allocator which does not provide the ability to
0a835c4f090af2 Matthew Wilcox          2016-12-20  312   * associate an ID with a pointer.  As such, it only needs to store one
0a835c4f090af2 Matthew Wilcox          2016-12-20  313   * bit per ID, and so is more space efficient than an IDR.  To use an IDA,
0a835c4f090af2 Matthew Wilcox          2016-12-20  314   * define it using DEFINE_IDA() (or embed a &struct ida in a data structure,
0a835c4f090af2 Matthew Wilcox          2016-12-20  315   * then initialise it using ida_init()).  To allocate a new ID, call
5ade60dda43c89 Matthew Wilcox          2018-03-20  316   * ida_alloc(), ida_alloc_min(), ida_alloc_max() or ida_alloc_range().
5ade60dda43c89 Matthew Wilcox          2018-03-20  317   * To free an ID, call ida_free().
72dba584b695d8 Tejun Heo               2007-06-14  318   *
b03f8e43c92618 Matthew Wilcox          2018-06-18  319   * ida_destroy() can be used to dispose of an IDA without needing to
b03f8e43c92618 Matthew Wilcox          2018-06-18  320   * free the individual IDs in it.  You can use ida_is_empty() to find
b03f8e43c92618 Matthew Wilcox          2018-06-18  321   * out whether the IDA has any IDs currently allocated.
0a835c4f090af2 Matthew Wilcox          2016-12-20  322   *
f32f004cddf86d Matthew Wilcox          2018-07-04  323   * The IDA handles its own locking.  It is safe to call any of the IDA
f32f004cddf86d Matthew Wilcox          2018-07-04  324   * functions without synchronisation in your code.
f32f004cddf86d Matthew Wilcox          2018-07-04  325   *
0a835c4f090af2 Matthew Wilcox          2016-12-20  326   * IDs are currently limited to the range [0-INT_MAX].  If this is an awkward
0a835c4f090af2 Matthew Wilcox          2016-12-20  327   * limitation, it should be quite straightforward to raise the maximum.
72dba584b695d8 Tejun Heo               2007-06-14  328   */
72dba584b695d8 Tejun Heo               2007-06-14  329  
d37cacc5adace7 Matthew Wilcox          2016-12-17  330  /*
d37cacc5adace7 Matthew Wilcox          2016-12-17  331   * Developer's notes:
d37cacc5adace7 Matthew Wilcox          2016-12-17  332   *
f32f004cddf86d Matthew Wilcox          2018-07-04  333   * The IDA uses the functionality provided by the XArray to store bitmaps in
f32f004cddf86d Matthew Wilcox          2018-07-04  334   * each entry.  The XA_FREE_MARK is only cleared when all bits in the bitmap
f32f004cddf86d Matthew Wilcox          2018-07-04  335   * have been set.
d37cacc5adace7 Matthew Wilcox          2016-12-17  336   *
f32f004cddf86d Matthew Wilcox          2018-07-04  337   * I considered telling the XArray that each slot is an order-10 node
f32f004cddf86d Matthew Wilcox          2018-07-04  338   * and indexing by bit number, but the XArray can't allow a single multi-index
f32f004cddf86d Matthew Wilcox          2018-07-04  339   * entry in the head, which would significantly increase memory consumption
f32f004cddf86d Matthew Wilcox          2018-07-04  340   * for the IDA.  So instead we divide the index by the number of bits in the
f32f004cddf86d Matthew Wilcox          2018-07-04  341   * leaf bitmap before doing a radix tree lookup.
d37cacc5adace7 Matthew Wilcox          2016-12-17  342   *
d37cacc5adace7 Matthew Wilcox          2016-12-17  343   * As an optimisation, if there are only a few low bits set in any given
3159f943aafdba Matthew Wilcox          2017-11-03  344   * leaf, instead of allocating a 128-byte bitmap, we store the bits
f32f004cddf86d Matthew Wilcox          2018-07-04  345   * as a value entry.  Value entries never have the XA_FREE_MARK cleared
f32f004cddf86d Matthew Wilcox          2018-07-04  346   * because we can always convert them into a bitmap entry.
f32f004cddf86d Matthew Wilcox          2018-07-04  347   *
f32f004cddf86d Matthew Wilcox          2018-07-04  348   * It would be possible to optimise further; once we've run out of a
f32f004cddf86d Matthew Wilcox          2018-07-04  349   * single 128-byte bitmap, we currently switch to a 576-byte node, put
f32f004cddf86d Matthew Wilcox          2018-07-04  350   * the 128-byte bitmap in the first entry and then start allocating extra
f32f004cddf86d Matthew Wilcox          2018-07-04  351   * 128-byte entries.  We could instead use the 512 bytes of the node's
f32f004cddf86d Matthew Wilcox          2018-07-04  352   * data as a bitmap before moving to that scheme.  I do not believe this
f32f004cddf86d Matthew Wilcox          2018-07-04  353   * is a worthwhile optimisation; Rasmus Villemoes surveyed the current
f32f004cddf86d Matthew Wilcox          2018-07-04  354   * users of the IDA and almost none of them use more than 1024 entries.
f32f004cddf86d Matthew Wilcox          2018-07-04  355   * Those that do use more than the 8192 IDs that the 512 bytes would
f32f004cddf86d Matthew Wilcox          2018-07-04  356   * provide.
f32f004cddf86d Matthew Wilcox          2018-07-04  357   *
f32f004cddf86d Matthew Wilcox          2018-07-04  358   * The IDA always uses a lock to alloc/free.  If we add a 'test_bit'
d37cacc5adace7 Matthew Wilcox          2016-12-17  359   * equivalent, it will still need locking.  Going to RCU lookup would require
d37cacc5adace7 Matthew Wilcox          2016-12-17  360   * using RCU to free bitmaps, and that's not trivial without embedding an
d37cacc5adace7 Matthew Wilcox          2016-12-17  361   * RCU head in the bitmap, which adds a 2-pointer overhead to each 128-byte
d37cacc5adace7 Matthew Wilcox          2016-12-17  362   * bitmap, which is excessive.
d37cacc5adace7 Matthew Wilcox          2016-12-17  363   */
d37cacc5adace7 Matthew Wilcox          2016-12-17  364  
f32f004cddf86d Matthew Wilcox          2018-07-04  365  /**
f32f004cddf86d Matthew Wilcox          2018-07-04  366   * ida_alloc_range() - Allocate an unused ID.
f32f004cddf86d Matthew Wilcox          2018-07-04  367   * @ida: IDA handle.
f32f004cddf86d Matthew Wilcox          2018-07-04  368   * @min: Lowest ID to allocate.
f32f004cddf86d Matthew Wilcox          2018-07-04  369   * @max: Highest ID to allocate.
f32f004cddf86d Matthew Wilcox          2018-07-04  370   * @gfp: Memory allocation flags.
f32f004cddf86d Matthew Wilcox          2018-07-04  371   *
f32f004cddf86d Matthew Wilcox          2018-07-04  372   * Allocate an ID between @min and @max, inclusive.  The allocated ID will
f32f004cddf86d Matthew Wilcox          2018-07-04  373   * not exceed %INT_MAX, even if @max is larger.
f32f004cddf86d Matthew Wilcox          2018-07-04  374   *
3b6742618ed921 Stephen Boyd            2020-10-15  375   * Context: Any context. It is safe to call this function without
3b6742618ed921 Stephen Boyd            2020-10-15  376   * locking in your code.
f32f004cddf86d Matthew Wilcox          2018-07-04  377   * Return: The allocated ID, or %-ENOMEM if memory could not be allocated,
f32f004cddf86d Matthew Wilcox          2018-07-04  378   * or %-ENOSPC if there are no free IDs.
f32f004cddf86d Matthew Wilcox          2018-07-04  379   */
f32f004cddf86d Matthew Wilcox          2018-07-04  380  int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
f32f004cddf86d Matthew Wilcox          2018-07-04  381  			gfp_t gfp)
72dba584b695d8 Tejun Heo               2007-06-14  382  {
f32f004cddf86d Matthew Wilcox          2018-07-04  383  	XA_STATE(xas, &ida->xa, min / IDA_BITMAP_BITS);
f32f004cddf86d Matthew Wilcox          2018-07-04  384  	unsigned bit = min % IDA_BITMAP_BITS;
f32f004cddf86d Matthew Wilcox          2018-07-04  385  	unsigned long flags;
f32f004cddf86d Matthew Wilcox          2018-07-04  386  	struct ida_bitmap *bitmap, *alloc = NULL;
f32f004cddf86d Matthew Wilcox          2018-07-04  387  
f32f004cddf86d Matthew Wilcox          2018-07-04  388  	if ((int)min < 0)
f32f004cddf86d Matthew Wilcox          2018-07-04  389  		return -ENOSPC;
f32f004cddf86d Matthew Wilcox          2018-07-04  390  
f32f004cddf86d Matthew Wilcox          2018-07-04  391  	if ((int)max < 0)
f32f004cddf86d Matthew Wilcox          2018-07-04  392  		max = INT_MAX;
f32f004cddf86d Matthew Wilcox          2018-07-04  393  
f32f004cddf86d Matthew Wilcox          2018-07-04  394  retry:
f32f004cddf86d Matthew Wilcox          2018-07-04  395  	xas_lock_irqsave(&xas, flags);
f32f004cddf86d Matthew Wilcox          2018-07-04  396  next:
f32f004cddf86d Matthew Wilcox          2018-07-04  397  	bitmap = xas_find_marked(&xas, max / IDA_BITMAP_BITS, XA_FREE_MARK);
f32f004cddf86d Matthew Wilcox          2018-07-04  398  	if (xas.xa_index > min / IDA_BITMAP_BITS)
0a835c4f090af2 Matthew Wilcox          2016-12-20  399  		bit = 0;
f32f004cddf86d Matthew Wilcox          2018-07-04  400  	if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
f32f004cddf86d Matthew Wilcox          2018-07-04  401  		goto nospc;
f32f004cddf86d Matthew Wilcox          2018-07-04  402  
3159f943aafdba Matthew Wilcox          2017-11-03  403  	if (xa_is_value(bitmap)) {
3159f943aafdba Matthew Wilcox          2017-11-03  404  		unsigned long tmp = xa_to_value(bitmap);
f32f004cddf86d Matthew Wilcox          2018-07-04  405  
f32f004cddf86d Matthew Wilcox          2018-07-04  406  		if (bit < BITS_PER_XA_VALUE) {
f32f004cddf86d Matthew Wilcox          2018-07-04  407  			bit = find_next_zero_bit(&tmp, BITS_PER_XA_VALUE, bit);
f32f004cddf86d Matthew Wilcox          2018-07-04  408  			if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
f32f004cddf86d Matthew Wilcox          2018-07-04  409  				goto nospc;
f32f004cddf86d Matthew Wilcox          2018-07-04  410  			if (bit < BITS_PER_XA_VALUE) {
f32f004cddf86d Matthew Wilcox          2018-07-04  411  				tmp |= 1UL << bit;
f32f004cddf86d Matthew Wilcox          2018-07-04  412  				xas_store(&xas, xa_mk_value(tmp));
f32f004cddf86d Matthew Wilcox          2018-07-04  413  				goto out;
d37cacc5adace7 Matthew Wilcox          2016-12-17  414  			}
f32f004cddf86d Matthew Wilcox          2018-07-04  415  		}
f32f004cddf86d Matthew Wilcox          2018-07-04  416  		bitmap = alloc;
f32f004cddf86d Matthew Wilcox          2018-07-04  417  		if (!bitmap)
f32f004cddf86d Matthew Wilcox          2018-07-04  418  			bitmap = kzalloc(sizeof(*bitmap), GFP_NOWAIT);
d37cacc5adace7 Matthew Wilcox          2016-12-17  419  		if (!bitmap)
f32f004cddf86d Matthew Wilcox          2018-07-04  420  			goto alloc;
3159f943aafdba Matthew Wilcox          2017-11-03  421  		bitmap->bitmap[0] = tmp;
f32f004cddf86d Matthew Wilcox          2018-07-04  422  		xas_store(&xas, bitmap);
f32f004cddf86d Matthew Wilcox          2018-07-04  423  		if (xas_error(&xas)) {
f32f004cddf86d Matthew Wilcox          2018-07-04  424  			bitmap->bitmap[0] = 0;
f32f004cddf86d Matthew Wilcox          2018-07-04  425  			goto out;
f32f004cddf86d Matthew Wilcox          2018-07-04  426  		}
d37cacc5adace7 Matthew Wilcox          2016-12-17  427  	}
d37cacc5adace7 Matthew Wilcox          2016-12-17  428  
0a835c4f090af2 Matthew Wilcox          2016-12-20  429  	if (bitmap) {
f32f004cddf86d Matthew Wilcox          2018-07-04  430  		bit = find_next_zero_bit(bitmap->bitmap, IDA_BITMAP_BITS, bit);
f32f004cddf86d Matthew Wilcox          2018-07-04  431  		if (xas.xa_index * IDA_BITMAP_BITS + bit > max)
f32f004cddf86d Matthew Wilcox          2018-07-04  432  			goto nospc;
0a835c4f090af2 Matthew Wilcox          2016-12-20  433  		if (bit == IDA_BITMAP_BITS)
f32f004cddf86d Matthew Wilcox          2018-07-04  434  			goto next;
72dba584b695d8 Tejun Heo               2007-06-14  435  
0a835c4f090af2 Matthew Wilcox          2016-12-20  436  		__set_bit(bit, bitmap->bitmap);
0a835c4f090af2 Matthew Wilcox          2016-12-20  437  		if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
f32f004cddf86d Matthew Wilcox          2018-07-04  438  			xas_clear_mark(&xas, XA_FREE_MARK);
0a835c4f090af2 Matthew Wilcox          2016-12-20  439  	} else {
3159f943aafdba Matthew Wilcox          2017-11-03  440  		if (bit < BITS_PER_XA_VALUE) {
3159f943aafdba Matthew Wilcox          2017-11-03  441  			bitmap = xa_mk_value(1UL << bit);
3159f943aafdba Matthew Wilcox          2017-11-03  442  		} else {
f32f004cddf86d Matthew Wilcox          2018-07-04  443  			bitmap = alloc;
72dba584b695d8 Tejun Heo               2007-06-14  444  			if (!bitmap)
f32f004cddf86d Matthew Wilcox          2018-07-04  445  				bitmap = kzalloc(sizeof(*bitmap), GFP_NOWAIT);
f32f004cddf86d Matthew Wilcox          2018-07-04  446  			if (!bitmap)
f32f004cddf86d Matthew Wilcox          2018-07-04  447  				goto alloc;
0a835c4f090af2 Matthew Wilcox          2016-12-20  448  			__set_bit(bit, bitmap->bitmap);
3159f943aafdba Matthew Wilcox          2017-11-03  449  		}
f32f004cddf86d Matthew Wilcox          2018-07-04  450  		xas_store(&xas, bitmap);
72dba584b695d8 Tejun Heo               2007-06-14  451  	}
f32f004cddf86d Matthew Wilcox          2018-07-04  452  out:
f32f004cddf86d Matthew Wilcox          2018-07-04 @453  	xas_unlock_irqrestore(&xas, flags);
f32f004cddf86d Matthew Wilcox          2018-07-04  454  	if (xas_nomem(&xas, gfp)) {
f32f004cddf86d Matthew Wilcox          2018-07-04  455  		xas.xa_index = min / IDA_BITMAP_BITS;
f32f004cddf86d Matthew Wilcox          2018-07-04  456  		bit = min % IDA_BITMAP_BITS;
f32f004cddf86d Matthew Wilcox          2018-07-04  457  		goto retry;
72dba584b695d8 Tejun Heo               2007-06-14  458  	}
f32f004cddf86d Matthew Wilcox          2018-07-04  459  	if (bitmap != alloc)
f32f004cddf86d Matthew Wilcox          2018-07-04  460  		kfree(alloc);
f32f004cddf86d Matthew Wilcox          2018-07-04  461  	if (xas_error(&xas))
f32f004cddf86d Matthew Wilcox          2018-07-04  462  		return xas_error(&xas);
f32f004cddf86d Matthew Wilcox          2018-07-04  463  	return xas.xa_index * IDA_BITMAP_BITS + bit;
f32f004cddf86d Matthew Wilcox          2018-07-04  464  alloc:
f32f004cddf86d Matthew Wilcox          2018-07-04  465  	xas_unlock_irqrestore(&xas, flags);
f32f004cddf86d Matthew Wilcox          2018-07-04  466  	alloc = kzalloc(sizeof(*bitmap), gfp);
f32f004cddf86d Matthew Wilcox          2018-07-04  467  	if (!alloc)
f32f004cddf86d Matthew Wilcox          2018-07-04  468  		return -ENOMEM;
f32f004cddf86d Matthew Wilcox          2018-07-04  469  	xas_set(&xas, min / IDA_BITMAP_BITS);
f32f004cddf86d Matthew Wilcox          2018-07-04  470  	bit = min % IDA_BITMAP_BITS;
f32f004cddf86d Matthew Wilcox          2018-07-04  471  	goto retry;
f32f004cddf86d Matthew Wilcox          2018-07-04  472  nospc:
f32f004cddf86d Matthew Wilcox          2018-07-04  473  	xas_unlock_irqrestore(&xas, flags);
a219b856a2b993 Matthew Wilcox (Oracle  2020-04-02  474) 	kfree(alloc);
f32f004cddf86d Matthew Wilcox          2018-07-04  475  	return -ENOSPC;
0a835c4f090af2 Matthew Wilcox          2016-12-20  476  }
f32f004cddf86d Matthew Wilcox          2018-07-04  477  EXPORT_SYMBOL(ida_alloc_range);
72dba584b695d8 Tejun Heo               2007-06-14  478  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki