XArray updates for 5.1-rc1

-----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCgAyFiEEejHryeLBw/spnjHrDpNsjXcpgj4FAlyHF2oUHHdpbGx5QGlu
 ZnJhZGVhZC5vcmcACgkQDpNsjXcpgj5j9AgAlpeptRfnPO0+VXj+EbxaOOI8tOG+
 w+vBasWoQB+lZ9ctf1qUQVSeLn0ErxTM7BaIP7plfDrEWiIbRWkV18B+heS5d1Yz
 aTV1d/8tG6/eo61K2VqXHbUhymgMtbXDsg1rwWTF8+Q4xIcMqfYAR0f9ptU1Oejc
 pNAn16dYgKi6+4eluY7gXxruBosQ6yNml6iEje9A3uR8nhzTI/P3Yf2GGIZnQLsL
 +UIx4Ps38dJ3VCYBPfbnszZfYPpILUH9/Bdx+mAMUtZwvpM3JYqc8XsiFfqDO7n1
 3003yUytnRkb1UK3QIvkbPt0G8UOI4s9fxRPsA8lLSww/f2y1r5kC4Mxbg==
 =HSP/
 -----END PGP SIGNATURE-----

Merge tag 'xarray-5.1-rc1' of git://git.infradead.org/users/willy/linux-dax

Pull XArray updates from Matthew Wilcox:
 "This pull request changes the xa_alloc() API. I'm only aware of one
  subsystem that has started trying to use it, and we agree on the fixup
  as part of the merge.

  The xa_insert() error code also changed to match xa_alloc() (EEXIST to
  EBUSY), and I added xa_alloc_cyclic(). Beyond that, the usual
  bugfixes, optimisations and tweaking.

  I now have a git tree with all users of the radix tree and IDR
  converted over to the XArray that I'll be feeding to maintainers over
  the next few weeks"

* tag 'xarray-5.1-rc1' of git://git.infradead.org/users/willy/linux-dax:
  XArray: Fix xa_reserve for 2-byte aligned entries
  XArray: Fix xa_erase of 2-byte aligned entries
  XArray: Use xa_cmpxchg to implement xa_reserve
  XArray: Fix xa_release in allocating arrays
  XArray: Mark xa_insert and xa_reserve as must_check
  XArray: Add cyclic allocation
  XArray: Redesign xa_alloc API
  XArray: Add support for 1s-based allocation
  XArray: Change xa_insert to return -EBUSY
  XArray: Update xa_erase family descriptions
  XArray tests: RCU lock prohibits GFP_KERNEL
Commit ea295481b6 by Linus Torvalds, 2019-03-11 20:06:18 -07:00
7 changed files with 560 additions and 259 deletions
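
Before the per-file diffs, a quick sketch of what the xa_alloc() redesign means for callers: the entry now comes before the range, the range is a struct xa_limit rather than a bare maximum, and a full range reports -EBUSY rather than -ENOSPC. This is an illustrative example, not code from the diff; the names my_ids, assign_id and my_object are invented:

	static DEFINE_XARRAY_ALLOC(my_ids);	/* illustrative name */

	int assign_id(void *my_object)
	{
		u32 id;
		int err;

		/* 5.0 and earlier:
		 *	id = 0;
		 *	err = xa_alloc(&my_ids, &id, INT_MAX, my_object, GFP_KERNEL);
		 * returning -ENOSPC when the range was exhausted.
		 */

		/* After this merge: */
		err = xa_alloc(&my_ids, &id, my_object, xa_limit_31b, GFP_KERNEL);
		if (err)
			return err;	/* -ENOMEM, or -EBUSY if the range is full */
		return id;
	}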

--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst

@@ -85,7 +85,7 @@ which was at that index; if it returns the same entry which was passed as
 If you want to only store a new entry to an index if the current entry
 at that index is ``NULL``, you can use :c:func:`xa_insert` which
-returns ``-EEXIST`` if the entry is not empty.
+returns ``-EBUSY`` if the entry is not empty.
 
 You can enquire whether a mark is set on an entry by using
 :c:func:`xa_get_mark`.  If the entry is not ``NULL``, you can set a mark
@@ -131,17 +131,23 @@ If you use :c:func:`DEFINE_XARRAY_ALLOC` to define the XArray, or
 initialise it by passing ``XA_FLAGS_ALLOC`` to :c:func:`xa_init_flags`,
 the XArray changes to track whether entries are in use or not.
 
-You can call :c:func:`xa_alloc` to store the entry at any unused index
+You can call :c:func:`xa_alloc` to store the entry at an unused index
 in the XArray.  If you need to modify the array from interrupt context,
 you can use :c:func:`xa_alloc_bh` or :c:func:`xa_alloc_irq` to disable
 interrupts while allocating the ID.
 
-Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert`
-will mark the entry as being allocated.  Unlike a normal XArray, storing
+Using :c:func:`xa_store`, :c:func:`xa_cmpxchg` or :c:func:`xa_insert` will
+also mark the entry as being allocated.  Unlike a normal XArray, storing
 ``NULL`` will mark the entry as being in use, like :c:func:`xa_reserve`.
 To free an entry, use :c:func:`xa_erase` (or :c:func:`xa_release` if
 you only want to free the entry if it's ``NULL``).
 
+By default, the lowest free entry is allocated starting from 0.  If you
+want to allocate entries starting at 1, it is more efficient to use
+:c:func:`DEFINE_XARRAY_ALLOC1` or ``XA_FLAGS_ALLOC1``.  If you want to
+allocate IDs up to a maximum, then wrap back around to the lowest free
+ID, you can use :c:func:`xa_alloc_cyclic`.
+
 You cannot use ``XA_MARK_0`` with an allocating XArray as this mark
 is used to track whether an entry is free or not.  The other marks are
 available for your use.
@@ -209,7 +215,6 @@ Assumes xa_lock held on entry:
  * :c:func:`__xa_erase`
  * :c:func:`__xa_cmpxchg`
  * :c:func:`__xa_alloc`
- * :c:func:`__xa_reserve`
  * :c:func:`__xa_set_mark`
  * :c:func:`__xa_clear_mark`

--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c

@@ -668,19 +668,10 @@ static int assign_name(struct ib_device *device, const char *name)
 	}
 	strlcpy(device->name, dev_name(&device->dev), IB_DEVICE_NAME_MAX);
 
-	/* Cyclically allocate a user visible ID for the device */
-	device->index = last_id;
-	ret = xa_alloc(&devices, &device->index, INT_MAX, device, GFP_KERNEL);
-	if (ret == -ENOSPC) {
-		device->index = 0;
-		ret = xa_alloc(&devices, &device->index, INT_MAX, device,
-				GFP_KERNEL);
-	}
-	if (ret)
-		goto out;
-	last_id = device->index + 1;
-
-	ret = 0;
+	ret = xa_alloc_cyclic(&devices, &device->index, device, xa_limit_31b,
+			&last_id, GFP_KERNEL);
+	if (ret > 0)
+		ret = 0;
 
 out:
 	up_write(&devices_rwsem);
@@ -1059,14 +1050,15 @@ static int assign_client_id(struct ib_client *client)
 	 * to get the LIFO order.  The extra linked list can go away if xarray
 	 * learns to reverse iterate.
 	 */
-	if (list_empty(&client_list))
+	if (list_empty(&client_list)) {
 		client->client_id = 0;
-	else
-		client->client_id =
-			list_last_entry(&client_list, struct ib_client, list)
-				->client_id;
-	ret = xa_alloc(&clients, &client->client_id, INT_MAX, client,
-		       GFP_KERNEL);
+	} else {
+		struct ib_client *last;
+
+		last = list_last_entry(&client_list, struct ib_client, list);
+		client->client_id = last->client_id + 1;
+	}
+	ret = xa_insert(&clients, client->client_id, client, GFP_KERNEL);
 	if (ret)
 		goto out;
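
Note the "if (ret > 0) ret = 0;" in the first hunk above: xa_alloc_cyclic() returns 1 rather than 0 when the allocation only succeeded after wrapping past the end of the range. A caller that does not care about wrap-around, like this one, folds that back into plain success. The general pattern, mirroring the merged code:

	ret = xa_alloc_cyclic(&devices, &device->index, device, xa_limit_31b,
			      &last_id, GFP_KERNEL);
	if (ret > 0)		/* 1 == allocated after wrapping; still success */
		ret = 0;
	if (ret < 0)		/* -ENOMEM, or -EBUSY if the range is full */
		goto out;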

--- a/drivers/infiniband/core/restrack.c
+++ b/drivers/infiniband/core/restrack.c

@@ -13,28 +13,6 @@
 #include "cma_priv.h"
 #include "restrack.h"
 
-static int rt_xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
-			      u32 *next)
-{
-	int err;
-
-	*id = *next;
-	if (*next == U32_MAX)
-		*id = 0;
-
-	xa_lock(xa);
-	err = __xa_alloc(xa, id, U32_MAX, entry, GFP_KERNEL);
-	if (err && *next != U32_MAX) {
-		*id = 0;
-		err = __xa_alloc(xa, id, *next, entry, GFP_KERNEL);
-	}
-
-	if (!err)
-		*next = *id + 1;
-	xa_unlock(xa);
-	return err;
-}
-
 /**
  * rdma_restrack_init() - initialize and allocate resource tracking
  * @dev: IB device
@@ -226,7 +204,8 @@ static void rdma_restrack_add(struct rdma_restrack_entry *res)
 	kref_init(&res->kref);
 	init_completion(&res->comp);
 	if (res->type != RDMA_RESTRACK_QP)
-		ret = rt_xa_alloc_cyclic(&rt->xa, &res->id, res, &rt->next_id);
+		ret = xa_alloc_cyclic(&rt->xa, &res->id, res, xa_limit_32b,
+				&rt->next_id, GFP_KERNEL);
 	else {
 		/* Special case to ensure that LQPN points to right QP */
 		struct ib_qp *qp = container_of(res, struct ib_qp, res);

--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c

@@ -189,7 +189,7 @@ retry:
 	 */
 	if (!err)
 		return 0;
-	else if (err != -EEXIST)
+	else if (err != -EBUSY)
		goto failed_unlock;
 
 	err = invalidate_inode_pages2_range(btnc, newkey, newkey);
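
This is the caller-side half of the xa_insert() error-code switch: code that previously special-cased -EEXIST must now test for -EBUSY. A generic sketch, not the nilfs code itself (xa, index and entry are placeholders):

	int err = xa_insert(&xa, index, entry, GFP_KERNEL);

	if (err == -EBUSY) {
		/* slot already occupied: this was -EEXIST before 5.1 */
	} else if (err == -ENOMEM) {
		/* memory could not be allocated */
	}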

--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h

@@ -131,6 +131,12 @@ static inline unsigned int xa_pointer_tag(void *entry)
  * xa_mk_internal() - Create an internal entry.
  * @v: Value to turn into an internal entry.
  *
+ * Internal entries are used for a number of purposes.  Entries 0-255 are
+ * used for sibling entries (only 0-62 are used by the current code).  256
+ * is used for the retry entry.  257 is used for the reserved / zero entry.
+ * Negative internal entries are used to represent errnos.  Node pointers
+ * are also tagged as internal entries in some situations.
+ *
  * Context: Any context.
  * Return: An XArray internal entry corresponding to this value.
  */
@@ -163,6 +169,22 @@ static inline bool xa_is_internal(const void *entry)
 	return ((unsigned long)entry & 3) == 2;
 }
 
+#define XA_ZERO_ENTRY		xa_mk_internal(257)
+
+/**
+ * xa_is_zero() - Is the entry a zero entry?
+ * @entry: Entry retrieved from the XArray
+ *
+ * The normal API will return NULL as the contents of a slot containing
+ * a zero entry.  You can only see zero entries by using the advanced API.
+ *
+ * Return: %true if the entry is a zero entry.
+ */
+static inline bool xa_is_zero(const void *entry)
+{
+	return unlikely(entry == XA_ZERO_ENTRY);
+}
+
 /**
  * xa_is_err() - Report whether an XArray operation returned an error
  * @entry: Result from calling an XArray function
@@ -200,6 +222,27 @@ static inline int xa_err(void *entry)
 	return 0;
 }
 
+/**
+ * struct xa_limit - Represents a range of IDs.
+ * @min: The lowest ID to allocate (inclusive).
+ * @max: The maximum ID to allocate (inclusive).
+ *
+ * This structure is used either directly or via the XA_LIMIT() macro
+ * to communicate the range of IDs that are valid for allocation.
+ * Two common ranges are predefined for you:
+ *  * xa_limit_32b	- [0 - UINT_MAX]
+ *  * xa_limit_31b	- [0 - INT_MAX]
+ */
+struct xa_limit {
+	u32 max;
+	u32 min;
+};
+
+#define XA_LIMIT(_min, _max) (struct xa_limit) { .min = _min, .max = _max }
+
+#define xa_limit_32b	XA_LIMIT(0, UINT_MAX)
+#define xa_limit_31b	XA_LIMIT(0, INT_MAX)
+
 typedef unsigned __bitwise xa_mark_t;
 #define XA_MARK_0		((__force xa_mark_t)0U)
 #define XA_MARK_1		((__force xa_mark_t)1U)
@@ -220,10 +263,14 @@ enum xa_lock_type {
 #define XA_FLAGS_LOCK_IRQ	((__force gfp_t)XA_LOCK_IRQ)
 #define XA_FLAGS_LOCK_BH	((__force gfp_t)XA_LOCK_BH)
 #define XA_FLAGS_TRACK_FREE	((__force gfp_t)4U)
+#define XA_FLAGS_ZERO_BUSY	((__force gfp_t)8U)
+#define XA_FLAGS_ALLOC_WRAPPED	((__force gfp_t)16U)
 #define XA_FLAGS_MARK(mark)	((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
 						(__force unsigned)(mark)))
 
+/* ALLOC is for a normal 0-based alloc.  ALLOC1 is for an 1-based alloc */
 #define XA_FLAGS_ALLOC	(XA_FLAGS_TRACK_FREE | XA_FLAGS_MARK(XA_FREE_MARK))
+#define XA_FLAGS_ALLOC1	(XA_FLAGS_TRACK_FREE | XA_FLAGS_ZERO_BUSY)
 
 /**
  * struct xarray - The anchor of the XArray.
@@ -279,7 +326,7 @@ struct xarray {
 #define DEFINE_XARRAY(name) DEFINE_XARRAY_FLAGS(name, 0)
 
 /**
- * DEFINE_XARRAY_ALLOC() - Define an XArray which can allocate IDs.
+ * DEFINE_XARRAY_ALLOC() - Define an XArray which allocates IDs starting at 0.
  * @name: A string that names your XArray.
  *
  * This is intended for file scope definitions of allocating XArrays.
@@ -287,6 +334,15 @@ struct xarray {
  */
 #define DEFINE_XARRAY_ALLOC(name) DEFINE_XARRAY_FLAGS(name, XA_FLAGS_ALLOC)
 
+/**
+ * DEFINE_XARRAY_ALLOC1() - Define an XArray which allocates IDs starting at 1.
+ * @name: A string that names your XArray.
+ *
+ * This is intended for file scope definitions of allocating XArrays.
+ * See also DEFINE_XARRAY().
+ */
+#define DEFINE_XARRAY_ALLOC1(name) DEFINE_XARRAY_FLAGS(name, XA_FLAGS_ALLOC1)
+
 void *xa_load(struct xarray *, unsigned long index);
 void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
 void *xa_erase(struct xarray *, unsigned long index);
@@ -463,9 +519,12 @@ void *__xa_erase(struct xarray *, unsigned long index);
 void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
 void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
 		void *entry, gfp_t);
-int __xa_insert(struct xarray *, unsigned long index, void *entry, gfp_t);
-int __xa_alloc(struct xarray *, u32 *id, u32 max, void *entry, gfp_t);
-int __xa_reserve(struct xarray *, unsigned long index, gfp_t);
+int __must_check __xa_insert(struct xarray *, unsigned long index,
+		void *entry, gfp_t);
+int __must_check __xa_alloc(struct xarray *, u32 *id, void *entry,
+		struct xa_limit, gfp_t);
+int __must_check __xa_alloc_cyclic(struct xarray *, u32 *id, void *entry,
+		struct xa_limit, u32 *next, gfp_t);
 void __xa_set_mark(struct xarray *, unsigned long index, xa_mark_t);
 void __xa_clear_mark(struct xarray *, unsigned long index, xa_mark_t);
@@ -526,9 +585,9 @@ static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
  * @xa: XArray.
  * @index: Index of entry.
  *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument.  The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
+ * After this function returns, loading from @index will return %NULL.
+ * If the index is part of a multi-index entry, all indices will be erased
+ * and none of the entries will be part of a multi-index entry.
  *
  * Context: Any context.  Takes and releases the xa_lock while
  * disabling softirqs.
@@ -550,9 +609,9 @@ static inline void *xa_erase_bh(struct xarray *xa, unsigned long index)
  * @xa: XArray.
  * @index: Index of entry.
  *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument.  The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
+ * After this function returns, loading from @index will return %NULL.
+ * If the index is part of a multi-index entry, all indices will be erased
+ * and none of the entries will be part of a multi-index entry.
  *
  * Context: Process context.  Takes and releases the xa_lock while
  * disabling interrupts.
@@ -664,11 +723,11 @@ static inline void *xa_cmpxchg_irq(struct xarray *xa, unsigned long index,
  *
  * Context: Any context.  Takes and releases the xa_lock.  May sleep if
  * the @gfp flags permit.
- * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * Return: 0 if the store succeeded.  -EBUSY if another entry was present.
  * -ENOMEM if memory could not be allocated.
  */
-static inline int xa_insert(struct xarray *xa, unsigned long index,
-		void *entry, gfp_t gfp)
+static inline int __must_check xa_insert(struct xarray *xa,
+		unsigned long index, void *entry, gfp_t gfp)
 {
 	int err;
@@ -693,11 +752,11 @@ static inline int xa_insert(struct xarray *xa, unsigned long index,
  *
  * Context: Any context.  Takes and releases the xa_lock while
  * disabling softirqs.  May sleep if the @gfp flags permit.
- * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * Return: 0 if the store succeeded.  -EBUSY if another entry was present.
  * -ENOMEM if memory could not be allocated.
  */
-static inline int xa_insert_bh(struct xarray *xa, unsigned long index,
-		void *entry, gfp_t gfp)
+static inline int __must_check xa_insert_bh(struct xarray *xa,
+		unsigned long index, void *entry, gfp_t gfp)
 {
 	int err;
@@ -722,11 +781,11 @@ static inline int xa_insert_bh(struct xarray *xa, unsigned long index,
  *
  * Context: Process context.  Takes and releases the xa_lock while
  * disabling interrupts.  May sleep if the @gfp flags permit.
- * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * Return: 0 if the store succeeded.  -EBUSY if another entry was present.
  * -ENOMEM if memory could not be allocated.
  */
-static inline int xa_insert_irq(struct xarray *xa, unsigned long index,
-		void *entry, gfp_t gfp)
+static inline int __must_check xa_insert_irq(struct xarray *xa,
+		unsigned long index, void *entry, gfp_t gfp)
 {
 	int err;
@@ -741,26 +800,26 @@ static inline int xa_insert_irq(struct xarray *xa, unsigned long index,
  * xa_alloc() - Find somewhere to store this entry in the XArray.
  * @xa: XArray.
  * @id: Pointer to ID.
- * @max: Maximum ID to allocate (inclusive).
  * @entry: New entry.
+ * @limit: Range of ID to allocate.
  * @gfp: Memory allocation flags.
  *
- * Allocates an unused ID in the range specified by @id and @max.
- * Updates the @id pointer with the index, then stores the entry at that
- * index.  A concurrent lookup will not see an uninitialised @id.
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
  *
- * Context: Process context.  Takes and releases the xa_lock.  May sleep if
+ * Context: Any context.  Takes and releases the xa_lock.  May sleep if
  * the @gfp flags permit.
- * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
- * there is no more space in the XArray.
+ * Return: 0 on success, -ENOMEM if memory could not be allocated or
+ * -EBUSY if there are no free entries in @limit.
  */
-static inline int xa_alloc(struct xarray *xa, u32 *id, u32 max, void *entry,
-		gfp_t gfp)
+static inline __must_check int xa_alloc(struct xarray *xa, u32 *id,
+		void *entry, struct xa_limit limit, gfp_t gfp)
 {
 	int err;
 
 	xa_lock(xa);
-	err = __xa_alloc(xa, id, max, entry, gfp);
+	err = __xa_alloc(xa, id, entry, limit, gfp);
 	xa_unlock(xa);
 
 	return err;
@@ -770,26 +829,26 @@ static inline int xa_alloc(struct xarray *xa, u32 *id, u32 max, void *entry,
  * xa_alloc_bh() - Find somewhere to store this entry in the XArray.
  * @xa: XArray.
  * @id: Pointer to ID.
- * @max: Maximum ID to allocate (inclusive).
  * @entry: New entry.
+ * @limit: Range of ID to allocate.
  * @gfp: Memory allocation flags.
  *
- * Allocates an unused ID in the range specified by @id and @max.
- * Updates the @id pointer with the index, then stores the entry at that
- * index.  A concurrent lookup will not see an uninitialised @id.
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
  *
  * Context: Any context.  Takes and releases the xa_lock while
  * disabling softirqs.  May sleep if the @gfp flags permit.
- * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
- * there is no more space in the XArray.
+ * Return: 0 on success, -ENOMEM if memory could not be allocated or
+ * -EBUSY if there are no free entries in @limit.
  */
-static inline int xa_alloc_bh(struct xarray *xa, u32 *id, u32 max, void *entry,
-		gfp_t gfp)
+static inline int __must_check xa_alloc_bh(struct xarray *xa, u32 *id,
+		void *entry, struct xa_limit limit, gfp_t gfp)
 {
 	int err;
 
 	xa_lock_bh(xa);
-	err = __xa_alloc(xa, id, max, entry, gfp);
+	err = __xa_alloc(xa, id, entry, limit, gfp);
 	xa_unlock_bh(xa);
 
 	return err;
@@ -799,26 +858,125 @@ static inline int xa_alloc_bh(struct xarray *xa, u32 *id, u32 max, void *entry,
  * xa_alloc_irq() - Find somewhere to store this entry in the XArray.
  * @xa: XArray.
  * @id: Pointer to ID.
- * @max: Maximum ID to allocate (inclusive).
  * @entry: New entry.
+ * @limit: Range of ID to allocate.
  * @gfp: Memory allocation flags.
  *
- * Allocates an unused ID in the range specified by @id and @max.
- * Updates the @id pointer with the index, then stores the entry at that
- * index.  A concurrent lookup will not see an uninitialised @id.
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
  *
  * Context: Process context.  Takes and releases the xa_lock while
  * disabling interrupts.  May sleep if the @gfp flags permit.
- * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
- * there is no more space in the XArray.
+ * Return: 0 on success, -ENOMEM if memory could not be allocated or
+ * -EBUSY if there are no free entries in @limit.
  */
-static inline int xa_alloc_irq(struct xarray *xa, u32 *id, u32 max, void *entry,
-		gfp_t gfp)
+static inline int __must_check xa_alloc_irq(struct xarray *xa, u32 *id,
+		void *entry, struct xa_limit limit, gfp_t gfp)
 {
 	int err;
 
 	xa_lock_irq(xa);
-	err = __xa_alloc(xa, id, max, entry, gfp);
+	err = __xa_alloc(xa, id, entry, limit, gfp);
+	xa_unlock_irq(xa);
+
+	return err;
+}
+
+/**
+ * xa_alloc_cyclic() - Find somewhere to store this entry in the XArray.
+ * @xa: XArray.
+ * @id: Pointer to ID.
+ * @entry: New entry.
+ * @limit: Range of allocated ID.
+ * @next: Pointer to next ID to allocate.
+ * @gfp: Memory allocation flags.
+ *
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
+ * The search for an empty entry will start at @next and will wrap
+ * around if necessary.
+ *
+ * Context: Any context.  Takes and releases the xa_lock.  May sleep if
+ * the @gfp flags permit.
+ * Return: 0 if the allocation succeeded without wrapping.  1 if the
+ * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * allocated or -EBUSY if there are no free entries in @limit.
+ */
+static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
+		struct xa_limit limit, u32 *next, gfp_t gfp)
+{
+	int err;
+
+	xa_lock(xa);
+	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
+	xa_unlock(xa);
+
+	return err;
+}
+
+/**
+ * xa_alloc_cyclic_bh() - Find somewhere to store this entry in the XArray.
+ * @xa: XArray.
+ * @id: Pointer to ID.
+ * @entry: New entry.
+ * @limit: Range of allocated ID.
+ * @next: Pointer to next ID to allocate.
+ * @gfp: Memory allocation flags.
+ *
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
+ * The search for an empty entry will start at @next and will wrap
+ * around if necessary.
+ *
+ * Context: Any context.  Takes and releases the xa_lock while
+ * disabling softirqs.  May sleep if the @gfp flags permit.
+ * Return: 0 if the allocation succeeded without wrapping.  1 if the
+ * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * allocated or -EBUSY if there are no free entries in @limit.
+ */
+static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
+		struct xa_limit limit, u32 *next, gfp_t gfp)
+{
+	int err;
+
+	xa_lock_bh(xa);
+	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
+	xa_unlock_bh(xa);
+
+	return err;
+}
+
+/**
+ * xa_alloc_cyclic_irq() - Find somewhere to store this entry in the XArray.
+ * @xa: XArray.
+ * @id: Pointer to ID.
+ * @entry: New entry.
+ * @limit: Range of allocated ID.
+ * @next: Pointer to next ID to allocate.
+ * @gfp: Memory allocation flags.
+ *
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
+ * The search for an empty entry will start at @next and will wrap
+ * around if necessary.
+ *
+ * Context: Process context.  Takes and releases the xa_lock while
+ * disabling interrupts.  May sleep if the @gfp flags permit.
+ * Return: 0 if the allocation succeeded without wrapping.  1 if the
+ * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * allocated or -EBUSY if there are no free entries in @limit.
+ */
+static inline int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
+		struct xa_limit limit, u32 *next, gfp_t gfp)
+{
+	int err;
+
+	xa_lock_irq(xa);
+	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
 	xa_unlock_irq(xa);
 
 	return err;
@@ -842,16 +1000,10 @@ static inline int xa_alloc_irq(struct xarray *xa, u32 *id, u32 max, void *entry,
  * May sleep if the @gfp flags permit.
  * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
  */
-static inline
+static inline __must_check
 int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
-	int ret;
-
-	xa_lock(xa);
-	ret = __xa_reserve(xa, index, gfp);
-	xa_unlock(xa);
-
-	return ret;
+	return xa_err(xa_cmpxchg(xa, index, NULL, XA_ZERO_ENTRY, gfp));
 }
 
 /**
@@ -866,16 +1018,10 @@ int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
  * disabling softirqs.
  * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
  */
-static inline
+static inline __must_check
 int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
-	int ret;
-
-	xa_lock_bh(xa);
-	ret = __xa_reserve(xa, index, gfp);
-	xa_unlock_bh(xa);
-
-	return ret;
+	return xa_err(xa_cmpxchg_bh(xa, index, NULL, XA_ZERO_ENTRY, gfp));
 }
 
 /**
@@ -890,16 +1036,10 @@ int xa_reserve_bh(struct xarray *xa, unsigned long index, gfp_t gfp)
  * disabling interrupts.
  * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
  */
-static inline
+static inline __must_check
 int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
-	int ret;
-
-	xa_lock_irq(xa);
-	ret = __xa_reserve(xa, index, gfp);
-	xa_unlock_irq(xa);
-
-	return ret;
+	return xa_err(xa_cmpxchg_irq(xa, index, NULL, XA_ZERO_ENTRY, gfp));
 }
 
 /**
@@ -913,7 +1053,7 @@ int xa_reserve_irq(struct xarray *xa, unsigned long index, gfp_t gfp)
  */
 static inline void xa_release(struct xarray *xa, unsigned long index)
 {
-	xa_cmpxchg(xa, index, NULL, NULL, 0);
+	xa_cmpxchg(xa, index, XA_ZERO_ENTRY, NULL, 0);
 }
 
 /* Everything below here is the Advanced API.  Proceed with caution. */
@@ -1073,18 +1213,6 @@ static inline bool xa_is_sibling(const void *entry)
 }
 
 #define XA_RETRY_ENTRY		xa_mk_internal(256)
-#define XA_ZERO_ENTRY		xa_mk_internal(257)
-
-/**
- * xa_is_zero() - Is the entry a zero entry?
- * @entry: Entry retrieved from the XArray
- *
- * Return: %true if the entry is a zero entry.
- */
-static inline bool xa_is_zero(const void *entry)
-{
-	return unlikely(entry == XA_ZERO_ENTRY);
-}
 
 /**
  * xa_is_retry() - Is the entry a retry entry?
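
Pulling the header changes together, a hedged sketch of struct xa_limit in use; the array name, function and range here are invented, while XA_LIMIT, xa_limit_32b/xa_limit_31b and xa_alloc come from the header above:

	static DEFINE_XARRAY_ALLOC(tags);

	int pick_tag(void *payload)
	{
		struct xa_limit range = XA_LIMIT(16, 255);	/* inclusive bounds */
		u32 tag;
		int err;

		err = xa_alloc(&tags, &tag, payload, range, GFP_KERNEL);
		if (err)
			return err;		/* -ENOMEM or -EBUSY */
		return tag;
	}

Note also that xa_insert(), the xa_alloc() family and the xa_reserve() family are now __must_check, so silently ignoring their return values will trigger a compiler warning.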

--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c

@@ -40,9 +40,9 @@ static void *xa_store_index(struct xarray *xa, unsigned long index, gfp_t gfp)
 static void xa_alloc_index(struct xarray *xa, unsigned long index, gfp_t gfp)
 {
-	u32 id = 0;
+	u32 id;
 
-	XA_BUG_ON(xa, xa_alloc(xa, &id, UINT_MAX, xa_mk_index(index),
-				gfp) != 0);
+	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(index), xa_limit_32b,
+				gfp) != 0);
 	XA_BUG_ON(xa, id != index);
 }
@@ -107,8 +107,11 @@ static noinline void check_xas_retry(struct xarray *xa)
 	XA_BUG_ON(xa, xas.xa_node != XAS_RESTART);
 	XA_BUG_ON(xa, xas_next_entry(&xas, ULONG_MAX) != xa_mk_value(0));
 	XA_BUG_ON(xa, xas.xa_node != NULL);
+	rcu_read_unlock();
 
 	XA_BUG_ON(xa, xa_store_index(xa, 1, GFP_KERNEL) != NULL);
+
+	rcu_read_lock();
 	XA_BUG_ON(xa, !xa_is_internal(xas_reload(&xas)));
 	xas.xa_node = XAS_RESTART;
 	XA_BUG_ON(xa, xas_next_entry(&xas, ULONG_MAX) != xa_mk_value(0));
@@ -343,7 +346,7 @@ static noinline void check_cmpxchg(struct xarray *xa)
 	XA_BUG_ON(xa, !xa_empty(xa));
 	XA_BUG_ON(xa, xa_store_index(xa, 12345678, GFP_KERNEL) != NULL);
-	XA_BUG_ON(xa, xa_insert(xa, 12345678, xa, GFP_KERNEL) != -EEXIST);
+	XA_BUG_ON(xa, xa_insert(xa, 12345678, xa, GFP_KERNEL) != -EBUSY);
 	XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, SIX, FIVE, GFP_KERNEL) != LOTS);
 	XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, LOTS, FIVE, GFP_KERNEL) != LOTS);
 	XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, FIVE, LOTS, GFP_KERNEL) != FIVE);
@@ -358,46 +361,65 @@ static noinline void check_reserve(struct xarray *xa)
 {
 	void *entry;
 	unsigned long index;
+	int count;
 
 	/* An array with a reserved entry is not empty */
 	XA_BUG_ON(xa, !xa_empty(xa));
-	xa_reserve(xa, 12345678, GFP_KERNEL);
+	XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, xa_empty(xa));
 	XA_BUG_ON(xa, xa_load(xa, 12345678));
 	xa_release(xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
 	/* Releasing a used entry does nothing */
-	xa_reserve(xa, 12345678, GFP_KERNEL);
+	XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, xa_store_index(xa, 12345678, GFP_NOWAIT) != NULL);
 	xa_release(xa, 12345678);
 	xa_erase_index(xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
-	/* cmpxchg sees a reserved entry as NULL */
-	xa_reserve(xa, 12345678, GFP_KERNEL);
-	XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, NULL, xa_mk_value(12345678),
-				GFP_NOWAIT) != NULL);
+	/* cmpxchg sees a reserved entry as ZERO */
+	XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, XA_ZERO_ENTRY,
+				xa_mk_value(12345678), GFP_NOWAIT) != NULL);
 	xa_release(xa, 12345678);
 	xa_erase_index(xa, 12345678);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
-	/* But xa_insert does not */
-	xa_reserve(xa, 12345678, GFP_KERNEL);
+	/* xa_insert treats it as busy */
+	XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0);
 	XA_BUG_ON(xa, xa_insert(xa, 12345678, xa_mk_value(12345678), 0) !=
-			-EEXIST);
+			-EBUSY);
 	XA_BUG_ON(xa, xa_empty(xa));
 	XA_BUG_ON(xa, xa_erase(xa, 12345678) != NULL);
 	XA_BUG_ON(xa, !xa_empty(xa));
 
 	/* Can iterate through a reserved entry */
 	xa_store_index(xa, 5, GFP_KERNEL);
-	xa_reserve(xa, 6, GFP_KERNEL);
+	XA_BUG_ON(xa, xa_reserve(xa, 6, GFP_KERNEL) != 0);
 	xa_store_index(xa, 7, GFP_KERNEL);
 
+	count = 0;
 	xa_for_each(xa, index, entry) {
 		XA_BUG_ON(xa, index != 5 && index != 7);
+		count++;
 	}
+	XA_BUG_ON(xa, count != 2);
+
+	/* If we free a reserved entry, we should be able to allocate it */
+	if (xa->xa_flags & XA_FLAGS_ALLOC) {
+		u32 id;
+
+		XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_value(8),
+					XA_LIMIT(5, 10), GFP_KERNEL) != 0);
+		XA_BUG_ON(xa, id != 8);
+
+		xa_release(xa, 6);
+		XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_value(6),
+					XA_LIMIT(5, 10), GFP_KERNEL) != 0);
+		XA_BUG_ON(xa, id != 6);
+	}
+
 	xa_destroy(xa);
 }
@@ -586,64 +608,194 @@ static noinline void check_multi_store(struct xarray *xa)
 #endif
 }
 
-static DEFINE_XARRAY_ALLOC(xa0);
-
-static noinline void check_xa_alloc(void)
+static noinline void check_xa_alloc_1(struct xarray *xa, unsigned int base)
 {
 	int i;
 	u32 id;
 
-	/* An empty array should assign 0 to the first alloc */
-	xa_alloc_index(&xa0, 0, GFP_KERNEL);
+	XA_BUG_ON(xa, !xa_empty(xa));
+	/* An empty array should assign %base to the first alloc */
+	xa_alloc_index(xa, base, GFP_KERNEL);
 
 	/* Erasing it should make the array empty again */
-	xa_erase_index(&xa0, 0);
-	XA_BUG_ON(&xa0, !xa_empty(&xa0));
+	xa_erase_index(xa, base);
+	XA_BUG_ON(xa, !xa_empty(xa));
 
-	/* And it should assign 0 again */
-	xa_alloc_index(&xa0, 0, GFP_KERNEL);
+	/* And it should assign %base again */
+	xa_alloc_index(xa, base, GFP_KERNEL);
+
+	/* Allocating and then erasing a lot should not lose base */
+	for (i = base + 1; i < 2 * XA_CHUNK_SIZE; i++)
+		xa_alloc_index(xa, i, GFP_KERNEL);
+	for (i = base; i < 2 * XA_CHUNK_SIZE; i++)
+		xa_erase_index(xa, i);
+	xa_alloc_index(xa, base, GFP_KERNEL);
+
+	/* Destroying the array should do the same as erasing */
+	xa_destroy(xa);
+
+	/* And it should assign %base again */
+	xa_alloc_index(xa, base, GFP_KERNEL);
 
-	/* The next assigned ID should be 1 */
-	xa_alloc_index(&xa0, 1, GFP_KERNEL);
-	xa_erase_index(&xa0, 1);
+	/* The next assigned ID should be base+1 */
+	xa_alloc_index(xa, base + 1, GFP_KERNEL);
+	xa_erase_index(xa, base + 1);
 
 	/* Storing a value should mark it used */
-	xa_store_index(&xa0, 1, GFP_KERNEL);
-	xa_alloc_index(&xa0, 2, GFP_KERNEL);
+	xa_store_index(xa, base + 1, GFP_KERNEL);
+	xa_alloc_index(xa, base + 2, GFP_KERNEL);
 
-	/* If we then erase 0, it should be free */
-	xa_erase_index(&xa0, 0);
-	xa_alloc_index(&xa0, 0, GFP_KERNEL);
+	/* If we then erase base, it should be free */
+	xa_erase_index(xa, base);
+	xa_alloc_index(xa, base, GFP_KERNEL);
 
-	xa_erase_index(&xa0, 1);
-	xa_erase_index(&xa0, 2);
+	xa_erase_index(xa, base + 1);
+	xa_erase_index(xa, base + 2);
 
 	for (i = 1; i < 5000; i++) {
-		xa_alloc_index(&xa0, i, GFP_KERNEL);
+		xa_alloc_index(xa, base + i, GFP_KERNEL);
 	}
 
-	xa_destroy(&xa0);
+	xa_destroy(xa);
 
-	id = 0xfffffffeU;
-	XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_index(id),
+	/* Check that we fail properly at the limit of allocation */
+	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(UINT_MAX - 1),
+				XA_LIMIT(UINT_MAX - 1, UINT_MAX),
 				GFP_KERNEL) != 0);
-	XA_BUG_ON(&xa0, id != 0xfffffffeU);
-	XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_index(id),
+	XA_BUG_ON(xa, id != 0xfffffffeU);
+	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(UINT_MAX),
+				XA_LIMIT(UINT_MAX - 1, UINT_MAX),
 				GFP_KERNEL) != 0);
-	XA_BUG_ON(&xa0, id != 0xffffffffU);
-	XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_index(id),
-				GFP_KERNEL) != -ENOSPC);
-	XA_BUG_ON(&xa0, id != 0xffffffffU);
-	xa_destroy(&xa0);
-
-	id = 10;
-	XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, 5, xa_mk_index(id),
-				GFP_KERNEL) != -ENOSPC);
-	XA_BUG_ON(&xa0, xa_store_index(&xa0, 3, GFP_KERNEL) != 0);
-	XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, 5, xa_mk_index(id),
-				GFP_KERNEL) != -ENOSPC);
-	xa_erase_index(&xa0, 3);
-	XA_BUG_ON(&xa0, !xa_empty(&xa0));
+	XA_BUG_ON(xa, id != 0xffffffffU);
+	id = 3;
+	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(0),
+				XA_LIMIT(UINT_MAX - 1, UINT_MAX),
+				GFP_KERNEL) != -EBUSY);
+	XA_BUG_ON(xa, id != 3);
+	xa_destroy(xa);
+
+	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5),
+				GFP_KERNEL) != -EBUSY);
+	XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5),
+				GFP_KERNEL) != -EBUSY);
+	xa_erase_index(xa, 3);
+	XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static noinline void check_xa_alloc_2(struct xarray *xa, unsigned int base)
+{
+	unsigned int i, id;
+	unsigned long index;
+	void *entry;
+
+	/* Allocate and free a NULL and check xa_empty() behaves */
+	XA_BUG_ON(xa, !xa_empty(xa));
+	XA_BUG_ON(xa, xa_alloc(xa, &id, NULL, xa_limit_32b, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != base);
+	XA_BUG_ON(xa, xa_empty(xa));
+	XA_BUG_ON(xa, xa_erase(xa, id) != NULL);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	/* Ditto, but check destroy instead of erase */
+	XA_BUG_ON(xa, !xa_empty(xa));
+	XA_BUG_ON(xa, xa_alloc(xa, &id, NULL, xa_limit_32b, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != base);
+	XA_BUG_ON(xa, xa_empty(xa));
+	xa_destroy(xa);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	for (i = base; i < base + 10; i++) {
+		XA_BUG_ON(xa, xa_alloc(xa, &id, NULL, xa_limit_32b,
+					GFP_KERNEL) != 0);
+		XA_BUG_ON(xa, id != i);
+	}
+
+	XA_BUG_ON(xa, xa_store(xa, 3, xa_mk_index(3), GFP_KERNEL) != NULL);
+	XA_BUG_ON(xa, xa_store(xa, 4, xa_mk_index(4), GFP_KERNEL) != NULL);
+	XA_BUG_ON(xa, xa_store(xa, 4, NULL, GFP_KERNEL) != xa_mk_index(4));
+	XA_BUG_ON(xa, xa_erase(xa, 5) != NULL);
+	XA_BUG_ON(xa, xa_alloc(xa, &id, NULL, xa_limit_32b, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != 5);
+
+	xa_for_each(xa, index, entry) {
+		xa_erase_index(xa, index);
+	}
+
+	for (i = base; i < base + 9; i++) {
+		XA_BUG_ON(xa, xa_erase(xa, i) != NULL);
+		XA_BUG_ON(xa, xa_empty(xa));
+	}
+	XA_BUG_ON(xa, xa_erase(xa, 8) != NULL);
+	XA_BUG_ON(xa, xa_empty(xa));
+	XA_BUG_ON(xa, xa_erase(xa, base + 9) != NULL);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	xa_destroy(xa);
+}
+
+static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base)
+{
+	struct xa_limit limit = XA_LIMIT(1, 0x3fff);
+	u32 next = 0;
+	unsigned int i, id;
+	unsigned long index;
+	void *entry;
+
+	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(1), limit,
+				&next, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != 1);
+
+	next = 0x3ffd;
+	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(0x3ffd), limit,
+				&next, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != 0x3ffd);
+	xa_erase_index(xa, 0x3ffd);
+	xa_erase_index(xa, 1);
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	for (i = 0x3ffe; i < 0x4003; i++) {
+		if (i < 0x4000)
+			entry = xa_mk_index(i);
+		else
+			entry = xa_mk_index(i - 0x3fff);
+		XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, entry, limit,
+					&next, GFP_KERNEL) != (id == 1));
+		XA_BUG_ON(xa, xa_mk_index(id) != entry);
+	}
+
+	/* Check wrap-around is handled correctly */
+	if (base != 0)
+		xa_erase_index(xa, base);
+	xa_erase_index(xa, base + 1);
+	next = UINT_MAX;
+	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(UINT_MAX),
+				xa_limit_32b, &next, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != UINT_MAX);
+	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(base),
+				xa_limit_32b, &next, GFP_KERNEL) != 1);
+	XA_BUG_ON(xa, id != base);
+	XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(base + 1),
+				xa_limit_32b, &next, GFP_KERNEL) != 0);
+	XA_BUG_ON(xa, id != base + 1);
+
+	xa_for_each(xa, index, entry)
+		xa_erase_index(xa, index);
+
+	XA_BUG_ON(xa, !xa_empty(xa));
+}
+
+static DEFINE_XARRAY_ALLOC(xa0);
+static DEFINE_XARRAY_ALLOC1(xa1);
+
+static noinline void check_xa_alloc(void)
+{
+	check_xa_alloc_1(&xa0, 0);
+	check_xa_alloc_1(&xa1, 1);
+	check_xa_alloc_2(&xa0, 0);
+	check_xa_alloc_2(&xa1, 1);
+	check_xa_alloc_3(&xa0, 0);
+	check_xa_alloc_3(&xa1, 1);
 }
 
 static noinline void __check_store_iter(struct xarray *xa, unsigned long start,
@@ -1194,9 +1346,8 @@ static void check_align_1(struct xarray *xa, char *name)
 	void *entry;
 
 	for (i = 0; i < 8; i++) {
-		id = 0;
-		XA_BUG_ON(xa, xa_alloc(xa, &id, UINT_MAX, name + i, GFP_KERNEL)
-				!= 0);
+		XA_BUG_ON(xa, xa_alloc(xa, &id, name + i, xa_limit_32b,
+					GFP_KERNEL) != 0);
 		XA_BUG_ON(xa, id != i);
 	}
 	xa_for_each(xa, index, entry)
@@ -1204,6 +1355,30 @@ static void check_align_1(struct xarray *xa, char *name)
 	xa_destroy(xa);
 }
 
+/*
+ * We should always be able to store without allocating memory after
+ * reserving a slot.
+ */
+static void check_align_2(struct xarray *xa, char *name)
+{
+	int i;
+
+	XA_BUG_ON(xa, !xa_empty(xa));
+
+	for (i = 0; i < 8; i++) {
+		XA_BUG_ON(xa, xa_store(xa, 0, name + i, GFP_KERNEL) != NULL);
+		xa_erase(xa, 0);
+	}
+
+	for (i = 0; i < 8; i++) {
+		XA_BUG_ON(xa, xa_reserve(xa, 0, GFP_KERNEL) != 0);
+		XA_BUG_ON(xa, xa_store(xa, 0, name + i, 0) != NULL);
+		xa_erase(xa, 0);
+	}
+
+	XA_BUG_ON(xa, !xa_empty(xa));
+}
+
 static noinline void check_align(struct xarray *xa)
 {
 	char name[] = "Motorola 68000";
@@ -1212,7 +1387,7 @@ static noinline void check_align(struct xarray *xa)
 	check_align_1(xa, name + 1);
 	check_align_1(xa, name + 2);
 	check_align_1(xa, name + 3);
-//	check_align_2(xa, name);
+	check_align_2(xa, name);
 }
 
 static LIST_HEAD(shadow_nodes);
@@ -1354,6 +1529,7 @@ static int xarray_checks(void)
 	check_xas_erase(&array);
 	check_cmpxchg(&array);
 	check_reserve(&array);
+	check_reserve(&xa0);
 	check_multi_store(&array);
 	check_xa_alloc();
 	check_find(&array);

--- a/lib/xarray.c
+++ b/lib/xarray.c

@@ -57,6 +57,11 @@ static inline bool xa_track_free(const struct xarray *xa)
 	return xa->xa_flags & XA_FLAGS_TRACK_FREE;
 }
 
+static inline bool xa_zero_busy(const struct xarray *xa)
+{
+	return xa->xa_flags & XA_FLAGS_ZERO_BUSY;
+}
+
 static inline void xa_mark_set(struct xarray *xa, xa_mark_t mark)
 {
 	if (!(xa->xa_flags & XA_FLAGS_MARK(mark)))
@@ -432,6 +437,8 @@ static void xas_shrink(struct xa_state *xas)
 			break;
 		if (!xa_is_node(entry) && node->shift)
 			break;
+		if (xa_is_zero(entry) && xa_zero_busy(xa))
+			entry = NULL;
 		xas->xa_node = XAS_BOUNDS;
 
 		RCU_INIT_POINTER(xa->xa_head, entry);
@@ -628,6 +635,8 @@ static void *xas_create(struct xa_state *xas, bool allow_root)
 	if (xas_top(node)) {
 		entry = xa_head_locked(xa);
 		xas->xa_node = NULL;
+		if (!entry && xa_zero_busy(xa))
+			entry = XA_ZERO_ENTRY;
 		shift = xas_expand(xas, entry);
 		if (shift < 0)
 			return NULL;
@@ -758,10 +767,12 @@ void *xas_store(struct xa_state *xas, void *entry)
 	void *first, *next;
 	bool value = xa_is_value(entry);
 
-	if (entry)
-		first = xas_create(xas, !xa_is_node(entry));
-	else
+	if (entry) {
+		bool allow_root = !xa_is_node(entry) && !xa_is_zero(entry);
+		first = xas_create(xas, allow_root);
+	} else {
 		first = xas_load(xas);
+	}
 
 	if (xas_invalid(xas))
 		return first;
@@ -791,7 +802,7 @@ void *xas_store(struct xa_state *xas, void *entry)
 		 * entry is set to NULL.
 		 */
 		rcu_assign_pointer(*slot, entry);
-		if (xa_is_node(next))
+		if (xa_is_node(next) && (!node || node->shift))
 			xas_free_nodes(xas, xa_to_node(next));
 		if (!node)
 			break;
@@ -1294,13 +1305,12 @@ static void *xas_result(struct xa_state *xas, void *curr)
  * @xa: XArray.
  * @index: Index into array.
  *
- * If the entry at this index is a multi-index entry then all indices will
- * be erased, and the entry will no longer be a multi-index entry.
- * This function expects the xa_lock to be held on entry.
+ * After this function returns, loading from @index will return %NULL.
+ * If the index is part of a multi-index entry, all indices will be erased
+ * and none of the entries will be part of a multi-index entry.
  *
- * Context: Any context.  Expects xa_lock to be held on entry.  May
- * release and reacquire xa_lock if @gfp flags permit.
- * Return: The old entry at this index.
+ * Context: Any context.  Expects xa_lock to be held on entry.
+ * Return: The entry which used to be at this index.
  */
 void *__xa_erase(struct xarray *xa, unsigned long index)
 {
@@ -1314,9 +1324,9 @@ EXPORT_SYMBOL(__xa_erase);
  * @xa: XArray.
  * @index: Index of entry.
  *
- * This function is the equivalent of calling xa_store() with %NULL as
- * the third argument.  The XArray does not need to allocate memory, so
- * the user does not need to provide GFP flags.
+ * After this function returns, loading from @index will return %NULL.
+ * If the index is part of a multi-index entry, all indices will be erased
+ * and none of the entries will be part of a multi-index entry.
  *
  * Context: Any context.  Takes and releases the xa_lock.
  * Return: The entry which used to be at this index.
@@ -1421,16 +1431,12 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
 	if (WARN_ON_ONCE(xa_is_advanced(entry)))
 		return XA_ERROR(-EINVAL);
-	if (xa_track_free(xa) && !entry)
-		entry = XA_ZERO_ENTRY;
 
 	do {
 		curr = xas_load(&xas);
-		if (curr == XA_ZERO_ENTRY)
-			curr = NULL;
 		if (curr == old) {
 			xas_store(&xas, entry);
-			if (xa_track_free(xa))
+			if (xa_track_free(xa) && entry && !curr)
 				xas_clear_mark(&xas, XA_FREE_MARK);
 		}
 	} while (__xas_nomem(&xas, gfp));
@@ -1452,7 +1458,7 @@ EXPORT_SYMBOL(__xa_cmpxchg);
  *
  * Context: Any context.  Expects xa_lock to be held on entry.  May
  * release and reacquire xa_lock if @gfp flags permit.
- * Return: 0 if the store succeeded.  -EEXIST if another entry was present.
+ * Return: 0 if the store succeeded.  -EBUSY if another entry was present.
  * -ENOMEM if memory could not be allocated.
  */
 int __xa_insert(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
@@ -1472,7 +1478,7 @@ int __xa_insert(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 			if (xa_track_free(xa))
 				xas_clear_mark(&xas, XA_FREE_MARK);
 		} else {
-			xas_set_err(&xas, -EEXIST);
+			xas_set_err(&xas, -EBUSY);
 		}
 	} while (__xas_nomem(&xas, gfp));
@@ -1480,42 +1486,6 @@ int __xa_insert(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
 }
 EXPORT_SYMBOL(__xa_insert);
 
-/**
- * __xa_reserve() - Reserve this index in the XArray.
- * @xa: XArray.
- * @index: Index into array.
- * @gfp: Memory allocation flags.
- *
- * Ensures there is somewhere to store an entry at @index in the array.
- * If there is already something stored at @index, this function does
- * nothing.  If there was nothing there, the entry is marked as reserved.
- * Loading from a reserved entry returns a %NULL pointer.
- *
- * If you do not use the entry that you have reserved, call xa_release()
- * or xa_erase() to free any unnecessary memory.
- *
- * Context: Any context.  Expects the xa_lock to be held on entry.  May
- * release the lock, sleep and reacquire the lock if the @gfp flags permit.
- * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
- */
-int __xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
-{
-	XA_STATE(xas, xa, index);
-	void *curr;
-
-	do {
-		curr = xas_load(&xas);
-		if (!curr) {
-			xas_store(&xas, XA_ZERO_ENTRY);
-			if (xa_track_free(xa))
-				xas_clear_mark(&xas, XA_FREE_MARK);
-		}
-	} while (__xas_nomem(&xas, gfp));
-
-	return xas_error(&xas);
-}
-EXPORT_SYMBOL(__xa_reserve);
-
 #ifdef CONFIG_XARRAY_MULTI
 static void xas_set_range(struct xa_state *xas, unsigned long first,
 		unsigned long last)
@@ -1607,23 +1577,23 @@ EXPORT_SYMBOL(xa_store_range);
  * __xa_alloc() - Find somewhere to store this entry in the XArray.
  * @xa: XArray.
  * @id: Pointer to ID.
- * @max: Maximum ID to allocate (inclusive).
+ * @limit: Range for allocated ID.
  * @entry: New entry.
  * @gfp: Memory allocation flags.
  *
- * Allocates an unused ID in the range specified by @id and @max.
- * Updates the @id pointer with the index, then stores the entry at that
- * index.  A concurrent lookup will not see an uninitialised @id.
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
  *
  * Context: Any context.  Expects xa_lock to be held on entry.  May
  * release and reacquire xa_lock if @gfp flags permit.
- * Return: 0 on success, -ENOMEM if memory allocation fails or -ENOSPC if
- * there is no more space in the XArray.
+ * Return: 0 on success, -ENOMEM if memory could not be allocated or
+ * -EBUSY if there are no free entries in @limit.
  */
-int __xa_alloc(struct xarray *xa, u32 *id, u32 max, void *entry, gfp_t gfp)
+int __xa_alloc(struct xarray *xa, u32 *id, void *entry,
+		struct xa_limit limit, gfp_t gfp)
 {
 	XA_STATE(xas, xa, 0);
-	int err;
 
 	if (WARN_ON_ONCE(xa_is_advanced(entry)))
 		return -EINVAL;
@@ -1634,21 +1604,70 @@ int __xa_alloc(struct xarray *xa, u32 *id, u32 max, void *entry, gfp_t gfp)
 		entry = XA_ZERO_ENTRY;
 
 	do {
-		xas.xa_index = *id;
-		xas_find_marked(&xas, max, XA_FREE_MARK);
+		xas.xa_index = limit.min;
+		xas_find_marked(&xas, limit.max, XA_FREE_MARK);
 		if (xas.xa_node == XAS_RESTART)
-			xas_set_err(&xas, -ENOSPC);
+			xas_set_err(&xas, -EBUSY);
+		else
+			*id = xas.xa_index;
 		xas_store(&xas, entry);
 		xas_clear_mark(&xas, XA_FREE_MARK);
 	} while (__xas_nomem(&xas, gfp));
 
-	err = xas_error(&xas);
-	if (!err)
-		*id = xas.xa_index;
-	return err;
+	return xas_error(&xas);
 }
 EXPORT_SYMBOL(__xa_alloc);
 
+/**
+ * __xa_alloc_cyclic() - Find somewhere to store this entry in the XArray.
+ * @xa: XArray.
+ * @id: Pointer to ID.
+ * @entry: New entry.
+ * @limit: Range of allocated ID.
+ * @next: Pointer to next ID to allocate.
+ * @gfp: Memory allocation flags.
+ *
+ * Finds an empty entry in @xa between @limit.min and @limit.max,
+ * stores the index into the @id pointer, then stores the entry at
+ * that index.  A concurrent lookup will not see an uninitialised @id.
+ * The search for an empty entry will start at @next and will wrap
+ * around if necessary.
+ *
+ * Context: Any context.  Expects xa_lock to be held on entry.  May
+ * release and reacquire xa_lock if @gfp flags permit.
+ * Return: 0 if the allocation succeeded without wrapping.  1 if the
+ * allocation succeeded after wrapping, -ENOMEM if memory could not be
+ * allocated or -EBUSY if there are no free entries in @limit.
+ */
+int __xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
+		struct xa_limit limit, u32 *next, gfp_t gfp)
+{
+	u32 min = limit.min;
+	int ret;
+
+	limit.min = max(min, *next);
+	ret = __xa_alloc(xa, id, entry, limit, gfp);
+	if ((xa->xa_flags & XA_FLAGS_ALLOC_WRAPPED) && ret == 0) {
+		xa->xa_flags &= ~XA_FLAGS_ALLOC_WRAPPED;
+		ret = 1;
+	}
+
+	if (ret < 0 && limit.min > min) {
+		limit.min = min;
+		ret = __xa_alloc(xa, id, entry, limit, gfp);
+		if (ret == 0)
+			ret = 1;
+	}
+
+	if (ret >= 0) {
+		*next = *id + 1;
+		if (*next == 0)
+			xa->xa_flags |= XA_FLAGS_ALLOC_WRAPPED;
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__xa_alloc_cyclic);
+
 /**
  * __xa_set_mark() - Set this mark on this entry while locked.
  * @xa: XArray.
@@ -1943,6 +1962,8 @@ void xa_destroy(struct xarray *xa)
 	entry = xa_head_locked(xa);
 	RCU_INIT_POINTER(xa->xa_head, NULL);
 	xas_init_marks(&xas);
+	if (xa_zero_busy(xa))
+		xa_mark_clear(xa, XA_FREE_MARK);
 	/* lockdep checks we're still holding the lock in xas_free_nodes() */
 	if (xa_is_node(entry))
 		xas_free_nodes(&xas, xa_to_node(entry));
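
To make the XA_FLAGS_ALLOC_WRAPPED bookkeeping in __xa_alloc_cyclic() concrete, here is the sequence the testsuite exercises, as a sketch (xa and entry are placeholders):

	u32 id, next = UINT_MAX;
	int ret;

	/* First call: allocates index UINT_MAX and returns 0.  *next then
	 * wraps to 0, which sets XA_FLAGS_ALLOC_WRAPPED on the array. */
	ret = xa_alloc_cyclic(&xa, &id, entry, xa_limit_32b, &next, GFP_KERNEL);

	/* Second call: allocates the lowest free index, consumes the flag
	 * and returns 1 so the caller can see that the ID space wrapped. */
	ret = xa_alloc_cyclic(&xa, &id, entry, xa_limit_32b, &next, GFP_KERNEL);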