mm/zswap.c: update zsmalloc in comment to zbud
zswap used zsmalloc before but now uses zbud. However, some comments still say it uses zsmalloc. Fix these trivial problems.

Signed-off-by: SeongJae Park <sj38.park@gmail.com>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 6b4525164e
commit 6335b19344
1 changed file with 2 additions and 2 deletions
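For context on the change: zswap compresses a page and then parks the result in a zbud pool, where each stored object is identified by an unsigned long handle. Below is a minimal sketch of that allocate/map/copy/unmap pattern, assuming the zbud API of this era (zbud_alloc, zbud_map, zbud_unmap); the helper name is hypothetical and the error handling and surrounding zswap bookkeeping are omitted.

#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/zbud.h>

/*
 * Sketch only: stores an already-compressed buffer of dlen bytes in the
 * given zbud pool and returns the handle that names the allocation.
 */
static int zswap_store_compressed(struct zbud_pool *pool, const u8 *src,
                                  unsigned int dlen, unsigned long *handle)
{
        u8 *buf;
        int ret;

        /* reserve dlen bytes in the pool; *handle names the allocation */
        ret = zbud_alloc(pool, dlen, __GFP_NORETRY | __GFP_NOWARN, handle);
        if (ret)
                return ret;

        /* map the handle to a kernel address, copy the data in, unmap */
        buf = zbud_map(pool, *handle);
        memcpy(buf, src, dlen);
        zbud_unmap(pool, *handle);

        return 0;
}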
@@ -165,7 +165,7 @@ static void zswap_comp_exit(void)
  *            be held while changing the refcount. Since the lock must
  *            be held, there is no reason to also make refcount atomic.
  * offset - the swap offset for the entry. Index into the red-black tree.
- * handle - zsmalloc allocation handle that stores the compressed page data
+ * handle - zbud allocation handle that stores the compressed page data
  * length - the length in bytes of the compressed page data. Needed during
  *          decompression
  */
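The comment touched by this first hunk documents the fields of struct zswap_entry. As a rough sketch of how those fields fit together (the field set follows the comment above; exact types are an assumption, not copied from the patched file):

#include <linux/rbtree.h>
#include <linux/types.h>

struct zswap_entry {
        struct rb_node rbnode;  /* node in the per-swap-type red-black tree */
        pgoff_t offset;         /* swap offset; index into the red-black tree */
        int refcount;           /* tree lock held while changing, so not atomic */
        unsigned int length;    /* length in bytes of the compressed page data */
        unsigned long handle;   /* zbud allocation handle for the data */
};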
@@ -282,7 +282,7 @@ static void zswap_rb_erase(struct rb_root *root, struct zswap_entry *entry)
 }
 
 /*
- * Carries out the common pattern of freeing and entry's zsmalloc allocation,
+ * Carries out the common pattern of freeing and entry's zbud allocation,
  * freeing the entry itself, and decrementing the number of stored pages.
  */
 static void zswap_free_entry(struct zswap_tree *tree,
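The second hunk fixes the comment above zswap_free_entry(). A hedged sketch of the pattern that comment describes, building on the struct zswap_entry sketched earlier; the tree structure, the stored-page counter, and the cache-free helper are assumptions for illustration, not the verbatim kernel code:

#include <linux/atomic.h>
#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/zbud.h>

/* Trimmed to the members this sketch touches (an assumption). */
struct zswap_tree {
        struct rb_root rbroot;
        spinlock_t lock;
        struct zbud_pool *pool;
};

static atomic_t zswap_stored_pages = ATOMIC_INIT(0);

/* Assumed helper: the real code frees into a dedicated kmem cache. */
static void zswap_entry_cache_free(struct zswap_entry *entry)
{
        kfree(entry);
}

static void zswap_free_entry(struct zswap_tree *tree,
                             struct zswap_entry *entry)
{
        /* give the compressed data's zbud allocation back to the pool */
        zbud_free(tree->pool, entry->handle);
        /* free the entry itself */
        zswap_entry_cache_free(entry);
        /* account for one fewer stored page */
        atomic_dec(&zswap_stored_pages);
}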