ksm: drain pagevecs to lru
It was hard to explain the page counts which were causing new LTP tests of KSM to fail: we need to drain the per-cpu pagevecs to LRU occasionally.

Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: CAI Qian <caiqian@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
parent 73ae31e598
commit 2919bfd075

1 changed file with 12 additions and 0 deletions

mm/ksm.c | 12 ++++++++++++
@@ -1296,6 +1296,18 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 	slot = ksm_scan.mm_slot;
 	if (slot == &ksm_mm_head) {
+		/*
+		 * A number of pages can hang around indefinitely on per-cpu
+		 * pagevecs, raised page count preventing write_protect_page
+		 * from merging them.  Though it doesn't really matter much,
+		 * it is puzzling to see some stuck in pages_volatile until
+		 * other activity jostles them out, and they also prevented
+		 * LTP's KSM test from succeeding deterministically; so drain
+		 * them here (here rather than on entry to ksm_do_scan(),
+		 * so we don't IPI too often when pages_to_scan is set low).
+		 */
+		lru_add_drain_all();
+
 		root_unstable_tree = RB_ROOT;
 
 		spin_lock(&ksm_mmlist_lock);