mm/huge_memory.c: __split_huge_page() use atomic ClearPageDirty()
Swapping load on huge=always tmpfs (with khugepaged tuned up to be very eager, but I'm not sure that is relevant) soon hung uninterruptibly, waiting for page lock in shmem_getpage_gfp()'s find_lock_entry(), most often when "cp -a" was trying to write to a smallish file. Debug showed that the page in question was not locked, and page->mapping NULL by now, but page->index consistent with having been in a huge page before.

Reproduced in minutes on a 4.15 kernel, even with 4.17's 605ca5ede7 ("mm/huge_memory.c: reorder operations in __split_huge_page_tail()") added in; but took hours to reproduce on a 4.17 kernel (no idea why).

The culprit proved to be the __ClearPageDirty() on tails beyond i_size in __split_huge_page(): the non-atomic __bit operation may have been safe when 4.8's baa355fd33 ("thp: file pages support for split_huge_page()") introduced it, but liable to erase PageWaiters after 4.10's 6290602709 ("mm: add PageWaiters indicating tasks are waiting for a page bit").

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1805291841070.3197@eggly.anvils
Fixes: 6290602709 ("mm: add PageWaiters indicating tasks are waiting for a page bit")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent dd52cb8790
commit 2d077d4b59

1 changed file with 1 addition and 1 deletion
@@ -2431,7 +2431,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
 		if (head[i].index >= end) {
-			__ClearPageDirty(head + i);
+			ClearPageDirty(head + i);
 			__delete_from_page_cache(head + i, NULL);
 			if (IS_ENABLED(CONFIG_SHMEM) && PageSwapBacked(head))
 				shmem_uncharge(head->mapping->host, 1);