mm: don't drop a partial page in a zone's memory map size
In a zone's present pages number, account for all pages occupied by the memory map, including a partial one.

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f899b0adc6
commit f723215419
1 changed file with 2 additions and 1 deletion
mm/page_alloc.c
@@ -3378,7 +3378,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 		 * is used by this zone for memmap. This affects the watermark
 		 * and per-cpu initialisations
 		 */
-		memmap_pages = (size * sizeof(struct page)) >> PAGE_SHIFT;
+		memmap_pages =
+			PAGE_ALIGN(size * sizeof(struct page)) >> PAGE_SHIFT;
 		if (realsize >= memmap_pages) {
 			realsize -= memmap_pages;
 			printk(KERN_DEBUG
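The change replaces a truncating shift with PAGE_ALIGN() so that the page holding the tail of the memory map is counted as well. Below is a minimal userspace sketch of that arithmetic, not kernel code; the 4096-byte page size, PAGE_SHIFT of 12, the 56-byte struct page size, and the 1000-page zone size are assumed example values chosen only for illustration.

#include <stdio.h>

/* Assumed example values, not taken from any particular kernel config. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long size = 1000;		/* pages spanned by the zone (example) */
	unsigned long struct_page_size = 56;	/* assumed sizeof(struct page) */
	unsigned long bytes = size * struct_page_size;

	/* Old calculation: the shift truncates, dropping a partially used page. */
	unsigned long truncated = bytes >> PAGE_SHIFT;

	/* New calculation: round up to a whole page first, so the partial
	 * page occupied by the memory map is accounted for. */
	unsigned long rounded = PAGE_ALIGN(bytes) >> PAGE_SHIFT;

	printf("memmap bytes:      %lu\n", bytes);	/* 56000 */
	printf("truncated pages:   %lu\n", truncated);	/* 13 */
	printf("rounded-up pages:  %lu\n", rounded);	/* 14 */
	return 0;
}

With these example numbers the memory map needs 56000 bytes, i.e. 13 full pages plus a 2768-byte remainder; the truncating shift reports 13 pages, while the rounded-up calculation correctly reports 14.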