mm: munlock: remove redundant get_page/put_page pair on the fast path
author		Vlastimil Babka <vbabka@suse.cz>
		Wed, 11 Sep 2013 21:22:33 +0000 (14:22 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
		Wed, 11 Sep 2013 22:58:00 +0000 (15:58 -0700)
The performance of the fast path in munlock_vma_range() can be further
improved by avoiding the atomic ops of a redundant get_page()/put_page()
pair.

When calling get_page() during page isolation, we already hold the pin
from follow_page_mask().  This pin will then be returned by
__pagevec_lru_add(), after which we do not reference the pages anymore.

After this patch, an 8% speedup was measured for munlocking a 56GB
memory area with THP disabled.
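
To make the pin handover concrete, here is a minimal userspace C sketch
of the refcounting pattern (fake_page, pin() and unpin() are made-up
stand-ins for struct page and its refcounting, not the kernel API):
instead of taking an extra reference for the pagevec and dropping the
original pin in a later phase, the pin already held from
follow_page_mask() is handed over to the pagevec and released by
__pagevec_lru_add().

    #include <stdatomic.h>
    #include <stdio.h>

    struct fake_page { atomic_int refcount; };

    static void pin(struct fake_page *p)
    {
            atomic_fetch_add(&p->refcount, 1);
    }

    static void unpin(struct fake_page *p)
    {
            if (atomic_fetch_sub(&p->refcount, 1) == 1)
                    printf("page freed\n");
    }

    /* Before the patch: isolation pinned the page again, and a later
     * phase dropped the follow_page_mask() pin -- two atomic ops per
     * fast-path page that cancel out. */
    static void isolate_old(struct fake_page *p)
    {
            pin(p);         /* get_page() for the pagevec */
            /* ... putback drops the pagevec's pin ... */
            unpin(p);       /* phase 4: pin from follow_page_mask() */
    }

    /* After the patch: the pagevec simply inherits the caller's pin;
     * the putback in phase 3 releases it.  No atomic ops here. */
    static void isolate_new(struct fake_page *p)
    {
            (void)p;        /* ownership transfer only */
    }

    int main(void)
    {
            struct fake_page p = { .refcount = 1 }; /* follow_page_mask() pin */
            isolate_new(&p);
            unpin(&p);      /* done by __pagevec_lru_add() in the kernel */
            return 0;
    }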

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Jörn Engel <joern@logfs.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/mlock.c

index abdc612b042dcfc18de24981fab5774f5e1956f6..19a934dce5d626464a8832cfc90b735a3e10730b 100644
@@ -303,8 +303,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
                        if (PageLRU(page)) {
                                lruvec = mem_cgroup_page_lruvec(page, zone);
                                lru = page_lru(page);
-
-                               get_page(page);
+                               /*
+                                * We already have pin from follow_page_mask()
+                                * so we can spare the get_page() here.
+                                */
                                ClearPageLRU(page);
                                del_page_from_lru_list(page, lruvec, lru);
                        } else {
@@ -336,25 +338,25 @@ skip_munlock:
                        lock_page(page);
                        if (!__putback_lru_fast_prepare(page, &pvec_putback,
                                        &pgrescued)) {
-                               /* Slow path */
+                               /*
+                                * Slow path. We don't want to lose the last
+                                * pin before unlock_page()
+                                */
+                               get_page(page); /* for putback_lru_page() */
                                __munlock_isolated_page(page);
                                unlock_page(page);
+                               put_page(page); /* from follow_page_mask() */
                        }
                }
        }
 
-       /* Phase 3: page putback for pages that qualified for the fast path */
+       /*
+        * Phase 3: page putback for pages that qualified for the fast path
+        * This will also call put_page() to return pin from follow_page_mask()
+        */
        if (pagevec_count(&pvec_putback))
                __putback_lru_fast(&pvec_putback, pgrescued);
 
-       /* Phase 4: put_page to return pin from follow_page_mask() */
-       for (i = 0; i < nr; i++) {
-               struct page *page = pvec->pages[i];
-
-               if (page)
-                       put_page(page);
-       }
-
        pagevec_reinit(pvec);
 }
 
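The slow-path hunk above also encodes an ordering rule worth spelling
out: the new get_page() must come before unlock_page(), because
__munlock_isolated_page() ends in putback_lru_page(), which drops a
reference; without the extra pin, the follow_page_mask() pin could be
the last one, and the page would be freed while still locked.  A
minimal sketch of that invariant (fake_page and the helpers are again
made-up stand-ins, not the kernel API):

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    struct fake_page {
            atomic_int refcount;
            bool locked;
    };

    static void unpin(struct fake_page *p)
    {
            if (atomic_fetch_sub(&p->refcount, 1) == 1) {
                    /* Last pin: the page is freed here, so it must
                     * not still be locked at this point. */
                    assert(!p->locked);
            }
    }

    /* Slow path as in the patch: pin first, let the munlock work
     * consume one reference, unlock, then return the caller's pin. */
    static void slow_path(struct fake_page *p)
    {
            atomic_fetch_add(&p->refcount, 1);      /* get_page() */
            unpin(p);       /* putback_lru_page() drops a pin */
            p->locked = false;                      /* unlock_page() */
            unpin(p);       /* pin from follow_page_mask() */
    }

    int main(void)
    {
            /* One pin from follow_page_mask(), page still locked. */
            struct fake_page p = { .refcount = 1, .locked = true };
            slow_path(&p);
            /* Removing the get_page() above would trip the assert:
             * the last pin would be dropped before unlock_page(). */
            return 0;
    }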