mm,vmacache: optimize overflow system-wide flushing
author Davidlohr Bueso <davidlohr@hp.com>
Wed, 4 Jun 2014 23:06:47 +0000 (16:06 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Wed, 4 Jun 2014 23:53:57 +0000 (16:53 -0700)
For single threaded workloads, we can avoid flushing and iterating through
the entire list of tasks, making the whole function a lot faster and
requiring only a single atomic read of mm_users.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/vmacache.c

index 658ed3b3e38d32a2c210d43163554cb23e6ac173..9f25af825dec6d929348e16db85a6c17ab00f31d 100644 (file)
@@ -17,6 +17,16 @@ void vmacache_flush_all(struct mm_struct *mm)
 {
        struct task_struct *g, *p;
 
+       /*
+        * Single threaded tasks need not iterate the entire
+        * list of processes. We can avoid the flushing as well
+        * since the mm's seqnum was already increased and we
+        * don't have to worry about other threads' seqnums.
+        * Current's flush will occur upon the next lookup.
+        */
+       if (atomic_read(&mm->mm_users) == 1)
+               return;
+
        rcu_read_lock();
        for_each_process_thread(g, p) {
                /*
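For context, the hunk above is truncated inside the loop body. A minimal sketch of how the whole function plausibly reads after this patch; the early return and the for_each_process_thread() loop come from the hunk, while the loop body (the mm comparison and the vmacache_flush() call) is not shown above and is supplied here as an assumption for illustration:

	void vmacache_flush_all(struct mm_struct *mm)
	{
		struct task_struct *g, *p;

		/*
		 * Fast path added by this patch: with a single user,
		 * no other thread can hold stale vmacache entries for
		 * this mm, and the bumped seqnum invalidates current's
		 * cache on the next lookup.
		 */
		if (atomic_read(&mm->mm_users) == 1)
			return;

		rcu_read_lock();
		for_each_process_thread(g, p) {
			/*
			 * Assumed loop body (not shown in the hunk):
			 * only threads sharing this mm can hold stale
			 * entries, so flush just those.
			 */
			if (mm == p->mm)
				vmacache_flush(p);
		}
		rcu_read_unlock();
	}

The design point is that the common single-threaded case now costs one atomic_read() instead of an RCU-protected walk of every task in the system.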