mm/slab: reverse iteration on find_mergeable()
author    Joonsoo Kim <iamjoonsoo.kim@lge.com>
          Wed, 10 Dec 2014 23:42:18 +0000 (15:42 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Thu, 11 Dec 2014 01:41:04 +0000 (17:41 -0800)
Unlike SLUB, in SLAB an object does not always start at the beginning of
the slab.  This caused an alignment problem once slab merging was
supported by commit 12220dea07f1 ("mm/slab: support slab merge").  An
alignment-mismatch check was introduced ("mm/slab: fix unalignment
problem on Malta with EVA due to slab merge") to prevent merging in this
case.
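
For reference, that alignment-compatibility check sits in
find_mergeable(); the following is a sketch of it, not the verbatim
hunk:

	/*
	 * Skip a candidate cache whose object size is not a multiple
	 * of the requested alignment: merging into it could leave
	 * objects misaligned on SLAB, where an object need not start
	 * at the beginning of the slab.
	 */
	if ((s->size & ~(align - 1)) != s->size)
		continue;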

This has the undesirable result that merging happens between
infrequently used kmem_caches.  For example, kmem_caches with an object
size of 256 bytes are merged into pool_workqueue rather than
kmalloc-256, because the kmem_caches for kmalloc sit at the tail of the
list.

To prevent this situation, this patch reverses the iteration order in
find_mergeable() so that frequently used kmem_caches are considered
first.  This change helps merge a new kmem_cache into a frequently used
one, such as the kmalloc kmem_caches.
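
Newly created caches are added at the head of slab_caches (via
list_add() in mm/slab_common.c), so the boot-time kmalloc caches end up
toward the tail; walking the list in reverse therefore visits them
first.  For reference, list_for_each_entry_reverse() was defined in
include/linux/list.h at the time roughly as:

	/* Iterate from the tail of the list toward the head. */
	#define list_for_each_entry_reverse(pos, head, member)			\
		for (pos = list_entry((head)->prev, typeof(*pos), member);	\
		     &pos->member != (head);					\
		     pos = list_entry(pos->member.prev, typeof(*pos), member))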

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 06aeaf091f213af43ce9f73d8beffbd754a3bf51..2a3f5ff410cf1e684179ce3d7eb72e3d40102639 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -240,7 +240,7 @@ struct kmem_cache *find_mergeable(size_t size, size_t align,
        size = ALIGN(size, align);
        flags = kmem_cache_flags(size, flags, name, NULL);
 
-       list_for_each_entry(s, &slab_caches, list) {
+       list_for_each_entry_reverse(s, &slab_caches, list) {
                if (slab_unmergeable(s))
                        continue;
 