Documentation for /proc/sys/vm/*	kernel version 2.6.29
	(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
	(c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- block_dump
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- nr_hugepages
- nr_overcommit_hugepages
- nr_pdflush_threads
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- swappiness
- vfs_cache_pressure
- zone_reclaim_mode

==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the pdflush background writeback
daemon will start writeback.

If dirty_background_bytes is written, dirty_background_ratio becomes a function
of its value (dirty_background_bytes / the amount of dirtyable system memory).
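
A minimal sketch of switching from the ratio-based to the byte-based
threshold (the 100 MB value is an arbitrary example):

	echo $((100 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes
	cat /proc/sys/vm/dirty_background_bytes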

==============================================================

dirty_background_ratio

Contains, as a percentage of total system memory, the number of pages at which
the pdflush background writeback daemon will start writing out dirty data.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

If dirty_bytes is written, dirty_ratio becomes a function of its value
(dirty_bytes / the amount of dirtyable system memory).

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.
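
The two-page minimum can be computed from the system page size, for
example:

	echo $(( 2 * $(getconf PAGE_SIZE) ))	# smallest accepted dirty_bytes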

==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the pdflush daemons.  It is expressed in 100'ths of a second.
Data which has been dirty in-memory for longer than this interval will be
written out next time a pdflush daemon wakes up.
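
For example, to make data eligible for writeout after 30 seconds
(30 s = 3000 hundredths of a second):

	echo 3000 > /proc/sys/vm/dirty_expire_centisecs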

==============================================================

dirty_ratio

Contains, as a percentage of total system memory, the number of pages at which
a process which is generating disk writes will itself start writing out dirty
data.

==============================================================

dirty_writeback_centisecs

The pdflush writeback daemons will periodically wake up and write `old' data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.
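
For example, to wake the pdflush daemons every five seconds, or to
disable periodic writeback entirely:

	echo 500 > /proc/sys/vm/dirty_writeback_centisecs
	echo 0 > /proc/sys/vm/dirty_writeback_centisecs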

==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
	echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
	echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
	echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable, the
user should run `sync' first.
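
For example, to write out dirty data first and then drop all the clean
caches in one step:

	sync; echo 3 > /proc/sys/vm/drop_caches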

==============================================================

hugepages_treat_as_movable

This parameter is only useful when kernelcore= is specified at boot time to
create ZONE_MOVABLE for pages that may be reclaimed or migrated.  Huge pages
are not movable so are not normally allocated from ZONE_MOVABLE.  A non-zero
value written to hugepages_treat_as_movable allows huge pages to be allocated
from ZONE_MOVABLE.

Once enabled, ZONE_MOVABLE is treated as an area of memory that the huge
pages pool can easily grow or shrink within.  Assuming that no running
applications mlock() large amounts of memory, it is likely the huge pages
pool can grow to the size of ZONE_MOVABLE by repeatedly writing the desired
value into nr_hugepages and triggering page reclaim.
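
A minimal sketch, assuming the kernel was booted with a kernelcore=
parameter so that ZONE_MOVABLE exists (the pool size is an arbitrary
example):

	echo 1 > /proc/sys/vm/hugepages_treat_as_movable
	echo 64 > /proc/sys/vm/nr_hugepages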

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.
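
For example, to allow members of the group with gid 1001 (an arbitrary
example gid) to create such segments:

	echo 1001 > /proc/sys/vm/hugetlb_shm_group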

==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode".  All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap then
you probably should change the lowmem_reserve_ratio setting.

lowmem_reserve_ratio is an array.  You can see its values by reading this
file:
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: the number of elements is one fewer than the number of zones, because
the highest zone's value is not needed for the calculation below.

These values are not used directly, however.  The kernel calculates the
number of protection pages for each zone from them.  These are shown as an
array of protection pages in /proc/zoneinfo, as in the following example
from an x86-64 box.  Each zone has an array of protection pages like this:

-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
	:
	:
    numa_other   0
        protection: (0, 2004, 2004, 2004)
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protection values are added to the watermark when judging whether a
zone should be used for a page allocation or should instead be reclaimed.

In this example, if normal pages (index=2) are required of this DMA zone and
pages_high is used as the watermark, the kernel judges that this zone should
not be used because pages_free (1355) is smaller than watermark +
protection[2] (4 + 2004 = 2008).  If this protection value were 0, this zone
could be used for a normal page allocation.  If the request is for the DMA
zone (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
  = (total sum of present_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
  (should not be protected. = 0;)
(i > j):
  (not necessary, but looks 0)

The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others).
As in the expression above, these are reciprocals of the ratio: 256 means
1/256, so the number of protection pages becomes about 0.39% of the total
present pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).
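
A minimal worked example, assuming the higher zones on the node hold
513,024 present pages in total (a hypothetical figure chosen to match the
zoneinfo sample above):

	echo $((513024 / 256))		# -> 2004 protection pages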

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have.  Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
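
For example, to read the current limit and raise it (the new value is an
arbitrary example):

	cat /proc/sys/vm/max_map_count
	echo 131072 > /proc/sys/vm/max_map_count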

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a pages_min
value for each lowmem zone in the system.  Each lowmem zone gets
a number of reserved free pages based proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

==============================================================

min_slab_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  On zone reclaim
(i.e. when a fallback from the local zone occurs), slabs will be reclaimed
if more than this percentage of pages in a zone are reclaimable slab pages.
This ensures that slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

==============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to ensure that a minimal amount of local pages is still available
for file I/O even if the node is overallocated.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmaping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
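
For example, to reserve the first 64k of address space, as suggested
above:

	echo 65536 > /proc/sys/vm/mmap_min_addr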

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt
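
For example, to request a pool of 20 huge pages (an arbitrary example
size) and verify the result:

	echo 20 > /proc/sys/vm/nr_hugepages
	grep HugePages_Total /proc/meminfo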

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool.  The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_pdflush_threads

The current number of pdflush threads.  This value is read-only.
The value changes according to the number of dirty pages in the system.

When necessary, additional pdflush threads are created, one per second, up to
nr_pdflush_threads_max.

==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively.  Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.

==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
'where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for simplicity;
where applicable, read ZONE_DMA as ZONE_DMA32.)

In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as follows:
 ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, two types of order are possible.  Assume a 2-node NUMA
system; below are two possible zonelists for Node(0)'s GFP_KERNEL
allocations:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before all ZONE_NORMAL is exhausted.  This increases the
possibility of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to
be small.

Type (B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type (A) is called "Node" order.  Type (B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist, or
(2) if the DMA zone comprises more than 50% of the available memory, or
(3) if any node's DMA zone comprises more than 60% of its local memory and
    the amount of local memory is large enough.

Otherwise, "zone" order will be selected.  The default order is recommended
unless it is causing problems for your system/application.
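
For example, to select zone order explicitly, or to return to automatic
configuration:

	echo zone > /proc/sys/vm/numa_zonelist_order
	echo default > /proc/sys/vm/numa_zonelist_order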

==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, cpu, oom_adj score, and
name.  This is helpful to determine why the OOM killer was invoked
and to identify the rogue task that caused it.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 0.

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.
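
A minimal sketch of the resulting limit: with overcommit_memory set to 2
the kernel computes CommitLimit = SwapTotal + MemTotal * overcommit_ratio
/ 100, which is visible in /proc/meminfo:

	echo 2 > /proc/sys/vm/overcommit_memory
	echo 50 > /proc/sys/vm/overcommit_ratio
	grep -E 'CommitLimit|Committed_AS' /proc/meminfo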

==============================================================

page-cluster

page-cluster controls the number of pages which are written to swap in
a single attempt, i.e. the swap I/O size.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.
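
For example, to read the current setting and compute how many pages are
swapped in a single attempt (2^page-cluster):

	echo $((1 << $(cat /proc/sys/vm/page-cluster)))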

==============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
oom_killer.  Usually, the oom_killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy/cpusets, and those nodes run out of memory, one process may be
killed by the oom_killer.  No panic occurs in this case, because other
nodes' memory may still be free and the system as a whole may not yet be
in a fatal state.

If this is set to 2, the kernel always panics when an out-of-memory
condition occurs, even in the above-mentioned case.

The default value is 0.
1 and 2 are for failover of clustering.  Please select either
according to your policy of failover.

==============================================================

percpu_pagelist_fraction

This is the maximum fraction of pages in each zone (the high mark,
pcp->high) that can be allocated for any single per-cpu page list.  The
minimum value for this is 8; that is, we don't allow more than 1/8th of
the pages in each zone to be allocated in any single per_cpu_pagelist.
This entry only changes the value of hot per-cpu pagelists.  A user can
specify a number like 100 to allocate 1/100th of each zone to each
per-cpu page list.

The batch value of each per-cpu pagelist is also updated as a result.
It is set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero; the kernel does not use this value at boot
time to set the high water marks for each per-cpu page list.
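
For example, to allow up to 1/100th of the pages in each zone on each
per-cpu page list:

	echo 100 > /proc/sys/vm/percpu_pagelist_fraction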

==============================================================

stat_interval

The time interval at which vm statistics are updated.  The default
is 1 second.

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap
memory pages.  Higher values increase aggressiveness; lower values
decrease the amount of swap.

The default value is 60.

==============================================================

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches.  Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

==============================================================

zone_reclaim_mode:

zone_reclaim_mode allows setting more or less aggressive approaches to
reclaiming memory when a zone runs out of memory.  If it is set to zero then
no zone reclaim occurs.  Allocations will be satisfied from other zones /
nodes in the system.

This value is a bit mask ORed together from:

1	= Zone reclaim on
2	= Zone reclaim writes dirty pages out
4	= Zone reclaim swaps pages

zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction.  The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off node pages.

It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk.  In that case the caching effect is more important than
data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.  Zone
reclaim will write out dirty pages if a zone fills up, thus effectively
throttling the process.  This may decrease the performance of a single
process since it cannot use all of system memory to buffer the outgoing
writes anymore, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
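
For example, to enable zone reclaim and also allow it to write out dirty
pages (bits 1 and 2 ORed together):

	echo 3 > /proc/sys/vm/zone_reclaim_mode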

============ End of Document =================================