Documentation for /proc/sys/vm/*	kernel version 2.6.29
	(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
	(c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- block_dump
- compact_memory
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- nr_hugepages
- nr_overcommit_hugepages
- nr_pdflush_threads
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- swappiness
- vfs_cache_pressure
- zone_reclaim_mode

==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important, for example, in the allocation
of huge pages, although processes will also directly compact memory as
required.
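
For example, to trigger a one-off compaction of all zones by hand:

	echo 1 > /proc/sys/vm/compact_memory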

==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the pdflush background writeback
daemon will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only
one of them may be specified at a time. When one sysctl is written it is
immediately taken into account to evaluate the dirty memory limits and the
other appears as 0 when read.
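
For example, setting the byte-based limit (here 100 MB, an arbitrary
illustrative value) makes the ratio-based sysctl read back as 0:

	echo 104857600 > /proc/sys/vm/dirty_background_bytes
	cat /proc/sys/vm/dirty_background_ratio
	0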

==============================================================

dirty_background_ratio

Contains, as a percentage of total system memory, the number of pages at which
the pdflush background writeback daemon will start writing out dirty data.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.
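
For example, assuming 4 KB pages (verify with `getconf PAGESIZE'), the
smallest value that will be accepted is two pages, i.e. 8192 bytes:

	echo 8192 > /proc/sys/vm/dirty_bytes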

==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the pdflush daemons.  It is expressed in 100'ths of a second.
Data which has been dirty in-memory for longer than this interval will be
written out next time a pdflush daemon wakes up.

==============================================================

dirty_ratio

Contains, as a percentage of total system memory, the number of pages at which
a process which is generating disk writes will itself start writing out dirty
data.

==============================================================

dirty_writeback_centisecs

The pdflush writeback daemons will periodically wake up and write `old' data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.
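
Since the unit is 100'ths of a second, a five-second wakeup interval, for
example, is written as 500:

	echo 500 > /proc/sys/vm/dirty_writeback_centisecs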

==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
	echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
	echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
	echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable, the
user should run `sync' first.
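
A typical full drop of the reclaimable caches is therefore:

	sync
	echo 3 > /proc/sys/vm/drop_caches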

==============================================================

extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. /proc/extfrag_index shows what
the fragmentation index for each order is in each zone in the system. Values
tending towards 0 imply allocations would fail due to lack of memory,
values towards 1000 imply failures are due to fragmentation and -1 implies
that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.

==============================================================

hugepages_treat_as_movable

This parameter is only useful when kernelcore= is specified at boot time to
create ZONE_MOVABLE for pages that may be reclaimed or migrated. Huge pages
are not movable, so they are not normally allocated from ZONE_MOVABLE. A
non-zero value written to hugepages_treat_as_movable allows huge pages to be
allocated from ZONE_MOVABLE.

Once enabled, ZONE_MOVABLE is treated as an area of memory that the huge
pages pool can easily grow or shrink within. Assuming that applications are
not running that mlock() a lot of memory, it is likely the huge pages pool
can grow to the size of ZONE_MOVABLE by repeatedly entering the desired value
into nr_hugepages and triggering page reclaim.

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.
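
For example, to grant this right to a group with gid 1001 (an arbitrary gid
chosen for illustration):

	echo 1001 > /proc/sys/vm/hugetlb_shm_group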

==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap, then
you probably should change the lowmem_reserve_ratio setting.

lowmem_reserve_ratio is an array.  You can see its contents by reading this
file:
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: the number of elements is one fewer than the number of zones, because
the highest zone's value is not needed for the calculation below.

These values are not used directly.  The kernel calculates the number of
protection pages for each zone from them.  These are shown as an array of
protection pages in /proc/zoneinfo, as follows (this is an example from an
x86-64 box).  Each zone has an array of protection pages like this:

-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
	:
	:
    numa_other   0
        protection: (0, 2004, 2004, 2004)
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protections are added to the watermark when judging whether a zone
should be used for page allocation or should be reclaimed instead.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
this zone should not be used, because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008).  If this protection value is 0,
this zone would be used for a normal page request.  If the request is for the
DMA zone (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
  = (total sums of present_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
   (should not be protected. = 0;
(i > j):
   (not necessary, but looks 0)

The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others).
As the expression above shows, each value is the reciprocal of a ratio:
256 means 1/256, so the number of protection pages becomes about 0.39% of
the total present pages of the higher zones on the node.
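
As a worked example, consistent with the /proc/zoneinfo excerpt above: if the
zones above the DMA zone hold 513024 present pages in total, then with
lowmem_reserve_ratio[0] = 256 the DMA zone's protection against a normal-page
request becomes 513024 / 256 = 2004 pages, the value shown in the protection
array.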

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have.  Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
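
To see how many map areas a process currently uses, one can count the lines
in its maps file, since each line describes one mapping:

	wc -l /proc/self/maps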

==============================================================

memory_failure_early_kill:

Control how to kill processes when an uncorrected memory error (typically
a 2bit error in a memory module) is detected in the background by hardware
that cannot be handled by the kernel.  In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications.  But if there is
no other up-to-date copy of the data it will kill processes to prevent any
data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL prctl.

==============================================================

memory_failure_recovery

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.
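
For illustration only (the right value depends on RAM size and workload),
reserving 64 MB would be:

	echo 65536 > /proc/sys/vm/min_free_kbytes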

==============================================================

min_slab_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  During zone reclaim (which
occurs on fallback from the local zone), slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

==============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
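
For example, to enforce the 64k floor suggested above:

	echo 65536 > /proc/sys/vm/mmap_min_addr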

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt
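
For example, to request a pool of 20 huge pages and verify the result
(allocation may fall short if contiguous memory is unavailable):

	echo 20 > /proc/sys/vm/nr_hugepages
	grep HugePages_Total /proc/meminfo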

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_pdflush_threads

The current number of pdflush threads.  This value is read-only.
The value changes according to the number of dirty pages in the system.

When necessary, additional pdflush threads are created, one per second, up to
nr_pdflush_threads_max.

==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.
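
For example, to disable trimming of allocations entirely:

	echo 0 > /proc/sys/vm/nr_trim_pages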

==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
'Where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for simplicity;
you may be able to read ZONE_DMA as ZONE_DMA32...)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
 ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following 2 types of order.
Assume a 2-node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type(A) is called "Node" order. Type(B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist or
(2) if the DMA zone comprises greater than 50% of the available memory or
(3) if any node's DMA zone comprises greater than 60% of its local memory and
    the amount of local memory is big enough.

Otherwise, "zone" order will be selected. Default order is recommended unless
this is causing problems for your system/application.
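
For example, to select zone order explicitly, or to return to automatic
configuration:

	echo zone > /proc/sys/vm/numa_zonelist_order
	echo default > /proc/sys/vm/numa_zonelist_order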

==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, cpu, oom_adj score, and
name.  This is helpful to determine why the OOM killer was invoked
and to identify the rogue task that caused it.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.
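
For example, to enforce strict accounting with a commit limit of swap plus
half of physical RAM:

	echo 2 > /proc/sys/vm/overcommit_memory
	echo 50 > /proc/sys/vm/overcommit_ratio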

==============================================================

page-cluster

page-cluster controls the number of pages which are written to swap in
a single attempt, i.e. the swap I/O size.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

==============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer.  Usually, the OOM killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy/cpusets, and those nodes run out of memory, one process may
be killed by the OOM killer.  No panic occurs in this case, because
memory on other nodes may still be free and the system as a whole may
not yet be in a fatal state.

If this is set to 2, the kernel panics unconditionally even in the
situation mentioned above.  Even when OOM happens under a memory
cgroup, the whole system panics.

The default value is 0.
1 and 2 are for failover of clustering.  Please select either
according to your policy of failover.
panic_on_oom=2 combined with kdump gives you a very strong tool to
investigate why OOM happens: you can obtain a snapshot.
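
For example, on a cluster node where kdump is set up as described above:

	echo 2 > /proc/sys/vm/panic_on_oom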

==============================================================

percpu_pagelist_fraction

This is the fraction of pages at most (high mark pcp->high) in each zone that
are allocated for each per cpu page list.  The min value for this is 8.  It
means that we don't allow more than 1/8th of pages in each zone to be
allocated in any single per_cpu_pagelist.  This entry only changes the value
of hot per cpu pagelists.  Users can specify a number like 100 to allocate
1/100th of each zone to each per cpu page list.

The batch value of each per cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero.  The kernel does not use this value at boot time
to set the high water marks for each per cpu page list.
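
For example, to limit each per cpu page list to 1/100th of its zone:

	echo 100 > /proc/sys/vm/percpu_pagelist_fraction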

==============================================================

stat_interval

The time interval at which VM statistics are updated.  The default
is 1 second.

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap
memory pages.  Higher values will increase aggressiveness, lower values
decrease the amount of swap.

The default value is 60.
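
For example, a host that should avoid swapping application memory might use
a lower value (a workload-dependent choice, shown only as illustration):

	echo 10 > /proc/sys/vm/swappiness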

==============================================================

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches.  When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions.  Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

==============================================================

zone_reclaim_mode:

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory.  If it is set to zero then no
zone reclaim occurs.  Allocations will be satisfied from other zones / nodes
in the system.

This value is an OR'd combination of the following flags (see the example
after the list):

1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
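
For example, to enable zone reclaim and allow it to write out dirty pages
(1 OR'd with 2):

	echo 3 > /proc/sys/vm/zone_reclaim_mode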

zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction.  The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off node pages.

It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk.  In that case the caching effect is more important than
data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.  Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttle the process.  This may decrease the performance of a single process
since it cannot use all of system memory to buffer the outgoing writes
anymore, but it preserves the memory on other nodes so that the performance
of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

============ End of Document =================================