Documentation for /proc/sys/vm/*	kernel version 2.6.29
(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
(c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- block_dump
- compact_memory
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- nr_hugepages
- nr_overcommit_hugepages
- nr_trim_pages		(only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- zone_reclaim_mode

==============================================================

admin_reserve_kbytes

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB)

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.
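
As a rough, illustrative recipe for overcommit 'never' (the command and the
final value are only examples; adjust both for your own recovery tools):

	ps -eo vsz,rss,comm | grep -E 'sshd|bash|top'
	# add the largest VSZ to the sum of the RSS values, convert to kB, then:
	echo 131072 > /proc/sys/vm/admin_reserve_kbytes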

Changing this takes effect whenever an application requests memory.

==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.
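
For example, one way to trigger compaction by hand and then inspect the
availability of higher-order free blocks afterwards:

	echo 1 > /proc/sys/vm/compact_memory
	cat /proc/buddyinfo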

==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only
one of them may be specified at a time. When one sysctl is written it is
immediately taken into account to evaluate the dirty memory limits and the
other appears as 0 when read.
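
For example (the threshold is only illustrative):

	echo 268435456 > /proc/sys/vm/dirty_background_bytes	# 256 MB
	cat /proc/sys/vm/dirty_background_ratio			# now reads 0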

==============================================================

dirty_background_ratio

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which the background kernel
flusher threads will start writing out dirty data.

The total available memory is not equal to total system memory.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.

==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in 100'ths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.
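
For example, the default of 3000 (30 seconds) could be halved so that dirty
data becomes eligible for writeout after 15 seconds (an illustrative value):

	echo 1500 > /proc/sys/vm/dirty_expire_centisecs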

==============================================================

dirty_ratio

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.

==============================================================

dirty_writeback_centisecs

The kernel flusher threads will periodically wake up and write `old' data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.

==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.

To free pagecache:
	echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
	echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
	echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync' prior to writing to /proc/sys/vm/drop_caches.  This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.
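
For example, to write dirty data back first and then drop as much as possible:

	sync
	echo 3 > /proc/sys/vm/drop_caches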

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...)  These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems.  Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use.  Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used:

	cat (1234): drop_caches: 3

These are informational only.  They do not mean that anything is wrong
with your system.  To disable them, echo 4 (bit 3) into drop_caches.

==============================================================

extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. /proc/extfrag_index shows what
the fragmentation index for each order is in each zone in the system. Values
tending towards 0 imply allocations would fail due to lack of memory,
values towards 1000 imply failures are due to fragmentation and -1 implies
that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.

==============================================================

hugepages_treat_as_movable

This parameter controls whether hugepages can be allocated from ZONE_MOVABLE.
If set to non-zero, hugepages can be allocated from ZONE_MOVABLE.
ZONE_MOVABLE is created when the kernel boot parameter kernelcore= is
specified, so this parameter has no effect if used without kernelcore=.

Hugepage migration is now available in some situations which depend on the
architecture and/or the hugepage size. If a hugepage supports migration,
allocation from ZONE_MOVABLE is always enabled for the hugepage regardless
of the value of this parameter.
In other words, this parameter affects only non-migratable hugepages.

Assuming that hugepages are not migratable on your system, one use case for
this parameter is to make the hugepage pool more flexible by allowing
allocation from ZONE_MOVABLE: page reclaim, migration, and compaction are
more effective there, so contiguous memory is more likely to be available.
Note that using ZONE_MOVABLE for non-migratable hugepages can harm other
features such as memory hot-remove, because hot-remove expects that memory
blocks in ZONE_MOVABLE are always removable, so this is a trade-off for
which the user is responsible.

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.

==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap then
you probably should change the lowmem_reserve_ratio setting.

lowmem_reserve_ratio is an array.  You can see its values by reading this file:
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: the number of elements is one fewer than the number of zones, because
the highest zone's value is not needed for the calculation below.

These values are not used directly.  The kernel calculates the number of
protection pages for each zone from them, and the result is shown as an
array of protection pages in /proc/zoneinfo, as in the following excerpt
from an x86-64 box.  Each zone has an array of protection pages like this:

-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
	:
	:
    numa_other   0
        protection: (0, 2004, 2004, 2004)
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protections are added to the watermark when the kernel judges whether
a zone should be used for a page allocation or should instead be reclaimed.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
the zone should not be used, because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008).  If the protection value were 0,
the zone could be used to satisfy the normal-page request.  If the request is
for the DMA zone itself (index=0), protection[0] (= 0) is used.

zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
  = (total sums of present_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
  (the zone need not protect itself; = 0)
(i > j):
  (not used; reads as 0)

The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
     32 (others).
As the expression above shows, they are the reciprocal of the ratio:
256 means 1/256, so the number of protection pages becomes about 0.39% of
the total present pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).
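
As a worked example tied to the numbers above: if the zones above DMA on the
node hold about 513024 present pages, the default ratio of 256 gives
513024 / 256 = 2004 protection pages for the DMA zone.  To defend the DMA
zones twice as strongly, the ratios could be halved (illustrative values):

	echo "128 128 32" > /proc/sys/vm/lowmem_reserve_ratio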

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
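
For example, the number of maps a process currently uses, the system-wide
limit, and one way to raise it (the new limit is only illustrative):

	wc -l /proc/self/maps
	sysctl vm.max_map_count
	sysctl -w vm.max_map_count=262144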

=============================================================

memory_failure_early_kill:

Controls how processes are killed when an uncorrected memory error (typically
a 2-bit error in a memory module) that the kernel cannot handle is detected
in the background by hardware.  In some cases (like the page still having a
valid copy on disk) the kernel will handle the failure transparently without
affecting any applications.  But if there is no other up-to-date copy of the
data, it will kill processes to prevent any data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and kill only those
processes that try to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL prctl.

==============================================================

memory_failure_recovery

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages, in
proportion to its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

=============================================================

min_slab_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  During zone reclaim (which
occurs as a fallback when allocation from the local zone fails), slabs will
be reclaimed if more than this percentage of pages in a zone are reclaimable
slab pages.  This ensures that slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

=============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
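
For example, to enforce the 64k floor mentioned above:

	echo 65536 > /proc/sys/vm/mmap_min_addr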

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt
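
For example (the pool size is only illustrative):

	echo 128 > /proc/sys/vm/nr_hugepages
	grep HugePages_ /proc/meminfo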

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.

==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
Where memory is allocated from is controlled by zonelists.
(For simplicity, this documentation ignores ZONE_HIGHMEM/ZONE_DMA32;
where applicable, ZONE_DMA below can be read as ZONE_DMA32.)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, two types of order are possible.
Assume a 2-node NUMA system; the zonelist for Node(0)'s GFP_KERNEL is either

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL is exhausted. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type (B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type (A) is called "Node" order. Type (B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist, or
(2) if the DMA zone comprises greater than 50% of the available memory, or
(3) if any node's DMA zone comprises greater than 70% of its local memory and
    the amount of local memory is big enough.

Otherwise, "zone" order will be selected.  The default order is recommended
unless this is causing problems for your system/application.
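
For example, the current order can be read back, and an explicit order
requested, with:

	cat /proc/sys/vm/numa_zonelist_order
	echo zone > /proc/sys/vm/numa_zonelist_order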

==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, nr_ptes, swapents,
oom_score_adj score, and name.  This is helpful to determine why the
OOM killer was invoked, to identify the rogue task that caused it,
and to determine why the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_kbytes:

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM. See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one
of them may be specified at a time. Setting one disables the other (which
then appears as 0 when read).

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.
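
For example (the ratio is only illustrative), strict overcommit can be
enabled and the resulting limit compared against current commitments in
/proc/meminfo:

	echo 2 > /proc/sys/vm/overcommit_memory
	echo 80 > /proc/sys/vm/overcommit_ratio
	grep -E 'CommitLimit|Committed_AS' /proc/meminfo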

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.

==============================================================

page-cluster

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt.  This is the swap counterpart
to page cache readahead.
The mentioned consecutivity is not in terms of virtual/physical addresses,
but consecutive on swap space - that means they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but at the same time
extra faults and I/O delays for subsequent faults on pages that readahead
would otherwise have brought in.
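
For example, since the value is the base-2 logarithm of the page count,
writing 4 (an illustrative value) requests up to 2^4 = 16 pages per attempt:

	echo 4 > /proc/sys/vm/page-cluster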

=============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer.  Usually, the OOM killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy or cpusets, and those nodes run out of memory, one process
may be killed by the OOM killer and no panic occurs, because other
nodes' memory may still be free and the system as a whole may not yet
be in a fatal state.

If this is set to 2, the kernel always panics, even in the situation
mentioned above.  Even when OOM happens inside a memory cgroup, the
whole system panics.

The default value is 0.
Values 1 and 2 are intended for failover in clustered systems; select
whichever matches your failover policy.
panic_on_oom=2 combined with kdump gives you a very strong tool for
investigating why OOM happens, since you can obtain a memory snapshot.

=============================================================

percpu_pagelist_fraction

This sets the maximum fraction of pages in each zone (the high mark,
pcp->high) that may be allocated to any single per-cpu page list.  The
minimum value for this is 8, which means we don't allow more than 1/8th
of the pages in a zone to be allocated in any single per_cpu_pagelist.
This entry only changes the value of the hot per-cpu pagelists.  A user
can specify a number like 100 to allocate 1/100th of each zone to each
per-cpu page list.

The batch value of each per-cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero.  The kernel does not use this value at boot time
to set the high water marks for each per-cpu page list.

==============================================================

stat_interval

The time interval at which VM statistics are updated.  The default
is 1 second.

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap
memory pages.  Higher values increase aggressiveness, lower values
decrease the amount of swap.  A value of 0 instructs the kernel not to
initiate swap until the amount of free and file-backed pages is less
than the high water mark in a zone.

The default value is 60.

==============================================================

user_reserve_kbytes

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.

==============================================================

vfs_cache_pressure

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

==============================================================

zone_reclaim_mode:

zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.

This value is a bitmask OR'd together from:

1	= Zone reclaim on
2	= Zone reclaim writes dirty pages out
4	= Zone reclaim swaps pages
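
For example, to turn zone reclaim on and also allow it to write out dirty
pages, the first two bits are OR'd together (1 | 2 = 3):

	echo 3 > /proc/sys/vm/zone_reclaim_mode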

zone_reclaim_mode is disabled by default.  For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.

zone_reclaim may be enabled if it's known that the workload is partitioned
such that each partition fits within a NUMA node and that accessing remote
memory would cause a measurable performance reduction.  The page allocator
will then reclaim easily reusable pages (those page cache pages that are
currently not used) before allocating off-node pages.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.  Zone
reclaim will write out dirty pages if a zone fills up, and so effectively
throttles the process.  This may decrease the performance of a single
process, since it can no longer use all of system memory to buffer the
outgoing writes, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

============ End of Document =================================