This is the CFS scheduler.

80% of CFS's design can be summed up in a single sentence: CFS basically
models an "ideal, precise multi-tasking CPU" on real hardware.

An "ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100%
physical power and which can run each task at precisely equal speed, in
parallel, each at 1/nr_running speed. For example: if there are 2 tasks
running then it runs each at 50% physical power - totally in parallel.
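
As a toy illustration of that fair-share arithmetic (plain userspace C,
not kernel code; the interval length is made up):

    #include <stdio.h>

    /* On an 'ideal' CPU, every one of nr_running tasks receives exactly
     * delta_ns / nr_running nanoseconds of service over any interval of
     * delta_ns nanoseconds. */
    int main(void)
    {
        unsigned long long delta_ns = 10000000ULL; /* a 10 ms interval */

        for (int nr_running = 1; nr_running <= 4; nr_running++)
            printf("%d tasks -> %llu ns each\n",
                   nr_running, delta_ns / nr_running);
        return 0;
    }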

On real hardware, we can run only a single task at once, so while that
one task runs, the other tasks that are waiting for the CPU are at a
disadvantage - the current task gets an unfair amount of CPU time. In
CFS this fairness imbalance is expressed and tracked via the per-task
p->wait_runtime value (in nanoseconds). "wait_runtime" is the amount of
time the task should now run on the CPU for it to become completely fair
and balanced.

( small detail: on 'ideal' hardware, the p->wait_runtime value would
  always be zero - no task would ever get 'out of balance' from the
  'ideal' share of CPU time. )

CFS's task picking logic is based on this p->wait_runtime value and it
is thus very simple: it always tries to run the task with the largest
p->wait_runtime value. In other words, CFS tries to run the task with
the 'gravest need' for more CPU time. So CFS always tries to split up
CPU time between runnable tasks as closely as possible to what 'ideal
multitasking hardware' would do.
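
A minimal sketch of that selection rule (userspace C with made-up
numbers; the real scheduler keeps tasks sorted in an rbtree rather than
scanning an array):

    #include <stdio.h>

    /* Toy task model: each task tracks how much CPU time it is
     * still 'owed', in nanoseconds. */
    struct task {
        const char *name;
        long long wait_runtime;
    };

    /* CFS-style pick: run the task owed the most CPU time. */
    static struct task *pick_next(struct task *tasks, int n)
    {
        struct task *best = &tasks[0];

        for (int i = 1; i < n; i++)
            if (tasks[i].wait_runtime > best->wait_runtime)
                best = &tasks[i];
        return best;
    }

    int main(void)
    {
        struct task tasks[] = {
            { "editor",    2000000 }, /* owed 2 ms */
            { "compiler",  -500000 }, /* ran ahead by 0.5 ms */
            { "browser",   7000000 }, /* owed 7 ms: gravest need */
        };

        printf("next: %s\n", pick_next(tasks, 3)->name);
        return 0;
    }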

Most of the rest of CFS's design just falls out of this really simple
concept, with a few add-on embellishments like nice levels,
multiprocessing and various algorithm variants to recognize sleepers.

In practice it works like this: the system runs a task a bit, and when
the task schedules (or a scheduler tick happens) the task's CPU usage is
'accounted for': the (small) time it just spent using the physical CPU
is deducted from p->wait_runtime, minus the 'fair share' of CPU time it
would have gotten anyway. Once p->wait_runtime gets low enough so that
another task becomes the 'leftmost task' of the time-ordered rbtree it
maintains (plus a small amount of 'granularity' distance relative to the
leftmost task, so that we do not over-schedule tasks and thrash the
cache), the new leftmost task is picked and the current task is
preempted.
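
A toy version of that accounting step (a sketch of the idea only, not
the kernel's code):

    #include <stdio.h>

    /* After the current task has run for delta_exec nanoseconds, charge
     * it for that time minus the 1/nr_running share it was entitled to
     * anyway, and credit every waiting task with its fair share. */
    static void account(long long *wait_runtime, int curr,
                        int nr_running, long long delta_exec)
    {
        long long fair_share = delta_exec / nr_running;

        for (int i = 0; i < nr_running; i++) {
            if (i == curr)
                wait_runtime[i] -= delta_exec - fair_share;
            else
                wait_runtime[i] += fair_share;
        }
    }

    int main(void)
    {
        long long wait_runtime[3] = { 0, 0, 0 };

        account(wait_runtime, 0, 3, 3000000); /* task 0 ran for 3 ms */
        for (int i = 0; i < 3; i++)
            printf("task %d: %lld ns\n", i, wait_runtime[i]);
        return 0;
    }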

The rq->fair_clock value tracks the 'CPU time a runnable task would have
fairly gotten, had it been runnable during that time'. So by using
rq->fair_clock values we can accurately timestamp and measure the
'expected CPU time' a task should have gotten. All runnable tasks are
sorted in the rbtree by the "rq->fair_clock - p->wait_runtime" key, and
CFS picks the 'leftmost' task and sticks to it. As the system progresses
forwards, newly woken tasks are put into the tree more and more to the
right - slowly but surely giving a chance for every task to become the
'leftmost task' and thus get on the CPU within a deterministic amount of
time.
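
The key computation itself is simple; a small sketch (made-up values,
and a plain comparison standing in for the rbtree ordering):

    #include <stdio.h>

    struct rq_sketch   { unsigned long long fair_clock; };
    struct task_sketch { const char *name; long long wait_runtime; };

    /* Tasks are sorted by fair_clock - wait_runtime, so the task owed
     * the most CPU time has the smallest key and sits leftmost. */
    static long long tree_key(const struct rq_sketch *rq,
                              const struct task_sketch *p)
    {
        return (long long)rq->fair_clock - p->wait_runtime;
    }

    int main(void)
    {
        struct rq_sketch rq = { .fair_clock = 100000000ULL };
        struct task_sketch a = { "a", 7000000 }; /* owed 7 ms */
        struct task_sketch b = { "b", -500000 }; /* ran ahead */

        /* a's key is smaller, so a would be picked first. */
        printf("key(a)=%lld key(b)=%lld\n",
               tree_key(&rq, &a), tree_key(&rq, &b));
        return 0;
    }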

Some implementation details:

 - the introduction of Scheduling Classes: an extensible hierarchy of
   scheduler modules. These modules encapsulate scheduling policy
   details and are handled by the scheduler core without the core
   code assuming too much about them (a simplified sketch of the
   idea follows below).
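
   A hypothetical sketch of that interface in plain C (the names and
   hooks here are made up; the real scheduling classes have more
   methods):

       #include <stdio.h>

       /* The core dispatches through a table of function pointers and
        * never looks inside a policy module. */
       struct sched_class_sketch {
           const char *name;
           void (*enqueue_task)(int pid);
           int  (*pick_next_task)(void);
       };

       static void fair_enqueue(int pid)
       {
           printf("fair: enqueue pid %d\n", pid);
       }

       static int fair_pick(void)
       {
           return 42; /* pretend pid 42 is leftmost */
       }

       static struct sched_class_sketch fair_class = {
           .name           = "fair",
           .enqueue_task   = fair_enqueue,
           .pick_next_task = fair_pick,
       };

       int main(void)
       {
           /* 'Core' code: policy-agnostic calls through the class. */
           fair_class.enqueue_task(42);
           printf("%s picked pid %d\n", fair_class.name,
                  fair_class.pick_next_task());
           return 0;
       }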

 - sched_fair.c implements the 'CFS desktop scheduler': it is a
   replacement for the vanilla scheduler's SCHED_OTHER interactivity
   code.

   I'd like to give credit to Con Kolivas for the general approach here:
   he has proven via RSDL/SD that 'fair scheduling' is possible and that
   it results in better desktop scheduling. Kudos Con!

   The CFS patch uses a completely different approach and implementation
   from RSDL/SD. My goal was to make CFS's interactivity quality exceed
   that of RSDL/SD, which is a high standard to meet :-) Testing
   feedback is welcome to decide this one way or another. [ and, in any
   case, all of SD's logic could be added via a kernel/sched_sd.c module
   as well, if Con is interested in such an approach. ]

   CFS's design is quite radical: it does not use runqueues, it uses a
   time-ordered rbtree to build a 'timeline' of future task execution,
   and thus has no 'array switch' artifacts (by which both the vanilla
   scheduler and RSDL/SD are affected).

   CFS uses nanosecond granularity accounting and does not rely on any
   jiffies or other HZ detail. Thus the CFS scheduler has no notion of
   'timeslices' and has no heuristics whatsoever. There is only one
   central tunable (you have to switch on CONFIG_SCHED_DEBUG):

        /proc/sys/kernel/sched_granularity_ns

   which can be used to tune the scheduler from 'desktop' (low
   latencies) to 'server' (good batching) workloads. It defaults to a
   setting suitable for desktop workloads. SCHED_BATCH is handled by the
   CFS scheduler module too.
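
   A small usage sketch (plain C; the /proc file only exists with
   CONFIG_SCHED_DEBUG enabled, and writing it requires root):

       #include <stdio.h>

       /* Print the current scheduling granularity in nanoseconds. */
       int main(void)
       {
           const char *path = "/proc/sys/kernel/sched_granularity_ns";
           unsigned long long ns;
           FILE *f = fopen(path, "r");

           if (!f) {
               perror(path);
               return 1;
           }
           if (fscanf(f, "%llu", &ns) == 1)
               printf("granularity: %llu ns\n", ns);
           fclose(f);
           return 0;
       }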

   Due to its design, the CFS scheduler is not prone to any of the
   'attacks' that exist today against the heuristics of the stock
   scheduler: fiftyp.c, thud.c, chew.c, ring-test.c and massive_intr.c
   all work fine, do not impact interactivity and produce the expected
   behavior.

   The CFS scheduler has much stronger handling of nice levels and
   SCHED_BATCH: both types of workloads should be isolated much more
   aggressively than under the vanilla scheduler.

   ( another detail: due to nanosecond accounting and timeline sorting,
     sched_yield() support is very simple under CFS, and in fact under
     CFS sched_yield() behaves much better than under any other
     scheduler I have tested so far. )

 - sched_rt.c implements SCHED_FIFO and SCHED_RR semantics, in a simpler
   way than the vanilla scheduler does. It uses 100 runqueues (for all
   100 RT priority levels, instead of 140 in the vanilla scheduler)
   and it needs no expired array (a toy sketch of the per-priority
   queues follows below).
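
   That per-priority layout, as a toy userspace model (the kernel uses
   linked lists plus a bitmap of non-empty levels, not fixed arrays and
   a linear scan):

       #include <stdio.h>

       #define RT_PRIOS    100
       #define QUEUE_DEPTH 8

       /* One FIFO queue per RT priority level; a numerically lower
        * index means higher priority in this sketch. */
       static int queues[RT_PRIOS][QUEUE_DEPTH];
       static int count[RT_PRIOS];

       static void enqueue(int prio, int pid)
       {
           if (count[prio] < QUEUE_DEPTH)
               queues[prio][count[prio]++] = pid;
       }

       /* SCHED_FIFO-style pick: head of the best non-empty level. */
       static int pick_next_rt(void)
       {
           for (int prio = 0; prio < RT_PRIOS; prio++)
               if (count[prio] > 0)
                   return queues[prio][0];
           return -1; /* no runnable RT task */
       }

       int main(void)
       {
           enqueue(10, 1234);
           enqueue(3, 5678); /* higher priority */
           printf("next RT pid: %d\n", pick_next_rt());
           return 0;
       }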

 - reworked/sanitized SMP load-balancing: the runqueue-walking
   assumptions are gone from the load-balancing code now, and
   iterators of the scheduling modules are used. The balancing code got
   quite a bit simpler as a result (a sketch of the iterator idea
   follows below).
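
   A sketch of that iterator interface (hypothetical names, plain
   userspace C): the balancer walks tasks through a module-supplied
   start/next pair instead of knowing the runqueue layout itself.

       #include <stdio.h>

       /* Module-supplied iterator state: the balancer never sees the
        * underlying container, only 'first task' and 'next task'. */
       struct task_iter {
           int *tasks;
           int  n, pos;
       };

       static int iter_start(struct task_iter *it, int *tasks, int n)
       {
           it->tasks = tasks;
           it->n = n;
           it->pos = 0;
           return it->n ? it->tasks[it->pos++] : -1;
       }

       static int iter_next(struct task_iter *it)
       {
           return it->pos < it->n ? it->tasks[it->pos++] : -1;
       }

       int main(void)
       {
           int rq[] = { 101, 102, 103 };
           struct task_iter it;

           /* Generic 'balancing' loop, agnostic of the queue layout. */
           for (int pid = iter_start(&it, rq, 3); pid != -1;
                pid = iter_next(&it))
               printf("balance: consider pid %d\n", pid);
           return 0;
       }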