
Concurrency Managed Workqueue (cmwq)

September, 2010		Tejun Heo <tj@kernel.org>
			Florian Mickler <florian@mickler.org>

CONTENTS

1. Introduction
2. Why cmwq?
3. The Design
4. Application Programming Interface (API)
5. Example Execution Scenarios
6. Guidelines
7. Debugging


1. Introduction

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue.  An
independent thread serves as the asynchronous execution context.  The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue, the worker executes the
functions associated with the work items one after the other.  When
there is no work item left on the workqueue, the worker becomes idle.
When a new work item gets queued, the worker begins executing again.


2. Why cmwq?

In the original wq implementation, a multi-threaded (MT) wq had one
worker thread per CPU and a single-threaded (ST) wq had one worker
thread system-wide.  A single MT wq needed to keep around the same
number of workers as the number of CPUs.  The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory.  The limitation was common to both ST and
MT wq, albeit less severe on MT.  Each wq maintained its own separate
worker pool.  An MT wq could provide only one execution context per
CPU while an ST wq provided one for the whole system.  Work items had
to compete for those very limited execution contexts, leading to
various problems including proneness to deadlocks around the single
execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting the unnecessary
limitation that no two polling PIOs can progress at the same time.  As
MT wq couldn't provide much better concurrency, users that required a
higher level of concurrency, like async or fscache, had to implement
their own thread pools.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
a focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate the worker pool and level of concurrency so
  that the API users don't need to worry about such details.


3. The Design

In order to ease the asynchronous execution of functions, a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously.  Whenever a driver or subsystem
wants a function to be executed asynchronously, it has to set up a
work item pointing to that function and queue that work item on a
workqueue.
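
As a sketch of the typical pattern, a driver embeds the work item in
its own state structure and recovers that structure in the work
function with container_of() (all names below are illustrative):

	struct my_device {
		struct work_struct	irq_work;
		int			pending_events;
	};

	static void my_irq_work_fn(struct work_struct *work)
	{
		struct my_device *dev =
			container_of(work, struct my_device, irq_work);

		/* process dev->pending_events asynchronously */
	}

	/* during device initialization */
	INIT_WORK(&dev->irq_work, my_irq_work_fn);

	/* later, e.g. from the interrupt handler */
	queue_work(system_wq, &dev->irq_work);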

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other.  If no work is queued, the
worker threads become idle.  These worker threads are managed in
so-called worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit.  They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on.  These flags include
things like CPU locality, concurrency limits, priority and more.  To
get a detailed overview refer to the API description of
alloc_workqueue() below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes,
and the work item is appended to the shared worklist of that
worker-pool.  For example, unless specifically overridden, a work item
of a bound workqueue will be queued on the worklist of either the
normal or the highpri worker-pool associated with the CPU the issuer
is running on.
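
The queueing functions reflect this: queue_work() targets the local
CPU's pool, while queue_work_on() lets the issuer pick a CPU
explicitly.  A brief sketch (wq and work item names are illustrative):

	/* queue on the worker-pool of the CPU the caller is running on */
	queue_work(my_wq, &my_work);

	/* explicitly target CPU 3's worker-pool instead */
	queue_work_on(3, my_wq, &my_other_work);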

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue.  cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler.  The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of currently runnable workers.  Generally, work items are not
expected to hog a CPU and consume many cycles.  That means maintaining
just enough concurrency to prevent work processing from stalling
should be optimal.  As long as there are one or more runnable workers
on the CPU, the worker-pool doesn't start execution of a new work
item, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items.  This allows using a minimal number of workers
without losing execution bandwidth.

Keeping idle workers around doesn't cost anything other than the
memory space for the kthreads, so cmwq holds onto idle ones for a
while before killing them.

For unbound workqueues, the number of backing pools is dynamic.  An
unbound workqueue can be assigned custom attributes using
apply_workqueue_attrs() and the workqueue will automatically create
backing worker pools matching the attributes.  The responsibility of
regulating the concurrency level is on the users.  There is also a
flag to mark a bound wq to ignore the concurrency management.  Please
refer to the API section for details.
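
A hedged sketch of the attribute interface (the exact signatures of
these helpers have varied between kernel releases, so treat this as
illustrative rather than definitive):

	struct workqueue_attrs *attrs;
	int ret;

	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (!attrs)
		return -ENOMEM;

	attrs->nice = -5;	/* run the backing workers at elevated priority */
	cpumask_copy(attrs->cpumask, cpumask_of(2));	/* restrict to CPU 2 */

	ret = apply_workqueue_attrs(my_unbound_wq, attrs);
	free_workqueue_attrs(attrs);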

The forward progress guarantee relies on workers being created when
more execution contexts are necessary, which in turn is guaranteed
through the use of rescue workers.  All work items which might be used
on code paths that handle memory reclaim are required to be queued on
wq's that have a rescue-worker reserved for execution under memory
pressure.  Otherwise it is possible that the worker-pool deadlocks
waiting for execution contexts to free up.


4. Application Programming Interface (API)

alloc_workqueue() allocates a wq.  The original create_*workqueue()
functions are deprecated and scheduled for removal.  alloc_workqueue()
takes three arguments - @name, @flags and @max_active.  @name is the
name of the wq and is also used as the name of the rescuer thread if
there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes.  @flags
and @max_active control how work items are assigned execution
resources, scheduled and executed.

@flags:

  WQ_UNBOUND

	Work items queued to an unbound wq are served by the special
	worker-pools which host workers which are not bound to any
	specific CPU.  This makes the wq behave as a simple execution
	context provider without concurrency management.  The unbound
	worker-pools try to start execution of work items as soon as
	possible.  Unbound wq sacrifices locality but is useful for
	the following cases.

	* Wide fluctuation in the concurrency level requirement is
	  expected and using bound wq may end up creating a large
	  number of mostly unused workers across different CPUs as the
	  issuer hops through different CPUs.

	* Long-running CPU-intensive workloads which can be better
	  managed by the system scheduler.

  WQ_FREEZABLE

	A freezable wq participates in the freeze phase of the system
	suspend operations.  Work items on the wq are drained and no
	new work item starts execution until thawed.

  WQ_MEM_RECLAIM

	All wq which might be used in the memory reclaim paths _MUST_
	have this flag set.  The wq is guaranteed to have at least one
	execution context regardless of memory pressure.

  WQ_HIGHPRI

	Work items of a highpri wq are queued to the highpri
	worker-pool of the target cpu.  Highpri worker-pools are
	served by worker threads with elevated nice level.

	Note that normal and highpri worker-pools don't interact with
	each other.  Each maintains its own separate pool of workers
	and implements concurrency management among its workers.

  WQ_CPU_INTENSIVE

	Work items of a CPU intensive wq do not contribute to the
	concurrency level.  In other words, runnable CPU intensive
	work items will not prevent other work items in the same
	worker-pool from starting execution.  This is useful for bound
	work items which are expected to hog CPU cycles so that their
	execution is regulated by the system scheduler.

	Although CPU intensive work items don't contribute to the
	concurrency level, the start of their execution is still
	regulated by the concurrency management and runnable
	non-CPU-intensive work items can delay execution of CPU
	intensive work items.

	This flag is meaningless for unbound wq.

Note that the flag WQ_NON_REENTRANT no longer exists as all workqueues
are now non-reentrant - any work item is guaranteed to be executed by
at most one worker system-wide at any given time.
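
Putting the flags together, a hedged example allocation for a driver
whose work items may be needed during memory reclaim and should run at
elevated priority (names are illustrative):

	struct workqueue_struct *my_wq;

	my_wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
	if (!my_wq)
		return -ENOMEM;

	queue_work(my_wq, &my_work);

	/* on teardown, flush and release the wq */
	destroy_workqueue(my_wq);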

@max_active:

@max_active determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq.  For example,
with @max_active of 16, at most 16 work items of the wq can be
executing at the same time per CPU.

Currently, for a bound wq, the maximum limit for @max_active is 512
and the default value used when 0 is specified is 256.  For an unbound
wq, the limit is the higher of 512 and 4 * num_possible_cpus().  These
values are chosen sufficiently high such that they are not the
limiting factor while providing protection in runaway cases.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time.  Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq.  The
combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
behavior.  Work items on such wq are always queued to the unbound
worker-pools and only one work item can be active at any given time,
thus achieving the same ordering property as ST wq.
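
For example (a sketch; the kernel also provides the
alloc_ordered_workqueue() convenience wrapper for this combination):

	/* at most one work item active at a time, in queueing order */
	struct workqueue_struct *ordered_wq;

	ordered_wq = alloc_workqueue("ordered_wq", WQ_UNBOUND, 1);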


5. Example Execution Scenarios

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing.  w1 and w2 burn CPU for 5ms then sleep for
 10ms.

Ignoring all other tasks, work items and processing overhead, and
assuming simple FIFO scheduling, the following is one highly
simplified version of possible sequences of events with the original
wq.

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 starts and burns CPU
 25		w1 sleeps
 35		w1 wakes up and finishes
 35		w2 starts and burns CPU
 40		w2 sleeps
 50		w2 wakes up and finishes

And with cmwq with @max_active >= 3,

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 10		w2 starts and burns CPU
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes

If @max_active == 2,

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 20		w2 starts and burns CPU
 25		w2 sleeps
 35		w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
WQ_CPU_INTENSIVE set,

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 and w2 start and burn CPU
 10		w1 sleeps
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes


6. Guidelines

* Do not forget to use WQ_MEM_RECLAIM if a wq may process work items
  which are used during memory reclaim.  Each wq with WQ_MEM_RECLAIM
  set has an execution context reserved for it.  If there is a
  dependency among multiple work items used during memory reclaim,
  they should be queued on separate wq's, each with WQ_MEM_RECLAIM
  (see the sketch after this list).

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for @max_active is
  recommended.  In most use cases, the concurrency level usually
  stays well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (WQ_MEM_RECLAIM), flush and work item attributes.  Work items which
  are not involved in memory reclaim, don't need to be flushed as a
  part of a group of work items, and don't require any special
  attribute can use one of the system wq.  There is no difference in
  execution characteristics between using a dedicated wq and a system
  wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
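
Illustrating the first guideline, if one reclaim-path work item may
wait on another, each should be queued on its own rescuer-backed wq so
that both can make progress under memory pressure (all names here are
illustrative):

	static struct workqueue_struct *xfer_wq, *complete_wq;

	xfer_wq = alloc_workqueue("xfer", WQ_MEM_RECLAIM, 0);
	complete_wq = alloc_workqueue("complete", WQ_MEM_RECLAIM, 0);

	/*
	 * xfer_work may flush or wait on completion_work; giving each
	 * its own WQ_MEM_RECLAIM wq guarantees each a rescuer.
	 */
	queue_work(xfer_wq, &xfer_work);
	queue_work(complete_wq, &completion_work);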


7. Debugging

Because the work functions are executed by generic worker threads,
there are a few tricks needed to shed some light on misbehaving
workqueue users.

Worker threads show up in the process list as:

root      5671  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/0:1]
root      5672  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/1:2]
root      5673  0.0  0.0      0     0 ?        S    12:12   0:00 [kworker/0:0]
root      5674  0.0  0.0      0     0 ?        S    12:13   0:00 [kworker/1:0]

If kworkers are going crazy (using too much CPU), there are two types
of possible problems:

	1. Something being scheduled in rapid succession
	2. A single work item that consumes lots of CPU cycles

The first one can be tracked using tracing:

	$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
	$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
	(wait a few secs)
	^C

If something is busy looping on work queueing, it will dominate the
output and the offender can be determined from the work item function
in the trace.

For the second type of problem, it should be possible to just check
the stack trace of the offending worker thread.

	$ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack
trace.