Block IO Controller
===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

Currently two IO control policies are implemented. The first one is a
proportional weight time based division of disk policy. It is implemented
in CFQ. Hence this policy takes effect only on leaf nodes when CFQ is being
used. The second one is a throttling policy which can be used to specify
upper IO rate limits on devices. This policy is implemented in the generic
block layer and can be used on leaf nodes as well as higher level logical
devices like device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test of running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into kernel and mount IO controller (blkio); see
  cgroups.txt, Why are cgroups needed?.

	mount -t tmpfs cgroup_root /sys/fs/cgroup
	mkdir /sys/fs/cgroup/blkio
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two same size files (say 512MB each) on same disk (file1, file2) and
  launch two dd threads in different cgroups to read those files.

	sync
	echo 3 > /proc/sys/vm/drop_caches

	dd if=/mnt/sdb/zerofile1 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test1/tasks
	cat /sys/fs/cgroup/blkio/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test2/tasks
	cat /sys/fs/cgroup/blkio/test2/tasks

- At macro level, the first dd should finish first. To get more precise data,
  keep looking (with the help of a script, e.g. the sketch below) at the
  blkio.time and blkio.sectors files of both test1 and test2 groups. This
  will tell how much disk time (in milliseconds) each group got and how many
  sectors each group dispatched to the disk. We provide fairness in terms of
  disk time, so ideally blkio.time of the cgroups should be in proportion to
  the weight.

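  For example, a minimal watch loop (a sketch; the paths assume the test1
  and test2 groups created above):

	# print each group's accumulated disk time once per second
	while sleep 1; do
		grep . /sys/fs/cgroup/blkio/test1/blkio.time \
			/sys/fs/cgroup/blkio/test2/blkio.time
	done
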
Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
	CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor> <bytes_per_second>".

	echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if the rate is throttled to 1MB/s or not.

	# dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
	1024+0 records in
	1024+0 records out
	4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

  Limits for writes can be put using the blkio.throttle.write_bps_device
  file.

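  For example, an analogous 1MB/s write limit would be set with:

	# device 8:16 and the 1MB/s value are illustrative
	echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
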
Hierarchical Cgroups
====================
- Currently none of the IO control policies supports hierarchical groups.
  But the cgroup interface does allow creation of hierarchical cgroups and
  internally the IO policies treat them as a flat hierarchy.

  So this patch will allow creation of a cgroup hierarchy but at the backend
  everything will be treated as flat. So if somebody creates a hierarchy like
  the following,

			root
		       /    \
		   test1    test2
		     |
		   test3

  CFQ and throttling will practically treat all groups at the same level.

			pivot
		    /  /  \  \
		root test1 test2 test3

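  For example, the above hierarchy could be created as follows (a sketch
  using the mount point from the HOWTO section; the cgroup interface permits
  this even though the IO policies treat the result as flat):

	mkdir /sys/fs/cgroup/blkio/test1
	mkdir /sys/fs/cgroup/blkio/test2
	mkdir /sys/fs/cgroup/blkio/test1/test3
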
  Down the line we can implement hierarchical accounting/control support
  and also introduce a new cgroup file "use_hierarchy" which will control
  whether the cgroup hierarchy is viewed as flat or hierarchical by the
  policy. This is how the memory controller has implemented it as well.

Various user visible config options
===================================
CONFIG_BLK_CGROUP
	- Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
	- Debug help. Right now some additional stats files show up in cgroup
	  if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
	- Enables group scheduling in CFQ. Currently only 1 level of group
	  creation is allowed.

CONFIG_BLK_DEV_THROTTLING
	- Enable block device throttling support in block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
	- Specifies per cgroup weight. This is the default weight of the group
	  on all the devices unless overridden by a per device rule
	  (see blkio.weight_device).
	  Currently allowed range of weights is from 10 to 1000.

- blkio.weight_device
	- One can specify per cgroup per device rules using this interface.
	  These rules override the default value of group weight as specified
	  by blkio.weight.

	  Following is the format.

	  # echo dev_maj:dev_minor weight > blkio.weight_device
	  Configure weight=300 on /dev/sdb (8:16) in this cgroup
	  # echo 8:16 300 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

	  Configure weight=500 on /dev/sda (8:0) in this cgroup
	  # echo 8:0 500 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:0     500
	  8:16    300

	  Remove specific weight for /dev/sda in this cgroup
	  # echo 8:0 0 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

- blkio.time
	- Disk time allocated to cgroup per device in milliseconds. First
	  two fields specify the major and minor number of the device and
	  third field specifies the disk time allocated to group in
	  milliseconds.

- blkio.sectors
	- Number of sectors transferred to/from disk by the group. First
	  two fields specify the major and minor number of the device and
	  third field specifies the number of sectors transferred by the
	  group to/from the device.

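  For illustration, both files contain one line per device, following the
  field layout described above (device number and values hypothetical):

	# cat blkio.time
	8:16 5546
	# cat blkio.sectors
	8:16 1049600
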
- blkio.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

- blkio.io_serviced
	- Number of IOs completed to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of IOs.

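  Both files share the same four-field layout. An illustrative read, meant
  only as a sketch of the format described above (device number, operation
  labels and values hypothetical):

	# cat blkio.io_service_bytes
	8:16 Read 1310720
	8:16 Write 0
	8:16 Sync 1310720
	8:16 Async 0
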
- blkio.io_service_time
	- Total amount of time between request dispatch and request completion
	  for the IOs done by this cgroup. This is in nanoseconds to make it
	  meaningful for flash devices too. For devices with queue depth of 1,
	  this time represents the actual service time. When queue_depth > 1,
	  that is no longer true as requests may be served out of order. This
	  may cause the service time for a given IO to include the service time
	  of multiple IOs when served out of order which may result in total
	  io_service_time > actual time elapsed. This time is further divided by
	  the type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  io_service_time in ns.

- blkio.io_wait_time
	- Total amount of time the IOs for this cgroup spent waiting in the
	  scheduler queues for service. This can be greater than the total time
	  elapsed since it is the cumulative io_wait_time for all IOs. It is not
	  a measure of total time the cgroup spent waiting but rather a measure
	  of the wait_time for its individual IOs. For devices with
	  queue_depth > 1 this metric does not include the time spent after an
	  IO is dispatched to the device until it actually gets serviced (there
	  might be a time lag here due to re-ordering of requests by the
	  device). This is in nanoseconds to make it meaningful for flash
	  devices too. This time is further divided by the type of operation -
	  read or write, sync or async. First two fields specify the major and
	  minor number of the device, third field specifies the operation type
	  and the fourth field specifies the io_wait_time in ns.

- blkio.io_merged
	- Total number of bios/requests merged into requests belonging to this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.io_queued
	- Total number of requests queued up at any given instant for this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.avg_queue_size
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  The average queue size for this cgroup over the entire time of this
	  cgroup's existence. Queue size samples are taken each time one of the
	  queues of this cgroup gets a timeslice.

- blkio.group_wait_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time the cgroup had to wait since it became busy
	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
	  its queues. This is different from the io_wait_time which is the
	  cumulative total of the amount of time spent by each IO in that cgroup
	  waiting in the scheduler queue. This is in nanoseconds. If this is
	  read when the cgroup is in a waiting (for timeslice) state, the stat
	  will only report the group_wait_time accumulated till the last time it
	  got a timeslice and will not include the current delta.

- blkio.empty_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time a cgroup spends without any pending
	  requests when not being served, i.e., it does not include any time
	  spent idling for one of the queues of the cgroup. This is in
	  nanoseconds. If this is read when the cgroup is in an empty state,
	  the stat will only report the empty_time accumulated till the last
	  time it had a pending request and will not include the current delta.

- blkio.idle_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time spent by the IO scheduler idling for a
	  given cgroup in anticipation of a better request than the existing
	  ones from other queues/cgroups. This is in nanoseconds. If this is
	  read when the cgroup is in an idling state, the stat will only report
	  the idle_time accumulated till the last idle period and will not
	  include the current delta.

- blkio.dequeue
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
	  gives the statistics about how many times a group was dequeued
	  from the service tree of the device. First two fields specify the
	  major and minor number of the device and third field specifies the
	  number of times a group was dequeued from a particular device.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in IOs per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in IOs per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.

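For example, to subject reads in the root group to both a bandwidth and an
IOPS limit at once:

	# device 8:16 and both limit values are illustrative
	echo "8:16 2097152" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
	echo "8:16 1024" > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device
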
- blkio.throttle.io_serviced
	- Number of IOs (bios) completed to/from the disk by the group (as
	  seen by the throttling policy). These are further divided by the
	  type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  number of IOs.

	  blkio.io_serviced does accounting as seen by CFQ and counts are in
	  number of requests (struct request). On the other hand,
	  blkio.throttle.io_serviced counts the number of IOs in terms of the
	  number of bios as seen by the throttling policy. These bios can
	  later be merged by the elevator and the total number of requests
	  completed can be smaller.

- blkio.throttle.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

	  These numbers should be roughly the same as blkio.io_service_bytes
	  as updated by CFQ. The difference between the two is that
	  blkio.io_service_bytes will not be updated if CFQ is not operating
	  on the request queue.

Common files among various policies
-----------------------------------
- blkio.reset_stats
	- Writing an int to this file will result in resetting all the stats
	  for that cgroup.

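  For example (the path assumes the test1 group from the HOWTO; any int
  works):

	echo 1 > /sys/fs/cgroup/blkio/test1/blkio.reset_stats
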
CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue and a single queue might not
drive deeper request queue depths to keep the storage busy. In such scenarios
one can try setting slice_idle=0 and that would switch CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.

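For example, on a hypothetical disk sdb:

	echo 0 > /sys/block/sdb/queue/iosched/slice_idle
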
/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in those groups which are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.

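Again, on a hypothetical disk sdb:

	echo 0 > /sys/block/sdb/queue/iosched/group_idle
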
What works
==========
- Currently only sync IO queues are supported. All the buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation for buffered writes between groups.