=====================
CFS Bandwidth Control
=====================

.. note::
   This document only discusses CPU bandwidth control for SCHED_NORMAL.
   The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a task group is allocated up to "quota"
microseconds of CPU time. That quota is assigned to per-cpu run queues in
slices as threads in the cgroup become runnable. Once all quota has been
assigned, any additional requests for quota will result in those threads being
throttled. Throttled threads will not be able to run again until the next
period when the quota is replenished.

A group's unassigned quota is globally tracked, being refreshed back to
cfs_quota units at each period boundary. As threads consume this bandwidth it
is transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and described as the "slice".

Burst feature
-------------
This feature borrows time now against our future underrun, at the cost of
increased interference against the other system users. All nicely bounded.

Traditional (UP-EDF) bandwidth control is something like:

  (U = \Sum u_i) <= 1

This guarantees both that every deadline is met and that the system is
stable. After all, if U were > 1, then for every second of walltime,
we'd have to run more than a second of program time, and obviously miss
our deadline; but the next deadline will be further out still, there is
never time to catch up, unbounded fail.

The burst feature observes that a workload doesn't always execute the full
quota; this enables one to describe u_i as a statistical distribution.

For example, have u_i = {x,e}_i, where x is the p(95) and x+e the p(100)
(the traditional WCET). This effectively allows u to be smaller,
increasing the efficiency (we can pack more tasks in the system), but at
the cost of missing deadlines when all the odds line up. However, it
does maintain stability, since every overrun must be paired with an
underrun as long as our x is above the average.

That is, suppose we have 2 tasks, both specify a p(95) value, then we
have a p(95)*p(95) = 90.25% chance both tasks are within their quota and
everything is good. At the same time we have a p(5)*p(5) = 0.25% chance
both tasks will exceed their quota at the same time (guaranteed deadline
fail). Somewhere in between there's a threshold where one exceeds and
the other doesn't underrun enough to compensate; this depends on the
specific CDFs.

At the same time, we can say that the worst case deadline miss will be
\Sum e_i; that is, there is a bounded tardiness (under the assumption
that x+e is indeed WCET).

The interference when using burst is evaluated in terms of the probability
of missing a deadline and the average WCET. Test results showed that when
there are many cgroups, or when the CPU is under-utilized, the interference
is limited. More details are shown in:
https://lore.kernel.org/lkml/5371BD36-55AE-4F71-B9D7-B86DC32E3D2B@linux.alibaba.com/

Management
----------
Quota, period and burst are managed within the cpu subsystem via cgroupfs.

.. note::
   The cgroupfs files described in this section are only applicable
   to cgroup v1. For cgroup v2, see
   :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2-cpu>`.

- cpu.cfs_quota_us: run-time replenished within a period (in microseconds)
- cpu.cfs_period_us: the length of a period (in microseconds)
- cpu.stat: exports throttling statistics [explained further below]
- cpu.cfs_burst_us: the maximum accumulated run-time (in microseconds)

The default values are::

    cpu.cfs_period_us=100ms
    cpu.cfs_quota_us=-1
    cpu.cfs_burst_us=0

A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.
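These settings can be inspected directly through cgroupfs. Below is a
minimal sketch, assuming a cgroup v1 hierarchy with the cpu controller
mounted at /sys/fs/cgroup/cpu and a hypothetical group named "mygroup"
that is still at its defaults::

    # cd /sys/fs/cgroup/cpu/mygroup  /* hypothetical group path */
    # cat cpu.cfs_period_us          /* prints 100000, the 100ms default */
    # cat cpu.cfs_quota_us           /* prints -1, i.e. unconstrained */
    # cat cpu.cfs_burst_us           /* prints 0, i.e. no burst accumulation */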
Writing any (valid) positive value(s) no smaller than cpu.cfs_burst_us into
cpu.cfs_quota_us will enact the specified bandwidth limit. The minimum value
allowed for the quota or period is 1ms. There is also an upper bound on the
period length of 1s. Additional restrictions exist when bandwidth limits are
used in a hierarchical fashion; these are explained in more detail below.

Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.

A value of 0 for cpu.cfs_burst_us indicates that the group can not accumulate
any unused bandwidth. This leaves the traditional bandwidth control behavior
for CFS unchanged. Writing any (valid) positive value(s) no larger than
cpu.cfs_quota_us into cpu.cfs_burst_us will enact the cap on unused bandwidth
accumulation.

Any updates to a group's bandwidth specification will result in it becoming
unthrottled if it is in a constrained state.

System wide settings
--------------------
For efficiency run-time is transferred between the global pool and CPU local
"silos" in a batch fashion. This greatly reduces global accounting pressure
on large systems. The amount transferred each time such an update is required
is described as the "slice".

This is tunable via procfs::

    /proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values allow
for more fine-grained consumption.
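As a quick illustration, the slice can be inspected and adjusted at run
time; the new value below (10ms) is only an example::

    # cat /proc/sys/kernel/sched_cfs_bandwidth_slice_us
    5000
    # echo 10000 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us /* slice = 10ms */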
Statistics
----------
A group's bandwidth statistics are exported via 5 fields in cpu.stat.

cpu.stat:

- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.
- nr_bursts: Number of periods in which a burst occurred.
- burst_time: Cumulative wall-time (in nanoseconds) that any CPUs have used
  above quota in respective periods.
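For example, reading cpu.stat for a group that has not yet been throttled
or burst might look like the following; the zero values are illustrative::

    # cat cpu.stat
    nr_periods 0
    nr_throttled 0
    throttled_time 0
    nr_bursts 0
    burst_time 0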
This interface is read-only.

Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy:

  e.g. \Sum (c_i) may exceed C

[ Where C is the parent's bandwidth, and c_i its children ]


There are two ways in which a group may become throttled:

 a. it fully consumes its own quota within a period
 b. a parent's quota is fully consumed within its period

In case b) above, even though the child may have runtime remaining it will
not be allowed to run until the parent's runtime is refreshed.

CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However, all but 1ms of
the slice may be returned to the global pool if all threads on that cpu become
unrunnable. This is configured at compile time by the min_cfs_rq_runtime
variable. This is a performance tweak that helps prevent added contention on
the global lock.

The fact that cpu-local slices do not expire results in some interesting corner
cases that should be understood.

For cgroup cpu-constrained applications this is a relatively moot point because
they will naturally consume the entirety of their quota as well as the entirety
of each cpu-local slice in each period. As a result it is expected that
nr_periods roughly equals nr_throttled, and that cpuacct.usage will increase
by roughly cfs_quota_us in each period.

For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. This burst amount will not be transferred between cores.
As a result, this mechanism still strictly limits the task group to quota
average usage, albeit over a longer time window than a single period. This
also limits the burst ability to no more than 1ms per cpu. This provides a
better, more predictable user experience for highly threaded applications with
small quota limits on high core count machines. It also eliminates the
propensity to throttle these applications while simultaneously using less than
quota amounts of cpu. Another way to say this is that by allowing the unused
portion of a slice to remain valid across periods we have decreased the
possibility of wastefully expiring quota on cpu-local silos that don't need a
full slice's amount of cpu time.

The interaction between cpu-bound and non-cpu-bound interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU it is theoretically possible that the non-cpu bound application
will use up to 1ms additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application is chosen to run, as they will both be runnable and
have remaining quota. This runtime discrepancy will be made up in the following
periods when the interactive application idles.

Examples
--------
1. Limit a group to 1 CPU worth of runtime.

   If period is 250ms and quota is also 250ms, the group will get
   1 CPU worth of runtime every 250ms::

      # echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
      # echo 250000 > cpu.cfs_period_us /* period = 250ms */

2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.

   With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
   runtime every 500ms::

      # echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
      # echo 500000 > cpu.cfs_period_us /* period = 500ms */

   The larger period here allows for increased burst capacity.

3. Limit a group to 20% of 1 CPU.

   With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU::

      # echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
      # echo 50000 > cpu.cfs_period_us /* period = 50ms */

   By using a small period here we are ensuring a consistent latency
   response at the expense of burst capacity.

4. Limit a group to 40% of 1 CPU, and allow it to accumulate up to an
   additional 20% of 1 CPU when it has been running below its quota.

   With 50ms period, 20ms quota will be equivalent to 40% of 1 CPU.
   And 10ms burst will be equivalent to 20% of 1 CPU::

      # echo 20000 > cpu.cfs_quota_us /* quota = 20ms */
      # echo 50000 > cpu.cfs_period_us /* period = 50ms */
      # echo 10000 > cpu.cfs_burst_us /* burst = 10ms */

   A larger buffer setting (no larger than quota) allows greater burst
   capacity.
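Putting this together, the following end-to-end sketch creates a group,
applies the settings from example 4, and moves the current shell into it.
The mount point /sys/fs/cgroup/cpu and the group name "bursty" are
assumptions about the local setup, not requirements::

    # mkdir /sys/fs/cgroup/cpu/bursty  /* assumed v1 mount point */
    # cd /sys/fs/cgroup/cpu/bursty
    # echo 20000 > cpu.cfs_quota_us    /* quota = 20ms */
    # echo 50000 > cpu.cfs_period_us   /* period = 50ms */
    # echo 10000 > cpu.cfs_burst_us    /* burst = 10ms */
    # echo $$ > tasks                  /* attach the current shell */
    # cat cpu.stat                     /* nr_bursts/burst_time reflect any bursts */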