/linux/Documentation/admin-guide/cgroup-v1/ |
A D | freezer-subsystem.rst | 6 and stop sets of tasks in order to schedule the resources of a machine 11 a means to start and stop the tasks composing the job. 15 image of the tasks by attempting to force the tasks in a cgroup into a 16 quiescent state. Once the tasks are quiescent another task can 18 quiesced tasks. Checkpointed tasks can be restarted later should a 21 to another node and restarting the tasks there. 49 have a problem with this method of stopping and resuming tasks. 70 FREEZING cgroup transitions into FROZEN state when all tasks 105 # echo $some_pid > /sys/fs/cgroup/freezer/0/tasks 112 to freeze all tasks in the container:: [all …]
|
A D | cpuacct.rst | 5 The CPU accounting controller is used to group tasks using cgroups and 6 account the CPU usage of these groups of tasks. 9 group accumulates the CPU usage of all of its child groups and the tasks 17 visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in 18 the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup. 20 by this group which is essentially the CPU time obtained by all the tasks 27 # echo $$ > g1/tasks 38 user: Time spent by tasks of the cgroup in user mode. 39 system: Time spent by tasks of the cgroup in kernel mode.
|
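The cpuacct.rst snippet above describes reading a group's accumulated CPU time: `cpuacct.usage` is a single integer, the nanoseconds of CPU time consumed by all tasks in the group and its children. A minimal userspace sketch of parsing such a file (the helper name is hypothetical; it assumes a cgroup-v1 cpuacct hierarchy mounted at whatever path you pass in):

```c
#include <stdio.h>
#include <inttypes.h>

/* Read a cpuacct.usage-style file: one integer, the CPU time in
 * nanoseconds accumulated by every task in the group (and children).
 * Returns 0 on success and stores the value in *ns.
 * (Hypothetical helper, not part of any kernel or libc API.) */
static int read_cpuacct_usage(const char *path, uint64_t *ns)
{
	FILE *fp = fopen(path, "r");
	if (!fp)
		return -1;
	int ok = fscanf(fp, "%" SCNu64, ns) == 1;
	fclose(fp);
	return ok ? 0 : -1;
}
```

In real use the path would be something like `/sys/fs/cgroup/cpuacct/g1/cpuacct.usage`; dividing the result by 1e9 gives seconds of CPU time.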
A D | cgroups.rst | 81 tasks in each cgroup. 189 css_set->tasks. 237 - tasks: list of tasks (by PID) attached to that cgroup. This list 337 /bin/echo $$ > tasks 423 # /bin/echo $$ > tasks 443 # /bin/echo PID > tasks 448 # /bin/echo PID1 > tasks 449 # /bin/echo PID2 > tasks 451 # /bin/echo PIDn > tasks 455 # echo 0 > tasks [all …]
|
A D | cpusets.rst | 47 Cpusets constrain the CPU and Memory placement of tasks to only 307 set of tasks in the cpuset. 414 have tasks running on them unless explicitly assigned. 697 /sys/fs/cgroup/cpuset tasks file for that cpuset. 710 /bin/echo $$ > tasks 766 cpuset.memory_migrate tasks 787 # /bin/echo $$ > tasks 847 # /bin/echo PID > tasks 852 # /bin/echo PID1 > tasks 853 # /bin/echo PID2 > tasks [all …]
|
A D | memcg_test.rst | 188 /bin/echo $pid >$2/tasks 2>/dev/null 195 G1_TASK=`cat ${G1}/tasks` 196 G2_TASK=`cat ${G2}/tasks` 259 # echo 0 > /cgroup/test/tasks 265 # move all tasks in /cgroup/test to /cgroup 275 Out-of-memory caused by memcg's limit will kill tasks under 279 In this case, panic_on_oom shouldn't be invoked and tasks 306 #echo $$ >/cgroup/A/tasks 314 #echo "pid of the program running in group A" >/cgroup/B/tasks 335 # echo $$ >/cgroup/A/tasks
|
/linux/Documentation/admin-guide/hw-vuln/ |
A D | core-scheduling.rst | 6 Core scheduling support allows userspace to define groups of tasks that can 8 group of tasks doesn't trust another), or for performance use cases (some 42 well as admission and removal of tasks from created groups:: 67 will be performed for all tasks in the task group of ``pid``. 77 Building hierarchies of tasks 91 scheduling group and share it with already running tasks. 158 cookie-0 tasks is also avoided. 160 Once userspace uses the above mentioned interfaces to group sets of tasks, tasks 162 outside. Tasks outside the group also don't trust tasks within. 207 allowing system processes (trusted tasks) to share a core. [all …]
|
/linux/Documentation/scheduler/ |
A D | sched-design-CFS.rst | 19 1/nr_running speed. For example: if there are 2 tasks running, then it runs 26 is its actual runtime normalized to the total number of running tasks. 62 increasing value tracking the smallest vruntime among all tasks in the 125 policy that is used for regular tasks. 127 - SCHED_BATCH: Does not preempt nearly as often as regular tasks 218 SCHED_RR) tasks. 221 SCHED_BATCH) tasks. 236 # mkdir multimedia # create "multimedia" group of tasks 237 # mkdir browser # create "browser" group of tasks 246 # echo <firefox_pid> > browser/tasks [all …]
|
A D | sched-deadline.rst | 53 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive 74 tasks in the following way: 204 runqueue, including the tasks in Inactive state. 269 there are no inactive tasks. 352 non-real-time tasks by real-time tasks. 379 of all the tasks running on a CPU if the sum of the densities of the tasks 456 tasks are not starved and that the tardiness of real-time tasks has an upper 461 the tasks are limited. 566 to -deadline tasks is similar to the one already used for -rt 664 SCHED_DEADLINE tasks. [all …]
|
A D | sched-rt-group.rst | 14 2.3 Basis for grouping tasks 44 multiple groups of realtime tasks, each group must be assigned a fixed portion 57 tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by 72 The remaining CPU time will be used for user input and other tasks. Because 73 realtime tasks have explicitly allocated the CPU time they need to perform 74 their tasks, buffer underruns in the graphics or audio can be eliminated. 110 SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away 111 realtime task will not lock up the machine but leave a little time to recover 121 not be able to run realtime tasks as any user other than root until you have 126 2.3 Basis for grouping tasks [all …]
|
A D | schedutil.txt | 11 individual tasks to task-group slices to CPU runqueues. As the basis for this 24 is key, since it gives the ability to recompose the averages when tasks move 27 Note that blocked tasks still contribute to the aggregates (task-group slices 92 Because periodic tasks have their averages decayed while they sleep, even 101 A further runqueue wide sum (of runnable tasks) is maintained of: 112 the runqueue keeps a max aggregate of these clamps for all running tasks. 144 XXX: deadline tasks (Sporadic Task Model) allows us to calculate a hard f_min 162 suppose we have a CPU saturated with 4 tasks, then when we migrate a task
|
/linux/samples/bpf/ |
A D | tracex2_user.c | 86 static struct task tasks[1024]; in print_hist() local 94 if (memcmp(&tasks[i], &next_key, SIZE) == 0) in print_hist() 97 memcpy(&tasks[task_cnt++], &next_key, SIZE); in print_hist() 103 (__u32) tasks[i].pid_tgid, in print_hist() 104 tasks[i].comm, in print_hist() 105 (__u32) tasks[i].uid_gid); in print_hist() 106 print_hist_for_pid(fd, &tasks[i]); in print_hist()
|
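The tracex2_user.c lines above deduplicate map keys into a fixed `tasks[]` array with `memcmp`/`memcpy` before printing each one. The same seen-before pattern in isolation (simplified to a plain struct, no BPF involved; names are illustrative only):

```c
#include <string.h>

struct key {
	unsigned int pid;
	char comm[16];
};

/* Append *next to tasks[] only if an equal entry is not already
 * present; returns the updated count. Mirrors the memcmp/memcpy
 * loop in the snippet. Callers must zero-initialize keys so that
 * memcmp over the whole struct is meaningful. */
static int record_unique(struct key *tasks, int cnt, const struct key *next)
{
	for (int i = 0; i < cnt; i++)
		if (memcmp(&tasks[i], next, sizeof(*next)) == 0)
			return cnt;	/* already seen */
	memcpy(&tasks[cnt], next, sizeof(*next));
	return cnt + 1;
}
```

This is O(n) per insertion, which is fine for the sample's small fixed-size array; a hash table would be the idiomatic choice at larger scale.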
A D | map_perf_test_user.c | 95 static int pre_test_lru_hash_lookup(int tasks) in pre_test_lru_hash_lookup() argument 293 typedef int (*pre_test_func)(int tasks); 313 static int pre_test(int tasks) in pre_test() argument 319 int ret = pre_test_funcs[i](tasks); in pre_test() 344 static void run_perf_test(int tasks) in run_perf_test() argument 346 pid_t pid[tasks]; in run_perf_test() 349 assert(!pre_test(tasks)); in run_perf_test() 351 for (i = 0; i < tasks; i++) { in run_perf_test() 361 for (i = 0; i < tasks; i++) { in run_perf_test()
|
A D | test_overhead_user.c | 99 static void run_perf_test(int tasks, int flags) in run_perf_test() argument 101 pid_t pid[tasks]; in run_perf_test() 104 for (i = 0; i < tasks; i++) { in run_perf_test() 114 for (i = 0; i < tasks; i++) { in run_perf_test()
|
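Both map_perf_test_user.c and test_overhead_user.c drive `run_perf_test(int tasks)` the same way: fork one child per task into a VLA of PIDs, run the workload in each, then collect them all. The fork/join skeleton on its own (the worker body is a placeholder, not the samples' benchmark code):

```c
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Placeholder worker; the real samples run a map or syscall
 * benchmark here. */
static void worker(int i)
{
	(void)i;
}

/* Fork `tasks` children, run fn(i) in each, and return 0 only if
 * every child exited with status 0. Skeleton of run_perf_test(). */
static int run_tasks(int tasks, void (*fn)(int))
{
	pid_t pid[tasks];	/* VLA, as in the samples */

	for (int i = 0; i < tasks; i++) {
		pid[i] = fork();
		if (pid[i] == 0) {	/* child */
			fn(i);
			exit(0);
		} else if (pid[i] < 0) {
			return -1;
		}
	}

	int err = 0;
	for (int i = 0; i < tasks; i++) {
		int status;

		waitpid(pid[i], &status, 0);
		if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
			err = -1;
	}
	return err;
}
```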
/linux/Documentation/power/ |
A D | freezing-of-tasks.rst | 2 Freezing of tasks 7 I. What is the freezing of tasks? 10 The freezing of tasks is a mechanism by which user space processes and some 18 and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have 77 - freezes only userspace tasks 81 kernel threads without freezing userspace tasks 86 userspace tasks, or if we want to postpone the thawing of userspace tasks 90 tasks without thawing kernel threads 139 "RJW:> Why we freeze tasks at all or why we freeze kernel threads? 171 V. Are there any problems related to the freezing of tasks? [all …]
|
/linux/kernel/sched/ |
A D | psi.c | 232 return unlikely(tasks[NR_IOWAIT] && !tasks[NR_RUNNING]); in test_state() 236 return unlikely(tasks[NR_MEMSTALL] && !tasks[NR_RUNNING]); in test_state() 238 return unlikely(tasks[NR_RUNNING] > tasks[NR_ONCPU]); in test_state() 240 return unlikely(tasks[NR_RUNNING] && !tasks[NR_ONCPU]); in test_state() 242 return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] || in test_state() 243 tasks[NR_RUNNING]; in test_state() 710 if (groupc->tasks[t]) { in psi_group_change() 711 groupc->tasks[t]--; in psi_group_change() 714 cpu, t, groupc->tasks[0], in psi_group_change() 715 groupc->tasks[1], groupc->tasks[2], in psi_group_change() [all …]
|
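The psi.c lines above evaluate pressure states from per-CPU task counts: "some" I/O pressure means a task is stalled in iowait while nothing runs, "some" memory pressure is the same with a memstall task, and "some" CPU pressure means more runnable tasks than the one on the CPU. A userspace restatement of those predicates (the enum mirrors the kernel's NR_* counters; this is a simplified sketch, not the kernel code):

```c
#include <stdbool.h>

enum { NR_IOWAIT, NR_MEMSTALL, NR_RUNNING, NR_ONCPU, NR_PSI_TASK_COUNTS };

/* A task is stalled on I/O and nothing productive is running. */
static bool some_io(const unsigned int tasks[])
{
	return tasks[NR_IOWAIT] && !tasks[NR_RUNNING];
}

/* A task is stalled reclaiming memory and nothing is running. */
static bool some_mem(const unsigned int tasks[])
{
	return tasks[NR_MEMSTALL] && !tasks[NR_RUNNING];
}

/* Runnable tasks are queued behind the one occupying the CPU. */
static bool some_cpu(const unsigned int tasks[])
{
	return tasks[NR_RUNNING] > tasks[NR_ONCPU];
}
```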
/linux/tools/perf/scripts/python/ |
A D | sched-migration.py | 101 self.tasks = tuple(tasks) 107 if taskState(prev_state) == "R" and next in self.tasks \ 108 and prev in self.tasks: 114 next_tasks = list(self.tasks[:]) 115 if prev in self.tasks: 127 if old not in self.tasks: 134 if new in self.tasks: 137 next_tasks = self.tasks[:] + tuple([new]) 153 return len(self.tasks) - 1 156 ret = self.tasks.__repr__() [all …]
|
/linux/drivers/gpu/drm/ |
A D | drm_flip_work.c | 117 struct list_head tasks; in flip_worker() local 123 INIT_LIST_HEAD(&tasks); in flip_worker() 125 list_splice_tail(&work->commited, &tasks); in flip_worker() 129 if (list_empty(&tasks)) in flip_worker() 132 list_for_each_entry_safe(task, tmp, &tasks, node) { in flip_worker()
|
/linux/Documentation/admin-guide/kdump/ |
A D | gdbmacros.txt | 17 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks) 20 set $next_t=(((char *)($init_t->tasks).next) - $tasks_off) 51 set $next_t=(char *)($next_t->tasks.next) - $tasks_off 83 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks) 86 set $next_t=(((char *)($init_t->tasks).next) - $tasks_off) 97 set $next_t=(char *)($next_t->tasks.next) - $tasks_off 106 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks) 109 set $next_t=(((char *)($init_t->tasks).next) - $tasks_off) 127 set $next_t=(char *)($next_t->tasks.next) - $tasks_off 139 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks) [all …]
|
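The gdbmacros.txt lines above walk the kernel task list by hand: `$tasks_off` is the byte offset of the embedded `tasks` list node inside `struct task_struct`, and subtracting it from each `->next` pointer recovers the enclosing task. The same container_of-style arithmetic in plain C on a toy circular list (toy struct, not the kernel's):

```c
#include <stddef.h>

struct node {
	struct node *next;
};

struct task {
	int pid;
	struct node tasks;	/* embedded list node, as in task_struct */
};

/* Recover the enclosing struct task from a pointer to its embedded
 * list node, exactly the $tasks_off subtraction the gdb macros do. */
static struct task *task_of(struct node *n)
{
	return (struct task *)((char *)n - offsetof(struct task, tasks));
}
```

Walking the list is then `task_of(cur->tasks.next)` repeated until it wraps back to the initial task, which is what the macros' loop over `$next_t` does.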
/linux/tools/perf/Documentation/ |
A D | perf-timechart.txt | 48 --tasks-only:: 60 Print task info for at least given number of tasks. 65 Highlight tasks (using different color) that run more than given 66 duration or tasks with given name. If number is given it's interpreted 89 --tasks-only:: 90 Record only tasks-related events 114 then generate timechart and highlight 'gcc' tasks:
|
/linux/Documentation/livepatch/ |
A D | livepatch.rst | 85 transition state where tasks are converging to the patched state. 95 safe to patch tasks: 98 tasks. If no affected functions are on the stack of a given task, 100 the tasks on the first try. Otherwise it'll keep trying 108 a) Patching I/O-bound user tasks which are sleeping on an affected 111 b) Patching CPU-bound user tasks. If the task is highly CPU-bound 122 the second approach. It's highly likely that some tasks may still be 124 returns. In this case you would have to signal the tasks. This 139 the transition is in progress. Then all the tasks will attempt to 156 TIF_PATCH_PENDING flag of all tasks and thus forces the tasks to the patched [all …]
|
/linux/Documentation/x86/x86_64/ |
A D | fake-numa-for-cpusets.rst | 14 assign them to cpusets and their attached tasks. This is a way of limiting the 15 amount of system memory that are available to a certain class of tasks. 56 You can now assign tasks to these cpusets to limit the memory resources 59 [root@xroads /exampleset/ddset]# echo $$ > tasks 75 This allows for coarse memory management for the tasks you assign to particular 77 interesting combinations of use-cases for various classes of tasks for your
|
/linux/Documentation/locking/ |
A D | futex-requeue-pi.rst | 5 Requeueing of tasks from a non-PI futex to a PI futex requires 17 pthread_cond_broadcast() must resort to waking all the tasks waiting 47 Once pthread_cond_broadcast() requeues the tasks, the cond->mutex 54 be able to requeue tasks to PI futexes. This support implies that 113 possibly wake the waiting tasks. Internally, this system call is 118 nr_wake+nr_requeue tasks to the PI futex, calling 126 requeue up to nr_wake + nr_requeue tasks. It will wake only as many 127 tasks as it can acquire the lock for, which in the majority of cases
|
/linux/drivers/misc/bcm-vk/ |
A D | Kconfig | 11 multiple specific offload processing tasks in parallel. 12 Such offload tasks assist in such operations as video 13 transcoding, compression, and crypto tasks.
|
/linux/tools/testing/selftests/resctrl/ |
A D | resctrlfs.c | 399 static int write_pid_to_tasks(char *tasks, pid_t pid) in write_pid_to_tasks() argument 403 fp = fopen(tasks, "w"); in write_pid_to_tasks() 439 char tasks[1024]; in write_bm_pid_to_resctrl() local 451 sprintf(tasks, "%s/tasks", controlgroup); in write_bm_pid_to_resctrl() 452 ret = write_pid_to_tasks(tasks, bm_pid); in write_bm_pid_to_resctrl() 466 sprintf(tasks, "%s/mon_groups/%s/tasks", in write_bm_pid_to_resctrl() 468 ret = write_pid_to_tasks(tasks, bm_pid); in write_bm_pid_to_resctrl()
|
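In the resctrlfs.c lines above, `write_bm_pid_to_resctrl()` builds a `<group>/tasks` path with `sprintf` and hands it to `write_pid_to_tasks()`, which opens the file and writes the PID; the kernel then moves that task into the group. A standalone sketch of that helper (same shape as the selftest, minus its error-message plumbing):

```c
#include <stdio.h>
#include <sys/types.h>

/* Write `pid` to a resctrl or cgroup "tasks" file.
 * Returns 0 on success, -1 on error. Sketch of the selftest helper. */
static int write_pid_to_tasks(const char *tasks, pid_t pid)
{
	FILE *fp = fopen(tasks, "w");
	int ok;

	if (!fp)
		return -1;
	ok = fprintf(fp, "%d\n", (int)pid) > 0;
	return (fclose(fp) == 0 && ok) ? 0 : -1;
}
```

Checking the return value of `fclose()` matters here: writes to kernel control files are often validated at close/flush time, so an invalid PID can succeed at `fprintf` and still fail the overall operation.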
/linux/Documentation/RCU/ |
A D | stallwarn.rst | 188 This boot/sysfs parameter controls the RCU-tasks stall warning 189 interval. A value of zero or less suppresses RCU-tasks stall 191 in seconds. An RCU-tasks stall warning starts with the line: 193 INFO: rcu_tasks detected stalls on tasks: 196 task stalling the current RCU-tasks grace period. 202 For non-RCU-tasks flavors of RCU, when a CPU detects that some other 205 INFO: rcu_sched detected stalls on CPUs/tasks: 214 the tasks will be indicated by PID, for example, "P3421". It is even 281 If all CPUs and tasks have passed through quiescent states, but the 302 in RCU CPU stall warnings even when all CPUs and tasks have passed [all …]
|