2011-10-25 10:00:11 +02:00
|
|
|
|
|
|
|
#include <linux/sched.h>
|
2013-02-07 09:46:59 -06:00
|
|
|
#include <linux/sched/sysctl.h>
|
2013-02-07 09:47:07 -06:00
|
|
|
#include <linux/sched/rt.h>
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
Introduces the data structures, constants and symbols needed for
the SCHED_DEADLINE implementation.
The core data structures of SCHED_DEADLINE are defined, along with
their initializers. Hooks for checking whether a task belongs to the
new policy are also added where they are needed.
Adds a scheduling class, in sched/dl.c, and a new policy called
SCHED_DEADLINE. It is an implementation of the Earliest Deadline
First (EDF) scheduling algorithm, augmented with a mechanism (called
Constant Bandwidth Server, CBS) that makes it possible to isolate
the behaviour of tasks from each other.
The typical -deadline task is made up of a computation phase
(instance) which is activated in a periodic or sporadic fashion. The
expected (maximum) duration of such a computation is called the task's
runtime; the time interval within which each instance needs to be
completed is called the task's relative deadline. The task's absolute
deadline is dynamically calculated as the time instant a task (more
precisely, an instance) activates plus the relative deadline.
The EDF algorithm selects the task with the smallest absolute
deadline to be executed first, while the CBS ensures that each
task runs for at most its runtime in every relative-deadline-long
time interval, avoiding any interference between different
tasks (bandwidth isolation).
Thanks to this feature, even tasks that do not strictly comply with
the computational model sketched above can effectively use the new
policy.
To summarize, this patch:
- introduces the data structures, constants and symbols needed;
- implements the core logic of the scheduling algorithm in the new
scheduling class file;
- provides all the glue code between the new scheduling class and
the core scheduler and refines the interactions between sched/dl
and the other existing scheduling classes.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Michael Trimarchi <michael@amarulasolutions.com>
Signed-off-by: Fabio Checconi <fchecconi@gmail.com>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-4-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-28 11:14:43 +01:00
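As a rough illustration of the task model described above (a hedged
sketch with hypothetical names, not the actual kernel data structures):

/*
 * Sketch of the -deadline task model: an instance activating at time t
 * must complete by t + the relative deadline, and the CBS grants it at
 * most 'runtime' of CPU time in every relative-deadline-long interval.
 */
struct dl_params_sketch {
	u64 runtime;		/* expected (maximum) computation time per instance */
	u64 deadline;		/* relative deadline of each instance */
};

static inline u64 dl_abs_deadline_sketch(struct dl_params_sketch *p, u64 activation)
{
	return activation + p->deadline;	/* absolute deadline, used for EDF ordering */
}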
|
|
|
#include <linux/sched/deadline.h>
|
2011-10-25 10:00:11 +02:00
|
|
|
#include <linux/mutex.h>
|
|
|
|
#include <linux/spinlock.h>
|
|
|
|
#include <linux/stop_machine.h>
|
sched/rt: Use IPI to trigger RT task push migration instead of pulling
When debugging the latencies on a 40 core box, where we hit 300 to
500 microsecond latencies, I found there was a huge contention on the
runqueue locks.
Investigating it further, running ftrace, I found that it was due to
the pulling of RT tasks.
The test that was run was the following:
cyclictest --numa -p95 -m -d0 -i100
This created a thread on each CPU that would set its wakeup in iterations
of 100 microseconds. The -d0 means that all the threads had the same
interval (100us). Each thread sleeps for 100us and wakes up and measures
its latencies.
cyclictest is maintained at:
git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
What happened was that another RT task would be scheduled on one of the CPUs
that was running our test, when the other CPU tests went to sleep and
scheduled idle. This caused the "pull" operation to execute on all
these CPUs. Each one of these saw the RT task that was overloaded on
the CPU of the test that was still running, and each one tried
to grab that task in a thundering herd way.
To grab the task, each thread would do a double rq lock grab, grabbing
its own lock as well as the rq of the overloaded CPU. As the sched
domains on this box were rather flat for its size, I saw up to 12 CPUs
block on this lock at once. This caused a ripple effect with the
rq locks especially since the taking was done via a double rq lock, which
means that several of the CPUs had their own rq locks held while trying
to take this rq lock. As these locks were blocked, any wakeups or load
balancing on these CPUs would also block on these locks, and the wait
time escalated.
I've tried various methods to lessen the load, but things like an
atomic counter to only let one CPU grab the task won't work, because
the task may have a limited affinity, and we may pick the wrong
CPU to take that lock and do the pull, only to find out that the
CPU we picked isn't in the task's affinity.
Instead of doing the PULL, I now have the CPUs that want the pull to
send over an IPI to the overloaded CPU, and let that CPU pick what
CPU to push the task to. No more need to grab the rq lock, and the
push/pull algorithm still works fine.
With this patch, the latency dropped to just 150us over a 20 hour run.
Without the patch, the huge latencies would trigger in seconds.
I've created a new sched feature called RT_PUSH_IPI, which is enabled
by default.
When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
and having the pulling CPU do the work is implemented. When RT_PUSH_IPI
is enabled, the IPI is sent to the overloaded CPU to do a push.
To enable or disable this at run time:
# mount -t debugfs nodev /sys/kernel/debug
# echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
or
# echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features
Update: This original patch would send an IPI to all CPUs in the RT overload
list. But that could theoretically cause the reverse issue. That is, there
could be lots of overloaded RT queues and one CPU lowers its priority. It would
then send an IPI to all the overloaded RT queues and they could then all try
to grab the rq lock of the CPU lowering its priority, and then we have the
same problem.
The latest design sends out only one IPI to the first overloaded CPU. It tries to
push any tasks that it can, and then looks for the next overloaded CPU that can
push to the source CPU. The IPIs stop when all overloaded CPUs that have pushable
tasks that have priorities greater than the source CPU are covered. In case the
source CPU lowers its priority again, a flag is set to tell the IPI traversal to
restart with the first RT overloaded CPU after the source CPU.
Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joern Engel <joern@purestorage.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-18 14:49:46 -04:00
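A rough sketch of the IPI chaining described above (illustrative only:
the helper and the per-CPU work item named here are hypothetical, not
the actual implementation):

/*
 * Hypothetical sketch of RT_PUSH_IPI: instead of every CPU pulling and
 * piling up on the overloaded rq lock, the source CPU asks one
 * overloaded CPU at a time to push, via irq_work.
 */
static void tell_overloaded_cpu_to_push_sketch(int this_cpu)
{
	int cpu = find_next_rt_overloaded_cpu(this_cpu);	/* hypothetical */

	if (cpu < nr_cpu_ids)
		irq_work_queue_on(&per_cpu_push_work[cpu], cpu);	/* hypothetical work item */
}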
|
|
|
#include <linux/irq_work.h>
|
2013-04-20 14:35:09 +02:00
|
|
|
#include <linux/tick.h>
|
2013-10-07 11:28:57 +01:00
|
|
|
#include <linux/slab.h>
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2011-11-15 17:14:39 +01:00
|
|
|
#include "cpupri.h"
|
2013-11-07 14:43:47 +01:00
|
|
|
#include "cpudeadline.h"
|
2013-03-29 14:36:43 +08:00
|
|
|
#include "cpuacct.h"
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2013-04-19 15:10:49 -04:00
|
|
|
struct rq;
|
2014-09-04 11:32:09 -04:00
|
|
|
struct cpuidle_state;
|
2013-04-19 15:10:49 -04:00
|
|
|
|
2014-08-20 13:47:32 +04:00
|
|
|
/* task_struct::on_rq states: */
|
|
|
|
#define TASK_ON_RQ_QUEUED 1
|
sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state
This is a new p->on_rq state which will be used to indicate that a task
is in the process of migrating between two RQs. It allows us to get
rid of double_rq_lock(), which we previously used to change the rq of
a queued task.
Let's consider an example. To move a task between src_rq and
dst_rq we will do the following:
	raw_spin_lock(&src_rq->lock);
	/* p is a task which is queued on src_rq */
	p = ...;
	dequeue_task(src_rq, p, 0);
	p->on_rq = TASK_ON_RQ_MIGRATING;
	set_task_cpu(p, dst_cpu);
	raw_spin_unlock(&src_rq->lock);

	/*
	 * Both RQs are unlocked here.
	 * Task p is dequeued from src_rq
	 * but its on_rq value is not zero.
	 */

	raw_spin_lock(&dst_rq->lock);
	p->on_rq = TASK_ON_RQ_QUEUED;
	enqueue_task(dst_rq, p, 0);
	raw_spin_unlock(&dst_rq->lock);
While p->on_rq is TASK_ON_RQ_MIGRATING, the task is considered to be
"migrating", and other scheduler actions on it are not available to
parallel callers. A parallel caller spins until the migration is
completed.
The unavailable actions are changing CPU affinity, changing priority,
etc., in other words all the functionality which used to require
task_rq(p)->lock before (and is related to the task).
To implement TASK_ON_RQ_MIGRATING support we primarily rely on
the following fact: most scheduler users (from which we are
protecting a migrating task) use task_rq_lock() and
__task_rq_lock() to get the lock of task_rq(p). These primitives
know that the task's CPU may change, and they spin while the
lock of the right RQ is not held. We add one more condition to
them, so that they also keep spinning until the migration is
finished.
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1408528062.23412.88.camel@tkhai
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-08-20 13:47:42 +04:00
|
|
|
#define TASK_ON_RQ_MIGRATING 2
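Helpers along these lines usually accompany these states (a sketch,
assuming the two defines above are the only non-zero p->on_rq values):

static inline int task_on_rq_queued(struct task_struct *p)
{
	return p->on_rq == TASK_ON_RQ_QUEUED;
}

static inline int task_on_rq_migrating(struct task_struct *p)
{
	return p->on_rq == TASK_ON_RQ_MIGRATING;
}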
|
2014-08-20 13:47:32 +04:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
extern __read_mostly int scheduler_running;
|
|
|
|
|
2013-04-19 15:10:49 -04:00
|
|
|
extern unsigned long calc_load_update;
|
|
|
|
extern atomic_long_t calc_load_tasks;
|
|
|
|
|
2015-04-14 13:19:42 +02:00
|
|
|
extern void calc_global_load_tick(struct rq *this_rq);
|
2015-07-27 16:52:12 -07:00
|
|
|
|
2013-04-19 15:10:49 -04:00
|
|
|
extern long calc_load_fold_active(struct rq *this_rq);
|
2015-04-14 13:19:42 +02:00
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
2013-04-19 15:10:49 -04:00
|
|
|
extern void update_cpu_load_active(struct rq *this_rq);
|
2015-04-14 13:19:42 +02:00
|
|
|
#else
|
|
|
|
static inline void update_cpu_load_active(struct rq *this_rq) { }
|
|
|
|
#endif
|
2013-04-19 15:10:49 -04:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* Helpers for converting nanosecond timing to jiffy resolution
|
|
|
|
*/
|
|
|
|
#define NS_TO_JIFFIES(TIME) ((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
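/* Example: with HZ == 1000, NSEC_PER_SEC / HZ == 1000000, so
 * NS_TO_JIFFIES(5000000) == 5, i.e. 5ms worth of jiffies. */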
|
|
|
|
|
2013-03-05 16:06:09 +08:00
|
|
|
/*
|
|
|
|
* Increase resolution of nice-level calculations for 64-bit architectures.
|
|
|
|
* The extra resolution improves shares distribution and load balancing of
|
|
|
|
* low-weight task groups (e.g. nice +19 on an autogroup), deeper taskgroup
|
|
|
|
* hierarchies, especially on larger systems. This is not a user-visible change
|
|
|
|
* and does not change the user-interface for setting shares/weights.
|
|
|
|
*
|
|
|
|
* We increase resolution only if we have enough bits to allow this increased
|
|
|
|
* resolution (i.e. BITS_PER_LONG > 32). The costs for increasing resolution
|
|
|
|
* when BITS_PER_LONG <= 32 are pretty high and the returns do not justify the
|
|
|
|
* increased costs.
|
|
|
|
*/
|
|
|
|
#if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load */
|
|
|
|
# define SCHED_LOAD_RESOLUTION 10
|
|
|
|
# define scale_load(w) ((w) << SCHED_LOAD_RESOLUTION)
|
|
|
|
# define scale_load_down(w) ((w) >> SCHED_LOAD_RESOLUTION)
|
|
|
|
#else
|
|
|
|
# define SCHED_LOAD_RESOLUTION 0
|
|
|
|
# define scale_load(w) (w)
|
|
|
|
# define scale_load_down(w) (w)
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#define SCHED_LOAD_SHIFT (10 + SCHED_LOAD_RESOLUTION)
|
|
|
|
#define SCHED_LOAD_SCALE (1L << SCHED_LOAD_SHIFT)
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#define NICE_0_LOAD SCHED_LOAD_SCALE
|
|
|
|
#define NICE_0_SHIFT SCHED_LOAD_SHIFT
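/*
 * Worked example of the scaling above: with SCHED_LOAD_RESOLUTION == 0,
 * SCHED_LOAD_SHIFT == 10 and NICE_0_LOAD == 1 << 10 == 1024; with the
 * (currently disabled) extra resolution they would be 20 and 1 << 20,
 * and scale_load()/scale_load_down() convert between the two ranges.
 */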
|
|
|
|
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
In order for deadline scheduling to be effective and useful, it is
important to have some method of keeping the allocation of the
available CPU bandwidth to tasks and task groups under control.
This is usually called "admission control" and if it is not performed
at all, no guarantee can be given on the actual scheduling of the
-deadline tasks.
Since RT-throttling was introduced, each task group has had a
bandwidth associated with it, calculated as a certain amount of
runtime over a period. Moreover, to make it possible to manipulate
such bandwidth, readable/writable controls have been added to both
procfs (for system wide settings) and cgroupfs (for per-group
settings).
Therefore, the same interface is being used for controlling the
bandwidth distribution to -deadline tasks and task groups, i.e.,
new controls but with similar names, equivalent meaning and with
the same usage paradigm are added.
However, more discussion is needed in order to figure out how
we want to manage SCHED_DEADLINE bandwidth at the task group level.
Therefore, this patch adds a less sophisticated, but actually
very sensible, mechanism to ensure that a certain utilization
cap is not exceeded per each root_domain (the single rq for !SMP
configurations).
Another main difference between deadline bandwidth management and
RT-throttling is that -deadline tasks have bandwidth on their own
(while -rt ones don't!), and thus we don't need a higher-level
throttling mechanism to enforce the desired bandwidth.
This patch, therefore:
- adds system wide deadline bandwidth management by means of:
* /proc/sys/kernel/sched_dl_runtime_us,
* /proc/sys/kernel/sched_dl_period_us,
that determine (i.e., runtime / period) the total bandwidth
available on each CPU of each root_domain for -deadline tasks;
- couples the RT and deadline bandwidth management, i.e., enforces
that the sum of the bandwidth devoted to -rt and -deadline tasks
stays below 100%.
This means that, for a root_domain comprising M CPUs, -deadline tasks
can be created until the sum of their bandwidths stays below:
M * (sched_dl_runtime_us / sched_dl_period_us)
It is also possible to disable this bandwidth management logic, and
thus be free to oversubscribe the system up to any arbitrary level.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-12-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:45 +01:00
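As a sketch of the admission arithmetic described above (hedged: the
helper below is illustrative only, cf. DL_SCALE and __dl_overflow()
later in this file):

/*
 * Illustrative sketch: express runtime/period as a fixed-point fraction
 * (here scaled by 2^20 == 100%), so that a root_domain with M CPUs can
 * admit -deadline tasks while the sum of their fractions stays below M
 * times the configured maximum.
 */
static inline u64 dl_to_ratio_sketch(u64 period_ns, u64 runtime_ns)
{
	return div64_u64(runtime_ns << 20, period_ns);
}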
|
|
|
/*
|
|
|
|
* Single value that decides SCHED_DEADLINE internal math precision.
|
|
|
|
* 10 -> just above 1us
|
|
|
|
* 9 -> just above 0.5us
|
|
|
|
*/
|
|
|
|
#define DL_SCALE (10)
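/* I.e. with DL_SCALE == 10, values are handled at a granularity of
 * 1 << 10 == 1024ns, just above 1us, matching the comment above. */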
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* These are the 'tuning knobs' of the scheduler:
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Single value that denotes runtime == period, i.e., unlimited time.
|
|
|
|
*/
|
|
|
|
#define RUNTIME_INF ((u64)~0ULL)
|
|
|
|
|
2015-09-09 17:00:41 +02:00
|
|
|
static inline int idle_policy(int policy)
|
|
|
|
{
|
|
|
|
return policy == SCHED_IDLE;
|
|
|
|
}
|
sched: Add new scheduler syscalls to support an extended scheduling parameters ABI
Add the syscalls needed for supporting scheduling algorithms
with extended scheduling parameters (e.g., SCHED_DEADLINE).
In general, it makes it possible to specify a periodic/sporadic task
that executes for a given amount of runtime at each instance, and is
scheduled according to the urgency of its own timing constraints,
i.e.:
- a (maximum/typical) instance execution time,
- a minimum interval between consecutive instances,
- a time constraint by which each instance must be completed.
Thus, both the data structure that holds the scheduling parameters of
the tasks and the system calls dealing with it must be extended.
Unfortunately, modifying the existing struct sched_param would break
the ABI and result in potentially serious compatibility issues with
legacy binaries.
For these reasons, this patch:
- defines the new struct sched_attr, containing all the fields
that are necessary for specifying a task in the computational
model described above;
- defines and implements the new scheduling related syscalls that
manipulate it, i.e., sched_setattr() and sched_getattr().
Syscalls are introduced for x86 (32 and 64 bits) and ARM only, as a
proof of concept and for developing and testing purposes. Making them
available on other architectures is straightforward.
Since no "user" for these new parameters is introduced in this patch,
the implementation of the new system calls is just identical to that of
their already existing counterparts. Future patches that implement
scheduling policies able to exploit the new data structure must also
take care of modifying the sched_*attr() calls in accordance with their
own purposes.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
[ Rewrote to use sched_attr. ]
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
[ Removed sched_setscheduler2() for now. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-3-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:36 +01:00
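A minimal userspace sketch of the new syscall (hedged: the header
locations of struct sched_attr and SCHED_DEADLINE vary across
kernel/libc versions, and glibc provides no wrapper, hence raw
syscall(2)):

/* Make the calling thread SCHED_DEADLINE: 10ms runtime every 100ms. */
#include <linux/sched.h>
#include <sys/syscall.h>
#include <unistd.h>

static int set_deadline_self(void)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	= 10 * 1000 * 1000,	/*  10 ms */
		.sched_deadline	= 100 * 1000 * 1000,	/* 100 ms */
		.sched_period	= 100 * 1000 * 1000,	/* 100 ms */
	};

	return syscall(__NR_sched_setattr, 0 /* self */, &attr, 0 /* flags */);
}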
|
|
|
static inline int fair_policy(int policy)
|
|
|
|
{
|
|
|
|
return policy == SCHED_NORMAL || policy == SCHED_BATCH;
|
|
|
|
}
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
static inline int rt_policy(int policy)
|
|
|
|
{
|
sched: Add new scheduler syscalls to support an extended scheduling parameters ABI
2013-11-07 14:43:36 +01:00
|
|
|
return policy == SCHED_FIFO || policy == SCHED_RR;
|
2011-10-25 10:00:11 +02:00
|
|
|
}
|
|
|
|
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
2013-11-28 11:14:43 +01:00
|
|
|
static inline int dl_policy(int policy)
|
|
|
|
{
|
|
|
|
return policy == SCHED_DEADLINE;
|
|
|
|
}
|
2015-09-09 17:00:41 +02:00
|
|
|
static inline bool valid_policy(int policy)
|
|
|
|
{
|
|
|
|
return idle_policy(policy) || fair_policy(policy) ||
|
|
|
|
rt_policy(policy) || dl_policy(policy);
|
|
|
|
}
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
2013-11-28 11:14:43 +01:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
static inline int task_has_rt_policy(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return rt_policy(p->policy);
|
|
|
|
}
|
|
|
|
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
2013-11-28 11:14:43 +01:00
|
|
|
static inline int task_has_dl_policy(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return dl_policy(p->policy);
|
|
|
|
}
|
|
|
|
|
sched/deadline: Add SCHED_DEADLINE inheritance logic
Some method to deal with rt-mutexes and make sched_dl interact with
the current PI code is needed, raising far from trivial issues that
need (according to us) to be solved with some restructuring of
the pi-code (i.e., going toward a proxy-execution-ish implementation).
This is under development; in the meanwhile, as a temporary solution,
what this commit does is:
- ensure a pi-lock owner with waiters is never throttled down. Instead,
when it runs out of runtime, it immediately gets replenished and its
deadline is postponed;
- the scheduling parameters (relative deadline and default runtime)
used for those replenishments --during the whole period it holds the
pi-lock-- are the ones of the waiting task with the earliest deadline.
Acting this way, we provide some kind of boosting to the lock owner,
still by using the existing (actually, slightly modified by the previous
commit) pi-architecture.
We stress the fact that this is a surely needed but far from
clean solution to the problem. In the end it's only a way to re-start
discussion within the community. So, as always, comments, ideas, rants,
etc. are welcome! :-)
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
[ Added !RT_MUTEXES build fix. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-11-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:44 +01:00
|
|
|
/*
|
|
|
|
* Tells if entity @a should preempt entity @b.
|
|
|
|
*/
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
2013-11-07 14:43:45 +01:00
|
|
|
static inline bool
|
|
|
|
dl_entity_preempt(struct sched_dl_entity *a, struct sched_dl_entity *b)
|
sched/deadline: Add SCHED_DEADLINE inheritance logic
2013-11-07 14:43:44 +01:00
|
|
|
{
|
|
|
|
return dl_time_before(a->deadline, b->deadline);
|
|
|
|
}
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* This is the priority-queue data structure of the RT scheduling class:
|
|
|
|
*/
|
|
|
|
struct rt_prio_array {
|
|
|
|
DECLARE_BITMAP(bitmap, MAX_RT_PRIO+1); /* include 1 bit for delimiter */
|
|
|
|
struct list_head queue[MAX_RT_PRIO];
|
|
|
|
};
|
|
|
|
|
|
|
|
struct rt_bandwidth {
|
|
|
|
/* nests inside the rq lock: */
|
|
|
|
raw_spinlock_t rt_runtime_lock;
|
|
|
|
ktime_t rt_period;
|
|
|
|
u64 rt_runtime;
|
|
|
|
struct hrtimer rt_period_timer;
|
sched,perf: Fix periodic timers
In the below two commits (see Fixes) we have periodic timers that can
stop themselves when they're no longer required, but need to be
(re)-started when their idle condition changes.
A further complication is that we want the timer handler to always do
the forward such that it will always correctly deal with the overruns,
and we do not want to race such that the handler has already decided
to stop, but the (external) restart sees the timer still active and we
end up with a 'lost' timer.
The problem with the current code is that the re-start can come before
the callback does the forward, at which point the forward from the
callback will WARN about forwarding an enqueued timer.
Now, conceptually it's easy to detect if you're before or after the fwd
by comparing the expiration time against the current time. Of course,
that's expensive (and racy) because we don't have the current time.
Alternatively one could cache this state inside the timer, but then
everybody pays the overhead of maintaining this extra state, and that
is undesired.
The only other option that I could see is the external timer_active
variable, which I tried to kill before. I would love a nicer interface
for this seemingly simple 'problem' but alas.
Fixes: 272325c4821f ("perf: Fix mux_interval hrtimer wreckage")
Fixes: 77a4d1a1b9a1 ("sched: Cleanup bandwidth timers")
Cc: pjt@google.com
Cc: tglx@linutronix.de
Cc: klamm@yandex-team.ru
Cc: mingo@kernel.org
Cc: bsegall@google.com
Cc: hpa@zytor.com
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150514102311.GX21418@twins.programming.kicks-ass.net
2015-05-14 12:23:11 +02:00
|
|
|
unsigned int rt_period_active;
|
2011-10-25 10:00:11 +02:00
|
|
|
};
|
2014-09-19 10:22:39 +01:00
|
|
|
|
|
|
|
void __dl_clear_params(struct task_struct *p);
|
|
|
|
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
2013-11-07 14:43:45 +01:00
|
|
|
/*
|
|
|
|
* To keep the bandwidth of -deadline tasks and groups under control
|
|
|
|
* we need some place where:
|
|
|
|
* - store the maximum -deadline bandwidth of the system (the group);
|
|
|
|
* - cache the fraction of that bandwidth that is currently allocated.
|
|
|
|
*
|
|
|
|
* This is all done in the data structure below. It is similar to the
|
|
|
|
* one used for RT-throttling (rt_bandwidth), with the main difference
|
|
|
|
* that, since here we are only interested in admission control, we
|
|
|
|
* do not decrease any runtime while the group "executes", nor do we
|
|
|
|
* need a timer to replenish it.
|
|
|
|
*
|
|
|
|
* With respect to SMP, the bandwidth is given on a per-CPU basis,
|
|
|
|
* meaning that:
|
|
|
|
* - dl_bw (< 100%) is the bandwidth of the system (group) on each CPU;
|
|
|
|
* - the dl_total_bw array contains, in the i-th element, the currently
|
|
|
|
* allocated bandwidth on the i-th CPU.
|
|
|
|
* Moreover, groups consume bandwidth on each CPU, while tasks only
|
|
|
|
* consume bandwidth on the CPU they're running on.
|
|
|
|
* Finally, dl_total_bw_cpu is used to cache the index of dl_total_bw
|
|
|
|
* that will be shown the next time the proc or cgroup controls are
|
|
|
|
* read. It, in turn, can be changed by writing to its own
|
|
|
|
* control.
|
|
|
|
*/
|
|
|
|
struct dl_bandwidth {
|
|
|
|
raw_spinlock_t dl_runtime_lock;
|
|
|
|
u64 dl_runtime;
|
|
|
|
u64 dl_period;
|
|
|
|
};
|
|
|
|
|
|
|
|
static inline int dl_bandwidth_enabled(void)
|
|
|
|
{
|
2013-12-17 12:44:49 +01:00
|
|
|
return sysctl_sched_rt_runtime >= 0;
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
2013-11-07 14:43:45 +01:00
|
|
|
}
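/* Note: as described in the bandwidth-management changelog above,
 * -deadline admission control is coupled with the RT throttling knob;
 * writing -1 to /proc/sys/kernel/sched_rt_runtime_us disables both. */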
|
|
|
|
|
|
|
|
extern struct dl_bw *dl_bw_of(int i);
|
|
|
|
|
|
|
|
struct dl_bw {
|
|
|
|
raw_spinlock_t lock;
|
|
|
|
u64 bw, total_bw;
|
|
|
|
};
|
|
|
|
|
2014-09-19 10:22:40 +01:00
|
|
|
static inline
|
|
|
|
void __dl_clear(struct dl_bw *dl_b, u64 tsk_bw)
|
|
|
|
{
|
|
|
|
dl_b->total_bw -= tsk_bw;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline
|
|
|
|
void __dl_add(struct dl_bw *dl_b, u64 tsk_bw)
|
|
|
|
{
|
|
|
|
dl_b->total_bw += tsk_bw;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline
|
|
|
|
bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
|
|
|
|
{
|
|
|
|
return dl_b->bw != -1 &&
|
|
|
|
dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
|
|
|
|
}
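/* I.e. the check is disabled when dl_b->bw == -1 (unlimited); otherwise,
 * replacing old_bw with new_bw must keep the total allocated bandwidth
 * within cpus * dl_b->bw. */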
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
extern struct mutex sched_domains_mutex;
|
|
|
|
|
|
|
|
#ifdef CONFIG_CGROUP_SCHED
|
|
|
|
|
|
|
|
#include <linux/cgroup.h>
|
|
|
|
|
|
|
|
struct cfs_rq;
|
|
|
|
struct rt_rq;
|
|
|
|
|
2012-08-07 05:00:13 +02:00
|
|
|
extern struct list_head task_groups;
|
2011-10-25 10:00:11 +02:00
|
|
|
|
|
|
|
struct cfs_bandwidth {
|
|
|
|
#ifdef CONFIG_CFS_BANDWIDTH
|
|
|
|
raw_spinlock_t lock;
|
|
|
|
ktime_t period;
|
|
|
|
u64 quota, runtime;
|
2014-09-20 21:24:36 -04:00
|
|
|
s64 hierarchical_quota;
|
2011-10-25 10:00:11 +02:00
|
|
|
u64 runtime_expires;
|
|
|
|
|
sched,perf: Fix periodic timers
2015-05-14 12:23:11 +02:00
|
|
|
int idle, period_active;
|
2011-10-25 10:00:11 +02:00
|
|
|
struct hrtimer period_timer, slack_timer;
|
|
|
|
struct list_head throttled_cfs_rq;
|
|
|
|
|
|
|
|
/* statistics */
|
|
|
|
int nr_periods, nr_throttled;
|
|
|
|
u64 throttled_time;
|
|
|
|
#endif
|
|
|
|
};
|
|
|
|
|
|
|
|
/* task group related information */
|
|
|
|
struct task_group {
|
|
|
|
struct cgroup_subsys_state css;
|
|
|
|
|
2015-02-06 18:05:53 +05:30
|
|
|
#ifdef CONFIG_SCHED_HMP
|
|
|
|
bool upmigrate_discouraged;
|
|
|
|
#endif
|
2013-03-11 16:33:42 -07:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
|
|
|
/* schedulable entities of this group on each cpu */
|
|
|
|
struct sched_entity **se;
|
|
|
|
/* runqueue "owned" by this group on each cpu */
|
|
|
|
struct cfs_rq **cfs_rq;
|
|
|
|
unsigned long shares;
|
|
|
|
|
2013-06-20 10:18:46 +08:00
|
|
|
#ifdef CONFIG_SMP
|
2013-06-20 10:18:54 +08:00
|
|
|
atomic_long_t load_avg;
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
2013-06-20 10:18:46 +08:00
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
|
|
struct sched_rt_entity **rt_se;
|
|
|
|
struct rt_rq **rt_rq;
|
|
|
|
|
|
|
|
struct rt_bandwidth rt_bandwidth;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
struct rcu_head rcu;
|
|
|
|
struct list_head list;
|
|
|
|
|
|
|
|
struct task_group *parent;
|
|
|
|
struct list_head siblings;
|
|
|
|
struct list_head children;
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_AUTOGROUP
|
|
|
|
struct autogroup *autogroup;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
struct cfs_bandwidth cfs_bandwidth;
|
|
|
|
};
|
|
|
|
|
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
|
|
|
#define ROOT_TASK_GROUP_LOAD NICE_0_LOAD
|
|
|
|
|
|
|
|
/*
|
|
|
|
* A weight of 0 or 1 can cause arithmetic problems.
|
|
|
|
* The weight of a cfs_rq is the sum of the weights of the entities
|
|
|
|
* queued on this cfs_rq, so the weight of an entity should not be
|
|
|
|
* too large, and neither should the shares value of a task group.
|
|
|
|
* (The default weight is 1024 - so there's no practical
|
|
|
|
* limitation from this.)
|
|
|
|
*/
|
|
|
|
#define MIN_SHARES (1UL << 1)
|
|
|
|
#define MAX_SHARES (1UL << 18)
|
|
|
|
#endif
|
|
|
|
|
|
|
|
typedef int (*tg_visitor)(struct task_group *, void *);
|
|
|
|
|
|
|
|
extern int walk_tg_tree_from(struct task_group *from,
|
|
|
|
tg_visitor down, tg_visitor up, void *data);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Iterate the full tree, calling @down when first entering a node and @up when
|
|
|
|
* leaving it for the final time.
|
|
|
|
*
|
|
|
|
* Caller must hold rcu_lock or sufficient equivalent.
|
|
|
|
*/
|
|
|
|
static inline int walk_tg_tree(tg_visitor down, tg_visitor up, void *data)
|
|
|
|
{
|
|
|
|
return walk_tg_tree_from(&root_task_group, down, up, data);
|
|
|
|
}
|
|
|
|
|
|
|
|
extern int tg_nop(struct task_group *tg, void *data);
|
|
|
|
|
|
|
|
extern void free_fair_sched_group(struct task_group *tg);
|
|
|
|
extern int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent);
|
2016-01-21 22:24:16 +01:00
|
|
|
extern void unregister_fair_sched_group(struct task_group *tg);
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
|
|
|
|
struct sched_entity *se, int cpu,
|
|
|
|
struct sched_entity *parent);
|
|
|
|
extern void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
|
|
|
|
extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
|
|
|
|
|
|
|
|
extern void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b);
|
sched: Cleanup bandwidth timers
Roman reported a 3 cpu lockup scenario involving __start_cfs_bandwidth().
The more I look at that code the more I'm convinced it's crack; that
entire __start_cfs_bandwidth() thing is brain melting, we don't need to
cancel a timer before starting it, *hrtimer_start*() will happily remove
the timer for you if it's still enqueued.
Removing that, removes a big part of the problem, no more ugly cancel
loop to get stuck in.
So now, if I understand things right, the entire reason you have this
cfs_b->lock guarded ->timer_active nonsense is to make sure we don't
accidentally lose the timer.
It appears to me that it should be possible to guarantee the same by
unconditionally (re)starting the timer when !queued. Because regardless
of what hrtimer::function returns, if we beat it to (re)enqueue the
timer, it doesn't matter.
Now, because hrtimers don't come with any serialization guarantees we
must ensure both handler and (re)start loop serialize their access to
the hrtimer to avoid both trying to forward the timer at the same
time.
Update the rt bandwidth timer to match.
This effectively reverts: 09dc4ab03936 ("sched/fair: Fix
tg_set_cfs_bandwidth() deadlock on rq->lock").
Reported-by: Roman Gushchin <klamm@yandex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20150415095011.804589208@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-04-15 11:41:57 +02:00
|
|
|
extern void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b);
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void unthrottle_cfs_rq(struct cfs_rq *cfs_rq);
|
|
|
|
|
|
|
|
extern void free_rt_sched_group(struct task_group *tg);
|
|
|
|
extern int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent);
|
|
|
|
extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
|
|
|
|
struct sched_rt_entity *rt_se, int cpu,
|
|
|
|
struct sched_rt_entity *parent);
|
|
|
|
|
2013-03-05 16:07:33 +08:00
|
|
|
extern struct task_group *sched_create_group(struct task_group *parent);
|
|
|
|
extern void sched_online_group(struct task_group *tg,
|
|
|
|
struct task_group *parent);
|
|
|
|
extern void sched_destroy_group(struct task_group *tg);
|
|
|
|
extern void sched_offline_group(struct task_group *tg);
|
|
|
|
|
|
|
|
extern void sched_move_task(struct task_struct *tsk);
|
|
|
|
|
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
|
|
|
extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
|
2017-05-30 14:51:53 +01:00
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
extern void set_task_rq_fair(struct sched_entity *se,
|
|
|
|
struct cfs_rq *prev, struct cfs_rq *next);
|
|
|
|
#else /* !CONFIG_SMP */
|
|
|
|
static inline void set_task_rq_fair(struct sched_entity *se,
|
|
|
|
struct cfs_rq *prev, struct cfs_rq *next) { }
|
|
|
|
#endif /* CONFIG_SMP */
|
|
|
|
#endif /* CONFIG_FAIR_GROUP_SCHED */
|
2013-03-05 16:07:33 +08:00
|
|
|
|
2016-08-01 17:48:21 -07:00
|
|
|
extern struct task_group *css_tg(struct cgroup_subsys_state *css);
|
2011-10-25 10:00:11 +02:00
|
|
|
#else /* CONFIG_CGROUP_SCHED */
|
|
|
|
|
|
|
|
struct cfs_bandwidth { };
|
|
|
|
|
|
|
|
#endif /* CONFIG_CGROUP_SCHED */
|
|
|
|
|
2015-01-16 11:27:31 +05:30
|
|
|
#ifdef CONFIG_SCHED_HMP
|
|
|
|
|
sched: Add the mechanics of top task tracking for frequency guidance
The previous patches in this rewrite of scheduler-guided frequency
selection reintroduce the partial-picture problem that we addressed in
our initial implementation: when tasks migrate across CPUs within a
cluster, we end up losing the complete picture of the sequential nature
of the workload.
This patch aims to solve that problem slightly differently. We track
the top task on every CPU within a window. Top task is defined as the
task that runs the most in a given window. This enhances our ability
to detect the sequential nature of workloads. A single migrating task
executing for an entire window will cause 100% load to be reported
for frequency guidance instead of the maximum footprint left on any
individual CPU in the task's trail. There are cases that this new
approach does not address, namely cases where the sum of two or more
tasks accurately reflects the true sequential nature of the workload.
Future optimizations might aim to tackle that problem.
To track top tasks, we first realize that there is no strict need to
maintain the task struct itself as long as we know the load exerted by
the top task. We also realize that to maintain top tasks on every CPU
we have to track the execution of every single task that runs during
the window. The load associated with a task needs to be migrated when
the task migrates from one CPU to another. When the top task migrates
away, we need to locate the second top task and so on.
Given the above realizations, we use hashmaps to track top task load
both for the current and the previous window. This hashmap is
implemented as an array of fixed size. The key of the hashmap is given
by task_execution_time_in_a_window / array_size. The size of the array
(number of buckets in the hashmap) dictates the load granularity of each
bucket. The value stored in each bucket is a refcount of all the tasks
that executed long enough to be in that bucket.
This approach has a few benefits. Firstly, any top task stats update
now takes O(1) time. While task migration is also O(1), it does still
involve going through up to the size of the array to find the second
top task. Further patches will aim to optimize this behavior. Secondly,
and more importantly, not having to store the task struct itself saves
a lot of memory usage in that 1) there is no need to retrieve task
structs later causing cache misses and 2) we don't have to unnecessarily
hold up task memory for up to 2 full windows by calling get_task_struct()
after a task exits.
Change-Id: I004dba474f41590db7d3f40d9deafe86e71359ac
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-05-31 16:40:45 -07:00
|
|
|
#define NUM_TRACKED_WINDOWS 2
|
|
|
|
#define NUM_LOAD_INDICES 1000
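One plausible reading of the bucket selection described in the changelog
above (a hedged sketch; the actual granularity derives from the window
size, and the helper name is hypothetical):

/* Map a task's execution time within a window to one of the
 * NUM_LOAD_INDICES refcounted buckets. */
static inline int load_to_index_sketch(u64 exec_time_ns, u64 window_size_ns)
{
	u64 idx = div64_u64(exec_time_ns * NUM_LOAD_INDICES, window_size_ns);

	return idx < NUM_LOAD_INDICES ? (int)idx : NUM_LOAD_INDICES - 1;
}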
|
sched: Enhance the scheduler migration load fixup feature
In the current frequency guidance implementation the scheduler migrates
task load from the source CPU to the destination CPU when a task migrates.
The underlying assumption is that a task will stay on the destination CPU
following the migration. Hence a CPU's load should reflect the sum of
all tasks that last ran on that CPU prior to window expiration even if
these tasks executed on some other CPU in that window prior to being
migrated.
However, given the ubiquitous nature of migrations, the above assumption
is flawed, causing the scheduler to often add up load on a single CPU
that in reality ran concurrently on multiple CPUs and will continue to
run concurrently in subsequent windows. This leads to load over-reporting
on a single CPU, which in turn causes CPU frequency to be higher
than necessary.
This is the first patch in a series of patches that attempts to change
how load fixups are done upon migration to prevent load over-reporting.
In this patch, we stop doing migration fixups for intra-cluster
migrations. Inter-cluster migration fixups are still retained.
In order to achieve the above, we make use of the per-CPU footprint of each
task introduced in the previous patch. Upon inter cluster migration, we
go through every CPU in the source cluster to subtract the migrating
task's contribution to the busy time on each one of those CPUs. The sum
of the contributions is then added to the destination CPU allowing it
to ramp up to the appropriate frequency for that task.
Subtracting load from each of the source CPUs is not trivial, however,
as it would require all runqueue locks to be held. To get around this
we introduce a deferred load subtraction mechanism whereby subtracting
load from each of the source CPUs is deferred until an opportune moment.
This opportune moment is when the governor comes asking the scheduler
for load. At that time, all necessary runqueue locks are already held.
There are a few cases to consider when doing deferred subtraction. Since
we are not holding all runqueue locks other CPUs in the source cluster
can be in a different window than the source CPU where the task
is migrating from.
Case 1: Other CPU in the source cluster is in the same window
No special consideration
Case 2: Other CPU in the source cluster is ahead by 1 window
In this case, we will be doing redundant updates to the subtraction load
for the prev window. There is no way to avoid this redundant update,
though, without holding the rq lock.
Case 3: Other CPU in the source cluster is trailing by 1 window
In this case, we might end up overwriting old data for that CPU. But
this is not a problem as when the other CPU calls update_task_ravg()
it will move to the same window. This relies on maintaining
synchronized windows between CPUs, which is true today.
Finally, we must deal with frequency aggregation. When frequency
aggregation is in effect, there is little point in dealing with per
CPU footprint since the load of all related tasks has to be reported
on a single CPU. Therefore when a task enters a related group we clear
out all per CPU contributions and add it to the task CPU's cpu_time
struct. From that point onwards we stop managing per CPU contributions
upon inter cluster migrations since that work is redundant. Finally
when a task exits a related group we must walk every CPU and reset
all CPU contributions. We then set the task CPU contribution to the
respective curr/prev sum values and add that sum to the task CPU
rq runnable sum.
Change-Id: I1f8d596e6c930f3f6f00e24109ddbe8b121f8d6b
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-05-19 17:06:47 -07:00
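A minimal userspace sketch of the deferred subtraction mechanism described
above, modeled on the load_subtractions structure defined below; locking is
elided, and the exact field usage is an assumption for illustration:

#include <stdint.h>
#include <stdio.h>

#define NUM_TRACKED_WINDOWS 2

struct load_subtractions {
        uint64_t window_start;  /* window the pending subtractions belong to */
        uint64_t subs;          /* busy time to subtract from curr sums */
        uint64_t new_subs;      /* same, for new-task sums */
};

struct cpu_load {
        uint64_t curr_runnable_sum;
        uint64_t nt_curr_runnable_sum;
        struct load_subtractions ls[NUM_TRACKED_WINDOWS];
};

/* Migration path: record the departing task's footprint on a source
 * cluster CPU without taking that CPU's rq lock. */
static void defer_subtraction(struct cpu_load *cpu, int i, uint64_t ws,
                              uint64_t delta, uint64_t nt_delta)
{
        if (cpu->ls[i].window_start != ws) {
                /* Entry is from an older window; overwrite it (case 3). */
                cpu->ls[i].window_start = ws;
                cpu->ls[i].subs = 0;
                cpu->ls[i].new_subs = 0;
        }
        cpu->ls[i].subs += delta;
        cpu->ls[i].new_subs += nt_delta;
}

/* Governor path: the opportune moment. Apply and clear the pending
 * subtractions; in the kernel the rq locks are already held here. */
static uint64_t read_busy_time(struct cpu_load *cpu, int i)
{
        cpu->curr_runnable_sum -= cpu->ls[i].subs;
        cpu->nt_curr_runnable_sum -= cpu->ls[i].new_subs;
        cpu->ls[i].subs = 0;
        cpu->ls[i].new_subs = 0;
        return cpu->curr_runnable_sum;
}

int main(void)
{
        struct cpu_load cpu = { .curr_runnable_sum = 1000000 };

        defer_subtraction(&cpu, 0, 42, 300000, 0);      /* task migrated away */
        printf("busy after read: %llu\n",
               (unsigned long long)read_busy_time(&cpu, 0));
        return 0;
}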
|
|
|
|
2015-01-16 11:27:31 +05:30
|
|
|
struct hmp_sched_stats {
|
2015-06-19 12:28:24 -07:00
|
|
|
int nr_big_tasks;
|
2015-01-16 11:27:31 +05:30
|
|
|
u64 cumulative_runnable_avg;
|
2015-06-08 09:08:47 +05:30
|
|
|
u64 pred_demands_sum;
|
2015-01-16 11:27:31 +05:30
|
|
|
};
|
|
|
|
|
sched: Enhance the scheduler migration load fixup feature
2016-05-19 17:06:47 -07:00
|
|
|
struct load_subtractions {
|
|
|
|
u64 window_start;
|
|
|
|
u64 subs;
|
|
|
|
u64 new_subs;
|
|
|
|
};
|
|
|
|
|
2017-01-09 13:56:33 +05:30
|
|
|
struct group_cpu_time {
|
|
|
|
u64 curr_runnable_sum;
|
|
|
|
u64 prev_runnable_sum;
|
|
|
|
u64 nt_curr_runnable_sum;
|
|
|
|
u64 nt_prev_runnable_sum;
|
|
|
|
};
|
|
|
|
|
2015-04-20 12:35:48 +05:30
|
|
|
struct sched_cluster {
|
sched: Enhance the scheduler migration load fixup feature
2016-05-19 17:06:47 -07:00
|
|
|
raw_spinlock_t load_lock;
|
2015-04-20 12:35:48 +05:30
|
|
|
struct list_head list;
|
|
|
|
struct cpumask cpus;
|
|
|
|
int id;
|
|
|
|
int max_power_cost;
|
2015-12-14 14:23:24 +05:30
|
|
|
int min_power_cost;
|
2015-04-20 12:35:48 +05:30
|
|
|
int max_possible_capacity;
|
|
|
|
int capacity;
|
|
|
|
int efficiency; /* Differentiate cpus with different IPC capability */
|
|
|
|
int load_scale_factor;
|
2016-06-17 15:15:04 -07:00
|
|
|
unsigned int exec_scale_factor;
|
2015-04-20 12:35:48 +05:30
|
|
|
/*
|
2016-03-28 14:22:52 -07:00
|
|
|
* max_freq = user maximum
|
|
|
|
* max_mitigated_freq = thermal defined maximum
|
2015-04-20 12:35:48 +05:30
|
|
|
* max_possible_freq = maximum supported by hardware
|
|
|
|
*/
|
2016-03-28 14:22:52 -07:00
|
|
|
unsigned int cur_freq, max_freq, max_mitigated_freq, min_freq;
|
|
|
|
unsigned int max_possible_freq;
|
2015-04-20 12:35:48 +05:30
|
|
|
bool freq_init_done;
|
|
|
|
int dstate, dstate_wakeup_latency, dstate_wakeup_energy;
|
|
|
|
unsigned int static_cluster_pwr_cost;
|
2016-08-12 16:12:53 +05:30
|
|
|
int notifier_sent;
|
2017-01-04 15:56:51 -08:00
|
|
|
bool wake_up_idle;
|
2017-05-30 14:38:55 -07:00
|
|
|
atomic64_t last_cc_update;
|
|
|
|
atomic64_t cycles;
|
2015-04-20 12:35:48 +05:30
|
|
|
};
|
|
|
|
|
|
|
|
extern unsigned long all_cluster_ids[];
|
|
|
|
|
|
|
|
static inline int cluster_first_cpu(struct sched_cluster *cluster)
|
|
|
|
{
|
|
|
|
return cpumask_first(&cluster->cpus);
|
|
|
|
}
|
|
|
|
|
2015-04-24 15:44:31 +05:30
|
|
|
struct related_thread_group {
|
|
|
|
int id;
|
|
|
|
raw_spinlock_t lock;
|
|
|
|
struct list_head tasks;
|
|
|
|
struct list_head list;
|
|
|
|
struct sched_cluster *preferred_cluster;
|
|
|
|
struct rcu_head rcu;
|
|
|
|
u64 last_update;
|
|
|
|
};
|
|
|
|
|
2015-04-22 17:12:09 +05:30
|
|
|
extern struct list_head cluster_head;
|
|
|
|
extern int num_clusters;
|
|
|
|
extern struct sched_cluster *sched_cluster[NR_CPUS];
|
|
|
|
|
2016-04-28 15:22:12 -07:00
|
|
|
struct cpu_cycle {
|
|
|
|
u64 cycles;
|
|
|
|
u64 time;
|
|
|
|
};
|
|
|
|
|
2015-04-22 17:12:09 +05:30
|
|
|
#define for_each_sched_cluster(cluster) \
|
|
|
|
list_for_each_entry_rcu(cluster, &cluster_head, list)
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
#endif /* CONFIG_SCHED_HMP */
|
2015-01-16 11:27:31 +05:30
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/* CFS-related fields in a runqueue */
|
|
|
|
struct cfs_rq {
|
|
|
|
struct load_weight load;
|
2012-04-26 13:12:27 +02:00
|
|
|
unsigned int nr_running, h_nr_running;
|
2011-10-25 10:00:11 +02:00
|
|
|
|
|
|
|
u64 exec_clock;
|
|
|
|
u64 min_vruntime;
|
|
|
|
#ifndef CONFIG_64BIT
|
|
|
|
u64 min_vruntime_copy;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
struct rb_root tasks_timeline;
|
|
|
|
struct rb_node *rb_leftmost;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* 'curr' points to currently running entity on this cfs_rq.
|
|
|
|
* It is set to NULL otherwise (i.e., when none are currently running).
|
|
|
|
*/
|
|
|
|
struct sched_entity *curr, *next, *last, *skip;
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
|
|
|
unsigned int nr_spread_over;
|
|
|
|
#endif
|
|
|
|
|
2012-10-04 13:18:30 +02:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
sched/fair: Rewrite runnable load and utilization average tracking
The idea of runnable load average (let runnable time contribute to weight)
was proposed by Paul Turner and Ben Segall, and it is still followed by
this rewrite. This rewrite aims to solve the following issues:
1. cfs_rq's load average (namely runnable_load_avg and blocked_load_avg) is
updated at the granularity of one entity at a time, which results in the
cfs_rq's load average being stale or partially updated: at any time, only
one entity is up to date, while all other entities are effectively lagging
behind. This is undesirable.
To illustrate, if we have n runnable entities in the cfs_rq, as time
elapses, they certainly become outdated:
t0: cfs_rq { e1_old, e2_old, ..., en_old }
and when we update:
t1: update e1, then we have cfs_rq { e1_new, e2_old, ..., en_old }
t2: update e2, then we have cfs_rq { e1_old, e2_new, ..., en_old }
...
We solve this by combining all runnable entities' load averages together
in cfs_rq's avg, and update the cfs_rq's avg as a whole. This is based
on the fact that if we regard the update as a function, then:
w * update(e) = update(w * e) and
update(e1) + update(e2) = update(e1 + e2), then
w1 * update(e1) + w2 * update(e2) = update(w1 * e1 + w2 * e2)
therefore, by this rewrite, we have an entirely updated cfs_rq at the
time we update it:
t1: update cfs_rq { e1_new, e2_new, ..., en_new }
t2: update cfs_rq { e1_new, e2_new, ..., en_new }
...
2. cfs_rq's load average differs between the top rq->cfs_rq and other
task_groups' per-CPU cfs_rqs in whether or not blocked_load_average
contributes to the load.
The basic idea behind runnable load average (the same for utilization)
is that the blocked state is taken into account as opposed to only
accounting for the currently runnable state. Therefore, the average
should include both the runnable/running and blocked load averages.
This rewrite does that.
In addition, we also combine runnable/running and blocked averages
of all entities into the cfs_rq's average, and update it together at
once. This is based on the fact that:
update(runnable) + update(blocked) = update(runnable + blocked)
This significantly reduces the code as we don't need to separately
maintain/update runnable/running load and blocked load.
3. How task_group entities' share is calculated is complex and imprecise.
We reduce the complexity in this rewrite to allow a very simple rule:
the task_group's load_avg is aggregated from its per CPU cfs_rqs's
load_avgs. Then group entity's weight is simply proportional to its
own cfs_rq's load_avg / task_group's load_avg. To illustrate,
if a task_group has { cfs_rq1, cfs_rq2, ..., cfs_rqn }, then,
task_group_avg = cfs_rq1_avg + cfs_rq2_avg + ... + cfs_rqn_avg, then
cfs_rqx's entity's share = cfs_rqx_avg / task_group_avg * task_group's share
To sum up, this rewrite in principle is equivalent to the current one, but
fixes the issues described above. Turns out, it significantly reduces the
code complexity and hence increases clarity and efficiency. In addition,
the new averages are more smooth/continuous (no spurious spikes and valleys)
and updated more consistently and quickly to reflect the load dynamics.
As a result, we have less load tracking overhead, better performance,
and especially better power efficiency due to more balanced load.
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arjan@linux.intel.com
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: fengguang.wu@intel.com
Cc: len.brown@intel.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: rafael.j.wysocki@intel.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1436918682-4971-3-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-07-15 08:04:37 +08:00
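A toy C demonstration of the linearity argument above: because the periodic
decay-and-accumulate update is linear, updating the combined sum gives the
same result as summing the per-entity updates. The decay constant here is
illustrative; the kernel uses a fixed-point series with a ~32ms half-life.

#include <stdio.h>

/* One decay-and-accumulate step of a geometric load average. */
static double update(double sum, double decay, double contrib)
{
        return sum * decay + contrib;
}

int main(void)
{
        double decay = 0.978;           /* per-period decay, illustrative */
        double e1 = 300.0, e2 = 500.0;  /* two entities' running sums */
        double c1 = 10.0, c2 = 40.0;    /* their contributions this period */

        /* update(e1) + update(e2) == update(e1 + e2): the per-entity
         * averages can therefore be folded into one cfs_rq-wide sum. */
        printf("separate: %f\n", update(e1, decay, c1) + update(e2, decay, c2));
        printf("combined: %f\n", update(e1 + e2, decay, c1 + c2));
        return 0;
}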
|
|
|
* CFS load tracking
|
2012-10-04 13:18:30 +02:00
|
|
|
*/
|
sched/fair: Rewrite runnable load and utilization average tracking
2015-07-15 08:04:37 +08:00
|
|
|
struct sched_avg avg;
|
2015-07-15 08:04:41 +08:00
|
|
|
u64 runnable_load_sum;
|
|
|
|
unsigned long runnable_load_avg;
|
2012-10-04 13:18:30 +02:00
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
sched/fair: Rewrite runnable load and utilization average tracking
2015-07-15 08:04:37 +08:00
|
|
|
unsigned long tg_load_avg_contrib;
|
2016-11-08 10:53:45 +01:00
|
|
|
unsigned long propagate_avg;
|
sched/fair: Rewrite runnable load and utilization average tracking
2015-07-15 08:04:37 +08:00
|
|
|
#endif
|
|
|
|
atomic_long_t removed_load_avg, removed_util_avg;
|
|
|
|
#ifndef CONFIG_64BIT
|
|
|
|
u64 load_last_update_time_copy;
|
|
|
|
#endif
|
2012-10-04 13:18:31 +02:00
|
|
|
|
sched/fair: Rewrite runnable load and utilization average tracking
2015-07-15 08:04:37 +08:00
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
2012-10-04 13:18:31 +02:00
|
|
|
/*
|
|
|
|
* h_load = weight * f(tg)
|
|
|
|
*
|
|
|
|
* Where f(tg) is the recursive weight fraction assigned to
|
|
|
|
* this group.
|
|
|
|
*/
|
|
|
|
unsigned long h_load;
|
sched: Move h_load calculation to task_h_load()
The bad thing about update_h_load(), which computes hierarchical load
factor for task groups, is that it is called for each task group in the
system before every load balancer run, and since rebalance can be
triggered very often, this function can eat a lot of cpu time if
there are many cpu cgroups in the system.
Although the situation was improved significantly by commit a35b646
('sched, cgroup: Reduce rq->lock hold times for large cgroup
hierarchies'), the problem still can arise under some kinds of loads,
e.g. when cpus are switching from idle to busy and back very frequently.
For instance, when I start 1000 of processes that wake up every
millisecond on my 8 cpus host, 'top' and 'perf top' show:
Cpu(s): 17.8%us, 24.3%sy, 0.0%ni, 57.9%id, 0.0%wa, 0.0%hi, 0.0%si
Events: 243K cycles
7.57% [kernel] [k] __schedule
7.08% [kernel] [k] timerqueue_add
6.13% libc-2.12.so [.] usleep
Then if I create 10000 *idle* cpu cgroups (no processes in them), cpu
usage increases significantly although the 'wakers' are still executing
in the root cpu cgroup:
Cpu(s): 19.1%us, 48.7%sy, 0.0%ni, 31.6%id, 0.0%wa, 0.0%hi, 0.7%si
Events: 230K cycles
24.56% [kernel] [k] tg_load_down
5.76% [kernel] [k] __schedule
This happens because this particular kind of load triggers 'new idle'
rebalance very frequently, which requires calling update_h_load(),
which, in turn, calls tg_load_down() for every *idle* cpu cgroup even
though it is absolutely useless, because idle cpu cgroups have no tasks
to pull.
This patch tries to improve the situation by making h_load calculation
proceed only when h_load is really necessary. To achieve this, it
substitutes update_h_load() with update_cfs_rq_h_load(), which computes
h_load only for a given cfs_rq and all its ascendants, and makes the
load balancer call this function whenever it considers if a task should
be pulled, i.e. it moves h_load calculations directly to task_h_load().
For h_load of the same cfs_rq not to be updated multiple times (in case
several tasks in the same cgroup are considered during the same balance
run), the patch keeps the time of the last h_load update for each cfs_rq
and breaks calculation when it finds h_load to be up to date.
The benefit of it is that h_load is computed only for those cfs_rq's,
which really need it, in particular all idle task groups are skipped.
Although this, in fact, moves h_load calculation under rq lock, it
should not affect latency much, because the amount of work done under rq
lock while trying to pull tasks is limited by sched_nr_migrate.
After the patch applied with the setup described above (1000 wakers in
the root cgroup and 10000 idle cgroups), I get:
Cpu(s): 16.9%us, 24.8%sy, 0.0%ni, 58.4%id, 0.0%wa, 0.0%hi, 0.0%si
Events: 242K cycles
7.57% [kernel] [k] __schedule
6.70% [kernel] [k] timerqueue_add
5.93% libc-2.12.so [.] usleep
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1373896159-1278-1-git-send-email-vdavydov@parallels.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-07-15 17:49:19 +04:00
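A reduced sketch of the lazy h_load idea described above, using recursion in
place of the kernel's two-pass walk; the structure and field names beyond
h_load/last_h_load_update are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

struct grp {
        struct grp *parent;             /* NULL at the root cfs_rq */
        unsigned long load_avg;         /* this level's load */
        unsigned long h_load;           /* cached hierarchical load */
        uint64_t last_h_load_update;    /* when h_load was last computed */
};

/* h_load = weight * f(tg), computed on demand and memoized per "now"
 * (one balance run), so idle groups are never visited. */
static unsigned long task_h_load(struct grp *g, unsigned long weight,
                                 uint64_t now)
{
        if (g->last_h_load_update != now) {
                g->h_load = g->parent
                        ? task_h_load(g->parent, g->load_avg, now)
                        : g->load_avg;
                g->last_h_load_update = now;
        }
        return g->load_avg ? weight * g->h_load / g->load_avg : 0;
}

int main(void)
{
        struct grp root  = { .parent = NULL,  .load_avg = 2048 };
        struct grp child = { .parent = &root, .load_avg = 1024 };

        /* Child carries half the root load; a task that is half of the
         * child's load therefore contributes 512 at the root level. */
        printf("h_load: %lu\n", task_h_load(&child, 512, 1));
        printf("cached: %lu\n", task_h_load(&child, 512, 1));
        return 0;
}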
|
|
|
u64 last_h_load_update;
|
|
|
|
struct sched_entity *h_load_next;
|
|
|
|
#endif /* CONFIG_FAIR_GROUP_SCHED */
|
2012-10-04 13:18:31 +02:00
|
|
|
#endif /* CONFIG_SMP */
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
|
|
|
struct rq *rq; /* cpu runqueue to which this cfs_rq is attached */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* leaf cfs_rqs are those that hold tasks (lowest schedulable entity in
|
|
|
|
* a hierarchy). Non-leaf lrqs hold other higher schedulable entities
|
|
|
|
* (like users, containers etc.)
|
|
|
|
*
|
|
|
|
* leaf_cfs_rq_list ties together list of leaf cfs_rq's in a cpu. This
|
|
|
|
* list is used during load balance.
|
|
|
|
*/
|
|
|
|
int on_list;
|
|
|
|
struct list_head leaf_cfs_rq_list;
|
|
|
|
struct task_group *tg; /* group that "owns" this runqueue */
|
|
|
|
|
|
|
|
#ifdef CONFIG_CFS_BANDWIDTH
|
2015-01-16 13:57:02 +05:30
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_HMP
|
|
|
|
struct hmp_sched_stats hmp_stats;
|
|
|
|
#endif
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
int runtime_enabled;
|
|
|
|
u64 runtime_expires;
|
|
|
|
s64 runtime_remaining;
|
|
|
|
|
2012-10-04 13:18:31 +02:00
|
|
|
u64 throttled_clock, throttled_clock_task;
|
|
|
|
u64 throttled_clock_task_time;
|
2016-06-16 15:57:01 +03:00
|
|
|
int throttled, throttle_count, throttle_uptodate;
|
2011-10-25 10:00:11 +02:00
|
|
|
struct list_head throttled_list;
|
|
|
|
#endif /* CONFIG_CFS_BANDWIDTH */
|
|
|
|
#endif /* CONFIG_FAIR_GROUP_SCHED */
|
|
|
|
};
|
|
|
|
|
|
|
|
static inline int rt_bandwidth_enabled(void)
|
|
|
|
{
|
|
|
|
return sysctl_sched_rt_runtime >= 0;
|
|
|
|
}
|
|
|
|
|
sched/rt: Use IPI to trigger RT task push migration instead of pulling
When debugging the latencies on a 40 core box, where we hit 300 to
500 microsecond latencies, I found there was a huge contention on the
runqueue locks.
Investigating it further, running ftrace, I found that it was due to
the pulling of RT tasks.
The test that was run was the following:
cyclictest --numa -p95 -m -d0 -i100
This created a thread on each CPU, that would set its wakeup in iterations
of 100 microseconds. The -d0 means that all the threads had the same
interval (100us). Each thread sleeps for 100us and wakes up and measures
its latencies.
cyclictest is maintained at:
git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
What happened was another RT task would be scheduled on one of the CPUs
that was running our test, when the other CPU tests went to sleep and
scheduled idle. This caused the "pull" operation to execute on all
these CPUs. Each one of these saw the RT task that was overloaded on
the CPU of the test that was still running, and each one tried
to grab that task in a thundering herd way.
To grab the task, each thread would do a double rq lock grab, grabbing
its own lock as well as the rq of the overloaded CPU. As the sched
domains on this box was rather flat for its size, I saw up to 12 CPUs
block on this lock at once. This caused a ripple affect with the
rq locks especially since the taking was done via a double rq lock, which
means that several of the CPUs had their own rq locks held while trying
to take this rq lock. As these locks were blocked, any wakeups or load
balanceing on these CPUs would also block on these locks, and the wait
time escalated.
I've tried various methods to lessen the load, but things like an
atomic counter to only let one CPU grab the task won't work, because
the task may have a limited affinity, and we may pick the wrong
CPU to take that lock and do the pull, to only find out that the
CPU we picked isn't in the task's affinity.
Instead of doing the PULL, I now have the CPUs that want the pull to
send over an IPI to the overloaded CPU, and let that CPU pick what
CPU to push the task to. No more need to grab the rq lock, and the
push/pull algorithm still works fine.
With this patch, the latency dropped to just 150us over a 20 hour run.
Without the patch, the huge latencies would trigger in seconds.
I've created a new sched feature called RT_PUSH_IPI, which is enabled
by default.
When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
and having the pulling CPU do the work is implemented. When RT_PUSH_IPI
is enabled, the IPI is sent to the overloaded CPU to do a push.
To enable or disable this at run time:
# mount -t debugfs nodev /sys/kernel/debug
# echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
or
# echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features
Update: This original patch would send an IPI to all CPUs in the RT overload
list. But that could theoretically cause the reverse issue. That is, there
could be lots of overloaded RT queues and one CPU lowers its priority. It would
then send an IPI to all the overloaded RT queues and they could then all try
to grab the rq lock of the CPU lowering its priority, and then we have the
same problem.
The latest design sends out only one IPI to the first overloaded CPU. It tries to
push any tasks that it can, and then looks for the next overloaded CPU that can
push to the source CPU. The IPIs stop when all overloaded CPUs that have pushable
tasks that have priorities greater than the source CPU are covered. In case the
source CPU lowers its priority again, a flag is set to tell the IPI traversal to
restart with the first RT overloaded CPU after the source CPU.
Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joern Engel <joern@purestorage.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-18 14:49:46 -04:00
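An illustrative userspace model of the pull-vs-IPI decision described
above; the types and helpers are stand-ins, not kernel APIs:

#include <stdio.h>
#include <stdbool.h>

struct rq_s { int cpu; };

static bool rt_push_ipi = true;         /* the RT_PUSH_IPI sched feature */

static void send_push_ipi(struct rq_s *src)
{
        printf("IPI -> cpu%d: push an overloaded RT task yourself\n", src->cpu);
}

static void double_lock_and_pull(struct rq_s *this_rq, struct rq_s *src)
{
        printf("cpu%d: take rq%d + rq%d locks, pull the task\n",
               this_rq->cpu, this_rq->cpu, src->cpu);
}

static void rt_pull_sketch(struct rq_s *this_rq, struct rq_s *src)
{
        if (rt_push_ipi) {
                /* No remote rq lock taken, so no thundering herd. */
                send_push_ipi(src);
                return;
        }
        /* Legacy path: the puller grabs both rq locks itself. */
        double_lock_and_pull(this_rq, src);
}

int main(void)
{
        struct rq_s this_rq = { .cpu = 0 }, src_rq = { .cpu = 3 };

        rt_pull_sketch(&this_rq, &src_rq);
        return 0;
}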
|
|
|
/* RT IPI pull logic requires IRQ_WORK */
|
sched/rt: Simplify the IPI based RT balancing logic
commit 4bdced5c9a2922521e325896a7bbbf0132c94e56 upstream.
When a CPU lowers its priority (schedules out a high priority task for a
lower priority one), a check is made to see if any other CPU has overloaded
RT tasks (more than one). It checks the rto_mask to determine this and if so
it will request to pull one of those tasks to itself if the non running RT
task is of higher priority than the new priority of the next task to run on
the current CPU.
When we deal with large number of CPUs, the original pull logic suffered
from large lock contention on a single CPU run queue, which caused a huge
latency across all CPUs. This was caused by only having one CPU having
overloaded RT tasks and a bunch of other CPUs lowering their priority. To
solve this issue, commit:
b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
changed the way to request a pull. Instead of grabbing the lock of the
overloaded CPU's runqueue, it simply sent an IPI to that CPU to do the work.
Although the IPI logic worked very well in removing the large latency build
up, it still could suffer from a large number of IPIs being sent to a single
CPU. On a 80 CPU box, I measured over 200us of processing IPIs. Worse yet,
when I tested this on a 120 CPU box, with a stress test that had lots of
RT tasks scheduling on all CPUs, it actually triggered the hard lockup
detector! One CPU had so many IPIs sent to it, and due to the restart
mechanism that is triggered when the source run queue has a priority status
change, the CPU spent minutes! processing the IPIs.
Thinking about this further, I realized there's no reason for each run queue
to send its own IPI. As all CPUs with overloaded tasks must be scanned
regardless if there's one or many CPUs lowering their priority, because
there's no current way to find the CPU with the highest priority task that
can schedule to one of these CPUs, there really only needs to be one IPI
being sent around at a time.
This greatly simplifies the code!
The new approach is to have each root domain have its own irq work, as the
rto_mask is per root domain. The root domain has the following fields
attached to it:
rto_push_work  - the irq work to process each CPU set in rto_mask
rto_lock       - the lock to protect some of the other rto fields
rto_loop_start - an atomic that keeps contention down on rto_lock;
                 the first CPU scheduling in a lower priority task
                 is the one to kick off the process.
rto_loop_next  - an atomic that gets incremented for each CPU that
                 schedules in a lower priority task.
rto_loop       - a variable protected by rto_lock that is used to
                 compare against rto_loop_next
rto_cpu        - the cpu to send the next IPI to, also protected by
                 the rto_lock.
When a CPU schedules in a lower priority task and wants to make sure
overloaded CPUs know about it, it increments rto_loop_next. Then it
atomically sets rto_loop_start with a cmpxchg. If the old value is not "0",
then it is done, as another CPU is kicking off the IPI loop. If the old
value is "0", then it will take the rto_lock to synchronize with a possible
IPI being sent around to the overloaded CPUs.
If rto_cpu is greater than or equal to nr_cpu_ids, then there's either no
IPI being sent around, or one is about to finish. Then rto_cpu is set to the
first CPU in rto_mask and an IPI is sent to that CPU. If there's no CPUs set
in rto_mask, then there's nothing to be done.
When the CPU receives the IPI, it will first try to push any RT tasks that are
queued on the CPU but can't run because a higher priority RT task is
currently running on that CPU.
Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it
finds one, it simply sends an IPI to that CPU and the process continues.
If there's no more CPUs in the rto_mask, then rto_loop is compared with
rto_loop_next. If they match, everything is done and the process is over. If
they do not match, then a CPU scheduled in a lower priority task as the IPI
was being passed around, and the process needs to start again. The first CPU
in rto_mask is sent the IPI.
This change removes this duplication of work in the IPI logic, and greatly
lowers the latency caused by the IPIs. This removed the lockup happening on
the 120 CPU machine. It also simplifies the code tremendously. What else
could anyone ask for?
Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and
supplying me with the rto_start_trylock() and rto_start_unlock() helper
functions.
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott Wood <swood@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-06 14:05:04 -04:00
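A userspace model of the single-IPI chain described above, using the fields
named in the message; the atomics stand in for the rto_lock, and
rto_next_cpu() is an assumed helper name mirroring the described traversal:

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 8

static atomic_int rto_loop_start;       /* only one CPU kicks off the chain */
static atomic_int rto_loop_next;        /* bumped on every priority lowering */
static int rto_loop;                    /* rto_loop_next value last swept for */
static int rto_cpu = -1;                /* next overloaded CPU to IPI */
static int rto_mask[NR_CPUS] = { 0, 0, 1, 0, 1, 0, 0, 1 };

/* Next overloaded CPU after rto_cpu; restart the sweep if a priority
 * lowering happened while the IPI was being passed around. */
static int rto_next_cpu(void)
{
        for (;;) {
                int cpu, next;

                for (cpu = rto_cpu + 1; cpu < NR_CPUS; cpu++) {
                        if (rto_mask[cpu]) {
                                rto_cpu = cpu;
                                return cpu;
                        }
                }
                rto_cpu = -1;
                next = atomic_load(&rto_loop_next);
                if (rto_loop == next)
                        return -1;      /* nothing changed: chain is done */
                rto_loop = next;        /* something did: sweep once more */
        }
}

int main(void)
{
        int expected = 0, cpu;

        atomic_fetch_add(&rto_loop_next, 1);    /* this CPU lowered its prio */

        /* rto_start_trylock(): a non-zero old value means another CPU is
         * already running the chain, so there is nothing left to do. */
        if (!atomic_compare_exchange_strong(&rto_loop_start, &expected, 1))
                return 0;

        rto_loop = atomic_load(&rto_loop_next);
        for (cpu = rto_next_cpu(); cpu >= 0; cpu = rto_next_cpu())
                printf("IPI cpu%d: push, then pass the IPI along\n", cpu);

        atomic_store(&rto_loop_start, 0);       /* rto_start_unlock() */
        return 0;
}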
|
|
|
#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_SMP)
|
sched/rt: Use IPI to trigger RT task push migration instead of pulling
2015-03-18 14:49:46 -04:00
|
|
|
# define HAVE_RT_PUSH_IPI
|
|
|
|
#endif
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/* Real-Time classes' related field in a runqueue: */
|
|
|
|
struct rt_rq {
|
|
|
|
struct rt_prio_array active;
|
2012-04-26 13:12:27 +02:00
|
|
|
unsigned int rt_nr_running;
|
2011-10-25 10:00:11 +02:00
|
|
|
#if defined CONFIG_SMP || defined CONFIG_RT_GROUP_SCHED
|
|
|
|
struct {
|
|
|
|
int curr; /* highest queued rt task prio */
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
int next; /* next highest */
|
|
|
|
#endif
|
|
|
|
} highest_prio;
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
unsigned long rt_nr_migratory;
|
|
|
|
unsigned long rt_nr_total;
|
|
|
|
int overloaded;
|
|
|
|
struct plist_head pushable_tasks;
|
sched/rt: Use IPI to trigger RT task push migration instead of pulling
2015-03-18 14:49:46 -04:00
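To make the chained-IPI traversal described above concrete, here is a
minimal standalone C sketch (illustrative only, not the kernel code;
rto_mask, rto_next_cpu and NR_CPUS are stand-ins): a single IPI "token"
walks the overloaded CPUs one at a time instead of every CPU pulling
at once.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

static bool rto_mask[NR_CPUS];	/* CPUs with more than one runnable RT task */

/* Return the next overloaded CPU after @prev, or -1 when the walk ends. */
static int rto_next_cpu(int prev)
{
	for (int cpu = prev + 1; cpu < NR_CPUS; cpu++)
		if (rto_mask[cpu])
			return cpu;
	return -1;
}

int main(void)
{
	rto_mask[2] = rto_mask[5] = true;

	/* A source CPU lowered its priority: chain the IPIs, one at a time. */
	for (int cpu = rto_next_cpu(-1); cpu >= 0; cpu = rto_next_cpu(cpu))
		printf("send push IPI to CPU %d\n", cpu);
	return 0;
}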
|
|
|
#endif /* CONFIG_SMP */
|
2014-03-15 02:15:00 +04:00
|
|
|
int rt_queued;
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
int rt_throttled;
|
|
|
|
u64 rt_time;
|
|
|
|
u64 rt_runtime;
|
|
|
|
/* Nests inside the rq lock: */
|
|
|
|
raw_spinlock_t rt_runtime_lock;
|
|
|
|
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
|
|
unsigned long rt_nr_boosted;
|
|
|
|
|
|
|
|
struct rq *rq;
|
|
|
|
struct task_group *tg;
|
|
|
|
#endif
|
|
|
|
};
|
|
|
|
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
2013-11-28 11:14:43 +01:00
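As a compact standalone C sketch of the EDF/CBS rules this changelog
refers to (all names here are illustrative, not the kernel's): a task's
absolute deadline is its activation time plus its relative deadline,
and the smallest absolute deadline runs first.

#include <stdint.h>
#include <stdio.h>

struct dl_task_sketch {
	uint64_t runtime;	/* maximum execution time per instance */
	uint64_t rel_deadline;	/* relative deadline */
	uint64_t deadline;	/* absolute deadline, set on activation */
};

/* On activation: absolute deadline = activation time + relative deadline. */
static void dl_activate(struct dl_task_sketch *t, uint64_t now)
{
	t->deadline = now + t->rel_deadline;
}

/* EDF: the task with the smallest absolute deadline is executed first. */
static const struct dl_task_sketch *
edf_pick(const struct dl_task_sketch *a, const struct dl_task_sketch *b)
{
	return a->deadline <= b->deadline ? a : b;
}

int main(void)
{
	struct dl_task_sketch a = { 10, 100, 0 }, b = { 5, 40, 0 };

	dl_activate(&a, 1000);
	dl_activate(&b, 1000);
	printf("run the task with deadline %llu first\n",
	       (unsigned long long)edf_pick(&a, &b)->deadline);
	return 0;
}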
|
|
|
/* Deadline class' related fields in a runqueue */
|
|
|
|
struct dl_rq {
|
|
|
|
/* runqueue is an rbtree, ordered by deadline */
|
|
|
|
struct rb_root rb_root;
|
|
|
|
struct rb_node *rb_leftmost;
|
|
|
|
|
|
|
|
unsigned long dl_nr_running;
|
sched/deadline: Add SCHED_DEADLINE SMP-related data structures & logic
Introduces data structures relevant for implementing dynamic
migration of -deadline tasks and the logic for checking if
runqueues are overloaded with -deadline tasks and for choosing
where a task should migrate, when that is the case.
Also adds dynamic migrations to SCHED_DEADLINE, so that tasks can
be moved among CPUs when necessary. It is also possible to bind a
task to a (set of) CPU(s), thus restricting its capability of
migrating, or forbidding migrations at all.
The very same approach used in sched_rt is utilised:
- -deadline tasks are kept in CPU-specific runqueues,
- -deadline tasks are migrated among runqueues to achieve the
following:
* on an M-CPU system the M earliest deadline ready tasks
are always running;
* affinity/cpusets settings of all the -deadline tasks are
always respected.
Therefore, this very special form of "load balancing" is done with
an active method, i.e., the scheduler pushes or pulls tasks between
runqueues when they are woken up and/or (de)scheduled.
IOW, every time a preemption occurs, the descheduled task might be sent
to some other CPU (depending on its deadline) to continue executing
(push). On the other hand, every time a CPU becomes idle, it might pull
the second earliest deadline ready task from some other CPU.
To enforce this, a pull operation is always attempted before taking any
scheduling decision (pre_schedule()), as well as a push one after each
scheduling decision (post_schedule()). In addition, when a task arrives
or wakes up, the best CPU on which to resume it is selected taking into
account its affinity mask, the system topology and its deadline.
E.g., from the scheduling point of view, the best CPU on which to wake
up (and also to push) a task is the one which is running the task
with the latest deadline among the M executing ones.
In order to facilitate these decisions, per-runqueue "caching" of the
deadlines of the currently running and of the first ready task is used.
Queued but not running tasks are also parked in another rb-tree to
speed-up pushes.
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-5-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:38 +01:00
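The push/pull decisions described above ultimately reduce to comparing
64-bit absolute deadlines. A wrap-safe comparison helper, modelled on
the kernel's dl_time_before() but sketched here as standalone C:

#include <stdint.h>
#include <stdbool.h>

/*
 * Wrap-safe "a is earlier than b" for 64-bit absolute deadlines:
 * taking the signed difference handles clock wrap-around correctly.
 */
static inline bool dl_time_before(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}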
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* Deadline values of the currently executing and the
|
|
|
|
* earliest ready task on this rq. Caching these facilitates
|
|
|
|
* the decision whether or not a ready but not running task
|
|
|
|
* should migrate somewhere else.
|
|
|
|
*/
|
|
|
|
struct {
|
|
|
|
u64 curr;
|
|
|
|
u64 next;
|
|
|
|
} earliest_dl;
|
|
|
|
|
|
|
|
unsigned long dl_nr_migratory;
|
|
|
|
int overloaded;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Tasks on this rq that can be pushed away. They are kept in
|
|
|
|
* an rb-tree, ordered by tasks' deadlines, with caching
|
|
|
|
* of the leftmost (earliest deadline) element.
|
|
|
|
*/
|
|
|
|
struct rb_root pushable_dl_tasks_root;
|
|
|
|
struct rb_node *pushable_dl_tasks_leftmost;
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
In order for deadline scheduling to be effective and useful, it is
important to have some method of keeping the allocation of the
available CPU bandwidth to tasks and task groups under control.
This is usually called "admission control" and if it is not performed
at all, no guarantee can be given on the actual scheduling of the
-deadline tasks.
Since RT-throttling was introduced, each task group has had a
bandwidth associated with it, calculated as a certain amount of
runtime over a period. Moreover, to make it possible to manipulate
such bandwidth, readable/writable controls have been added to both
procfs (for system wide settings) and cgroupfs (for per-group
settings).
Therefore, the same interface is used for controlling the
bandwidth distribution to -deadline tasks and task groups, i.e.,
new controls with similar names, equivalent meaning and the
same usage paradigm are added.
However, more discussion is needed in order to figure out how
we want to manage SCHED_DEADLINE bandwidth at the task group level.
Therefore, this patch adds a less sophisticated, but actually
very sensible, mechanism to ensure that a certain utilization
cap is not exceeded in each root_domain (the single rq for !SMP
configurations).
Another main difference between deadline bandwidth management and
RT-throttling is that -deadline tasks have bandwidth on their own
(while -rt ones don't!), and thus we don't need a higher level
throttling mechanism to enforce the desired bandwidth.
This patch, therefore:
- adds system wide deadline bandwidth management by means of:
* /proc/sys/kernel/sched_dl_runtime_us,
* /proc/sys/kernel/sched_dl_period_us,
that determine (i.e., runtime / period) the total bandwidth
available on each CPU of each root_domain for -deadline tasks;
- couples the RT and deadline bandwidth management, i.e., enforces
that the sum of the bandwidth devoted to -rt and -deadline tasks
stays below 100%.
This means that, for a root_domain comprising M CPUs, -deadline tasks
can be created until the sum of their bandwidths stays below:
M * (sched_dl_runtime_us / sched_dl_period_us)
It is also possible to disable this bandwidth management logic, and
thus be free to oversubscribe the system up to any arbitrary level.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-12-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:45 +01:00
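A minimal standalone C sketch of the admission test described above,
assuming the M * (runtime / period) cap; the struct and function names
are made up for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dl_bw_sketch {
	double bw;		/* per-CPU cap: dl_runtime / dl_period */
	double total_bw;	/* sum of admitted task bandwidths */
};

/* Admit a (runtime, period) task only if the root_domain cap holds. */
static bool dl_admit(struct dl_bw_sketch *b, int ncpus,
		     uint64_t runtime_us, uint64_t period_us)
{
	double u = (double)runtime_us / (double)period_us;

	if (b->total_bw + u > b->bw * ncpus)
		return false;	/* would exceed M * (runtime / period) */
	b->total_bw += u;
	return true;
}

int main(void)
{
	struct dl_bw_sketch b = { .bw = 0.95, .total_bw = 0.0 };

	/* 4 CPUs: the cap is 3.8; a 30ms/100ms task adds 0.3. */
	printf("admitted: %d\n", dl_admit(&b, 4, 30000, 100000));
	return 0;
}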
|
|
|
#else
|
|
|
|
struct dl_bw dl_bw;
|
sched/deadline: Add SCHED_DEADLINE SMP-related data structures & logic
2013-11-07 14:43:38 +01:00
|
|
|
#endif
|
2015-11-03 10:39:01 +01:00
|
|
|
/* This is the "average utilization" for this runqueue */
|
|
|
|
s64 avg_bw;
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
2013-11-28 11:14:43 +01:00
|
|
|
};
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
|
2015-09-26 18:19:54 +01:00
|
|
|
struct max_cpu_capacity {
|
|
|
|
raw_spinlock_t lock;
|
|
|
|
unsigned long val;
|
|
|
|
int cpu;
|
|
|
|
};
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* We add the notion of a root-domain which will be used to define per-domain
|
|
|
|
* variables. Each exclusive cpuset essentially defines an island domain by
|
|
|
|
* fully partitioning the member cpus from any other cpuset. Whenever a new
|
|
|
|
* exclusive cpuset is created, we also create and attach a new root-domain
|
|
|
|
* object.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
struct root_domain {
|
|
|
|
atomic_t refcount;
|
|
|
|
atomic_t rto_count;
|
|
|
|
struct rcu_head rcu;
|
|
|
|
cpumask_var_t span;
|
|
|
|
cpumask_var_t online;
|
|
|
|
|
2014-06-23 12:16:49 -07:00
|
|
|
/* Indicate more than one runnable task for any CPU */
|
|
|
|
bool overload;
|
|
|
|
|
sched: Add over-utilization/tipping point indicator
Energy-aware scheduling is only meant to be active while the system is
_not_ over-utilized. That is, there are spare cycles available to shift
tasks around based on their actual utilization to get a more
energy-efficient task distribution without depriving any tasks. When
above the tipping point, task placement is done the traditional way based
on load_avg, spreading the tasks across as many cpus as possible based
on priority-scaled load to preserve smp_nice. Below the tipping point we
want to use util_avg instead. We need to define a criterion for when we
make the switch.
The util_avg for each cpu converges towards 100% (1024) regardless of
how many additional tasks we may put on it. If we define
over-utilized as:
sum_{cpus}(rq.cfs.avg.util_avg) + margin > sum_{cpus}(rq.capacity)
some individual cpus may be over-utilized running multiple tasks even
when the above condition is false. That should be okay as long as we try
to spread the tasks out to avoid per-cpu over-utilization as much as
possible and if all tasks have the _same_ priority. If the latter isn't
true, we have to consider priority to preserve smp_nice.
For example, we could have n_cpus nice=-10 util_avg=55% tasks and
n_cpus/2 nice=0 util_avg=60% tasks. Balancing based on util_avg we are
likely to end up with nice=-10 tasks sharing cpus and nice=0 tasks
getting their own, as we have 1.5*n_cpus tasks in total and 55%+55% is less
over-utilized than 55%+60% for those cpus that have to be shared. The
system utilization is only 85% of the system capacity, but we are
breaking smp_nice.
To be sure not to break smp_nice, we have defined over-utilization
conservatively as when any cpu in the system is fully utilized at its
highest frequency instead:
cpu_rq(any).cfs.avg.util_avg + margin > cpu_rq(any).capacity
IOW, as soon as one cpu is (nearly) 100% utilized, we switch to load_avg
to factor in priority and preserve smp_nice.
With this definition, we can skip periodic load-balance as no cpu has an
always-running task when the system is not over-utilized. All tasks will
be periodic and we can balance them at wake-up. This conservative
condition does however mean that some scenarios that could benefit from
energy-aware decisions even if one cpu is fully utilized would not get
those benefits.
For systems where some cpus might have reduced capacity (RT-pressure
and/or big.LITTLE), we want periodic load-balance checks as soon as just
a single cpu is fully utilized, as it might be one of those with reduced
capacity and in that case we want to migrate it.
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
2015-05-09 16:49:57 +01:00
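A standalone C sketch of this conservative tipping-point test; the
field and function names are illustrative stand-ins for the kernel's
per-cpu signals:

#include <stdbool.h>

struct cpu_sketch {
	unsigned long util_avg;	/* cfs utilization of this cpu */
	unsigned long capacity;	/* capacity at the highest frequency */
};

/* Over-utilized as soon as any one cpu is (nearly) fully utilized. */
static bool system_overutilized(const struct cpu_sketch *cpus, int n,
				unsigned long margin)
{
	for (int i = 0; i < n; i++)
		if (cpus[i].util_avg + margin > cpus[i].capacity)
			return true;	/* one cpu past the tipping point */
	return false;
}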
|
|
|
/* Indicate one or more cpus over-utilized (tipping point) */
|
|
|
|
bool overutilized;
|
|
|
|
|
sched/deadline: Add SCHED_DEADLINE SMP-related data structures & logic
2013-11-07 14:43:38 +01:00
|
|
|
/*
|
|
|
|
* The bit corresponding to a CPU gets set here if that CPU has more
|
|
|
|
* than one runnable -deadline task (as it is below for RT tasks).
|
|
|
|
*/
|
|
|
|
cpumask_var_t dlo_mask;
|
|
|
|
atomic_t dlo_count;
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
2013-11-07 14:43:45 +01:00
|
|
|
struct dl_bw dl_bw;
|
2013-11-07 14:43:47 +01:00
|
|
|
struct cpudl cpudl;
|
sched/deadline: Add SCHED_DEADLINE SMP-related data structures & logic
2013-11-07 14:43:38 +01:00
|
|
|
|
sched/rt: Simplify the IPI based RT balancing logic
commit 4bdced5c9a2922521e325896a7bbbf0132c94e56 upstream.
When a CPU lowers its priority (schedules out a high priority task for a
lower priority one), a check is made to see if any other CPU has overloaded
RT tasks (more than one). It checks the rto_mask to determine this and if so
it will request to pull one of those tasks to itself if the non-running RT
task is of higher priority than the new priority of the next task to run on
the current CPU.
When we deal with a large number of CPUs, the original pull logic suffered
from large lock contention on a single CPU run queue, which caused a huge
latency across all CPUs. This was caused by only one CPU having
overloaded RT tasks and a bunch of other CPUs lowering their priority. To
solve this issue, commit:
b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
changed the way to request a pull. Instead of grabbing the lock of the
overloaded CPU's runqueue, it simply sent an IPI to that CPU to do the work.
Although the IPI logic worked very well in removing the large latency build
up, it still could suffer from a large number of IPIs being sent to a single
CPU. On an 80 CPU box, I measured over 200us of processing IPIs. Worse yet,
when I tested this on a 120 CPU box, with a stress test that had lots of
RT tasks scheduling on all CPUs, it actually triggered the hard lockup
detector! One CPU had so many IPIs sent to it, and due to the restart
mechanism that is triggered when the source run queue has a priority status
change, the CPU spent minutes! processing the IPIs.
Thinking about this further, I realized there's no reason for each run queue
to send its own IPI. As all CPUs with overloaded tasks must be scanned
regardless of whether one or many CPUs are lowering their priority, because
there's no current way to find the CPU with the highest priority task that
can schedule to one of these CPUs, there really only needs to be one IPI
being sent around at a time.
This greatly simplifies the code!
The new approach is to have each root domain have its own irq work, as the
rto_mask is per root domain. The root domain has the following fields
attached to it:
rto_push_work - the irq work to process each CPU set in rto_mask
rto_lock - the lock to protect some of the other rto fields
rto_loop_start - an atomic that keeps contention down on rto_lock;
the first CPU scheduling in a lower priority task
is the one to kick off the process.
rto_loop_next - an atomic that gets incremented for each CPU that
schedules in a lower priority task.
rto_loop - a variable protected by rto_lock that is used to
compare against rto_loop_next
rto_cpu - the cpu to send the next IPI to, also protected by
the rto_lock.
When a CPU schedules in a lower priority task and wants to make sure
overloaded CPUs know about it, it increments rto_loop_next. Then it
atomically sets rto_loop_start with a cmpxchg. If the old value is not "0",
then it is done, as another CPU is kicking off the IPI loop. If the old
value is "0", then it will take the rto_lock to synchronize with a possible
IPI being sent around to the overloaded CPUs.
If rto_cpu is greater than or equal to nr_cpu_ids, then there's either no
IPI being sent around, or one is about to finish. Then rto_cpu is set to the
first CPU in rto_mask and an IPI is sent to that CPU. If there are no CPUs set
in rto_mask, then there's nothing to be done.
When the CPU receives the IPI, it will first try to push any RT tasks that are
queued on the CPU but can't run because a higher priority RT task is
currently running on that CPU.
Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it
finds one, it simply sends an IPI to that CPU and the process continues.
If there are no more CPUs in the rto_mask, then rto_loop is compared with
rto_loop_next. If they match, everything is done and the process is over. If
they do not match, then a CPU scheduled in a lower priority task as the IPI
was being passed around, and the process needs to start again. The first CPU
in rto_mask is sent the IPI.
This change removes the duplication of work in the IPI logic, and greatly
lowers the latency caused by the IPIs. This removed the lockup happening on
the 120 CPU machine. It also simplifies the code tremendously. What else
could anyone ask for?
Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and
supplying me with the rto_start_trylock() and rto_start_unlock() helper
functions.
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott Wood <swood@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-06 14:05:04 -04:00
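A standalone C11 sketch of the rto_start gate described above, using
<stdatomic.h> in place of the kernel's atomic_t; this approximates the
helpers named in the changelog rather than reproducing the kernel code:

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Only the first CPU to flip the flag from 0 to 1 kicks off the IPI
 * loop; every other CPU backs off without ever touching rto_lock.
 */
static inline bool rto_start_trylock(atomic_int *v)
{
	int expected = 0;

	/* acquire on success pairs with the release in unlock */
	return atomic_compare_exchange_strong_explicit(
		v, &expected, 1,
		memory_order_acquire, memory_order_relaxed);
}

static inline void rto_start_unlock(atomic_int *v)
{
	atomic_store_explicit(v, 0, memory_order_release);
}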
|
|
|
#ifdef HAVE_RT_PUSH_IPI
|
|
|
|
/*
|
|
|
|
* For IPI pull requests, loop across the rto_mask.
|
|
|
|
*/
|
|
|
|
struct irq_work rto_push_work;
|
|
|
|
raw_spinlock_t rto_lock;
|
|
|
|
/* These are only updated and read within rto_lock */
|
|
|
|
int rto_loop;
|
|
|
|
int rto_cpu;
|
|
|
|
/* These atomics are updated outside of a lock */
|
|
|
|
atomic_t rto_loop_next;
|
|
|
|
atomic_t rto_loop_start;
|
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* The "RT overload" flag: it gets set if a CPU has more than
|
|
|
|
* one runnable RT task.
|
|
|
|
*/
|
|
|
|
cpumask_var_t rto_mask;
|
|
|
|
struct cpupri cpupri;
|
2015-05-07 18:46:15 +01:00
|
|
|
|
|
|
|
/* Maximum cpu capacity in the system. */
|
2015-09-26 18:19:54 +01:00
|
|
|
struct max_cpu_capacity max_cpu_capacity;
|
2017-01-08 16:16:59 +00:00
|
|
|
|
|
|
|
/* First cpu with maximum and minimum original capacity */
|
|
|
|
int max_cap_orig_cpu, min_cap_orig_cpu;
|
2011-10-25 10:00:11 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
extern struct root_domain def_root_domain;
|
2018-01-23 20:45:38 -05:00
|
|
|
extern void sched_get_rd(struct root_domain *rd);
|
|
|
|
extern void sched_put_rd(struct root_domain *rd);
|
2011-10-25 10:00:11 +02:00
|
|
|
|
sched/rt: Simplify the IPI based RT balancing logic
2017-10-06 14:05:04 -04:00
|
|
|
#ifdef HAVE_RT_PUSH_IPI
|
|
|
|
extern void rto_push_irq_work_func(struct irq_work *work);
|
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif /* CONFIG_SMP */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This is the main, per-CPU runqueue data structure.
|
|
|
|
*
|
|
|
|
* Locking rule: those places that want to lock multiple runqueues
|
|
|
|
* (such as the load balancing or the thread migration code), lock
|
|
|
|
* acquire operations must be ordered by ascending runqueue address.
|
|
|
|
*/
|
|
|
|
struct rq {
|
|
|
|
/* runqueue lock: */
|
|
|
|
raw_spinlock_t lock;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* nr_running and cpu_load should be in the same cacheline because
|
|
|
|
* remote CPUs use both these fields when doing load calculation.
|
|
|
|
*/
|
2012-04-26 13:12:27 +02:00
|
|
|
unsigned int nr_running;
|
2013-10-07 11:29:33 +01:00
|
|
|
#ifdef CONFIG_NUMA_BALANCING
|
|
|
|
unsigned int nr_numa_running;
|
|
|
|
unsigned int nr_preferred_running;
|
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
#define CPU_LOAD_IDX_MAX 5
|
|
|
|
unsigned long cpu_load[CPU_LOAD_IDX_MAX];
|
|
|
|
unsigned long last_load_update_tick;
|
2016-02-25 12:47:54 +00:00
|
|
|
unsigned int misfit_task;
|
2011-08-10 23:21:01 +02:00
|
|
|
#ifdef CONFIG_NO_HZ_COMMON
|
2011-10-25 10:00:11 +02:00
|
|
|
u64 nohz_stamp;
|
2011-12-01 17:07:32 -08:00
|
|
|
unsigned long nohz_flags;
|
2013-05-03 03:39:05 +02:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_NO_HZ_FULL
|
|
|
|
unsigned long last_sched_tick;
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
2013-04-22 14:39:18 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_CPU_QUIET
|
|
|
|
/* time-based average load */
|
|
|
|
u64 nr_last_stamp;
|
|
|
|
u64 nr_running_integral;
|
|
|
|
seqcount_t ave_seqcnt;
|
|
|
|
#endif
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/* capture load from *all* tasks on this cpu: */
|
|
|
|
struct load_weight load;
|
|
|
|
unsigned long nr_load_updates;
|
|
|
|
u64 nr_switches;
|
|
|
|
|
|
|
|
struct cfs_rq cfs;
|
|
|
|
struct rt_rq rt;
|
sched/deadline: Add SCHED_DEADLINE structures & implementation
2013-11-28 11:14:43 +01:00
|
|
|
struct dl_rq dl;
|
2011-10-25 10:00:11 +02:00
|
|
|
|
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
|
|
|
/* list of leaf cfs_rq on this cpu: */
|
|
|
|
struct list_head leaf_cfs_rq_list;
|
2016-11-08 10:53:43 +01:00
|
|
|
struct list_head *tmp_alone_branch;
|
2012-08-08 21:46:40 +02:00
|
|
|
#endif /* CONFIG_FAIR_GROUP_SCHED */
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* This is part of a global counter where only the total sum
|
|
|
|
* over all CPUs matters. A task can increase this counter on
|
|
|
|
* one CPU and if it got migrated afterwards it may decrease
|
|
|
|
* it on another CPU. Always updated under the runqueue lock:
|
|
|
|
*/
|
|
|
|
unsigned long nr_uninterruptible;
|
|
|
|
|
|
|
|
struct task_struct *curr, *idle, *stop;
|
|
|
|
unsigned long next_balance;
|
|
|
|
struct mm_struct *prev_mm;
|
|
|
|
|
2015-01-05 11:18:11 +01:00
|
|
|
unsigned int clock_skip_update;
|
2011-10-25 10:00:11 +02:00
|
|
|
u64 clock;
|
|
|
|
u64 clock_task;
|
|
|
|
|
|
|
|
atomic_t nr_iowait;
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
struct root_domain *rd;
|
|
|
|
struct sched_domain *sd;
|
|
|
|
|
2014-05-26 18:19:38 -04:00
|
|
|
unsigned long cpu_capacity;
|
2015-02-27 16:54:09 +01:00
|
|
|
unsigned long cpu_capacity_orig;
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2015-06-11 14:46:37 +02:00
|
|
|
struct callback_head *balance_callback;
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
unsigned char idle_balance;
|
|
|
|
/* For active balancing */
|
|
|
|
int active_balance;
|
|
|
|
int push_cpu;
|
2014-03-31 10:34:41 -07:00
|
|
|
struct task_struct *push_task;
|
2011-10-25 10:00:11 +02:00
|
|
|
struct cpu_stop_work active_balance_work;
|
|
|
|
/* cpu of this runqueue: */
|
|
|
|
int cpu;
|
|
|
|
int online;
|
|
|
|
|
2012-02-20 21:49:09 +01:00
|
|
|
struct list_head cfs_tasks;
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
u64 rt_avg;
|
|
|
|
u64 age_stamp;
|
|
|
|
u64 idle_stamp;
|
|
|
|
u64 avg_idle;
|
2013-09-13 11:26:52 -07:00
|
|
|
|
|
|
|
/* This is used to determine avg_idle's max value */
|
|
|
|
u64 max_idle_balance_cost;
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
|
|
|
|
2014-09-01 13:26:53 +05:30
|
|
|
#ifdef CONFIG_SCHED_HMP
|
2015-04-20 12:35:48 +05:30
|
|
|
struct sched_cluster *cluster;
|
2014-05-06 18:05:50 -07:00
|
|
|
struct cpumask freq_domain_cpumask;
|
2015-01-16 11:27:31 +05:30
|
|
|
struct hmp_sched_stats hmp_stats;
|
|
|
|
|
2016-07-28 11:22:08 -07:00
|
|
|
int cstate, wakeup_latency, wakeup_energy;
|
2014-04-29 12:44:43 -07:00
|
|
|
u64 window_start;
|
2015-01-16 11:27:31 +05:30
|
|
|
unsigned long hmp_flags;
|
2014-04-29 14:01:50 -07:00
|
|
|
|
2014-11-13 13:01:31 -08:00
|
|
|
u64 cur_irqload;
|
|
|
|
u64 avg_irqload;
|
|
|
|
u64 irqload_ts;
|
2015-08-10 16:41:44 -07:00
|
|
|
unsigned int static_cpu_pwr_cost;
|
2015-09-15 12:17:51 -07:00
|
|
|
struct task_struct *ed_task;
|
2016-04-28 15:22:12 -07:00
|
|
|
struct cpu_cycle cc;
|
2015-05-12 15:01:15 +05:30
|
|
|
u64 old_busy_time, old_busy_time_group;
|
2015-06-08 09:08:47 +05:30
|
|
|
u64 old_estimated_time;
|
2014-08-06 15:29:58 +05:30
|
|
|
u64 curr_runnable_sum;
|
|
|
|
u64 prev_runnable_sum;
|
2015-09-15 09:35:53 -07:00
|
|
|
u64 nt_curr_runnable_sum;
|
|
|
|
u64 nt_prev_runnable_sum;
|
2017-01-09 13:56:33 +05:30
|
|
|
struct group_cpu_time grp_time;
|
sched: Add the mechanics of top task tracking for frequency guidance
The previous patches in this rewrite of scheduler guided frequency
selection reintroduce the part-picture problem that we addressed in
our initial implementation. In that, when tasks migrate across CPUs
within a cluster, we end up losing the complete picture of the
sequential nature of the workload.
This patch aims to solve that problem slightly differently. We track
the top task on every CPU within a window. The top task is defined as the
task that runs the most in a given window. This enhances our ability
to detect the sequential nature of workloads. A single migrating task
executing for an entire window will cause 100% load to be reported
for frequency guidance instead of the maximum footprint left on any
individual CPU in the task's trail. There are cases that this new
approach does not address, namely cases where the sum of two or more
tasks accurately reflects the true sequential nature of the workload.
Future optimizations might aim to tackle that problem.
To track top tasks, we first realize that there is no strict need to
maintain the task struct itself as long as we know the load exerted by
the top task. We also realize that to maintain top tasks on every CPU
we have to track the execution of every single task that runs during
the window. The load associated with a task needs to be migrated when
the task migrates from one CPU to another. When the top task migrates
away, we need to locate the second top task and so on.
Given the above realizations, we use hashmaps to track top task load
both for the current and the previous window. This hashmap is
implemented as an array of fixed size. The key of the hashmap is given
by task_execution_time_in_a_window / array_size. The size of the array
(number of buckets in the hashmap) dictates the load granularity of each
bucket. The value stored in each bucket is a refcount of all the tasks
that executed long enough to be in that bucket.
This approach has a few benefits. Firstly, any top task stats update
now takes O(1) time. While task migration is also O(1), it does still
involve going through up to the size of the array to find the second
top task. Further patches will aim to optimize this behavior. Secondly,
and more importantly, not having to store the task struct itself saves
a lot of memory usage in that 1) there is no need to retrieve task
structs later, causing cache misses, and 2) we don't have to unnecessarily
hold up task memory for up to 2 full windows by calling get_task_struct()
after a task exits.
Change-Id: I004dba474f41590db7d3f40d9deafe86e71359ac
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-05-31 16:40:45 -07:00
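A standalone C sketch of the bucketed top-task table described above;
the sizes and names are illustrative (the kernel's NUM_LOAD_INDICES and
u8 tables appear in struct rq further below):

#include <stdint.h>

#define SKETCH_LOAD_INDICES 64

struct top_task_table {
	uint8_t bucket[SKETCH_LOAD_INDICES];	/* refcount per load bucket */
	int top;				/* highest non-empty index */
};

/* Quantize a task's execution time in the window into a bucket index. */
static int load_to_index(uint64_t exec_time, uint64_t window_size)
{
	uint64_t idx = exec_time * SKETCH_LOAD_INDICES / window_size;

	return idx >= SKETCH_LOAD_INDICES ? SKETCH_LOAD_INDICES - 1 : (int)idx;
}

/* O(1) stats update: bump the bucket refcount and track the top index. */
static void top_task_add(struct top_task_table *t,
			 uint64_t exec_time, uint64_t window_size)
{
	int idx = load_to_index(exec_time, window_size);

	t->bucket[idx]++;
	if (idx > t->top)
		t->top = idx;
}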
|
|
|
struct load_subtractions load_subs[NUM_TRACKED_WINDOWS];
|
2016-06-07 15:18:37 -07:00
|
|
|
DECLARE_BITMAP_ARRAY(top_tasks_bitmap,
|
|
|
|
NUM_TRACKED_WINDOWS, NUM_LOAD_INDICES);
|
sched: Add the mechanics of top task tracking for frequency guidance
2016-05-31 16:40:45 -07:00
|
|
|
u8 *top_tasks[NUM_TRACKED_WINDOWS];
|
|
|
|
u8 curr_table;
|
|
|
|
int prev_top;
|
|
|
|
int curr_top;
|
2014-03-29 16:56:45 -07:00
|
|
|
#endif
|
2013-12-12 17:06:11 -08:00
|
|
|
|
2017-11-06 15:07:22 -08:00
|
|
|
#ifdef CONFIG_SCHED_WALT
|
|
|
|
u64 cumulative_runnable_avg;
|
|
|
|
u64 window_start;
|
|
|
|
u64 curr_runnable_sum;
|
|
|
|
u64 prev_runnable_sum;
|
|
|
|
u64 nt_curr_runnable_sum;
|
|
|
|
u64 nt_prev_runnable_sum;
|
|
|
|
u64 cur_irqload;
|
|
|
|
u64 avg_irqload;
|
|
|
|
u64 irqload_ts;
|
2017-02-03 11:15:31 -08:00
|
|
|
u64 cum_window_demand;
|
2017-11-06 15:07:22 -08:00
|
|
|
#endif /* CONFIG_SCHED_WALT */
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#ifdef CONFIG_IRQ_TIME_ACCOUNTING
|
|
|
|
u64 prev_irq_time;
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_PARAVIRT
|
|
|
|
u64 prev_steal_time;
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
|
|
|
|
u64 prev_steal_time_rq;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/* calc_load related fields */
|
|
|
|
unsigned long calc_load_update;
|
|
|
|
long calc_load_active;
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_HRTICK
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
int hrtick_csd_pending;
|
|
|
|
struct call_single_data hrtick_csd;
|
|
|
|
#endif
|
|
|
|
struct hrtimer hrtick_timer;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHEDSTATS
|
|
|
|
/* latency stats */
|
|
|
|
struct sched_info rq_sched_info;
|
|
|
|
unsigned long long rq_cpu_time;
|
|
|
|
/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
|
|
|
|
|
|
|
|
/* sys_sched_yield() stats */
|
|
|
|
unsigned int yld_count;
|
|
|
|
|
|
|
|
/* schedule() stats */
|
|
|
|
unsigned int sched_count;
|
|
|
|
unsigned int sched_goidle;
|
|
|
|
|
|
|
|
/* try_to_wake_up() stats */
|
|
|
|
unsigned int ttwu_count;
|
|
|
|
unsigned int ttwu_local;
|
2017-06-03 15:03:03 +01:00
|
|
|
#ifdef CONFIG_SMP
|
2017-03-22 18:23:13 +00:00
|
|
|
struct eas_stats eas_stats;
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
struct llist_head wake_list;
|
|
|
|
#endif
|
2014-09-04 11:32:09 -04:00
|
|
|
|
|
|
|
#ifdef CONFIG_CPU_IDLE
|
|
|
|
/* Must be inspected within an RCU lock section */
|
|
|
|
struct cpuidle_state *idle_state;
|
2015-01-27 13:48:07 +00:00
|
|
|
int idle_state_idx;
|
2014-09-04 11:32:09 -04:00
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
static inline int cpu_of(struct rq *rq)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
return rq->cpu;
|
|
|
|
#else
|
|
|
|
return 0;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2014-08-13 13:28:12 -04:00
|
|
|
DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2011-12-07 15:07:31 +01:00
|
|
|
#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
|
2014-08-17 12:30:27 -05:00
|
|
|
#define this_rq() this_cpu_ptr(&runqueues)
|
2011-12-07 15:07:31 +01:00
|
|
|
#define task_rq(p) cpu_rq(task_cpu(p))
|
|
|
|
#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
|
2014-08-17 12:30:27 -05:00
|
|
|
#define raw_rq() raw_cpu_ptr(&runqueues)
|
2011-12-07 15:07:31 +01:00
|
|
|
|
2015-01-05 11:18:10 +01:00
|
|
|
static inline u64 __rq_clock_broken(struct rq *rq)
|
|
|
|
{
|
2015-04-28 13:00:20 -07:00
|
|
|
return READ_ONCE(rq->clock);
|
2015-01-05 11:18:10 +01:00
|
|
|
}
|
|
|
|
|
2013-04-12 01:51:02 +02:00
|
|
|
static inline u64 rq_clock(struct rq *rq)
|
|
|
|
{
|
2015-01-05 11:18:10 +01:00
|
|
|
lockdep_assert_held(&rq->lock);
|
2013-04-12 01:51:02 +02:00
|
|
|
return rq->clock;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline u64 rq_clock_task(struct rq *rq)
|
|
|
|
{
|
2015-01-05 11:18:10 +01:00
|
|
|
lockdep_assert_held(&rq->lock);
|
2013-04-12 01:51:02 +02:00
|
|
|
return rq->clock_task;
|
|
|
|
}
|
|
|
|
|
2015-01-05 11:18:11 +01:00
|
|
|
#define RQCF_REQ_SKIP 0x01
|
|
|
|
#define RQCF_ACT_SKIP 0x02
|
|
|
|
|
|
|
|
static inline void rq_clock_skip_update(struct rq *rq, bool skip)
|
|
|
|
{
|
|
|
|
lockdep_assert_held(&rq->lock);
|
|
|
|
if (skip)
|
|
|
|
rq->clock_skip_update |= RQCF_REQ_SKIP;
|
|
|
|
else
|
|
|
|
rq->clock_skip_update &= ~RQCF_REQ_SKIP;
|
|
|
|
}
|
|
|
|
|
2014-10-17 03:29:49 -04:00
|
|
|
#ifdef CONFIG_NUMA
|
2014-10-17 03:29:50 -04:00
|
|
|
enum numa_topology_type {
|
|
|
|
NUMA_DIRECT,
|
|
|
|
NUMA_GLUELESS_MESH,
|
|
|
|
NUMA_BACKPLANE,
|
|
|
|
};
|
|
|
|
extern enum numa_topology_type sched_numa_topology_type;
|
2014-10-17 03:29:49 -04:00
|
|
|
extern int sched_max_numa_distance;
|
|
|
|
extern bool find_numa_distance(int distance);
|
|
|
|
#endif
|
|
|
|
|
2013-10-07 11:28:57 +01:00
|
|
|
#ifdef CONFIG_NUMA_BALANCING
|
2014-10-31 02:13:31 +02:00
|
|
|
/* The regions in numa_faults array from task_struct */
|
|
|
|
enum numa_faults_stats {
|
|
|
|
NUMA_MEM = 0,
|
|
|
|
NUMA_CPU,
|
|
|
|
NUMA_MEMBUF,
|
|
|
|
NUMA_CPUBUF
|
|
|
|
};
|
2013-10-07 11:29:33 +01:00
|
|
|
extern void sched_setnuma(struct task_struct *p, int node);
|
2013-10-07 11:29:02 +01:00
|
|
|
extern int migrate_task_to(struct task_struct *p, int cpu);
|
2013-10-07 11:29:16 +01:00
|
|
|
extern int migrate_swap(struct task_struct *, struct task_struct *);
|
2013-10-07 11:28:57 +01:00
|
|
|
#endif /* CONFIG_NUMA_BALANCING */
|
|
|
|
|
2011-12-07 15:07:31 +01:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
|
2015-06-11 14:46:37 +02:00
|
|
|
static inline void
|
|
|
|
queue_balance_callback(struct rq *rq,
|
|
|
|
struct callback_head *head,
|
|
|
|
void (*func)(struct rq *rq))
|
|
|
|
{
|
|
|
|
lockdep_assert_held(&rq->lock);
|
|
|
|
|
|
|
|
if (unlikely(head->next))
|
|
|
|
return;
|
|
|
|
|
|
|
|
head->func = (void (*)(struct callback_head *))func;
|
|
|
|
head->next = rq->balance_callback;
|
|
|
|
rq->balance_callback = head;
|
|
|
|
}
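A usage sketch (kernel-style, simplified, modelled on how the RT class
queues its push work; the callback body is a placeholder): callbacks
are queued while holding the rq lock and run once it is safe to drop it.

static DEFINE_PER_CPU(struct callback_head, rt_push_head);

static void push_rt_tasks_cb(struct rq *rq)
{
	/* push overloaded RT tasks to other CPUs here */
}

static inline void rt_queue_push_tasks(struct rq *rq)
{
	queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu),
			       push_rt_tasks_cb);
}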
|
|
|
|
|
2014-06-04 10:31:18 -07:00
|
|
|
extern void sched_ttwu_pending(void);
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#define rcu_dereference_check_sched_domain(p) \
|
|
|
|
rcu_dereference_check((p), \
|
|
|
|
lockdep_is_held(&sched_domains_mutex))
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The domain tree (rq->sd) is protected by RCU's quiescent state transition.
|
|
|
|
* See detach_destroy_domains: synchronize_sched for details.
|
|
|
|
*
|
|
|
|
* The domain tree of any CPU may only be accessed from within
|
|
|
|
* preempt-disabled sections.
|
|
|
|
*/
|
|
|
|
#define for_each_domain(cpu, __sd) \
|
2011-12-07 15:07:31 +01:00
|
|
|
for (__sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd); \
|
|
|
|
__sd; __sd = __sd->parent)
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2011-11-17 11:08:23 -08:00
|
|
|
#define for_each_lower_domain(sd) for (; sd; sd = sd->child)
|
|
|
|
|
2011-12-07 15:07:31 +01:00
|
|
|
/**
|
|
|
|
* highest_flag_domain - Return highest sched_domain containing flag.
|
|
|
|
* @cpu: The cpu whose highest level of sched domain is to
|
|
|
|
* be returned.
|
|
|
|
* @flag: The flag to check for the highest sched_domain
|
|
|
|
* for the given cpu.
|
|
|
|
*
|
|
|
|
* Returns the highest sched_domain of a cpu which contains the given flag.
|
|
|
|
*/
|
|
|
|
static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
|
|
|
|
{
|
|
|
|
struct sched_domain *sd, *hsd = NULL;
|
|
|
|
|
|
|
|
for_each_domain(cpu, sd) {
|
|
|
|
if (!(sd->flags & flag))
|
|
|
|
break;
|
|
|
|
hsd = sd;
|
|
|
|
}
|
|
|
|
|
|
|
|
return hsd;
|
|
|
|
}
|
|
|
|
|
2013-10-07 11:29:17 +01:00
|
|
|
static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
|
|
|
|
{
|
|
|
|
struct sched_domain *sd;
|
|
|
|
|
|
|
|
for_each_domain(cpu, sd) {
|
|
|
|
if (sd->flags & flag)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return sd;
|
|
|
|
}
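/*
 * Illustrative use (assumption): this is essentially how the sd_llc
 * pointer below is derived at domain-rebuild time - the highest
 * domain that still shares the last-level cache.
 */
static inline struct sched_domain *example_llc_domain(int cpu)
{
	return highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
}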
|
|
|
|
|
2011-12-07 15:07:31 +01:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_llc);
|
2013-07-04 12:56:46 +08:00
|
|
|
DECLARE_PER_CPU(int, sd_llc_size);
|
2011-12-07 15:07:31 +01:00
|
|
|
DECLARE_PER_CPU(int, sd_llc_id);
|
2013-10-07 11:29:17 +01:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_numa);
|
2013-10-30 08:42:52 +05:30
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_busy);
|
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_asym);
|
2015-01-02 17:08:52 +00:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_ea);
|
2014-12-18 14:47:18 +00:00
|
|
|
DECLARE_PER_CPU(struct sched_domain *, sd_scs);
|
2011-12-07 15:07:31 +01:00
|
|
|
|
2014-05-26 18:19:37 -04:00
|
|
|
struct sched_group_capacity {
|
2013-03-05 16:06:23 +08:00
|
|
|
atomic_t ref;
|
|
|
|
/*
|
2014-05-26 18:19:37 -04:00
|
|
|
* CPU capacity of this group, SCHED_LOAD_SCALE being max capacity
|
|
|
|
* for a single CPU.
|
2013-03-05 16:06:23 +08:00
|
|
|
*/
|
2016-02-25 12:43:49 +00:00
|
|
|
unsigned long capacity;
|
|
|
|
	unsigned long max_capacity; /* Max per-CPU capacity in group */
|
2016-10-14 14:41:09 +01:00
|
|
|
unsigned long min_capacity; /* Min per-CPU capacity in group */
|
2013-03-05 16:06:23 +08:00
|
|
|
unsigned long next_update;
|
2014-05-26 18:19:37 -04:00
|
|
|
int imbalance; /* XXX unrelated to capacity but shared group state */
|
2013-03-05 16:06:23 +08:00
|
|
|
/*
|
|
|
|
* Number of busy cpus in this group.
|
|
|
|
*/
|
|
|
|
atomic_t nr_busy_cpus;
|
|
|
|
|
|
|
|
unsigned long cpumask[0]; /* iteration mask */
|
|
|
|
};
|
|
|
|
|
|
|
|
struct sched_group {
|
|
|
|
struct sched_group *next; /* Must be a circular list */
|
|
|
|
atomic_t ref;
|
|
|
|
|
|
|
|
unsigned int group_weight;
|
2014-05-26 18:19:37 -04:00
|
|
|
struct sched_group_capacity *sgc;
|
2017-03-07 10:37:56 -08:00
|
|
|
const struct sched_group_energy *sge;
|
2013-03-05 16:06:23 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The CPUs this group covers.
|
|
|
|
*
|
|
|
|
* NOTE: this field is variable length. (Allocated dynamically
|
|
|
|
* by attaching extra space to the end of the structure,
|
|
|
|
* depending on how many CPUs the kernel has booted up with)
|
|
|
|
*/
|
|
|
|
unsigned long cpumask[0];
|
|
|
|
};
|
|
|
|
|
|
|
|
static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
|
|
|
|
{
|
|
|
|
return to_cpumask(sg->cpumask);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* cpumask masking which cpus in the group are allowed to iterate up the domain
|
|
|
|
* tree.
|
|
|
|
*/
|
|
|
|
static inline struct cpumask *sched_group_mask(struct sched_group *sg)
|
|
|
|
{
|
2014-05-26 18:19:37 -04:00
|
|
|
return to_cpumask(sg->sgc->cpumask);
|
2013-03-05 16:06:23 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
|
|
|
|
* @group: The group whose first cpu is to be returned.
|
|
|
|
*/
|
|
|
|
static inline unsigned int group_first_cpu(struct sched_group *group)
|
|
|
|
{
|
|
|
|
return cpumask_first(sched_group_cpus(group));
|
|
|
|
}
|
|
|
|
|
2012-05-31 14:47:33 +02:00
|
|
|
extern int group_balance_cpu(struct sched_group *sg);
|
|
|
|
|
2014-06-04 10:31:18 -07:00
|
|
|
#else
|
|
|
|
|
|
|
|
static inline void sched_ttwu_pending(void) { }
|
|
|
|
|
2011-12-07 15:07:31 +01:00
|
|
|
#endif /* CONFIG_SMP */
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2011-11-15 17:14:39 +01:00
|
|
|
#include "stats.h"
|
|
|
|
#include "auto_group.h"
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2017-01-11 15:11:23 +05:30
|
|
|
enum sched_boost_policy {
|
|
|
|
SCHED_BOOST_NONE,
|
|
|
|
SCHED_BOOST_ON_BIG,
|
|
|
|
SCHED_BOOST_ON_ALL,
|
|
|
|
};
|
|
|
|
|
2014-09-01 13:26:53 +05:30
|
|
|
#ifdef CONFIG_SCHED_HMP
|
2014-03-29 16:56:45 -07:00
|
|
|
|
2014-09-04 16:24:42 -07:00
|
|
|
#define WINDOW_STATS_RECENT 0
|
|
|
|
#define WINDOW_STATS_MAX 1
|
|
|
|
#define WINDOW_STATS_MAX_RECENT_AVG 2
|
|
|
|
#define WINDOW_STATS_AVG 3
|
|
|
|
#define WINDOW_STATS_INVALID_POLICY 4
|
2014-08-11 09:22:24 +05:30
|
|
|
|
2016-08-01 17:48:21 -07:00
|
|
|
#define SCHED_UPMIGRATE_MIN_NICE 15
|
|
|
|
#define EXITING_TASK_MARKER 0xdeaddead
|
|
|
|
|
|
|
|
#define UP_MIGRATION 1
|
|
|
|
#define DOWN_MIGRATION 2
|
|
|
|
#define IRQLOAD_MIGRATION 3
|
2016-07-28 19:18:08 -07:00
|
|
|
|
2014-08-20 15:39:05 +05:30
|
|
|
extern struct mutex policy_mutex;
|
2014-03-29 11:40:16 -07:00
|
|
|
extern unsigned int sched_ravg_window;
|
2014-08-19 12:31:54 +05:30
|
|
|
extern unsigned int sched_disable_window_stats;
|
2014-03-29 11:40:16 -07:00
|
|
|
extern unsigned int max_possible_freq;
|
|
|
|
extern unsigned int min_max_freq;
|
2014-03-29 11:40:16 -07:00
|
|
|
extern unsigned int pct_task_load(struct task_struct *p);
|
2014-03-29 19:07:28 -07:00
|
|
|
extern unsigned int max_possible_efficiency;
|
|
|
|
extern unsigned int min_possible_efficiency;
|
|
|
|
extern unsigned int max_capacity;
|
|
|
|
extern unsigned int min_capacity;
|
2014-06-04 13:18:02 -07:00
|
|
|
extern unsigned int max_load_scale_factor;
|
2015-02-20 17:09:41 -08:00
|
|
|
extern unsigned int max_possible_capacity;
|
2015-12-04 06:34:03 +05:30
|
|
|
extern unsigned int min_max_possible_capacity;
|
2016-04-13 15:13:56 +05:30
|
|
|
extern unsigned int max_power_cost;
|
2014-03-29 20:04:42 -07:00
|
|
|
extern unsigned int sched_init_task_load_windows;
|
2015-04-10 15:10:56 +05:30
|
|
|
extern unsigned int up_down_migrate_scale_factor;
|
2015-12-14 14:50:12 +05:30
|
|
|
extern unsigned int sysctl_sched_restrict_cluster_spill;
|
2015-06-08 09:08:47 +05:30
|
|
|
extern unsigned int sched_pred_alert_load;
|
2016-08-01 17:48:21 -07:00
|
|
|
extern struct sched_cluster init_cluster;
|
|
|
|
extern unsigned int __read_mostly sched_short_sleep_task_threshold;
|
|
|
|
extern unsigned int __read_mostly sched_long_cpu_selection_threshold;
|
|
|
|
extern unsigned int __read_mostly sched_big_waker_task_load;
|
|
|
|
extern unsigned int __read_mostly sched_small_wakee_task_load;
|
|
|
|
extern unsigned int __read_mostly sched_spill_load;
|
|
|
|
extern unsigned int __read_mostly sched_upmigrate;
|
|
|
|
extern unsigned int __read_mostly sched_downmigrate;
|
|
|
|
extern unsigned int __read_mostly sysctl_sched_spill_nr_run;
|
sched: Add the mechanics of top task tracking for frequency guidance
The previous patches in this rewrite of scheduler guided frequency
selection reintroduce the part-picture problem that we addressed in
our initial implementation. In that, when tasks migrate across CPUs
within a cluster, we end up losing the complete picture of the
sequential nature of the workload.
This patch aims to solve that problem slightly differently. We track
the top task on every CPU within a window. Top task is defined as the
task that runs the most in a given window. This enhances our ability
to detect the sequential nature of workloads. A single migrating task
executing for an entire window will cause 100% load to be reported
for frequency guidance instead of the maximum footprint left on any
individual CPU in the task's trail. There are cases that this new
approach does not address. Namely, cases where the sum of two or more
tasks accurately reflects the true sequential nature of the workload.
Future optimizations might aim to tackle that problem.
To track top tasks, we first realize that there is no strict need to
maintain the task struct itself as long as we know the load exerted by
the top task. We also realize that to maintain top tasks on every CPU
we have to track the execution of every single task that runs during
the window. The load associated with a task needs to be migrated when
the task migrates from one CPU to another. When the top task migrates
away, we need to locate the second top task and so on.
Given the above realizations, we use hashmaps to track top task load
both for the current and the previous window. Each hashmap is
implemented as an array of fixed size. The key of the hashmap is given
by task_execution_time_in_a_window / array_size. The size of the array
(number of buckets in the hashmap) dictates the load granularity of each
bucket. The value stored in each bucket is a refcount of all the tasks
that executed long enough to be in that bucket.
This approach has a few benefits. Firstly, any top task stats update
now takes O(1) time. While task migration is also O(1), it does still
involve going through up to the size of the array to find the second
top task. Further patches will aim to optimize this behavior. Secondly,
and more importantly, not having to store the task struct itself saves
a lot of memory usage in that 1) there is no need to retrieve task
structs later causing cache misses and 2) we don't have to unnecessarily
hold up task memory for up to 2 full windows by calling get_task_struct()
after a task exits.
Change-Id: I004dba474f41590db7d3f40d9deafe86e71359ac
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2016-05-31 16:40:45 -07:00
|
|
|
extern unsigned int __read_mostly sched_load_granule;
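/*
 * Illustrative sketch (assumption; names prefixed "example_" are
 * hypothetical): the top-task hashmap described in the changelog
 * above. A task's runtime within the window, divided by
 * sched_load_granule, selects a bucket; each bucket refcounts the
 * tasks that ran that long, so the top task is the highest
 * non-empty bucket.
 */
#define EXAMPLE_NUM_LOAD_INDICES	1000

static inline int example_load_to_index(u32 runtime)
{
	return min_t(u32, runtime / sched_load_granule,
		     EXAMPLE_NUM_LOAD_INDICES - 1);
}

static inline void example_top_task_update(u8 *buckets, u32 runtime,
					   bool inc)
{
	int index = example_load_to_index(runtime);

	if (inc)
		buckets[index]++;	/* task reached this runtime bucket */
	else
		buckets[index]--;	/* task migrated away or exited */
}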
|
2015-06-08 09:08:47 +05:30
|
|
|
|
2018-09-20 15:31:36 +05:30
|
|
|
extern void init_new_task_load(struct task_struct *p);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern u64 sched_ktime_clock(void);
|
|
|
|
extern int got_boost_kick(void);
|
|
|
|
extern int register_cpu_cycle_counter_cb(struct cpu_cycle_counter_cb *cb);
|
|
|
|
extern void update_task_ravg(struct task_struct *p, struct rq *rq, int event,
|
|
|
|
u64 wallclock, u64 irqtime);
|
|
|
|
extern bool early_detection_notify(struct rq *rq, u64 wallclock);
|
|
|
|
extern void clear_ed_task(struct task_struct *p, struct rq *rq);
|
|
|
|
extern void fixup_busy_time(struct task_struct *p, int new_cpu);
|
|
|
|
extern void clear_boost_kick(int cpu);
|
|
|
|
extern void clear_hmp_request(int cpu);
|
|
|
|
extern void mark_task_starting(struct task_struct *p);
|
|
|
|
extern void set_window_start(struct rq *rq);
|
|
|
|
extern void update_cluster_topology(void);
|
2016-09-09 19:50:27 +05:30
|
|
|
extern void note_task_waking(struct task_struct *p, u64 wallclock);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern void set_task_last_switch_out(struct task_struct *p, u64 wallclock);
|
|
|
|
extern void init_clusters(void);
|
2015-01-16 13:57:02 +05:30
|
|
|
extern void reset_cpu_hmp_stats(int cpu, int reset_cra);
|
|
|
|
extern unsigned int max_task_load(void);
|
2014-07-30 01:24:34 -07:00
|
|
|
extern void sched_account_irqtime(int cpu, struct task_struct *curr,
|
|
|
|
u64 delta, u64 wallclock);
|
2016-04-29 10:58:21 -07:00
|
|
|
extern void sched_account_irqstart(int cpu, struct task_struct *curr,
|
|
|
|
u64 wallclock);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern unsigned int cpu_temp(int cpu);
|
2015-01-30 11:52:37 +05:30
|
|
|
extern unsigned int nr_eligible_big_tasks(int cpu);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern int update_preferred_cluster(struct related_thread_group *grp,
|
|
|
|
struct task_struct *p, u32 old_load);
|
|
|
|
extern void set_preferred_cluster(struct related_thread_group *grp);
|
2015-10-21 16:04:46 +05:30
|
|
|
extern void add_new_task_to_grp(struct task_struct *new);
|
2016-08-31 16:54:12 -07:00
|
|
|
extern unsigned int update_freq_aggregate_threshold(unsigned int threshold);
|
2016-05-13 02:05:32 -07:00
|
|
|
extern void update_avg_burst(struct task_struct *p);
|
|
|
|
extern void update_avg(u64 *avg, u64 sample);
|
2016-08-01 17:48:21 -07:00
|
|
|
|
2016-08-31 16:54:12 -07:00
|
|
|
#define NO_BOOST 0
|
|
|
|
#define FULL_THROTTLE_BOOST 1
|
|
|
|
#define CONSERVATIVE_BOOST 2
|
|
|
|
#define RESTRAINED_BOOST 3
|
|
|
|
|
2015-04-22 17:12:09 +05:30
|
|
|
static inline struct sched_cluster *cpu_cluster(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster;
|
|
|
|
}
|
|
|
|
|
2015-04-20 12:35:48 +05:30
|
|
|
static inline int cpu_capacity(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->capacity;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int cpu_max_possible_capacity(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->max_possible_capacity;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int cpu_load_scale_factor(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->load_scale_factor;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int cpu_efficiency(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->efficiency;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned int cpu_cur_freq(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->cur_freq;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned int cpu_min_freq(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->min_freq;
|
|
|
|
}
|
|
|
|
|
2016-03-28 14:22:52 -07:00
|
|
|
static inline unsigned int cluster_max_freq(struct sched_cluster *cluster)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Governor and thermal driver don't know the other party's mitigation
|
|
|
|
* voting. So struct cluster saves both and return min() for current
|
|
|
|
* cluster fmax.
|
|
|
|
*/
|
|
|
|
return min(cluster->max_mitigated_freq, cluster->max_freq);
|
|
|
|
}
|
|
|
|
|
2015-04-20 12:35:48 +05:30
|
|
|
static inline unsigned int cpu_max_freq(int cpu)
|
|
|
|
{
|
2016-03-28 14:22:52 -07:00
|
|
|
return cluster_max_freq(cpu_rq(cpu)->cluster);
|
2015-04-20 12:35:48 +05:30
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned int cpu_max_possible_freq(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->max_possible_freq;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int same_cluster(int src_cpu, int dst_cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(src_cpu)->cluster == cpu_rq(dst_cpu)->cluster;
|
|
|
|
}
|
|
|
|
|
2015-12-04 06:34:03 +05:30
|
|
|
static inline int cpu_max_power_cost(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->max_power_cost;
|
|
|
|
}
|
|
|
|
|
2016-09-09 19:38:03 +05:30
|
|
|
static inline int cpu_min_power_cost(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cluster->min_power_cost;
|
|
|
|
}
|
|
|
|
|
2017-11-16 13:39:33 -08:00
|
|
|
static inline u32 cpu_cycles_to_freq(u64 cycles, u64 period)
|
2016-03-08 13:46:04 -08:00
|
|
|
{
|
2016-05-17 20:04:54 -07:00
|
|
|
return div64_u64(cycles, period);
|
2016-03-08 13:46:04 -08:00
|
|
|
}
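/*
 * Worked example (illustrative; @cycles and @period carry whatever
 * units the caller pre-scaled them to): 2,400,000 cycles measured
 * over a period of 1,000 time units yield 2,400 cycles per unit -
 * e.g. cycles per microsecond, i.e. roughly 2.4 GHz.
 */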
|
|
|
|
|
2015-12-04 06:34:03 +05:30
|
|
|
static inline bool hmp_capable(void)
|
|
|
|
{
|
|
|
|
return max_possible_capacity != min_max_possible_capacity;
|
|
|
|
}
|
|
|
|
|
2017-05-10 15:43:29 +05:30
|
|
|
static inline bool is_max_capacity_cpu(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_max_possible_capacity(cpu) == max_possible_capacity;
|
|
|
|
}
|
|
|
|
|
2018-02-09 13:53:04 +05:30
|
|
|
static inline bool is_min_capacity_cpu(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_max_possible_capacity(cpu) == min_max_possible_capacity;
|
|
|
|
}
|
|
|
|
|
2015-06-10 14:57:52 -07:00
|
|
|
/*
|
|
|
|
* 'load' is in reference to "best cpu" at its best frequency.
|
|
|
|
* Scale that in reference to a given cpu, accounting for how bad it is
|
|
|
|
* in reference to "best cpu".
|
|
|
|
*/
|
|
|
|
static inline u64 scale_load_to_cpu(u64 task_load, int cpu)
|
|
|
|
{
|
2015-04-20 12:35:48 +05:30
|
|
|
u64 lsf = cpu_load_scale_factor(cpu);
|
2015-06-10 14:57:52 -07:00
|
|
|
|
2015-04-20 12:35:48 +05:30
|
|
|
if (lsf != 1024) {
|
|
|
|
task_load *= lsf;
|
2015-08-24 15:14:44 -07:00
|
|
|
task_load /= 1024;
|
|
|
|
}
|
2015-06-10 14:57:52 -07:00
|
|
|
|
|
|
|
return task_load;
|
|
|
|
}
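/*
 * Worked example (illustrative): on a cpu whose load_scale_factor is
 * 2048 - half the capacity of the "best cpu" at its best frequency -
 * a task load of 100 scales to 100 * 2048 / 1024 = 200, so the task
 * looks twice as heavy on the weaker cpu.
 */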
|
|
|
|
|
2015-07-30 10:44:13 -07:00
|
|
|
static inline unsigned int task_load(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return p->ravg.demand;
|
|
|
|
}
|
2014-12-03 10:18:12 -08:00
|
|
|
|
2014-03-29 11:40:16 -07:00
|
|
|
static inline void
|
2015-01-16 11:27:31 +05:30
|
|
|
inc_cumulative_runnable_avg(struct hmp_sched_stats *stats,
|
|
|
|
struct task_struct *p)
|
2014-03-29 11:40:16 -07:00
|
|
|
{
|
2015-01-16 11:27:31 +05:30
|
|
|
u32 task_load;
|
|
|
|
|
2017-02-01 17:59:51 -08:00
|
|
|
if (sched_disable_window_stats)
|
2015-01-16 11:27:31 +05:30
|
|
|
return;
|
|
|
|
|
2016-07-28 10:53:01 -07:00
|
|
|
task_load = sched_disable_window_stats ? 0 : p->ravg.demand;
|
2015-01-16 11:27:31 +05:30
|
|
|
|
|
|
|
stats->cumulative_runnable_avg += task_load;
|
2016-07-28 19:18:08 -07:00
|
|
|
stats->pred_demands_sum += p->ravg.pred_demand;
|
2014-03-29 11:40:16 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
2015-01-16 11:27:31 +05:30
|
|
|
dec_cumulative_runnable_avg(struct hmp_sched_stats *stats,
|
2016-07-28 19:18:08 -07:00
|
|
|
struct task_struct *p)
|
2014-03-29 11:40:16 -07:00
|
|
|
{
|
2015-01-16 11:27:31 +05:30
|
|
|
u32 task_load;
|
|
|
|
|
2017-02-01 17:59:51 -08:00
|
|
|
if (sched_disable_window_stats)
|
2015-01-16 11:27:31 +05:30
|
|
|
return;
|
|
|
|
|
2016-07-28 10:53:01 -07:00
|
|
|
task_load = sched_disable_window_stats ? 0 : p->ravg.demand;
|
2015-01-16 11:27:31 +05:30
|
|
|
|
|
|
|
stats->cumulative_runnable_avg -= task_load;
|
|
|
|
|
|
|
|
BUG_ON((s64)stats->cumulative_runnable_avg < 0);
|
2015-06-08 09:08:47 +05:30
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
stats->pred_demands_sum -= p->ravg.pred_demand;
|
|
|
|
BUG_ON((s64)stats->pred_demands_sum < 0);
|
2014-03-29 11:40:16 -07:00
|
|
|
}
|
|
|
|
|
2015-07-13 21:04:18 -07:00
|
|
|
static inline void
|
|
|
|
fixup_cumulative_runnable_avg(struct hmp_sched_stats *stats,
|
2015-06-08 09:08:47 +05:30
|
|
|
struct task_struct *p, s64 task_load_delta,
|
|
|
|
s64 pred_demand_delta)
|
2015-07-13 21:04:18 -07:00
|
|
|
{
|
2017-02-01 17:59:51 -08:00
|
|
|
if (sched_disable_window_stats)
|
2015-07-13 21:04:18 -07:00
|
|
|
return;
|
|
|
|
|
2015-07-30 10:44:13 -07:00
|
|
|
stats->cumulative_runnable_avg += task_load_delta;
|
2015-07-13 21:04:18 -07:00
|
|
|
BUG_ON((s64)stats->cumulative_runnable_avg < 0);
|
2015-06-08 09:08:47 +05:30
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
stats->pred_demands_sum += pred_demand_delta;
|
|
|
|
BUG_ON((s64)stats->pred_demands_sum < 0);
|
2015-07-13 21:04:18 -07:00
|
|
|
}
|
|
|
|
|
2014-11-04 15:25:50 +05:30
|
|
|
#define pct_to_real(tunable) \
|
|
|
|
(div64_u64((u64)tunable * (u64)max_task_load(), 100))
|
|
|
|
|
|
|
|
#define real_to_pct(tunable) \
|
|
|
|
(div64_u64((u64)tunable * (u64)100, (u64)max_task_load()))
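/*
 * Example (illustrative; assumes max_task_load() == 1024): a tunable
 * of 50 percent converts as pct_to_real(50) == 512, and
 * real_to_pct(512) == 50 recovers the percentage.
 */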
|
|
|
|
|
2014-11-13 14:58:10 -08:00
|
|
|
#define SCHED_HIGH_IRQ_TIMEOUT 3	/* jiffies; compared against the jiffies delta below */
|
|
|
|
static inline u64 sched_irqload(int cpu)
|
|
|
|
{
|
|
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
s64 delta;
|
|
|
|
|
|
|
|
delta = get_jiffies_64() - rq->irqload_ts;
|
2014-12-16 14:44:09 -08:00
|
|
|
/*
|
|
|
|
* Current context can be preempted by irq and rq->irqload_ts can be
|
|
|
|
* updated by irq context so that delta can be negative.
|
|
|
|
* But this is okay and we can safely return as this means there
|
|
|
|
 * was a recent irq occurrence.
|
|
|
|
*/
|
2014-11-13 14:58:10 -08:00
|
|
|
|
|
|
|
if (delta < SCHED_HIGH_IRQ_TIMEOUT)
|
|
|
|
return rq->avg_irqload;
|
|
|
|
else
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int sched_cpu_high_irqload(int cpu)
|
|
|
|
{
|
2014-11-30 16:26:55 -08:00
|
|
|
return sched_irqload(cpu) >= sysctl_sched_cpu_high_irqload;
|
2014-11-13 14:58:10 -08:00
|
|
|
}
|
|
|
|
|
2016-07-29 15:56:29 -07:00
|
|
|
static inline bool task_in_related_thread_group(struct task_struct *p)
|
|
|
|
{
|
|
|
|
	return rcu_access_pointer(p->grp) != NULL;
|
|
|
|
}
|
|
|
|
|
2015-04-24 15:44:31 +05:30
|
|
|
static inline
|
|
|
|
struct related_thread_group *task_related_thread_group(struct task_struct *p)
|
|
|
|
{
|
2016-03-05 13:47:52 -08:00
|
|
|
return rcu_dereference(p->grp);
|
2015-04-24 15:44:31 +05:30
|
|
|
}
|
|
|
|
|
2015-06-08 09:08:47 +05:30
|
|
|
#define PRED_DEMAND_DELTA ((s64)new_pred_demand - p->ravg.pred_demand)
|
2015-04-24 15:44:31 +05:30
|
|
|
|
2015-05-12 15:01:15 +05:30
|
|
|
extern void
|
|
|
|
check_for_freq_change(struct rq *rq, bool check_pred, bool check_groups);
|
|
|
|
|
2016-08-02 15:08:13 -07:00
|
|
|
extern void notify_migration(int src_cpu, int dest_cpu,
|
|
|
|
bool src_cpu_dead, struct task_struct *p);
|
|
|
|
|
2014-08-14 22:01:57 +05:30
|
|
|
/* Is frequency of two cpus synchronized with each other? */
|
|
|
|
static inline int same_freq_domain(int src_cpu, int dst_cpu)
|
|
|
|
{
|
|
|
|
struct rq *rq = cpu_rq(src_cpu);
|
|
|
|
|
|
|
|
if (src_cpu == dst_cpu)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return cpumask_test_cpu(dst_cpu, &rq->freq_domain_cpumask);
|
|
|
|
}
|
|
|
|
|
2014-07-24 06:40:30 -07:00
|
|
|
#define BOOST_KICK 0
|
2014-07-25 08:04:27 -07:00
|
|
|
#define CPU_RESERVED 1
|
|
|
|
|
|
|
|
static inline int is_reserved(int cpu)
|
|
|
|
{
|
|
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
|
|
|
|
return test_bit(CPU_RESERVED, &rq->hmp_flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int mark_reserved(int cpu)
|
|
|
|
{
|
|
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
|
|
|
|
/* Name boost_flags as hmp_flags? */
|
|
|
|
return test_and_set_bit(CPU_RESERVED, &rq->hmp_flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void clear_reserved(int cpu)
|
|
|
|
{
|
|
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
|
|
|
|
clear_bit(CPU_RESERVED, &rq->hmp_flags);
|
|
|
|
}
|
2014-07-24 06:40:30 -07:00
|
|
|
|
2015-08-21 11:02:22 -07:00
|
|
|
static inline u64 cpu_cravg_sync(int cpu, int sync)
|
|
|
|
{
|
|
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
u64 load;
|
|
|
|
|
|
|
|
load = rq->hmp_stats.cumulative_runnable_avg;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If load is being checked in a sync wakeup environment,
|
|
|
|
* we may want to discount the load of the currently running
|
|
|
|
* task.
|
|
|
|
*/
|
|
|
|
if (sync && cpu == smp_processor_id()) {
|
|
|
|
if (load > rq->curr->ravg.demand)
|
|
|
|
load -= rq->curr->ravg.demand;
|
|
|
|
else
|
|
|
|
load = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return load;
|
|
|
|
}
|
|
|
|
|
2016-09-09 19:38:03 +05:30
|
|
|
static inline bool is_short_burst_task(struct task_struct *p)
|
|
|
|
{
|
2016-09-09 19:59:12 +05:30
|
|
|
return p->ravg.avg_burst < sysctl_sched_short_burst &&
|
|
|
|
p->ravg.avg_sleep_time > sysctl_sched_short_sleep;
|
2016-09-09 19:38:03 +05:30
|
|
|
}
|
|
|
|
|
2018-01-05 10:21:34 +05:30
|
|
|
extern void check_for_migration(struct rq *rq, struct task_struct *p);
|
2015-06-19 12:28:24 -07:00
|
|
|
extern void pre_big_task_count_change(const struct cpumask *cpus);
|
|
|
|
extern void post_big_task_count_change(const struct cpumask *cpus);
|
2014-03-29 20:04:42 -07:00
|
|
|
extern void set_hmp_defaults(void);
|
2015-07-21 15:00:59 -07:00
|
|
|
extern int power_delta_exceeded(unsigned int cpu_cost, unsigned int base_cost);
|
2015-08-21 11:02:22 -07:00
|
|
|
extern unsigned int power_cost(int cpu, u64 demand);
|
2014-08-11 09:22:24 +05:30
|
|
|
extern void reset_all_window_stats(u64 window_start, unsigned int window_size);
|
2014-12-03 10:18:12 -08:00
|
|
|
extern int sched_boost(void);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern int task_load_will_fit(struct task_struct *p, u64 task_load, int cpu,
|
2016-08-31 16:54:12 -07:00
|
|
|
enum sched_boost_policy boost_policy);
|
|
|
|
extern enum sched_boost_policy sched_boost_policy(void);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern int task_will_fit(struct task_struct *p, int cpu);
|
|
|
|
extern u64 cpu_load(int cpu);
|
|
|
|
extern u64 cpu_load_sync(int cpu, int sync);
|
|
|
|
extern int preferred_cluster(struct sched_cluster *cluster,
|
|
|
|
struct task_struct *p);
|
|
|
|
extern void inc_nr_big_task(struct hmp_sched_stats *stats,
|
|
|
|
struct task_struct *p);
|
|
|
|
extern void dec_nr_big_task(struct hmp_sched_stats *stats,
|
|
|
|
struct task_struct *p);
|
|
|
|
extern void inc_rq_hmp_stats(struct rq *rq,
|
|
|
|
struct task_struct *p, int change_cra);
|
|
|
|
extern void dec_rq_hmp_stats(struct rq *rq,
|
|
|
|
struct task_struct *p, int change_cra);
|
2016-10-25 11:05:13 -07:00
|
|
|
extern void reset_hmp_stats(struct hmp_sched_stats *stats, int reset_cra);
|
2016-08-01 17:48:21 -07:00
|
|
|
extern int is_big_task(struct task_struct *p);
|
|
|
|
extern int upmigrate_discouraged(struct task_struct *p);
|
|
|
|
extern struct sched_cluster *rq_cluster(struct rq *rq);
|
|
|
|
extern int nr_big_tasks(struct rq *rq);
|
|
|
|
extern void fixup_nr_big_tasks(struct hmp_sched_stats *stats,
|
|
|
|
struct task_struct *p, s64 delta);
|
|
|
|
extern void reset_task_stats(struct task_struct *p);
|
|
|
|
extern void reset_cfs_rq_hmp_stats(int cpu, int reset_cra);
|
|
|
|
extern void _inc_hmp_sched_stats_fair(struct rq *rq,
|
|
|
|
struct task_struct *p, int change_cra);
|
|
|
|
extern u64 cpu_upmigrate_discourage_read_u64(struct cgroup_subsys_state *css,
|
|
|
|
struct cftype *cft);
|
|
|
|
extern int cpu_upmigrate_discourage_write_u64(struct cgroup_subsys_state *css,
|
|
|
|
struct cftype *cft, u64 upmigrate_discourage);
|
2016-08-31 16:54:12 -07:00
|
|
|
extern void sched_boost_parse_dt(void);
|
2016-06-07 15:18:37 -07:00
|
|
|
extern void clear_top_tasks_bitmap(unsigned long *bitmap);
|
2014-03-29 20:04:42 -07:00
|
|
|
|
2016-08-31 16:54:12 -07:00
|
|
|
#if defined(CONFIG_SCHED_TUNE) && defined(CONFIG_CGROUP_SCHEDTUNE)
|
|
|
|
extern bool task_sched_boost(struct task_struct *p);
|
|
|
|
extern int sync_cgroup_colocation(struct task_struct *p, bool insert);
|
|
|
|
extern bool same_schedtune(struct task_struct *tsk1, struct task_struct *tsk2);
|
|
|
|
extern void update_cgroup_boost_settings(void);
|
|
|
|
extern void restore_cgroup_boost_settings(void);
|
|
|
|
|
|
|
|
#else
|
|
|
|
static inline bool
|
|
|
|
same_schedtune(struct task_struct *tsk1, struct task_struct *tsk2)
|
|
|
|
{
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool task_sched_boost(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void update_cgroup_boost_settings(void) { }
|
|
|
|
static inline void restore_cgroup_boost_settings(void) { }
|
|
|
|
#endif
|
|
|
|
|
2016-11-28 13:41:18 -08:00
|
|
|
extern int alloc_related_thread_groups(void);
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
#else /* CONFIG_SCHED_HMP */
|
|
|
|
|
|
|
|
struct hmp_sched_stats;
|
|
|
|
struct related_thread_group;
|
2016-08-01 17:48:21 -07:00
|
|
|
struct sched_cluster;
|
|
|
|
|
2017-01-11 15:11:23 +05:30
|
|
|
static inline enum sched_boost_policy sched_boost_policy(void)
|
|
|
|
{
|
|
|
|
return SCHED_BOOST_NONE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool task_sched_boost(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2016-08-01 17:48:21 -07:00
|
|
|
static inline int got_boost_kick(void)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void update_task_ravg(struct task_struct *p, struct rq *rq,
|
|
|
|
int event, u64 wallclock, u64 irqtime) { }
|
|
|
|
|
|
|
|
static inline bool early_detection_notify(struct rq *rq, u64 wallclock)
|
|
|
|
{
|
|
|
|
	return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void clear_ed_task(struct task_struct *p, struct rq *rq) { }
|
|
|
|
static inline void fixup_busy_time(struct task_struct *p, int new_cpu) { }
|
|
|
|
static inline void clear_boost_kick(int cpu) { }
|
|
|
|
static inline void clear_hmp_request(int cpu) { }
|
|
|
|
static inline void mark_task_starting(struct task_struct *p) { }
|
|
|
|
static inline void set_window_start(struct rq *rq) { }
|
2016-10-01 11:06:13 +05:30
|
|
|
static inline void init_clusters(void) {}
|
2016-08-01 17:48:21 -07:00
|
|
|
static inline void update_cluster_topology(void) { }
|
2016-09-09 19:50:27 +05:30
|
|
|
static inline void note_task_waking(struct task_struct *p, u64 wallclock) { }
|
2016-08-01 17:48:21 -07:00
|
|
|
static inline void set_task_last_switch_out(struct task_struct *p,
|
|
|
|
u64 wallclock) { }
|
|
|
|
|
|
|
|
static inline int task_will_fit(struct task_struct *p, int cpu)
|
|
|
|
{
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int select_best_cpu(struct task_struct *p, int target,
|
|
|
|
int reason, int sync)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned int power_cost(int cpu, u64 demand)
|
|
|
|
{
|
|
|
|
return SCHED_CAPACITY_SCALE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int sched_boost(void)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int is_big_task(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int nr_big_tasks(struct rq *rq)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int is_cpu_throttling_imminent(int cpu)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int is_task_migration_throttled(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned int cpu_temp(int cpu)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
inc_rq_hmp_stats(struct rq *rq, struct task_struct *p, int change_cra) { }
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
dec_rq_hmp_stats(struct rq *rq, struct task_struct *p, int change_cra) { }
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
inc_hmp_sched_stats_fair(struct rq *rq, struct task_struct *p) { }
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
dec_hmp_sched_stats_fair(struct rq *rq, struct task_struct *p) { }
|
|
|
|
|
|
|
|
static inline int
|
|
|
|
preferred_cluster(struct sched_cluster *cluster, struct task_struct *p)
|
|
|
|
{
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct sched_cluster *rq_cluster(struct rq *rq)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2018-09-20 15:31:36 +05:30
|
|
|
static inline void init_new_task_load(struct task_struct *p)
|
2016-05-09 16:28:07 -07:00
|
|
|
{
|
|
|
|
}
|
2016-07-28 19:18:08 -07:00
|
|
|
|
|
|
|
static inline u64 scale_load_to_cpu(u64 load, int cpu)
|
|
|
|
{
|
|
|
|
return load;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned int nr_eligible_big_tasks(int cpu)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-05-10 15:43:29 +05:30
|
|
|
static inline bool is_max_capacity_cpu(int cpu) { return true; }
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
static inline int pct_task_load(struct task_struct *p) { return 0; }
|
|
|
|
|
|
|
|
static inline int cpu_capacity(int cpu)
|
|
|
|
{
|
|
|
|
return SCHED_LOAD_SCALE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int same_cluster(int src_cpu, int dst_cpu) { return 1; }
|
|
|
|
|
|
|
|
static inline void inc_cumulative_runnable_avg(struct hmp_sched_stats *stats,
|
|
|
|
struct task_struct *p)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void dec_cumulative_runnable_avg(struct hmp_sched_stats *stats,
|
|
|
|
struct task_struct *p)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void sched_account_irqtime(int cpu, struct task_struct *curr,
|
|
|
|
u64 delta, u64 wallclock)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void sched_account_irqstart(int cpu, struct task_struct *curr,
|
|
|
|
u64 wallclock)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int sched_cpu_high_irqload(int cpu) { return 0; }
|
|
|
|
|
|
|
|
static inline void set_preferred_cluster(struct related_thread_group *grp) { }
|
|
|
|
|
2016-07-29 15:56:29 -07:00
|
|
|
static inline bool task_in_related_thread_group(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
static inline
|
|
|
|
struct related_thread_group *task_related_thread_group(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline u32 task_load(struct task_struct *p) { return 0; }
|
|
|
|
|
|
|
|
static inline int update_preferred_cluster(struct related_thread_group *grp,
|
|
|
|
struct task_struct *p, u32 old_load)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
2014-03-29 20:04:42 -07:00
|
|
|
|
2015-10-21 16:04:46 +05:30
|
|
|
static inline void add_new_task_to_grp(struct task_struct *new) {}
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
#define PRED_DEMAND_DELTA (0)
|
|
|
|
|
|
|
|
static inline void
|
|
|
|
check_for_freq_change(struct rq *rq, bool check_pred, bool check_groups) { }
|
|
|
|
|
2016-08-02 15:08:13 -07:00
|
|
|
static inline void notify_migration(int src_cpu, int dest_cpu,
|
|
|
|
bool src_cpu_dead, struct task_struct *p) { }
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
static inline int same_freq_domain(int src_cpu, int dst_cpu)
|
|
|
|
{
|
|
|
|
return 1;
|
|
|
|
}
|
2014-07-21 02:05:24 -07:00
|
|
|
|
2018-01-05 10:21:34 +05:30
|
|
|
static inline void check_for_migration(struct rq *rq, struct task_struct *p) { }
|
2015-06-19 12:28:24 -07:00
|
|
|
static inline void pre_big_task_count_change(const struct cpumask *cpus) { }
|
|
|
|
static inline void post_big_task_count_change(const struct cpumask *cpus) { }
|
2014-03-29 20:04:42 -07:00
|
|
|
static inline void set_hmp_defaults(void) { }
|
|
|
|
|
2014-07-25 08:04:27 -07:00
|
|
|
static inline void clear_reserved(int cpu) { }
|
2016-08-31 16:54:12 -07:00
|
|
|
static inline void sched_boost_parse_dt(void) {}
|
2016-11-28 13:41:18 -08:00
|
|
|
static inline int alloc_related_thread_groups(void) { return 0; }
|
2014-07-25 08:04:27 -07:00
|
|
|
|
2014-03-31 18:10:21 -07:00
|
|
|
#define trace_sched_cpu_load(...)
|
2015-11-02 15:08:20 -08:00
|
|
|
#define trace_sched_cpu_load_lb(...)
|
|
|
|
#define trace_sched_cpu_load_cgroup(...)
|
|
|
|
#define trace_sched_cpu_load_wakeup(...)
|
2014-03-31 18:10:21 -07:00
|
|
|
|
2016-05-13 02:05:32 -07:00
|
|
|
static inline void update_avg_burst(struct task_struct *p) {}
|
|
|
|
|
2016-07-28 19:18:08 -07:00
|
|
|
#endif /* CONFIG_SCHED_HMP */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Returns the rq capacity of any rq in a group. This does not play
|
|
|
|
* well with groups where rq capacity can change independently.
|
|
|
|
*/
|
|
|
|
#define group_rq_capacity(group) cpu_capacity(group_first_cpu(group))
|
2014-03-29 20:04:42 -07:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#ifdef CONFIG_CGROUP_SCHED
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * Return the group to which this task belongs.
|
|
|
|
*
|
2013-08-08 20:11:22 -04:00
|
|
|
* We cannot use task_css() and friends because the cgroup subsystem
|
|
|
|
* changes that value before the cgroup_subsys::attach() method is called,
|
|
|
|
* therefore we cannot pin it and might observe the wrong value.
|
2012-06-22 13:36:05 +02:00
|
|
|
*
|
|
|
|
* The same is true for autogroup's p->signal->autogroup->tg, the autogroup
|
|
|
|
* core changes this before calling sched_move_task().
|
|
|
|
*
|
|
|
|
* Instead we use a 'copy' which is updated from sched_move_task() while
|
|
|
|
* holding both task_struct::pi_lock and rq::lock.
|
2011-10-25 10:00:11 +02:00
|
|
|
*/
|
|
|
|
static inline struct task_group *task_group(struct task_struct *p)
|
|
|
|
{
|
2012-06-22 13:36:05 +02:00
|
|
|
return p->sched_task_group;
|
2011-10-25 10:00:11 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
|
|
|
|
static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_FAIR_GROUP_SCHED) || defined(CONFIG_RT_GROUP_SCHED)
|
|
|
|
struct task_group *tg = task_group(p);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
2017-05-30 14:51:53 +01:00
|
|
|
set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
|
2011-10-25 10:00:11 +02:00
|
|
|
p->se.cfs_rq = tg->cfs_rq[cpu];
|
|
|
|
p->se.parent = tg->se[cpu];
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_RT_GROUP_SCHED
|
|
|
|
p->rt.rt_rq = tg->rt_rq[cpu];
|
|
|
|
p->rt.parent = tg->rt_se[cpu];
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
#else /* CONFIG_CGROUP_SCHED */
|
|
|
|
|
|
|
|
static inline void set_task_rq(struct task_struct *p, unsigned int cpu) { }
|
|
|
|
static inline struct task_group *task_group(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
#endif /* CONFIG_CGROUP_SCHED */
|
|
|
|
|
|
|
|
static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
|
|
|
|
{
|
|
|
|
set_task_rq(p, cpu);
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* After ->cpu is set up to a new value, task_rq_lock(p, ...) can be
|
|
|
|
 * successfully executed on another CPU. We must ensure that updates of
|
|
|
|
* per-task data have been completed by this moment.
|
|
|
|
*/
|
|
|
|
smp_wmb();
|
2016-09-13 14:29:24 -07:00
|
|
|
#ifdef CONFIG_THREAD_INFO_IN_TASK
|
|
|
|
p->cpu = cpu;
|
|
|
|
#else
|
2011-10-25 10:00:11 +02:00
|
|
|
task_thread_info(p)->cpu = cpu;
|
2016-09-13 14:29:24 -07:00
|
|
|
#endif
|
2013-10-07 11:29:16 +01:00
|
|
|
p->wake_cpu = cpu;
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Tunables that become constants when CONFIG_SCHED_DEBUG is off:
|
|
|
|
*/
|
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
2012-02-24 08:31:31 +01:00
|
|
|
# include <linux/static_key.h>
|
2011-10-25 10:00:11 +02:00
|
|
|
# define const_debug __read_mostly
|
|
|
|
#else
|
|
|
|
# define const_debug const
|
|
|
|
#endif
|
|
|
|
|
|
|
|
extern const_debug unsigned int sysctl_sched_features;
|
|
|
|
|
|
|
|
#define SCHED_FEAT(name, enabled) \
|
|
|
|
__SCHED_FEAT_##name ,
|
|
|
|
|
|
|
|
enum {
|
2011-11-15 17:14:39 +01:00
|
|
|
#include "features.h"
|
2011-07-06 14:20:14 +02:00
|
|
|
__SCHED_FEAT_NR,
|
2011-10-25 10:00:11 +02:00
|
|
|
};
|
|
|
|
|
|
|
|
#undef SCHED_FEAT
|
|
|
|
|
2011-07-06 14:20:14 +02:00
|
|
|
#if defined(CONFIG_SCHED_DEBUG) && defined(HAVE_JUMP_LABEL)
|
|
|
|
#define SCHED_FEAT(name, enabled) \
|
2012-02-24 08:31:31 +01:00
|
|
|
static __always_inline bool static_branch_##name(struct static_key *key) \
|
2011-07-06 14:20:14 +02:00
|
|
|
{ \
|
2014-07-02 15:52:41 +00:00
|
|
|
return static_key_##enabled(key); \
|
2011-07-06 14:20:14 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
#include "features.h"
|
|
|
|
|
|
|
|
#undef SCHED_FEAT
|
|
|
|
|
2012-02-24 08:31:31 +01:00
|
|
|
extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
|
2011-07-06 14:20:14 +02:00
|
|
|
#define sched_feat(x) (static_branch_##x(&sched_feat_keys[__SCHED_FEAT_##x]))
|
|
|
|
#else /* !(SCHED_DEBUG && HAVE_JUMP_LABEL) */
|
2011-10-25 10:00:11 +02:00
|
|
|
#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
|
2011-07-06 14:20:14 +02:00
|
|
|
#endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
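/*
 * Illustrative expansion (assumption): SCHED_FEAT(FOO, true) in
 * features.h contributes __SCHED_FEAT_FOO to the enum above and,
 * under jump labels, a static_branch_FOO() helper, so that
 * sched_feat(FOO) becomes a runtime-patchable branch rather than a
 * bitmask test.
 */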
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2015-08-11 21:54:21 +05:30
|
|
|
extern struct static_key_false sched_numa_balancing;
|
2012-10-25 14:16:43 +02:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
static inline u64 global_rt_period(void)
|
|
|
|
{
|
|
|
|
return (u64)sysctl_sched_rt_period * NSEC_PER_USEC;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline u64 global_rt_runtime(void)
|
|
|
|
{
|
|
|
|
if (sysctl_sched_rt_runtime < 0)
|
|
|
|
return RUNTIME_INF;
|
|
|
|
|
|
|
|
return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
|
|
|
|
}
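/*
 * Example (kernel defaults): sysctl_sched_rt_period = 1000000us and
 * sysctl_sched_rt_runtime = 950000us, so RT tasks may consume at
 * most 0.95s of every 1s period; a runtime sysctl of -1 yields
 * RUNTIME_INF and disables the throttling.
 */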
|
|
|
|
|
|
|
|
static inline int task_current(struct rq *rq, struct task_struct *p)
|
|
|
|
{
|
|
|
|
return rq->curr == p;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int task_running(struct rq *rq, struct task_struct *p)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
return p->on_cpu;
|
|
|
|
#else
|
|
|
|
return task_current(rq, p);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2014-08-20 13:47:32 +04:00
|
|
|
static inline int task_on_rq_queued(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return p->on_rq == TASK_ON_RQ_QUEUED;
|
|
|
|
}
|
2011-10-25 10:00:11 +02:00
|
|
|
|
sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state
This is a new p->on_rq state which will be used to indicate that a task
is in the process of migrating between two RQs. It allows us to get
rid of double_rq_lock(), which we previously used to change the rq of
a queued task.
Let's consider an example. To move a task between src_rq and
dst_rq we will do the following:
raw_spin_lock(&src_rq->lock);
/* p is a task which is queued on src_rq */
p = ...;
dequeue_task(src_rq, p, 0);
p->on_rq = TASK_ON_RQ_MIGRATING;
set_task_cpu(p, dst_cpu);
raw_spin_unlock(&src_rq->lock);
/*
* Both RQs are unlocked here.
* Task p is dequeued from src_rq
* but its on_rq value is not zero.
*/
raw_spin_lock(&dst_rq->lock);
p->on_rq = TASK_ON_RQ_QUEUED;
enqueue_task(dst_rq, p, 0);
raw_spin_unlock(&dst_rq->lock);
While p->on_rq is TASK_ON_RQ_MIGRATING, the task is considered
"migrating", and other parallel scheduler actions on it are
not available to parallel callers. A parallel caller spins
until the migration is completed.
The unavailable actions are changing cpu affinity and changing
priority, etc. - in other words, all the task-related functionality
which used to require task_rq(p)->lock.
To implement TASK_ON_RQ_MIGRATING support we primarily rely on
the following fact. Most scheduler users (from which we are
protecting a migrating task) use task_rq_lock() and
__task_rq_lock() to get the lock of task_rq(p). These primitives
know that a task's cpu may change, and they spin while the
lock of the right RQ is not held. We add one more condition into
them, so they will also spin until the migration is finished.
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1408528062.23412.88.camel@tkhai
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-08-20 13:47:42 +04:00
|
|
|
static inline int task_on_rq_migrating(struct task_struct *p)
|
|
|
|
{
|
|
|
|
return p->on_rq == TASK_ON_RQ_MIGRATING;
|
|
|
|
}
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#ifndef prepare_arch_switch
|
|
|
|
# define prepare_arch_switch(next) do { } while (0)
|
|
|
|
#endif
|
2011-11-27 21:43:10 +00:00
|
|
|
#ifndef finish_arch_post_lock_switch
|
|
|
|
# define finish_arch_post_lock_switch() do { } while (0)
|
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
|
|
|
|
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* We can optimise this out completely for !SMP, because the
|
|
|
|
* SMP rebalancing from interrupt is the only thing that cares
|
|
|
|
* here.
|
|
|
|
*/
|
|
|
|
next->on_cpu = 1;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
/*
|
|
|
|
* After ->on_cpu is cleared, the task can be moved to a different CPU.
|
|
|
|
* We must ensure this doesn't happen until the switch is completely
|
|
|
|
* finished.
|
2015-09-29 14:45:09 +02:00
|
|
|
*
|
2015-10-06 14:36:17 +02:00
|
|
|
* In particular, the load of prev->state in finish_task_switch() must
|
|
|
|
* happen before this.
|
|
|
|
*
|
2015-09-29 14:45:09 +02:00
|
|
|
* Pairs with the control dependency and rmb in try_to_wake_up().
|
2011-10-25 10:00:11 +02:00
|
|
|
*/
|
2015-09-29 14:45:09 +02:00
|
|
|
smp_store_release(&prev->on_cpu, 0);
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_DEBUG_SPINLOCK
|
|
|
|
/* this is a valid case when another task releases the spinlock */
|
|
|
|
rq->lock.owner = current;
|
|
|
|
#endif
|
|
|
|
/*
|
|
|
|
* If we are tracking spinlock dependencies then we have to
|
|
|
|
* fix up the runqueue lock - which gets 'carried over' from
|
|
|
|
* prev into current:
|
|
|
|
*/
|
|
|
|
spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
|
|
|
|
|
|
|
|
raw_spin_unlock_irq(&rq->lock);
|
|
|
|
}
|
|
|
|
|
2013-03-05 16:06:38 +08:00
|
|
|
/*
|
|
|
|
* wake flags
|
|
|
|
*/
|
|
|
|
#define WF_SYNC 0x01 /* waker goes to sleep after wakeup */
|
|
|
|
#define WF_FORK 0x02 /* child wakeup after fork */
|
|
|
|
#define WF_MIGRATED 0x4 /* internal use, task got migrated */
|
2016-01-05 10:53:30 -08:00
|
|
|
#define WF_NO_NOTIFIER 0x08 /* do not notify governor */
|
2013-03-05 16:06:38 +08:00
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
/*
|
|
|
|
* To aid in avoiding the subversion of "niceness" due to uneven distribution
|
|
|
|
* of tasks with abnormal "nice" values across CPUs the contribution that
|
|
|
|
* each task makes to its run queue's load is weighted according to its
|
|
|
|
* scheduling class and "nice" value. For SCHED_NORMAL tasks this is just a
|
|
|
|
* scaled version of the new time slice allocation that they receive on time
|
|
|
|
* slice expiry etc.
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define WEIGHT_IDLEPRIO 3
|
|
|
|
#define WMULT_IDLEPRIO 1431655765
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Nice levels are multiplicative, with a gentle 10% change for every
|
|
|
|
* nice level changed. I.e. when a CPU-bound task goes from nice 0 to
|
|
|
|
* nice 1, it will get ~10% less CPU time than another CPU-bound task
|
|
|
|
* that remained on nice 0.
|
|
|
|
*
|
|
|
|
* The "10% effect" is relative and cumulative: from _any_ nice level,
|
|
|
|
* if you go up 1 level, it's -10% CPU usage, if you go down 1 level
|
|
|
|
* it's +10% CPU usage. (to achieve that we use a multiplier of 1.25.
|
|
|
|
* If a task goes up by ~10% and another task goes down by ~10% then
|
|
|
|
* the relative distance between them is ~25%.)
|
|
|
|
*/
|
|
|
|
static const int prio_to_weight[40] = {
|
|
|
|
/* -20 */ 88761, 71755, 56483, 46273, 36291,
|
|
|
|
/* -15 */ 29154, 23254, 18705, 14949, 11916,
|
|
|
|
/* -10 */ 9548, 7620, 6100, 4904, 3906,
|
|
|
|
/* -5 */ 3121, 2501, 1991, 1586, 1277,
|
|
|
|
/* 0 */ 1024, 820, 655, 526, 423,
|
|
|
|
/* 5 */ 335, 272, 215, 172, 137,
|
|
|
|
/* 10 */ 110, 87, 70, 56, 45,
|
|
|
|
/* 15 */ 36, 29, 23, 18, 15,
|
|
|
|
};
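/*
 * Worked example (illustrative): two CPU-bound tasks at nice 0 and
 * nice 1 share a cpu as 1024 / (1024 + 820) ~= 55.5% versus
 * 820 / (1024 + 820) ~= 44.5% - the nice 1 task gets ~10% less than
 * an even split, and 1024 / 820 ~= 1.25 is the per-level multiplier
 * described above.
 */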
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Inverse (2^32/x) values of the prio_to_weight[] array, precalculated.
|
|
|
|
*
|
|
|
|
* In cases where the weight does not change often, we can use the
|
|
|
|
* precalculated inverse to speed up arithmetics by turning divisions
|
|
|
|
* into multiplications:
|
|
|
|
*/
|
|
|
|
static const u32 prio_to_wmult[40] = {
|
|
|
|
/* -20 */ 48388, 59856, 76040, 92818, 118348,
|
|
|
|
/* -15 */ 147320, 184698, 229616, 287308, 360437,
|
|
|
|
/* -10 */ 449829, 563644, 704093, 875809, 1099582,
|
|
|
|
/* -5 */ 1376151, 1717300, 2157191, 2708050, 3363326,
|
|
|
|
/* 0 */ 4194304, 5237765, 6557202, 8165337, 10153587,
|
|
|
|
/* 5 */ 12820798, 15790321, 19976592, 24970740, 31350126,
|
|
|
|
/* 10 */ 39045157, 49367440, 61356676, 76695844, 95443717,
|
|
|
|
/* 15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
|
|
|
|
};
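/*
 * Sanity check (illustrative): for nice 0, weight == 1024 and
 * 2^32 / 1024 == 4194304, the corresponding entry above. With it,
 * "delta_exec / weight" can be computed as
 * "(delta_exec * wmult) >> 32".
 */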
|
|
|
|
|
2016-01-18 15:27:07 +01:00
|
|
|
/*
|
|
|
|
* {de,en}queue flags:
|
|
|
|
*
|
|
|
|
* DEQUEUE_SLEEP - task is no longer runnable
|
|
|
|
* ENQUEUE_WAKEUP - task just became runnable
|
|
|
|
*
|
|
|
|
* SAVE/RESTORE - an otherwise spurious dequeue/enqueue, done to ensure tasks
|
|
|
|
* are in a known state which allows modification. Such pairs
|
|
|
|
* should preserve as much state as possible.
|
|
|
|
*
|
|
|
|
* MOVE - paired with SAVE/RESTORE, explicitly does not preserve the location
|
|
|
|
* in the runqueue.
|
|
|
|
*
|
|
|
|
* ENQUEUE_HEAD - place at front of runqueue (tail if not specified)
|
|
|
|
* ENQUEUE_REPLENISH - CBS (replenish runtime and postpone deadline)
|
|
|
|
* ENQUEUE_WAKING - sched_class::task_waking was called
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define DEQUEUE_SLEEP 0x01
|
|
|
|
#define DEQUEUE_SAVE 0x02 /* matches ENQUEUE_RESTORE */
|
|
|
|
#define DEQUEUE_MOVE 0x04 /* matches ENQUEUE_MOVE */
|
|
|
|
|
sched/core: Fix task and run queue sched_info::run_delay inconsistencies
Mike Meyer reported the following bug:
> During evaluation of some performance data, it was discovered thread
> and run queue run_delay accounting data was inconsistent with the other
> accounting data that was collected. Further investigation found under
> certain circumstances execution time was leaking into the task and
> run queue accounting of run_delay.
>
> Consider the following sequence:
>
> a. thread is running.
> b. thread moves beween cgroups, changes scheduling class or priority.
> c. thread sleeps OR
> d. thread involuntarily gives up cpu.
>
> a. implies:
>
> thread->sched_info.last_queued = 0
>
> a. and b. results in the following:
>
> 1. dequeue_task(rq, thread)
>
> sched_info_dequeued(rq, thread)
> delta = 0
>
> sched_info_reset_dequeued(thread)
> thread->sched_info.last_queued = 0
>
> thread->sched_info.run_delay += delta
>
> 2. enqueue_task(rq, thread)
>
> sched_info_queued(rq, thread)
>
> /* thread is still on cpu at this point. */
> thread->sched_info.last_queued = task_rq(thread)->clock;
>
> c. results in:
>
> dequeue_task(rq, thread)
>
> sched_info_dequeued(rq, thread)
>
> /* delta is execution time not run_delay. */
> delta = task_rq(thread)->clock - thread->sched_info.last_queued
>
> sched_info_reset_dequeued(thread)
> thread->sched_info.last_queued = 0
>
> thread->sched_info.run_delay += delta
>
> Since thread was running between enqueue_task(rq, thread) and
> dequeue_task(rq, thread), the delta above is really execution
> time and not run_delay.
>
> d. results in:
>
> __sched_info_switch(thread, next_thread)
>
> sched_info_depart(rq, thread)
>
> sched_info_queued(rq, thread)
>
> /* last_queued not updated due to being non-zero */
> return
>
> Since thread was running between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread), the execution time
> between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread) now will become
> associated with run_delay due to when last_queued was last updated.
>
This alternative patch solves the problem by not calling
sched_info_{de,}queued() in {de,en}queue_task(). Therefore the
sched_info state is preserved and things work as expected.
By inlining the {de,en}queue_task() functions the new condition
becomes (mostly) a compile-time constant and we'll not emit any new
branch instructions.
It even shrinks the code (due to inlining {en,de}queue_task()):
$ size defconfig-build/kernel/sched/core.o defconfig-build/kernel/sched/core.o.orig
text data bss dec hex filename
64019 23378 2344 89741 15e8d defconfig-build/kernel/sched/core.o
64149 23378 2344 89871 15f0f defconfig-build/kernel/sched/core.o.orig
Reported-by: Mike Meyer <Mike.Meyer@Teradata.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20150930154413.GO3604@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-09-30 17:44:13 +02:00
|
|
|
#define ENQUEUE_WAKEUP 0x01
|
2016-01-18 15:27:07 +01:00
|
|
|
#define ENQUEUE_RESTORE 0x02
|
|
|
|
#define ENQUEUE_MOVE 0x04
|
|
|
|
|
|
|
|
#define ENQUEUE_HEAD 0x08
|
|
|
|
#define ENQUEUE_REPLENISH 0x10
|
2013-03-05 16:06:55 +08:00
|
|
|
#ifdef CONFIG_SMP
|
2016-01-18 15:27:07 +01:00
|
|
|
#define ENQUEUE_WAKING 0x20
|
2013-03-05 16:06:55 +08:00
|
|
|
#else
|
2015-09-30 17:44:13 +02:00
|
|
|
#define ENQUEUE_WAKING 0x00
|
2013-03-05 16:06:55 +08:00
|
|
|
#endif
|
2016-01-18 15:27:07 +01:00
|
|
|
#define ENQUEUE_WAKEUP_NEW 0x40
|
2013-03-05 16:06:55 +08:00
|
|
|
|
2014-02-14 12:25:08 +01:00
|
|
|
#define RETRY_TASK ((void *)-1UL)
|
|
|
|
|
2013-03-05 16:06:55 +08:00
|
|
|
struct sched_class {
|
|
|
|
const struct sched_class *next;
|
|
|
|
|
|
|
|
void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
void (*yield_task) (struct rq *rq);
|
|
|
|
bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
|
|
|
|
|
|
|
|
void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
|
2012-02-11 06:05:00 +01:00
|
|
|
/*
|
|
|
|
* It is the responsibility of the pick_next_task() method that will
|
|
|
|
* return the next task to call put_prev_task() on the @prev task or
|
|
|
|
* something equivalent.
|
2014-02-14 12:25:08 +01:00
|
|
|
*
|
|
|
|
* May return RETRY_TASK when it finds a higher prio class has runnable
|
|
|
|
* tasks.
|
2012-02-11 06:05:00 +01:00
|
|
|
*/
|
|
|
|
struct task_struct * (*pick_next_task) (struct rq *rq,
|
|
|
|
struct task_struct *prev);
|
2013-03-05 16:06:55 +08:00
|
|
|
void (*put_prev_task) (struct rq *rq, struct task_struct *p);
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
FROMLIST: sched/fair: Use wake_q length as a hint for wake_wide
(from https://patchwork.kernel.org/patch/9895261/)
This patch adds a parameter to select_task_rq, sibling_count_hint
allowing the caller, where it has this information, to inform the
sched_class of the number of tasks that are being woken up as part of
the same event.
The wake_q mechanism is one case where this information is available.
select_task_rq_fair can then use the information to detect that it
needs to widen the search space for task placement in order to avoid
overloading the last-level cache domain's CPUs.
* * *
The reason I am investigating this change is the following use case
on ARM big.LITTLE (asymmetrical CPU capacity): one task per CPU, where
all tasks repeatedly do X amount of work and then
pthread_barrier_wait (i.e. sleep until the last task finishes its X
and hits the barrier). On big.LITTLE, the tasks which get a "big" CPU
finish faster, and then those CPUs pull over the tasks that are still
running:
v CPU v ->time->
-------------
0 (big) 11111 /333
-------------
1 (big) 22222 /444|
-------------
2 (LITTLE) 333333/
-------------
3 (LITTLE) 444444/
-------------
Now when task 4 hits the barrier (at |) and wakes the others up,
there are 4 tasks with prev_cpu=<big> and 0 tasks with
prev_cpu=<little>. want_affine therefore means that we'll only look
in CPUs 0 and 1 (sd_llc), so tasks will be unnecessarily coscheduled
on the bigs until the next load balance, something like this:
v CPU v ->time->
------------------------
0 (big) 11111 /333 31313\33333
------------------------
1 (big) 22222 /444|424\4444444
------------------------
2 (LITTLE) 333333/ \222222
------------------------
3 (LITTLE) 444444/ \1111
------------------------
^^^
underutilization
So, I'm trying to get want_affine = 0 for these tasks.
I don't _think_ any incarnation of the wakee_flips mechanism can help
us here because which task is waker and which tasks are wakees
generally changes with each iteration.
However pthread_barrier_wait (or more accurately FUTEX_WAKE) has the
nice property that we know exactly how many tasks are being woken, so
we can cheat.
It might be a disadvantage that we "widen" _every_ task that's woken in
an event, while select_idle_sibling would work fine for the first
sd_llc_size - 1 tasks.
IIUC, if wake_affine() behaves correctly this trick wouldn't be
necessary on SMP systems, so it might be best guarded by the presence
of SD_ASYM_CPUCAPACITY?
* * *
Final note..
In order to observe "perfect" behaviour for this use case, I also had
to disable the TTWU_QUEUE sched feature. Suppose during the wakeup
above we are working through the work queue and have placed tasks 3
and 2, and are about to place task 1:
v CPU v ->time->
--------------
0 (big) 11111 /333 3
--------------
1 (big) 22222 /444|4
--------------
2 (LITTLE) 333333/ 2
--------------
3 (LITTLE) 444444/ <- Task 1 should go here
--------------
If TTWU_QUEUE is enabled, we will not yet have enqueued task
2 (having instead sent a reschedule IPI) or attached its load to CPU
2. So we are likely to also place task 1 on cpu 2. Disabling
TTWU_QUEUE means that we enqueue task 2 before placing task 1,
solving this issue. TTWU_QUEUE is there to minimise rq lock
contention, and I guess that this contention is less of an issue on
big.LITTLE systems since they have relatively few CPUs, which
suggests the trade-off makes sense here.
Change-Id: I2080302839a263e0841a89efea8589ea53bbda9c
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
2017-08-07 15:46:13 +01:00
|
|
|
int (*select_task_rq)(struct task_struct *p, int task_cpu, int sd_flag, int flags,
|
|
|
|
int sibling_count_hint);
|
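A minimal userspace sketch of the heuristic the hint enables, assuming a hypothetical want_affine() helper and an LLC-domain size of 2; this illustrates the idea described in the changelog above, not the kernel implementation.

#include <stdio.h>
#include <stdbool.h>

#define SD_LLC_SIZE 2	/* e.g. two big CPUs sharing a last-level cache */

/* Keep the wakeup affine only if the woken group fits in the LLC domain. */
static bool want_affine(int sibling_count_hint)
{
	return sibling_count_hint <= SD_LLC_SIZE;
}

int main(void)
{
	printf("hint=1 -> affine=%d\n", want_affine(1));	/* 1: search stays narrow */
	printf("hint=4 -> affine=%d\n", want_affine(4));	/* 0: widen the search */
	return 0;
}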
2015-09-23 14:55:59 +08:00
|
|
|
void (*migrate_task_rq)(struct task_struct *p);
|
2013-03-05 16:06:55 +08:00
|
|
|
|
|
|
|
void (*task_waking) (struct task_struct *task);
|
|
|
|
void (*task_woken) (struct rq *this_rq, struct task_struct *task);
|
|
|
|
|
|
|
|
void (*set_cpus_allowed)(struct task_struct *p,
|
|
|
|
const struct cpumask *newmask);
|
|
|
|
|
|
|
|
void (*rq_online)(struct rq *rq);
|
|
|
|
void (*rq_offline)(struct rq *rq);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
void (*set_curr_task) (struct rq *rq);
|
|
|
|
void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
|
|
|
|
void (*task_fork) (struct task_struct *p);
|
2013-11-07 14:43:35 +01:00
|
|
|
void (*task_dead) (struct task_struct *p);
|
2013-03-05 16:06:55 +08:00
|
|
|
|
2014-10-27 17:40:52 +03:00
|
|
|
/*
|
|
|
|
* The switched_from() call is allowed to drop rq->lock, therefore we
|
|
|
|
* cannot assume the switched_from/switched_to pair is serialized by
|
|
|
|
* rq->lock. They are however serialized by p->pi_lock.
|
|
|
|
*/
|
2013-03-05 16:06:55 +08:00
|
|
|
void (*switched_from) (struct rq *this_rq, struct task_struct *task);
|
|
|
|
void (*switched_to) (struct rq *this_rq, struct task_struct *task);
|
|
|
|
void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
|
|
|
|
int oldprio);
|
|
|
|
|
|
|
|
unsigned int (*get_rr_interval) (struct rq *rq,
|
|
|
|
struct task_struct *task);
|
|
|
|
|
sched/cputime: Fix clock_nanosleep()/clock_gettime() inconsistency
Commit d670ec13178d0 "posix-cpu-timers: Cure SMP wobbles" fixes one glibc
test case at the cost of breaking another one. After that commit, calling
clock_nanosleep(TIMER_ABSTIME, X) and then clock_gettime(&Y) can result
in Y time being smaller than X time.
A reproducer/tester can be found further below; it can be compiled and run by:
gcc -o tst-cpuclock2 tst-cpuclock2.c -pthread
while ./tst-cpuclock2 ; do : ; done
This reproducer, when running on a buggy kernel, will complain
about "clock_gettime difference too small".
The issue happens because on start, in thread_group_cputimer(), we initialize
the cputimer's sum_exec_runtime with threads' runtime that is not yet accounted,
and then add the threads' runtime to the running cputimer again on the scheduler
tick, making its sum_exec_runtime bigger than the actual threads' runtime.
KOSAKI Motohiro posted a fix for this problem, but that patch was never
applied: https://lkml.org/lkml/2013/5/26/191 .
This patch takes a different approach to cure the problem. It calls
update_curr() when the cputimer starts, which ensures we will have updated
stats for the running threads, and on the next scheduler tick we will account
only the runtime that elapsed since cputimer start. That also ensures we
have a consistent state between the cpu times of individual threads and the cpu
time of the process consisting of those threads.
Full reproducer (tst-cpuclock2.c):
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <time.h>
#include <pthread.h>
#include <stdint.h>
#include <inttypes.h>
/* Parameters for the Linux kernel ABI for CPU clocks. */
#define CPUCLOCK_SCHED 2
#define MAKE_PROCESS_CPUCLOCK(pid, clock) \
((~(clockid_t) (pid) << 3) | (clockid_t) (clock))
static pthread_barrier_t barrier;
/* Help advance the clock. */
static void *chew_cpu(void *arg)
{
pthread_barrier_wait(&barrier);
while (1) ;
return NULL;
}
/* Don't use the glibc wrapper. */
static int do_nanosleep(int flags, const struct timespec *req)
{
clockid_t clock_id = MAKE_PROCESS_CPUCLOCK(0, CPUCLOCK_SCHED);
return syscall(SYS_clock_nanosleep, clock_id, flags, req, NULL);
}
static int64_t tsdiff(const struct timespec *before, const struct timespec *after)
{
int64_t before_i = before->tv_sec * 1000000000ULL + before->tv_nsec;
int64_t after_i = after->tv_sec * 1000000000ULL + after->tv_nsec;
return after_i - before_i;
}
int main(void)
{
int result = 0;
pthread_t th;
pthread_barrier_init(&barrier, NULL, 2);
if (pthread_create(&th, NULL, chew_cpu, NULL) != 0) {
perror("pthread_create");
return 1;
}
pthread_barrier_wait(&barrier);
/* The test. */
struct timespec before, after, sleeptimeabs;
int64_t sleepdiff, diffabs;
const struct timespec sleeptime = {.tv_sec = 0,.tv_nsec = 100000000 };
/* The relative nanosleep. Not sure why this is needed, but its presence
seems to make it easier to reproduce the problem. */
if (do_nanosleep(0, &sleeptime) != 0) {
perror("clock_nanosleep");
return 1;
}
/* Get the current time. */
if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &before) < 0) {
perror("clock_gettime[2]");
return 1;
}
/* Compute the absolute sleep time based on the current time. */
uint64_t nsec = before.tv_nsec + sleeptime.tv_nsec;
sleeptimeabs.tv_sec = before.tv_sec + nsec / 1000000000;
sleeptimeabs.tv_nsec = nsec % 1000000000;
/* Sleep for the computed time. */
if (do_nanosleep(TIMER_ABSTIME, &sleeptimeabs) != 0) {
perror("absolute clock_nanosleep");
return 1;
}
/* Get the time after the sleep. */
if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &after) < 0) {
perror("clock_gettime[3]");
return 1;
}
/* The time after sleep should always be equal to or after the absolute sleep
time passed to clock_nanosleep. */
sleepdiff = tsdiff(&sleeptimeabs, &after);
if (sleepdiff < 0) {
printf("absolute clock_nanosleep woke too early: %" PRId64 "\n", sleepdiff);
result = 1;
printf("Before %llu.%09llu\n", before.tv_sec, before.tv_nsec);
printf("After %llu.%09llu\n", after.tv_sec, after.tv_nsec);
printf("Sleep %llu.%09llu\n", sleeptimeabs.tv_sec, sleeptimeabs.tv_nsec);
}
/* The difference between the timestamps taken before and after the
clock_nanosleep call should be equal to or more than the duration of the
sleep. */
diffabs = tsdiff(&before, &after);
if (diffabs < sleeptime.tv_nsec) {
printf("clock_gettime difference too small: %" PRId64 "\n", diffabs);
result = 1;
}
pthread_cancel(th);
return result;
}
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20141112155843.GA24803@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-11-12 16:58:44 +01:00
|
|
|
void (*update_curr) (struct rq *rq);
|
|
|
|
|
2016-06-17 13:38:55 +02:00
|
|
|
#define TASK_SET_GROUP 0
|
|
|
|
#define TASK_MOVE_GROUP 1
|
|
|
|
|
2013-03-05 16:06:55 +08:00
|
|
|
#ifdef CONFIG_FAIR_GROUP_SCHED
|
2016-06-17 13:38:55 +02:00
|
|
|
void (*task_change_group)(struct task_struct *p, int type);
|
2013-03-05 16:06:55 +08:00
|
|
|
#endif
|
2015-01-16 11:27:31 +05:30
|
|
|
#ifdef CONFIG_SCHED_HMP
|
|
|
|
void (*inc_hmp_sched_stats)(struct rq *rq, struct task_struct *p);
|
|
|
|
void (*dec_hmp_sched_stats)(struct rq *rq, struct task_struct *p);
|
2015-07-13 21:04:18 -07:00
|
|
|
void (*fixup_hmp_sched_stats)(struct rq *rq, struct task_struct *p,
|
2015-06-08 09:08:47 +05:30
|
|
|
u32 new_task_load, u32 new_pred_demand);
|
2015-01-16 11:27:31 +05:30
|
|
|
#endif
|
2013-03-05 16:06:55 +08:00
|
|
|
};
|
2011-10-25 10:00:11 +02:00
|
|
|
|
2014-02-12 10:49:30 +01:00
|
|
|
static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
|
|
|
|
{
|
|
|
|
prev->sched_class->put_prev_task(rq, prev);
|
|
|
|
}
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#define sched_class_highest (&stop_sched_class)
|
|
|
|
#define for_each_class(class) \
|
|
|
|
for (class = sched_class_highest; class; class = class->next)
|
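The classes form a singly linked priority list via the ->next pointer; below is a self-contained userspace sketch of the walk, with stand-in types and a hypothetical has_runnable() hook in place of pick_next_task(). It illustrates the traversal only, not the kernel code.

#include <stdio.h>
#include <stddef.h>

struct rq;	/* opaque in this sketch */

struct sched_class {
	const struct sched_class *next;
	const char *name;			/* illustration only */
	int (*has_runnable)(struct rq *rq);	/* stand-in for pick_next_task() */
};

static int stop_has(struct rq *rq) { return 0; }
static int fair_has(struct rq *rq) { return 1; }

static const struct sched_class fair_class = { NULL, "fair", fair_has };
static const struct sched_class stop_class = { &fair_class, "stop", stop_has };

#define sched_class_highest (&stop_class)
#define for_each_class(class) \
	for (class = sched_class_highest; class; class = class->next)

int main(void)
{
	const struct sched_class *class;

	/* Walk from the highest-priority class until one has work. */
	for_each_class(class) {
		if (class->has_runnable(NULL)) {
			printf("picked from %s class\n", class->name);
			break;
		}
	}
	return 0;
}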
|
|
|
|
|
|
|
extern const struct sched_class stop_sched_class;
|
2013-11-28 11:14:43 +01:00
|
|
|
extern const struct sched_class dl_sched_class;
|
2011-10-25 10:00:11 +02:00
|
|
|
extern const struct sched_class rt_sched_class;
|
|
|
|
extern const struct sched_class fair_sched_class;
|
|
|
|
extern const struct sched_class idle_sched_class;
|
|
|
|
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
|
2016-07-22 11:35:59 +01:00
|
|
|
extern void init_max_cpu_capacity(struct max_cpu_capacity *mcc);
|
2014-05-26 18:19:37 -04:00
|
|
|
extern void update_group_capacity(struct sched_domain *sd, int cpu);
|
2013-03-07 10:00:26 +08:00
|
|
|
|
2014-01-06 12:34:38 +01:00
|
|
|
extern void trigger_load_balance(struct rq *rq);
|
2016-05-31 14:34:46 -07:00
|
|
|
extern void nohz_balance_clear_nohz_mask(int cpu);
|
2011-10-25 10:00:11 +02:00
|
|
|
|
sched: Fix wrong rq's runnable_avg update with rt tasks
The current update of the rq's load can be erroneous when RT
tasks are involved.
The update of the load of a rq that becomes idle is done only
if the avg_idle is less than sysctl_sched_migration_cost. If RT
tasks and short idle duration alternate, the runnable_avg will
not be updated correctly and the time will be accounted as idle
time when a CFS task wakes up.
A new idle_enter function is called when the next task is the
idle function so the elapsed time will be accounted as run time
in the load of the rq, whatever the average idle time is. The
function update_rq_runnable_avg is removed from idle_balance.
When an RT task is scheduled on an idle CPU, the update of the
rq's load is not done when the rq exits idle state because CFS's
functions are not called. Then idle_balance, which is
called just before entering the idle function, updates the rq's
load and assumes that the elapsed time since the
last update was only running time.
As a consequence, the rq's load of a CPU that only runs a
periodic RT task is close to LOAD_AVG_MAX whatever the running
duration of the RT task is.
A new idle_exit function is called when the prev task is the
idle function so the elapsed time will be accounted as idle time
in the rq's load.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: linaro-kernel@lists.linaro.org
Cc: peterz@infradead.org
Cc: pjt@google.com
Cc: fweisbec@gmail.com
Cc: efault@gmx.de
Link: http://lkml.kernel.org/r/1366302867-5055-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-04-18 18:34:26 +02:00
|
|
|
extern void idle_enter_fair(struct rq *this_rq);
|
|
|
|
extern void idle_exit_fair(struct rq *this_rq);
|
|
|
|
|
2015-05-15 17:43:35 +02:00
|
|
|
extern void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask);
|
|
|
|
|
2014-02-12 15:47:29 +01:00
|
|
|
#else
|
|
|
|
|
|
|
|
static inline void idle_enter_fair(struct rq *rq) { }
|
|
|
|
static inline void idle_exit_fair(struct rq *rq) { }
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif
|
|
|
|
|
2014-09-04 11:32:09 -04:00
|
|
|
#ifdef CONFIG_CPU_IDLE
|
|
|
|
static inline void idle_set_state(struct rq *rq,
|
|
|
|
struct cpuidle_state *idle_state)
|
|
|
|
{
|
|
|
|
rq->idle_state = idle_state;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct cpuidle_state *idle_get_state(struct rq *rq)
|
|
|
|
{
|
|
|
|
WARN_ON(!rcu_read_lock_held());
|
|
|
|
return rq->idle_state;
|
|
|
|
}
|
2015-01-27 13:48:07 +00:00
|
|
|
|
|
|
|
static inline void idle_set_state_idx(struct rq *rq, int idle_state_idx)
|
|
|
|
{
|
|
|
|
rq->idle_state_idx = idle_state_idx;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int idle_get_state_idx(struct rq *rq)
|
|
|
|
{
|
|
|
|
WARN_ON(!rcu_read_lock_held());
|
|
|
|
return rq->idle_state_idx;
|
|
|
|
}
|
2014-09-04 11:32:09 -04:00
|
|
|
#else
|
|
|
|
static inline void idle_set_state(struct rq *rq,
|
|
|
|
struct cpuidle_state *idle_state)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct cpuidle_state *idle_get_state(struct rq *rq)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
2015-01-27 13:48:07 +00:00
|
|
|
|
|
|
|
static inline void idle_set_state_idx(struct rq *rq, int idle_state_idx)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int idle_get_state_idx(struct rq *rq)
|
|
|
|
{
|
|
|
|
return -1;
|
|
|
|
}
|
2014-09-04 11:32:09 -04:00
|
|
|
#endif
|
|
|
|
|
2013-12-05 20:01:32 -08:00
|
|
|
#ifdef CONFIG_SYSRQ_SCHED_DEBUG
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void sysrq_sched_debug_show(void);
|
2013-12-05 20:01:32 -08:00
|
|
|
#endif
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void sched_init_granularity(void);
|
|
|
|
extern void update_max_interval(void);
|
sched/deadline: Add SCHED_DEADLINE SMP-related data structures & logic
Introduces data structures relevant for implementing dynamic
migration of -deadline tasks, plus the logic for checking whether
runqueues are overloaded with -deadline tasks and for choosing
where a task should migrate when necessary.
Also adds dynamic migrations to SCHED_DEADLINE, so that tasks can
be moved among CPUs when necessary. It is also possible to bind a
task to a (set of) CPU(s), thus restricting its ability to
migrate, or forbidding migrations entirely.
The very same approach used in sched_rt is utilised:
- -deadline tasks are kept in CPU-specific runqueues,
- -deadline tasks are migrated among runqueues to achieve the
following:
* on an M-CPU system the M earliest deadline ready tasks
are always running;
* affinity/cpusets settings of all the -deadline tasks are
always respected.
Therefore, this very special form of "load balancing" is done with
an active method, i.e., the scheduler pushes or pulls tasks between
runqueues when they are woken up and/or (de)scheduled.
IOW, every time a preemption occurs, the descheduled task might be sent
to some other CPU (depending on its deadline) to continue executing
(push). On the other hand, every time a CPU becomes idle, it might pull
the second earliest deadline ready task from some other CPU.
To enforce this, a pull operation is always attempted before taking any
scheduling decision (pre_schedule()), as well as a push one after each
scheduling decision (post_schedule()). In addition, when a task arrives
or wakes up, the best CPU on which to resume it is selected, taking into
account its affinity mask, the system topology, and also its deadline.
E.g., from the scheduling point of view, the best CPU on which to wake
up (and to which to push) a task is the one running the task
with the latest deadline among the M executing ones.
In order to facilitate these decisions, per-runqueue "caching" of the
deadlines of the currently running and of the first ready task is used.
Queued but not running tasks are also parked in another rb-tree to
speed-up pushes.
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-5-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:38 +01:00
|
|
|
|
|
|
|
extern void init_sched_dl_class(void);
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void init_sched_rt_class(void);
|
|
|
|
extern void init_sched_fair_class(void);
|
|
|
|
|
2014-06-29 00:03:57 +04:00
|
|
|
extern void resched_curr(struct rq *rq);
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void resched_cpu(int cpu);
|
|
|
|
|
|
|
|
extern struct rt_bandwidth def_rt_bandwidth;
|
|
|
|
extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
|
2017-09-11 17:10:37 -07:00
|
|
|
extern void init_rt_schedtune_timer(struct sched_rt_entity *rt_se);
|
2011-10-25 10:00:11 +02:00
|
|
|
|
sched/deadline: Add bandwidth management for SCHED_DEADLINE tasks
In order for deadline scheduling to be effective and useful, it is
important to have some method of keeping the allocation of the available
CPU bandwidth to tasks and task groups under control.
This is usually called "admission control" and if it is not performed
at all, no guarantee can be given on the actual scheduling of the
-deadline tasks.
Since RT-throttling was introduced, each task group has had a
bandwidth associated with it, calculated as a certain amount of
runtime over a period. Moreover, to make it possible to manipulate
such bandwidth, readable/writable controls have been added to both
procfs (for system wide settings) and cgroupfs (for per-group
settings).
Therefore, the same interface is being used for controlling the
bandwidth distribution to -deadline tasks and task groups, i.e.,
new controls with similar names, equivalent meaning and
the same usage paradigm are added.
However, more discussion is needed in order to figure out how
we want to manage SCHED_DEADLINE bandwidth at the task group level.
Therefore, this patch adds a less sophisticated, but actually
very sensible, mechanism to ensure that a certain utilization
cap is not overcome per each root_domain (the single rq for !SMP
configurations).
Another main difference between deadline bandwidth management and
RT-throttling is that -deadline tasks have bandwidth on their own
(while -rt ones don't!), and thus we don't need a higher-level
throttling mechanism to enforce the desired bandwidth.
This patch, therefore:
- adds system wide deadline bandwidth management by means of:
* /proc/sys/kernel/sched_dl_runtime_us,
* /proc/sys/kernel/sched_dl_period_us,
that determine (i.e., runtime / period) the total bandwidth
available on each CPU of each root_domain for -deadline tasks;
- couples the RT and deadline bandwidth management, i.e., enforces
that the sum of the bandwidth devoted to -rt and
-deadline tasks stays below 100%.
This means that, for a root_domain comprising M CPUs, -deadline tasks
can be created until the sum of their bandwidths stays below:
M * (sched_dl_runtime_us / sched_dl_period_us)
It is also possible to disable this bandwidth management logic, and
thus be free to oversubscribe the system up to any arbitrary level.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-12-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-11-07 14:43:45 +01:00
|
|
|
extern struct dl_bandwidth def_dl_bandwidth;
|
|
|
|
extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
|
2013-11-28 11:14:43 +01:00
|
|
|
extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
|
|
|
|
|
2013-11-07 14:43:45 +01:00
|
|
|
unsigned long to_ratio(u64 period, u64 runtime);
|
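A standalone sketch of the admission test described in the bandwidth-management changelog above, assuming to_ratio() returns runtime/period in fixed point; the shift of 20 and the task parameters here are assumptions for illustration, not the kernel's exact values.

#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT 20	/* assumed fixed-point shift */

static unsigned long to_ratio(uint64_t period, uint64_t runtime)
{
	if (!period)
		return 0;
	/* runtime/period scaled by 2^BW_SHIFT; integer division rounds down */
	return (unsigned long)((runtime << BW_SHIFT) / period);
}

int main(void)
{
	unsigned int cpus = 4;	/* M CPUs in the root_domain */
	/* Global cap: runtime/period from the sched_dl_*_us knobs. */
	unsigned long cap = cpus * to_ratio(1000000, 950000);
	/* Three hypothetical tasks, each needing 30ms every 100ms. */
	unsigned long used = 3 * to_ratio(100000, 30000);

	printf("admit: %s\n", used <= cap ? "yes" : "no");
	return 0;
}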
|
|
|
|
2015-07-15 08:04:39 +08:00
|
|
|
extern void init_entity_runnable_average(struct sched_entity *se);
|
2016-03-30 04:30:56 +08:00
|
|
|
extern void post_init_entity_util_avg(struct sched_entity *se);
|
2013-06-20 10:18:47 +08:00
|
|
|
|
2013-04-22 14:39:18 +08:00
|
|
|
static inline void __add_nr_running(struct rq *rq, unsigned count)
|
2011-10-25 10:00:11 +02:00
|
|
|
{
|
2014-05-09 03:00:14 +04:00
|
|
|
unsigned prev_nr = rq->nr_running;
|
|
|
|
|
2015-02-16 16:42:59 +05:30
|
|
|
sched_update_nr_prod(cpu_of(rq), count, true);
|
2014-05-09 03:00:14 +04:00
|
|
|
rq->nr_running = prev_nr + count;
|
2013-04-20 14:35:09 +02:00
|
|
|
|
2014-05-09 03:00:14 +04:00
|
|
|
if (prev_nr < 2 && rq->nr_running >= 2) {
|
2014-06-23 12:16:49 -07:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
if (!rq->rd->overload)
|
|
|
|
rq->rd->overload = true;
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifdef CONFIG_NO_HZ_FULL
|
2013-04-20 14:35:09 +02:00
|
|
|
if (tick_nohz_full_cpu(rq->cpu)) {
|
2014-03-18 22:54:04 +01:00
|
|
|
/*
|
|
|
|
* Tick is needed if more than one task runs on a CPU.
|
|
|
|
* Send the target an IPI to kick it out of nohz mode.
|
|
|
|
*
|
|
|
|
* We assume that the IPI implies a full memory barrier and that the
|
|
|
|
* new value of rq->nr_running is visible upon reception
|
|
|
|
* by the target.
|
|
|
|
*/
|
2014-03-18 21:12:53 +01:00
|
|
|
tick_nohz_full_kick_cpu(rq->cpu);
|
2013-04-20 14:35:09 +02:00
|
|
|
}
|
|
|
|
#endif
|
2014-06-23 12:16:49 -07:00
|
|
|
}
|
2011-10-25 10:00:11 +02:00
|
|
|
}
|
|
|
|
|
2013-04-22 14:39:18 +08:00
|
|
|
static inline void __sub_nr_running(struct rq *rq, unsigned count)
|
2011-10-25 10:00:11 +02:00
|
|
|
{
|
2015-02-16 16:42:59 +05:30
|
|
|
sched_update_nr_prod(cpu_of(rq), count, false);
|
2014-05-09 03:00:14 +04:00
|
|
|
rq->nr_running -= count;
|
2011-10-25 10:00:11 +02:00
|
|
|
}
|
|
|
|
|
2013-04-22 14:39:18 +08:00
|
|
|
#ifdef CONFIG_CPU_QUIET
|
|
|
|
#define NR_AVE_SCALE(x) ((x) << FSHIFT)
|
|
|
|
static inline u64 do_nr_running_integral(struct rq *rq)
|
|
|
|
{
|
|
|
|
s64 nr, deltax;
|
|
|
|
u64 nr_running_integral = rq->nr_running_integral;
|
|
|
|
|
|
|
|
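/*
 * Fixed-point time integral of nr_running: accumulate
 * (nr_running << FSHIFT) * elapsed task-clock ns, so a reader can
 * recover the average nr_running as delta(integral) / delta(time).
 */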
deltax = rq->clock_task - rq->nr_last_stamp;
|
|
|
|
nr = NR_AVE_SCALE(rq->nr_running);
|
|
|
|
|
|
|
|
nr_running_integral += nr * deltax;
|
|
|
|
|
|
|
|
return nr_running_integral;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void add_nr_running(struct rq *rq, unsigned count)
|
|
|
|
{
|
|
|
|
write_seqcount_begin(&rq->ave_seqcnt);
|
|
|
|
rq->nr_running_integral = do_nr_running_integral(rq);
|
|
|
|
rq->nr_last_stamp = rq->clock_task;
|
|
|
|
__add_nr_running(rq, count);
|
|
|
|
write_seqcount_end(&rq->ave_seqcnt);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void sub_nr_running(struct rq *rq, unsigned count)
|
|
|
|
{
|
|
|
|
write_seqcount_begin(&rq->ave_seqcnt);
|
|
|
|
rq->nr_running_integral = do_nr_running_integral(rq);
|
|
|
|
rq->nr_last_stamp = rq->clock_task;
|
|
|
|
__sub_nr_running(rq, count);
|
|
|
|
write_seqcount_end(&rq->ave_seqcnt);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
#define add_nr_running __add_nr_running
|
|
|
|
#define sub_nr_running __sub_nr_running
|
|
|
|
#endif
|
|
|
|
|
2013-05-03 03:39:05 +02:00
|
|
|
static inline void rq_last_tick_reset(struct rq *rq)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_NO_HZ_FULL
|
|
|
|
rq->last_sched_tick = jiffies;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
extern void update_rq_clock(struct rq *rq);
|
|
|
|
|
|
|
|
extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
|
|
|
|
extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
|
|
|
|
|
|
|
|
extern const_debug unsigned int sysctl_sched_time_avg;
|
|
|
|
extern const_debug unsigned int sysctl_sched_nr_migrate;
|
|
|
|
extern const_debug unsigned int sysctl_sched_migration_cost;
|
|
|
|
|
|
|
|
static inline u64 sched_avg_period(void)
|
|
|
|
{
|
|
|
|
return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2;
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_HRTICK
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Use hrtick when:
|
|
|
|
* - enabled by features
|
|
|
|
* - hrtimer is actually high res
|
|
|
|
*/
|
|
|
|
static inline int hrtick_enabled(struct rq *rq)
|
|
|
|
{
|
|
|
|
if (!sched_feat(HRTICK))
|
|
|
|
return 0;
|
|
|
|
if (!cpu_active(cpu_of(rq)))
|
|
|
|
return 0;
|
|
|
|
return hrtimer_is_hres_active(&rq->hrtick_timer);
|
|
|
|
}
|
|
|
|
|
|
|
|
void hrtick_start(struct rq *rq, u64 delay);
|
|
|
|
|
2011-11-22 15:20:07 +01:00
|
|
|
#else
|
|
|
|
|
|
|
|
static inline int hrtick_enabled(struct rq *rq)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-10-25 10:00:11 +02:00
|
|
|
#endif /* CONFIG_SCHED_HRTICK */
|
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
extern void sched_avg_update(struct rq *rq);
|
2015-03-23 14:19:05 +01:00
|
|
|
|
|
|
|
#ifndef arch_scale_freq_capacity
|
|
|
|
static __always_inline
|
|
|
|
unsigned long arch_scale_freq_capacity(struct sched_domain *sd, int cpu)
|
|
|
|
{
|
|
|
|
return SCHED_CAPACITY_SCALE;
|
|
|
|
}
|
|
|
|
#endif
|
2015-02-27 16:54:08 +01:00
|
|
|
|
2015-08-14 17:23:10 +01:00
|
|
|
#ifndef arch_scale_cpu_capacity
|
|
|
|
static __always_inline
|
|
|
|
unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
|
|
|
|
{
|
2015-08-15 00:04:41 +01:00
|
|
|
if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
|
2015-08-14 17:23:10 +01:00
|
|
|
return sd->smt_gain / sd->span_weight;
|
|
|
|
|
|
|
|
return SCHED_CAPACITY_SCALE;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2015-06-25 14:12:33 +01:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
static inline unsigned long capacity_of(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cpu_capacity;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned long capacity_orig_of(int cpu)
|
|
|
|
{
|
|
|
|
return cpu_rq(cpu)->cpu_capacity_orig;
|
|
|
|
}
|
|
|
|
|
2016-05-31 09:08:38 -07:00
|
|
|
extern unsigned int sysctl_sched_use_walt_cpu_util;
|
|
|
|
extern unsigned int walt_ravg_window;
|
2017-08-10 17:26:20 -07:00
|
|
|
extern bool walt_disabled;
|
2016-05-31 09:08:38 -07:00
|
|
|
|
2015-06-25 14:12:33 +01:00
|
|
|
/*
|
|
|
|
* cpu_util returns the amount of capacity of a CPU that is used by CFS
|
|
|
|
* tasks. The unit of the return value must be the one of capacity so we can
|
|
|
|
* compare the utilization with the capacity of the CPU that is available for
|
|
|
|
* CFS task (ie cpu_capacity).
|
|
|
|
*
|
|
|
|
* cfs_rq.avg.util_avg is the sum of running time of runnable tasks plus the
|
|
|
|
* recent utilization of currently non-runnable tasks on a CPU. It represents
|
|
|
|
* the amount of utilization of a CPU in the range [0..capacity_orig] where
|
|
|
|
* capacity_orig is the cpu_capacity available at the highest frequency
|
|
|
|
* (arch_scale_freq_capacity()).
|
|
|
|
* The utilization of a CPU converges towards a sum equal to or less than the
|
|
|
|
* current capacity (capacity_curr <= capacity_orig) of the CPU because it is
|
|
|
|
* the running time on this CPU scaled by capacity_curr.
|
|
|
|
*
|
|
|
|
* Nevertheless, cfs_rq.avg.util_avg can be higher than capacity_curr or even
|
|
|
|
* higher than capacity_orig because of unfortunate rounding in
|
|
|
|
* cfs.avg.util_avg or just after migrating tasks and new task wakeups until
|
|
|
|
* the average stabilizes with the new running time. We need to check that the
|
|
|
|
* utilization stays within the range of [0..capacity_orig] and cap it if
|
|
|
|
* necessary. Without utilization capping, a group could be seen as overloaded
|
|
|
|
* (CPU0 utilization at 121% + CPU1 utilization at 80%) whereas CPU1 has 20% of
|
|
|
|
* available capacity. We allow utilization to overshoot capacity_curr (but not
|
|
|
|
* capacity_orig) as it is useful for predicting the capacity required after task
|
|
|
|
* migrations (scheduler-driven DVFS).
|
|
|
|
*/
|
|
|
|
static inline unsigned long __cpu_util(int cpu, int delta)
|
|
|
|
{
|
|
|
|
unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
|
|
|
|
unsigned long capacity = capacity_orig_of(cpu);
|
|
|
|
|
2016-05-31 09:08:38 -07:00
|
|
|
#ifdef CONFIG_SCHED_WALT
|
2017-01-20 11:10:15 -08:00
|
|
|
if (!walt_disabled && sysctl_sched_use_walt_cpu_util)
|
|
|
|
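/*
 * WALT: scale the window-based runnable time sum into capacity
 * units, i.e. time * 2^SCHED_LOAD_SHIFT / walt_ravg_window.
 */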
util = div64_u64(cpu_rq(cpu)->cumulative_runnable_avg,
|
|
|
|
walt_ravg_window >> SCHED_LOAD_SHIFT);
|
2016-05-31 09:08:38 -07:00
|
|
|
#endif
|
2017-09-21 13:19:38 -07:00
|
|
|
|
2015-06-25 14:12:33 +01:00
|
|
|
delta += util;
|
|
|
|
if (delta < 0)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return (delta >= capacity) ? capacity : delta;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned long cpu_util(int cpu)
|
|
|
|
{
|
|
|
|
return __cpu_util(cpu, 0);
|
|
|
|
}
|
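A tiny standalone check of the clamp arithmetic in __cpu_util(), using illustrative numbers; capacity_orig is assumed to be 1024 (SCHED_CAPACITY_SCALE) and the helper name is hypothetical.

#include <stdio.h>

static unsigned long clamp_util(long delta_plus_util, unsigned long capacity)
{
	if (delta_plus_util < 0)
		return 0;
	return ((unsigned long)delta_plus_util >= capacity) ?
		capacity : (unsigned long)delta_plus_util;
}

int main(void)
{
	/* util 900 plus an incoming task's expected 300: capped at 1024 */
	printf("%lu\n", clamp_util(900 + 300, 1024));
	/* util 900 minus a departing task's 1000: floored at 0 */
	printf("%lu\n", clamp_util(900 - 1000, 1024));
	return 0;
}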
|
|
|
|
2016-12-08 16:12:12 -08:00
|
|
|
static inline unsigned long cpu_util_freq(int cpu)
|
|
|
|
{
|
|
|
|
unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
|
|
|
|
unsigned long capacity = capacity_orig_of(cpu);
|
|
|
|
|
|
|
|
#ifdef CONFIG_SCHED_WALT
|
2017-01-20 11:10:15 -08:00
|
|
|
if (!walt_disabled && sysctl_sched_use_walt_cpu_util)
|
|
|
|
util = div64_u64(cpu_rq(cpu)->prev_runnable_sum,
|
|
|
|
walt_ravg_window >> SCHED_LOAD_SHIFT);
|
2016-12-08 16:12:12 -08:00
|
|
|
#endif
|
|
|
|
return (util >= capacity) ? capacity : util;
|
|
|
|
}
|
|
|
|
|
2015-06-25 14:12:33 +01:00
|
|
|
#endif
|
|
|
|
|
Merge branch 'v4.4-16.09-android-tmp' into lsk-v4.4-16.09-android
* v4.4-16.09-android-tmp:
unsafe_[get|put]_user: change interface to use a error target label
usercopy: remove page-spanning test for now
usercopy: fix overlap check for kernel text
mm/slub: support left redzone
Linux 4.4.21
lib/mpi: mpi_write_sgl(): fix skipping of leading zero limbs
regulator: anatop: allow regulator to be in bypass mode
hwrng: exynos - Disable runtime PM on probe failure
cpufreq: Fix GOV_LIMITS handling for the userspace governor
metag: Fix atomic_*_return inline asm constraints
scsi: fix upper bounds check of sense key in scsi_sense_key_string()
ALSA: timer: fix NULL pointer dereference on memory allocation failure
ALSA: timer: fix division by zero after SNDRV_TIMER_IOCTL_CONTINUE
ALSA: timer: fix NULL pointer dereference in read()/ioctl() race
ALSA: hda - Enable subwoofer on Dell Inspiron 7559
ALSA: hda - Add headset mic quirk for Dell Inspiron 5468
ALSA: rawmidi: Fix possible deadlock with virmidi registration
ALSA: fireworks: accessing to user space outside spinlock
ALSA: firewire-tascam: accessing to user space outside spinlock
ALSA: usb-audio: Add sample rate inquiry quirk for B850V3 CP2114
crypto: caam - fix IV loading for authenc (giv)decryption
uprobes: Fix the memcg accounting
x86/apic: Do not init irq remapping if ioapic is disabled
vhost/scsi: fix reuse of &vq->iov[out] in response
bcache: RESERVE_PRIO is too small by one when prio_buckets() is a power of two.
ubifs: Fix assertion in layout_in_gaps()
ovl: fix workdir creation
ovl: listxattr: use strnlen()
ovl: remove posix_acl_default from workdir
ovl: don't copy up opaqueness
wrappers for ->i_mutex access
lustre: remove unused declaration
timekeeping: Avoid taking lock in NMI path with CONFIG_DEBUG_TIMEKEEPING
timekeeping: Cap array access in timekeeping_debug
xfs: fix superblock inprogress check
ASoC: atmel_ssc_dai: Don't unconditionally reset SSC on stream startup
drm/msm: fix use of copy_from_user() while holding spinlock
drm: Reject page_flip for !DRIVER_MODESET
drm/radeon: fix radeon_move_blit on 32bit systems
s390/sclp_ctl: fix potential information leak with /dev/sclp
rds: fix an infoleak in rds_inc_info_copy
powerpc/tm: Avoid SLB faults in treclaim/trecheckpoint when RI=0
nvme: Call pci_disable_device on the error path.
cgroup: reduce read locked section of cgroup_threadgroup_rwsem during fork
block: make sure a big bio is split into at most 256 bvecs
block: Fix race triggered by blk_set_queue_dying()
ext4: avoid modifying checksum fields directly during checksum verification
ext4: avoid deadlock when expanding inode size
ext4: properly align shifted xattrs when expanding inodes
ext4: fix xattr shifting when expanding inodes part 2
ext4: fix xattr shifting when expanding inodes
ext4: validate that metadata blocks do not overlap superblock
net: Use ns_capable_noaudit() when determining net sysctl permissions
kernel: Add noaudit variant of ns_capable()
KEYS: Fix ASN.1 indefinite length object parsing
drivers:hv: Lock access to hyperv_mmio resource tree
cxlflash: Move to exponential back-off when cmd_room is not available
netfilter: x_tables: check for size overflow
drm/amdgpu/cz: enable/disable vce dpm even if vce pg is disabled
cred: Reject inodes with invalid ids in set_create_file_as()
fs: Check for invalid i_uid in may_follow_link()
IB/IPoIB: Do not set skb truesize since using one linearskb
udp: properly support MSG_PEEK with truncated buffers
crypto: nx-842 - Mask XERS0 bit in return value
cxlflash: Fix to avoid virtual LUN failover failure
cxlflash: Fix to escalate LINK_RESET also on port 1
tipc: fix nl compat regression for link statistics
tipc: fix an infoleak in tipc_nl_compat_link_dump
netfilter: x_tables: check for size overflow
Bluetooth: Add support for Intel Bluetooth device 8265 [8087:0a2b]
drm/i915: Check VBT for port presence in addition to the strap on VLV/CHV
drm/i915: Only ignore eDP ports that are connected
Input: xpad - move pending clear to the correct location
net: thunderx: Fix link status reporting
x86/hyperv: Avoid reporting bogus NMI status for Gen2 instances
crypto: vmx - IV size failing on skcipher API
tda10071: Fix dependency to REGMAP_I2C
crypto: vmx - Fix ABI detection
crypto: vmx - comply with ABIs that specify vrsave as reserved.
HID: core: prevent out-of-bound readings
lpfc: Fix DMA faults observed upon plugging loopback connector
block: fix blk_rq_get_max_sectors for driver private requests
irqchip/gicv3-its: numa: Enable workaround for Cavium thunderx erratum 23144
clocksource: Allow unregistering the watchdog
btrfs: Continue write in case of can_not_nocow
blk-mq: End unstarted requests on dying queue
cxlflash: Fix to resolve dead-lock during EEH recovery
drm/radeon/mst: fix regression in lane/link handling.
ecryptfs: fix handling of directory opening
ALSA: hda: add AMD Polaris-10/11 AZ PCI IDs with proper driver caps
drm: Balance error path for GEM handle allocation
ntp: Fix ADJ_SETOFFSET being used w/ ADJ_NANO
time: Verify time values in adjtimex ADJ_SETOFFSET to avoid overflow
Input: xpad - correctly handle concurrent LED and FF requests
net: thunderx: Fix receive packet stats
net: thunderx: Fix for multiqset not configured upon interface toggle
perf/x86/cqm: Fix CQM memory leak and notifier leak
perf/x86/cqm: Fix CQM handling of grouping events into a cache_group
s390/crypto: provide correct file mode at device register.
proc: revert /proc/<pid>/maps [stack:TID] annotation
intel_idle: Support for Intel Xeon Phi Processor x200 Product Family
cxlflash: Fix to avoid unnecessary scan with internal LUNs
Drivers: hv: vmbus: don't manipulate with clocksources on crash
Drivers: hv: vmbus: avoid scheduling in interrupt context in vmbus_initiate_unload()
Drivers: hv: vmbus: avoid infinite loop in init_vp_index()
arcmsr: fixes not release allocated resource
arcmsr: fixed getting wrong configuration data
s390/pci_dma: fix DMA table corruption with > 4 TB main memory
net/mlx5e: Don't modify CQ before it was created
net/mlx5e: Don't try to modify CQ moderation if it is not supported
mmc: sdhci: Do not BUG on invalid vdd
UVC: Add support for R200 depth camera
sched/numa: Fix use-after-free bug in the task_numa_compare
ALSA: hda - add codec support for Kabylake display audio codec
drm/i915: Fix hpd live status bits for g4x
tipc: fix nullptr crash during subscription cancel
arm64: Add workaround for Cavium erratum 27456
net: thunderx: Fix for Qset error due to CQ full
drm/radeon: fix dp link rate selection (v2)
drm/amdgpu: fix dp link rate selection (v2)
qla2xxx: Use ATIO type to send correct tmr response
mmc: sdhci: 64-bit DMA actually has 4-byte alignment
drm/atomic: Do not unset crtc when an encoder is stolen
drm/i915/skl: Add missing SKL ids
drm/i915/bxt: update list of PCIIDs
hrtimer: Catch illegal clockids
i40e/i40evf: Fix RSS rx-flow-hash configuration through ethtool
mpt3sas: Fix for Asynchronous completion of timedout IO and task abort of timedout IO.
mpt3sas: A correction in unmap_resources
net: cavium: liquidio: fix check for in progress flag
arm64: KVM: Configure TCR_EL2.PS at runtime
irqchip/gic-v3: Make sure read from ICC_IAR1_EL1 is visible on redestributor
pwm: lpc32xx: fix and simplify duty cycle and period calculations
pwm: lpc32xx: correct number of PWM channels from 2 to 1
pwm: fsl-ftm: Fix clock enable/disable when using PM
megaraid_sas: Add an i/o barrier
megaraid_sas: Fix SMAP issue
megaraid_sas: Do not allow PCI access during OCR
s390/cio: update measurement characteristics
s390/cio: ensure consistent measurement state
s390/cio: fix measurement characteristics memleak
qeth: initialize net_device with carrier off
lpfc: Fix external loopback failure.
lpfc: Fix mbox reuse in PLOGI completion
lpfc: Fix RDP Speed reporting.
lpfc: Fix crash in fcp command completion path.
lpfc: Fix driver crash when module parameter lpfc_fcp_io_channel set to 16
lpfc: Fix RegLogin failed error seen on Lancer FC during port bounce
lpfc: Fix the FLOGI discovery logic to comply with T11 standards
lpfc: Fix FCF Infinite loop in lpfc_sli4_fcf_rr_next_index_get.
cxl: Enable PCI device ID for future IBM CXL adapter
cxl: fix build for GCC 4.6.x
cxlflash: Enable device id for future IBM CXL adapter
cxlflash: Resolve oops in wait_port_offline
cxlflash: Fix to resolve cmd leak after host reset
cxl: Fix DSI misses when the context owning task exits
cxl: Fix possible idr warning when contexts are released
Drivers: hv: vmbus: fix rescind-offer handling for device without a driver
Drivers: hv: vmbus: serialize process_chn_event() and vmbus_close_internal()
Drivers: hv: vss: run only on supported host versions
drivers/hv: cleanup synic msrs if vmbus connect failed
Drivers: hv: util: catch allocation errors
tools: hv: report ENOSPC errors in hv_fcopy_daemon
Drivers: hv: utils: run polling callback always in interrupt context
Drivers: hv: util: Increase the timeout for util services
lightnvm: fix missing grown bad block type
lightnvm: fix locking and mempool in rrpc_lun_gc
lightnvm: unlock rq and free ppa_list on submission fail
lightnvm: add check after mempool allocation
lightnvm: fix incorrect nr_free_blocks stat
lightnvm: fix bio submission issue
cxlflash: a couple off by one bugs
fm10k: Cleanup exception handling for mailbox interrupt
fm10k: Cleanup MSI-X interrupts in case of failure
fm10k: reinitialize queuing scheme after calling init_hw
fm10k: always check init_hw for errors
fm10k: reset max_queues on init_hw_vf failure
fm10k: Fix handling of NAPI budget when multiple queues are enabled per vector
fm10k: Correct MTU for jumbo frames
fm10k: do not assume VF always has 1 queue
clk: xgene: Fix divider with non-zero shift value
e1000e: fix division by zero on jumbo MTUs
e1000: fix data race between tx_ring->next_to_clean
ixgbe: Fix handling of NAPI budget when multiple queues are enabled per vector
igb: fix NULL derefs due to skipped SR-IOV enabling
igb: use the correct i210 register for EEMNGCTL
igb: don't unmap NULL hw_addr
i40e: Fix Rx hash reported to the stack by our driver
i40e: clean whole mac filter list
i40evf: check rings before freeing resources
i40e: don't add zero MAC filter
i40e: properly delete VF MAC filters
i40e: Fix memory leaks, sideband filter programming
i40e: fix: do not sleep in netdev_ops
i40e/i40evf: Fix RS bit update in Tx path and disable force WB workaround
i40evf: handle many MAC filters correctly
i40e: Workaround fix for mss < 256 issue
UPSTREAM: audit: fix a double fetch in audit_log_single_execve_arg()
UPSTREAM: ARM: 8494/1: mm: Enable PXN when running non-LPAE kernel on LPAE processor
FIXUP: sched/tune: update accouting before CPU capacity
FIXUP: sched/tune: add fixes missing from a previous patch
arm: Fix #if/#ifdef typo in topology.c
arm: Fix build error "conflicting types for 'scale_cpu_capacity'"
sched/walt: use do_div instead of division operator
DEBUG: cpufreq: fix cpu_capacity tracing build for non-smp systems
sched/walt: include missing header for arm_timer_read_counter()
cpufreq: Kconfig: Fixup incorrect selection by CPU_FREQ_DEFAULT_GOV_SCHED
sched/fair: Avoid redundant idle_cpu() call in update_sg_lb_stats()
FIXUP: sched: scheduler-driven cpu frequency selection
sched/rt: Add Kconfig option to enable panicking for RT throttling
sched/rt: print RT tasks when RT throttling is activated
UPSTREAM: sched: Fix a race between __kthread_bind() and sched_setaffinity()
sched/fair: Favor higher cpus only for boosted tasks
vmstat: make vmstat_updater deferrable again and shut down on idle
sched/fair: call OPP update when going idle after migration
sched/cpufreq_sched: fix thermal capping events
sched/fair: Picking cpus with low OPPs for tasks that prefer idle CPUs
FIXUP: sched/tune: do initialization as a postcore_initicall
DEBUG: sched: add tracepoint for RD overutilized
sched/tune: Introducing a new schedtune attribute prefer_idle
sched: use util instead of capacity to select busy cpu
arch_timer: add error handling when the MPM global timer is cleared
FIXUP: sched: Fix double-release of spinlock in move_queued_task
FIXUP: sched/fair: Fix hang during suspend in sched_group_energy
FIXUP: sched: fix SchedFreq integration for both PELT and WALT
sched: EAS: Avoid causing spikes to max-freq unnecessarily
FIXUP: sched: fix set_cfs_cpu_capacity when WALT is in use
sched/walt: Accounting for number of irqs pending on each core
sched: Introduce Window Assisted Load Tracking (WALT)
sched/tune: fix PB and PC cuts indexes definition
sched/fair: optimize idle cpu selection for boosted tasks
FIXUP: sched/tune: fix accounting for runnable tasks
sched/tune: use a single initialisation function
sched/{fair,tune}: simplify fair.c code
FIXUP: sched/tune: fix payoff calculation for boost region
sched/tune: Add support for negative boost values
FIX: sched/tune: move schedtune_nornalize_energy into fair.c
FIX: sched/tune: update usage of boosted task utilisation on CPU selection
sched/fair: add tunable to set initial task load
sched/fair: add tunable to force selection at cpu granularity
sched: EAS: take cstate into account when selecting idle core
sched/cpufreq_sched: Consolidated update
FIXUP: sched: fix build for non-SMP target
DEBUG: sched/tune: add tracepoint on P-E space filtering
DEBUG: sched/tune: add tracepoint for energy_diff() values
DEBUG: sched/tune: add tracepoint for task boost signal
arm: topology: Define TC2 energy and provide it to the scheduler
CHROMIUM: sched: update the average of nr_running
DEBUG: schedtune: add tracepoint for schedtune_tasks_update() values
DEBUG: schedtune: add tracepoint for CPU boost signal
DEBUG: schedtune: add tracepoint for SchedTune configuration update
DEBUG: sched: add energy procfs interface
DEBUG: sched,cpufreq: add cpu_capacity change tracepoint
DEBUG: sched: add tracepoint for CPU load/util signals
DEBUG: sched: add tracepoint for task load/util signals
DEBUG: sched: add tracepoint for cpu/freq scale invariance
sched/fair: filter energy_diff() based on energy_payoff value
sched/tune: add support to compute normalized energy
sched/fair: keep track of energy/capacity variations
sched/fair: add boosted task utilization
sched/{fair,tune}: track RUNNABLE tasks impact on per CPU boost value
sched/tune: compute and keep track of per CPU boost value
sched/tune: add initial support for CGroups based boosting
sched/fair: add boosted CPU usage
sched/fair: add function to convert boost value into "margin"
sched/tune: add sysctl interface to define a boost value
sched/tune: add detailed documentation
fixup! sched/fair: jump to max OPP when crossing UP threshold
fixup! sched: scheduler-driven cpu frequency selection
sched: rt scheduler sets capacity requirement
sched: deadline: use deadline bandwidth in scale_rt_capacity
sched: remove call of sched_avg_update from sched_rt_avg_update
sched/cpufreq_sched: add trace events
sched/fair: jump to max OPP when crossing UP threshold
sched/fair: cpufreq_sched triggers for load balancing
sched/{core,fair}: trigger OPP change request on fork()
sched/fair: add triggers for OPP change requests
sched: scheduler-driven cpu frequency selection
cpufreq: introduce cpufreq_driver_is_slow
sched: Consider misfit tasks when load-balancing
sched: Add group_misfit_task load-balance type
sched: Add per-cpu max capacity to sched_group_capacity
sched: Do eas idle balance regardless of the rq avg idle value
arm64: Enable max freq invariant scheduler load-tracking and capacity support
arm: Enable max freq invariant scheduler load-tracking and capacity support
sched: Update max cpu capacity in case of max frequency constraints
cpufreq: Max freq invariant scheduler load-tracking and cpu capacity support
arm64, topology: Updates to use DT bindings for EAS costing data
sched: Support for extracting EAS energy costs from DT
Documentation: DT bindings for energy model cost data required by EAS
sched: Disable energy-unfriendly nohz kicks
sched: Consider a not over-utilized energy-aware system as balanced
sched: Energy-aware wake-up task placement
sched: Determine the current sched_group idle-state
sched, cpuidle: Track cpuidle state index in the scheduler
sched: Add over-utilization/tipping point indicator
sched: Estimate energy impact of scheduling decisions
sched: Extend sched_group_energy to test load-balancing decisions
sched: Calculate energy consumption of sched_group
sched: Highest energy aware balancing sched_domain level pointer
sched: Relocated cpu_util() and change return type
sched: Compute cpu capacity available at current frequency
arm64: Cpu invariant scheduler load-tracking and capacity support
arm: Cpu invariant scheduler load-tracking and capacity support
sched: Introduce SD_SHARE_CAP_STATES sched_domain flag
sched: Initialize energy data structures
sched: Introduce energy data structures
sched: Make energy awareness a sched feature
sched: Documentation for scheduler energy cost model
sched: Prevent unnecessary active balance of single task in sched group
sched: Enable idle balance to pull single task towards cpu with higher capacity
sched: Consider spare cpu capacity at task wake-up
sched: Add cpu capacity awareness to wakeup balancing
sched: Store system-wide maximum cpu capacity in root domain
arm: Update arch_scale_cpu_capacity() to reflect change to define
arm64: Enable frequency invariant scheduler load-tracking support
arm: Enable frequency invariant scheduler load-tracking support
cpufreq: Frequency invariant scheduler load-tracking support
sched/fair: Fix new task's load avg removed from source CPU in wake_up_new_task()
FROMLIST: pstore: drop pmsg bounce buffer
UPSTREAM: usercopy: remove page-spanning test for now
UPSTREAM: usercopy: force check_object_size() inline
BACKPORT: usercopy: fold builtin_const check into inline function
UPSTREAM: x86/uaccess: force copy_*_user() to be inlined
UPSTREAM: HID: core: prevent out-of-bound readings
Android: Fix build breakages.
UPSTREAM: tty: Prevent ldisc drivers from re-using stale tty fields
UPSTREAM: netfilter: nfnetlink: correctly validate length of batch messages
cpuset: Make cpusets restore on hotplug
UPSTREAM: mm/slub: support left redzone
UPSTREAM: Make the hardened user-copy code depend on having a hardened allocator
Android: MMC/UFS IO Latency Histograms.
UPSTREAM: usercopy: fix overlap check for kernel text
UPSTREAM: usercopy: avoid potentially undefined behavior in pointer math
UPSTREAM: unsafe_[get|put]_user: change interface to use a error target label
BACKPORT: arm64: mm: fix location of _etext
BACKPORT: ARM: 8583/1: mm: fix location of _etext
BACKPORT: Don't show empty tag stats for unprivileged uids
UPSTREAM: tcp: fix use after free in tcp_xmit_retransmit_queue()
ANDROID: base-cfg: drop SECCOMP_FILTER config
UPSTREAM: [media] xc2028: unlock on error in xc2028_set_config()
UPSTREAM: [media] xc2028: avoid use after free
ANDROID: base-cfg: enable SECCOMP config
ANDROID: rcu_sync: Export rcu_sync_lockdep_assert
RFC: FROMLIST: cgroup: reduce read locked section of cgroup_threadgroup_rwsem during fork
RFC: FROMLIST: cgroup: avoid synchronize_sched() in __cgroup_procs_write()
RFC: FROMLIST: locking/percpu-rwsem: Optimize readers and reduce global impact
net: ipv6: Fix ping to link-local addresses.
ipv6: fix endianness error in icmpv6_err
ANDROID: dm: android-verity: Allow android-verity to be compiled as an independent module
backporting: a brief introduction of backported features on 4.4
Linux 4.4.20
sysfs: correctly handle read offset on PREALLOC attrs
hwmon: (iio_hwmon) fix memory leak in name attribute
ALSA: line6: Fix POD sysfs attributes segfault
ALSA: line6: Give up on the lock while URBs are released.
ALSA: line6: Remove double line6_pcm_release() after failed acquire.
ACPI / SRAT: fix SRAT parsing order with both LAPIC and X2APIC present
ACPI / sysfs: fix error code in get_status()
ACPI / drivers: replace acpi_probe_lock spinlock with mutex
ACPI / drivers: fix typo in ACPI_DECLARE_PROBE_ENTRY macro
staging: comedi: ni_mio_common: fix wrong insn_write handler
staging: comedi: ni_mio_common: fix AO inttrig backwards compatibility
staging: comedi: comedi_test: fix timer race conditions
staging: comedi: daqboard2000: bug fix board type matching code
USB: serial: option: add WeTelecom 0x6802 and 0x6803 products
USB: serial: option: add WeTelecom WM-D200
USB: serial: mos7840: fix non-atomic allocation in write path
USB: serial: mos7720: fix non-atomic allocation in write path
USB: fix typo in wMaxPacketSize validation
usb: chipidea: udc: don't touch DP when controller is in host mode
USB: avoid left shift by -1
dmaengine: usb-dmac: check CHCR.DE bit in usb_dmac_isr_channel()
crypto: qat - fix aes-xts key sizes
crypto: nx - off by one bug in nx_of_update_msc()
Input: i8042 - set up shared ps2_cmd_mutex for AUX ports
Input: i8042 - break load dependency between atkbd/psmouse and i8042
Input: tegra-kbc - fix inverted reset logic
btrfs: properly track when rescan worker is running
btrfs: waiting on qgroup rescan should not always be interruptible
fs/seq_file: fix out-of-bounds read
gpio: Fix OF build problem on UM
usb: renesas_usbhs: gadget: fix return value check in usbhs_mod_gadget_probe()
megaraid_sas: Fix probing cards without io port
mpt3sas: Fix resume on WarpDrive flash cards
cdc-acm: fix wrong pipe type on rx interrupt xfers
i2c: cros-ec-tunnel: Fix usage of cros_ec_cmd_xfer()
mfd: cros_ec: Add cros_ec_cmd_xfer_status() helper
aacraid: Check size values after double-fetch from user
ARC: Elide redundant setup of DMA callbacks
ARC: Call trace_hardirqs_on() before enabling irqs
ARC: use correct offset in pt_regs for saving/restoring user mode r25
ARC: build: Better way to detect ISA compatible toolchain
drm/i915: fix aliasing_ppgtt leak
drm/amdgpu: record error code when ring test failed
drm/amd/amdgpu: sdma resume fail during S4 on CI
drm/amdgpu: skip TV/CV in display parsing
drm/amdgpu: avoid a possible array overflow
drm/amdgpu: fix amdgpu_move_blit on 32bit systems
drm/amdgpu: Change GART offset to 64-bit
iio: fix sched WARNING "do not call blocking ops when !TASK_RUNNING"
sched/nohz: Fix affine unpinned timers mess
sched/cputime: Fix NO_HZ_FULL getrusage() monotonicity regression
of: fix reference counting in of_graph_get_endpoint_by_regs
arm64: dts: rockchip: add reset saradc node for rk3368 SoCs
mac80211: fix purging multicast PS buffer queue
s390/dasd: fix hanging device after clear subchannel
EDAC: Increment correct counter in edac_inc_ue_error()
pinctrl/amd: Remove the default de-bounce time
iommu/arm-smmu: Don't BUG() if we find aborting STEs with disable_bypass
iommu/arm-smmu: Fix CMDQ error handling
iommu/dma: Don't put uninitialised IOVA domains
xhci: Make sure xhci handles USB_SPEED_SUPER_PLUS devices.
USB: serial: ftdi_sio: add PIDs for Ivium Technologies devices
USB: serial: ftdi_sio: add device ID for WICED USB UART dev board
USB: serial: option: add support for Telit LE920A4
USB: serial: option: add D-Link DWM-156/A3
USB: serial: fix memleak in driver-registration error path
xhci: don't dereference a xhci member after removing xhci
usb: xhci: Fix panic if disconnect
xhci: always handle "Command Ring Stopped" events
usb/gadget: fix gadgetfs aio support.
usb: gadget: fsl_qe_udc: off by one in setup_received_handle()
USB: validate wMaxPacketValue entries in endpoint descriptors
usb: renesas_usbhs: Use dmac only if the pipe type is bulk
usb: renesas_usbhs: clear the BRDYSTS in usbhsg_ep_enable()
USB: hub: change the locking in hub_activate
USB: hub: fix up early-exit pathway in hub_activate
usb: hub: Fix unbalanced reference count/memory leak/deadlocks
usb: define USB_SPEED_SUPER_PLUS speed for SuperSpeedPlus USB3.1 devices
usb: dwc3: gadget: increment request->actual once
usb: dwc3: pci: add Intel Kabylake PCI ID
usb: misc: usbtest: add fix for driver hang
usb: ehci: change order of register cleanup during shutdown
crypto: caam - defer aead_set_sh_desc in case of zero authsize
crypto: caam - fix echainiv(authenc) encrypt shared descriptor
crypto: caam - fix non-hmac hashes
genirq/msi: Make sure PCI MSIs are activated early
genirq/msi: Remove unused MSI_FLAG_IDENTITY_MAP
um: Don't discard .text.exit section
ACPI / CPPC: Prevent cpc_desc_ptr points to the invalid data
ACPI: CPPC: Return error if _CPC is invalid on a CPU
mmc: sdhci-acpi: Reduce Baytrail eMMC/SD/SDIO hangs
PCI: Limit config space size for Netronome NFP4000
PCI: Add Netronome NFP4000 PF device ID
PCI: Limit config space size for Netronome NFP6000 family
PCI: Add Netronome vendor and device IDs
PCI: Support PCIe devices with short cfg_size
NVMe: Don't unmap controller registers on reset
ALSA: hda - Manage power well properly for resume
libnvdimm, nd_blk: mask off reserved status bits
perf intel-pt: Fix occasional decoding errors when tracing system-wide
vfio/pci: Fix NULL pointer oops in error interrupt setup handling
virtio: fix memory leak in virtqueue_add()
parisc: Fix order of EREFUSED define in errno.h
arm64: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
ALSA: usb-audio: Add quirk for ELP HD USB Camera
ALSA: usb-audio: Add a sample rate quirk for Creative Live! Cam Socialize HD (VF0610)
powerpc/eeh: eeh_pci_enable(): fix checking of post-request state
SUNRPC: allow for upcalls for same uid but different gss service
SUNRPC: Handle EADDRNOTAVAIL on connection failures
tools/testing/nvdimm: fix SIGTERM vs hotplug crash
uprobes/x86: Fix RIP-relative handling of EVEX-encoded instructions
x86/mm: Disable preemption during CR3 read+write
hugetlb: fix nr_pmds accounting with shared page tables
mm: SLUB hardened usercopy support
mm: SLAB hardened usercopy support
s390/uaccess: Enable hardened usercopy
sparc/uaccess: Enable hardened usercopy
powerpc/uaccess: Enable hardened usercopy
ia64/uaccess: Enable hardened usercopy
arm64/uaccess: Enable hardened usercopy
ARM: uaccess: Enable hardened usercopy
x86/uaccess: Enable hardened usercopy
x86: remove more uaccess_32.h complexity
x86: remove pointless uaccess_32.h complexity
x86: fix SMAP in 32-bit environments
Use the new batched user accesses in generic user string handling
Add 'unsafe' user access functions for batched accesses
x86: reorganize SMAP handling in user space accesses
mm: Hardened usercopy
mm: Implement stack frame object validation
mm: Add is_migrate_cma_page
Linux 4.4.19
Documentation/module-signing.txt: Note need for version info if reusing a key
module: Invalidate signatures on force-loaded modules
dm flakey: error READ bios during the down_interval
rtc: s3c: Add s3c_rtc_{enable/disable}_clk in s3c_rtc_setfreq()
lpfc: fix oops in lpfc_sli4_scmd_to_wqidx_distr() from lpfc_send_taskmgmt()
ACPI / EC: Work around method reentrancy limit in ACPICA for _Qxx
x86/platform/intel_mid_pci: Rework IRQ0 workaround
PCI: Mark Atheros AR9485 and QCA9882 to avoid bus reset
MIPS: hpet: Increase HPET_MIN_PROG_DELTA and decrease HPET_MIN_CYCLES
MIPS: Don't register r4k sched clock when CPUFREQ enabled
MIPS: mm: Fix definition of R6 cache instruction
SUNRPC: Don't allocate a full sockaddr_storage for tracing
Input: elan_i2c - properly wake up touchpad on ASUS laptops
target: Fix ordered task CHECK_CONDITION early exception handling
target: Fix max_unmap_lba_count calc overflow
target: Fix race between iscsi-target connection shutdown + ABORT_TASK
target: Fix missing complete during ABORT_TASK + CMD_T_FABRIC_STOP
target: Fix ordered task target_setup_cmd_from_cdb exception hang
iscsi-target: Fix panic when adding second TCP connection to iSCSI session
ubi: Fix race condition between ubi device creation and udev
ubi: Fix early logging
ubi: Make volume resize power cut aware
of: fix memory leak related to safe_name()
IB/mlx4: Fix memory leak if QP creation failed
IB/mlx4: Fix error flow when sending mads under SRIOV
IB/mlx4: Fix the SQ size of an RC QP
IB/IWPM: Fix a potential skb leak
IB/IPoIB: Don't update neigh validity for unresolved entries
IB/SA: Use correct free function
IB/mlx5: Return PORT_ERR in Active to Initializing transition
IB/mlx5: Fix post send fence logic
IB/mlx5: Fix entries check in mlx5_ib_resize_cq
IB/mlx5: Fix returned values of query QP
IB/mlx5: Fix entries checks in mlx5_ib_create_cq
IB/mlx5: Fix MODIFY_QP command input structure
ALSA: hda - Fix headset mic detection problem for two dell machines
ALSA: hda: add AMD Bonaire AZ PCI ID with proper driver caps
ALSA: hda/realtek - Can't adjust speaker's volume on a Dell AIO
ALSA: hda: Fix krealloc() with __GFP_ZERO usage
mm/hugetlb: avoid soft lockup in set_max_huge_pages()
mtd: nand: fix bug writing 1 byte less than page size
block: fix bdi vs gendisk lifetime mismatch
block: add missing group association in bio-cloning functions
metag: Fix __cmpxchg_u32 asm constraint for CMP
ftrace/recordmcount: Work around for addition of metag magic but not relocations
balloon: check the number of available pages in leak balloon
drm/i915/dp: Revert "drm/i915/dp: fall back to 18 bpp when sink capability is unknown"
drm/i915: Never fully mask the EI up rps interrupt on SNB/IVB
drm/edid: Add 6 bpc quirk for display AEO model 0.
drm: Restore double clflush on the last partial cacheline
drm/nouveau/fbcon: fix font width not divisible by 8
drm/nouveau/gr/nv3x: fix instobj write offsets in gr setup
drm/nouveau: check for supported chipset before booting fbdev off the hw
drm/radeon: support backlight control for UNIPHY3
drm/radeon: fix firmware info version checks
drm/radeon: Poll for both connect/disconnect on analog connectors
drm/radeon: add a delay after ATPX dGPU power off
drm/amdgpu/gmc7: add missing mullins case
drm/amdgpu: fix firmware info version checks
drm/amdgpu: Disable RPM helpers while reprobing connectors on resume
drm/amdgpu: support backlight control for UNIPHY3
drm/amdgpu: Poll for both connect/disconnect on analog connectors
drm/amdgpu: add a delay after ATPX dGPU power off
w1:omap_hdq: fix regression
netlabel: add address family checks to netlbl_{sock,req}_delattr()
ARM: dts: sunxi: Add a startup delay for fixed regulator enabled phys
audit: fix a double fetch in audit_log_single_execve_arg()
iommu/amd: Update Alias-DTE in update_device_table()
iommu/amd: Init unity mappings only for dma_ops domains
iommu/amd: Handle IOMMU_DOMAIN_DMA in ops->domain_free call-back
iommu/vt-d: Return error code in domain_context_mapping_one()
iommu/exynos: Suppress unbinding to prevent system failure
drm/i915: Don't complain about lack of ACPI video bios
nfsd: don't return an unhashed lock stateid after taking mutex
nfsd: Fix race between FREE_STATEID and LOCK
nfs: don't create zero-length requests
MIPS: KVM: Propagate kseg0/mapped tlb fault errors
MIPS: KVM: Fix gfn range check in kseg0 tlb faults
MIPS: KVM: Add missing gfn range check
MIPS: KVM: Fix mapped fault broken commpage handling
random: add interrupt callback to VMBus IRQ handler
random: print a warning for the first ten uninitialized random users
random: initialize the non-blocking pool via add_hwgenerator_randomness()
CIFS: Fix a possible invalid memory access in smb2_query_symlink()
cifs: fix crash due to race in hmac(md5) handling
cifs: Check for existing directory when opening file with O_CREAT
fs/cifs: make share unaccessible at root level mountable
jbd2: make journal y2038 safe
ARC: mm: don't lose PTE_SPECIAL in pte_modify()
remoteproc: Fix potential race condition in rproc_add
ovl: disallow overlayfs as upperdir
HID: uhid: fix timeout when probe races with IO
EDAC: Correct channel count limit
Bluetooth: Fix l2cap_sock_setsockopt() with optname BT_RCVMTU
spi: pxa2xx: Clear all RFT bits in reset_sccr1() on Intel Quark
i2c: efm32: fix a failure path in efm32_i2c_probe()
s5p-mfc: Add release callback for memory region devs
s5p-mfc: Set device name for reserved memory region devs
hp-wmi: Fix wifi cannot be hard-unblocked
dm: set DMF_SUSPENDED* _before_ clearing DMF_NOFLUSH_SUSPENDING
sur40: fix occasional oopses on device close
sur40: lower poll interval to fix occasional FPS drops to ~56 FPS
Fix RC5 decoding with Fintek CIR chipset
vb2: core: Skip planes array verification if pb is NULL
videobuf2-v4l2: Verify planes array in buffer dequeueing
media: dvb_ringbuffer: Add memory barriers
media: usbtv: prevent access to free'd resources
mfd: qcom_rpm: Parametrize also ack selector size
mfd: qcom_rpm: Fix offset error for msm8660
intel_pstate: Fix MSR_CONFIG_TDP_x addressing in core_get_max_pstate()
s390/cio: allow to reset channel measurement block
KVM: nVMX: Fix memory corruption when using VMCS shadowing
KVM: VMX: handle PML full VMEXIT that occurs during event delivery
KVM: MTRR: fix kvm_mtrr_check_gfn_range_consistency page fault
KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE
KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures
arm64: mm: avoid fdt_check_header() before the FDT is fully mapped
arm64: dts: rockchip: fixes the gic400 2nd region size for rk3368
pinctrl: cherryview: prevent concurrent access to GPIO controllers
Bluetooth: hci_intel: Fix null gpio desc pointer dereference
gpio: intel-mid: Remove potentially harmful code
gpio: pca953x: Fix NBANK calculation for PCA9536
tty/serial: atmel: fix RS485 half duplex with DMA
serial: samsung: Fix ERR pointer dereference on deferred probe
tty: serial: msm: Don't read off end of tx fifo
arm64: Fix incorrect per-cpu usage for boot CPU
arm64: debug: unmask PSTATE.D earlier
arm64: kernel: Save and restore UAO and addr_limit on exception entry
USB: usbfs: fix potential infoleak in devio
usb: renesas_usbhs: fix NULL pointer dereference in xfer_work()
USB: serial: option: add support for Telit LE910 PID 0x1206
usb: dwc3: fix for the isoc transfer EP_BUSY flag
usb: quirks: Add no-lpm quirk for Elan
usb: renesas_usbhs: protect the CFIFOSEL setting in usbhsg_ep_enable()
usb: f_fs: off by one bug in _ffs_func_bind()
usb: gadget: avoid exposing kernel stack
UPSTREAM: usb: gadget: configfs: add mutex lock before unregister gadget
ANDROID: dm-verity: adopt changes made to dm callbacks
UPSTREAM: ecryptfs: fix handling of directory opening
ANDROID: net: core: fix UID-based routing
ANDROID: net: fib: remove duplicate assignment
FROMLIST: proc: Fix timerslack_ns CAP_SYS_NICE check when adjusting self
ANDROID: dm verity fec: pack the fec_header structure
ANDROID: dm: android-verity: Verify header before fetching table
ANDROID: dm: allow adb disable-verity only in userdebug
ANDROID: dm: mount as linear target if eng build
ANDROID: dm: use default verity public key
ANDROID: dm: fix signature verification flag
ANDROID: dm: use name_to_dev_t
ANDROID: dm: rename dm-linear methods for dm-android-verity
ANDROID: dm: Minor cleanup
ANDROID: dm: Mounting root as linear device when verity disabled
ANDROID: dm-android-verity: Rebase on top of 4.1
ANDROID: dm: Add android verity target
ANDROID: dm: fix dm_substitute_devices()
ANDROID: dm: Rebase on top of 4.1
CHROMIUM: dm: boot time specification of dm=
Implement memory_state_time, used by qcom,cpubw
Revert "panic: Add board ID to panic output"
usb: gadget: f_accessory: remove duplicate endpoint alloc
BACKPORT: brcmfmac: defer DPC processing during probe
FROMLIST: proc: Add LSM hook checks to /proc/<tid>/timerslack_ns
FROMLIST: proc: Relax /proc/<tid>/timerslack_ns capability requirements
UPSTREAM: ppp: defer netns reference release for ppp channel
cpuset: Add allow_attach hook for cpusets on android.
UPSTREAM: KEYS: Fix ASN.1 indefinite length object parsing
ANDROID: sdcardfs: fix itnull.cocci warnings
android-recommended.cfg: enable fstack-protector-strong
Linux 4.4.18
mm: memcontrol: fix memcg id ref counter on swap charge move
mm: memcontrol: fix swap counter leak on swapout from offline cgroup
mm: memcontrol: fix cgroup creation failure after many small jobs
ext4: fix reference counting bug on block allocation error
ext4: short-cut orphan cleanup on error
ext4: validate s_reserved_gdt_blocks on mount
ext4: don't call ext4_should_journal_data() on the journal inode
ext4: fix deadlock during page writeback
ext4: check for extents that wrap around
crypto: scatterwalk - Fix test in scatterwalk_done
crypto: gcm - Filter out async ghash if necessary
fs/dcache.c: avoid soft-lockup in dput()
fuse: fix wrong assignment of ->flags in fuse_send_init()
fuse: fuse_flush must check mapping->flags for errors
fuse: fsync() did not return IO errors
sysv, ipc: fix security-layer leaking
block: fix use-after-free in seq file
x86/syscalls/64: Add compat_sys_keyctl for 32-bit userspace
drm/i915: Pretend cursor is always on for ILK-style WM calculations (v2)
x86/mm/pat: Fix BUG_ON() in mmap_mem() on QEMU/i386
x86/pat: Document the PAT initialization sequence
x86/xen, pat: Remove PAT table init code from Xen
x86/mtrr: Fix PAT init handling when MTRR is disabled
x86/mtrr: Fix Xorg crashes in Qemu sessions
x86/mm/pat: Replace cpu_has_pat with boot_cpu_has()
x86/mm/pat: Add pat_disable() interface
x86/mm/pat: Add support of non-default PAT MSR setting
devpts: clean up interface to pty drivers
random: strengthen input validation for RNDADDTOENTCNT
apparmor: fix ref count leak when profile sha1 hash is read
Revert "s390/kdump: Clear subchannel ID to signal non-CCW/SCSI IPL"
KEYS: 64-bit MIPS needs to use compat_sys_keyctl for 32-bit userspace
arm: oabi compat: add missing access checks
cdc_ncm: do not call usbnet_link_change from cdc_ncm_bind
i2c: i801: Allow ACPI SystemIO OpRegion to conflict with PCI BAR
x86/mm/32: Enable full randomization on i386 and X86_32
HID: sony: do not bail out when the sixaxis refuses the output report
PNP: Add Broadwell to Intel MCH size workaround
PNP: Add Haswell-ULT to Intel MCH size workaround
scsi: ignore errors from scsi_dh_add_device()
ipath: Restrict use of the write() interface
tcp: consider recv buf for the initial window scale
qed: Fix setting/clearing bit in completion bitmap
net/irda: fix NULL pointer dereference on memory allocation failure
net: bgmac: Fix infinite loop in bgmac_dma_tx_add()
bonding: set carrier off for devices created through netlink
ipv4: reject RTNH_F_DEAD and RTNH_F_LINKDOWN from user space
tcp: enable per-socket rate limiting of all 'challenge acks'
tcp: make challenge acks less predictable
arm64: relocatable: suppress R_AARCH64_ABS64 relocations in vmlinux
arm64: vmlinux.lds: make __rela_offset and __dynsym_offset ABSOLUTE
Linux 4.4.17
vfs: fix deadlock in file_remove_privs() on overlayfs
intel_th: Fix a deadlock in modprobing
intel_th: pci: Add Kaby Lake PCH-H support
net: mvneta: set real interrupt per packet for tx_done
libceph: apply new_state before new_up_client on incrementals
libata: LITE-ON CX1-JB256-HP needs lower max_sectors
i2c: mux: reg: wrong condition checked for of_address_to_resource return value
posix_cpu_timer: Exit early when process has been reaped
media: fix airspy usb probe error path
ipr: Clear interrupt on croc/crocodile when running with LSI
SCSI: fix new bug in scsi_dev_info_list string matching
RDS: fix rds_tcp_init() error path
can: fix oops caused by wrong rtnl dellink usage
can: fix handling of unmodifiable configuration options fix
can: c_can: Update D_CAN TX and RX functions to 32 bit - fix Altera Cyclone access
can: at91_can: RX queue could get stuck at high bus load
perf/x86: fix PEBS issues on Intel Atom/Core2
ovl: handle ATTR_KILL*
sched/fair: Fix effective_load() to consistently use smoothed load
mmc: block: fix packed command header endianness
block: fix use-after-free in sys_ioprio_get()
qeth: delete napi struct when removing a qeth device
platform/chrome: cros_ec_dev - double fetch bug in ioctl
clk: rockchip: initialize flags of clk_init_data in mmc-phase clock
spi: sun4i: fix FIFO limit
spi: sunxi: fix transfer timeout
namespace: update event counter when umounting a deleted dentry
9p: use file_dentry()
ext4: verify extent header depth
ecryptfs: don't allow mmap when the lower fs doesn't support it
Revert "ecryptfs: forbid opening files without mmap handler"
locks: use file_inode()
power_supply: power_supply_read_temp only if use_cnt > 0
cgroup: set css->id to -1 during init
pinctrl: imx: Do not treat a PIN without MUX register as an error
pinctrl: single: Fix missing flush of posted write for a wakeirq
pvclock: Add CPU barriers to get correct version value
Input: tsc200x - report proper input_dev name
Input: xpad - validate USB endpoint count during probe
Input: wacom_w8001 - w8001_MAX_LENGTH should be 13
Input: xpad - fix oops when attaching an unknown Xbox One gamepad
Input: elantech - add more IC body types to the list
Input: vmmouse - remove port reservation
ALSA: timer: Fix leak in events via snd_timer_user_tinterrupt
ALSA: timer: Fix leak in events via snd_timer_user_ccallback
ALSA: timer: Fix leak in SNDRV_TIMER_IOCTL_PARAMS
xenbus: don't bail early from xenbus_dev_request_and_reply()
xenbus: don't BUG() on user mode induced condition
xen/pciback: Fix conf_space read/write overlap check.
ARC: unwind: ensure that .debug_frame is generated (vs. .eh_frame)
arc: unwind: warn only once if DW2_UNWIND is disabled
kernel/sysrq, watchdog, sched/core: Reset watchdog on all CPUs while processing sysrq-w
pps: do not crash when failed to register
vmlinux.lds: account for destructor sections
mm, meminit: ensure node is online before checking whether pages are uninitialised
mm, meminit: always return a valid node from early_pfn_to_nid
mm, compaction: prevent VM_BUG_ON when terminating freeing scanner
fs/nilfs2: fix potential underflow in call to crc32_le
mm, compaction: abort free scanner if split fails
mm, sl[au]b: add __GFP_ATOMIC to the GFP reclaim mask
dmaengine: at_xdmac: double FIFO flush needed to compute residue
dmaengine: at_xdmac: fix residue corruption
dmaengine: at_xdmac: align descriptors on 64 bits
x86/quirks: Add early quirk to reset Apple AirPort card
x86/quirks: Reintroduce scanning of secondary buses
x86/quirks: Apply nvidia_bugs quirk only on root bus
USB: OHCI: Don't mark EDs as ED_OPER if scheduling fails
Conflicts:
arch/arm/kernel/topology.c
arch/arm64/include/asm/arch_gicv3.h
arch/arm64/kernel/topology.c
block/bio.c
drivers/cpufreq/Kconfig
drivers/md/Makefile
drivers/media/dvb-core/dvb_ringbuffer.c
drivers/media/tuners/tuner-xc2028.c
drivers/misc/Kconfig
drivers/misc/Makefile
drivers/mmc/core/host.c
drivers/scsi/ufs/ufshcd.c
drivers/scsi/ufs/ufshcd.h
drivers/usb/dwc3/gadget.c
drivers/usb/gadget/configfs.c
fs/ecryptfs/file.c
include/linux/mmc/core.h
include/linux/mmc/host.h
include/linux/mmzone.h
include/linux/sched.h
include/linux/sched/sysctl.h
include/trace/events/power.h
include/trace/events/sched.h
init/Kconfig
kernel/cpuset.c
kernel/exit.c
kernel/sched/Makefile
kernel/sched/core.c
kernel/sched/cputime.c
kernel/sched/fair.c
kernel/sched/features.h
kernel/sched/rt.c
kernel/sched/sched.h
kernel/sched/stop_task.c
kernel/sched/tune.c
lib/Kconfig.debug
mm/Makefile
mm/vmstat.c
Change-Id: I243a43231ca56a6362076fa6301827e1b0493be5
Signed-off-by: Runmin Wang <runminw@codeaurora.org>
2016-12-12 15:32:39 -08:00
#ifdef CONFIG_SCHED_HMP
/*
 * HMP and EAS are orthogonal. Hopefully the compiler just elides out all code
 * with the energy_aware() check, so that we don't even pay the comparison
 * penalty at runtime.
 */
#define energy_aware() false
#else
static inline bool energy_aware(void)
{
        return sched_feat(ENERGY_AWARE);
}
#endif
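The constant-false definition above lets the compiler drop every EAS-only branch
on HMP builds. A minimal standalone sketch of that elision; the demo macro and
main() are illustrative only and not part of this file:

#include <stdbool.h>
#include <stdio.h>

#define energy_aware() false    /* stand-in for the HMP definition above */

int main(void)
{
        /* With energy_aware() a compile-time constant false, this branch is
         * dead code and is removed entirely, so the build pays neither the
         * comparison nor the call. */
        if (energy_aware())
                printf("EAS path\n");
        else
                printf("HMP path: EAS-only code elided\n");
        return 0;
}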
static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
{
        rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
}
#else
static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
static inline void sched_avg_update(struct rq *rq) { }
#endif
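sched_rt_avg_update() accumulates RT runtime scaled by the current frequency
capacity, so a tick's worth of RT time counts for less when the CPU runs below
full speed. A hedged numeric sketch, assuming the usual 1024-based capacity
scale (SCHED_CAPACITY_SHIFT == 10, as in mainline) and that consumers divide
the scale back out:

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT    10      /* assumption: 1024 == full capacity */

int main(void)
{
        uint64_t rt_avg = 0;
        uint64_t rt_delta = 1000000;    /* 1 ms of RT runtime, in ns */
        uint64_t freq_cap = 512;        /* CPU currently at half its max freq */

        /* Mirrors rq->rt_avg += rt_delta * arch_scale_freq_capacity(...);
         * dividing by the 1024 scale yields the frequency-invariant
         * equivalent: 1 ms at half speed counts as 0.5 ms of full-speed work. */
        rt_avg += rt_delta * freq_cap;
        printf("invariant contribution: %llu ns\n",
               (unsigned long long)(rt_avg >> SCHED_CAPACITY_SHIFT));
        return 0;
}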
/*
 * __task_rq_lock - lock the rq @p resides on.
 */
static inline struct rq *__task_rq_lock(struct task_struct *p)
        __acquires(rq->lock)
{
        struct rq *rq;

        lockdep_assert_held(&p->pi_lock);

        for (;;) {
                rq = task_rq(p);
                raw_spin_lock(&rq->lock);
                if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
                        lockdep_pin_lock(&rq->lock);
                        return rq;
                }
                raw_spin_unlock(&rq->lock);

                while (unlikely(task_on_rq_migrating(p)))
                        cpu_relax();
        }
}
/*
 * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
 */
static inline struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
        __acquires(p->pi_lock)
        __acquires(rq->lock)
{
        struct rq *rq;

        for (;;) {
                raw_spin_lock_irqsave(&p->pi_lock, *flags);
                rq = task_rq(p);
                raw_spin_lock(&rq->lock);
                /*
                 * move_queued_task()           task_rq_lock()
                 *
                 * ACQUIRE (rq->lock)
                 * [S] ->on_rq = MIGRATING      [L] rq = task_rq()
                 * WMB (__set_task_cpu())       ACQUIRE (rq->lock);
                 * [S] ->cpu = new_cpu          [L] task_rq()
                 *                              [L] ->on_rq
                 * RELEASE (rq->lock)
                 *
                 * If we observe the old cpu in task_rq_lock, the acquire of
                 * the old rq->lock will fully serialize against the stores.
                 *
                 * If we observe the new cpu in task_rq_lock, the acquire will
                 * pair with the WMB to ensure we must then also see migrating.
                 */
                if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
                        lockdep_pin_lock(&rq->lock);
                        return rq;
                }
                raw_spin_unlock(&rq->lock);
                raw_spin_unlock_irqrestore(&p->pi_lock, *flags);

                while (unlikely(task_on_rq_migrating(p)))
                        cpu_relax();
        }
}
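A sketch of the intended call pattern; the wrapper name and the elided
inspection are illustrative only:

static void inspect_task_example(struct task_struct *p)
{
        unsigned long flags;
        struct rq *rq;

        rq = task_rq_lock(p, &flags);
        /* Here p is pinned to rq: it cannot migrate or be woken onto
         * another runqueue until we unlock, which is exactly what the
         * retry loop above guarantees. */
        task_rq_unlock(rq, p, &flags);
}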
static inline void __task_rq_unlock(struct rq *rq)
        __releases(rq->lock)
{
        lockdep_unpin_lock(&rq->lock);
        raw_spin_unlock(&rq->lock);
}

static inline void
task_rq_unlock(struct rq *rq, struct task_struct *p, unsigned long *flags)
        __releases(rq->lock)
        __releases(p->pi_lock)
{
        lockdep_unpin_lock(&rq->lock);
        raw_spin_unlock(&rq->lock);
        raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
}
FIXUP: sched/tune: fix accounting for runnable tasks
Contains:
sched/tune: fix accounting for runnable tasks (1/5)
The accounting for tasks into boost groups of different CPUs is currently
broken mainly because:
a) we do not properly track the change of boost group of a RUNNABLE task
b) there are race conditions between migration code and accounting code
This patch provides a fix to ensure enqueue/dequeue
accounting also for throttled tasks.
Without this patch it can happen that a task is enqueued into a throttled
RQ and thus not accounted for in the boosting of the corresponding RQ.
We could argue that a throttled task should not boost a CPU, however:
a) properly implementing CPU boosting considering throttled tasks would
greatly increase the complexity of the solution
b) it's not easy to quantify the benefits introduced by such a more
complex solution
Since task throttling requires the usage of the CFS bandwidth controller,
which is not widely used on mobile systems (at least not by Android kernels
so far), for the time being we go for the simple solution and boost also
for throttled RQs.
sched/tune: fix accounting for runnable tasks (2/5)
This patch provides the code required to enforce proper locking.
A per boost group spinlock has been added to grant atomic
accounting of tasks as well as to serialise enqueue/dequeue operations,
triggered by task migrations, with cgroups' attach/detach operations.
sched/tune: fix accounting for runnable tasks (3/5)
This patch adds cgroups {allow,can,cancel}_attach callbacks.
Since a task can be migrated between boost groups while it's running,
the CGroups' attach callbacks have been added to properly migrate
boost contributions of RUNNABLE tasks.
The RQ's lock is used to serialise enqueue/dequeue operations, triggered
by task migrations, with cgroups' attach/detach operations, while the
SchedTune CPU lock is used to grant atomicity of the accounting within
the CPU.
NOTE: the current implementation does not allow a concurrent CPU migration
and CGroups change.
sched/tune: fix accounting for runnable tasks (4/5)
This fixes accounting for exiting tasks by adding a dedicated call early
in the do_exit() syscall, which disables SchedTune accounting as soon as a
task is flagged PF_EXITING.
This flag is set before the multiple dequeue/enqueue dance triggered
by cgroup_exit(), which only injects useless task movements, thus
increasing the chances of race conditions with the migration code.
The schedtune_exit_task() call does the last dequeue of a task from its
current boost group. This is a solution more aligned with what happens in
mainline kernels (>v4.4), where cgroup_exit() no longer moves a dying
task to the root control group.
sched/tune: fix accounting for runnable tasks (5/5)
To avoid accounting issues at startup, this patch disables SchedTune
accounting until the required data structures have been properly
initialized.
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
[jstultz: fwdported to 4.4]
Signed-off-by: John Stultz <john.stultz@linaro.org>
2016-07-28 18:44:40 +01:00
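A schematic of the per boost group locking the message describes; the structure
and helper names are hypothetical stand-ins, the real accounting lives in
kernel/sched/tune.c:

struct boost_group_sketch {
        raw_spinlock_t lock;    /* serialises accounting vs. migrations */
        int tasks;              /* RUNNABLE tasks currently in the group */
};

static void boost_group_task_update(struct boost_group_sketch *bg, int delta)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&bg->lock, flags);
        bg->tasks += delta;     /* +1 on enqueue/attach, -1 on dequeue/exit */
        raw_spin_unlock_irqrestore(&bg->lock, flags);
}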
extern struct rq *lock_rq_of(struct task_struct *p, unsigned long *flags);
extern void unlock_rq_of(struct rq *rq, struct task_struct *p, unsigned long *flags);
#ifdef CONFIG_SMP
#ifdef CONFIG_PREEMPT

static inline void double_rq_lock(struct rq *rq1, struct rq *rq2);

/*
 * fair double_lock_balance: Safely acquires both rq->locks in a fair
 * way at the expense of forcing extra atomic operations in all
 * invocations. This assures that the double_lock is acquired using the
 * same underlying policy as the spinlock_t on this architecture, which
 * reduces latency compared to the unfair variant below. However, it
 * also adds more overhead and therefore may reduce throughput.
 */
static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
        __releases(this_rq->lock)
        __acquires(busiest->lock)
        __acquires(this_rq->lock)
{
        raw_spin_unlock(&this_rq->lock);
        double_rq_lock(this_rq, busiest);

        return 1;
}
#else
/*
 * Unfair double_lock_balance: Optimizes throughput at the expense of
 * latency by eliminating extra atomic operations when the locks are
 * already in proper order on entry. This favors lower cpu-ids and will
 * grant the double lock to lower cpus over higher ids under contention,
 * regardless of entry order into the function.
 */
static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
        __releases(this_rq->lock)
        __acquires(busiest->lock)
        __acquires(this_rq->lock)
{
        int ret = 0;

        if (unlikely(!raw_spin_trylock(&busiest->lock))) {
                if (busiest < this_rq) {
                        raw_spin_unlock(&this_rq->lock);
                        raw_spin_lock(&busiest->lock);
                        raw_spin_lock_nested(&this_rq->lock,
                                             SINGLE_DEPTH_NESTING);
                        ret = 1;
                } else
                        raw_spin_lock_nested(&busiest->lock,
                                             SINGLE_DEPTH_NESTING);
        }
        return ret;
}
#endif /* CONFIG_PREEMPT */

/*
 * double_lock_balance - lock the busiest runqueue, this_rq is locked already.
 */
static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest)
{
        if (unlikely(!irqs_disabled())) {
                /* printk() doesn't work well under rq->lock */
                raw_spin_unlock(&this_rq->lock);
                BUG_ON(1);
        }

        return _double_lock_balance(this_rq, busiest);
}

static inline void double_unlock_balance(struct rq *this_rq, struct rq *busiest)
        __releases(busiest->lock)
{
        if (this_rq != busiest)
                raw_spin_unlock(&busiest->lock);
        lock_set_subclass(&this_rq->lock.dep_map, 0, _RET_IP_);
}
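The intended call pattern, as a sketch with the actual pull step elided. Note
the return value: 1 means this_rq->lock was dropped and re-taken, so any rq
state cached by the caller must be revalidated:

static void pull_task_example(struct rq *this_rq, struct rq *busiest)
{
        /* caller already holds this_rq->lock with IRQs disabled */
        if (double_lock_balance(this_rq, busiest)) {
                /* this_rq->lock was released: re-check this_rq state here */
        }
        /* ... migrate a task from busiest to this_rq ... */
        double_unlock_balance(this_rq, busiest);
}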
sched: Fix race in migrate_swap_stop()
There is a subtle race in migrate_swap, when task P, on CPU A, decides to swap
places with task T, on CPU B.
Task P:
- call migrate_swap
Task T:
- go to sleep, removing itself from the runqueue
Task P:
- double lock the runqueues on CPU A & B
Task T:
- get woken up, place itself on the runqueue of CPU C
Task P:
- see that task T is on a runqueue, and pretend to remove it
from the runqueue on CPU B
Now CPUs B & C both have corrupted scheduler data structures.
This patch fixes it, by holding the pi_lock for both of the tasks
involved in the migrate swap. This prevents task T from waking up,
and placing itself onto another runqueue, until after migrate_swap
has released all locks.
This means that, when migrate_swap checks, task T will be either
on the runqueue where it was originally seen, or not on any
runqueue at all. Migrate_swap deals correctly with both of those cases.
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: hannes@cmpxchg.org
Cc: aarcange@redhat.com
Cc: srikar@linux.vnet.ibm.com
Cc: tglx@linutronix.de
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/20131010181722.GO13848@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-10 20:17:22 +02:00
static inline void double_lock(spinlock_t *l1, spinlock_t *l2)
{
        if (l1 > l2)
                swap(l1, l2);

        spin_lock(l1);
        spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
}

static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2)
{
        if (l1 > l2)
                swap(l1, l2);

        spin_lock_irq(l1);
        spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
}
static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2)
{
        if (l1 > l2)
                swap(l1, l2);

        raw_spin_lock(l1);
        raw_spin_lock_nested(l2, SINGLE_DEPTH_NESTING);
}
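All three helpers above rely on the same deadlock-avoidance rule: sort the
pair by address before locking, so two CPUs taking the same pair in opposite
argument order still acquire in one global order. A caller sketch, assuming
the two locks are distinct:

static void lock_pair_example(raw_spinlock_t *a, raw_spinlock_t *b)
{
        double_raw_lock(a, b);  /* safe for (a, b) and (b, a) callers alike */
        /* ... critical section over both objects ... */
        raw_spin_unlock(a);     /* unlock order does not matter */
        raw_spin_unlock(b);
}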
/*
 * double_rq_lock - safely lock two runqueues
 *
 * Note this does not disable interrupts like task_rq_lock,
 * you need to do so manually before calling.
 */
static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
        __acquires(rq1->lock)
        __acquires(rq2->lock)
{
        BUG_ON(!irqs_disabled());
        if (rq1 == rq2) {
                raw_spin_lock(&rq1->lock);
                __acquire(rq2->lock);   /* Fake it out ;) */
        } else {
                if (rq1 < rq2) {
                        raw_spin_lock(&rq1->lock);
                        raw_spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
                } else {
                        raw_spin_lock(&rq2->lock);
                        raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
                }
        }
}
/*
 * double_rq_unlock - safely unlock two runqueues
 *
 * Note this does not restore interrupts like task_rq_unlock,
 * you need to do so manually after calling.
 */
static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2)
        __releases(rq1->lock)
        __releases(rq2->lock)
{
        raw_spin_unlock(&rq1->lock);
        if (rq1 != rq2)
                raw_spin_unlock(&rq2->lock);
        else
                __release(rq2->lock);
}
/*
 * task_may_not_preempt - check whether a task may not be preemptible soon
 */
extern bool task_may_not_preempt(struct task_struct *task, int cpu);
#else /* CONFIG_SMP */

/*
 * double_rq_lock - safely lock two runqueues
 *
 * Note this does not disable interrupts like task_rq_lock,
 * you need to do so manually before calling.
 */
static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
        __acquires(rq1->lock)
        __acquires(rq2->lock)
{
        BUG_ON(!irqs_disabled());
        BUG_ON(rq1 != rq2);
        raw_spin_lock(&rq1->lock);
        __acquire(rq2->lock);   /* Fake it out ;) */
}
/*
 * double_rq_unlock - safely unlock two runqueues
 *
 * Note this does not restore interrupts like task_rq_unlock,
 * you need to do so manually after calling.
 */
static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2)
        __releases(rq1->lock)
        __releases(rq2->lock)
{
        BUG_ON(rq1 != rq2);
        raw_spin_unlock(&rq1->lock);
        __release(rq2->lock);
}

#endif
extern struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq);
extern struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq);

#ifdef CONFIG_SCHED_DEBUG
extern void print_cfs_stats(struct seq_file *m, int cpu);
extern void print_rt_stats(struct seq_file *m, int cpu);
extern void print_dl_stats(struct seq_file *m, int cpu);
extern void
print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq);

#ifdef CONFIG_NUMA_BALANCING
extern void
show_numa_stats(struct task_struct *p, struct seq_file *m);
extern void
print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
                 unsigned long tpf, unsigned long gsf, unsigned long gpf);
#endif /* CONFIG_NUMA_BALANCING */
#endif /* CONFIG_SCHED_DEBUG */

extern void init_cfs_rq(struct cfs_rq *cfs_rq);
extern void init_rt_rq(struct rt_rq *rt_rq);
extern void init_dl_rq(struct dl_rq *dl_rq);

extern void cfs_bandwidth_usage_inc(void);
extern void cfs_bandwidth_usage_dec(void);

#ifdef CONFIG_NO_HZ_COMMON
enum rq_nohz_flag_bits {
        NOHZ_TICK_STOPPED,
        NOHZ_BALANCE_KICK,
};

#define NOHZ_KICK_ANY      0
#define NOHZ_KICK_RESTRICT 1

#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
#endif
#ifdef CONFIG_IRQ_TIME_ACCOUNTING

DECLARE_PER_CPU(u64, cpu_hardirq_time);
DECLARE_PER_CPU(u64, cpu_softirq_time);

#ifndef CONFIG_64BIT
DECLARE_PER_CPU(seqcount_t, irq_time_seq);

static inline void irq_time_write_begin(void)
{
        __this_cpu_inc(irq_time_seq.sequence);
        smp_wmb();
}

static inline void irq_time_write_end(void)
{
        smp_wmb();
        __this_cpu_inc(irq_time_seq.sequence);
}

static inline u64 irq_time_read(int cpu)
{
        u64 irq_time;
        unsigned seq;

        do {
                seq = read_seqcount_begin(&per_cpu(irq_time_seq, cpu));
                irq_time = per_cpu(cpu_softirq_time, cpu) +
                           per_cpu(cpu_hardirq_time, cpu);
        } while (read_seqcount_retry(&per_cpu(irq_time_seq, cpu), seq));

        return irq_time;
}
#else /* CONFIG_64BIT */
static inline void irq_time_write_begin(void)
{
}

static inline void irq_time_write_end(void)
{
}

static inline u64 irq_time_read(int cpu)
{
        return per_cpu(cpu_softirq_time, cpu) + per_cpu(cpu_hardirq_time, cpu);
}
#endif /* CONFIG_64BIT */
#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
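The 32-bit variant above is the classic seqcount pattern: the writer makes the
sequence odd around its stores, and the reader retries whenever the sequence
was odd or changed, since a 64-bit read can tear on 32-bit machines. The same
read-side shape, as a generic sketch over two u64 counters:

static u64 seq_read_pair_example(seqcount_t *seq, u64 *a, u64 *b)
{
        unsigned int start;
        u64 sum;

        do {
                start = read_seqcount_begin(seq);       /* spins while odd */
                sum = *a + *b;                          /* may tear on 32-bit */
        } while (read_seqcount_retry(seq, start));      /* retry if writer ran */

        return sum;
}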
#ifdef CONFIG_CPU_FREQ
DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);

/**
 * cpufreq_update_util - Take a note about CPU utilization changes.
 * @rq: Runqueue to carry out the update for.
 * @flags: Update reason flags.
 *
 * This function is called by the scheduler on the CPU whose utilization is
 * being updated.
 *
 * It can only be called from RCU-sched read-side critical sections.
 *
 * The way cpufreq is currently arranged requires it to evaluate the CPU
 * performance state (frequency/voltage) on a regular basis to prevent it from
 * being stuck in a completely inadequate performance level for too long.
 * That is not guaranteed to happen if the updates are only triggered from CFS,
 * though, because they may not be coming in if RT or deadline tasks are active
 * all the time (or there are RT and DL tasks only).
 *
 * As a workaround for that issue, this function is called by the RT and DL
 * sched classes to trigger extra cpufreq updates to prevent it from stalling,
 * but that really is a band-aid. Going forward it should be replaced with
 * solutions targeted more specifically at RT and DL tasks.
 */
static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
{
        struct update_util_data *data;

        data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
        if (data)
                data->func(data, rq_clock(rq), flags);
}

static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags)
{
        if (cpu_of(rq) == smp_processor_id())
                cpufreq_update_util(rq, flags);
}
#else
static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags) {}
#endif /* CONFIG_CPU_FREQ */
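For context, a consumer of the hook above embeds a struct update_util_data
and registers a callback per CPU. The registration helper named below is the
one mainline provides (cpufreq_add_update_util_hook()); whether this backport
carries it under the same name is an assumption:

struct gov_cpu_sketch {
        struct update_util_data update_util;   /* embedded hook */
};

static void gov_update_example(struct update_util_data *data, u64 time,
                               unsigned int flags)
{
        struct gov_cpu_sketch *g =
                container_of(data, struct gov_cpu_sketch, update_util);

        /* Runs in scheduler context with the rq lock held: must not
         * sleep; kick deferred work if the frequency needs changing. */
        (void)g;
}

/* registration, e.g. from a governor start callback (assumed helper):
 *      cpufreq_add_update_util_hook(cpu, &g->update_util, gov_update_example);
 */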
#ifdef CONFIG_SCHED_WALT

static inline bool
walt_task_in_cum_window_demand(struct rq *rq, struct task_struct *p)
{
        return cpu_of(rq) == task_cpu(p) &&
               (p->on_rq || p->last_sleep_ts >= rq->window_start);
}

#endif /* CONFIG_SCHED_WALT */
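A worked reading of the predicate above, with hypothetical timestamps,
restated as a standalone check: a task counts towards the rq's cumulative
window demand only if it is on that CPU and either still runnable or went to
sleep inside the current window.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative restatement of walt_task_in_cum_window_demand(). */
static bool in_cum_window_demand(bool same_cpu, bool on_rq,
                                 uint64_t last_sleep_ts, uint64_t window_start)
{
        return same_cpu && (on_rq || last_sleep_ts >= window_start);
}

int main(void)
{
        const uint64_t ws = 1000;       /* current window start (arbitrary) */

        assert(in_cum_window_demand(true, true, 0, ws));     /* runnable now */
        assert(in_cum_window_demand(true, false, 1200, ws)); /* slept inside window */
        assert(!in_cum_window_demand(true, false, 800, ws)); /* gone before window */
        assert(!in_cum_window_demand(false, true, 0, ws));   /* other CPU */
        return 0;
}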
#ifdef arch_scale_freq_capacity
#ifndef arch_scale_freq_invariant
#define arch_scale_freq_invariant()     (true)
#endif
#else /* arch_scale_freq_capacity */
#define arch_scale_freq_invariant()     (false)
#endif