Correct the input argument to pass in the valid end address for the dmac
flush range function.
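For illustration, the pattern being fixed looks roughly like the
following sketch ("vaddr" and "size" are hypothetical names; on ARM,
dmac_flush_range() takes start and end virtual addresses, not a start
address and a length):

	/* Before (sketch): the length was passed as the end argument */
	dmac_flush_range(vaddr, (void *)size);

	/* After (sketch): pass the valid end address of the range */
	dmac_flush_range(vaddr, vaddr + size);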
Change-Id: Ib0e9690fc158a76dcebbd5ae45f67aaeca016a48
Signed-off-by: Ram Chandrasekar <rkumbako@codeaurora.org>
Correct the input argument to pass in the valid end address for the dmac
flush range function.
Change-Id: I2bc1eb26bcc7ed4aaa381417045d08b6779679ee
Signed-off-by: Ram Chandrasekar <rkumbako@codeaurora.org>
Correct the input argument to pass in the valid end address for the dmac
flush range function.
Change-Id: Iefcf85eaa5ea5542888269b7506b8f6e0e861243
Signed-off-by: Ram Chandrasekar <rkumbako@codeaurora.org>
Stopping a GSI channel for an IPA producer endpoint includes
sending an IPA DMA_TASK immediate command to the IPA.
If the IPA DL group is in HOLB state, the DMA_TASK
will not be processed and an ACK for it will not be sent to
the driver. In this case the ACK is redundant, as the DL
data will release the IPA TX and a GSI STOP indication
will be sent to S/W.
CRs-fixed: 1078380
Change-Id: I115524d562b63a8ec76b327207919b6ac9327fe2
Signed-off-by: Ghanim Fodi <gfodi@codeaurora.org>
RPM clocks are required to allow clock operations on the clocks
managed by the RPM. Add support for the same.
Change-Id: I622533807c7e4653a7aa3c51bf4e4f0db1a7a5ff
Signed-off-by: Taniya Das <tdas@codeaurora.org>
Add a sysfs interface for the suspend-resume test in the pixart
pat9125 driver. This sysfs node is used for regression testing of
the pixart pat9125 device.
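As a rough sketch of the kind of node this adds (the attribute name,
permissions and callbacks below are illustrative assumptions, not the
actual driver code):

	/* Sketch only: a write-only sysfs node to drive the test. */
	static ssize_t test_suspend_store(struct device *dev,
					  struct device_attribute *attr,
					  const char *buf, size_t count)
	{
		bool suspend;

		if (kstrtobool(buf, &suspend))
			return -EINVAL;

		/*
		 * Exercise the suspend/resume path for regression tests;
		 * pat9125_suspend()/pat9125_resume() are assumed helpers.
		 */
		if (suspend)
			pat9125_suspend(dev);
		else
			pat9125_resume(dev);

		return count;
	}
	static DEVICE_ATTR_WO(test_suspend);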
Change-Id: Ia90832f9280f69c367c5d9f404b0d27c656e5c28
Signed-off-by: Shantanu Jain <shjain@codeaurora.org>
Add debug logs at specific places in the IPA driver
to improve debugging capabilities.
Change-Id: Ibc53bd27a58c90d309a38937d6de6eef62ddc99a
CRs-Fixed: 1073482
Signed-off-by: Ghanim Fodi <gfodi@codeaurora.org>
If the string passed in is very large, bytes_read can be large
enough to overflow "pos" to a small value. This can cause a
potential buffer overflow when "pos" is used again in sscanf.
Fix this by validating bytes_read before it is used.
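A minimal sketch of the fix pattern ("kbuf", "count", "data" and the
sscanf format are assumptions; only "pos" and bytes_read come from
the text above):

	/* Sketch: reject bogus %n results before advancing "pos". */
	if (sscanf(kbuf + pos, "%x %n", &data, &bytes_read) != 1)
		return -EINVAL;
	if (bytes_read > count - pos)
		return -EINVAL;
	pos += bytes_read;	/* "pos" can no longer wrap around */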
CRs-Fixed: 1077693
Change-Id: I59d4472b49b67f481992867a34e6779a4589d035
Signed-off-by: Subbaraman Narayanamurthy <subbaram@codeaurora.org>
During scheduler boost, the sched_task_load ftrace event might not
log the correct flag value. Ensure that the flag is always
initialized with the selected cluster information.
Change-Id: Ia986d0fbc512c8e9ed1b5fb5b2ac4bc564cc4ba9
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Enable the ESD check with the TE method for the jdi qhd command mode
panel on the msmcobalt platform.
Change-Id: Ia03f76cf13d3787e2e13e27ae0360723fe36d615
Signed-off-by: Ingrid Gallardo <ingridg@codeaurora.org>
Revert commit 238f87868d ("soc: qcom: Listen to
SUBSYS_AFTER_SHUTDOWN notification").
The SUBSYS_AFTER_SHUTDOWN notification arrives too late for
subsystems, as they are not able to communicate with the firmware.
CRs-Fixed: 1078743
Change-Id: I61b308dce7e92b0e28033750885eac4a003dc01a
Signed-off-by: Puja Gupta <pujag@codeaurora.org>
The parallel charging code has grown organically.
Clean up the following:
1. Use correct units for all unit-based variables.
2. Use slave percent instead of master percent.
3. Remove parallel master module parameter.
4. Put PARALLEL_DISABLE where it belongs in battery psy.
5. Create a get_jeita_cc_delta function similar to get_step_cc_delta
function.
6. Print errors when returning error codes.
Change-Id: I27ec29c3a6c5f3aac31705e60e1b8cf3270322a1
Signed-off-by: Nicholas Troast <ntroast@codeaurora.org>
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Add support for the Lpass PIL, which facilitates loading the Lpass
firmware, authenticating it, and bringing it out of reset.
Change-Id: I367f4b3afdae9d0f78081e142be34132aaf07ab4
Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
The previous patches in this series introduce the mechanics of CPU
load tracking without fixups for intra-cluster migration, and top
task load tracking. Add a tunable that dictates which of the above
needs to be considered when reporting load to the governor. The
default policy is to take the maximum of the CPU load and the top
task load.
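Conceptually, the default policy reduces to something like this
sketch (the tunable and helper names here are assumptions, not the
exact implementation):

	/* Default: report the max of CPU load and top task load. */
	load = cpu_load(cpu);
	if (reporting_policy == FREQ_REPORT_MAX_CPU_LOAD_TOP_TASK)
		load = max(load, top_task_load(cpu));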
Change-Id: Ie585a11ed774b929910d04c41471db3a2a102ec5
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
find_next_top_index() is responsible for finding the second top task
on a CPU when the top task migrates away from that CPU. This operation
is expensive as we need to iterate the entire array of top tasks to
find the second top task.
Optimize this by introducing bitmaps for tracking top task indices.
There are two bitmaps; one for the previous window and one for the
current window. Each bit in a bitmap tracks whether the corresponding
bucket in the top task hashmap has a non zero refcount. The bit is set
when the refcount becomes non zero and is cleared when it becomes zero.
Finding the second top task upon migration is then simply a matter of
finding the highest set bit in the bitmap.
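A sketch of the bookkeeping, using the kernel's bitmap helpers (the
rq field names and NUM_LOAD_INDICES are assumptions modeled on the
description):

	/* Set/clear a bit as the bucket refcount crosses zero. */
	static void inc_top_task_ref(struct rq *rq, int index)
	{
		if (rq->top_tasks[index]++ == 0)
			__set_bit(index, rq->top_tasks_bitmap);
	}

	static void dec_top_task_ref(struct rq *rq, int index)
	{
		if (--rq->top_tasks[index] == 0)
			__clear_bit(index, rq->top_tasks_bitmap);
	}

	/* Finding the new top task is now a highest-set-bit lookup. */
	static int find_next_top_index(struct rq *rq)
	{
		return find_last_bit(rq->top_tasks_bitmap, NUM_LOAD_INDICES);
	}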
Change-Id: Ibafaf66eed756b0328704dfaa89c17ab0d84e359
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
The previous patches in this rewrite of scheduler-guided frequency
selection reintroduce the part-picture problem that we addressed in
our initial implementation: when tasks migrate across CPUs within a
cluster, we end up losing the complete picture of the sequential
nature of the workload.
This patch aims to solve that problem slightly differently. We track
the top task on every CPU within a window. Top task is defined as the
task that runs the most in a given window. This enhances our ability
to detect the sequential nature of workloads. A single migrating task
executing for an entire window will cause 100% load to be reported
for frequency guidance instead of the maximum footprint left on any
individual CPU in the task's trail. There are cases that this new
approach does not address, namely cases where the sum of two or more
tasks accurately reflects the true sequential nature of the workload.
Future optimizations might aim to tackle that problem.
To track top tasks, we first realize that there is no strict need to
maintain the task struct itself as long as we know the load exerted by
the top task. We also realize that to maintain top tasks on every CPU
we have to track the execution of every single task that runs during
the window. The load associated with a task needs to be migrated when
the task migrates from one CPU to another. When the top task migrates
away, we need to locate the second top task and so on.
Given the above realizations, we use hashmaps to track top task load
both for the current and the previous window. Each hashmap is
implemented as a fixed-size array. The key of the hashmap is given
by task_execution_time_in_a_window / array_size. The size of the array
(number of buckets in the hashmap) dictates the load granularity of each
bucket. The value stored in each bucket is a refcount of all the tasks
that executed long enough to be in that bucket.
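In other words, something along these lines (a sketch; the names and
the bucket count are assumptions, with each bucket covering
window_size / NUM_LOAD_INDICES of runtime):

	#define NUM_LOAD_INDICES	1000	/* assumed bucket count */

	/* Map a task's runtime within the window to a bucket index. */
	static int load_to_index(u64 runtime, u64 window_size)
	{
		u64 granule = window_size / NUM_LOAD_INDICES;
		int index = div64_u64(runtime, granule);

		return min(index, NUM_LOAD_INDICES - 1);
	}

	/* Each bucket refcounts the tasks whose runtime falls in it. */
	rq->top_tasks[load_to_index(runtime, window_size)]++;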
This approach has a few benefits. Firstly, any top task stats update
now takes O(1) time. While task migration is also O(1), it does still
involve scanning up to the entire array to find the second top task.
Further patches will aim to optimize this behavior. Secondly,
and more importantly, not having to store the task struct itself saves
a lot of memory usage in that 1) there is no need to retrieve task
structs later causing cache misses and 2) we don't have to unnecessarily
hold up task memory for up to 2 full windows by calling get_task_struct()
after a task exits.
Change-Id: I004dba474f41590db7d3f40d9deafe86e71359ac
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
In the current frequency guidance implementation the scheduler migrates
task load from the source CPU to the destination CPU when a task migrates.
The underlying assumption is that a task will stay on the destination CPU
following the migration. Hence a CPU's load should reflect the sum of
all tasks that last ran on that CPU prior to window expiration even if
these tasks executed on some other CPU in that window prior to being
migrated.
However, given the ubiquitous nature of migrations, the above
assumption is flawed, causing the scheduler to often add up load on a
single CPU that in reality ran concurrently on multiple CPUs and will
continue to run concurrently in subsequent windows. This leads to
load over-reporting on a single CPU, which in turn causes the CPU
frequency to be higher than necessary.
This is the first patch in a series of patches that attempts to
change how load fixups are done upon migration to prevent load
over-reporting.
In this patch, we stop doing migration fixups for intra-cluster
migrations. Inter-cluster migration fixups are still retained.
In order to achieve the above, we make use of the per-CPU footprint of each
task introduced in the previous patch. Upon inter cluster migration, we
go through every CPU in the source cluster to subtract the migrating
task's contribution to the busy time on each one of those CPUs. The sum
of the contributions is then added to the destination CPU allowing it
to ramp up to the appropriate frequency for that task.
Subtracting load from each of the source CPUs is not trivial,
however, as it would require all runqueue locks to be held. To get
around this we introduce a deferred load subtraction mechanism
whereby subtracting load from each of the source CPUs is deferred
until an opportune moment.
This opportune moment is when the governor comes asking the scheduler
for load. At that time, all necessary runqueue locks are already held.
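Schematically (a sketch; the struct and field names are assumptions
based on the description above):

	/* One deferred-subtraction slot per source CPU (sketch). */
	struct load_subtractions {
		u64 window_start;	/* window the subtraction targets */
		u64 subs;		/* load to subtract, applied later */
	};

	/* On inter-cluster migration: record, don't lock remote rqs. */
	src_rq->load_subs.subs += task_contribution_on_src_cpu;

	/* When the governor queries load, rq->lock is already held: */
	reported = rq->prev_runnable_sum - rq->load_subs.subs;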
There are a few cases to consider when doing deferred subtraction.
Since we are not holding all runqueue locks, other CPUs in the source
cluster can be in a different window than the source CPU that the
task is migrating from.
Case 1: Other CPU in the source cluster is in the same window
No special consideration
Case 2: Other CPU in the source cluster is ahead by 1 window
In this case, we will be doing redundant updates to the subtraction
load for the prev window. There is no way to avoid this redundant
update, though, without holding the rq lock.
Case 3: Other CPU in the source cluster is trailing by 1 window
In this case, we might end up overwriting old data for that CPU. But
this is not a problem as when the other CPU calls update_task_ravg()
it will move to the same window. This relies on maintaining
synchronized windows between CPUs, which is true today.
Finally, we must deal with frequency aggregation. When frequency
aggregation is in effect, there is little point in dealing with the
per-CPU footprint, since the load of all related tasks has to be reported
on a single CPU. Therefore, when a task enters a related group, we
clear out all per-CPU contributions and add their sum to the task
CPU's cpu_time struct. From that point onwards we stop managing
per-CPU contributions upon inter-cluster migrations since that work
is redundant. Finally, when a task exits a related group, we must
walk every CPU and reset all per-CPU contributions. We then set the
task CPU contribution to the respective curr/prev sum values and add
that sum to the task CPU rq runnable sum.
Change-Id: I1f8d596e6c930f3f6f00e24109ddbe8b121f8d6b
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Keeping track of the load footprint of each task on every CPU
that it executed on gives the scheduler much more flexibility in
terms of the number of frequency guidance policies. These new fields
will be used in subsequent patches as we alter the load fixup
mechanism upon task migration. We still need to maintain the
curr/prev_window sums as they will also be required in subsequent
patches as we start to track top tasks based on cumulative load.
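The new fields amount to per-CPU arrays alongside the existing
window sums, roughly (a sketch; the exact layout and names may
differ):

	struct ravg {
		u32 curr_window;	/* existing: total, all CPUs */
		u32 prev_window;	/* existing: total, all CPUs */
		u32 *curr_window_cpu;	/* new: per-CPU footprint,
					 * nr_cpu_ids entries */
		u32 *prev_window_cpu;	/* new: per-CPU footprint,
					 * nr_cpu_ids entries */
	};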
Also, we need to call init_new_task_load() for the idle task. Not
doing so was an existing, harmless bug, as load tracking for the idle
task is irrelevant. However, in this patch we are adding pointers to
the ravg structure, and these pointers have to be initialized even
for the idle task.
Finally, move init_new_task_load() to sched_fork(). This was always
the more appropriate place; however, following the introduction of
new pointers in the ravg struct, it is now necessary to avoid races
with functions such as reset_all_task_stats().
Change-Id: Ib584372eb539706da4319973314e54dae04e5934
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>