GPU speed bin 2 supports an fmax of 560MHz and DDR at 1555MHz.
Add this config to MSM8996v3 to support the required GPU fmax.
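For illustration, a minimal sketch of what the new entry amounts to; the
table and field names below are hypothetical, not the actual MSM8996v3
devicetree or driver code:

  /* Hypothetical speed-bin table, for illustration only. */
  struct gpu_speed_bin {
      unsigned int bin;
      unsigned int gpu_fmax_khz;
      unsigned int ddr_fmax_khz;
  };

  static const struct gpu_speed_bin msm8996v3_speed_bins[] = {
      /* Speed bin 2: GPU fmax 560MHz, DDR 1555MHz */
      { .bin = 2, .gpu_fmax_khz = 560000, .ddr_fmax_khz = 1555000 },
  };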
Change-Id: Ibdf9bb63c7d8f0e980fbf3c192d536adeaeec52d
Signed-off-by: Dumpeti Sathish Kumar <sathyanov14@codeaurora.org>
Use the appropriate lock when adding sparse bindings, to protect the
list of sparse bindings from concurrent updates by multiple threads.
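For illustration, a minimal sketch of the locking pattern; the structure,
field and function names are hypothetical, not the actual driver code:

  #include <linux/list.h>
  #include <linux/mutex.h>
  #include <linux/types.h>

  struct sparse_binding {
      struct list_head node;
      u64 offset;
      u64 size;
  };

  struct sparse_obj {
      struct mutex bind_lock;    /* protects bind_list */
      struct list_head bind_list;
  };

  static void sparse_add_binding(struct sparse_obj *obj,
                                 struct sparse_binding *bind)
  {
      /* Serialize against other threads adding bindings to this object. */
      mutex_lock(&obj->bind_lock);
      list_add_tail(&bind->node, &obj->bind_list);
      mutex_unlock(&obj->bind_lock);
  }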
Change-Id: Ice9750c96fca42f4049ed352533f4722b5166221
Signed-off-by: Lynus Vaz <lvaz@codeaurora.org>
If another thread has already added bindings at the offset we are
trying to add to the sparse object, return an error instead of getting
stuck in an infinite loop.
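For illustration, a sketch of the bail-out path, reusing the hypothetical
types from the sketch above; the actual driver code differs:

  #include <linux/errno.h>

  static int sparse_insert_binding(struct sparse_obj *obj,
                                   struct sparse_binding *new)
  {
      struct sparse_binding *cur;
      int ret = 0;

      mutex_lock(&obj->bind_lock);
      list_for_each_entry(cur, &obj->bind_list, node) {
          /*
           * Another thread already bound this offset: return an error
           * instead of retrying until the range becomes free.
           */
          if (cur->offset == new->offset) {
              ret = -EBUSY;
              goto out;
          }
      }
      list_add_tail(&new->node, &obj->bind_list);
  out:
      mutex_unlock(&obj->bind_lock);
      return ret;
  }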
Change-Id: I6b17c91eccb14c07e13cae24135dfe7b13f3301d
Signed-off-by: Lynus Vaz <lvaz@codeaurora.org>
In WAN ioctls, user-supplied data structures
contain string members, but there is no guarantee
they are null-terminated. Add the string terminator
to prevent string buffer overflow vulnerabilities.
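For illustration, a minimal sketch of the pattern; the handler and structure
names are hypothetical, not the actual WAN ioctl code:

  #include <linux/fs.h>
  #include <linux/if.h>
  #include <linux/uaccess.h>

  struct wan_ioctl_data {
      char if_name[IFNAMSIZ];
      /* ... other members ... */
  };

  static long wan_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
  {
      struct wan_ioctl_data data;

      if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
          return -EFAULT;

      /*
       * Userspace is not required to NUL-terminate if_name, so force a
       * terminator before any strlen()/strcpy()-style use of the buffer.
       */
      data.if_name[IFNAMSIZ - 1] = '\0';

      /* ... handle cmd using data.if_name safely ... */
      return 0;
  }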
Change-Id: I17c06c94aa619a2cd3a678c495a31541a65a7741
Acked-by: Ashok Vuyyuru <avuyyuru@qti.qualcomm.com>
Signed-off-by: Mohammed Javid <mjavid@codeaurora.org>
Currently, when a channel is read in continuous mode and the read
operation fails, the RR_ADC is left enabled in continuous mode.
Disable continuous mode in such cases so that other read
operations that don't need continuous mode can go through.
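For illustration, a sketch of the error handling; struct rradc_chip and the
helper functions are hypothetical, not the actual driver code:

  static int rradc_read_continuous(struct rradc_chip *chip, int chan, int *val)
  {
      int rc;

      rc = rradc_enable_continuous_mode(chip);
      if (rc)
          return rc;

      rc = rradc_do_conversion(chip, chan, val);
      if (rc) {
          /*
           * Don't leave the ADC stuck in continuous mode on a failed
           * read; later reads that don't need continuous mode would
           * otherwise be blocked.
           */
          rradc_disable_continuous_mode(chip);
          return rc;
      }

      return rradc_disable_continuous_mode(chip);
  }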
Change-Id: I2bf257bd535e1e4a30e18b6257e584a5be96b69d
Signed-off-by: Subbaraman Narayanamurthy <subbaram@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
(from https://patchwork.kernel.org/patch/9939405/)
commit 7a4408c6bd3e ("binder: make sure accesses to proc/thread are
safe") made a change to enqueue tcomplete to thread->todo before
enqueuing the transaction. However, in the err_dead_proc_or_thread case,
tcomplete is freed directly without being dequeued, which may corrupt
the thread->todo list.
So, dequeue it before freeing.
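For illustration, a simplified sketch of the idea (not the exact binder.c
error path; in the real driver the unlink happens under the lock that
protects thread->todo):

  #include <linux/list.h>
  #include <linux/slab.h>

  static void cleanup_pending_tcomplete(struct binder_work *tcomplete)
  {
      /*
       * tcomplete is already enqueued on thread->todo.  Unlink it before
       * freeing, otherwise the todo list keeps pointing at freed memory.
       */
      list_del_init(&tcomplete->entry);
      kfree(tcomplete);
  }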
Bug: 65333488
Change-Id: I14ef48095d9f690148b1a50ea62d05dd67779505
Signed-off-by: Xu YiPing <xuyiping@hisilicon.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Git-commit: 86578a0fd70edffb11c78b5df85b8e113e44bfe1
Git-repo: https://android.googlesource.com/kernel/common
Signed-off-by: Kyle Yan <kyan@codeaurora.org>
Add a device-specific flag for the new vmid heap shared feature.
Change-Id: I35cc0073a5fa10c715d520ebb9d77936a6820aa9
Signed-off-by: Tharun Kumar Merugu <mtharu@codeaurora.org>
The current implementation of synchronize_sched_expedited() incorrectly
assumes that resched_cpu() is unconditional, which it is not. This means
that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
fails as follows (analysis by Neeraj Upadhyay):
o CPU1 is waiting for the expedited wait to complete:
    sync_rcu_exp_select_cpus
        rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
        IPI sent to CPU5
    synchronize_sched_expedited_wait
        ret = swait_event_timeout(
                  rsp->expedited_wq,
                  sync_rcu_preempt_exp_done(rnp_root),
                  jiffies_stall);
    expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter())
o CPU5 handles the IPI and fails to acquire the rq lock:
    Handles IPI
        sync_sched_exp_handler
            resched_cpu
                returns after failing to trylock rq->lock
            need_resched is not set
o CPU5 calls rcu_idle_enter() and, since need_resched is not set, goes
  idle (schedule() is not called).
o CPU1 reports an RCU stall.
Given that resched_cpu() is now used only by RCU, this commit fixes the
assumption by making resched_cpu() unconditional.
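A sketch of the resulting resched_cpu(), paraphrased from the mainline fix
(the cpu_online() guard follows the upstream version):

  void resched_cpu(int cpu)
  {
      struct rq *rq = cpu_rq(cpu);
      unsigned long flags;

      /* Acquire rq->lock unconditionally instead of bailing out on trylock failure. */
      raw_spin_lock_irqsave(&rq->lock, flags);
      if (cpu_online(cpu) || cpu == smp_processor_id())
          resched_curr(rq);
      raw_spin_unlock_irqrestore(&rq->lock, flags);
  }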
Change-Id: I67cbf28612004f4b78e355dd00b5abdd0f31ec13
Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Patch-mainline: linux-kernel @ 18/09/17, 09:01
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>