Enable the legacy interrupts immediately after the halt interrupt is
handled, from cmdq_irq() itself.
When a cmdq halt is initiated, the existing logic makes the driver wait
either for the halt interrupt to fire or for a fixed period of time. If
cmdq is halted but the halt interrupt is delayed beyond the wait-timeout
period, the driver disables cmdq interrupts anyway, since cmdq is in the
halt state.
The delayed halt interrupt then fires only the next time cmdq is unhalted
and cmdq interrupts are re-enabled, and this delayed interrupt is treated
as an unexpected interrupt.
By enabling legacy interrupts (i.e., disabling cmdq interrupts) from
cmdq_irq(), we ensure that cmdq interrupts are not disabled until the
halt interrupt has actually fired, which avoids the scenario described
above.
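A minimal sketch of the intended flow, assuming cmdq-hci style names
(CQIS, CQIS_HAC, CQISTE/CQISGE and halt_comp are approximations of the
driver's symbols, not a verbatim diff):

  static irqreturn_t cmdq_irq(int irq, void *dev_id)
  {
          struct cmdq_host *cq_host = dev_id;
          u32 status = cmdq_readl(cq_host, CQIS);

          if (status & CQIS_HAC) {
                  /*
                   * Halt completed: mask the cmdq interrupts right here so
                   * the switch back to legacy interrupts can never race
                   * with a late HAC, and the HAC is never seen as
                   * "unexpected".
                   */
                  cmdq_writel(cq_host, 0, CQISTE);
                  cmdq_writel(cq_host, 0, CQISGE);
                  complete(&cq_host->halt_comp);
          }

          cmdq_writel(cq_host, status, CQIS);     /* acknowledge */
          return IRQ_HANDLED;
  }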
Change-Id: Ic052d41fac789b6390a5d80dfaee91767bdb783f
Signed-off-by: Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
When the system is heavily loaded, interrupt servicing can be affected
and the cmdq halt interrupt handler may be invoked with about a one
second delay. Since the cmdq driver waits only 1 sec for the HAC
interrupt, the delayed interrupt is then treated as an unexpected
interrupt.
To fix this case, increase the timeout to 10 seconds.
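For illustration, the wait ends up looking like this (the helper and
field names are stand-ins; the only real change is the 1 s -> 10 s
window):

  /* wait up to 10 seconds instead of 1 for the HAC interrupt */
  static int cmdq_wait_for_halt(struct cmdq_host *cq_host)
  {
          unsigned long timeout = msecs_to_jiffies(10 * 1000);  /* was 1 * 1000 */

          if (!wait_for_completion_timeout(&cq_host->halt_comp, timeout))
                  return -ETIMEDOUT;

          return 0;
  }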
Change-Id: I55879095aa2b81a10f40963aee02b2068a3305b4
Signed-off-by: Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
Add the ST touch controller device node for MSM8998 HDK835. The touch
controller is connected to the host processor via I2C.
Change-Id: Id94f2feaddfa0c7aca74a52448b652afcd013ed7
Signed-off-by: Jin Fu <jinf@codeaurora.org>
A context may be detached without submitting any commands
to the GPU ringbuffer. This may cause us to wait on a timestamp
that will never be retired. So return immediately from
adreno_drawctxt_wait_rb() if the context has not submitted any
commands to the GPU ringbuffer.
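A sketch of the guard, assuming a per-context record of the last queued
timestamp (the field name and the trailing wait are illustrative, not
the exact kgsl code):

  static int adreno_drawctxt_wait_rb(struct adreno_device *adreno_dev,
                                     struct kgsl_context *context,
                                     unsigned int timestamp,
                                     unsigned int timeout)
  {
          struct adreno_context *drawctxt = ADRENO_CONTEXT(context);

          /*
           * The detached context never queued anything to the ringbuffer,
           * so the timestamp below would never retire; don't wait for it.
           */
          if (!drawctxt->submitted_timestamp)
                  return 0;

          /* ... the existing wait on the ringbuffer timestamp follows ... */
          return 0;
  }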
Change-Id: If8b3f8df92ec9b54a1a83d2f6704d4d15eb1b979
Signed-off-by: Hareesh Gundu <hareeshg@codeaurora.org>
On the internal codec with WSA, wsa881x codec registers accessed over
soundwire require a delay between accesses. Back-to-back register
accesses therefore need to ensure a proper delay; otherwise the
previously filled register value is read back, which results in
speaker mute.
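Sketch of the idea (the helper name and the exact delay are illustrative;
the point is simply to give the soundwire slave time to latch one write
before the next access):

  /* write a wsa881x register and let the soundwire slave settle */
  static int wsa881x_write_settled(struct regmap *rm, unsigned int reg,
                                   unsigned int val)
  {
          int ret = regmap_write(rm, reg, val);

          /* back-to-back accesses over soundwire need a settle delay */
          usleep_range(100, 110);

          return ret;
  }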
CRs-Fixed: 2000566
Change-Id: I6b3441f206a3a9d0531b40d701636d7dd5a74cc0
Signed-off-by: Laxminath Kasam <lkasam@codeaurora.org>
Add #address-cells and #size-cells for the digital audio node present
within the analog codec node.
CRs-Fixed: 2000566
Change-Id: Iaf7ce40e9bcf8a1eabba0552377372fe2dd43ea3
Signed-off-by: Laxminath Kasam <lkasam@codeaurora.org>
The gcc_hmss_ahb_clk will be controlled by RPM. Remove all
control of it from the HLOS clock driver.
Change-Id: I26525787352cb0b85937cc005afba7c37a7989ff
Signed-off-by: Amit Nischal <anischal@codeaurora.org>
vbp_min is calculated as 0 because the variable is compared
incorrectly. Fix this to correct the prefill bandwidth calculation.
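Illustrative only (names and structure are hypothetical): the minimum
must skip the zero-initialised starting value rather than compare
against it, e.g.:

  /* smallest non-zero vertical back porch across the set, 0 if none */
  static u32 calc_vbp_min(const u32 *vbp, int count)
  {
          u32 vbp_min = U32_MAX;
          int i;

          for (i = 0; i < count; i++)
                  if (vbp[i] && vbp[i] < vbp_min)
                          vbp_min = vbp[i];

          return (vbp_min == U32_MAX) ? 0 : vbp_min;
  }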
Change-Id: I80c3db2898111b5e8acdccc35af745dde30c0509
Signed-off-by: Jayant Shekhar <jshekhar@codeaurora.org>
For SDHC version 5.0 onwards, ICE3.0-specific
registers are added in the CQ register space, due to
which a few CQ registers (like CQ_VENDOR_CFG and
CQ_CMD_DBG_RAM) are shifted. This change updates the
CQ register offset for sdm660.
Change-Id: Ie85b8f6c68511dccd2b545bd9cc17c747f3da8e7
Signed-off-by: Sayali Lokhande <sayalil@codeaurora.org>
Randy reported the build error below.
> In file included from ../include/linux/balloon_compaction.h:48:0,
> from ../mm/balloon_compaction.c:11:
> ../include/linux/compaction.h:237:51: warning: 'struct node' declared inside parameter list [enabled by default]
> static inline int compaction_register_node(struct node *node)
> ../include/linux/compaction.h:237:51: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]
> ../include/linux/compaction.h:242:54: warning: 'struct node' declared inside parameter list [enabled by default]
> static inline void compaction_unregister_node(struct node *node)
>
It was caused by non-lru page migration, which needs compaction.h, but
compaction.h doesn't include the headers it needs to be standalone.
I think the proper header for non-lru page migration is migrate.h
rather than compaction.h, because migrate.h already includes the
headers that non-lru page migration needs indirectly, like
isolate_mode_t, migrate_mode and MIGRATEPAGE_SUCCESS.
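As a generic illustration of the warning (not the upstream diff): using
"struct node *" in a prototype before the type is visible limits the
declaration's scope to that one prototype. A header meant to stand alone
must either forward-declare the struct or include the header defining
it; this patch instead moves the non-lru-migration declarations into
<linux/migrate.h>, which already pulls in what they need.

  struct node;    /* forward declaration keeps the prototype well scoped */

  static inline int compaction_register_node(struct node *node)
  {
          return 0;
  }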
[akpm@linux-foundation.org: revert mm-balloon-use-general-non-lru-movable-page-feature-fix.patch temp fix]
Link: http://lkml.kernel.org/r/20160610003304.GE29779@bbox
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Gioh Kim <gi-oh.kim@profitbricks.com>
Cc: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: dd4123f324bbaec7618b677b7bce2b11aee9594e
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: If9a33a471f69bf008f4c2d40f21ff30abce200bb
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
This patch introduces a run-time migration feature for zspage.
For migration, the VM uses the page.lru field, so it is better not to
use the page.next field, which is unified with page.lru, for zsmalloc's
own purposes. To that end, we first derive the first object offset of a
page by runtime calculation instead of storing it in page.index, so
page.index can be used as the link for page chaining instead of
page.next.
In the case of a huge object, the handle is stored in page.index instead
of the next link of the page chain, because a huge object does not need
a next link. get_next_page() therefore needs to identify huge objects
and return NULL; for that, this patch uses the PG_owner_priv_1 page
flag.
For migration, it supports three functions:
* zs_page_isolate
It isolates the zspage that contains the subpage the VM wants to
migrate, removing it from its class so that no one can allocate new
objects from that zspage. A zspage may be isolated once per subpage,
and subsequent isolation attempts for other subpages of the same zspage
should not fail. For that, we introduce a zspage.isolated count: with
it, zs_page_isolate can tell whether the zspage is already isolated for
migration, and if so, a subsequent isolation attempt succeeds without
doing further isolation work.
* zs_page_migrate
First of all, it takes the write side of zspage->lock to prevent
migration of other subpages in the zspage. Then it locks all objects in
the page the VM wants to migrate. The reason all objects in the page
must be locked is the race between zs_map_object and zs_page_migrate:
  zs_map_object                           zs_page_migrate
  pin_tag(handle)
  obj = handle_to_obj(handle)
  obj_to_location(obj, &page, &obj_idx);

                                          write_lock(&zspage->lock)
                                          if (!trypin_tag(handle))
                                                  goto unpin_object

  zspage = get_zspage(page);
  read_lock(&zspage->lock);
If zs_page_migrate didn't do the trypin_tag, the page zs_map_object is
working on could become stale due to migration, and it would crash.
If all of the objects are locked successfully, it copies the content
from the old page to the new one and, finally, creates a new zspage
chain with the new page. And if it was the last isolated subpage in the
zspage, it puts the zspage back into its class.
* zs_page_putback
It returns an isolated zspage to the right fullness_group list if
migrating a page fails. If it finds the zspage is ZS_EMPTY, it queues
the zspage freeing to a workqueue. See below about async zspage
freeing.
This patch introduces asynchronous zspage freeing. It is needed because
we need the page lock to clear PG_movable but, unfortunately, the
zs_free path should be atomic, so the approach is to try to grab the
page lock. If zs_free gets the page lock of all of the pages
successfully, it can free the zspage immediately. Otherwise, it queues a
free request and frees the zspage via a workqueue in process context.
If zs_free finds the zspage is isolated when it tries to free it, it
delays the freeing until zs_page_putback finds it, which then finally
frees the zspage.
In this patch, we expand fullness_list from ZS_EMPTY to ZS_FULL. First
of all, the ZS_EMPTY list is used for delayed freeing. And with the
ZS_FULL list added, whether a zspage is isolated or not can be
identified via a list_empty(&zspage->list) test.
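A hedged skeleton of the isolate/putback pair described above (a sketch,
not the actual code: get_class() is a stand-in helper, and the real
functions also take the per-zspage lock and handle the ZS_EMPTY
workqueue path):

  static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
  {
          struct zspage *zspage = get_zspage(page);
          struct size_class *class = get_class(zspage);   /* stand-in */

          spin_lock(&class->lock);
          if (zspage->isolated == 0)
                  /* first subpage isolated: detach from the fullness list
                   * so no new objects are handed out from this zspage */
                  remove_zspage(class, zspage,
                                get_fullness_group(class, zspage));
          zspage->isolated++;
          spin_unlock(&class->lock);

          return true;
  }

  static void zs_page_putback(struct page *page)
  {
          struct zspage *zspage = get_zspage(page);
          struct size_class *class = get_class(zspage);   /* stand-in */

          spin_lock(&class->lock);
          if (--zspage->isolated == 0)
                  /* last isolated subpage returned: re-file the zspage;
                   * if it is now ZS_EMPTY, freeing is queued to the
                   * workqueue as described above */
                  putback_zspage(class, zspage);
          spin_unlock(&class->lock);
  }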
[minchan@kernel.org: zsmalloc: keep first object offset in struct page]
Link: http://lkml.kernel.org/r/1465788015-23195-1-git-send-email-minchan@kernel.org
[minchan@kernel.org: zsmalloc: zspage sanity check]
Link: http://lkml.kernel.org/r/20160603010129.GC3304@bbox
Link: http://lkml.kernel.org/r/1464736881-24886-12-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 48b4800a1c6af2cdda344ea4e2c843dcc1f6afc9
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: Iee2c3448872d258d5bed71fe4c0870ba186f0d05
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Zsmalloc stores the first free object's <PFN, obj_idx> position in
freeobj in each zspage. If we change it to an index from first_page
instead of a position, it makes page migration simple because we don't
need to correct the other linked-list entries when a page is migrated
out.
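Sketch of what the accessors reduce to once freeobj is a plain index
(field and helper names follow the description above):

  /* freeobj is an object index within the zspage, so it stays valid
   * even if the page holding that object is migrated out */
  static unsigned int get_freeobj(struct zspage *zspage)
  {
          return zspage->freeobj;
  }

  static void set_freeobj(struct zspage *zspage, unsigned int obj)
  {
          zspage->freeobj = obj;
  }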
Link: http://lkml.kernel.org/r/1464736881-24886-11-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: bfd093f5e7f09c1e41c43e7605893069975cd734
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I435ac58d8a87767e884901397358f4a8253ae163
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Currently, putback_zspage frees the zspage under class->lock if its
fullness becomes ZS_EMPTY, but that makes it troublesome to implement
the locking scheme for the new zspage migration. So this patch separates
free_zspage from putback_zspage and frees the zspage outside
class->lock, as preparation for zspage migration.
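The resulting call pattern, roughly (a sketch; the exact signatures in
zsmalloc may differ):

  static void putback_and_maybe_free(struct zs_pool *pool,
                                     struct size_class *class,
                                     struct zspage *zspage)
  {
          enum fullness_group fg;

          spin_lock(&class->lock);
          fg = putback_zspage(class, zspage);     /* re-file only */
          spin_unlock(&class->lock);

          if (fg == ZS_EMPTY)
                  free_zspage(pool, zspage);      /* no class->lock held */
  }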
Link: http://lkml.kernel.org/r/1464736881-24886-10-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 4aa409cab7c39c90f4b725ff22f52bbf5d2fc4e0
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I5cb0c37206d1bb7ceb0a362733103cde335ace39
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
We have squeezed the metadata of a zspage into the first page's
descriptor, so to get the metadata of a subpage we must first find the
first page. But that makes it troublesome to implement the page
migration feature of zsmalloc, because any place that looks up the
first page from a subpage can race with migration of the first page.
IOW, the first page it got could be stale. To prevent that, I tried
several approaches, but they made the code complicated, so I finally
concluded to separate the metadata from the first page. Of course, it
consumes more memory: 16 bytes per zspage on 32-bit at the moment. It
means we lose 1% in the *worst case* (40B/4096B), which I think is not
bad as the cost of maintainability.
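Roughly, the separated metadata looks like this (a sketch; the exact
members, bit widths and locking fields are approximate):

  struct zspage {
          unsigned int class;             /* size class index */
          unsigned int fullness;          /* fullness group */
          unsigned int inuse;             /* objects currently allocated */
          unsigned int freeobj;           /* first free object */
          struct page *first_page;        /* head of the page chain */
          struct list_head list;          /* fullness list linkage */
  };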
Link: http://lkml.kernel.org/r/1464736881-24886-9-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 3783689a1aa82ef27a6418b043dd7a077b8330c5
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I2f0c67d7b85ebfe0655b92e13854fdbde5f26f2b
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
For page migration, we need to create the page chain of a zspage
dynamically, so this patch factors that code out of alloc_zspage.
Link: http://lkml.kernel.org/r/1464736881-24886-8-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: bdb0af7ca8f0e9f4c03a9169a744b22890641b64
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I6e6c925a85afb92ec2e24ea0109b9032f96fcbf0
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
An upcoming patch will change how the zspage metadata is encoded, so
for easy review this patch wraps the metadata-access code in accessor
functions.
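For example (roughly the shape the accessors take at this point, while
the metadata still lives in the first page's descriptor):

  static inline int get_zspage_inuse(struct page *first_page)
  {
          return first_page->inuse;
  }

  static inline void mod_zspage_inuse(struct page *first_page, int val)
  {
          first_page->inuse += val;
  }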
Link: http://lkml.kernel.org/r/1464736881-24886-7-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 4f42047bbde059823fe70381387257a9e3bd229c
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I972605e6a4c048dbf49ea7b52a4b2eb896a2c8d6
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Use the kernel's standard bit spin-lock instead of the custom mess. The
custom version even has a bug: it doesn't disable preemption. The
reason we haven't had any problem is that it has only been used inside
a preemption-disabled section held via the class->lock spinlock. So no
need to go to stable.
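The handle pin helpers then reduce to the standard primitives (this
mirrors the description; HANDLE_PIN_BIT is bit 0 of the word the handle
points at):

  #include <linux/bit_spinlock.h>

  #define HANDLE_PIN_BIT  0

  static void pin_tag(unsigned long handle)
  {
          bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle);
  }

  static int trypin_tag(unsigned long handle)
  {
          return bit_spin_trylock(HANDLE_PIN_BIT, (unsigned long *)handle);
  }

  static void unpin_tag(unsigned long handle)
  {
          bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle);
  }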
Link: http://lkml.kernel.org/r/1464736881-24886-6-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 1b8320b620d6caa5879380f83f3884908ceedd4a
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I07548b9ea31f5379265217f6fa0362befc6f8663
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Every zspage in a size_class has the same maximum number of objects, so
we can move that value into the size_class.
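Sketch of where the field ends up (abridged; the real struct has more
members):

  struct size_class {
          spinlock_t lock;
          int size;                       /* object size in bytes */
          int objs_per_zspage;            /* same for every zspage here */
          int pages_per_zspage;
          unsigned int index;
  };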
Link: http://lkml.kernel.org/r/1464736881-24886-5-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 1fc6e27d7b8613afe6e5c1b8cdf94339a1bce640
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I45f4f956c49628cb44fdaef23c93fbe9bd2a59a4
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Pass GFP flags to zs_malloc() instead of using a fixed mask supplied to
zs_create_pool(), so we can be more flexible, but, more importantly, we
need this to switch zram to per-cpu compression streams -- zram will try
to allocate a handle with preemption disabled in a fast path and switch
to a slow path (using a different gfp mask) if the fast one has failed.
Apart from that, this also aligns the zs_malloc() interface with
zpool/zbud.
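An illustrative caller showing the new signature and the fast/slow path
it enables (the exact gfp masks zram uses are an assumption here):

  static unsigned long alloc_handle(struct zs_pool *pool, size_t size)
  {
          /* fast path: may run with preemption disabled, must not sleep */
          unsigned long handle = zs_malloc(pool, size,
                                           __GFP_NOWARN | GFP_NOWAIT);

          if (!handle)
                  /* slow path: a mask that is allowed to sleep */
                  handle = zs_malloc(pool, size, GFP_NOIO | __GFP_HIGHMEM);

          return handle;
  }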
[sergey.senozhatsky@gmail.com: pass GFP flags to zs_malloc() instead of using a fixed mask]
Link: http://lkml.kernel.org/r/20160429150942.GA637@swordfish
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: d0d8da2dc49dfdfe1d788eaf4d55eb5d4964d926
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I61b1339bd244a3ea4e31eed893aea9a2f3ffe248
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>