Fix clock rate setting in the mxs-mmc driver. Previously, if div2 was 0
then the value for TIMING_CLOCK_RATE would have been 255 instead of 0.
The limits for div1 (TIMING_CLOCK_DIVIDE) and div2 (TIMING_CLOCK_RATE+1)
were also not correctly defined.
Can easily be reproduced on mx23evk: the default clock for high-speed
SDIO cards is 50 MHz. With an SSP_CLK of 28.8 MHz (the default), this
resulted in an actual clock rate of about 56 kHz. Tested on mx23evk.
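For illustration, the corrected field encoding in sketch form (the
shift macro names are placeholders, not the actual driver symbols):

  /* TIMING_CLOCK_RATE stores div2 - 1 in an 8-bit field, so div2 == 0
   * must map to a register value of 0 -- the old code computed
   * div2 - 1 unconditionally, which wraps to 255. */
  unsigned int clock_rate = div2 ? div2 - 1 : 0;

  writel((div1 << TIMING_CLOCK_DIVIDE_SHIFT) |
         (clock_rate << TIMING_CLOCK_RATE_SHIFT),
         host->base + HW_SSP_TIMING);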
Signed-off-by: Koen Beel <koen.beel@barco.com>
Reviewed-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
Currently the tmio-mmc driver contains a recursive runtime PM method
invocation, which leads to a deadlock on a mutex. Avoid it by taking
care not to request DMA too early.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
A recent commit, "mmc: tmio: Share register access functions", swapped
the arguments of a macro and broke DMA with TMIO MMC. This patch swaps
the arguments back.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
This patch uses runtime PM to allow the system to power down the MMC
controller when the MMC clock is switched off.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
Calling mmc_request_done() under a spinlock with interrupts disabled
leads to a recursive spinlock acquisition on the request retry path
and to scheduling in atomic context. This patch fixes both problems
by moving mmc_request_done() to the scheduler workqueue.
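The fix pattern, as a generic sketch (structure and names are
illustrative, not the exact tmio-mmc code): completion is deferred to
a work item so mmc_request_done() runs in process context with no
spinlock held.

  #include <linux/interrupt.h>
  #include <linux/mmc/host.h>
  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  struct example_host {
          struct mmc_host *mmc;
          struct mmc_request *done_mrq;   /* request to complete */
          struct work_struct done_work;   /* INIT_WORK() in probe */
          spinlock_t lock;
  };

  static void example_done_work(struct work_struct *work)
  {
          struct example_host *host =
                  container_of(work, struct example_host, done_work);

          /* No spinlock held here: the core may sleep or re-enter the
           * driver on the retry path without deadlocking. */
          mmc_request_done(host->mmc, host->done_mrq);
  }

  static irqreturn_t example_irq(int irq, void *dev_id)
  {
          struct example_host *host = dev_id;

          spin_lock(&host->lock);
          /* ... ack the hardware, record the finished request in
           * host->done_mrq ... */
          spin_unlock(&host->lock);

          schedule_work(&host->done_work);  /* not mmc_request_done() */
          return IRQ_HANDLED;
  }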
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Chris Ball <cjb@laptop.org>
Ricoh 1180:e823 does not recognize certain types of SD/MMC cards,
as reported at http://launchpad.net/bugs/773524. Lowering the SD
base clock frequency from 200 MHz to 50 MHz fixes this issue. This
solution was suggested by Koji Matsumuro, Ricoh Company, Ltd.
This change has no negative performance effect on standard SD
cards, though it's quite possible that there will be one on
UHS-1 cards.
Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com>
Tested-by: Daniel Manrique <daniel.manrique@canonical.com>
Cc: Koji Matsumuro <matsumur@nts.ricoh.co.jp>
Cc: <stable@kernel.org>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
In the case of an I/O error, the DMA will have been cleaned up in
the MMC interrupt and the request structure pointer will be null.
In that case, it is essential to check if the DMA is over before
dereferencing host->mrq->data.
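The guard, sketched with the field names used above:

  /* On an I/O error the MMC interrupt has already cleaned up the DMA
   * and cleared the request pointer, so check before dereferencing. */
  if (!host->mrq || !host->mrq->data)
          return;
  /* ... host->mrq->data is safe to use below ... */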
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
There are a few places with the same functionality. This patch creates
two functions omap_hsmmc_set_bus_width() and omap_hsmmc_set_bus_mode()
to do the job.
Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
There are two pieces of code which are similar, but not the same.
Each of them contains a bug.
The SYSCTL register should be read before writing to it in
omap_hsmmc_context_restore() to retain the state of the reserved bits.
Before setting the clock divisor and DTO bits, the value read from the
SYSCTL register should be masked properly. We were lucky to have had
no problems with the DTO bits so far; make sure the DTO bits are
cleared properly in omap_hsmmc_set_ios().
Additionally, get rid of msleep(1). The actual time is rarely higher
than 30us on OMAP 3630.
The resulting pieces of code are refactored into the
omap_hsmmc_set_clock() function.
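The resulting read-modify-write, in sketch form (mask and shift names
are illustrative of the SYSCTL layout; clkd and dto hold the new
divisor and timeout values):

  u32 regval;

  /* Read first so the reserved bits are preserved, then clear the
   * old divisor (CLKD) and timeout (DTO) fields before OR-ing in the
   * new values. */
  regval = OMAP_HSMMC_READ(host->base, SYSCTL);
  regval &= ~(CLKD_MASK | DTO_MASK);
  regval |= (clkd << CLKD_SHIFT) | (dto << DTO_SHIFT);
  OMAP_HSMMC_WRITE(host->base, SYSCTL, regval);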
Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
There is similar code in two functions which enable the clock. Refactor
this code to omap_hsmmc_start_clock(). Re-use omap_hsmmc_stop_clock() in
omap_hsmmc_context_restore() as well.
Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
There are two places where the same calculations are done.
Let's split them into a separate function.
In addition, simplify by using the DIV_ROUND_UP kernel macro.
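DIV_ROUND_UP, from <linux/kernel.h>, is ((n) + (d) - 1) / (d). The
split-out helper might look like this sketch (the clamp limit is an
assumption, not the driver's exact value):

  static u16 calc_divisor(struct mmc_ios *ios)
  {
          u16 dsor = 0;

          if (ios->clock) {
                  /* Round up so the divided clock never exceeds the
                   * requested ios->clock. */
                  dsor = DIV_ROUND_UP(OMAP_MMC_MASTER_CLOCK, ios->clock);
                  if (dsor > CLKD_MAX)    /* clamp to the field width */
                          dsor = CLKD_MAX;
          }
          return dsor;
  }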
Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Move the min and max frequency constants to the definition block in
the source file.
Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
CERR and BADA were in the wrong place, and there are only
32, not 35.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
We already check for ongoing async transfers when handling discard
requests, but not in mmc_blk_issue_flush(). This patch fixes that
omission.
Tested with an SDHCI controller and eMMC 4.41.
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Per Forlin <per.forlin@linaro.org>
Cc: <stable@kernel.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Add documentation about the background and the design of mmc
non-blocking requests, plus host driver guidelines to minimize
request preparation overhead.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Chris Ball <cjb@laptop.org>
This patch adds support for the CSR panel built by XAT.
Signed-off-by: Ice Chien <ice.chien@accupoint.com.tw>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
The following symbols are not referenced outside this file, so
there is no need for them to be in the global namespace.
pcmidi_sustained_note_release
init_sustain_timers
stop_sustain_timers
pcmidi_handle_report
pcmidi_setup_extra_keys
pcmidi_snd_initialise
pcmidi_snd_terminate
Make them static.
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Some gcc versions warn about prototypes without "inline" when the declaration
includes the "inline" keyword. The fix generates a false error message
"marked inline, but without a definition" with sparse below 0.4.2.
Signed-off-by: Chris Friesen <chris.friesen@genband.com>
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
If overlapping networks with different interfaces were added to
the set, the type did not handle it properly. Example:
ipset create test hash:net,iface
ipset add test 192.168.0.0/16,eth0
ipset add test 192.168.0.0/24,eth1
Now, if a packet was sent from 192.168.0.0/24,eth0, the type returned
a match.
In the patch the algorithm is fixed in order to correctly handle
overlapping networks.
Limitation: the same network cannot be stored with more than 64 different
interfaces in a single set.
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
The method for calculating the number of outstanding buffers gives
incorrect results when vq->upend_idx wraps around zero.
Fix that.
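Index arithmetic that stays correct across the wrap, as a sketch
(RING_SIZE stands in for the ring capacity; the actual vhost code is
structured differently):

  /* Outstanding entries between done_idx and upend_idx.  The plain
   * difference (upend_idx - done_idx) goes wrong once upend_idx wraps
   * past zero; adding RING_SIZE before the modulo keeps the result
   * in [0, RING_SIZE). */
  static inline unsigned int outstanding_bufs(unsigned int upend_idx,
                                              unsigned int done_idx)
  {
          return (upend_idx + RING_SIZE - done_idx) % RING_SIZE;
  }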
Signed-off-by: Shirley Ma <xma@us.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The non-debug variant of mutex_destroy is a no-op, currently
implemented as a macro which does nothing. This approach fails
to check the type of the parameter, so an error would only show
up once debugging gets enabled. Using an inline function instead
offers type checking for earlier bug catching.
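The before/after shape, sketched (the debug variant keeps its real
function body):

  /* Before: a no-op macro.  mutex_destroy(&some_random_struct)
   * compiles silently until CONFIG_DEBUG_MUTEXES is turned on. */
  #define mutex_destroy(mutex)            do { } while (0)

  /* After: a no-op inline function.  The argument must be a
   * struct mutex * in every configuration, so a wrong type fails
   * to compile immediately. */
  static inline void mutex_destroy(struct mutex *lock) {}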
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110716174200.41002352@endymion.delvare
Signed-off-by: Ingo Molnar <mingo@elte.hu>
On backend change, we flushed out outstanding skbs
but forgot to update the used ring, so that
done entries were left in the ubuf_info ring.
As a result we lose heads or complete incorrect ones,
crashing the guest or leaking memory.
Fix by updating the used ring.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
For improving insight into IO completion behaviour.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
The list of active AIL cursors uses a roll-your-own linked list with
special casing for the AIL push cursor. Simplify this code by
replacing the list with standard struct list_head lists, and use a
separate list_head to track the active cursors. This allows us to
treat the AIL push cursor as a generic cursor rather than as a
special case, further simplifying the code.
Further, fix the duplicate push cursor initialisation that the
special case handling was hiding, and clean up all the comments
around the active cursor list handling.
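The standard pattern this moves to, sketched (struct and field names
are illustrative, not the exact XFS code):

  #include <linux/list.h>

  struct ail_cursor {
          struct list_head list;          /* link on the cursor list */
          struct xfs_log_item *item;      /* next item to visit, or NULL */
  };

  /* All active cursors, the push cursor included, live on one plain
   * list -- no roll-your-own links, no special casing. */
  static LIST_HEAD(active_cursors);

  static void cursor_init(struct ail_cursor *cur)
  {
          cur->item = NULL;
          list_add_tail(&cur->list, &active_cursors);
  }

  static void cursor_done(struct ail_cursor *cur)
  {
          list_del(&cur->list);
  }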
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
xfs_trans_ail_cursor_set() doesn't set the cursor to the current log
item, it sets it to the next item. There is already a function for
doing this - xfs_trans_ail_cursor_next() - and the _set function is
simply a two line wrapper. Remove it and open code the setting of
the cursor in the two locations that call it to remove the
confusion.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Delayed logging can insert tens of thousands of log items into the
AIL at the same LSN. When the committing of log commit records
occurs, we can get insertions at an LSN that is not at the
end of the AIL. If there are thousands of items in the AIL on the
tail LSN, each insertion has to walk the AIL to find the correct
place to insert the new item into the AIL. This can consume large
amounts of CPU time and block other operations from occurring while
the traversals are in progress.
To avoid this repeated walk, use an AIL cursor to record
where we should be inserting the new items into the AIL without
having to repeat the walk. The cursor infrastructure already
provides this functionality for push walks, so this is a simple
extension of existing code. While this will not avoid the initial walk, it
will avoid repeating it tens of thousands of times during a single
checkpoint commit.
This version includes logic improvements from Christoph Hellwig.
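The idea in generic sketch form (not the actual XFS routine): the
cursor remembers where the previous item went in, so a burst of
insertions at one LSN resumes from that point instead of rescanning
from the head.

  struct log_item {
          struct list_head list;
          u64 lsn;
  };

  struct insert_cursor {
          struct list_head *pos;  /* where the previous insert landed */
  };

  static void ail_insert(struct list_head *ail, struct insert_cursor *cur,
                         struct log_item *new)
  {
          struct list_head *p = cur->pos ? cur->pos : ail;

          /* Walk forward from the cursor only; for a checkpoint's
           * burst of equal-LSN items this is O(1) per insert. */
          while (p->next != ail &&
                 list_entry(p->next, struct log_item, list)->lsn <= new->lsn)
                  p = p->next;

          list_add(&new->list, p);        /* insert right after p */
          cur->pos = &new->list;          /* remember for next time */
  }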
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
On xfs exports, nfsd is incorrectly returning ENOENT instead of
ESTALE on attempts to use a filehandle of a deleted file (spotted
with pynfs test PUTFH3). The ENOENT was coming from xfs_iget.
(It's tempting to wonder whether we should just map all xfs_iget
errors to ESTALE, but I don't believe so--xfs_iget can also return
ENOMEM at least, which we wouldn't want mapped to ESTALE.)
While we're at it, the other return of ENOENT in xfs_nfs_get_inode()
also looks wrong.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Remove the second parameter to xfs_sb_count() since all callers of
the function set it.
Also, fix the header comment regarding it being called periodically.
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
sched: Avoid creating superfluous NUMA domains on non-NUMA systems
sched: Allow for overlapping sched_domain spans
sched: Break out cpu_power from the sched_group structure
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
x86, reboot: Make Dell Latitude E6320 use reboot=pci
x86, doc only: Correct real-mode kernel header offset for init_size
x86: Disable AMD_NUMA for 32bit for now
After the runtime conversion to handle clk, the iclk node is not used.
However, the fclk node is still used to get the clock rate.
Signed-off-by: Balaji T K <balajitk@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
* Add runtime PM support to the HSMMC host controller.
* Use runtime PM APIs to enable/disable the HSMMC clock.
* Use runtime autosuspend APIs to enable the auto-suspend delay.
Based on OMAP HSMMC runtime implementation by Kevin Hilman and
Kishore Kadiyala.
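The usual autosuspend pattern looks like this sketch (the delay value
is an assumption):

  /* probe: */
  pm_runtime_enable(host->dev);
  pm_runtime_set_autosuspend_delay(host->dev, 100);  /* ms, assumed */
  pm_runtime_use_autosuspend(host->dev);

  /* around register access (request/set_ios paths): */
  pm_runtime_get_sync(host->dev);
  /* ... talk to the controller ... */
  pm_runtime_mark_last_busy(host->dev);
  pm_runtime_put_autosuspend(host->dev);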
Signed-off-by: Balaji T K <balajitk@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
The lazy_disable framework in OMAP HSMMC manages multiple low-power
states, and the card is powered off after an inactivity time of
8 seconds. Based on previous discussion on the list, card power
(regulator) handling (when to power OFF/ON) should ideally be handled
by the core layer. Remove usage of lazy disable to allow the core
layer _only_ to handle card power. With the removal of the lazy
disable framework, MMC regulators are left ON until MMC_POWER_OFF
via set_ios.
Signed-off-by: Balaji T K <balajitk@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
A non-default drive strength cannot be set automatically. It is a
function of the board design, and it can be set only if there is a
specific platform handler. The platform handler needs to take the
board design into account, so pass the necessary information to the
platform code.
For example: The card and host controller may indicate they support HIGH
and LOW drive strength. There is no way to know what should be chosen
without specific board knowledge. Setting HIGH may lead to reflections
and setting LOW may not suffice. There is no mechanism (like ethernet
duplex or speed pulses) to determine what should be done automatically.
If no platform handler is defined -- use the default value.
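One way the board hook can look, sketched (the constants and callback
shape are assumptions for illustration, not the actual platform API):

  #define DRV_TYPE_B      0x01    /* default drive strength */
  #define DRV_TYPE_A      0x02    /* higher drive strength */

  static unsigned int board_select_drive_strength(unsigned int host_drv,
                                                  unsigned int card_drv)
  {
          /* This board has long traces to the socket: pick the
           * stronger type A when both host and card advertise it,
           * otherwise keep the default type B. */
          if ((host_drv & DRV_TYPE_A) && (card_drv & DRV_TYPE_A))
                  return DRV_TYPE_A;
          return DRV_TYPE_B;
  }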
Signed-off-by: Philip Rakity <prakity@marvell.com>
Reviewed-by: Arindam Nath <arindam.nath@amd.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Change mmc_blk_issue_rw_rq() to become asynchronous.
The execution flow looks like this (a simplified sketch follows the list):
* The mmc-queue calls issue_rw_rq(), which sends the request
to the host and returns back to the mmc-queue.
* The mmc-queue calls issue_rw_rq() again with a new request.
* This new request is prepared in issue_rw_rq(), then it waits for
the active request to complete before pushing it to the host.
* When the mmc-queue is empty it will call issue_rw_rq() with a NULL
req to finish off the active request without starting a new request.
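In simplified sketch form (helper names are illustrative, not the
actual block driver code):

  static void issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
  {
          if (rqc)
                  prepare_request(mq->mqrq_cur, rqc); /* sg setup etc. */

          if (mq->mqrq_prev->req) {
                  /* Wait for the request already on the host and hand
                   * its blocks back to the block layer. */
                  wait_for_request_done(mq->mqrq_prev);
                  finish_request(mq->mqrq_prev);
          }

          if (rqc) {
                  start_request(mq->mqrq_cur);    /* returns at once */
                  swap(mq->mqrq_cur, mq->mqrq_prev);
          }
  }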
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Add an additional mmc queue request instance to make way for two active
block requests. One request may be active while the other request is
being prepared.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Break out code without functional changes. This simplifies the code and
makes way for handling two parallel requests.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Break out code from mmc_blk_issue_rw_rq to create a block request
prepare function. This doesn't change any functionality. This helps
when handling more than one active block request.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
The way the request data is organized in the mmc queue struct, it only
allows processing of one request at a time. This patch adds a new struct
to hold mmc queue request data such as sg list, request, blk request and
bounce buffers, and updates any functions depending on the mmc queue
struct. This prepares for using multiple active requests in one mmc queue.
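Sketched, the container looks roughly like this (field list abridged
and illustrative; the cur/prev bookkeeping arrives with the later
patches in this series):

  struct mmc_queue_req {
          struct request          *req;         /* block layer request */
          struct mmc_blk_request  brq;          /* mmc cmd/data descriptors */
          struct scatterlist      *sg;
          char                    *bounce_buf;
          struct scatterlist      *bounce_sg;
          unsigned int            bounce_sg_len;
  };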
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Add a test that measures how the mmc bandwidth depends on the number of
sg elements in the sg list. The transfer size is fixed and the sg length
goes from a few elements up to 512. The purpose is to measure the
overhead caused by multiple sg elements.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Add four tests for read and write performance at
different transfer sizes, 4k to 4M:
* Read using blocking mmc request
* Read using non-blocking mmc request
* Write using blocking mmc request
* Write using non-blocking mmc request
The host driver must support pre_req() and post_req()
in order to run the non-blocking test cases.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Add a debugfs file "testlist" to print all available tests.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
pre_req() runs dma_map_sg() and prepares the dma descriptor for the next
mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req() is not
called before mmci_request(), mmci_request() will prepare the cache and
dma just as it did before. Using pre_req() and post_req() is optional
for mmci.
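The two hooks, in simplified sketch form (not the exact mmci code):

  static void example_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
                              bool is_first_req)
  {
          struct mmc_data *data = mrq->data;

          if (!data)
                  return;
          /* Map the sg list and build the dma descriptor now, while a
           * previous transfer may still be running. */
          if (dma_map_sg(mmc_dev(mmc), data->sg, data->sg_len,
                         data->flags & MMC_DATA_READ ?
                         DMA_FROM_DEVICE : DMA_TO_DEVICE) > 0)
                  data->host_cookie = 1;  /* mark as prepared */
  }

  static void example_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
                               int err)
  {
          struct mmc_data *data = mrq->data;

          if (data && data->host_cookie) {
                  dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
                               data->flags & MMC_DATA_READ ?
                               DMA_FROM_DEVICE : DMA_TO_DEVICE);
                  data->host_cookie = 0;
          }
  }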
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
pre_req() runs dma_map_sg(); post_req() runs dma_unmap_sg(). If
pre_req() is not called before omap_hsmmc_request(), dma_map_sg will be
issued before starting the transfer. Using pre_req() is optional; if
pre_req() is issued, post_req() must be called as well.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Previously there has only been one function, mmc_wait_for_req(),
to start and wait for a request. This patch adds:
* mmc_start_req() - starts a request without waiting.
  If there is an ongoing request, wait for completion
  of that request, then start the new one and return.
  Does not wait for the new command to complete.
This patch also adds new function members in struct mmc_host_ops
only called from core.c:
* pre_req - asks the host driver to prepare for the next job
* post_req - asks the host driver to clean up after a completed job
The intention is to use pre_req() and post_req() to do cache maintenance
while a request is active. pre_req() can be called while a request is
active to minimize the latency to start the next job. post_req() can be
used after the next job is started, to clean up the request. This will
minimize the host driver request-end latency. post_req() is typically
used before ending the block request and handing over the buffer to the
block layer.
Add a host-private member in mmc_data to be used by pre_req to mark the
data. The host driver will then check this mark to see if the data is
prepared or not.
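The resulting core-facing shape, sketched (signatures and the cookie
type may differ in detail from the merged code):

  struct mmc_host_ops {
          /* ... existing members ... */

          /* Prepare the next request (e.g. dma_map_sg()) while
           * another request may still be active. */
          void (*pre_req)(struct mmc_host *host, struct mmc_request *req,
                          bool is_first_req);

          /* Clean up a completed request (e.g. dma_unmap_sg()). */
          void (*post_req)(struct mmc_host *host, struct mmc_request *req,
                           int err);
  };

  struct mmc_data {
          /* ... existing members ... */
          int host_cookie;        /* host-private: non-zero if prepared */
  };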
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>