Commit graph

44942 commits

Andy Grover
e3d6f909ed target: Core cleanups from AGrover (round 1)
This patch contains the squashed version of a number of cleanups and
minor fixes from Andy's initial series (round 1) for target core this
past spring.  The condensed log looks like:

target: use errno values instead of returning -1 for everything
target: Rename transport_calc_sg_num to transport_init_task_sg
target: Fix leak in error path in transport_init_task_sg
target/pscsi: Remove pscsi_get_sh() usage
target: Make two runtime checks into WARN_ONs
target: Remove hba queue depth and convert to spin_lock_irq usage
target: dev->dev_status_queue_obj is unused
target: Make struct se_queue_req.cmd type struct se_cmd *
target: Remove __transport_get_qr_from_queue()
target: Rename se_dev->g_se_dev_list to se_dev_node
target: Remove struct se_global
target: Simplify scsi mib index table code
target: Make dev_queue_obj a member of se_device instead of a pointer
target: remove extraneous returns at end of void functions
target: Ensure transport_dump_vpd_ident_type returns null-terminated str
target: Function pointers don't need to use '&' to be assigned
target: Fix comment in __transport_execute_tasks()
target: Misc style cleanups
target: rename struct pr_reservation_template to pr_reservation
target: Remove #defines that just perform indirection
target: Inline transport_get_task_from_execute_queue()
target: Minor header comment fixes

Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-07-22 09:37:43 +00:00
Nicholas Bellinger
efa4988d72 target: Remove unnecessary *cdb transport_get_lun_for_cmd parameter
This patch removes the now unnecessary 'unsigned char *cdb' function
parameter from transport_get_lun_for_cmd().  This also includes updating
lio-target, tcm_loop and tcm_fc usage of transport_get_lun_for_cmd().

Reported-by: Fubo Chen <fubo.chen@gmail.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
2011-07-22 09:37:42 +00:00
Takashi Iwai
7d339ae997 Merge branch 'topic/misc' into for-linus 2011-07-22 08:43:24 +02:00
Takashi Iwai
13b137ef03 Merge branch 'topic/asoc' into for-linus 2011-07-22 08:43:19 +02:00
Rusty Russell
5dea1c88ed lguest: use a special 1:1 linear pagetable mode until first switch.
The Host used to create some page tables for the Guest to use at the
top of Guest memory; it would then tell the Guest where this was.  In
particular, it created linear mappings for 0 and 0xC0000000 addresses
because lguest used to switch to its real page tables quite late in
boot.

However, since d50d8fe19, Linux has initialized boot page tables in
head_32.S even before the "are we lguest?" boot jump.  So now we can
simplify things: the Host pagetable code assumes 1:1 linear mapping
until it first calls the LHCALL_NEW_PGTABLE hypercall, which we now do
before we reach C code.

This also means that the Host doesn't need to know anything about the
Guest's PAGE_OFFSET.  (Non-Linux guests might not even have such a
thing).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2011-07-22 14:39:48 +09:30
Eric Dumazet
87c48fa3b4 ipv6: make fragment identifications less predictable
IPv6 fragment identification generation is way behind what we use for
IPv4: it uses a single generator. That is not scalable and allows DOS
attacks.

Now that inetpeer is IPv6 aware, we can use it to provide a more secure and
scalable frag ident generator (per destination, instead of system wide).

This patch:
1) defines a new secure_ipv6_id() helper
2) extends inet_getid() to provide 32bit results
3) extends ipv6_select_ident() with a new dest parameter

Reported-by: Fernando Gont <fernando@gont.com.ar>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 21:25:58 -07:00
Choi, Jong-Hwan
e3dbc46c2a net: Kobj and queues_kset should be used when CONFIG_XPS is enabled
Kobj and queues_kset are used with CONFIG_XPS=y.

Signed-off-by: Choi, Jong-Hwan <jhbird.choi@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 20:39:47 -07:00
Giuseppe CAVALLARO
36bcfe7d74 stmmac: unify MAC and PHY configuration parameters (V2)
Prior to this change, most PHY configuration parameters were passed
into the STMMAC device as a separate PHY device. As well as being
unusual, this made it difficult to make changes to the MAC/PHY
relationship.

This patch moves all the PHY parameters into the MAC configuration
structure, mainly as a separate structure. This allows us to completely
ignore the MDIO bus attached to a stmmac if desired, and not create
the PHY bus. It also allows the stmmac driver to use a different PHY
from the one it is connected to, for example a fixed PHY or bit banging
PHY.

The stmmac/PHY connection type (MII/RMII etc.) is also derived from the
mode that can be passed into <platf>_configure_ethernet.
The STLinux kernel at git://git.stlinux.com/stm/linux-sh4-2.6.32.y.git
provides several examples of how to use this new infrastructure (which
is actually easier to maintain and clearer).

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 15:29:16 -07:00
Jiri Pirko
536d1d4a07 vlan: move vlan_group_[gs]et_device to private header
There are no users outside the vlan code.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:58 -07:00
Jiri Pirko
7890a5b9cb vlan: kill ndo_vlan_rx_register
has no users so remove it

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:58 -07:00
Jiri Pirko
ffcf9b7672 vlan: kill vlan_gro_frags and vlan_gro_receive
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:57 -07:00
Jiri Pirko
a4aeb26628 vlan: kill __vlan_hwaccel_rx and vlan_hwaccel_rx
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:56 -07:00
Jiri Pirko
6dacaddd48 vlan: kill vlan_hwaccel_receive_skb
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:56 -07:00
Jiri Pirko
9fea03302a lro: do vlan cleanup
- remove useless vlan parameters and pointers

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:54 -07:00
Jiri Pirko
0f7257281d lro: kill lro_vlan_hwaccel_receive_frags
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:54 -07:00
Jiri Pirko
7756a96e19 lro: kill lro_vlan_hwaccel_receive_skb
no longer used

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:54 -07:00
Jiri Pirko
cec9c13363 vlan: introduce __vlan_find_dev_deep()
Since vlan_group_get_device and vlan_group are not going to be accessible
from device drivers, introduce a function that substitutes for them.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:53 -07:00
Jiri Pirko
f605234066 vlan: finish removing vlan_find_dev from public header
The else case had been forgotten.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-21 13:47:53 -07:00
David S. Miller
033b1142f4 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
	net/bluetooth/l2cap_core.c
2011-07-21 13:38:42 -07:00
H. Peter Anvin
ae7bd11b47 clocksource: Change __ARCH_HAS_CLOCKSOURCE_DATA to a CONFIG option
The machinery for __ARCH_HAS_CLOCKSOURCE_DATA assumed a file in
asm-generic would be the default for architectures without their own
file in asm/, but that is not how it works.

Replace it with a Kconfig option instead.

Link: http://lkml.kernel.org/r/4E288AA6.7090804@zytor.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Andy Lutomirski <luto@mit.edu>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Tony Luck <tony.luck@intel.com>
2011-07-21 13:34:05 -07:00
David S. Miller
f5caadbb3d Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/kaber/nf-next-2.6 2011-07-21 12:39:35 -07:00
Ingo Molnar
994bf1c922 Merge branch 'linus' into sched/core
Merge reason: pick up the latest scheduler fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-21 18:00:01 +02:00
Phil Carmody
497888cf69 treewide: fix potentially dangerous trailing ';' in #defined values/expressions
All these are instances of
  #define NAME value;
or
  #define NAME(params_opt) value;

These of course fail to build when used in contexts like
  if(foo $OP NAME)
  while(bar $OP NAME)
and may silently generate the wrong code in contexts such as
  foo = NAME + 1;    /* foo = value; + 1; */
  bar = NAME - 1;    /* bar = value; - 1; */
  baz = NAME & quux; /* baz = value; & quux; */

Reported on comp.lang.c,
Message-ID: <ab0d55fe-25e5-482b-811e-c475aa6065c3@c29g2000yqd.googlegroups.com>
Initial analysis of the dangers provided by Keith Thompson in that thread.

There are many more instances of more complicated macros having unnecessary
trailing semicolons, but this pile seems to be all of the cases of simple
values suffering from the problem. (Thus things that are likely to be found
in one of the contexts above, more complicated ones aren't.)

Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2011-07-21 14:10:00 +02:00
Chris Friesen
0f598f0b4c netfilter: ipset: fix compiler warnings "'hash_ip4_data_next' declared inline after being called"
Some gcc versions warn about prototypes without "inline" when the declaration
includes the "inline" keyword. The fix generates a false error message
"marked inline, but without a definition" with sparse below 0.4.2.

Signed-off-by: Chris Friesen <chris.friesen@genband.com>
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
2011-07-21 12:07:10 +02:00
Jozsef Kadlecsik
89dc79b787 netfilter: ipset: hash:net,iface fixed to handle overlapping nets behind different interfaces
If overlapping networks with different interfaces were added to
the set, the type did not handle it properly. Example:
    ipset create test hash:net,iface
    ipset add test 192.168.0.0/16,eth0
    ipset add test 192.168.0.0/24,eth1

Now, if a packet was sent from 192.168.0.0/24,eth0, the type returned
a match.

In the patch the algorithm is fixed in order to correctly handle
overlapping networks.

Limitation: the same network cannot be stored with more than 64 different
interfaces in a single set.

Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
2011-07-21 12:06:18 +02:00
Jozsef Kadlecsik
a6a7b759ba netfilter: ipset: make possible to hash some part of the data element only
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Patrick McHardy <kaber@trash.net>
2011-07-21 12:05:31 +02:00
Jean Delvare
4582c0a486 mutex: Make mutex_destroy() an inline function
The non-debug variant of mutex_destroy is a no-op, currently
implemented as a macro which does nothing. This approach fails
to check the type of the parameter, so an error would only show
when debugging gets enabled. Using an inline function instead
offers type checking for earlier bug catching.
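
For illustration, a minimal before/after sketch of the idea (the exact
kernel definitions may differ slightly):

    /* Before: a no-op macro that silently accepts an argument of any type. */
    #define mutex_destroy(mutex) do { } while (0)

    /* After: still a no-op, but the compiler now checks the argument's type. */
    static inline void mutex_destroy(struct mutex *lock)
    {
    }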

Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110716174200.41002352@endymion.delvare
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-21 09:36:09 +02:00
Ingo Molnar
40bcea7bbe Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core 2011-07-21 09:32:40 +02:00
Ingo Molnar
492f73a303 Merge branch 'perf/urgent' into perf/core
Merge reason: pick up the latest fixes - they won't make v3.0.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-21 09:29:21 +02:00
Wanlong Gao
295cc522c2 fs: update the NOTE of the file_operations structure
The big kernel lock has been removed, and setlease now uses lock_flocks()
to take the special spinlock file_lock_lock (Matthew's change).
So just remove the out-of-date NOTE.

Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:48:05 -04:00
Josef Bacik
02c24a8218 fs: push i_mutex and filemap_write_and_wait down into ->fsync() handlers
Btrfs needs to be able to control how filemap_write_and_wait_range() is called
in fsync to make it less of a painful operation, so push the taking of i_mutex
and the call to filemap_write_and_wait() down into the ->fsync() handlers.
Some filesystems can apparently drop taking the i_mutex altogether, like ext3
and ocfs2.  For correctness' sake I just pushed everything down in all cases
to make sure that we keep the current behavior the same for everybody, and then
each individual fs maintainer can make up their mind about what to do from there.
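
As a rough sketch of what a pushed-down handler ends up doing (foo_fsync and
the metadata-flush placeholder are made-up names, and the range-taking
prototype is shown as I understand it, not quoted from the patch):

    static int foo_fsync(struct file *file, loff_t start, loff_t end, int datasync)
    {
            struct inode *inode = file->f_mapping->host;
            int err;

            /* previously done by the VFS before calling ->fsync() */
            err = filemap_write_and_wait_range(file->f_mapping, start, end);
            if (err)
                    return err;

            mutex_lock(&inode->i_mutex);
            /* ... filesystem-specific metadata flush ... */
            mutex_unlock(&inode->i_mutex);
            return err;
    }
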
Thanks,

Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:59 -04:00
Josef Bacik
982d816581 fs: add SEEK_HOLE and SEEK_DATA flags
This just gets us ready to support the SEEK_HOLE and SEEK_DATA flags.  Turns out
using fiemap in things like cp causes more problems than it solves, so let's try
and give userspace an interface that doesn't suck.  We need to match Solaris
here, and the definitions are

   *o* If /whence/ is SEEK_HOLE, the offset of the start of the
   next hole greater than or equal to the supplied offset
   is returned. The definition of a hole is provided near
   the end of the DESCRIPTION.

   *o* If /whence/ is SEEK_DATA, the file pointer is set to the
   start of the next non-hole file region greater than or
   equal to the supplied offset.

So in the generic case the entire file is data and there is a virtual hole at
the end.  That means we will just return i_size for SEEK_HOLE and will return
the same offset for SEEK_DATA.  This is how Solaris does it so we have to do it
the same way.
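
A sketch of that generic behaviour (the function name and error handling are
illustrative, not the exact lseek code):

    /* Generic case: the whole file is data with one virtual hole at EOF. */
    static loff_t example_seek_hole_data(struct inode *inode, loff_t offset, int whence)
    {
            if (offset >= i_size_read(inode))
                    return -ENXIO;                  /* nothing at or past EOF */

            switch (whence) {
            case SEEK_HOLE:
                    return i_size_read(inode);      /* virtual hole at end of file */
            case SEEK_DATA:
                    return offset;                  /* everything before EOF is data */
            }
            return -EINVAL;
    }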

Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:56 -04:00
Kay Sievers
f15146380d fs: seq_file - add event counter to simplify poll() support
Moving the event counter into the dynamically allocated 'struct seq_file'
allows poll() support without the need to allocate its own tracking
structure.

All current users are switched over to use the new counter.
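
A sketch of how a poll() implementation can use such a counter (the field name
poll_event and all example_* names are assumptions for illustration):

    static DECLARE_WAIT_QUEUE_HEAD(example_waitq);
    static atomic_t example_event = ATOMIC_INIT(0);

    /* Producer: bump the counter and wake pollers whenever the content changes. */
    static void example_content_changed(void)
    {
            atomic_inc(&example_event);
            wake_up(&example_waitq);
    }

    /* Consumer: flag the seq_file as stale when the counters no longer match. */
    static unsigned int example_poll(struct file *file, poll_table *wait)
    {
            struct seq_file *m = file->private_data;

            poll_wait(file, &example_waitq, wait);
            if (m->poll_event != atomic_read(&example_event))
                    return POLLERR | POLLPRI;
            return POLLIN | POLLRDNORM;
    }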

Requested-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: NeilBrown <neilb@suse.de>
Tested-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:50 -04:00
Christoph Hellwig
aacfc19c62 fs: simplify the blockdev_direct_IO prototype
Simple filesystems always pass inode->i_sb_bdev as the block device
argument, and never need an end_io handler.  Let's simplify things for
them and for my grepping activity by dropping these arguments.  The
only thing not falling into that scheme is ext4, which passes an
end_io handler without needing special flags (yet), but given how
messy the direct I/O code there is, use of __blockdev_direct_IO
in one instead of two out of three cases isn't going to make a large
difference anyway.
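
A before/after sketch of a typical caller inside a filesystem's ->direct_IO
method (the surrounding variables, foo_get_block and the exact argument order
are illustrative):

    /* Before: block device and end_io handler passed explicitly. */
    ret = blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iov,
                             offset, nr_segs, foo_get_block, NULL);

    /* After: simple filesystems only name their get_block helper. */
    ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
                             foo_get_block);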

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:49 -04:00
Christoph Hellwig
11b80f459a rw_semaphore: remove up/down_read_non_owner
Now that the last user is gone, these can be removed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:47 -04:00
Christoph Hellwig
bd5fe6c5eb fs: kill i_alloc_sem
i_alloc_sem is a rather special rw_semaphore.  It's the last one that may
be released by a non-owner, and its write side is always mirrored by
real exclusion.  Its intended use is to wait for all pending direct I/O
requests to finish before starting a truncate.

Replace it with a hand-grown construct:

 - exclusion for truncates is already guaranteed by i_mutex, so it can
   simply fall away
 - the reader side is replaced by an i_dio_count member in struct inode
   that counts the number of pending direct I/O requests.  Truncate can't
   proceed as long as it's non-zero
 - when i_dio_count reaches zero we wake up a pending truncate using
   wake_up_bit on a new bit in i_flags
 - new references to i_dio_count can't appear while we are waiting for
   it to reach zero because the direct I/O count always needs i_mutex
   (or an equivalent like XFS's i_iolock) for starting a new operation.

This scheme is much simpler, and saves the space of a spinlock_t and a
struct list_head in struct inode (typically 160 bits on a non-debug 64-bit
system).
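
A sketch of the waiting scheme; all names below are illustrative and the
counter being an atomic_t is an assumption, only the shape follows the
description:

    struct example_inode_dio {
            atomic_t        dio_count;      /* pending direct I/O requests */
            unsigned long   flags;          /* holds the wakeup bit */
    };
    #define EXAMPLE_DIO_WAKEUP 0

    static int example_dio_wait_action(void *word)
    {
            io_schedule();
            return 0;
    }

    /* Truncate side: wait for all pending direct I/O requests to drain. */
    static void example_dio_wait(struct example_inode_dio *d)
    {
            while (atomic_read(&d->dio_count))
                    wait_on_bit(&d->flags, EXAMPLE_DIO_WAKEUP,
                                example_dio_wait_action, TASK_UNINTERRUPTIBLE);
    }

    /* Completion side: drop the count and wake a waiting truncate. */
    static void example_dio_done(struct example_inode_dio *d)
    {
            if (atomic_dec_and_test(&d->dio_count))
                    wake_up_bit(&d->flags, EXAMPLE_DIO_WAKEUP);
    }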

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:46 -04:00
Tomasz Stanislawski
e46ebd2782 anonfd: fix missing declaration
The forward declaration of struct file_operations is
added to avoid compilation warnings.

Signed-off-by: Tomasz Stanislawski <t.stanislaws@samsung.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:43 -04:00
Dave Chinner
0e1fdafd93 superblock: add filesystem shrinker operations
Now we have a per-superblock shrinker implementation, we can add a
filesystem specific callout to it to allow filesystem internal
caches to be shrunk by the superblock shrinker.

Rather than perpetuate the multipurpose shrinker callback API (i.e.
nr_to_scan == 0 meaning "tell me how many objects are freeable in the
cache"), two operations will be added. The first will return the
number of objects that are freeable, the second is the actual
shrinker call.
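
A sketch of the two callouts (the member names are assumptions, chosen to
mirror the description above):

    struct super_operations_example {
            /* ... existing members ... */

            /* "how many objects are freeable in this filesystem's caches?" */
            int (*nr_cached_objects)(struct super_block *sb);

            /* the actual shrinker call: try to free up to nr_to_scan objects */
            void (*free_cached_objects)(struct super_block *sb, int nr_to_scan);
    };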

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:41 -04:00
Dave Chinner
b0d40c92ad superblock: introduce per-sb cache shrinker infrastructure
With context based shrinkers, we can implement a per-superblock
shrinker that shrinks the caches attached to the superblock. We
currently have global shrinkers for the inode and dentry caches that
split up into per-superblock operations via a coarse proportioning
method that does not batch very well.  The global shrinkers also
have a dependency - dentries pin inodes - so we have to be very
careful about how we register the global shrinkers so that the
implicit call order is always correct.

With a per-sb shrinker callout, we can encode this dependency
directly into the per-sb shrinker, hence avoiding the need for
strictly ordering shrinker registrations. We also have no need for
any proportioning code because the shrinker subsystem already provides
this functionality across all shrinkers. Allowing the shrinker to
operate on a single superblock at a time means that we do fewer
superblock list traversals and less locking, and reclaim should batch
more effectively. This should result in less CPU overhead for reclaim and
potentially faster reclaim of items from each filesystem.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20 20:47:10 -04:00
J. Bruce Fields
8fb47a4fbf locks: rename lock-manager ops
Both the filesystem and the lock manager can associate operations with a
lock.  Confusingly, one of them (fl_release_private) actually has the
same name in both operation structures.

It would save some confusion to give the lock-manager ops different
names.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2011-07-20 20:23:19 -04:00
Linus Torvalds
cf6ace16a3 Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  signal: align __lock_task_sighand() irq disabling and RCU
  softirq,rcu: Inform RCU of irq_exit() activity
  sched: Add irq_{enter,exit}() to scheduler_ipi()
  rcu: protect __rcu_read_unlock() against scheduler-using irq handlers
  rcu: Streamline code produced by __rcu_read_unlock()
  rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
  rcu: decrease rcu_report_exp_rnp coupling with scheduler
2011-07-20 15:56:25 -07:00
Philip Rakity
ca8e99b32e mmc: core: Set non-default Drive Strength via platform hook
Non-default Drive Strength cannot be set automatically.  It is a function
of the board design and only if there is a specific platform handler can
it be set.  The platform handler needs to take into account the board
design.  Pass to the platform code the necessary information.

For example:  The card and host controller may indicate they support HIGH
and LOW drive strength.  There is no way to know what should be chosen
without specific board knowledge.  Setting HIGH may lead to reflections
and setting LOW may not suffice.  There is no mechanism (like ethernet
duplex or speed pulses) to determine what should be done automatically.

If no platform handler is defined -- use the default value.

Signed-off-by: Philip Rakity <prakity@marvell.com>
Reviewed-by: Arindam Nath <arindam.nath@amd.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:16 -04:00
Per Forlin
aa8b683a7d mmc: core: add non-blocking mmc request function
Previously there was only one function, mmc_wait_for_req(),
to start and wait for a request. This patch adds:

 * mmc_start_req() - starts a request without waiting.
   If there is an ongoing request, wait for completion
   of that request, then start the new one and return.
   Does not wait for the new command to complete.

This patch also adds new function members in struct mmc_host_ops
only called from core.c:

 * pre_req - asks the host driver to prepare for the next job
 * post_req - asks the host driver to clean up after a completed job

The intention is to use pre_req() and post_req() to do cache maintenance
while a request is active. pre_req() can be called while a request is
active to minimize latency to start next job. post_req() can be used after
the next job is started to clean up the request. This will minimize the
host driver request end latency. post_req() is typically used before
ending the block request and handing over the buffer to the block layer.

Add a host-private member in mmc_data to be used by pre_req to mark the
data. The host driver will then check this mark to see if the data is
prepared or not.
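
A sketch of a host driver wiring up the new hooks (the argument lists and all
foo_* names are assumptions; the text above only names the members):

    static void foo_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
                            bool is_first_req)
    {
            /* e.g. dma_map_sg() / cache maintenance before the request starts */
    }

    static void foo_post_req(struct mmc_host *mmc, struct mmc_request *mrq, int err)
    {
            /* e.g. dma_unmap_sg() once the request has completed */
    }

    static const struct mmc_host_ops foo_ops = {
            /* .request, .set_ios, etc. as before */
            .pre_req  = foo_pre_req,
            .post_req = foo_post_req,
    };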

Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:10 -04:00
James Hogan
03e8cb534e mmc: dw_mmc: fix stop when fallen back to PIO
There are several situations when dw_mci_submit_data_dma() decides to
fall back to PIO mode instead of using DMA, due to a short (to avoid
overhead) or "complex" (e.g. with unaligned buffers) transaction, even
though host->use_dma is set. However dw_mci_stop_dma() decides whether
to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma.
When falling back to PIO mode this results in data timeout errors
getting missed and the driver locking up.

Therefore add host->using_dma to indicate whether the current
transaction is using dma or not, and adjust dw_mci_stop_dma() to use
that instead.
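
A sketch of the adjusted stop path (EVENT_XFER_COMPLETE and the field name
follow the text above; the surrounding details are illustrative):

    static void dw_mci_stop_dma_sketch(struct dw_mci *host)
    {
            if (host->using_dma) {          /* set only when DMA is really used */
                    /* ... actually halt the DMA engine ... */
            } else {
                    /* PIO fallback: just mark the transfer as complete */
                    set_bit(EVENT_XFER_COMPLETE, &host->pending_events);
            }
    }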

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Tested-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:05 -04:00
Adrian Hunter
e056a1b5b6 mmc: queue: let host controllers specify maximum discard timeout
Some host controllers will not operate without a hardware
timeout that is limited in value.  However large discards
require large timeouts, so there needs to be a way to
specify the maximum discard size.

A host controller driver may now specify the maximum discard
timeout possible so that max_discard_sectors can be calculated.

However, for eMMC when the High Capacity Erase Group Size
is not in use, the timeout calculation depends on clock
rate which may change.  For that case Preferred Erase Size
is used instead.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:03 -04:00
James Hogan
34b664a20e mmc: dw_mmc: handle unaligned buffers and sizes
Update functions for PIO pushing and pulling data to and from the FIFO
so that they can handle unaligned output buffers and unaligned buffer
lengths. This makes more of the tests in mmc_test pass.

Unaligned lengths in pulls are handled by reading the full FIFO item,
and storing the remaining bytes in a small internal buffer (part_buf).
The next data pull will copy data out of this buffer first before
accessing the FIFO again. Similarly, for pushes the final bytes that
don't fill a FIFO item are stored in the part_buf (or sent anyway if
it's the last transfer), and then the part_buf is included at the
beginning of the next buffer pushed.

Unaligned buffers in pulls are handled specially if the architecture
cannot do efficient unaligned accesses, by reading FIFO items into an
aligned local buffer and memcpy'ing them into the output buffer, again
storing any remaining bytes in the internal buffer. Similarly, for pushes
the buffer is memcpy'd into an aligned local buffer and then written to the
FIFO.
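
A much-simplified sketch of the pull side of that scheme (struct layout and
helper names are illustrative; the real code also covers the push side and
the aligned fast paths):

    struct example_part_buf {
            u32     buf;            /* leftover bytes of the last FIFO item */
            int     count;          /* how many of those bytes are still valid */
    };

    static void example_pull(struct example_part_buf *pb, u8 *dst, int len,
                             u32 (*read_fifo_item)(void))
    {
            /* drain leftovers from the previous call first */
            while (pb->count && len) {
                    *dst++ = pb->buf & 0xff;
                    pb->buf >>= 8;
                    pb->count--;
                    len--;
            }
            /* read whole FIFO items; stash any trailing bytes for next time */
            while (len) {
                    u32 item = read_fifo_item();
                    int n = len < 4 ? len : 4;

                    memcpy(dst, &item, n);
                    dst += n;
                    len -= n;
                    if (n < 4) {
                            pb->buf = item >> (8 * n);
                            pb->count = 4 - n;
                    }
            }
    }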

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:21:00 -04:00
James Hogan
b86d825323 mmc: dw_mmc: don't hard code fifo depth, fix usage
The FIFO_DEPTH hardware configuration parameter can be found from the
power-on value of RX_WMark in the FIFOTH register. This is used to
initialise the watermarks, but when calculating the number of free fifo
spaces a preprocessor definition is used which is hard coded to 32.

Fix reading the value out of FIFOTH (the default value in the RX_WMark
field is FIFO_DEPTH-1 not FIFO_DEPTH). Allow the fifo depth to be
overridden by platform data (since a bootloader may have changed FIFOTH,
making auto-detection unreliable). Store the fifo_depth for later use.
Also fix the calculation of the number of free bytes in the fifo so that
the fifo depth is included in the left shift by the data shift, since the
fifo depth is measured in fifo items, not bytes.
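
A sketch of the detection and override in the probe path (the register field
positions and macros here are assumptions, not the driver's exact definitions):

    /* Power-on RX_WMark in FIFOTH is FIFO_DEPTH - 1. */
    fifoth = mci_readl(host, FIFOTH);
    fifo_depth = ((fifoth >> 16) & 0xfff) + 1;

    /* A bootloader may already have rewritten FIFOTH, so let platform
     * data override the auto-detected value. */
    if (host->pdata->fifo_depth)
            fifo_depth = host->pdata->fifo_depth;

    host->fifo_depth = fifo_depth;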

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:59 -04:00
James Hogan
1791b13ea4 mmc: dw_mmc: convert card tasklet to workqueue
Convert the card insert/remove tasklet to a workqueue, and call the
setpower platform-specific callback without the spinlock held. This
means neither the setpower nor the get_cd callback is called from atomic
context, which allows them to sleep.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:58 -04:00
Simon Horman
973ed3af1a mmc: sdhi: Add write16_hook
Some controllers require waiting for the bus to become idle
before writing to some registers. I have implemented this
by adding a hook to sd_ctrl_write16() and implementing
a hook for SDHI which waits for the bus to become idle.
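
A sketch of the shape of that hook (every name below is illustrative; only the
idea of a pre-write callout follows the text):

    struct example_sdhi_host {
            void __iomem *ctl;
            void (*write16_hook)(struct example_sdhi_host *host, int addr);
    };

    static void example_sd_ctrl_write16(struct example_sdhi_host *host,
                                        int addr, u16 val)
    {
            if (host->write16_hook)
                    host->write16_hook(host, addr);  /* e.g. wait for bus idle */
            iowrite16(val, host->ctl + addr);
    }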

Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:57 -04:00
Simon Horman
95c7348d94 mmc: tmio: name 0xd8 as CTL_DMA_ENABLE
This reflects at least the current usage of this register
and I think it improves the readability of the code ever so slightly.

Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Chris Ball <cjb@laptop.org>
2011-07-20 17:20:55 -04:00