Atom S12x0 CPUs are identified by the CPU host bridge ID. Add an override
table based on PCI IDs as well as code to detect it.
PCI access functions can now safely be called with PCI disabled, so unlike previous
attempts to use PCI IDs, the code no longer depends on PCI being enabled. If PCI is
disabled, the CPU will not be identified correctly; since it is unlikely that anything
will work in that case, this is an acceptable limitation.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Don't use the DEFINE_PCI_DEVICE_TABLE macro, because its use
is not preferred.
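As a reference, a minimal sketch of the preferred style (the device ID below is a
hypothetical placeholder, not one used by this driver):

  #include <linux/module.h>
  #include <linux/pci.h>

  /* Plain const struct pci_device_id array instead of DEFINE_PCI_DEVICE_TABLE */
  static const struct pci_device_id foo_pci_tbl[] = {
          { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1234) },    /* hypothetical ID */
          { }     /* terminating entry */
  };
  MODULE_DEVICE_TABLE(pci, foo_pci_tbl);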
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Single regression fix for nouveau
* 'drm-nouveau-next' of git://git.freedesktop.org/git/nouveau/linux-2.6:
drm/nouveau: fix null ptr dereferences on some boards
Regression from "device: populate master subdev pointer only when fully
constructed"
Reported-by: Bob Gleitsmann <rjgleits@bellsouth.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This reverts commit be35f48610 ("dm: wait until embedded kobject is
released before destroying a device") and provides an improved fix.
The kobject release code that calls the completion must be placed in a
non-module file, otherwise there is a module unload race (if the process
calling dm_kobject_release is preempted and the DM module unloaded after
the completion is triggered, but before dm_kobject_release returns).
To fix this race, this patch moves the completion code to dm-builtin.c
which is always compiled directly into the kernel if BLK_DEV_DM is
selected.
The patch introduces a new dm_kobject_holder structure; its purpose is
to keep the completion and the kobject in one place, so that they can be
accessed from non-module code without the need to export the layout of
struct mapped_device to that code.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
This patch modifies dm-snapshot so that it prefetches the buffers when
loading the exceptions.
The number of buffers read ahead is specified in the DM_PREFETCH_CHUNKS
macro. The current value for DM_PREFETCH_CHUNKS (12) was found to
provide the best performance on a single 15k SCSI spindle. In the
future we may modify this default or make it configurable.
Also, introduce the function dm_bufio_set_minimum_buffers to set up
bufio's number of internal buffers before freeing happens. dm-bufio may
hold more buffers if enough memory is available. There is no guarantee
that the specified number of buffers will be available - if you need a
guarantee, use the argument reserved_buffers for
dm_bufio_client_create.
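As a rough illustration of the intended usage (a sketch only; the placement of these
calls and the exact signature of the new dm_bufio_set_minimum_buffers function are
defined by the patch itself):

  #define DM_PREFETCH_CHUNKS      12

  /* Read ahead while walking the exception store; 'client' is a
   * struct dm_bufio_client * obtained from dm_bufio_client_create(). */
  static void prefetch_exceptions(struct dm_bufio_client *client, sector_t chunk)
  {
          dm_bufio_prefetch(client, chunk, DM_PREFETCH_CHUNKS);
  }

  /* Assumed call shape: ask bufio to keep a minimum number of buffers cached. */
  /*      dm_bufio_set_minimum_buffers(client, n);                             */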
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Use dm-bufio for initial loading of the exceptions.
Introduce a new function dm_bufio_forget that frees the given buffer.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
We rename aesni-intel_avx.S to aesni-intel_avx-x86_64.S to indicate
that it is only used by the x86_64 architecture.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes the following sparse warning:
drivers/crypto/mxs-dcp.c:103:1: warning:
symbol 'global_mutex' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CCP cannot be hot-plugged so it will either be there
or it won't. Do not allow the driver to stay loaded if the
CCP does not successfully initialize.
Provide stub routines in the ccp.h file that return -ENODEV
if the CCP has not been configured in the build.
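The stub pattern in ccp.h looks roughly like this sketch (the config guard and the
exact set of stubbed routines are as defined by the patch; ccp_enqueue_cmd is one of
the existing ccp.h entry points):

  #include <linux/errno.h>

  #ifdef CONFIG_CRYPTO_DEV_CCP_DD
  int ccp_enqueue_cmd(struct ccp_cmd *cmd);
  #else
  static inline int ccp_enqueue_cmd(struct ccp_cmd *cmd)
  {
          return -ENODEV;         /* CCP support not built in */
  }
  #endif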
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Clean up the ahash digest invocations to check the init
return code and make use of the finup routine.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When performing a hash operation, if data is already buffered and a
request at or near the maximum data length is received, the length
calculation could wrap, causing an error in executing the hash operation.
Fix this by using a u64 type for the input and output data lengths in
all CCP operations.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For a hash operation, the caller doesn't have to supply a result
area on every call, so don't use or update it if it hasn't
been supplied.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Clean up the usage of scatterlists to make the code cleaner
and avoid extra memory allocations when not needed.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix some memory allocations to use the appropriate gfp_t type based
on the CRYPTO_TFM_REQ_MAY_SLEEP flag.
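The pattern being applied is roughly the following (a generic sketch, not the exact
ccp-crypto hunks; the helper name is illustrative):

  #include <linux/crypto.h>
  #include <linux/gfp.h>
  #include <linux/slab.h>

  static void *ccp_alloc_for_req(struct crypto_async_request *req, size_t len)
  {
          /* Only allow the allocation to sleep if the request allows it */
          gfp_t gfp = (req->flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
                      GFP_KERNEL : GFP_ATOMIC;

          return kmalloc(len, gfp);
  }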
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
vlan devices get the same netdev features, except for vlan filtering.
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bob Falken reported that after 4G packets, multicast forwarding stopped
working. This was because of a rule reference counter overflow which
freed the rule as soon as the overflow happened.
This patch solves this by adding the FIB_LOOKUP_NOREF flag to
fib_rules_lookup calls. This is safe even from non-rcu locked sections
as in this case the flag only implies not taking a reference to the rule,
which we don't need at all.
Rules only hold references to the namespace, which are guaranteed to be
available during the call of the non-rcu protected function reg_vif_xmit
because of the interface reference which itself holds a reference to
the net namespace.
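The change amounts to the following pattern in the ipmr/ip6mr lookup paths (a sketch;
mrt, flp4 and the rules ops come from the surrounding code):

  struct fib_lookup_arg arg = {
          .result = &mrt,
          .flags  = FIB_LOOKUP_NOREF,     /* don't take a reference on the rule */
  };

  err = fib_rules_lookup(net->ipv4.mr_rules_ops,
                         flowi4_to_flowi(flp4), 0, &arg);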
Fixes: f0ad0860d0 ("ipv4: ipmr: support multiple tables")
Fixes: d1db275dd3 ("ipv6: ip6mr: support multiple tables")
Reported-by: Bob Falken <NetFestivalHaveFun@gmx.com>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: Julian Anastasov <ja@ssi.bg>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A number of new dm96xx variants now exist.
Reported-by: Joseph Chang <joseph_chang@davicom.com.tw>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since virtio is an OASIS standard draft now, virtio implementation
discussions are taking place on the virtio-dev OASIS mailing list.
Update MAINTAINERS.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds the workaround for erratum 793 as a precaution in case not
every BIOS implements it. This addresses CVE-2013-6885.
Erratum text:
[Revision Guide for AMD Family 16h Models 00h-0Fh Processors,
document 51810 Rev. 3.04 November 2013]
793 Specific Combination of Writes to Write Combined Memory Types and
Locked Instructions May Cause Core Hang
Description
Under a highly specific and detailed set of internal timing
conditions, a locked instruction may trigger a timing sequence whereby
the write to a write combined memory type is not flushed, causing the
locked instruction to stall indefinitely.
Potential Effect on System
Processor core hang.
Suggested Workaround
BIOS should set MSR
C001_1020[15] = 1b.
Fix Planned
No fix planned
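A hedged sketch of the kernel-side workaround (the actual hunk lives in the AMD CPU
init code; MSR_AMD64_LS_CFG is MSR C001_1020, and the function name here is
illustrative):

  #include <linux/bitops.h>
  #include <asm/msr.h>
  #include <asm/processor.h>

  static void amd_erratum_793_workaround(struct cpuinfo_x86 *c)
  {
          u64 val;

          /* Affected parts: Family 16h, models 00h-0Fh */
          if (c->x86 != 0x16 || c->x86_model > 0x0f)
                  return;

          /* Set MSR C001_1020[15] if the BIOS has not already done so */
          if (!rdmsrl_safe(MSR_AMD64_LS_CFG, &val) && !(val & BIT(15)))
                  wrmsrl_safe(MSR_AMD64_LS_CFG, val | BIT(15));
  }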
[ hpa: updated description, fixed typo in MSR name ]
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20140114230711.GS29865@pd.tnic
Tested-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Fix a memory leak in the ieee802154_add_iface() error handling path.
Detected by Coverity: CID 710490.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
When doing a function/slot/bus reset PCI grabs the device_lock for each
device to block things like suspend and driver probes, but call paths exist
where this lock may already be held. This creates an opportunity for
deadlock. For instance, vfio allows userspace to issue resets so long as
it owns the device(s). If a driver unbind .remove callback races with
userspace issuing a reset, we have a deadlock as userspace gets stuck
waiting on device_lock while another thread has device_lock and waits for
.remove to complete. To resolve this, we can make a version of the reset
interfaces which use trylock. With this, we can safely attempt a reset and
return error to userspace if there is contention.
[bhelgaas: the deadlock happens when A (userspace) has a file descriptor for
the device, and B waits in this path:
  driver_detach
    device_lock                         # take device_lock
    __device_release_driver
      pci_device_remove                 # pci_bus_type.remove
        vfio_pci_remove                 # pci_driver .remove
          vfio_del_group_dev
            wait_event(vfio.release_q, !vfio_dev_present)  # wait (holding device_lock)
Now B is stuck until A gives up the file descriptor. If A tries to acquire
device_lock for any reason, we deadlock because A is waiting for B to release
the lock, and B is waiting for A to release the file descriptor.]
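A rough sketch of the trylock-based variant (simplified; the real implementation goes
through the PCI core's internal reset helpers):

  int pci_try_reset_function(struct pci_dev *dev)
  {
          int rc;

          if (!device_trylock(&dev->dev))
                  return -EAGAIN;         /* contention: let the caller retry */

          rc = __pci_reset_function_locked(dev);
          device_unlock(&dev->dev);

          return rc;
  }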
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
The Samsung dmaengine ASoC driver is used with two different dmaengine drivers:
the pl80x, which properly supports residue reporting, and the pl330, which
reports that it does not support residue reporting. So there is no need to
manually set the NO_RESIDUE flag. This has the advantage that a proper (race
condition free) PCM pointer() implementation is used when the pl80x driver is
used. Also, once the pl330 driver supports residue reporting, the ASoC PCM driver
will automatically start using it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
The pl330 driver properly reports that it does not have residue reporting
support, which means the PCM dmaengine driver is able to figure this out on its
own. So there is no need to set the flag manually. Removing the flag has the
advantage that once the pl330 driver gains support for residue reporting, the
generic dmaengine PCM driver will automatically start using it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
The dmaengine framework now exposes the granularity with which it is able to
report the transfer residue for a certain DMA channel. Check the granularity in
the generic dmaengine PCM driver and (as sketched below):
a) Set the SNDRV_PCM_INFO_BATCH flag if the granularity is per period or worse.
b) Fall back to the (race condition prone) period counting if the driver does
not support any residue reporting.
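A sketch of that decision logic (simplified; chan, hw and flags come from the
surrounding generic dmaengine PCM code):

  struct dma_slave_caps dma_caps;

  if (dma_get_slave_caps(chan, &dma_caps) == 0) {
          /* a) period-or-worse granularity: mark the PCM as batch */
          if (dma_caps.residue_granularity <= DMA_RESIDUE_GRANULARITY_SEGMENT)
                  hw.info |= SNDRV_PCM_INFO_BATCH;

          /* b) no residue reporting at all: fall back to period counting */
          if (dma_caps.residue_granularity == DMA_RESIDUE_GRANULARITY_DESCRIPTOR)
                  flags |= SND_DMAENGINE_PCM_FLAG_NO_RESIDUE;
  }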
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
Currently we have two different snd_soc_platform_driver structs in the generic
dmaengine PCM driver: one for dmaengine drivers that support residue reporting
and one for those which do not. When registering the PCM component we check
whether the NO_RESIDUE flag is set or not and use the corresponding
snd_soc_platform_driver. This patch modifies the driver to only have one
snd_soc_platform_driver struct where the pointer() callback checks the
NO_RESIDUE flag at runtime. This allows us to set the NO_RESIDUE flag after the
PCM component has been registered. This becomes necessary when querying
residue-reporting support from the dmaengine driver itself, since the DMA
channel might only be requested after the PCM component has been registered.
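The runtime check in the pointer() callback looks roughly like this (a sketch; the
helper names follow the existing snd_dmaengine_pcm_pointer*() API and
soc_platform_to_pcm() stands for however the driver retrieves its private data):

  static snd_pcm_uframes_t dmaengine_pcm_pointer(
          struct snd_pcm_substream *substream)
  {
          struct snd_soc_pcm_runtime *rtd = substream->private_data;
          struct dmaengine_pcm *pcm = soc_platform_to_pcm(rtd->platform);

          if (pcm->flags & SND_DMAENGINE_PCM_FLAG_NO_RESIDUE)
                  return snd_dmaengine_pcm_pointer_no_residue(substream);
          else
                  return snd_dmaengine_pcm_pointer(substream);
  }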
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
The pl330 driver currently does not support residue reporting, so set the
residue granularity to DMA_RESIDUE_GRANULARITY_DESCRIPTOR.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
This patch adds a new field to the dma_slave_caps struct which indicates the
granularity with which the driver is able to update the residue field of the
dma_tx_state struct. Making this information available to dmaengine users allows
them to make better decisions on how to operate. E.g. for audio, certain features
like wakeup-less operation or timer-based scheduling only make sense and work
correctly if the reported residue is fine-grained enough.
Right now three different levels of granularity are supported (a sketch of the
new field follows the list):
* DESCRIPTOR: The DMA channel is only able to tell whether a descriptor has
been completed or not, which means residue reporting is not supported by
this channel. The residue field of the dma_tx_state struct will always be
0.
* SEGMENT: The DMA channel updates the residue field after each successfully
completed segment of the transfer (for cyclic transfers this is after each
period). This is typically implemented by having the hardware generate an
interrupt after each transferred segment, after which the driver updates the
outstanding residue by the size of the segment. Another possibility is that
the hardware supports SG and the segment descriptor has a field which gets
set after the segment has been completed. The driver then counts the
number of segments without the flag set to compute the residue.
* BURST: The DMA channel updates the residue field after each transferred
burst. This is typically only supported if the hardware has a progress
register of some sort (e.g. a register with the current read/write address
or a register with the number of bursts/beats/bytes that have been
transferred or still need to be transferred).
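As a sketch, the new field and its levels (see include/linux/dmaengine.h for the
authoritative definition):

  enum dma_residue_granularity {
          DMA_RESIDUE_GRANULARITY_DESCRIPTOR = 0,
          DMA_RESIDUE_GRANULARITY_SEGMENT = 1,
          DMA_RESIDUE_GRANULARITY_BURST = 2,
  };

  /* New member of struct dma_slave_caps:                      */
  /*      enum dma_residue_granularity residue_granularity;    */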
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Writing to the platform data directly is a bug, since the platform data
should be constant. So just copy it before writing to it.
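The generic shape of the fix (a sketch; foo_platform_data is a hypothetical
placeholder for the driver's actual platform data type):

  const struct foo_platform_data *pdata = dev_get_platdata(&pdev->dev);
  struct foo_platform_data *copy;

  copy = kmemdup(pdata, sizeof(*copy), GFP_KERNEL);
  if (!copy)
          return -ENOMEM;
  /* modify 'copy' from here on, never the const platform data itself */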
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
To be consistent with the equivalent option in 'stat', also, for the
same reason, use -D as the one letter alias.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-p5yjnopajb3a8x0xha7yl5w8@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
That is how the option summary describes it; it also frees up --delay to
replace --initial-delay, making it consistent with stat's equivalent
--delay option.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-f8hd2010uhjl2zzb34hepbmi@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Instead of open-coding the intersection of two rate masks (and getting it slightly
wrong for some of the corner cases), use the new snd_pcm_rate_mask_intersect()
helper function.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
A bit of special care is necessary when creating the intersection of two rate
masks. This comes from the special meaning of the SNDRV_PCM_RATE_CONTINUOUS and
SNDRV_PCM_RATE_KNOT bits, which needs special handling when intersecting two
rate masks. SNDRV_PCM_RATE_CONTINUOUS means the hardware supports all rates in a
specific interval. SNDRV_PCM_RATE_KNOT means the hardware supports a set of
discrete rates specified by a list constraint. For all other cases the supported
rates are specified directly in the rate mask.
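The semantics described above boil down to roughly the following (a sketch of what
the helper has to do, not necessarily its exact code):

  static unsigned int rate_mask_intersect(unsigned int rates_a,
                                          unsigned int rates_b)
  {
          /* CONTINUOUS: "all rates in an interval", so keep the other mask */
          if (rates_a & SNDRV_PCM_RATE_CONTINUOUS)
                  return rates_b;
          if (rates_b & SNDRV_PCM_RATE_CONTINUOUS)
                  return rates_a;
          /* KNOT: "discrete rates from a list constraint", same idea */
          if (rates_a & SNDRV_PCM_RATE_KNOT)
                  return rates_b;
          if (rates_b & SNDRV_PCM_RATE_KNOT)
                  return rates_a;
          /* Otherwise both masks list rates directly: plain intersection */
          return rates_a & rates_b;
  }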
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
SNDRV_PCM_RATE_CONTINUOUS means that all rates (possibly limited to a certain
interval) are supported. There is no need to manually set other rate bits.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Daniel Glöckner <daniel-gl@gmx.net>
Signed-off-by: Mark Brown <broonie@linaro.org>
SNDRV_PCM_RATE_CONTINUOUS means that all rates (possibly limited to a certain
interval) are supported. There is no need to manually set other rate bits.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
If none of the components (CODEC or CPU DAI) sets a maximum sample rate, we'll
end up with the rate_max field of the runtime hardware set to 0. (Note that it
is still possible for the components to constrain the supported sample rates
using other methods, e.g. by setting a list constraint.) If rate_max is 0, this
means that the sound card doesn't support any rates at all, which is not the
desired result. So initialize rate_max to UINT_MAX. For symmetry reasons also
set rate_min to 0.
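In code terms this is just (sketch; hw points at the runtime's snd_pcm_hardware):

  hw->rate_min = 0;
  hw->rate_max = UINT_MAX;        /* no upper limit unless a component narrows it */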
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@linaro.org>
The equivalent uapi struct uses __u32, so make the kernel
use u32 too.
This can prevent some oddities where the limit is
logged/emitted as a negative value.
Convert kstrtol to kstrtouint to disallow negative values.
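The conversion pattern, as a sketch (variable names illustrative):

  unsigned int limit;
  int ret;

  ret = kstrtouint(buf, 0, &limit);       /* rejects negative and malformed input */
  if (ret)
          return ret;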
Signed-off-by: Joe Perches <joe@perches.com>
[eparis: do not remove static from audit_default declaration]
Add pr_fmt to prefix "audit: " to output
Convert printk(KERN_<LEVEL> to pr_<level>
Coalesce formats
Use pr_cont
Move a brace after switch
Signed-off-by: Joe Perches <joe@perches.com>
Using the generic kernel function causes the
object size to increase with gcc 4.8.1.
$ size kernel/audit.o*
text data bss dec hex filename
18577 6079 8436 33092 8144 kernel/audit.o.new
18579 6015 8420 33014 80f6 kernel/audit.o.old
Unsigned...
Gradually, the global qd_lock is being used for less and less.
After this patch it will only be used for the per super block
list whose purpose is to allow syncing of changes back to the
master quota file from the local quota changes file. Fixing
up that process to make it more efficient will be the subject
of a later patch; however, this patch removes another barrier
to doing that.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>
Quota slot allocation has historically used a vector of pages
and a set of homegrown find/test/set/clear bit functions. Since
the size of the bitmap is likely to be based on the default
qc file size, that's a couple of pages at most. So we ought
to be able to allocate that as a single chunk, with a vmalloc
fallback, just in case of memory fragmentation.
We are then able to use the kernel's own find/test/set/clear
bit functions, rather than rolling our own.
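The allocation pattern described above, as a sketch (sizes and variable names
illustrative):

  unsigned long *bitmap;

  bitmap = kzalloc(bitmap_size, GFP_NOFS | __GFP_NOWARN);
  if (!bitmap)
          bitmap = __vmalloc(bitmap_size, GFP_NOFS | __GFP_ZERO, PAGE_KERNEL);
  if (!bitmap)
          return -ENOMEM;

  /* ...and the kernel's own bitops instead of the homegrown helpers: */
  bit = find_next_zero_bit(bitmap, nbits, 0);
  if (bit < nbits)
          set_bit(bit, bitmap);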
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>
While investigating a rather strange bit of code in the quota
clean up function, I spotted that the reason for its existence
was that when remounting read only, we were not stopping the
quotad thread, and thus it was possible for it to still have
a reference to some of the quotas in that case.
This patch moves the logd and quota thread start and stop into
the make_fs_rw/ro functions, so that we now stop those threads
when mounted read only.
This means that quotad will always be stopped before we call
the quota clean up function, and we can thus dispose of the
(rather hackish) code that waits for it to give up its
reference on the quotas.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>
Prior to this patch, GFS2 kept all the quotas for each
super block in a single linked list. This is rather slow
when there are large numbers of quotas.
This patch introduces a hlist_bl based hash table, similar
to the one used for glocks. The initial look up of the quota
is now lockless in the case where it is already cached,
although we still have to take the per quota spinlock in
order to bump the ref count. Either way though, this is a
big improvement on what was there before.
The qd_lock and the per super block list are preserved, for
the time being. However it is intended that since this is no
longer used for its original role, it should be possible to
shrink the number of items on that list in due course and
remove the requirement to take qd_lock in qd_get.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
We recently fixed the writeback of pages prior to performing
direct i/o; however, the initial fix was perhaps a bit
heavy-handed. There is no need to invalidate pages if the direct i/o
is only a read, since they will be identical to what has been
flushed to disk anyway.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Race conditions are theoretically possible between the MPT PCI device
removal and the generic PCI bus rescan and device removal that can be
triggered via sysfs.
To avoid those race conditions make the MPT PCI code use
pci_stop_and_remove_bus_device_locked().
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Multiple race conditions are possible between the rfkill hotplug in the
asus-wmi and eeepc-laptop drivers and the generic PCI bus rescan and device
removal that can be triggered via sysfs.
To avoid those race conditions make asus-wmi and eeepc-laptop use global
PCI rescan-remove locking around the rfkill hotplug.
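The locking pattern, as a sketch (the existing rfkill hotplug bodies in asus-wmi and
eeepc-laptop stay as they are):

  pci_lock_rescan_remove();
  /* ... existing hotplug work: look up the PCI device and add it,
   *     or stop and remove it, as needed ... */
  pci_unlock_rescan_remove();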
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>