It appears that performing a "movs pc, lr" to force the kernel into
SVC mode on the OMAP2420 (ARM1136) prevents the platform from booting
correctly (change introduced in 80c59da [ARM: virt: allow the kernel
to be entered in HYP mode]).
While the reason for the failure is not yet understood (the same code runs
fine on the OMAP2430, also an ARM1136), partially revert that change
for platforms that do not enter the kernel in HYP mode, preserving the new
feature and restoring a working kernel on the OMAP2420.
Reported-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM v7 architecture introduced the concept of cache levels and related
control registers. New processors like the A7 and A15 embed an L2 unified cache
controller that becomes part of the cache level hierarchy. Some operations in
the kernel, like cpu_suspend and __cpu_disable, do not require a flush of the
entire cache hierarchy to DRAM, but just of the cache levels belonging to the
Level of Unification Inner Shareable (LoUIS), which in most ARM v7 systems
corresponds to L1.
The current cache flushing API used in cpu_suspend and __cpu_disable,
flush_cache_all(), ends up flushing the whole cache hierarchy since for
v7 it cleans and invalidates all cache levels up to the Level of Coherency
(LoC), which cripples system performance when used in hot paths like hotplug
and cpuidle.
Therefore a new kernel cache maintenance API must be added to cope with the
latest ARM system requirements.
This patch adds flush_cache_louis() to the ARM kernel cache maintenance API.
This function cleans and invalidates all data cache levels up to the
Level of Unification Inner Shareable (LoUIS) and invalidates the instruction
cache for processors that support it (> v7).
This patch also creates an alias of the cache LoUIS function to flush_kern_all
for all processor versions prior to v7, so that the current cache flushing
behaviour is unchanged for those processors.
v7 cache maintenance code implements a cache LoUIS function that cleans and
invalidates the D-cache up to LoUIS and invalidates the I-cache, according
to the new API.
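As a rough illustration of the intended use (the helper name and surrounding
hotplug code are hypothetical; only flush_cache_louis() comes from this patch):

#include <asm/cacheflush.h>

/*
 * Hypothetical CPU-hotplug low-power path: only the cache levels up to
 * the Level of Unification Inner Shareable need cleaning before this CPU
 * drops out of coherency, so the much more expensive flush_cache_all()
 * (which goes all the way to the Level of Coherency) is avoided.
 */
static void example_cpu_enter_lowpower(void)
{
        flush_cache_louis();
}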
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
kcmp has appeared on x86, but has not been noticed because
checksyscalls.sh is broken at the moment. Reserve ARM syscall 378
for this should we ever need it, and add an __IGNORE entry for this
unimplemented syscall.
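A hedged sketch of what the reservation plus __IGNORE entry can look like in
the ARM syscall header (exact placement and surrounding context are
assumptions):

/* 378 reserved for kcmp, not wired up yet */
#define __IGNORE_kcmp           /* tell checksyscalls.sh to skip it */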
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order to easily detect pathological cases, print some diagnostics
when the kernel boots.
This also provides helpers to detect whether HYP mode is actually available,
which can be used by other subsystems to enable HYP-specific features.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This patch does two things:
* Ensure that asynchronous aborts are masked at kernel entry.
The bootloader should be masking these anyway, but this reduces
the damage window just in case it doesn't.
* Enter svc mode via exception return to ensure that CPU state is
properly serialised. This does not matter when switching from
an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
parlance), but it potentially does matter when switching from
another privileged mode such as hyp mode.
This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.
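On the first point, masking asynchronous aborts amounts to setting the CPSR.A
bit; a minimal sketch of the idea, assuming ARMv6+ and noting that the real
change does this in the assembly entry code rather than from C:

/* Mask asynchronous (imprecise) aborts on the current CPU. */
static inline void example_mask_async_aborts(void)
{
        asm volatile("cpsid a" : : : "memory");
}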
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Enabling boot from HYP mode requires the use of some more
virt-specific instructions ("eret" and "msr elr_hyp, reg").
Add the necessary encoding to asm/opcode-virt.h.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
For now, this patch just adds a definition for the HVC instruction.
More can be added here later, as needed.
Now that we have a real example of how to use the opcode injection
macros properly, this patch also adds a cross-reference from the
explanation in opcodes.h (since without an example, figuring out
how to use the macros is not that easy).
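For illustration, an HVC #imm16 definition built on these helpers might look
roughly like the following; the encodings are shown for explanation only and
should be checked against the ARM ARM rather than taken as the exact header
contents:

#include <asm/opcodes.h>

/*
 * HVC #imm16: the ARM encoding carries the immediate in bits [19:8] and
 * [3:0]; the Thumb-2 encoding splits it across bits [19:16] and [11:0].
 */
#define EXAMPLE_HVC(imm16)                                              \
        __inst_arm_thumb32(                                             \
                0xE1400070 | (((imm16) & 0xFFF0) << 4)                  \
                           |  ((imm16) & 0x000F),                       \
                0xF7E08000 | (((imm16) & 0xF000) << 4)                  \
                           |  ((imm16) & 0x0FFF))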
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds some __inst_() macros for injecting custom opcodes
in assembler (both inline and in .S files). They should make it
easier and cleaner to get things right in little-/big-
endian/ARM/Thumb-2 kernels without a lot of #ifdefs.
This pure-preprocessor approach is preferred over the alternative
method of wedging extra assembler directives into the assembler
input using top-level asm() blocks, since there is no way to
guarantee that the compiler won't reorder those with respect to
each other or with respect to non-toplevel asm() blocks, unless
-fno-toplevel-reorder is passed (which is in itself somewhat
undesirable because it defeats some potential optimisations).
Currently <asm/unified.h> _does_ silently rely on the compiler not
reordering at the top level, but it seems better to avoid adding
extra code which depends on this if the same result can be achieved
in another way.
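As a hedged usage sketch (the opcode is just an ARM NOP encoding, and the
expansion to a data directive is assumed to behave as described above):

#include <asm/opcodes.h>

/*
 * Emit a raw ARM opcode from inline asm; __inst_arm() expands to the
 * appropriate data directive. This assumes the surrounding function
 * executes in ARM state.
 */
static inline void example_emit_arm_nop(void)
{
        asm volatile(__inst_arm(0xe320f000));   /* nop */
}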
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Most of the existing macros don't work with assembler, due to the
use of type casts and C functions from <linux/swab.h>.
This patch abstracts out those operations and provides simple
explicit versions for use in assembly code.
__opcode_is_thumb32() and __opcode_is_thumb16() are also converted
to do bitmask-based testing to avoid confusion if these are used in
assembly code (the assembler typically treats all arithmetic values
as signed).
These changes avoid the need for the compiler to pre-evaluate
constant expressions used to generate opcodes. By ensuring that
the forms of these expressions can be evaluated directly by the
assembler, we can just stringify the expressions directly into the
asm during the preprocessing pass. The alternative approach
(passing the evaluated expression via an inline asm "i" constraint)
gets painful because the contents of the asm and the constraints
must be kept in sync. This makes the resulting macros awkward to
use.
Retaining the C forms of the macros allows more efficient code to
be generated when opcodes are generated programmatically at run-
time, but there is no way to embed run-time-generated opcodes in
asm() blocks.
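For illustration, the bitmask form of the 32-bit Thumb test looks roughly like
this (the first halfword is assumed to live in bits [31:16] of the canonical
representation; treat it as a sketch rather than the exact header contents):

#define example_opcode_is_thumb32(x)                    \
        (   ((x) & 0xF8000000) == 0xE8000000            \
         || ((x) & 0xF0000000) == 0xF0000000)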
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The existing __mem_to_opcode_thumb32() is incorrect for BE32
platforms. However, these don't support Thumb-2 kernels, so this
option is not so relevant for those platforms anyway.
This operation is complicated by the lack of unaligned memory
access support prior to ARMv6.
Rather than provide a "working" macro which probably won't get
used (or worse, will get misused), this patch removes the macro for
BE32 kernels. People manipulating Thumb opcodes prior to ARMv6
should almost certainly be splitting these operations into
halfwords anyway, using __opcode_thumb32_{first,second,compose}()
and the 16-bit opcode transformations.
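A hedged sketch of the halfword-based approach suggested above (the helper
macros are the ones named in the text; the writer function itself is
illustrative):

#include <linux/types.h>
#include <asm/opcodes.h>

/* Store a 32-bit Thumb opcode as two halfwords in instruction order,
 * avoiding any unaligned (or BE32-hostile) 32-bit access. */
static void example_write_thumb32(u16 *addr, u32 opcode)
{
        addr[0] = __opcode_to_mem_thumb16(__opcode_thumb32_first(opcode));
        addr[1] = __opcode_to_mem_thumb16(__opcode_thumb32_second(opcode));
}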
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM fixes from Russell King:
"It's been a while... so there's a little more here than normal.
Mostly updates from Will for the breakpoint stuff, and plugging a few
holes in the user access functions which crept in when domain support
was disabled for ARMv7 CPUs."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7529/1: delay: set loops_per_jiffy when moving to timer-based loop
ARM: 7528/1: uaccess: annotate [__]{get,put}_user functions with might_fault()
ARM: 7527/1: uaccess: explicitly check __user pointer when !CPU_USE_DOMAINS
ARM: 7526/1: traps: send SIGILL if get_user fails on undef handling path
ARM: 7521/1: Fix semihosting Kconfig text
ARM: 7513/1: Make sure dtc is built before running it
ARM: 7512/1: Fix XIP build due to PHYS_OFFSET definition moving
ARM: 7499/1: mm: Fix vmalloc overlap check for !HIGHMEM
ARM: 7503/1: mm: only flush both pmd entries for classic MMU
ARM: 7502/1: contextidr: avoid using bfi instruction during notifier
ARM: 7501/1: decompressor: reset ttbcr for VMSA ARMv7 cores
ARM: 7497/1: hw_breakpoint: allow single-byte watchpoints on all addresses
ARM: 7496/1: hw_breakpoint: don't rely on dfsr to show watchpoint access type
ARM: Fix ioremap() of address zero
The user access functions may generate a fault, resulting in invocation
of a handler that may sleep.
This patch annotates the accessors with might_fault() so that we print a
warning if they are invoked from atomic context and help lockdep keep
track of mmap_sem.
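The shape of the annotation is roughly the following (a sketch, not the exact
kernel macro):

#include <linux/kernel.h>

/* Trip the might_fault()/might_sleep() machinery before the access so
 * misuse from atomic context is reported and lockdep sees mmap_sem. */
#define example_get_user(x, p)                  \
        ({                                      \
                might_fault();                  \
                __get_user_check((x), (p));     \
        })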
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The {get,put}_user macros don't perform range checking on the provided
__user address when !CPU_HAS_DOMAINS.
This patch reworks the out-of-line assembly accessors to check the user
address against a specified limit, returning -EFAULT if it is out of
range.
[will: changed get_user register allocation to match put_user]
[rmk: fixed building on older ARM architectures]
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
During the p2v changes, the PHYS_OFFSET #define moved into a
!__ASSEMBLY__ section. This causes an XIP build to fail with
arch/arm/kernel/head.o: In function 'stext':
arch/arm/kernel/head.S:146: undefined reference to 'PHYS_OFFSET'
Momentarily leave the #ifndef __ASSEMBLY__ section so we can
define PHYS_OFFSET for all compilation units.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some platforms might need to increase the atomic coherent pool to make
sure that their devices will be able to allocate all their buffers from
atomic context. This function can also be used to decrease the atomic
coherent pool size if coherent allocations are not used for the given
sub-platform.
Suggested-by: Josh Coombs <josh.coombs@gmail.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
LPAE does not use two pmd entries for a pte, so the additional tlb
flushing is not required.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Page migration encodes the pfn in the offset field of a swp_entry_t.
For LPAE, we support physical addresses of up to 36 bits (due to
sparsemem limitations with the size of page flags), requiring 24 bits
to represent a pfn. A further 3 bits are used to encode a swp_entry into
a pte, leaving 5 bits for the type field. Furthermore, the core code
defines MAX_SWAPFILES_SHIFT as 5, so the additional type bit does not
get used.
This patch reduces the width of the type field to 5 bits, allowing us
to create up to 31 swapfiles of 64GB each.
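The 64GB figure follows from the 24 offset bits: 2^24 pages of 4KB is 2^36
bytes. A hedged sketch of the resulting encoding, with prefixed names to mark
it as illustrative rather than the actual header:

/* Non-present pte used as a swap entry: low 3 bits for the pte encoding,
 * 5 bits of swap type, and the remaining bits for the swap offset. */
#define EX_SWP_TYPE_SHIFT       3
#define EX_SWP_TYPE_BITS        5       /* 5-bit type -> up to 31 swapfiles */
#define EX_SWP_OFFSET_SHIFT     (EX_SWP_TYPE_SHIFT + EX_SWP_TYPE_BITS)

#define ex_swp_type(val)        (((val) >> EX_SWP_TYPE_SHIFT) & \
                                 ((1 << EX_SWP_TYPE_BITS) - 1))
#define ex_swp_offset(val)      ((val) >> EX_SWP_OFFSET_SHIFT)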
Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Swap entries are encoded in ptes such that !pte_present(pte) and
pte_file(pte). The remaining bits of the descriptor are used to identify
the swapfile and offset within it to the swap entry.
When writing such a pte for a user virtual address, set_pte_at
unconditionally sets the nG bit, which (in the case of LPAE) will
corrupt the swapfile offset and lead to a BUG:
[ 140.494067] swap_free: Unused swap offset entry 000763b4
[ 140.509989] BUG: Bad page map in process rs:main Q:Reg pte:0ec76800 pmd:8f92e003
This patch fixes the problem by only setting the nG bit for user
mappings that are actually present.
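Conceptually the fix makes set_pte_at() compute the hardware nG flag
conditionally; a simplified sketch (the real code lives in the pgtable headers
and also handles I/D-cache syncing):

/* Only mark the mapping not-global when it is a present user mapping;
 * otherwise the pte bits carry a swap or file entry and must be left alone. */
static inline void example_set_pte_at(struct mm_struct *mm, unsigned long addr,
                                      pte_t *ptep, pte_t pteval)
{
        unsigned long ext = 0;

        if (addr < TASK_SIZE && pte_present_user(pteval))
                ext = PTE_EXT_NG;

        set_pte_ext(ptep, pteval, ext);
}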
Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Many clocks that are used to provide sched_clock will reset during
suspend. If read_sched_clock returns 0 after suspend, sched_clock will
appear to jump forward. This patch resets cd.epoch_cyc to the current
value of read_sched_clock during resume, which causes sched_clock() just
after suspend to return the same value as sched_clock() just before
suspend.
In addition, during the window where epoch_ns has been updated before
suspend, but epoch_cyc has not been updated after suspend, it is unknown
whether the clock has reset or not, and sched_clock() could return a
bogus value. Add a suspended flag, and return the pre-suspend epoch_ns
value during this period.
The new behavior is triggered by calling setup_sched_clock_needs_suspend
instead of setup_sched_clock.
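A simplified sketch of the suspend/resume pair this describes; the field and
helper names follow the description above and may not match the final code
exactly:

/* On suspend: take a final epoch_ns snapshot and flag the clock.
 * On resume: re-baseline epoch_cyc so sched_clock() carries on from the
 * pre-suspend value instead of jumping forward. */
static int example_sched_clock_suspend(void)
{
        update_sched_clock();
        cd.suspended = true;
        return 0;
}

static void example_sched_clock_resume(void)
{
        cd.epoch_cyc = read_sched_clock();
        cd.suspended = false;
}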
Signed-off-by: Colin Cross <ccross@android.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM fixes from Russell King:
"This fixes various issues found during July"
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7479/1: mm: avoid NULL dereference when flushing gate_vma with VIVT caches
ARM: Fix undefined instruction exception handling
ARM: 7480/1: only call smp_send_stop() on SMP
ARM: 7478/1: errata: extend workaround for erratum #720789
ARM: 7477/1: vfp: Always save VFP state in vfp_pm_suspend on UP
ARM: 7476/1: vfp: only clear vfp state for current cpu in vfp_pm_suspend
ARM: 7468/1: ftrace: Trace function entry before updating index
ARM: 7467/1: mutex: use generic xchg-based implementation for ARMv6+
ARM: 7466/1: disable interrupt before spinning endlessly
ARM: 7465/1: Handle >4GB memory sizes in device tree and mem=size@start option
The vivt_flush_cache_{range,page} functions check that the mm_struct
of the VMA being flushed has been active on the current CPU before
performing the cache maintenance.
The gate_vma has a NULL mm_struct pointer and, as such, will cause a
kernel fault if we try to flush it with the above operations. This
happens during ELF core dumps, which include the gate_vma as it may be
useful for debugging purposes.
This patch adds checks to the VIVT cache flushing functions so that VMAs
with a NULL mm_struct are flushed unconditionally (the vectors page may
be dirty if we use it to store the current TLS pointer).
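The shape of the added check, shown for the range variant (a close paraphrase
of the VIVT helper; treat it as a sketch):

static inline void example_vivt_flush_cache_range(struct vm_area_struct *vma,
                                                  unsigned long start,
                                                  unsigned long end)
{
        struct mm_struct *mm = vma->vm_mm;

        /* A NULL mm (e.g. the gate_vma during an ELF core dump) is flushed
         * unconditionally instead of being dereferenced. */
        if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
                __cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
                                        vma->vm_flags);
}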
Cc: <stable@vger.kernel.org> # 3.4+
Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
Tested-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The open-coded mutex implementation for ARMv6+ cores suffers from a
severe lack of barriers, so in the uncontended case we don't actually
protect any accesses performed during the critical section.
Furthermore, the code is largely a duplication of the ARMv6+ atomic_dec
code but optimised to remove a branch instruction, as the mutex fastpath
was previously inlined. Now that this is executed out-of-line, we can
reuse the atomic access code for the locking (in fact, we use the xchg
code as this produces shorter critical sections).
This patch uses the generic xchg based implementation for mutexes on
ARMv6+, which introduces barriers to the lock/unlock operations and also
has the benefit of removing a fair amount of inline assembly code.
Cc: <stable@vger.kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Shan Kang <kangshan0910@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge Andrew's first set of patches:
"Non-MM patches:
- lots of misc bits
- tree-wide have_clk() cleanups
- quite a lot of printk tweaks. I draw your attention to "printk:
convert the format for KERN_<LEVEL> to a 2 byte pattern" which
looks a bit scary. But afaict it's solid.
- backlight updates
- lib/ feature work (notably the addition and use of memweight())
- checkpatch updates
- rtc updates
- nilfs updates
- fatfs updates (partial, still waiting for acks)
- kdump, proc, fork, IPC, sysctl, taskstats, pps, etc
- new fault-injection feature work"
* Merge emailed patches from Andrew Morton <akpm@linux-foundation.org>: (128 commits)
drivers/misc/lkdtm.c: fix missing allocation failure check
lib/scatterlist: do not re-write gfp_flags in __sg_alloc_table()
fault-injection: add tool to run command with failslab or fail_page_alloc
fault-injection: add selftests for cpu and memory hotplug
powerpc: pSeries reconfig notifier error injection module
memory: memory notifier error injection module
PM: PM notifier error injection module
cpu: rewrite cpu-notifier-error-inject module
fault-injection: notifier error injection
c/r: fcntl: add F_GETOWNER_UIDS option
resource: make sure requested range is included in the root range
include/linux/aio.h: cpp->C conversions
fs: cachefiles: add support for large files in filesystem caching
pps: return PTR_ERR on error in device_create
taskstats: check nla_reserve() return
sysctl: suppress kmemleak messages
ipc: use Kconfig options for __ARCH_WANT_[COMPAT_]IPC_PARSE_VERSION
ipc: compat: use signed size_t types for msgsnd and msgrcv
ipc: allow compat IPC version field parsing if !ARCH_WANT_OLD_COMPAT_IPC
ipc: add COMPAT_SHMLBA support
...
Rather than #define the options manually in the architecture code, add
Kconfig options for them and select them there instead. This also allows
us to select the compat IPC version parsing automatically for platforms
using the old compat IPC interface.
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull DMA-mapping updates from Marek Szyprowski:
"Those patches are continuation of my earlier work.
They contains extensions to DMA-mapping framework to remove limitation
of the current ARM implementation (like limited total size of DMA
coherent/write combine buffers), improve performance of buffer sharing
between devices (attributes to skip cpu cache operations or creation
of additional kernel mapping for some specific use cases) as well as
some unification of the common code for dma_mmap_attrs() and
dma_mmap_coherent() functions. All extensions have been implemented
and tested for ARM architecture."
* 'for-linus-for-3.6-rc1' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
ARM: dma-mapping: add support for DMA_ATTR_SKIP_CPU_SYNC attribute
common: DMA-mapping: add DMA_ATTR_SKIP_CPU_SYNC attribute
ARM: dma-mapping: add support for dma_get_sgtable()
common: dma-mapping: introduce dma_get_sgtable() function
ARM: dma-mapping: add support for DMA_ATTR_NO_KERNEL_MAPPING attribute
common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute
common: dma-mapping: add support for generic dma_mmap_* calls
ARM: dma-mapping: fix error path for memory allocation failure
ARM: dma-mapping: add more sanity checks in arm_dma_mmap()
ARM: dma-mapping: remove custom consistent dma region
mm: vmalloc: use const void * for caller argument
scatterlist: add sg_alloc_table_from_pages function
This patch adds support for the dma_get_sgtable() function, which is required
to let drivers share the buffers allocated by the DMA-mapping subsystem.
The generic implementation based on virt_to_page() is not suitable for the ARM
dma-mapping subsystem.
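A hedged example of the driver-side call this enables (the device, buffer and
error handling are illustrative):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Export a buffer obtained from dma_alloc_* as a scatter-gather table so
 * it can be handed to another device (e.g. through a dma-buf exporter). */
static int example_export_buffer(struct device *dev, void *cpu_addr,
                                 dma_addr_t dma_addr, size_t size)
{
        struct sg_table sgt;
        int ret;

        ret = dma_get_sgtable(dev, &sgt, cpu_addr, dma_addr, size);
        if (ret)
                return ret;

        /* hand &sgt over to the importing driver here */

        sg_free_table(&sgt);
        return 0;
}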
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Commit 9adc5374 ('common: dma-mapping: introduce mmap method') added a
generic method for implementing the mmap user call to the dma_map_ops
structure.
This patch converts the ARM and PowerPC architectures (the only providers of
dma_mmap_coherent/dma_mmap_writecombine calls) to use this generic
dma_map_ops based call and adds a generic cross-architecture definition for
the dma_mmap_attrs, dma_mmap_coherent and dma_mmap_writecombine functions.
A generic virt_to_page-based fallback mmap implementation is provided for
architectures which don't provide their own implementation of the mmap
method.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch changes the dma-mapping subsystem to use generic vmalloc areas
for all consistent dma allocations. This increases the total size limit
of consistent allocations and removes platform hacks and a lot of
duplicated code.
Atomic allocations are served from a special pool preallocated at boot,
because vmalloc areas cannot be reliably created in atomic context.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
The memory regions which are passed to arm_add_memory() from
device tree blobs via early_init_dt_add_memory_arch() can
have sizes which are larger than will fit in a 32 bit integer,
so switch to using a phys_addr_t to hold them, to avoid
silently dropping the top 32 bits of the size. Similarly, use
phys_addr_t in early_mem() so that mem=size@start command line
options specifying more than 4GB behave sensibly.
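A hedged sketch of the mem=size@start parsing with 64-bit-safe types
(simplified relative to the real early_mem()):

#include <linux/init.h>
#include <linux/kernel.h>

/* memparse() returns an unsigned long long, so keeping the result in a
 * phys_addr_t preserves sizes above 4GB on LPAE systems. */
static int __init example_early_mem(char *p)
{
        phys_addr_t size, start = PHYS_OFFSET;
        char *endp;

        size = memparse(p, &endp);
        if (*endp == '@')
                start = memparse(endp + 1, NULL);

        arm_add_memory(start, size);
        return 0;
}
early_param("mem", example_early_mem);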
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM updates from Russell King:
"First ARM push of this merge window, post me coming back from holiday.
This is what has been in linux-next for the last few weeks. Not much
to say which isn't described by the commit summaries."
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (32 commits)
ARM: 7463/1: topology: Update cpu_power according to DT information
ARM: 7462/1: topology: factorize the update of sibling masks
ARM: 7461/1: topology: Add arch_scale_freq_power function
ARM: 7456/1: ptrace: provide separate functions for tracing syscall {entry,exit}
ARM: 7455/1: audit: move syscall auditing until after ptrace SIGTRAP handling
ARM: 7454/1: entry: don't bother with syscall tracing on ret_from_fork path
ARM: 7453/1: audit: only allow syscall auditing for pure EABI userspace
ARM: 7452/1: delay: allow timer-based delay implementation to be selected
ARM: 7451/1: arch timer: implement read_current_timer and get_cycles
ARM: 7450/1: dcache: select DCACHE_WORD_ACCESS for little-endian ARMv6+ CPUs
ARM: 7449/1: use generic strnlen_user and strncpy_from_user functions
ARM: 7448/1: perf: remove arm_perf_pmu_ids global enumeration
ARM: 7447/1: rwlocks: remove unused branch labels from trylock routines
ARM: 7446/1: spinlock: use ticket algorithm for ARMv6+ locking implementation
ARM: 7445/1: mm: update CONTEXTIDR register to contain PID of current process
ARM: 7444/1: kernel: add arch-timer C3STOP feature
ARM: 7460/1: remove asm/locks.h
ARM: 7439/1: head.S: simplify initial page table mapping
ARM: 7437/1: zImage: Allow DTB command line concatenation with ATAG_CMDLINE
ARM: 7436/1: Do not map the vectors page as write-through on UP systems
...
Pull final kmap_atomic cleanups from Cong Wang:
"This should be the final round of cleanup, as the definitions of enum
km_type finally get removed from the whole tree. The patches have
been in linux-next for a long time."
* 'kmap_atomic' of git://github.com/congwang/linux:
pipe: remove KM_USER0 from comments
vmalloc: remove KM_USER0 from comments
feature-removal-schedule.txt: remove kmap_atomic(page, km_type)
tile: remove km_type definitions
um: remove km_type definitions
asm-generic: remove km_type definitions
avr32: remove km_type definitions
frv: remove km_type definitions
powerpc: remove km_type definitions
arm: remove km_type definitions
highmem: remove the deprecated form of kmap_atomic
tile: remove usage of enum km_type
frv: remove the second parameter of kmap_atomic_primary()
jbd2: remove the second argument of kmap_atomic
The I.MX platform is getting converted to use sparse IRQs. We are doing
this for all platforms over time, because this is one of the
requirements for building a multiplatform kernel, and generally a good
idea.
Merge tag 'irq' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull arm-soc sparse IRQ conversion from Arnd Bergmann:
"The I.MX platform is getting converted to use sparse IRQs. We are
doing this for all platforms over time, because this is one of the
requirements for building a multiplatform kernel, and generally a good
idea."
* tag 'irq' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: imx: select USE_OF
ARM: imx: Fix build error due to missing irqs.h include
ARM: imx: enable SPARSE_IRQ for imx platform
ARM: fiq: change FIQ_START to a variable
tty: serial: imx: remove the use of MXC_INTERNAL_IRQS
ARM: imx: remove unneeded mach/irq.h inclusion
i2c: imx: remove unneeded mach/irqs.h inclusion
ARM: imx: add a legacy irqdomain for mx31ads
ARM: imx: add a legacy irqdomain for 3ds_debugboard
ARM: imx: pass gpio than irq number into mxc_expio_init
ARM: imx: leave irq_base of wm8350_platform_data uninitialized
dma: ipu: remove the use of ipu_platform_data
ARM: imx: move irq_domain_add_legacy call into avic driver
ARM: imx: move irq_domain_add_legacy call into tzic driver
gpio/mxc: move irq_domain_add_legacy call into gpio driver
ARM: imx: eliminate macro IRQ_GPIOx()
ARM: imx: eliminate macro IOMUX_TO_IRQ()
ARM: imx: eliminate macro IMX_GPIO_TO_IRQ()
This patch allows a timer-based delay implementation to be selected by
switching the delay routines over to use get_cycles, which is
implemented in terms of read_current_timer. This further allows us to
skip the loop calibration and have a consistent delay function in the
face of core frequency scaling.
To avoid the pain of dealing with memory-mapped counters, this
implementation uses the co-processor interface to the architected timers
when they are available. The previous loop-based implementation is
kept around for CPUs without the architected timers and we retain both
the maximum delay (2ms) and the corresponding conversion factors for
determining the number of loops required for a given interval. Since the
indirection of the timer routines will only work when called from C,
the sa1100 sleep routines are modified to branch to the loop-based delay
functions directly.
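At its core, the timer-based variant is just a polling loop on get_cycles();
a minimal sketch, ignoring the calibration bookkeeping and the loop-based
fallback described above:

#include <linux/timex.h>
#include <asm/processor.h>

/* Busy-wait for the given number of timer cycles. */
static void example_timer_delay(unsigned long cycles)
{
        cycles_t start = get_cycles();

        while ((get_cycles() - start) < cycles)
                cpu_relax();
}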
Tested-by: Shinya Kuribayashi <shinya.kuribayashi.px@renesas.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch implements read_current_timer using the architected timers
when they are selected via CONFIG_ARM_ARCH_TIMER. If they are detected
not to be usable at runtime, we return -ENXIO to the caller.
Furthermore, if read_current_timer is exported then we can implement
get_cycles in terms of it for use as both an entropy source and for
implementing __udelay and friends.
Tested-by: Shinya Kuribayashi <shinya.kuribayashi.px@renesas.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string
comparisons in the vfs layer.
This patch implements support for load_unaligned_zeropad for ARM CPUs
with native support for unaligned memory accesses (v6+) when running
little-endian.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch implements the word-at-a-time interface for ARM using the
same algorithm as x86. We use the fls macro from ARMv5 onwards, where
we have a clz instruction available which saves us a mov instruction
when targeting Thumb-2. For older CPUs, we use the magic 0x0ff0001
constant. Big-endian configurations make use of the implementation from
asm-generic.
With this implemented, we can replace our byte-at-a-time strnlen_user
and strncpy_from_user functions with the optimised generic versions.
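The trick behind such constants is the classic zero-byte detector; a hedged
sketch for 32-bit little-endian words (not the exact ARM implementation):

/* Non-zero iff some byte of v is zero: a zero byte underflows when 0x01
 * is subtracted from it, setting its top bit, and "& ~v" discards bytes
 * whose top bit was already set in v. */
static inline unsigned long example_has_zero_byte(unsigned long v)
{
        return (v - 0x01010101UL) & ~v & 0x80808080UL;
}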
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order to provide PMU name strings compatible with the OProfile
user ABI, an enumeration of all PMUs is currently used by perf to
identify each PMU uniquely. Unfortunately, this does not scale well
in the presence of multiple PMUs and creates a single, global namespace
across all PMUs in the system.
This patch removes the enumeration and instead uses the name string
for the PMU to map onto the OProfile variant. perf_pmu_name is
implemented for CPU PMUs, which is all that OProfile cares about anyway.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM arch_{read,write}_trylock implementations include unused
backwards branch labels, since we don't retry the locking operation
if the exclusive store fails.
This patch removes the labels.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ticket spinlocks ensure locking fairness by introducing a FIFO-like
nature to the granting of lock acquisitions and also reducing the
thundering herd effect when spinning on a lock by allowing the cacheline
to remain in a shared state amongst the waiting CPUs. This is especially
important on systems where memory-access times are not necessarily
uniform when accessing the lock structure (for example, on a
multi-cluster platform where the lock is allocated into L1 when a CPU
releases it).
This patch implements the ticket spinlock algorithm for ARM, replacing
the simpler implementation for ARMv6+ processors.
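A C-level sketch of the ticket-lock idea (not the ARM ldrex/strex assembly;
GCC __atomic builtins stand in for the real primitives):

struct example_ticket_lock {
        unsigned int next;      /* next ticket to hand out */
        unsigned int owner;     /* ticket currently being served */
};

static void example_ticket_spin_lock(struct example_ticket_lock *lock)
{
        unsigned int ticket = __atomic_fetch_add(&lock->next, 1,
                                                 __ATOMIC_RELAXED);

        /* Spin until our ticket comes up; waiters are served in FIFO order
         * and the line holding 'owner' can stay shared between them. */
        while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != ticket)
                ;
}

static void example_ticket_spin_unlock(struct example_ticket_lock *lock)
{
        unsigned int owner = __atomic_load_n(&lock->owner, __ATOMIC_RELAXED);

        __atomic_store_n(&lock->owner, owner + 1, __ATOMIC_RELEASE);
}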
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 64ac24e738 ("Generic semaphore
implementation") removed the last include of this header. Apparently it
was just an oversight to keep this header. It can safely be removed now.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fix:
net/netfilter/xt_connbytes.c: In function 'connbytes_mt':
net/netfilter/xt_connbytes.c:43: warning: passing argument 1 of 'atomic64_read' discards qualifiers from pointer target type
...
by adding the missing const.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This reverts commit 6b5c8045ec.
Conflicts:
arch/arm/kernel/ptrace.c
The new syscall restarting code can lead to problems if we take an
interrupt in userspace just before restarting the svc instruction. If
a signal is delivered when returning from the interrupt, the
TIF_SYSCALL_RESTARTSYS will remain set and cause any syscalls executed
from the signal handler to be treated as a restart of the previously
interrupted system call. This includes the final sigreturn call, meaning
that we may fail to exit from the signal context. Furthermore, if a
system call made from the signal handler requires a restart via the
restart_block, it is possible to clear the thread flag and fail to
restart the originally interrupted system call.
The right solution to this problem is to perform the restarting in the
kernel, avoiding the possibility of handling a further signal before the
restart is complete. Since we're almost at -rc6, let's revert the new
method for now and aim for in-kernel restarting at a later date.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Avoid polluting drivers with a set_domain() macro, which interferes with
structure member names:
drivers/net/wireless/ath/ath9k/dfs_pattern_detector.c:294:33: error: macro "set_domain" passed 2 arguments, but takes just 1
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
From Shawn Guo <shawn.guo@linaro.org>, this makes it possible to use
sparse irqs with mach-imx.
* 'imx/sparse-irq' of git://git.linaro.org/people/shawnguo/linux-2.6:
ARM: imx: enable SPARSE_IRQ for imx platform
ARM: fiq: change FIQ_START to a variable
tty: serial: imx: remove the use of MXC_INTERNAL_IRQS
ARM: imx: remove unneeded mach/irq.h inclusion
i2c: imx: remove unneeded mach/irqs.h inclusion
ARM: imx: add a legacy irqdomain for mx31ads
ARM: imx: add a legacy irqdomain for 3ds_debugboard
ARM: imx: pass gpio than irq number into mxc_expio_init
ARM: imx: leave irq_base of wm8350_platform_data uninitialized
dma: ipu: remove the use of ipu_platform_data
ARM: imx: move irq_domain_add_legacy call into avic driver
ARM: imx: move irq_domain_add_legacy call into tzic driver
gpio/mxc: move irq_domain_add_legacy call into gpio driver
ARM: imx: eliminate macro IRQ_GPIOx()
ARM: imx: eliminate macro IOMUX_TO_IRQ()
ARM: imx: eliminate macro IMX_GPIO_TO_IRQ()
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Commit a2be01b (ARM: only include mach/irqs.h for !SPARSE_IRQ) makes
mach/irqs.h only be included for !SPARSE_IRQ builds. A number of
platforms have FIQ_START defined in mach/irqs.h for FIQ
support.
arch/arm/mach-rpc/include/mach/irqs.h:#define FIQ_START 64
arch/arm/mach-s3c24xx/include/mach/irqs.h:#define FIQ_START IRQ_EINT0
arch/arm/plat-mxc/include/mach/irqs.h:#define FIQ_START 0
If SPARSE_IRQ is enabled for any of these platforms, the following
compile error will be seen.
arch/arm/kernel/fiq.c: In function ‘enable_fiq’:
arch/arm/kernel/fiq.c:127:19: error: ‘FIQ_START’ undeclared (first use in this function)
arch/arm/kernel/fiq.c:127:19: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/kernel/fiq.c: In function ‘disable_fiq’:
arch/arm/kernel/fiq.c:132:20: error: ‘FIQ_START’ undeclared (first use in this function)
This patch changes the FIQ code so that init_FIQ() takes FIQ_START from the
platform as a parameter and assigns it to the variable fiq_start, which
replaces the FIQ_START uses in enable_fiq()/disable_fiq().
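A simplified sketch of the resulting code (the bookkeeping in the real fiq.c
is omitted):

#include <linux/init.h>
#include <linux/interrupt.h>

/* FIQ_START becomes a variable handed in by the platform at init time. */
static int fiq_start;

void enable_fiq(int fiq)
{
        enable_irq(fiq + fiq_start);
}

void disable_fiq(int fiq)
{
        disable_irq(fiq + fiq_start);
}

void __init init_FIQ(int start)
{
        fiq_start = start;
}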
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>