Commit graph

Shaohui Xie
a655f724df powerpc/85xx: handle the eLBC error interrupt if it exists in dts
On P1020, P1021, P1022, and P1023, eLBC event interrupts are routed
to internal interrupt 3 while eLBC error interrupts are routed to
internal interrupt 0.  We need to call request_irq for each.

Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
[scottwood@freescale.com: reworded commit message and fixed author]
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-10 17:19:27 -06:00
Wang Dongsheng
297649b9f5 powerpc/dts: fix lbc lack of error interrupt
On P1020, P1021, P1022, and P1023, an error interrupt is triggered when
the lbc encounters an error. The corresponding interrupt is internal
IRQ0, so the system has to handle the lbc IRQ0 interrupt.

The corresponding lbc general interrupt is internal IRQ3.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
[scottwood@freescale.com: bracketed individual list elements]
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-10 17:18:36 -06:00
Stephen Chivers
be2019816e powerpc/embedded6xx: Add support for Motorola/Emerson MVME5100
Add support for the Motorola/Emerson MVME5100 Single Board Computer.

The MVME5100 is a 6U form factor VME64 computer with:

	- A single MPC7410 or MPC750 CPU
	- A HAWK Processor Host Bridge (CPU to PCI) and
	  MultiProcessor Interrupt Controller (MPIC)
	- Up to 500MB of onboard memory
	- A M48T37 Real Time Clock (RTC) and Non-Volatile Memory chip
	- Two 16550 compatible UARTS
	- Two Intel E100 Fast Ethernets
	- Two PCI Mezzanine Card (PMC) Slots
	- PPCBug Firmware

The HAWK PHB/MPIC is compatible with the MPC10x devices.

There is no onboard disk support. This is usually provided by installing a PMC
in the first PMC slot.

This patch revives the board support that was present in early 2.6
series kernels, originally written by Matt Porter of MontaVista
Software.

CSC Australia has around 31 of these boards in service. The kernel in use
for the boards is based on 2.6.31. The boards are operated without disks
from a file server.

This patch is based on linux-3.13-rc2 and has been boot tested.

Only boards with 512MB of memory are known to work.

Signed-off-by: Stephen Chivers <schivers@csc.com>
Tested-by: Alessio Igor Bogani <alessio.bogani@elettra.eu>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:20 -06:00
Scott Wood
bbead78c06 powerpc/fsl-book3e-64: Use paca for hugetlb TLB1 entry selection
This keeps usage coordinated for hugetlb and indirect entries, which
should make entry selection more predictable and probably improve overall
performance when mixing the two.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:20 -06:00
Scott Wood
28efc35fe6 powerpc/e6500: TLB miss handler with hardware tablewalk support
There are a few things that make the existing hw tablewalk handlers
unsuitable for e6500:

 - Indirect entries go in TLB1 (though the resulting direct entries go in
   TLB0).

 - It has threads, but no "tlbsrx." -- so we need a spinlock and
   a normal "tlbsx".  Because we need this lock, hardware tablewalk
   is mandatory on e6500 unless we want to add spinlock+tlbsx to
   the normal bolted TLB miss handler.

 - TLB1 has no HES (nor next-victim hint) so we need software round robin
   (TODO: integrate this round robin data with hugetlb/KVM)

 - The existing tablewalk handlers map half of a page table at a time,
   because IBM hardware has a fixed 1MiB indirect page size.  e6500
   has variable size indirect entries, with a minimum of 2MiB.
   So we can't do the half-page indirect mapping, and even if we
   could it would be less efficient than mapping the full page.

 - Like on e5500, the linear mapping is bolted, so we don't need the
   overhead of supporting nested tlb misses.

Note that hardware tablewalk does not work in rev1 of e6500.
We do not expect to support e6500 rev1 in mainline Linux.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
2014-01-09 17:52:19 -06:00
Scott Wood
47ce8af420 powerpc: add barrier after writing kernel PTE
There is no barrier between something like ioremap() writing to
a PTE, and returning the value to a caller that may then store the
pointer in a place that is visible to other CPUs.  Such callers
generally don't perform barriers of their own.

Even if callers of ioremap() and similar things did use barriers,
the most logical choice would be smp_wmb(), which is not
architecturally sufficient when BookE hardware tablewalk is used.  A
full sync is specified by the architecture.

For userspace mappings, OTOH, we generally already have an lwsync due
to locking, and if we occasionally take a spurious fault due to not
having a full sync with hardware tablewalk, it will not be fatal
because we will retry rather than oops.
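
As an illustration of the hazard (a sketch, not the patch itself; the
helper name and arguments are placeholders around the real
set_pte_at()/mb() kernel primitives):

    #include <linux/mm.h>
    #include <asm/barrier.h>

    /* Sketch only: the ordering problem described above. */
    static void publish_kernel_mapping(unsigned long va, pte_t *ptep, pte_t pte)
    {
            set_pte_at(&init_mm, va, ptep, pte);    /* write the kernel PTE */
            mb();   /* full "sync": smp_wmb() is not enough once BookE
                     * hardware tablewalk can read the page tables */
    }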

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:19 -06:00
Kevin Hao
dde7dd3d67 powerpc/fsl_booke: enable the relocatable for the kdump kernel
RELOCATABLE is more flexible and has no alignment restriction, and it
is a superset of DYNAMIC_MEMSTART. So use it by default for a kdump
kernel.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:18 -06:00
Kevin Hao
0be7d969b0 powerpc/fsl_booke: smp support for booting a relocatable kernel above 64M
When booting a secondary cpu above 64M, we face the same issue as on
the boot cpu: PAGE_OFFSET maps to two different physical addresses for
the init tlb and the final map. So we have to use
switch_to_as1/restore_to_as0 when converting between these two maps.
When restoring to AS0 on a secondary cpu, we only need to return to
the caller, so add a new parameter to restore_to_as0 for this purpose.

Use LOAD_REG_ADDR_PIC to get the address of variables that may be
used before we set the final map in the CAMs for the secondary cpu.
Set the CAMs a bit earlier in order to avoid unnecessary use of
LOAD_REG_ADDR_PIC.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:18 -06:00
Kevin Hao
7d2471f9fa powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
For a non-relocatable kernel this is always true; otherwise the kernel
would get stuck. For a relocatable kernel it is a little more
complicated: when booting, we align the kernel start address to 64M
and map PAGE_OFFSET from there, and the relocation is based on this
virtual address. But if this address is not the same as memstart_addr,
we have to change the PAGE_OFFSET mapping to the real memstart_addr
and relocate again.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
[scottwood@freescale.com: make offset long and non-negative in simple case]
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:17 -06:00
Kevin Hao
813125d833 powerpc/fsl_booke: introduce map_mem_in_cams_addr
Introduce this function so we can set both the physical and virtual
address for the map in cams. This will be used by the relocation code.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:17 -06:00
Kevin Hao
b27652dd21 powerpc: introduce early_get_first_memblock_info
Since a relocatable kernel can be loaded at any address, there is no
relation between the kernel start address and memstart_addr, so we
can't calculate memstart_addr from the kernel start address. We also
can't wait to do the relocation until after we get the real
memstart_addr from the device tree, because that is too late. So
introduce a new function that can be used to get the first memblock
address and size at a very early stage (before machine_init).

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:17 -06:00
Kevin Hao
78a235efdc powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
We use tlb1 entries to map low memory into kernel space. The current
code assumes that the first tlb entry covers the kernel image, but
this is not true in some special cases, such as running a relocatable
kernel above 64M or setting CONFIG_KERNEL_START above 64M. So switch
to address space 1 before setting these tlb entries.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:16 -06:00
Kevin Hao
dd189692d4 powerpc: enable the relocatable support for the fsl booke 32bit kernel
This is based on the code in head_44x.S. The difference is that the
init tlb size we use is 64M. With this patch the kernel can only be
loaded at an address between memstart_addr and memstart_addr + 64M.
We will fix this restriction in the following patches.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:16 -06:00
Kevin Hao
1c49abec67 powerpc: introduce macro LOAD_REG_ADDR_PIC
This is used to get the address of a variable when the kernel is not
running at the linked or relocated address.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:15 -06:00
Kevin Hao
99739611e8 powerpc/fsl_booke: introduce get_phys_addr function
Move the code that translates an effective address to a physical
address into a separate function so it can be reused by other code.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:15 -06:00
Kevin Hao
7c732cba3d powerpc/fsl_booke: protect the access to MAS7
e500v1 does not implement MAS7, so we should avoid accessing this
register on those implementations. In the current kernel, accesses to
MAS7 are protected by either CONFIG_PHYS_64BIT or MMU_FTR_BIG_PHYS.
Since some code is executed before the code patching, we have to use
CONFIG_PHYS_64BIT in those cases.
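
A C-level sketch of the guard, for illustration only (the patch itself
mostly touches assembly paths, and the helper name here is hypothetical):

    #include <linux/kernel.h>       /* upper_32_bits()    */
    #include <asm/reg.h>            /* mtspr(), SPRN_MAS7 */

    /* Hypothetical helper illustrating the guard; e500v1 has no MAS7 at all. */
    static inline void write_mas7_if_present(u64 phys)
    {
    #ifdef CONFIG_PHYS_64BIT
            mtspr(SPRN_MAS7, upper_32_bits(phys));
    #endif
    }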

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:14 -06:00
Wang Dongsheng
d2dc13b533 powerpc/mpic_timer: fix convert ticks to time subtraction overflow
In some cases tmp_sec may be greater than ticks, because both ticks
and tmp_sec are rounded during the calculation.
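
A minimal sketch of the kind of clamping this implies (names are
illustrative, not the driver's):

    #include <linux/types.h>

    /* Sketch only: clamp the subtraction so the rounding in tmp_sec cannot
     * make an unsigned value wrap around. */
    static u64 remaining_ticks(u64 ticks, u64 tmp_sec, u64 ticks_per_sec)
    {
            u64 consumed = tmp_sec * ticks_per_sec;

            return consumed < ticks ? ticks - consumed : 0;
    }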

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:14 -06:00
Wang Dongsheng
0fd79588f9 powerpc/mpic_timer: fix inaccurate time caused by GTCCR toggle bit
When the timer GTCCR toggle bit is inverted, the remaining time we
calculate is not accurate, so we need to ignore this bit.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:13 -06:00
Wang Dongsheng
7f83a50ce3 powerpc/p1022ds: add an interrupt for the rtc node
Add an external interrupt for the rtc node.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:13 -06:00
Wang Dongsheng
1e7bf895cc powerpc/p1022ds: fix rtc compatible string
The RTC hardware (ds3232) and the rtc compatible string do not match.
Change "dallas,ds1339" to "dallas,ds3232".

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:52:13 -06:00
Wang Dongsheng
a7189483f0 powerpc/85xx: add sysfs for pw20 state and altivec idle
Add a sysfs interface to enable/disable the pw20 state or AltiVec idle,
and to control the wait entry time.

Enable/Disable interface:
    0, disable. 1, enable.
    /sys/devices/system/cpu/cpuX/pw20_state
    /sys/devices/system/cpu/cpuX/altivec_idle

Set wait time interface (nanoseconds):
    /sys/devices/system/cpu/cpuX/pw20_wait_time
    /sys/devices/system/cpu/cpuX/altivec_idle_wait_time
Example, based on a TB frequency of 41MHz:
    1~48(ns): TB[63]
    49~97(ns): TB[62]
    98~195(ns): TB[61]
    196~390(ns): TB[60]
    391~780(ns): TB[59]
    781~1560(ns): TB[58]
    ...
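
The table follows from the timebase period (about 24ns per tick, taking
the 41MHz above as ~41.66MHz so the numbers line up): each higher TB bit
doubles the wait. A standalone sketch of that arithmetic, illustrative
only; rounding at the range boundaries may differ from the kernel's:

    #include <stdio.h>

    /* Map a wait time in ns to the timebase bit whose period covers it,
     * given tb_hz timebase ticks per second (illustrative helper only). */
    static int tb_bit_for_wait_ns(unsigned long long wait_ns,
                                  unsigned long long tb_hz)
    {
            unsigned long long ticks = wait_ns * tb_hz / 1000000000ULL;
            int bit = 63;               /* TB[63] is the fastest-toggling bit */

            while (ticks > 1) {         /* each doubling moves up one TB bit  */
                    ticks >>= 1;
                    bit--;
            }
            return bit;
    }

    int main(void)
    {
            /* 100ns and 300ns fall in the TB[61] and TB[60] rows above */
            printf("%d %d\n", tb_bit_for_wait_ns(100, 41666666ULL),
                              tb_bit_for_wait_ns(300, 41666666ULL));
            return 0;
    }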

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
[scottwood@freescale.com: change ifdef]
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-09 17:51:38 -06:00
Alexander Graf
7a8ff56be6 KVM: PPC: Unify kvmppc_get_last_inst and sc
We had code duplication between the inline functions to get our last
instruction on normal interrupts and system call interrupts. Unify
both helper functions towards a single implementation.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 11:10:46 +01:00
Zhouyi Zhou
47d45d9f53 KVM: PPC: NULL return of kvmppc_mmu_hpte_cache_next should be handled
NULL return of kvmppc_mmu_hpte_cache_next should be handled
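
The shape of the fix, as a sketch (the enclosing caller and its return
convention are illustrative):

    struct hpte_cache *pte;

    pte = kvmppc_mmu_hpte_cache_next(vcpu);
    if (!pte)
            return;         /* allocation failed: bail out rather than
                             * dereferencing a NULL pointer later */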

Signed-off-by: Zhouyi Zhou <yizhouzhou@ict.ac.cn>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:11 +01:00
Tiejun Chen
9bd880a2c8 KVM: PPC: Book3E HV: call RECONCILE_IRQ_STATE to sync the software state
Rather than calling hard_irq_disable() when we're back in C code
we can just call RECONCILE_IRQ_STATE to soft disable IRQs while
we're already in hard disabled state.

This should be functionally equivalent to the code before, but
cleaner and faster.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
[agraf: fix comment, commit message]
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:10 +01:00
Bharat Bhushan
08c9a188d0 kvm: powerpc: use caching attributes as per linux pte
KVM uses the same WIM tlb attributes as the corresponding qemu pte.
For this we now search the linux pte for the requested page and get
the caching/coherency attributes from that pte.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:08 +01:00
Bharat Bhushan
f5e3fe091f kvm: powerpc: define a linux pte lookup function
We need to look up the linux "pte" to get its attributes for setting a TLB entry in KVM.
This patch defines a lookup_linux_ptep() function which returns the pte pointer.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:07 +01:00
Bharat Bhushan
7c85e6b39c kvm: book3s: rename lookup_linux_pte() to lookup_linux_pte_and_update()
lookup_linux_pte() does more than a lookup: it also updates the pte.
So for clarity it is renamed to lookup_linux_pte_and_update().

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:06 +01:00
Bharat Bhushan
30a91fe24b kvm: booke: clear host tlb reference flag on guest tlb invalidation
On booke, "struct tlbe_ref" contains host tlb mapping information
(pfn: for guest-pfn to pfn, flags: attribute associated with this mapping)
for a guest tlb entry. So when a guest creates a TLB entry then
"struct tlbe_ref" is set to point to valid "pfn" and set attributes in
"flags" field of the above said structure. When a guest TLB entry is
invalidated then flags field of corresponding "struct tlbe_ref" is
updated to point that this is no more valid, also we selectively clear
some other attribute bits, example: if E500_TLB_BITMAP was set then we clear
E500_TLB_BITMAP, if E500_TLB_TLB0 is set then we clear this.

Ideally we should clear complete "flags" as this entry is invalid and does not
have anything to re-used. The other part of the problem is that when we use
the same entry again then also we do not clear (started doing or-ing etc).

So far it was working because the selectively clearing mentioned above
actually clears "flags" what was set during TLB mapping. But the problem
starts coming when we add more attributes to this then we need to selectively
clear them and which is not needed.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:04 +01:00
Paul Mackerras
595e4f7e69 KVM: PPC: Book3S HV: Use load/store_fp_state functions in HV guest entry/exit
This modifies kvmppc_load_fp and kvmppc_save_fp to use the generic
FP/VSX and VMX load/store functions instead of open-coding the
FP/VSX/VMX load/store instructions.  Since kvmppc_load/save_fp don't
follow C calling conventions, we make them private symbols within
book3s_hv_rmhandlers.S.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:03 +01:00
Paul Mackerras
99dae3bad2 KVM: PPC: Load/save FP/VMX/VSX state directly to/from vcpu struct
Now that we have the vcpu floating-point and vector state stored in
the same type of struct as the main kernel uses, we can load that
state directly from the vcpu struct instead of having extra copies
to/from the thread_struct.  Similarly, when the guest state needs to
be saved, we can have it saved directly to the vcpu struct by
setting the current->thread.fp_save_area and current->thread.vr_save_area
pointers.  That also means that we don't need to back up and restore
userspace's FP/vector state.  This all makes the code simpler and
faster.

Note that it's not necessary to save or modify current->thread.fpexc_mode,
since nothing in KVM uses or is affected by its value.  Nor is it
necessary to touch used_vr or used_vsr.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:02 +01:00
Paul Mackerras
efff191223 KVM: PPC: Store FP/VSX/VMX state in thread_fp/vr_state structures
This uses struct thread_fp_state and struct thread_vr_state to store
the floating-point, VMX/Altivec and VSX state, rather than flat arrays.
This makes transferring the state to/from the thread_struct simpler
and allows us to unify the get/set_one_reg implementations for the
VSX registers.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:15:00 +01:00
Paul Mackerras
09548fdaf3 KVM: PPC: Use load_fp/vr_state rather than load_up_fpu/altivec
The load_up_fpu and load_up_altivec functions were never intended to
be called from C, and do things like modifying the MSR value in their
callers' stack frames, which are assumed to be interrupt frames.  In
addition, on 32-bit Book S they require the MMU to be off.

This makes KVM use the new load_fp_state() and load_vr_state() functions
instead of load_up_fpu/altivec.  This means we can remove the assembler
glue in book3s_rmhandlers.S, and potentially fixes a bug on Book E,
where load_up_fpu was called directly from C.
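
A sketch of what a call site becomes (field names follow the
thread_fp/vr_state patches earlier in this series and are otherwise an
assumption):

    /* load the guest FP/VMX register state kept in the vcpu struct */
    load_fp_state(&vcpu->arch.fp);
    #ifdef CONFIG_ALTIVEC
    load_vr_state(&vcpu->arch.vr);
    #endif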

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:14:59 +01:00
Bharat Bhushan
b1f0d94c26 kvm/powerpc: move kvm_hypercall0() and friends to epapr_hypercall0()
kvm_hypercall0() and friends have nothing KVM specific, so they are moved
to epapr_hypercall0() and friends, and from
arch/powerpc/include/asm/kvm_para.h to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:14:57 +01:00
Bharat Bhushan
1820a8d216 kvm/powerpc: rename kvm_hypercall() to epapr_hypercall()
kvm_hypercall() has nothing KVM specific, so it is renamed to epapr_hypercall()
and moved to arch/powerpc/include/asm/epapr_hcalls.h.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:14:56 +01:00
Gleb Natapov
458ff3c099 KVM: PPC: fix a couple of memory leaks in MPIC/XICS devices
XICS failed to free the xics structure on the error path. The MPIC destroy
handler forgot to delete the kvm_device structure.
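
A generic sketch of the XICS error-path pattern being fixed
(register_backend() is a hypothetical stand-in for the step that can fail):

    xics = kzalloc(sizeof(*xics), GFP_KERNEL);
    if (!xics)
            return -ENOMEM;

    ret = register_backend(xics);       /* hypothetical failing step */
    if (ret) {
            kfree(xics);                /* previously leaked on this path */
            return ret;
    }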

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:14:54 +01:00
Alexander Graf
398a76c677 KVM: PPC: Add devname:kvm aliases for modules
Systems that support automatic loading of kernel modules through
device aliases should try and automatically load kvm when /dev/kvm
gets opened.

Add code to support that magic for all PPC kvm targets, even the
ones that don't support modules yet.
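
For reference, a minimal sketch of the declarations this relies on (the
surrounding module code is omitted):

    #include <linux/miscdevice.h>
    #include <linux/module.h>

    /* lets udev/kmod create /dev/kvm and load the module on first open() */
    MODULE_ALIAS_MISCDEV(KVM_MINOR);
    MODULE_ALIAS("devname:kvm");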

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-01-09 10:14:00 +01:00
Wang Dongsheng
1d47ddf7c3 powerpc/85xx: add hardware automatically enter pw20 state
Use hardware features to make the core automatically enter the PW20
state. A TB count is set in hardware; the effective count begins when
PW10 is entered. When the effective period has expired, the core will
proceed from PW10 to PW20 if no exit conditions have occurred during
the period.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:40:28 -06:00
Wang Dongsheng
202e059ce3 powerpc/85xx: add hardware automatically enter altivec idle state
Each core's AltiVec unit may be placed into a power savings mode
by turning off power to the unit. Core hardware will automatically
power down the AltiVec unit after no AltiVec instructions have
executed in N cycles. The AltiVec power-control is triggered by hardware.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:39:48 -06:00
Wang Dongsheng
71a6fa17e1 powerpc/fsl: add E6500 PVR and SPRN_PWRMGTCR0 define
E6500 PVR and SPRN_PWRMGTCR0 will be used in subsequent pw20/altivec
idle patches.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:29:23 -06:00
Christian Engelmayer
1e83bf875e powerpc/sysdev: Fix a pci section mismatch for Book E
Moved the following functions out of the __init section:

   arch/powerpc/sysdev/fsl_pci.c      : fsl_add_bridge()
   arch/powerpc/sysdev/indirect_pci.c : setup_indirect_pci()

Those are referenced by arch/powerpc/sysdev/fsl_pci.c : fsl_pci_probe() when
compiling for Book E support.

Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:23:31 -06:00
Zhao Qiang
8b52312880 powerpc/p1010rdb-pa: modify phy interrupt.
The phy interrupt is not correct according to the p1010rdb-pa user guide,
so fix it.

Signed-off-by: Zhao Qiang <B45475@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:15:12 -06:00
LEROY Christophe
189046981b powerpc 8xx: defconfig: slice by 4 is more efficient than the default slice by 8 on Powerpc 8xx.
On PPC_8xx, CRC32_SLICEBY4 is more efficient (almost twice as fast) than
CRC32_SLICEBY8, as shown below:

With CRC32_SLICEBY8:
[    1.109204] crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
[    1.114401] crc32: self tests passed, processed 225944 bytes in 15118910 nsec
[    1.130655] crc32c: CRC_LE_BITS = 64
[    1.134235] crc32c: self tests passed, processed 225944 bytes in 4479879 nsec

With CRC32_SLICEBY4:
[    1.097129] crc32: CRC_LE_BITS = 32, CRC_BE BITS = 32
[    1.101878] crc32: self tests passed, processed 225944 bytes in 8616242 nsec
[    1.116298] crc32c: CRC_LE_BITS = 32
[    1.119607] crc32c: self tests passed, processed 225944 bytes in 3289576 nsec

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:11:20 -06:00
Xie Xiaobo
8a6be2bdb6 powerpc/85xx: Add TWR-P1025 board support
TWR-P1025 Overview
 -----------------
 512MB DDR3 (on-board DDR)
 64MB NOR flash
 eTSEC1: Connected to RGMII PHY AR8035
 eTSEC3: Connected to RGMII PHY AR8035
 Two USB2.0 Type A
 One microSD Card slot
 One mini-PCIe slot
 One mini-USB TypeB dual UART

Signed-off-by: Michael Johnston <michael.johnston@freescale.com>
Signed-off-by: Xie Xiaobo <X.Xie@freescale.com>
[scottwood@freescale.com: use pr_info rather than KERN_INFO]
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:09:29 -06:00
Xie Xiaobo
72c916ae97 powerpc/85xx: Add QE common init function
Define a QE init function in a common file to avoid duplicating the
same code in board files.

Signed-off-by: Xie Xiaobo <X.Xie@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:08:53 -06:00
Lijun Pan
3d73eb69fb powerpc/85xx: Merge 85xx/p1023_defconfig into mpc85xx_smp and mpc85xx
mpc85xx_smp_defconfig and mpc85xx_defconfig already have CONFIG_P1023RDS=y.
Merge CONFIG_P1023RDB=y and other relevant configurations into
mpc85xx_smp_defconfig and mpc85xx_defconfig.

Signed-off-by: Lijun Pan <Lijun.Pan@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:06:04 -06:00
Scott Wood
b58a7bd6df powerpc/fsl-booke: Use SPRN_SPRGn rather than mfsprg/mtsprg
This fixes a build break that was probably introduced with the removal
of -Wa,-me500 (commit f49596a4cf), where
the assembler refuses to recognize SPRG4-7 with a generic PPC target.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Dongsheng Wang <dongsheng.wang@freescale.com>
Cc: Anton Vorontsov <avorontsov@mvista.com>
Reviewed-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Tested-by: Wang Dongsheng <dongsheng.wang@freescale.com>
2014-01-07 19:06:03 -06:00
Kevin Hao
455d23a890 powerpc/85xx: don't init the mpic ipi for the SoC which has doorbell support
It makes no sense to initialize the mpic ipis for an SoC that has
doorbell support, so set smp_85xx_ops.probe to NULL in this case.
Since smp_85xx_ops.probe is also used in smp_85xx_setup_cpu() to check
whether we need to invoke mpic_setup_this_cpu(), introduce a new
setup_cpu function, smp_85xx_basic_setup(), to remove this dependency.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:06:03 -06:00
Zhao Qiang
0ff649ca50 powerpc/p1010rdb: update mtd of nand to adapt to both old and new p1010rdb
P1010rdb-pa and p1010rdb-pb have different nand mtd layouts,
so update the dts to adapt to both p1010rdb-pa and p1010rdb-pb.

Move the nand-mtd from p1010rdb.dtsi to p1010rdb-pa*.dts.
Remove nand-mtd for p1010rdb-pb, which will use mtdparts
from u-boot instead of nand-mtd in the device tree.

Signed-off-by: Zhao Qiang <B45475@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:06:02 -06:00
Zhao Qiang
9667a36486 powerpc/p1010rdb: update dts to adapt to both old and new p1010rdb
P1010rdb-pa and p1010rdb-pb have different phy interrupts.
So update dts to adapt to both p1010rdb-pa and p1010rdb-pb.

Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
Signed-off-by: Zhao Qiang <B45475@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 19:06:02 -06:00
Joseph Myers
01c9ccee3c powerpc: fix e500 SPE float SIGFPE generation
The e500 SPE floating-point emulation code is called from
SPEFloatingPointException and SPEFloatingPointRoundException in
arch/powerpc/kernel/traps.c.  Those functions have support for
generating SIGFPE, but do_spe_mathemu and speround_handler don't
generate a return value to indicate that this should be done.  Such a
return value should depend on whether an exception is raised that has
been set via prctl to generate SIGFPE.  This patch adds the relevant
logic in these functions so that SIGFPE is generated as expected by
the glibc testsuite.
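
A small userspace check of the expected behaviour, for illustration (on
powerpc, glibc's feenableexcept() is expected to use the prctl mentioned
above):

    #define _GNU_SOURCE
    #include <fenv.h>
    #include <stdio.h>

    /* build with: gcc spe_fpe.c -o spe_fpe -lm */
    int main(void)
    {
            volatile float x = 1.0f, y = 0.0f;

            feenableexcept(FE_DIVBYZERO);   /* request a trap on divide-by-zero */
            printf("%f\n", x / y);          /* with the fix, SPE hardware delivers
                                             * SIGFPE here instead of returning inf */
            return 0;
    }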

Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-01-07 18:43:42 -06:00