Merge remote-tracking branch 'lsk-44/linux-linaro-lsk-v4.4' into 44rc2

* lsk-44/linux-linaro-lsk-v4.4:
  Linux 4.4.3
  modules: fix modparam async_probe request
  module: wrapper for symbol name.
  itimers: Handle relative timers with CONFIG_TIME_LOW_RES proper
  posix-timers: Handle relative timers with CONFIG_TIME_LOW_RES proper
  timerfd: Handle relative timers with CONFIG_TIME_LOW_RES proper
  prctl: take mmap sem for writing to protect against others
  xfs: log mount failures don't wait for buffers to be released
  Revert "xfs: clear PF_NOFREEZE for xfsaild kthread"
  xfs: inode recovery readahead can race with inode buffer creation
  libxfs: pack the agfl header structure so XFS_AGFL_SIZE is correct
  ovl: setattr: check permissions before copy-up
  ovl: root: copy attr
  ovl: check dentry positiveness in ovl_cleanup_whiteouts()
  ovl: use a minimal buffer in ovl_copy_xattr
  ovl: allow zero size xattr
  futex: Drop refcount if requeue_pi() acquired the rtmutex
  devm_memremap_release(): fix memremap'd addr handling
  ipc/shm: handle removed segments gracefully in shm_mmap()
  intel_scu_ipcutil: underflow in scu_reg_access()
  mm,thp: khugepaged: call pte flush at the time of collapse
  dump_stack: avoid potential deadlocks
  radix-tree: fix oops after radix_tree_iter_retry
  drivers/hwspinlock: fix race between radix tree insertion and lookup
  radix-tree: fix race in gang lookup
  MAINTAINERS: return arch/sh to maintained state, with new maintainers
  memcg: only free spare array when readers are done
  numa: fix /proc/<pid>/numa_maps for hugetlbfs on s390
  fs/hugetlbfs/inode.c: fix bugs in hugetlb_vmtruncate_list()
  scripts/bloat-o-meter: fix python3 syntax error
  dma-debug: switch check from _text to _stext
  m32r: fix m32104ut_defconfig build fail
  xhci: Fix list corruption in urb dequeue at host removal
  Revert "xhci: don't finish a TD if we get a short-transfer event mid TD"
  iommu/vt-d: Clear PPR bit to ensure we get more page request interrupts
  iommu/vt-d: Fix 64-bit accesses to 32-bit DMAR_GSTS_REG
  iommu/vt-d: Fix mm refcounting to hold mm_count not mm_users
  iommu/amd: Correct the wrong setting of alias DTE in do_attach
  iommu/vt-d: Don't skip PCI devices when disabling IOTLB
  Input: vmmouse - fix absolute device registration
  string_helpers: fix precision loss for some inputs
  Input: i8042 - add Fujitsu Lifebook U745 to the nomux list
  Input: elantech - mark protocols v2 and v3 as semi-mt
  mm: fix regression in remap_file_pages() emulation
  mm: replace vma_lock_anon_vma with anon_vma_lock_read/write
  mm: fix mlock accouting
  libnvdimm: fix namespace object confusion in is_uuid_busy()
  mm: soft-offline: check return value in second __get_any_page() call
  perf kvm record/report: 'unprocessable sample' error while recording/reporting guest data
  KVM: PPC: Fix ONE_REG AltiVec support
  KVM: PPC: Fix emulation of H_SET_DABR/X on POWER8
  KVM: arm/arm64: Fix reference to uninitialised VGIC
  arm64: dma-mapping: fix handling of devices registered before arch_initcall
  ARM: OMAP2+: Fix ppa_zero_params and ppa_por_params for rodata
  ARM: OMAP2+: Fix save_secure_ram_context for rodata
  ARM: OMAP2+: Fix l2dis_3630 for rodata
  ARM: OMAP2+: Fix l2_inv_api_params for rodata
  ARM: OMAP2+: Fix wait_dll_lock_timed for rodata
  ARM: dts: at91: sama5d4ek: add phy address and IRQ for macb0
  ARM: dts: at91: sama5d4 xplained: fix phy0 IRQ type
  ARM: dts: at91: sama5d4: fix instance id of DBGU
  ARM: dts: at91: sama5d4 xplained: properly mux phy interrupt
  ARM: dts: omap5-board-common: enable rtc and charging of backup battery
  ARM: dts: Fix omap5 PMIC control lines for RTC writes
  ARM: dts: Fix wl12xx missing clocks that cause hangs
  ARM: nomadik: fix up SD/MMC DT settings
  ARM: 8517/1: ICST: avoid arithmetic overflow in icst_hz()
  ARM: 8519/1: ICST: try other dividends than 1
  arm64: mm: avoid calling apply_to_page_range on empty range
  ARM: mvebu: remove duplicated regulator definition in Armada 388 GP
  powerpc/ioda: Set "read" permission when "write" is set
  powerpc/powernv: Fix stale PE primary bus
  powerpc/eeh: Fix stale cached primary bus
  powerpc/eeh: Fix PE location code
  SUNRPC: Fixup socket wait for memory
  udf: Check output buffer length when converting name to CS0
  udf: Prevent buffer overrun with multi-byte characters
  udf: limit the maximum number of indirect extents in a row
  pNFS/flexfiles: Fix an XDR encoding bug in layoutreturn
  nfs: Fix race in __update_open_stateid()
  pNFS/flexfiles: Fix an Oopsable typo in ff_mirror_match_fh()
  NFS: Fix attribute cache revalidation
  cifs: fix erroneous return value
  cifs_dbg() outputs an uninitialized buffer in cifs_readdir()
  cifs: fix race between call_async() and reconnect()
  cifs: Ratelimit kernel log messages
  iio: inkern: fix a NULL dereference on error
  iio: pressure: mpl115: fix temperature offset sign
  iio: light: acpi-als: Report data as processed
  iio: dac: mcp4725: set iio name property in sysfs
  iio: add IIO_TRIGGER dependency to STK8BA50
  iio: add HAS_IOMEM dependency to VF610_ADC
  iio-light: Use a signed return type for ltr501_match_samp_freq()
  iio:adc:ti_am335x_adc Fix buffered mode by identifying as software buffer.
  iio: adis_buffer: Fix out-of-bounds memory access
  scsi: fix soft lockup in scsi_remove_target() on module removal
  SCSI: Add Marvell Console to VPD blacklist
  scsi_dh_rdac: always retry MODE SELECT on command lock violation
  drivers/scsi/sg.c: mark VMA as VM_IO to prevent migration
  SCSI: fix crashes in sd and sr runtime PM
  iscsi-target: Fix potential dead-lock during node acl delete
  scsi: add Synology to 1024 sector blacklist
  klist: fix starting point removed bug in klist iterators
  tracepoints: Do not trace when cpu is offline
  tracing: Fix freak link error caused by branch tracer
  perf tools: tracepoint_error() can receive e=NULL, robustify it
  tools lib traceevent: Fix output of %llu for 64 bit values read on 32 bit machines
  ptrace: use fsuid, fsgid, effective creds for fs access checks
  Btrfs: fix direct IO requests not reporting IO error to user space
  Btrfs: fix hang on extent buffer lock caused by the inode_paths ioctl
  Btrfs: fix page reading in extent_same ioctl leading to csum errors
  Btrfs: fix invalid page accesses in extent_same (dedup) ioctl
  btrfs: properly set the termination value of ctx->pos in readdir
  Revert "btrfs: clear PF_NOFREEZE in cleaner_kthread()"
  Btrfs: fix fitrim discarding device area reserved for boot loader's use
  btrfs: handle invalid num_stripes in sys_array
  ext4: don't read blocks from disk after extents being swapped
  ext4: fix potential integer overflow
  ext4: fix scheduling in atomic on group checksum failure
  serial: omap: Prevent DoS using unprivileged ioctl(TIOCSRS485)
  serial: 8250_pci: Add Intel Broadwell ports
  tty: Add support for PCIe WCH382 2S multi-IO card
  pty: make sure super_block is still valid in final /dev/tty close
  pty: fix possible use after free of tty->driver_data
  staging/speakup: Use tty_ldisc_ref() for paste kworker
  phy: twl4030-usb: Fix unbalanced pm_runtime_enable on module reload
  phy: twl4030-usb: Relase usb phy on unload
  ALSA: seq: Fix double port list deletion
  ALSA: seq: Fix leak of pool buffer at concurrent writes
  ALSA: pcm: Fix rwsem deadlock for non-atomic PCM stream
  ALSA: hda - Cancel probe work instead of flush at remove
  x86/mm: Fix vmalloc_fault() to handle large pages properly
  x86/uaccess/64: Handle the caching of 4-byte nocache copies properly in __copy_user_nocache()
  x86/uaccess/64: Make the __copy_user_nocache() assembly code more readable
  x86/mm/pat: Avoid truncation when converting cpa->numpages to address
  x86/mm: Fix types used in pgprot cacheability flags translations
  Linux 4.4.2
  HID: multitouch: fix input mode switching on some Elan panels
  mm, vmstat: fix wrong WQ sleep when memory reclaim doesn't make any progress
  zsmalloc: fix migrate_zspage-zs_free race condition
  zram: don't call idr_remove() from zram_remove()
  zram: try vmalloc() after kmalloc()
  zram/zcomp: use GFP_NOIO to allocate streams
  rtlwifi: rtl8821ae: Fix 5G failure when EEPROM is incorrectly encoded
  rtlwifi: rtl8821ae: Fix errors in parameter initialization
  crypto: marvell/cesa - fix test in mv_cesa_dev_dma_init()
  crypto: atmel-sha - remove calls of clk_prepare() from atomic contexts
  crypto: atmel-sha - fix atmel_sha_remove()
  crypto: algif_skcipher - Do not set MAY_BACKLOG on the async path
  crypto: algif_skcipher - Do not dereference ctx without socket lock
  crypto: algif_skcipher - Do not assume that req is unchanged
  crypto: user - lock crypto_alg_list on alg dump
  EVM: Use crypto_memneq() for digest comparisons
  crypto: algif_hash - wait for crypto_ahash_init() to complete
  crypto: shash - Fix has_key setting
  crypto: chacha20-ssse3 - Align stack pointer to 64 bytes
  crypto: caam - make write transactions bufferable on PPC platforms
  crypto: algif_skcipher - sendmsg SG marking is off by one
  crypto: algif_skcipher - Load TX SG list after waiting
  crypto: crc32c - Fix crc32c soft dependency
  crypto: algif_skcipher - Fix race condition in skcipher_check_key
  crypto: algif_hash - Fix race condition in hash_check_key
  crypto: af_alg - Forbid bind(2) when nokey child sockets are present
  crypto: algif_skcipher - Remove custom release parent function
  crypto: algif_hash - Remove custom release parent function
  crypto: af_alg - Allow af_af_alg_release_parent to be called on nokey path
  ahci: Intel DNV device IDs SATA
  libata: disable forced PORTS_IMPL for >= AHCI 1.3
  crypto: algif_skcipher - Add key check exception for cipher_null
  crypto: skcipher - Add crypto_skcipher_has_setkey
  crypto: algif_hash - Require setkey before accept(2)
  crypto: hash - Add crypto_ahash_has_setkey
  crypto: algif_skcipher - Add nokey compatibility path
  crypto: af_alg - Add nokey compatibility path
  crypto: af_alg - Fix socket double-free when accept fails
  crypto: af_alg - Disallow bind/setkey/... after accept(2)
  crypto: algif_skcipher - Require setkey before accept(2)
  sched: Fix crash in sched_init_numa()
  ext4 crypto: add missing locking for keyring_key access
  iommu/io-pgtable-arm: Ensure we free the final level on teardown
  tty: Fix unsafe ldisc reference via ioctl(TIOCGETD)
  tty: Retry failed reopen if tty teardown in-progress
  tty: Wait interruptibly for tty lock on reopen
  n_tty: Fix unsafe reference to "other" ldisc
  usb: xhci: apply XHCI_PME_STUCK_QUIRK to Intel Broxton-M platforms
  usb: xhci: handle both SSIC ports in PME stuck quirk
  usb: phy: msm: fix error handling in probe.
  usb: cdc-acm: send zero packet for intel 7260 modem
  usb: cdc-acm: handle unlinked urb in acm read callback
  USB: option: fix Cinterion AHxx enumeration
  USB: serial: option: Adding support for Telit LE922
  USB: cp210x: add ID for IAI USB to RS485 adaptor
  USB: serial: ftdi_sio: add support for Yaesu SCU-18 cable
  usb: hub: do not clear BOS field during reset device
  USB: visor: fix null-deref at probe
  USB: serial: visor: fix crash on detecting device without write_urbs
  ASoC: rt5645: fix the shift bit of IN1 boost
  saa7134-alsa: Only frees registered sound cards
  ALSA: dummy: Implement timer backend switching more safely
  ALSA: hda - Fix bad dereference of jack object
  ALSA: hda - Fix speaker output from VAIO AiO machines
  Revert "ALSA: hda - Fix noise on Gigabyte Z170X mobo"
  ALSA: hda - Fix static checker warning in patch_hdmi.c
  ALSA: hda - Add fixup for Mac Mini 7,1 model
  ALSA: timer: Fix race between stop and interrupt
  ALSA: timer: Fix wrong instance passed to slave callbacks
  ALSA: timer: Fix race at concurrent reads
  ALSA: timer: Fix link corruption due to double start or stop
  ALSA: timer: Fix leftover link at closing
  ALSA: timer: Code cleanup
  ALSA: seq: Fix lockdep warnings due to double mutex locks
  ALSA: seq: Fix race at closing in virmidi driver
  ALSA: seq: Fix yet another races among ALSA timer accesses
  ASoC: dpcm: fix the BE state on hw_free
  ALSA: pcm: Fix potential deadlock in OSS emulation
  ALSA: hda/realtek - Support Dell headset mode for ALC225
  ALSA: hda/realtek - Support headset mode for ALC225
  ALSA: hda/realtek - New codec support of ALC225
  ALSA: rawmidi: Fix race at copying & updating the position
  ALSA: rawmidi: Remove kernel WARNING for NULL user-space buffer check
  ALSA: rawmidi: Make snd_rawmidi_transmit() race-free
  ALSA: seq: Degrade the error message for too many opens
  ALSA: seq: Fix incorrect sanity check at snd_seq_oss_synth_cleanup()
  ALSA: dummy: Disable switching timer backend via sysfs
  ALSA: compress: Disable GET_CODEC_CAPS ioctl for some architectures
  ALSA: hda - disable dynamic clock gating on Broxton before reset
  ALSA: Add missing dependency on CONFIG_SND_TIMER
  ALSA: bebob: Use a signed return type for get_formation_index
  ALSA: usb-audio: avoid freeing umidi object twice
  ALSA: usb-audio: Add native DSD support for PS Audio NuWave DAC
  ALSA: usb-audio: Fix OPPO HA-1 vendor ID
  ALSA: usb-audio: Add quirk for Microsoft LifeCam HD-6000
  ALSA: usb-audio: Fix TEAC UD-501/UD-503/NT-503 usb delay
  hrtimer: Handle remaining time proper for TIME_LOW_RES
  md/raid: only permit hot-add of compatible integrity profiles
  media: i2c: Don't export ir-kbd-i2c module alias
  parisc: Fix __ARCH_SI_PREAMBLE_SIZE
  parisc: Protect huge page pte changes with spinlocks
  printk: do cond_resched() between lines while outputting to consoles
  tracing/stacktrace: Show entire trace if passed in function not found
  tracing: Fix stacktrace skip depth in trace_buffer_unlock_commit_regs()
  PCI: Fix minimum allocation address overwrite
  PCI: host: Mark PCIe/PCI (MSI) IRQ cascade handlers as IRQF_NO_THREAD
  mtd: nand: assign reasonable default name for NAND drivers
  wlcore/wl12xx: spi: fix NULL pointer dereference (Oops)
  wlcore/wl12xx: spi: fix oops on firmware load
  ocfs2/dlm: clear refmap bit of recovery lock while doing local recovery cleanup
  ocfs2/dlm: ignore cleaning the migration mle that is inuse
  ALSA: hda - Implement loopback control switch for Realtek and other codecs
  block: fix bio splitting on max sectors
  base/platform: Fix platform drivers with no probe callback
  HID: usbhid: fix recursive deadlock
  ocfs2: NFS hangs in __ocfs2_cluster_lock due to race with ocfs2_unblock_lock
  block: split bios to max possible length
  NFSv4.1/pnfs: Fixup an lo->plh_block_lgets imbalance in layoutreturn
  crypto: sun4i-ss - add missing statesize
  Linux 4.4.1
  arm64: kernel: fix architected PMU registers unconditional access
  arm64: kernel: enforce pmuserenr_el0 initialization and restore
  arm64: mm: ensure that the zero page is visible to the page table walker
  arm64: Clear out any singlestep state on a ptrace detach operation
  powerpc/module: Handle R_PPC64_ENTRY relocations
  scripts/recordmcount.pl: support data in text section on powerpc
  powerpc: Make {cmp}xchg* and their atomic_ versions fully ordered
  powerpc: Make value-returning atomics fully ordered
  powerpc/tm: Check for already reclaimed tasks
  batman-adv: Drop immediate orig_node free function
  batman-adv: Drop immediate batadv_hard_iface free function
  batman-adv: Drop immediate neigh_ifinfo free function
  batman-adv: Drop immediate batadv_neigh_node free function
  batman-adv: Drop immediate batadv_orig_ifinfo free function
  batman-adv: Avoid recursive call_rcu for batadv_nc_node
  batman-adv: Avoid recursive call_rcu for batadv_bla_claim
  team: Replace rcu_read_lock with a mutex in team_vlan_rx_kill_vid
  net/mlx5_core: Fix trimming down IRQ number
  bridge: fix lockdep addr_list_lock false positive splat
  ipv6: update skb->csum when CE mark is propagated
  net: bpf: reject invalid shifts
  phonet: properly unshare skbs in phonet_rcv()
  dwc_eth_qos: Fix dma address for multi-fragment skbs
  bonding: Prevent IPv6 link local address on enslaved devices
  net: preserve IP control block during GSO segmentation
  udp: disallow UFO for sockets with SO_NO_CHECK option
  net: pktgen: fix null ptr deref in skb allocation
  sched,cls_flower: set key address type when present
  tcp_yeah: don't set ssthresh below 2
  ipv6: tcp: add rcu locking in tcp_v6_send_synack()
  net: sctp: prevent writes to cookie_hmac_alg from accessing invalid memory
  vxlan: fix test which detect duplicate vxlan iface
  unix: properly account for FDs passed over unix sockets
  xhci: refuse loading if nousb is used
  usb: core: lpm: fix usb3_hardware_lpm sysfs node
  USB: cp210x: add ID for ELV Marble Sound Board 1
  rtlwifi: fix memory leak for USB device
  ASoC: compress: Fix compress device direction check
  ASoC: wm5110: Fix PGA clear when disabling DRE
  ALSA: timer: Handle disconnection more safely
  ALSA: hda - Flush the pending probe work at remove
  ALSA: hda - Fix missing module loading with model=generic option
  ALSA: hda - Fix bass pin fixup for ASUS N550JX
  ALSA: control: Avoid kernel warnings from tlv ioctl with numid 0
  ALSA: hrtimer: Fix stall by hrtimer_cancel()
  ALSA: pcm: Fix snd_pcm_hw_params struct copy in compat mode
  ALSA: seq: Fix snd_seq_call_port_info_ioctl in compat mode
  ALSA: hda - Add fixup for Dell Latitidue E6540
  ALSA: timer: Fix double unlink of active_list
  ALSA: timer: Fix race among timer ioctls
  ALSA: hda - fix the headset mic detection problem for a Dell laptop
  ALSA: timer: Harden slave timer list handling
  ALSA: usb-audio: Fix mixer ctl regression of Native Instrument devices
  ALSA: hda - Fix white noise on Dell Latitude E5550
  ALSA: seq: Fix race at timer setup and close
  ALSA: usb-audio: Avoid calling usb_autopm_put_interface() at disconnect
  ALSA: seq: Fix missing NULL check at remove_events ioctl
  ALSA: hda - Fixup inverted internal mic for Lenovo E50-80
  ALSA: usb: Add native DSD support for Oppo HA-1
  x86/mm: Improve switch_mm() barrier comments
  x86/mm: Add barriers and document switch_mm()-vs-flush synchronization
  x86/boot: Double BOOT_HEAP_SIZE to 64KB
  x86/reboot/quirks: Add iMac10,1 to pci_reboot_dmi_table[]
  kvm: x86: Fix vmwrite to SECONDARY_VM_EXEC_CONTROL
  KVM: x86: correctly print #AC in traces
  KVM: x86: expose MSR_TSC_AUX to userspace
  x86/xen: don't reset vcpu_info on a cancelled suspend
  KEYS: Fix keyring ref leak in join_session_keyring()

Conflicts:
	arch/arm64/kernel/perf_event.c
	drivers/scsi/sd.c
	sound/core/compress_offload.c

Change-Id: I9f77fe42aaae249c24cd6e170202110ab1426878
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
commit f2b1fed1bd
316 changed files with 3633 additions and 1430 deletions
@@ -134,19 +134,21 @@ Description:
enabled for the device. Developer can write y/Y/1 or n/N/0 to
the file to enable/disable the feature.

What: /sys/bus/usb/devices/.../power/usb3_hardware_lpm
Date: June 2015
What: /sys/bus/usb/devices/.../power/usb3_hardware_lpm_u1
/sys/bus/usb/devices/.../power/usb3_hardware_lpm_u2
Date: November 2015
Contact: Kevin Strasser <kevin.strasser@linux.intel.com>
Lu Baolu <baolu.lu@linux.intel.com>
Description:
If CONFIG_PM is set and a USB 3.0 lpm-capable device is plugged
in to a xHCI host which supports link PM, it will check if U1
and U2 exit latencies have been set in the BOS descriptor; if
the check is is passed and the host supports USB3 hardware LPM,
the check is passed and the host supports USB3 hardware LPM,
USB3 hardware LPM will be enabled for the device and the USB
device directory will contain a file named
power/usb3_hardware_lpm. The file holds a string value (enable
or disable) indicating whether or not USB3 hardware LPM is
enabled for the device.
device directory will contain two files named
power/usb3_hardware_lpm_u1 and power/usb3_hardware_lpm_u2. These
files hold a string value (enable or disable) indicating whether
or not USB3 hardware LPM U1 or U2 is enabled for the device.

What: /sys/bus/usb/devices/.../removable
Date: February 2012

@@ -537,17 +537,18 @@ relevant attribute files are usb2_hardware_lpm and usb3_hardware_lpm.
can write y/Y/1 or n/N/0 to the file to enable/disable
USB2 hardware LPM manually. This is for test purpose mainly.

power/usb3_hardware_lpm
power/usb3_hardware_lpm_u1
power/usb3_hardware_lpm_u2

When a USB 3.0 lpm-capable device is plugged in to a
xHCI host which supports link PM, it will check if U1
and U2 exit latencies have been set in the BOS
descriptor; if the check is is passed and the host
supports USB3 hardware LPM, USB3 hardware LPM will be
enabled for the device and this file will be created.
The file holds a string value (enable or disable)
indicating whether or not USB3 hardware LPM is
enabled for the device.
enabled for the device and these files will be created.
The files hold a string value (enable or disable)
indicating whether or not USB3 hardware LPM U1 or U2
is enabled for the device.

USB Port Power Control
----------------------

@@ -10289,9 +10289,11 @@ S: Maintained
F: drivers/net/ethernet/dlink/sundance.c

SUPERH
M: Yoshinori Sato <ysato@users.sourceforge.jp>
M: Rich Felker <dalias@libc.org>
L: linux-sh@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-sh/list/
S: Orphan
S: Maintained
F: Documentation/sh/
F: arch/sh/
F: drivers/sh/

Makefile

@@ -1,6 +1,6 @@
VERSION = 4
PATCHLEVEL = 4
SUBLEVEL = 0
SUBLEVEL = 3
EXTRAVERSION =
NAME = Blurry Fish Butt

@@ -303,16 +303,6 @@
gpio = <&expander0 4 GPIO_ACTIVE_HIGH>;
};

reg_usb2_1_vbus: v5-vbus1 {
compatible = "regulator-fixed";
regulator-name = "v5.0-vbus1";
regulator-min-microvolt = <5000000>;
regulator-max-microvolt = <5000000>;
enable-active-high;
regulator-always-on;
gpio = <&expander0 4 GPIO_ACTIVE_HIGH>;
};

reg_sata0: pwr-sata0 {
compatible = "regulator-fixed";
regulator-name = "pwr_en_sata0";

@ -86,10 +86,12 @@
|
|||
macb0: ethernet@f8020000 {
|
||||
phy-mode = "rmii";
|
||||
status = "okay";
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&pinctrl_macb0_rmii &pinctrl_macb0_phy_irq>;
|
||||
|
||||
phy0: ethernet-phy@1 {
|
||||
interrupt-parent = <&pioE>;
|
||||
interrupts = <1 IRQ_TYPE_EDGE_FALLING>;
|
||||
interrupts = <1 IRQ_TYPE_LEVEL_LOW>;
|
||||
reg = <1>;
|
||||
};
|
||||
};
|
||||
|
@ -152,6 +154,10 @@
|
|||
atmel,pins =
|
||||
<AT91_PIOE 8 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
|
||||
};
|
||||
pinctrl_macb0_phy_irq: macb0_phy_irq_0 {
|
||||
atmel,pins =
|
||||
<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -160,8 +160,15 @@
|
|||
};
|
||||
|
||||
macb0: ethernet@f8020000 {
|
||||
pinctrl-0 = <&pinctrl_macb0_rmii &pinctrl_macb0_phy_irq>;
|
||||
phy-mode = "rmii";
|
||||
status = "okay";
|
||||
|
||||
ethernet-phy@1 {
|
||||
reg = <0x1>;
|
||||
interrupt-parent = <&pioE>;
|
||||
interrupts = <1 IRQ_TYPE_LEVEL_LOW>;
|
||||
};
|
||||
};
|
||||
|
||||
mmc1: mmc@fc000000 {
|
||||
|
@ -193,6 +200,10 @@
|
|||
|
||||
pinctrl@fc06a000 {
|
||||
board {
|
||||
pinctrl_macb0_phy_irq: macb0_phy_irq {
|
||||
atmel,pins =
|
||||
<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
|
||||
};
|
||||
pinctrl_mmc0_cd: mmc0_cd {
|
||||
atmel,pins =
|
||||
<AT91_PIOE 5 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
|
||||
|
|
|
@ -122,6 +122,7 @@
|
|||
interrupt-parent = <&gpio5>;
|
||||
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>; /* gpio 152 */
|
||||
ref-clock-frequency = <26000000>;
|
||||
tcxo-clock-frequency = <26000000>;
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -130,6 +130,16 @@
|
|||
};
|
||||
};
|
||||
|
||||
&gpio8 {
|
||||
/* TI trees use GPIO instead of msecure, see also muxing */
|
||||
p234 {
|
||||
gpio-hog;
|
||||
gpios = <10 GPIO_ACTIVE_HIGH>;
|
||||
output-high;
|
||||
line-name = "gpio8_234/msecure";
|
||||
};
|
||||
};
|
||||
|
||||
&omap5_pmx_core {
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <
|
||||
|
@ -213,6 +223,13 @@
|
|||
>;
|
||||
};
|
||||
|
||||
/* TI trees use GPIO mode; msecure mode does not work reliably? */
|
||||
palmas_msecure_pins: palmas_msecure_pins {
|
||||
pinctrl-single,pins = <
|
||||
OMAP5_IOPAD(0x180, PIN_OUTPUT | MUX_MODE6) /* gpio8_234 */
|
||||
>;
|
||||
};
|
||||
|
||||
usbhost_pins: pinmux_usbhost_pins {
|
||||
pinctrl-single,pins = <
|
||||
0x84 (PIN_INPUT | MUX_MODE0) /* usbb2_hsic_strobe */
|
||||
|
@ -278,6 +295,12 @@
|
|||
&usbhost_wkup_pins
|
||||
>;
|
||||
|
||||
palmas_sys_nirq_pins: pinmux_palmas_sys_nirq_pins {
|
||||
pinctrl-single,pins = <
|
||||
OMAP5_IOPAD(0x068, PIN_INPUT_PULLUP | MUX_MODE0) /* sys_nirq1 */
|
||||
>;
|
||||
};
|
||||
|
||||
usbhost_wkup_pins: pinmux_usbhost_wkup_pins {
|
||||
pinctrl-single,pins = <
|
||||
0x1A (PIN_OUTPUT | MUX_MODE0) /* fref_clk1_out, USB hub clk */
|
||||
|
@ -345,6 +368,8 @@
|
|||
interrupt-controller;
|
||||
#interrupt-cells = <2>;
|
||||
ti,system-power-controller;
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&palmas_sys_nirq_pins &palmas_msecure_pins>;
|
||||
|
||||
extcon_usb3: palmas_usb {
|
||||
compatible = "ti,palmas-usb-vid";
|
||||
|
@ -358,6 +383,14 @@
|
|||
#clock-cells = <0>;
|
||||
};
|
||||
|
||||
rtc {
|
||||
compatible = "ti,palmas-rtc";
|
||||
interrupt-parent = <&palmas>;
|
||||
interrupts = <8 IRQ_TYPE_NONE>;
|
||||
ti,backup-battery-chargeable;
|
||||
ti,backup-battery-charge-high-current;
|
||||
};
|
||||
|
||||
palmas_pmic {
|
||||
compatible = "ti,palmas-pmic";
|
||||
interrupt-parent = <&palmas>;
|
||||
|
|
|
@ -1342,7 +1342,7 @@
|
|||
dbgu: serial@fc069000 {
|
||||
compatible = "atmel,at91sam9260-dbgu", "atmel,at91sam9260-usart";
|
||||
reg = <0xfc069000 0x200>;
|
||||
interrupts = <2 IRQ_TYPE_LEVEL_HIGH 7>;
|
||||
interrupts = <45 IRQ_TYPE_LEVEL_HIGH 7>;
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&pinctrl_dbgu>;
|
||||
clocks = <&dbgu_clk>;
|
||||
|
|
|
@ -127,22 +127,14 @@
|
|||
};
|
||||
mmcsd_default_mode: mmcsd_default {
|
||||
mmcsd_default_cfg1 {
|
||||
/* MCCLK */
|
||||
pins = "GPIO8_B10";
|
||||
ste,output = <0>;
|
||||
};
|
||||
mmcsd_default_cfg2 {
|
||||
/* MCCMDDIR, MCDAT0DIR, MCDAT31DIR, MCDATDIR2 */
|
||||
pins = "GPIO10_C11", "GPIO15_A12",
|
||||
"GPIO16_C13", "GPIO23_D15";
|
||||
ste,output = <1>;
|
||||
};
|
||||
mmcsd_default_cfg3 {
|
||||
/* MCCMD, MCDAT3-0, MCMSFBCLK */
|
||||
pins = "GPIO9_A10", "GPIO11_B11",
|
||||
"GPIO12_A11", "GPIO13_C12",
|
||||
"GPIO14_B12", "GPIO24_C15";
|
||||
ste,input = <1>;
|
||||
/*
|
||||
* MCCLK, MCCMDDIR, MCDAT0DIR, MCDAT31DIR, MCDATDIR2
|
||||
* MCCMD, MCDAT3-0, MCMSFBCLK
|
||||
*/
|
||||
pins = "GPIO8_B10", "GPIO9_A10", "GPIO10_C11", "GPIO11_B11",
|
||||
"GPIO12_A11", "GPIO13_C12", "GPIO14_B12", "GPIO15_A12",
|
||||
"GPIO16_C13", "GPIO23_D15", "GPIO24_C15";
|
||||
ste,output = <2>;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
@ -802,10 +794,21 @@
|
|||
clock-names = "mclk", "apb_pclk";
|
||||
interrupt-parent = <&vica>;
|
||||
interrupts = <22>;
|
||||
max-frequency = <48000000>;
|
||||
max-frequency = <400000>;
|
||||
bus-width = <4>;
|
||||
cap-mmc-highspeed;
|
||||
cap-sd-highspeed;
|
||||
full-pwr-cycle;
|
||||
/*
|
||||
* The STw4811 circuit used with the Nomadik strictly
|
||||
* requires that all of these signal direction pins be
|
||||
* routed and used for its 4-bit levelshifter.
|
||||
*/
|
||||
st,sig-dir-dat0;
|
||||
st,sig-dir-dat2;
|
||||
st,sig-dir-dat31;
|
||||
st,sig-dir-cmd;
|
||||
st,sig-pin-fbclk;
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&mmcsd_default_mux>, <&mmcsd_default_mode>;
|
||||
vmmc-supply = <&vmmc_regulator>;
|
||||
|
|
|
@ -16,7 +16,7 @@
|
|||
*/
|
||||
#include <linux/module.h>
|
||||
#include <linux/kernel.h>
|
||||
|
||||
#include <asm/div64.h>
|
||||
#include <asm/hardware/icst.h>
|
||||
|
||||
/*
|
||||
|
@ -29,7 +29,11 @@ EXPORT_SYMBOL(icst525_s2div);
|
|||
|
||||
unsigned long icst_hz(const struct icst_params *p, struct icst_vco vco)
|
||||
{
|
||||
return p->ref * 2 * (vco.v + 8) / ((vco.r + 2) * p->s2div[vco.s]);
|
||||
u64 dividend = p->ref * 2 * (u64)(vco.v + 8);
|
||||
u32 divisor = (vco.r + 2) * p->s2div[vco.s];
|
||||
|
||||
do_div(dividend, divisor);
|
||||
return (unsigned long)dividend;
|
||||
}
|
||||
|
||||
EXPORT_SYMBOL(icst_hz);
|
||||
|
@ -58,6 +62,7 @@ icst_hz_to_vco(const struct icst_params *p, unsigned long freq)
|
|||
|
||||
if (f > p->vco_min && f <= p->vco_max)
|
||||
break;
|
||||
i++;
|
||||
} while (i < 8);
|
||||
|
||||
if (i >= 8)
|
||||
|
|
|
@ -86,13 +86,18 @@ ENTRY(enable_omap3630_toggle_l2_on_restore)
|
|||
stmfd sp!, {lr} @ save registers on stack
|
||||
/* Setup so that we will disable and enable l2 */
|
||||
mov r1, #0x1
|
||||
adrl r2, l2dis_3630 @ may be too distant for plain adr
|
||||
str r1, [r2]
|
||||
adrl r3, l2dis_3630_offset @ may be too distant for plain adr
|
||||
ldr r2, [r3] @ value for offset
|
||||
str r1, [r2, r3] @ write to l2dis_3630
|
||||
ldmfd sp!, {pc} @ restore regs and return
|
||||
ENDPROC(enable_omap3630_toggle_l2_on_restore)
|
||||
|
||||
.text
|
||||
/* Function to call rom code to save secure ram context */
|
||||
/*
|
||||
* Function to call rom code to save secure ram context. This gets
|
||||
* relocated to SRAM, so it can be all in .data section. Otherwise
|
||||
* we need to initialize api_params separately.
|
||||
*/
|
||||
.data
|
||||
.align 3
|
||||
ENTRY(save_secure_ram_context)
|
||||
stmfd sp!, {r4 - r11, lr} @ save registers on stack
|
||||
|
@ -126,6 +131,8 @@ ENDPROC(save_secure_ram_context)
|
|||
ENTRY(save_secure_ram_context_sz)
|
||||
.word . - save_secure_ram_context
|
||||
|
||||
.text
|
||||
|
||||
/*
|
||||
* ======================
|
||||
* == Idle entry point ==
|
||||
|
@ -289,12 +296,6 @@ wait_sdrc_ready:
|
|||
bic r5, r5, #0x40
|
||||
str r5, [r4]
|
||||
|
||||
/*
|
||||
* PC-relative stores lead to undefined behaviour in Thumb-2: use a r7 as a
|
||||
* base instead.
|
||||
* Be careful not to clobber r7 when maintaing this code.
|
||||
*/
|
||||
|
||||
is_dll_in_lock_mode:
|
||||
/* Is dll in lock mode? */
|
||||
ldr r4, sdrc_dlla_ctrl
|
||||
|
@ -302,11 +303,7 @@ is_dll_in_lock_mode:
|
|||
tst r5, #0x4
|
||||
bne exit_nonoff_modes @ Return if locked
|
||||
/* wait till dll locks */
|
||||
adr r7, kick_counter
|
||||
wait_dll_lock_timed:
|
||||
ldr r4, wait_dll_lock_counter
|
||||
add r4, r4, #1
|
||||
str r4, [r7, #wait_dll_lock_counter - kick_counter]
|
||||
ldr r4, sdrc_dlla_status
|
||||
/* Wait 20uS for lock */
|
||||
mov r6, #8
|
||||
|
@ -330,9 +327,6 @@ kick_dll:
|
|||
orr r6, r6, #(1<<3) @ enable dll
|
||||
str r6, [r4]
|
||||
dsb
|
||||
ldr r4, kick_counter
|
||||
add r4, r4, #1
|
||||
str r4, [r7] @ kick_counter
|
||||
b wait_dll_lock_timed
|
||||
|
||||
exit_nonoff_modes:
|
||||
|
@ -360,15 +354,6 @@ sdrc_dlla_status:
|
|||
.word SDRC_DLLA_STATUS_V
|
||||
sdrc_dlla_ctrl:
|
||||
.word SDRC_DLLA_CTRL_V
|
||||
/*
|
||||
* When exporting to userspace while the counters are in SRAM,
|
||||
* these 2 words need to be at the end to facilitate retrival!
|
||||
*/
|
||||
kick_counter:
|
||||
.word 0
|
||||
wait_dll_lock_counter:
|
||||
.word 0
|
||||
|
||||
ENTRY(omap3_do_wfi_sz)
|
||||
.word . - omap3_do_wfi
|
||||
|
||||
|
@ -437,7 +422,9 @@ ENTRY(omap3_restore)
|
|||
cmp r2, #0x0 @ Check if target power state was OFF or RET
|
||||
bne logic_l1_restore
|
||||
|
||||
ldr r0, l2dis_3630
|
||||
adr r1, l2dis_3630_offset @ address for offset
|
||||
ldr r0, [r1] @ value for offset
|
||||
ldr r0, [r1, r0] @ value at l2dis_3630
|
||||
cmp r0, #0x1 @ should we disable L2 on 3630?
|
||||
bne skipl2dis
|
||||
mrc p15, 0, r0, c1, c0, 1
|
||||
|
@ -449,12 +436,14 @@ skipl2dis:
|
|||
and r1, #0x700
|
||||
cmp r1, #0x300
|
||||
beq l2_inv_gp
|
||||
adr r0, l2_inv_api_params_offset
|
||||
ldr r3, [r0]
|
||||
add r3, r3, r0 @ r3 points to dummy parameters
|
||||
mov r0, #40 @ set service ID for PPA
|
||||
mov r12, r0 @ copy secure Service ID in r12
|
||||
mov r1, #0 @ set task id for ROM code in r1
|
||||
mov r2, #4 @ set some flags in r2, r6
|
||||
mov r6, #0xff
|
||||
adr r3, l2_inv_api_params @ r3 points to dummy parameters
|
||||
dsb @ data write barrier
|
||||
dmb @ data memory barrier
|
||||
smc #1 @ call SMI monitor (smi #1)
|
||||
|
@ -488,8 +477,8 @@ skipl2dis:
|
|||
b logic_l1_restore
|
||||
|
||||
.align
|
||||
l2_inv_api_params:
|
||||
.word 0x1, 0x00
|
||||
l2_inv_api_params_offset:
|
||||
.long l2_inv_api_params - .
|
||||
l2_inv_gp:
|
||||
/* Execute smi to invalidate L2 cache */
|
||||
mov r12, #0x1 @ set up to invalidate L2
|
||||
|
@ -506,7 +495,9 @@ l2_inv_gp:
|
|||
mov r12, #0x2
|
||||
smc #0 @ Call SMI monitor (smieq)
|
||||
logic_l1_restore:
|
||||
ldr r1, l2dis_3630
|
||||
adr r0, l2dis_3630_offset @ adress for offset
|
||||
ldr r1, [r0] @ value for offset
|
||||
ldr r1, [r0, r1] @ value at l2dis_3630
|
||||
cmp r1, #0x1 @ Test if L2 re-enable needed on 3630
|
||||
bne skipl2reen
|
||||
mrc p15, 0, r1, c1, c0, 1
|
||||
|
@ -535,9 +526,17 @@ control_stat:
|
|||
.word CONTROL_STAT
|
||||
control_mem_rta:
|
||||
.word CONTROL_MEM_RTA_CTRL
|
||||
l2dis_3630_offset:
|
||||
.long l2dis_3630 - .
|
||||
|
||||
.data
|
||||
l2dis_3630:
|
||||
.word 0
|
||||
|
||||
.data
|
||||
l2_inv_api_params:
|
||||
.word 0x1, 0x00
|
||||
|
||||
/*
|
||||
* Internal functions
|
||||
*/
|
||||
|
|
|
@ -29,12 +29,6 @@
|
|||
dsb
|
||||
.endm
|
||||
|
||||
ppa_zero_params:
|
||||
.word 0x0
|
||||
|
||||
ppa_por_params:
|
||||
.word 1, 0
|
||||
|
||||
#ifdef CONFIG_ARCH_OMAP4
|
||||
|
||||
/*
|
||||
|
@ -266,7 +260,9 @@ ENTRY(omap4_cpu_resume)
|
|||
beq skip_ns_smp_enable
|
||||
ppa_actrl_retry:
|
||||
mov r0, #OMAP4_PPA_CPU_ACTRL_SMP_INDEX
|
||||
adr r3, ppa_zero_params @ Pointer to parameters
|
||||
adr r1, ppa_zero_params_offset
|
||||
ldr r3, [r1]
|
||||
add r3, r3, r1 @ Pointer to ppa_zero_params
|
||||
mov r1, #0x0 @ Process ID
|
||||
mov r2, #0x4 @ Flag
|
||||
mov r6, #0xff
|
||||
|
@ -303,7 +299,9 @@ skip_ns_smp_enable:
|
|||
ldr r0, =OMAP4_PPA_L2_POR_INDEX
|
||||
ldr r1, =OMAP44XX_SAR_RAM_BASE
|
||||
ldr r4, [r1, #L2X0_PREFETCH_CTRL_OFFSET]
|
||||
adr r3, ppa_por_params
|
||||
adr r1, ppa_por_params_offset
|
||||
ldr r3, [r1]
|
||||
add r3, r3, r1 @ Pointer to ppa_por_params
|
||||
str r4, [r3, #0x04]
|
||||
mov r1, #0x0 @ Process ID
|
||||
mov r2, #0x4 @ Flag
|
||||
|
@ -328,6 +326,8 @@ skip_l2en:
|
|||
#endif
|
||||
|
||||
b cpu_resume @ Jump to generic resume
|
||||
ppa_por_params_offset:
|
||||
.long ppa_por_params - .
|
||||
ENDPROC(omap4_cpu_resume)
|
||||
#endif /* CONFIG_ARCH_OMAP4 */
|
||||
|
||||
|
@ -380,4 +380,13 @@ ENTRY(omap_do_wfi)
|
|||
nop
|
||||
|
||||
ldmfd sp!, {pc}
|
||||
ppa_zero_params_offset:
|
||||
.long ppa_zero_params - .
|
||||
ENDPROC(omap_do_wfi)
|
||||
|
||||
.data
|
||||
ppa_zero_params:
|
||||
.word 0
|
||||
|
||||
ppa_por_params:
|
||||
.word 1, 0
|
||||
|
|
|
@ -512,9 +512,14 @@ CPU_LE( movk x0, #0x30d0, lsl #16 ) // Clear EE and E0E on LE systems
|
|||
#endif
|
||||
|
||||
/* EL2 debug */
|
||||
mrs x0, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer
|
||||
sbfx x0, x0, #8, #4
|
||||
cmp x0, #1
|
||||
b.lt 4f // Skip if no PMU present
|
||||
mrs x0, pmcr_el0 // Disable debug access traps
|
||||
ubfx x0, x0, #11, #5 // to EL2 and allow access to
|
||||
msr mdcr_el2, x0 // all PMU counters from EL1
|
||||
4:
|
||||
|
||||
/* Stage-2 translation */
|
||||
msr vttbr_el2, xzr
|
||||
|
|
|
@ -58,6 +58,12 @@
|
|||
*/
|
||||
void ptrace_disable(struct task_struct *child)
|
||||
{
|
||||
/*
|
||||
* This would be better off in core code, but PTRACE_DETACH has
|
||||
* grown its fair share of arch-specific worts and changing it
|
||||
* is likely to cause regressions on obscure architectures.
|
||||
*/
|
||||
user_disable_single_step(child);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_HAVE_HW_BREAKPOINT
|
||||
|
|
|
@ -1047,6 +1047,10 @@ static int __init __iommu_dma_init(void)
|
|||
ret = register_iommu_dma_ops_notifier(&platform_bus_type);
|
||||
if (!ret)
|
||||
ret = register_iommu_dma_ops_notifier(&amba_bustype);
|
||||
|
||||
/* handle devices queued before this arch_initcall */
|
||||
if (!ret)
|
||||
__iommu_attach_notifier(NULL, BUS_NOTIFY_ADD_DEVICE, NULL);
|
||||
return ret;
|
||||
}
|
||||
arch_initcall(__iommu_dma_init);
|
||||
|
|
|
@ -596,6 +596,9 @@ void __init paging_init(void)
|
|||
|
||||
empty_zero_page = virt_to_page(zero_page);
|
||||
|
||||
/* Ensure the zero page is visible to the page table walker */
|
||||
dsb(ishst);
|
||||
|
||||
/*
|
||||
* TTBR0 is only used for the identity mapping at this stage. Make it
|
||||
* point to zero page to avoid speculatively fetching new entries.
|
||||
|
|
|
@ -59,6 +59,9 @@ static int change_memory_common(unsigned long addr, int numpages,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!numpages)
|
||||
return 0;
|
||||
|
||||
data.set_mask = set_mask;
|
||||
data.clear_mask = clear_mask;
|
||||
|
||||
|
|
|
@ -62,3 +62,15 @@
|
|||
bfi \valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
|
||||
#endif
|
||||
.endm
|
||||
|
||||
/*
|
||||
* reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
|
||||
*/
|
||||
.macro reset_pmuserenr_el0, tmpreg
|
||||
mrs \tmpreg, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer
|
||||
sbfx \tmpreg, \tmpreg, #8, #4
|
||||
cmp \tmpreg, #1 // Skip if no PMU present
|
||||
b.lt 9000f
|
||||
msr pmuserenr_el0, xzr // Disable PMU access from EL0
|
||||
9000:
|
||||
.endm
|
||||
|
|
|
@ -163,6 +163,7 @@ ENTRY(cpu_do_resume)
|
|||
*/
|
||||
ubfx x11, x11, #1, #1
|
||||
msr oslar_el1, x11
|
||||
reset_pmuserenr_el0 x0 // Disable PMU access from EL0
|
||||
mov x0, x12
|
||||
dsb nsh // Make sure local tlb invalidation completed
|
||||
isb
|
||||
|
@ -201,6 +202,7 @@ ENTRY(__cpu_setup)
|
|||
msr cpacr_el1, x0 // Enable FP/ASIMD
|
||||
mov x0, #1 << 12 // Reset mdscr_el1 and disable
|
||||
msr mdscr_el1, x0 // access to the DCC from EL0
|
||||
reset_pmuserenr_el0 x0 // Disable PMU access from EL0
|
||||
/*
|
||||
* Memory region attributes for LPAE:
|
||||
*
|
||||
|
|
|
@ -81,7 +81,10 @@ static struct resource code_resource = {
|
|||
};
|
||||
|
||||
unsigned long memory_start;
|
||||
EXPORT_SYMBOL(memory_start);
|
||||
|
||||
unsigned long memory_end;
|
||||
EXPORT_SYMBOL(memory_end);
|
||||
|
||||
void __init setup_arch(char **);
|
||||
int get_cpuinfo(char *);
|
||||
|
|
|
@ -54,24 +54,12 @@ static inline pte_t huge_pte_wrprotect(pte_t pte)
|
|||
return pte_wrprotect(pte);
|
||||
}
|
||||
|
||||
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
|
||||
unsigned long addr, pte_t *ptep)
|
||||
{
|
||||
pte_t old_pte = *ptep;
|
||||
set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
|
||||
}
|
||||
void huge_ptep_set_wrprotect(struct mm_struct *mm,
|
||||
unsigned long addr, pte_t *ptep);
|
||||
|
||||
static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
|
||||
int huge_ptep_set_access_flags(struct vm_area_struct *vma,
|
||||
unsigned long addr, pte_t *ptep,
|
||||
pte_t pte, int dirty)
|
||||
{
|
||||
int changed = !pte_same(*ptep, pte);
|
||||
if (changed) {
|
||||
set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
|
||||
flush_tlb_page(vma, addr);
|
||||
}
|
||||
return changed;
|
||||
}
|
||||
pte_t pte, int dirty);
|
||||
|
||||
static inline pte_t huge_ptep_get(pte_t *ptep)
|
||||
{
|
||||
|
|
|
@ -1,6 +1,10 @@
|
|||
#ifndef _PARISC_SIGINFO_H
|
||||
#define _PARISC_SIGINFO_H
|
||||
|
||||
#if defined(__LP64__)
|
||||
#define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int))
|
||||
#endif
|
||||
|
||||
#include <asm-generic/siginfo.h>
|
||||
|
||||
#undef NSIGTRAP
|
||||
|
|
|
@ -105,15 +105,13 @@ static inline void purge_tlb_entries_huge(struct mm_struct *mm, unsigned long ad
|
|||
addr |= _HUGE_PAGE_SIZE_ENCODING_DEFAULT;
|
||||
|
||||
for (i = 0; i < (1 << (HPAGE_SHIFT-REAL_HPAGE_SHIFT)); i++) {
|
||||
mtsp(mm->context, 1);
|
||||
pdtlb(addr);
|
||||
if (unlikely(split_tlb))
|
||||
pitlb(addr);
|
||||
purge_tlb_entries(mm, addr);
|
||||
addr += (1UL << REAL_HPAGE_SHIFT);
|
||||
}
|
||||
}
|
||||
|
||||
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
|
||||
/* __set_huge_pte_at() must be called holding the pa_tlb_lock. */
|
||||
static void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
|
||||
pte_t *ptep, pte_t entry)
|
||||
{
|
||||
unsigned long addr_start;
|
||||
|
@ -123,14 +121,9 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
|
|||
addr_start = addr;
|
||||
|
||||
for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
|
||||
/* Directly write pte entry. We could call set_pte_at(mm, addr, ptep, entry)
|
||||
* instead, but then we get double locking on pa_tlb_lock. */
|
||||
*ptep = entry;
|
||||
set_pte(ptep, entry);
|
||||
ptep++;
|
||||
|
||||
/* Drop the PAGE_SIZE/non-huge tlb entry */
|
||||
purge_tlb_entries(mm, addr);
|
||||
|
||||
addr += PAGE_SIZE;
|
||||
pte_val(entry) += PAGE_SIZE;
|
||||
}
|
||||
|
@ -138,18 +131,61 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
|
|||
purge_tlb_entries_huge(mm, addr_start);
|
||||
}
|
||||
|
||||
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
|
||||
pte_t *ptep, pte_t entry)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
purge_tlb_start(flags);
|
||||
__set_huge_pte_at(mm, addr, ptep, entry);
|
||||
purge_tlb_end(flags);
|
||||
}
|
||||
|
||||
|
||||
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
|
||||
pte_t *ptep)
|
||||
{
|
||||
unsigned long flags;
|
||||
pte_t entry;
|
||||
|
||||
purge_tlb_start(flags);
|
||||
entry = *ptep;
|
||||
set_huge_pte_at(mm, addr, ptep, __pte(0));
|
||||
__set_huge_pte_at(mm, addr, ptep, __pte(0));
|
||||
purge_tlb_end(flags);
|
||||
|
||||
return entry;
|
||||
}
|
||||
|
||||
|
||||
void huge_ptep_set_wrprotect(struct mm_struct *mm,
|
||||
unsigned long addr, pte_t *ptep)
|
||||
{
|
||||
unsigned long flags;
|
||||
pte_t old_pte;
|
||||
|
||||
purge_tlb_start(flags);
|
||||
old_pte = *ptep;
|
||||
__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
|
||||
purge_tlb_end(flags);
|
||||
}
|
||||
|
||||
int huge_ptep_set_access_flags(struct vm_area_struct *vma,
|
||||
unsigned long addr, pte_t *ptep,
|
||||
pte_t pte, int dirty)
|
||||
{
|
||||
unsigned long flags;
|
||||
int changed;
|
||||
|
||||
purge_tlb_start(flags);
|
||||
changed = !pte_same(*ptep, pte);
|
||||
if (changed) {
|
||||
__set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
|
||||
}
|
||||
purge_tlb_end(flags);
|
||||
return changed;
|
||||
}
|
||||
|
||||
|
||||
int pmd_huge(pmd_t pmd)
|
||||
{
|
||||
return 0;
|
||||
|
|
|
@ -18,12 +18,12 @@ __xchg_u32(volatile void *p, unsigned long val)
|
|||
unsigned long prev;
|
||||
|
||||
__asm__ __volatile__(
|
||||
PPC_RELEASE_BARRIER
|
||||
PPC_ATOMIC_ENTRY_BARRIER
|
||||
"1: lwarx %0,0,%2 \n"
|
||||
PPC405_ERR77(0,%2)
|
||||
" stwcx. %3,0,%2 \n\
|
||||
bne- 1b"
|
||||
PPC_ACQUIRE_BARRIER
|
||||
PPC_ATOMIC_EXIT_BARRIER
|
||||
: "=&r" (prev), "+m" (*(volatile unsigned int *)p)
|
||||
: "r" (p), "r" (val)
|
||||
: "cc", "memory");
|
||||
|
@ -61,12 +61,12 @@ __xchg_u64(volatile void *p, unsigned long val)
|
|||
unsigned long prev;
|
||||
|
||||
__asm__ __volatile__(
|
||||
PPC_RELEASE_BARRIER
|
||||
PPC_ATOMIC_ENTRY_BARRIER
|
||||
"1: ldarx %0,0,%2 \n"
|
||||
PPC405_ERR77(0,%2)
|
||||
" stdcx. %3,0,%2 \n\
|
||||
bne- 1b"
|
||||
PPC_ACQUIRE_BARRIER
|
||||
PPC_ATOMIC_EXIT_BARRIER
|
||||
: "=&r" (prev), "+m" (*(volatile unsigned long *)p)
|
||||
: "r" (p), "r" (val)
|
||||
: "cc", "memory");
|
||||
|
@ -151,14 +151,14 @@ __cmpxchg_u32(volatile unsigned int *p, unsigned long old, unsigned long new)
|
|||
unsigned int prev;
|
||||
|
||||
__asm__ __volatile__ (
|
||||
PPC_RELEASE_BARRIER
|
||||
PPC_ATOMIC_ENTRY_BARRIER
|
||||
"1: lwarx %0,0,%2 # __cmpxchg_u32\n\
|
||||
cmpw 0,%0,%3\n\
|
||||
bne- 2f\n"
|
||||
PPC405_ERR77(0,%2)
|
||||
" stwcx. %4,0,%2\n\
|
||||
bne- 1b"
|
||||
PPC_ACQUIRE_BARRIER
|
||||
PPC_ATOMIC_EXIT_BARRIER
|
||||
"\n\
|
||||
2:"
|
||||
: "=&r" (prev), "+m" (*p)
|
||||
|
@ -197,13 +197,13 @@ __cmpxchg_u64(volatile unsigned long *p, unsigned long old, unsigned long new)
|
|||
unsigned long prev;
|
||||
|
||||
__asm__ __volatile__ (
|
||||
PPC_RELEASE_BARRIER
|
||||
PPC_ATOMIC_ENTRY_BARRIER
|
||||
"1: ldarx %0,0,%2 # __cmpxchg_u64\n\
|
||||
cmpd 0,%0,%3\n\
|
||||
bne- 2f\n\
|
||||
stdcx. %4,0,%2\n\
|
||||
bne- 1b"
|
||||
PPC_ACQUIRE_BARRIER
|
||||
PPC_ATOMIC_EXIT_BARRIER
|
||||
"\n\
|
||||
2:"
|
||||
: "=&r" (prev), "+m" (*p)
|
||||
|
|
|
@ -81,6 +81,7 @@ struct pci_dn;
|
|||
#define EEH_PE_KEEP (1 << 8) /* Keep PE on hotplug */
|
||||
#define EEH_PE_CFG_RESTRICTED (1 << 9) /* Block config on error */
|
||||
#define EEH_PE_REMOVED (1 << 10) /* Removed permanently */
|
||||
#define EEH_PE_PRI_BUS (1 << 11) /* Cached primary bus */
|
||||
|
||||
struct eeh_pe {
|
||||
int type; /* PE type: PHB/Bus/Device */
|
||||
|
|
|
@ -44,7 +44,7 @@ static inline void isync(void)
|
|||
MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
|
||||
#define PPC_ACQUIRE_BARRIER "\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
|
||||
#define PPC_RELEASE_BARRIER stringify_in_c(LWSYNC) "\n"
|
||||
#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(LWSYNC) "\n"
|
||||
#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(sync) "\n"
|
||||
#define PPC_ATOMIC_EXIT_BARRIER "\n" stringify_in_c(sync) "\n"
|
||||
#else
|
||||
#define PPC_ACQUIRE_BARRIER
|
||||
|
|
|
@ -295,6 +295,8 @@ do { \
|
|||
#define R_PPC64_TLSLD 108
|
||||
#define R_PPC64_TOCSAVE 109
|
||||
|
||||
#define R_PPC64_ENTRY 118
|
||||
|
||||
#define R_PPC64_REL16 249
|
||||
#define R_PPC64_REL16_LO 250
|
||||
#define R_PPC64_REL16_HI 251
|
||||
|
|
|
@ -564,6 +564,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus)
|
|||
*/
|
||||
eeh_pe_state_mark(pe, EEH_PE_KEEP);
|
||||
if (bus) {
|
||||
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
|
||||
pci_lock_rescan_remove();
|
||||
pcibios_remove_pci_devices(bus);
|
||||
pci_unlock_rescan_remove();
|
||||
|
@ -803,6 +804,7 @@ perm_error:
|
|||
* the their PCI config any more.
|
||||
*/
|
||||
if (frozen_bus) {
|
||||
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
|
||||
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
|
||||
|
||||
pci_lock_rescan_remove();
|
||||
|
@ -886,6 +888,7 @@ static void eeh_handle_special_event(void)
|
|||
continue;
|
||||
|
||||
/* Notify all devices to be down */
|
||||
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
|
||||
bus = eeh_pe_bus_get(phb_pe);
|
||||
eeh_pe_dev_traverse(pe,
|
||||
eeh_report_failure, NULL);
|
||||
|
|
|
@ -883,32 +883,29 @@ void eeh_pe_restore_bars(struct eeh_pe *pe)
|
|||
const char *eeh_pe_loc_get(struct eeh_pe *pe)
|
||||
{
|
||||
struct pci_bus *bus = eeh_pe_bus_get(pe);
|
||||
struct device_node *dn = pci_bus_to_OF_node(bus);
|
||||
struct device_node *dn;
|
||||
const char *loc = NULL;
|
||||
|
||||
if (!dn)
|
||||
goto out;
|
||||
while (bus) {
|
||||
dn = pci_bus_to_OF_node(bus);
|
||||
if (!dn) {
|
||||
bus = bus->parent;
|
||||
continue;
|
||||
}
|
||||
|
||||
/* PHB PE or root PE ? */
|
||||
if (pci_is_root_bus(bus)) {
|
||||
loc = of_get_property(dn, "ibm,loc-code", NULL);
|
||||
if (!loc)
|
||||
if (pci_is_root_bus(bus))
|
||||
loc = of_get_property(dn, "ibm,io-base-loc-code", NULL);
|
||||
if (loc)
|
||||
goto out;
|
||||
else
|
||||
loc = of_get_property(dn, "ibm,slot-location-code",
|
||||
NULL);
|
||||
|
||||
/* Check the root port */
|
||||
dn = dn->child;
|
||||
if (!dn)
|
||||
goto out;
|
||||
if (loc)
|
||||
return loc;
|
||||
|
||||
bus = bus->parent;
|
||||
}
|
||||
|
||||
loc = of_get_property(dn, "ibm,loc-code", NULL);
|
||||
if (!loc)
|
||||
loc = of_get_property(dn, "ibm,slot-location-code", NULL);
|
||||
|
||||
out:
|
||||
return loc ? loc : "N/A";
|
||||
return "N/A";
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -931,7 +928,7 @@ struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)
|
|||
bus = pe->phb->bus;
|
||||
} else if (pe->type & EEH_PE_BUS ||
|
||||
pe->type & EEH_PE_DEVICE) {
|
||||
if (pe->bus) {
|
||||
if (pe->state & EEH_PE_PRI_BUS) {
|
||||
bus = pe->bus;
|
||||
goto out;
|
||||
}
|
||||
|
|
|
@ -635,6 +635,33 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
|
|||
*/
|
||||
break;
|
||||
|
||||
case R_PPC64_ENTRY:
|
||||
/*
|
||||
* Optimize ELFv2 large code model entry point if
|
||||
* the TOC is within 2GB range of current location.
|
||||
*/
|
||||
value = my_r2(sechdrs, me) - (unsigned long)location;
|
||||
if (value + 0x80008000 > 0xffffffff)
|
||||
break;
|
||||
/*
|
||||
* Check for the large code model prolog sequence:
|
||||
* ld r2, ...(r12)
|
||||
* add r2, r2, r12
|
||||
*/
|
||||
if ((((uint32_t *)location)[0] & ~0xfffc)
|
||||
!= 0xe84c0000)
|
||||
break;
|
||||
if (((uint32_t *)location)[1] != 0x7c426214)
|
||||
break;
|
||||
/*
|
||||
* If found, replace it with:
|
||||
* addis r2, r12, (.TOC.-func)@ha
|
||||
* addi r2, r12, (.TOC.-func)@l
|
||||
*/
|
||||
((uint32_t *)location)[0] = 0x3c4c0000 + PPC_HA(value);
|
||||
((uint32_t *)location)[1] = 0x38420000 + PPC_LO(value);
|
||||
break;
|
||||
|
||||
case R_PPC64_REL16_HA:
|
||||
/* Subtract location pointer */
|
||||
value -= (unsigned long)location;
|
||||
|
|
|
@ -551,6 +551,24 @@ static void tm_reclaim_thread(struct thread_struct *thr,
|
|||
msr_diff &= MSR_FP | MSR_VEC | MSR_VSX | MSR_FE0 | MSR_FE1;
|
||||
}
|
||||
|
||||
/*
|
||||
* Use the current MSR TM suspended bit to track if we have
|
||||
* checkpointed state outstanding.
|
||||
* On signal delivery, we'd normally reclaim the checkpointed
|
||||
* state to obtain stack pointer (see:get_tm_stackpointer()).
|
||||
* This will then directly return to userspace without going
|
||||
* through __switch_to(). However, if the stack frame is bad,
|
||||
* we need to exit this thread which calls __switch_to() which
|
||||
* will again attempt to reclaim the already saved tm state.
|
||||
* Hence we need to check that we've not already reclaimed
|
||||
* this state.
|
||||
* We do this using the current MSR, rather tracking it in
|
||||
* some specific thread_struct bit, as it has the additional
|
||||
* benifit of checking for a potential TM bad thing exception.
|
||||
*/
|
||||
if (!MSR_TM_SUSPENDED(mfmsr()))
|
||||
return;
|
||||
|
||||
/*
|
||||
* Use the current MSR TM suspended bit to track if we have
|
||||
* checkpointed state outstanding.
|
||||
|
|
|
@ -2153,7 +2153,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
|
|||
|
||||
/* Emulate H_SET_DABR/X on P8 for the sake of compat mode guests */
|
||||
2: rlwimi r5, r4, 5, DAWRX_DR | DAWRX_DW
|
||||
rlwimi r5, r4, 1, DAWRX_WT
|
||||
rlwimi r5, r4, 2, DAWRX_WT
|
||||
clrrdi r4, r4, 3
|
||||
std r4, VCPU_DAWR(r3)
|
||||
std r5, VCPU_DAWRX(r3)
|
||||
|
|
|
@ -919,21 +919,17 @@ int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
|
|||
r = -ENXIO;
|
||||
break;
|
||||
}
|
||||
vcpu->arch.vr.vr[reg->id - KVM_REG_PPC_VR0] = val.vval;
|
||||
val.vval = vcpu->arch.vr.vr[reg->id - KVM_REG_PPC_VR0];
|
||||
break;
|
||||
case KVM_REG_PPC_VSCR:
|
||||
if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
|
||||
r = -ENXIO;
|
||||
break;
|
||||
}
|
||||
vcpu->arch.vr.vscr.u[3] = set_reg_val(reg->id, val);
|
||||
val = get_reg_val(reg->id, vcpu->arch.vr.vscr.u[3]);
|
||||
break;
|
||||
case KVM_REG_PPC_VRSAVE:
|
||||
if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
|
||||
r = -ENXIO;
|
||||
break;
|
||||
}
|
||||
vcpu->arch.vrsave = set_reg_val(reg->id, val);
|
||||
val = get_reg_val(reg->id, vcpu->arch.vrsave);
|
||||
break;
|
||||
#endif /* CONFIG_ALTIVEC */
|
||||
default:
|
||||
|
@ -974,17 +970,21 @@ int kvm_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg)
|
|||
r = -ENXIO;
|
||||
break;
|
||||
}
|
||||
val.vval = vcpu->arch.vr.vr[reg->id - KVM_REG_PPC_VR0];
|
||||
vcpu->arch.vr.vr[reg->id - KVM_REG_PPC_VR0] = val.vval;
|
||||
break;
|
||||
case KVM_REG_PPC_VSCR:
|
||||
if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
|
||||
r = -ENXIO;
|
||||
break;
|
||||
}
|
||||
val = get_reg_val(reg->id, vcpu->arch.vr.vscr.u[3]);
|
||||
vcpu->arch.vr.vscr.u[3] = set_reg_val(reg->id, val);
|
||||
break;
|
||||
case KVM_REG_PPC_VRSAVE:
|
||||
val = get_reg_val(reg->id, vcpu->arch.vrsave);
|
||||
if (!cpu_has_feature(CPU_FTR_ALTIVEC)) {
|
||||
r = -ENXIO;
|
||||
break;
|
||||
}
|
||||
vcpu->arch.vrsave = set_reg_val(reg->id, val);
|
||||
break;
|
||||
#endif /* CONFIG_ALTIVEC */
|
||||
default:
|
||||
|
|
|
@ -444,9 +444,12 @@ static void *pnv_eeh_probe(struct pci_dn *pdn, void *data)
|
|||
* PCI devices of the PE are expected to be removed prior
|
||||
* to PE reset.
|
||||
*/
|
||||
if (!edev->pe->bus)
|
||||
if (!(edev->pe->state & EEH_PE_PRI_BUS)) {
|
||||
edev->pe->bus = pci_find_bus(hose->global_number,
|
||||
pdn->busno);
|
||||
if (edev->pe->bus)
|
||||
edev->pe->state |= EEH_PE_PRI_BUS;
|
||||
}
|
||||
|
||||
/*
|
||||
* Enable EEH explicitly so that we will do EEH check
|
||||
|
|
|
@ -3034,6 +3034,7 @@ static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
|
|||
|
||||
static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
|
||||
.dma_dev_setup = pnv_pci_dma_dev_setup,
|
||||
.dma_bus_setup = pnv_pci_dma_bus_setup,
|
||||
#ifdef CONFIG_PCI_MSI
|
||||
.setup_msi_irqs = pnv_setup_msi_irqs,
|
||||
.teardown_msi_irqs = pnv_teardown_msi_irqs,
|
||||
|
|
|
@ -601,6 +601,9 @@ int pnv_tce_build(struct iommu_table *tbl, long index, long npages,
|
|||
u64 rpn = __pa(uaddr) >> tbl->it_page_shift;
|
||||
long i;
|
||||
|
||||
if (proto_tce & TCE_PCI_WRITE)
|
||||
proto_tce |= TCE_PCI_READ;
|
||||
|
||||
for (i = 0; i < npages; i++) {
|
||||
unsigned long newtce = proto_tce |
|
||||
((rpn + i) << tbl->it_page_shift);
|
||||
|
@ -622,6 +625,9 @@ int pnv_tce_xchg(struct iommu_table *tbl, long index,
|
|||
|
||||
BUG_ON(*hpa & ~IOMMU_PAGE_MASK(tbl));
|
||||
|
||||
if (newtce & TCE_PCI_WRITE)
|
||||
newtce |= TCE_PCI_READ;
|
||||
|
||||
oldtce = xchg(pnv_tce(tbl, idx), cpu_to_be64(newtce));
|
||||
*hpa = be64_to_cpu(oldtce) & ~(TCE_PCI_READ | TCE_PCI_WRITE);
|
||||
*direction = iommu_tce_direction(oldtce);
|
||||
|
@ -762,6 +768,26 @@ void pnv_pci_dma_dev_setup(struct pci_dev *pdev)
|
|||
phb->dma_dev_setup(phb, pdev);
|
||||
}
|
||||
|
||||
void pnv_pci_dma_bus_setup(struct pci_bus *bus)
|
||||
{
|
||||
struct pci_controller *hose = bus->sysdata;
|
||||
struct pnv_phb *phb = hose->private_data;
|
||||
struct pnv_ioda_pe *pe;
|
||||
|
||||
list_for_each_entry(pe, &phb->ioda.pe_list, list) {
|
||||
if (!(pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL)))
|
||||
continue;
|
||||
|
||||
if (!pe->pbus)
|
||||
continue;
|
||||
|
||||
if (bus->number == ((pe->rid >> 8) & 0xFF)) {
|
||||
pe->pbus = bus;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void pnv_pci_shutdown(void)
|
||||
{
|
||||
struct pci_controller *hose;
|
||||
|
|
|
@ -235,6 +235,7 @@ extern void pnv_pci_reset_secondary_bus(struct pci_dev *dev);
|
|||
extern int pnv_eeh_phb_reset(struct pci_controller *hose, int option);
|
||||
|
||||
extern void pnv_pci_dma_dev_setup(struct pci_dev *pdev);
|
||||
extern void pnv_pci_dma_bus_setup(struct pci_bus *bus);
|
||||
extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
|
||||
extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
|
||||
|
||||
|
|
|
@ -157,7 +157,9 @@ ENTRY(chacha20_4block_xor_ssse3)
|
|||
# done with the slightly better performing SSSE3 byte shuffling,
|
||||
# 7/12-bit word rotation uses traditional shift+OR.
|
||||
|
||||
sub $0x40,%rsp
|
||||
mov %rsp,%r11
|
||||
sub $0x80,%rsp
|
||||
and $~63,%rsp
|
||||
|
||||
# x0..15[0-3] = s0..3[0..3]
|
||||
movq 0x00(%rdi),%xmm1
|
||||
|
@ -620,6 +622,6 @@ ENTRY(chacha20_4block_xor_ssse3)
|
|||
pxor %xmm1,%xmm15
|
||||
movdqu %xmm15,0xf0(%rsi)
|
||||
|
||||
add $0x40,%rsp
|
||||
mov %r11,%rsp
|
||||
ret
|
||||
ENDPROC(chacha20_4block_xor_ssse3)
|
||||
|
|
|
@ -27,7 +27,7 @@
|
|||
#define BOOT_HEAP_SIZE 0x400000
|
||||
#else /* !CONFIG_KERNEL_BZIP2 */
|
||||
|
||||
#define BOOT_HEAP_SIZE 0x8000
|
||||
#define BOOT_HEAP_SIZE 0x10000
|
||||
|
||||
#endif /* !CONFIG_KERNEL_BZIP2 */
|
||||
|
||||
|
|
|
@ -116,8 +116,36 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
|
|||
#endif
|
||||
cpumask_set_cpu(cpu, mm_cpumask(next));
|
||||
|
||||
/* Re-load page tables */
|
||||
/*
|
||||
* Re-load page tables.
|
||||
*
|
||||
* This logic has an ordering constraint:
|
||||
*
|
||||
* CPU 0: Write to a PTE for 'next'
|
||||
* CPU 0: load bit 1 in mm_cpumask. if nonzero, send IPI.
|
||||
* CPU 1: set bit 1 in next's mm_cpumask
|
||||
* CPU 1: load from the PTE that CPU 0 writes (implicit)
|
||||
*
|
||||
* We need to prevent an outcome in which CPU 1 observes
|
||||
* the new PTE value and CPU 0 observes bit 1 clear in
|
||||
* mm_cpumask. (If that occurs, then the IPI will never
|
||||
* be sent, and CPU 0's TLB will contain a stale entry.)
|
||||
*
|
||||
* The bad outcome can occur if either CPU's load is
|
||||
* reordered before that CPU's store, so both CPUs must
|
||||
* execute full barriers to prevent this from happening.
|
||||
*
|
||||
* Thus, switch_mm needs a full barrier between the
|
||||
* store to mm_cpumask and any operation that could load
|
||||
* from next->pgd. TLB fills are special and can happen
|
||||
* due to instruction fetches or for no reason at all,
|
||||
* and neither LOCK nor MFENCE orders them.
|
||||
* Fortunately, load_cr3() is serializing and gives the
|
||||
* ordering guarantee we need.
|
||||
*
|
||||
*/
|
||||
load_cr3(next->pgd);
|
||||
|
||||
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
|
||||
|
||||
/* Stop flush ipis for the previous mm */
|
||||
|
@ -156,10 +184,14 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
|
|||
* schedule, protecting us from simultaneous changes.
|
||||
*/
|
||||
cpumask_set_cpu(cpu, mm_cpumask(next));
|
||||
|
||||
/*
|
||||
* We were in lazy tlb mode and leave_mm disabled
|
||||
* tlb flush IPI delivery. We must reload CR3
|
||||
* to make sure to use no freed page tables.
|
||||
*
|
||||
* As above, load_cr3() is serializing and orders TLB
|
||||
* fills with respect to the mm_cpumask write.
|
||||
*/
|
||||
load_cr3(next->pgd);
|
||||
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
|
||||
|
|
|
@@ -363,20 +363,18 @@ static inline enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
}
static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
{
	pgprotval_t val = pgprot_val(pgprot);
	pgprot_t new;
	unsigned long val;

	val = pgprot_val(pgprot);
	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
		((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
	return new;
}
static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
{
	pgprotval_t val = pgprot_val(pgprot);
	pgprot_t new;
	unsigned long val;

	val = pgprot_val(pgprot);
	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
		((val & _PAGE_PAT_LARGE) >>
		 (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));

@@ -182,6 +182,14 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
			DMI_MATCH(DMI_PRODUCT_NAME, "iMac9,1"),
		},
	},
	{ /* Handle problems with rebooting on the iMac10,1. */
		.callback = set_pci_reboot,
		.ident = "Apple iMac10,1",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
			DMI_MATCH(DMI_PRODUCT_NAME, "iMac10,1"),
		},
	},

	/* ASRock */
	{ /* Handle problems with rebooting on ASRock Q1900DC-ITX */

@@ -268,7 +268,7 @@ TRACE_EVENT(kvm_inj_virq,
#define kvm_trace_sym_exc \
	EXS(DE), EXS(DB), EXS(BP), EXS(OF), EXS(BR), EXS(UD), EXS(NM), \
	EXS(DF), EXS(TS), EXS(NP), EXS(SS), EXS(GP), EXS(PF), \
	EXS(MF), EXS(MC)
	EXS(MF), EXS(AC), EXS(MC)

/*
 * Tracepoint for kvm interrupt injection:

@@ -8932,7 +8932,8 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
			best->ebx &= ~bit(X86_FEATURE_INVPCID);
	}

	vmcs_set_secondary_exec_control(secondary_exec_ctl);
	if (cpu_has_secondary_exec_ctrls())
		vmcs_set_secondary_exec_control(secondary_exec_ctl);

	if (static_cpu_has(X86_FEATURE_PCOMMIT) && nested) {
		if (guest_cpuid_has_pcommit(vcpu))

@@ -951,7 +951,7 @@ static u32 msrs_to_save[] = {
	MSR_CSTAR, MSR_KERNEL_GS_BASE, MSR_SYSCALL_MASK, MSR_LSTAR,
#endif
	MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
	MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS
	MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
};

static unsigned num_msrs_to_save;

@@ -4006,16 +4006,17 @@ static void kvm_init_msr_list(void)

		/*
		 * Even MSRs that are valid in the host may not be exposed
		 * to the guests in some cases. We could work around this
		 * in VMX with the generic MSR save/load machinery, but it
		 * is not really worthwhile since it will really only
		 * happen with nested virtualization.
		 * to the guests in some cases.
		 */
		switch (msrs_to_save[i]) {
		case MSR_IA32_BNDCFGS:
			if (!kvm_x86_ops->mpx_supported())
				continue;
			break;
		case MSR_TSC_AUX:
			if (!kvm_x86_ops->rdtscp_supported())
				continue;
			break;
		default:
			break;
		}

@ -232,17 +232,31 @@ ENDPROC(copy_user_enhanced_fast_string)
|
|||
|
||||
/*
|
||||
* copy_user_nocache - Uncached memory copy with exception handling
|
||||
* This will force destination/source out of cache for more performance.
|
||||
* This will force destination out of cache for more performance.
|
||||
*
|
||||
* Note: Cached memory copy is used when destination or size is not
|
||||
* naturally aligned. That is:
|
||||
* - Require 8-byte alignment when size is 8 bytes or larger.
|
||||
* - Require 4-byte alignment when size is 4 bytes.
|
||||
*/
|
||||
ENTRY(__copy_user_nocache)
|
||||
ASM_STAC
|
||||
|
||||
/* If size is less than 8 bytes, go to 4-byte copy */
|
||||
cmpl $8,%edx
|
||||
jb 20f /* less then 8 bytes, go to byte copy loop */
|
||||
jb .L_4b_nocache_copy_entry
|
||||
|
||||
/* If destination is not 8-byte aligned, "cache" copy to align it */
|
||||
ALIGN_DESTINATION
|
||||
|
||||
/* Set 4x8-byte copy count and remainder */
|
||||
movl %edx,%ecx
|
||||
andl $63,%edx
|
||||
shrl $6,%ecx
|
||||
jz 17f
|
||||
jz .L_8b_nocache_copy_entry /* jump if count is 0 */
|
||||
|
||||
/* Perform 4x8-byte nocache loop-copy */
|
||||
.L_4x8b_nocache_copy_loop:
|
||||
1: movq (%rsi),%r8
|
||||
2: movq 1*8(%rsi),%r9
|
||||
3: movq 2*8(%rsi),%r10
|
||||
|
@ -262,60 +276,106 @@ ENTRY(__copy_user_nocache)
|
|||
leaq 64(%rsi),%rsi
|
||||
leaq 64(%rdi),%rdi
|
||||
decl %ecx
|
||||
jnz 1b
|
||||
17: movl %edx,%ecx
|
||||
jnz .L_4x8b_nocache_copy_loop
|
||||
|
||||
/* Set 8-byte copy count and remainder */
|
||||
.L_8b_nocache_copy_entry:
|
||||
movl %edx,%ecx
|
||||
andl $7,%edx
|
||||
shrl $3,%ecx
|
||||
jz 20f
|
||||
18: movq (%rsi),%r8
|
||||
19: movnti %r8,(%rdi)
|
||||
jz .L_4b_nocache_copy_entry /* jump if count is 0 */
|
||||
|
||||
/* Perform 8-byte nocache loop-copy */
|
||||
.L_8b_nocache_copy_loop:
|
||||
20: movq (%rsi),%r8
|
||||
21: movnti %r8,(%rdi)
|
||||
leaq 8(%rsi),%rsi
|
||||
leaq 8(%rdi),%rdi
|
||||
decl %ecx
|
||||
jnz 18b
|
||||
20: andl %edx,%edx
|
||||
jz 23f
|
||||
jnz .L_8b_nocache_copy_loop
|
||||
|
||||
/* If no byte left, we're done */
|
||||
.L_4b_nocache_copy_entry:
|
||||
andl %edx,%edx
|
||||
jz .L_finish_copy
|
||||
|
||||
/* If destination is not 4-byte aligned, go to byte copy: */
|
||||
movl %edi,%ecx
|
||||
andl $3,%ecx
|
||||
jnz .L_1b_cache_copy_entry
|
||||
|
||||
/* Set 4-byte copy count (1 or 0) and remainder */
|
||||
movl %edx,%ecx
|
||||
21: movb (%rsi),%al
|
||||
22: movb %al,(%rdi)
|
||||
andl $3,%edx
|
||||
shrl $2,%ecx
|
||||
jz .L_1b_cache_copy_entry /* jump if count is 0 */
|
||||
|
||||
/* Perform 4-byte nocache copy: */
|
||||
30: movl (%rsi),%r8d
|
||||
31: movnti %r8d,(%rdi)
|
||||
leaq 4(%rsi),%rsi
|
||||
leaq 4(%rdi),%rdi
|
||||
|
||||
/* If no bytes left, we're done: */
|
||||
andl %edx,%edx
|
||||
jz .L_finish_copy
|
||||
|
||||
/* Perform byte "cache" loop-copy for the remainder */
|
||||
.L_1b_cache_copy_entry:
|
||||
movl %edx,%ecx
|
||||
.L_1b_cache_copy_loop:
|
||||
40: movb (%rsi),%al
|
||||
41: movb %al,(%rdi)
|
||||
incq %rsi
|
||||
incq %rdi
|
||||
decl %ecx
|
||||
jnz 21b
|
||||
23: xorl %eax,%eax
|
||||
jnz .L_1b_cache_copy_loop
|
||||
|
||||
/* Finished copying; fence the prior stores */
|
||||
.L_finish_copy:
|
||||
xorl %eax,%eax
|
||||
ASM_CLAC
|
||||
sfence
|
||||
ret
|
||||
|
||||
.section .fixup,"ax"
|
||||
30: shll $6,%ecx
|
||||
.L_fixup_4x8b_copy:
|
||||
shll $6,%ecx
|
||||
addl %ecx,%edx
|
||||
jmp 60f
|
||||
40: lea (%rdx,%rcx,8),%rdx
|
||||
jmp 60f
|
||||
50: movl %ecx,%edx
|
||||
60: sfence
|
||||
jmp .L_fixup_handle_tail
|
||||
.L_fixup_8b_copy:
|
||||
lea (%rdx,%rcx,8),%rdx
|
||||
jmp .L_fixup_handle_tail
|
||||
.L_fixup_4b_copy:
|
||||
lea (%rdx,%rcx,4),%rdx
|
||||
jmp .L_fixup_handle_tail
|
||||
.L_fixup_1b_copy:
|
||||
movl %ecx,%edx
|
||||
.L_fixup_handle_tail:
|
||||
sfence
|
||||
jmp copy_user_handle_tail
|
||||
.previous
|
||||
|
||||
_ASM_EXTABLE(1b,30b)
|
||||
_ASM_EXTABLE(2b,30b)
|
||||
_ASM_EXTABLE(3b,30b)
|
||||
_ASM_EXTABLE(4b,30b)
|
||||
_ASM_EXTABLE(5b,30b)
|
||||
_ASM_EXTABLE(6b,30b)
|
||||
_ASM_EXTABLE(7b,30b)
|
||||
_ASM_EXTABLE(8b,30b)
|
||||
_ASM_EXTABLE(9b,30b)
|
||||
_ASM_EXTABLE(10b,30b)
|
||||
_ASM_EXTABLE(11b,30b)
|
||||
_ASM_EXTABLE(12b,30b)
|
||||
_ASM_EXTABLE(13b,30b)
|
||||
_ASM_EXTABLE(14b,30b)
|
||||
_ASM_EXTABLE(15b,30b)
|
||||
_ASM_EXTABLE(16b,30b)
|
||||
_ASM_EXTABLE(18b,40b)
|
||||
_ASM_EXTABLE(19b,40b)
|
||||
_ASM_EXTABLE(21b,50b)
|
||||
_ASM_EXTABLE(22b,50b)
|
||||
_ASM_EXTABLE(1b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(2b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(3b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(4b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(5b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(6b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(7b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(8b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(9b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(10b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(11b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(12b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(13b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(14b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(15b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(16b,.L_fixup_4x8b_copy)
|
||||
_ASM_EXTABLE(20b,.L_fixup_8b_copy)
|
||||
_ASM_EXTABLE(21b,.L_fixup_8b_copy)
|
||||
_ASM_EXTABLE(30b,.L_fixup_4b_copy)
|
||||
_ASM_EXTABLE(31b,.L_fixup_4b_copy)
|
||||
_ASM_EXTABLE(40b,.L_fixup_1b_copy)
|
||||
_ASM_EXTABLE(41b,.L_fixup_1b_copy)
|
||||
ENDPROC(__copy_user_nocache)
|
||||
|
|
|
@ -287,6 +287,9 @@ static noinline int vmalloc_fault(unsigned long address)
|
|||
if (!pmd_k)
|
||||
return -1;
|
||||
|
||||
if (pmd_huge(*pmd_k))
|
||||
return 0;
|
||||
|
||||
pte_k = pte_offset_kernel(pmd_k, address);
|
||||
if (!pte_present(*pte_k))
|
||||
return -1;
|
||||
|
@ -360,8 +363,6 @@ void vmalloc_sync_all(void)
|
|||
* 64-bit:
|
||||
*
|
||||
* Handle a fault on the vmalloc area
|
||||
*
|
||||
* This assumes no large pages in there.
|
||||
*/
|
||||
static noinline int vmalloc_fault(unsigned long address)
|
||||
{
|
||||
|
@ -403,17 +404,23 @@ static noinline int vmalloc_fault(unsigned long address)
|
|||
if (pud_none(*pud_ref))
|
||||
return -1;
|
||||
|
||||
if (pud_none(*pud) || pud_page_vaddr(*pud) != pud_page_vaddr(*pud_ref))
|
||||
if (pud_none(*pud) || pud_pfn(*pud) != pud_pfn(*pud_ref))
|
||||
BUG();
|
||||
|
||||
if (pud_huge(*pud))
|
||||
return 0;
|
||||
|
||||
pmd = pmd_offset(pud, address);
|
||||
pmd_ref = pmd_offset(pud_ref, address);
|
||||
if (pmd_none(*pmd_ref))
|
||||
return -1;
|
||||
|
||||
if (pmd_none(*pmd) || pmd_page(*pmd) != pmd_page(*pmd_ref))
|
||||
if (pmd_none(*pmd) || pmd_pfn(*pmd) != pmd_pfn(*pmd_ref))
|
||||
BUG();
|
||||
|
||||
if (pmd_huge(*pmd))
|
||||
return 0;
|
||||
|
||||
pte_ref = pte_offset_kernel(pmd_ref, address);
|
||||
if (!pte_present(*pte_ref))
|
||||
return -1;
|
||||
|
|
|
@@ -33,7 +33,7 @@ struct cpa_data {
	pgd_t *pgd;
	pgprot_t mask_set;
	pgprot_t mask_clr;
	int numpages;
	unsigned long numpages;
	int flags;
	unsigned long pfn;
	unsigned force_split : 1;

@@ -1345,7 +1345,7 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
		 * CPA operation. Either a large page has been
		 * preserved or a single page update happened.
		 */
		BUG_ON(cpa->numpages > numpages);
		BUG_ON(cpa->numpages > numpages || !cpa->numpages);
		numpages -= cpa->numpages;
		if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY))
			cpa->curpage++;

@ -161,7 +161,10 @@ void flush_tlb_current_task(void)
|
|||
preempt_disable();
|
||||
|
||||
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
|
||||
|
||||
/* This is an implicit full barrier that synchronizes with switch_mm. */
|
||||
local_flush_tlb();
|
||||
|
||||
trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
|
||||
if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
|
||||
flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
|
||||
|
@ -188,17 +191,29 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
|
|||
unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
|
||||
|
||||
preempt_disable();
|
||||
if (current->active_mm != mm)
|
||||
if (current->active_mm != mm) {
|
||||
/* Synchronize with switch_mm. */
|
||||
smp_mb();
|
||||
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!current->mm) {
|
||||
leave_mm(smp_processor_id());
|
||||
|
||||
/* Synchronize with switch_mm. */
|
||||
smp_mb();
|
||||
|
||||
goto out;
|
||||
}
|
||||
|
||||
if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
|
||||
base_pages_to_flush = (end - start) >> PAGE_SHIFT;
|
||||
|
||||
/*
|
||||
* Both branches below are implicit full barriers (MOV to CR or
|
||||
* INVLPG) that synchronize with switch_mm.
|
||||
*/
|
||||
if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
|
||||
base_pages_to_flush = TLB_FLUSH_ALL;
|
||||
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
|
||||
|
@ -228,10 +243,18 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
|
|||
preempt_disable();
|
||||
|
||||
if (current->active_mm == mm) {
|
||||
if (current->mm)
|
||||
if (current->mm) {
|
||||
/*
|
||||
* Implicit full barrier (INVLPG) that synchronizes
|
||||
* with switch_mm.
|
||||
*/
|
||||
__flush_tlb_one(start);
|
||||
else
|
||||
} else {
|
||||
leave_mm(smp_processor_id());
|
||||
|
||||
/* Synchronize with switch_mm. */
|
||||
smp_mb();
|
||||
}
|
||||
}
|
||||
|
||||
if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
|
||||
|
|
|
@@ -34,7 +34,8 @@ static void xen_hvm_post_suspend(int suspend_cancelled)
{
#ifdef CONFIG_XEN_PVHVM
	int cpu;
	xen_hvm_init_shared_info();
	if (!suspend_cancelled)
		xen_hvm_init_shared_info();
	xen_callback_vector();
	xen_unplug_emulated_devices();
	if (xen_feature(XENFEAT_hvm_safe_pvclock)) {

@ -68,6 +68,18 @@ static struct bio *blk_bio_write_same_split(struct request_queue *q,
|
|||
return bio_split(bio, q->limits.max_write_same_sectors, GFP_NOIO, bs);
|
||||
}
|
||||
|
||||
static inline unsigned get_max_io_size(struct request_queue *q,
|
||||
struct bio *bio)
|
||||
{
|
||||
unsigned sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector);
|
||||
unsigned mask = queue_logical_block_size(q) - 1;
|
||||
|
||||
/* aligned to logical block size */
|
||||
sectors &= ~(mask >> 9);
|
||||
|
||||
return sectors;
|
||||
}
|
||||
|
||||
static struct bio *blk_bio_segment_split(struct request_queue *q,
|
||||
struct bio *bio,
|
||||
struct bio_set *bs,
|
||||
|
@ -79,11 +91,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
|
|||
unsigned front_seg_size = bio->bi_seg_front_size;
|
||||
bool do_split = true;
|
||||
struct bio *new = NULL;
|
||||
const unsigned max_sectors = get_max_io_size(q, bio);
|
||||
|
||||
bio_for_each_segment(bv, bio, iter) {
|
||||
if (sectors + (bv.bv_len >> 9) > queue_max_sectors(q))
|
||||
goto split;
|
||||
|
||||
/*
|
||||
* If the queue doesn't support SG gaps and adding this
|
||||
* offset would create a gap, disallow it.
|
||||
|
@ -91,6 +101,21 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
|
|||
if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset))
|
||||
goto split;
|
||||
|
||||
if (sectors + (bv.bv_len >> 9) > max_sectors) {
|
||||
/*
|
||||
* Consider this a new segment if we're splitting in
|
||||
* the middle of this vector.
|
||||
*/
|
||||
if (nsegs < queue_max_segments(q) &&
|
||||
sectors < max_sectors) {
|
||||
nsegs++;
|
||||
sectors = max_sectors;
|
||||
}
|
||||
if (sectors)
|
||||
goto split;
|
||||
/* Make this single bvec as the 1st segment */
|
||||
}
|
||||
|
||||
if (bvprvp && blk_queue_cluster(q)) {
|
||||
if (seg_size + bv.bv_len > queue_max_segment_size(q))
|
||||
goto new_segment;
|
||||
|
|
|
@ -76,6 +76,8 @@ int af_alg_register_type(const struct af_alg_type *type)
|
|||
goto unlock;
|
||||
|
||||
type->ops->owner = THIS_MODULE;
|
||||
if (type->ops_nokey)
|
||||
type->ops_nokey->owner = THIS_MODULE;
|
||||
node->type = type;
|
||||
list_add(&node->list, &alg_types);
|
||||
err = 0;
|
||||
|
@ -125,6 +127,26 @@ int af_alg_release(struct socket *sock)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(af_alg_release);
|
||||
|
||||
void af_alg_release_parent(struct sock *sk)
|
||||
{
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
unsigned int nokey = ask->nokey_refcnt;
|
||||
bool last = nokey && !ask->refcnt;
|
||||
|
||||
sk = ask->parent;
|
||||
ask = alg_sk(sk);
|
||||
|
||||
lock_sock(sk);
|
||||
ask->nokey_refcnt -= nokey;
|
||||
if (!last)
|
||||
last = !--ask->refcnt;
|
||||
release_sock(sk);
|
||||
|
||||
if (last)
|
||||
sock_put(sk);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(af_alg_release_parent);
|
||||
|
||||
static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
|
||||
{
|
||||
const u32 forbidden = CRYPTO_ALG_INTERNAL;
|
||||
|
@ -133,6 +155,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
|
|||
struct sockaddr_alg *sa = (void *)uaddr;
|
||||
const struct af_alg_type *type;
|
||||
void *private;
|
||||
int err;
|
||||
|
||||
if (sock->state == SS_CONNECTED)
|
||||
return -EINVAL;
|
||||
|
@ -160,16 +183,22 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
|
|||
return PTR_ERR(private);
|
||||
}
|
||||
|
||||
err = -EBUSY;
|
||||
lock_sock(sk);
|
||||
if (ask->refcnt | ask->nokey_refcnt)
|
||||
goto unlock;
|
||||
|
||||
swap(ask->type, type);
|
||||
swap(ask->private, private);
|
||||
|
||||
err = 0;
|
||||
|
||||
unlock:
|
||||
release_sock(sk);
|
||||
|
||||
alg_do_release(type, private);
|
||||
|
||||
return 0;
|
||||
return err;
|
||||
}
|
||||
|
||||
static int alg_setkey(struct sock *sk, char __user *ukey,
|
||||
|
@ -202,11 +231,15 @@ static int alg_setsockopt(struct socket *sock, int level, int optname,
|
|||
struct sock *sk = sock->sk;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
const struct af_alg_type *type;
|
||||
int err = -ENOPROTOOPT;
|
||||
int err = -EBUSY;
|
||||
|
||||
lock_sock(sk);
|
||||
if (ask->refcnt)
|
||||
goto unlock;
|
||||
|
||||
type = ask->type;
|
||||
|
||||
err = -ENOPROTOOPT;
|
||||
if (level != SOL_ALG || !type)
|
||||
goto unlock;
|
||||
|
||||
|
@ -238,6 +271,7 @@ int af_alg_accept(struct sock *sk, struct socket *newsock)
|
|||
struct alg_sock *ask = alg_sk(sk);
|
||||
const struct af_alg_type *type;
|
||||
struct sock *sk2;
|
||||
unsigned int nokey;
|
||||
int err;
|
||||
|
||||
lock_sock(sk);
|
||||
|
@ -257,20 +291,29 @@ int af_alg_accept(struct sock *sk, struct socket *newsock)
|
|||
security_sk_clone(sk, sk2);
|
||||
|
||||
err = type->accept(ask->private, sk2);
|
||||
if (err) {
|
||||
sk_free(sk2);
|
||||
|
||||
nokey = err == -ENOKEY;
|
||||
if (nokey && type->accept_nokey)
|
||||
err = type->accept_nokey(ask->private, sk2);
|
||||
|
||||
if (err)
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
sk2->sk_family = PF_ALG;
|
||||
|
||||
sock_hold(sk);
|
||||
if (nokey || !ask->refcnt++)
|
||||
sock_hold(sk);
|
||||
ask->nokey_refcnt += nokey;
|
||||
alg_sk(sk2)->parent = sk;
|
||||
alg_sk(sk2)->type = type;
|
||||
alg_sk(sk2)->nokey_refcnt = nokey;
|
||||
|
||||
newsock->ops = type->ops;
|
||||
newsock->state = SS_CONNECTED;
|
||||
|
||||
if (nokey)
|
||||
newsock->ops = type->ops_nokey;
|
||||
|
||||
err = 0;
|
||||
|
||||
unlock:
|
||||
|
|
|
@@ -451,6 +451,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
	struct ahash_alg *alg = crypto_ahash_alg(hash);

	hash->setkey = ahash_nosetkey;
	hash->has_setkey = false;
	hash->export = ahash_no_export;
	hash->import = ahash_no_import;

@@ -463,8 +464,10 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
	hash->finup = alg->finup ?: ahash_def_finup;
	hash->digest = alg->digest;

	if (alg->setkey)
	if (alg->setkey) {
		hash->setkey = alg->setkey;
		hash->has_setkey = true;
	}
	if (alg->export)
		hash->export = alg->export;
	if (alg->import)

@ -34,6 +34,11 @@ struct hash_ctx {
|
|||
struct ahash_request req;
|
||||
};
|
||||
|
||||
struct algif_hash_tfm {
|
||||
struct crypto_ahash *hash;
|
||||
bool has_key;
|
||||
};
|
||||
|
||||
static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
|
||||
size_t ignored)
|
||||
{
|
||||
|
@ -49,7 +54,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
|
|||
|
||||
lock_sock(sk);
|
||||
if (!ctx->more) {
|
||||
err = crypto_ahash_init(&ctx->req);
|
||||
err = af_alg_wait_for_completion(crypto_ahash_init(&ctx->req),
|
||||
&ctx->completion);
|
||||
if (err)
|
||||
goto unlock;
|
||||
}
|
||||
|
@ -120,6 +126,7 @@ static ssize_t hash_sendpage(struct socket *sock, struct page *page,
|
|||
} else {
|
||||
if (!ctx->more) {
|
||||
err = crypto_ahash_init(&ctx->req);
|
||||
err = af_alg_wait_for_completion(err, &ctx->completion);
|
||||
if (err)
|
||||
goto unlock;
|
||||
}
|
||||
|
@ -235,19 +242,151 @@ static struct proto_ops algif_hash_ops = {
|
|||
.accept = hash_accept,
|
||||
};
|
||||
|
||||
static int hash_check_key(struct socket *sock)
|
||||
{
|
||||
int err = 0;
|
||||
struct sock *psk;
|
||||
struct alg_sock *pask;
|
||||
struct algif_hash_tfm *tfm;
|
||||
struct sock *sk = sock->sk;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
|
||||
lock_sock(sk);
|
||||
if (ask->refcnt)
|
||||
goto unlock_child;
|
||||
|
||||
psk = ask->parent;
|
||||
pask = alg_sk(ask->parent);
|
||||
tfm = pask->private;
|
||||
|
||||
err = -ENOKEY;
|
||||
lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
|
||||
if (!tfm->has_key)
|
||||
goto unlock;
|
||||
|
||||
if (!pask->refcnt++)
|
||||
sock_hold(psk);
|
||||
|
||||
ask->refcnt = 1;
|
||||
sock_put(psk);
|
||||
|
||||
err = 0;
|
||||
|
||||
unlock:
|
||||
release_sock(psk);
|
||||
unlock_child:
|
||||
release_sock(sk);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int hash_sendmsg_nokey(struct socket *sock, struct msghdr *msg,
|
||||
size_t size)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = hash_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return hash_sendmsg(sock, msg, size);
|
||||
}
|
||||
|
||||
static ssize_t hash_sendpage_nokey(struct socket *sock, struct page *page,
|
||||
int offset, size_t size, int flags)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = hash_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return hash_sendpage(sock, page, offset, size, flags);
|
||||
}
|
||||
|
||||
static int hash_recvmsg_nokey(struct socket *sock, struct msghdr *msg,
|
||||
size_t ignored, int flags)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = hash_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return hash_recvmsg(sock, msg, ignored, flags);
|
||||
}
|
||||
|
||||
static int hash_accept_nokey(struct socket *sock, struct socket *newsock,
|
||||
int flags)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = hash_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return hash_accept(sock, newsock, flags);
|
||||
}
|
||||
|
||||
static struct proto_ops algif_hash_ops_nokey = {
|
||||
.family = PF_ALG,
|
||||
|
||||
.connect = sock_no_connect,
|
||||
.socketpair = sock_no_socketpair,
|
||||
.getname = sock_no_getname,
|
||||
.ioctl = sock_no_ioctl,
|
||||
.listen = sock_no_listen,
|
||||
.shutdown = sock_no_shutdown,
|
||||
.getsockopt = sock_no_getsockopt,
|
||||
.mmap = sock_no_mmap,
|
||||
.bind = sock_no_bind,
|
||||
.setsockopt = sock_no_setsockopt,
|
||||
.poll = sock_no_poll,
|
||||
|
||||
.release = af_alg_release,
|
||||
.sendmsg = hash_sendmsg_nokey,
|
||||
.sendpage = hash_sendpage_nokey,
|
||||
.recvmsg = hash_recvmsg_nokey,
|
||||
.accept = hash_accept_nokey,
|
||||
};
|
||||
|
||||
static void *hash_bind(const char *name, u32 type, u32 mask)
|
||||
{
|
||||
return crypto_alloc_ahash(name, type, mask);
|
||||
struct algif_hash_tfm *tfm;
|
||||
struct crypto_ahash *hash;
|
||||
|
||||
tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
|
||||
if (!tfm)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
hash = crypto_alloc_ahash(name, type, mask);
|
||||
if (IS_ERR(hash)) {
|
||||
kfree(tfm);
|
||||
return ERR_CAST(hash);
|
||||
}
|
||||
|
||||
tfm->hash = hash;
|
||||
|
||||
return tfm;
|
||||
}
|
||||
|
||||
static void hash_release(void *private)
|
||||
{
|
||||
crypto_free_ahash(private);
|
||||
struct algif_hash_tfm *tfm = private;
|
||||
|
||||
crypto_free_ahash(tfm->hash);
|
||||
kfree(tfm);
|
||||
}
|
||||
|
||||
static int hash_setkey(void *private, const u8 *key, unsigned int keylen)
|
||||
{
|
||||
return crypto_ahash_setkey(private, key, keylen);
|
||||
struct algif_hash_tfm *tfm = private;
|
||||
int err;
|
||||
|
||||
err = crypto_ahash_setkey(tfm->hash, key, keylen);
|
||||
tfm->has_key = !err;
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static void hash_sock_destruct(struct sock *sk)
|
||||
|
@ -261,12 +400,14 @@ static void hash_sock_destruct(struct sock *sk)
|
|||
af_alg_release_parent(sk);
|
||||
}
|
||||
|
||||
static int hash_accept_parent(void *private, struct sock *sk)
|
||||
static int hash_accept_parent_nokey(void *private, struct sock *sk)
|
||||
{
|
||||
struct hash_ctx *ctx;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(private);
|
||||
unsigned ds = crypto_ahash_digestsize(private);
|
||||
struct algif_hash_tfm *tfm = private;
|
||||
struct crypto_ahash *hash = tfm->hash;
|
||||
unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(hash);
|
||||
unsigned ds = crypto_ahash_digestsize(hash);
|
||||
|
||||
ctx = sock_kmalloc(sk, len, GFP_KERNEL);
|
||||
if (!ctx)
|
||||
|
@ -286,7 +427,7 @@ static int hash_accept_parent(void *private, struct sock *sk)
|
|||
|
||||
ask->private = ctx;
|
||||
|
||||
ahash_request_set_tfm(&ctx->req, private);
|
||||
ahash_request_set_tfm(&ctx->req, hash);
|
||||
ahash_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
|
||||
af_alg_complete, &ctx->completion);
|
||||
|
||||
|
@ -295,12 +436,24 @@ static int hash_accept_parent(void *private, struct sock *sk)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int hash_accept_parent(void *private, struct sock *sk)
|
||||
{
|
||||
struct algif_hash_tfm *tfm = private;
|
||||
|
||||
if (!tfm->has_key && crypto_ahash_has_setkey(tfm->hash))
|
||||
return -ENOKEY;
|
||||
|
||||
return hash_accept_parent_nokey(private, sk);
|
||||
}
|
||||
|
||||
static const struct af_alg_type algif_type_hash = {
|
||||
.bind = hash_bind,
|
||||
.release = hash_release,
|
||||
.setkey = hash_setkey,
|
||||
.accept = hash_accept_parent,
|
||||
.accept_nokey = hash_accept_parent_nokey,
|
||||
.ops = &algif_hash_ops,
|
||||
.ops_nokey = &algif_hash_ops_nokey,
|
||||
.name = "hash",
|
||||
.owner = THIS_MODULE
|
||||
};
|
||||
|
|
|
@ -31,6 +31,11 @@ struct skcipher_sg_list {
|
|||
struct scatterlist sg[0];
|
||||
};
|
||||
|
||||
struct skcipher_tfm {
|
||||
struct crypto_skcipher *skcipher;
|
||||
bool has_key;
|
||||
};
|
||||
|
||||
struct skcipher_ctx {
|
||||
struct list_head tsgl;
|
||||
struct af_alg_sgl rsgl;
|
||||
|
@ -60,18 +65,10 @@ struct skcipher_async_req {
|
|||
struct skcipher_async_rsgl first_sgl;
|
||||
struct list_head list;
|
||||
struct scatterlist *tsg;
|
||||
char iv[];
|
||||
atomic_t *inflight;
|
||||
struct skcipher_request req;
|
||||
};
|
||||
|
||||
#define GET_SREQ(areq, ctx) (struct skcipher_async_req *)((char *)areq + \
|
||||
crypto_skcipher_reqsize(crypto_skcipher_reqtfm(&ctx->req)))
|
||||
|
||||
#define GET_REQ_SIZE(ctx) \
|
||||
crypto_skcipher_reqsize(crypto_skcipher_reqtfm(&ctx->req))
|
||||
|
||||
#define GET_IV_SIZE(ctx) \
|
||||
crypto_skcipher_ivsize(crypto_skcipher_reqtfm(&ctx->req))
|
||||
|
||||
#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
|
||||
sizeof(struct scatterlist) - 1)
|
||||
|
||||
|
@ -97,15 +94,12 @@ static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
|
|||
|
||||
static void skcipher_async_cb(struct crypto_async_request *req, int err)
|
||||
{
|
||||
struct sock *sk = req->data;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
struct skcipher_ctx *ctx = ask->private;
|
||||
struct skcipher_async_req *sreq = GET_SREQ(req, ctx);
|
||||
struct skcipher_async_req *sreq = req->data;
|
||||
struct kiocb *iocb = sreq->iocb;
|
||||
|
||||
atomic_dec(&ctx->inflight);
|
||||
atomic_dec(sreq->inflight);
|
||||
skcipher_free_async_sgls(sreq);
|
||||
kfree(req);
|
||||
kzfree(sreq);
|
||||
iocb->ki_complete(iocb, err, err);
|
||||
}
|
||||
|
||||
|
@ -301,8 +295,11 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
|
|||
{
|
||||
struct sock *sk = sock->sk;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
struct sock *psk = ask->parent;
|
||||
struct alg_sock *pask = alg_sk(psk);
|
||||
struct skcipher_ctx *ctx = ask->private;
|
||||
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(&ctx->req);
|
||||
struct skcipher_tfm *skc = pask->private;
|
||||
struct crypto_skcipher *tfm = skc->skcipher;
|
||||
unsigned ivsize = crypto_skcipher_ivsize(tfm);
|
||||
struct skcipher_sg_list *sgl;
|
||||
struct af_alg_control con = {};
|
||||
|
@ -387,7 +384,8 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
|
|||
|
||||
sgl = list_entry(ctx->tsgl.prev, struct skcipher_sg_list, list);
|
||||
sg = sgl->sg;
|
||||
sg_unmark_end(sg + sgl->cur);
|
||||
if (sgl->cur)
|
||||
sg_unmark_end(sg + sgl->cur - 1);
|
||||
do {
|
||||
i = sgl->cur;
|
||||
plen = min_t(int, len, PAGE_SIZE);
|
||||
|
@ -503,37 +501,43 @@ static int skcipher_recvmsg_async(struct socket *sock, struct msghdr *msg,
|
|||
{
|
||||
struct sock *sk = sock->sk;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
struct sock *psk = ask->parent;
|
||||
struct alg_sock *pask = alg_sk(psk);
|
||||
struct skcipher_ctx *ctx = ask->private;
|
||||
struct skcipher_tfm *skc = pask->private;
|
||||
struct crypto_skcipher *tfm = skc->skcipher;
|
||||
struct skcipher_sg_list *sgl;
|
||||
struct scatterlist *sg;
|
||||
struct skcipher_async_req *sreq;
|
||||
struct skcipher_request *req;
|
||||
struct skcipher_async_rsgl *last_rsgl = NULL;
|
||||
unsigned int txbufs = 0, len = 0, tx_nents = skcipher_all_sg_nents(ctx);
|
||||
unsigned int reqlen = sizeof(struct skcipher_async_req) +
|
||||
GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
|
||||
unsigned int txbufs = 0, len = 0, tx_nents;
|
||||
unsigned int reqsize = crypto_skcipher_reqsize(tfm);
|
||||
unsigned int ivsize = crypto_skcipher_ivsize(tfm);
|
||||
int err = -ENOMEM;
|
||||
bool mark = false;
|
||||
char *iv;
|
||||
|
||||
sreq = kzalloc(sizeof(*sreq) + reqsize + ivsize, GFP_KERNEL);
|
||||
if (unlikely(!sreq))
|
||||
goto out;
|
||||
|
||||
req = &sreq->req;
|
||||
iv = (char *)(req + 1) + reqsize;
|
||||
sreq->iocb = msg->msg_iocb;
|
||||
INIT_LIST_HEAD(&sreq->list);
|
||||
sreq->inflight = &ctx->inflight;
|
||||
|
||||
lock_sock(sk);
|
||||
req = kmalloc(reqlen, GFP_KERNEL);
|
||||
if (unlikely(!req))
|
||||
goto unlock;
|
||||
|
||||
sreq = GET_SREQ(req, ctx);
|
||||
sreq->iocb = msg->msg_iocb;
|
||||
memset(&sreq->first_sgl, '\0', sizeof(struct skcipher_async_rsgl));
|
||||
INIT_LIST_HEAD(&sreq->list);
|
||||
tx_nents = skcipher_all_sg_nents(ctx);
|
||||
sreq->tsg = kcalloc(tx_nents, sizeof(*sg), GFP_KERNEL);
|
||||
if (unlikely(!sreq->tsg)) {
|
||||
kfree(req);
|
||||
if (unlikely(!sreq->tsg))
|
||||
goto unlock;
|
||||
}
|
||||
sg_init_table(sreq->tsg, tx_nents);
|
||||
memcpy(sreq->iv, ctx->iv, GET_IV_SIZE(ctx));
|
||||
skcipher_request_set_tfm(req, crypto_skcipher_reqtfm(&ctx->req));
|
||||
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
|
||||
skcipher_async_cb, sk);
|
||||
memcpy(iv, ctx->iv, ivsize);
|
||||
skcipher_request_set_tfm(req, tfm);
|
||||
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
|
||||
skcipher_async_cb, sreq);
|
||||
|
||||
while (iov_iter_count(&msg->msg_iter)) {
|
||||
struct skcipher_async_rsgl *rsgl;
|
||||
|
@ -609,20 +613,22 @@ static int skcipher_recvmsg_async(struct socket *sock, struct msghdr *msg,
|
|||
sg_mark_end(sreq->tsg + txbufs - 1);
|
||||
|
||||
skcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
|
||||
len, sreq->iv);
|
||||
len, iv);
|
||||
err = ctx->enc ? crypto_skcipher_encrypt(req) :
|
||||
crypto_skcipher_decrypt(req);
|
||||
if (err == -EINPROGRESS) {
|
||||
atomic_inc(&ctx->inflight);
|
||||
err = -EIOCBQUEUED;
|
||||
sreq = NULL;
|
||||
goto unlock;
|
||||
}
|
||||
free:
|
||||
skcipher_free_async_sgls(sreq);
|
||||
kfree(req);
|
||||
unlock:
|
||||
skcipher_wmem_wakeup(sk);
|
||||
release_sock(sk);
|
||||
kzfree(sreq);
|
||||
out:
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -631,9 +637,12 @@ static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
|
|||
{
|
||||
struct sock *sk = sock->sk;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
struct sock *psk = ask->parent;
|
||||
struct alg_sock *pask = alg_sk(psk);
|
||||
struct skcipher_ctx *ctx = ask->private;
|
||||
unsigned bs = crypto_skcipher_blocksize(crypto_skcipher_reqtfm(
|
||||
&ctx->req));
|
||||
struct skcipher_tfm *skc = pask->private;
|
||||
struct crypto_skcipher *tfm = skc->skcipher;
|
||||
unsigned bs = crypto_skcipher_blocksize(tfm);
|
||||
struct skcipher_sg_list *sgl;
|
||||
struct scatterlist *sg;
|
||||
int err = -EAGAIN;
|
||||
|
@ -642,13 +651,6 @@ static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
|
|||
|
||||
lock_sock(sk);
|
||||
while (msg_data_left(msg)) {
|
||||
sgl = list_first_entry(&ctx->tsgl,
|
||||
struct skcipher_sg_list, list);
|
||||
sg = sgl->sg;
|
||||
|
||||
while (!sg->length)
|
||||
sg++;
|
||||
|
||||
if (!ctx->used) {
|
||||
err = skcipher_wait_for_data(sk, flags);
|
||||
if (err)
|
||||
|
@ -669,6 +671,13 @@ static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
|
|||
if (!used)
|
||||
goto free;
|
||||
|
||||
sgl = list_first_entry(&ctx->tsgl,
|
||||
struct skcipher_sg_list, list);
|
||||
sg = sgl->sg;
|
||||
|
||||
while (!sg->length)
|
||||
sg++;
|
||||
|
||||
skcipher_request_set_crypt(&ctx->req, sg, ctx->rsgl.sg, used,
|
||||
ctx->iv);
|
||||
|
||||
|
@ -748,19 +757,139 @@ static struct proto_ops algif_skcipher_ops = {
|
|||
.poll = skcipher_poll,
|
||||
};
|
||||
|
||||
static int skcipher_check_key(struct socket *sock)
|
||||
{
|
||||
int err = 0;
|
||||
struct sock *psk;
|
||||
struct alg_sock *pask;
|
||||
struct skcipher_tfm *tfm;
|
||||
struct sock *sk = sock->sk;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
|
||||
lock_sock(sk);
|
||||
if (ask->refcnt)
|
||||
goto unlock_child;
|
||||
|
||||
psk = ask->parent;
|
||||
pask = alg_sk(ask->parent);
|
||||
tfm = pask->private;
|
||||
|
||||
err = -ENOKEY;
|
||||
lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
|
||||
if (!tfm->has_key)
|
||||
goto unlock;
|
||||
|
||||
if (!pask->refcnt++)
|
||||
sock_hold(psk);
|
||||
|
||||
ask->refcnt = 1;
|
||||
sock_put(psk);
|
||||
|
||||
err = 0;
|
||||
|
||||
unlock:
|
||||
release_sock(psk);
|
||||
unlock_child:
|
||||
release_sock(sk);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int skcipher_sendmsg_nokey(struct socket *sock, struct msghdr *msg,
|
||||
size_t size)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = skcipher_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return skcipher_sendmsg(sock, msg, size);
|
||||
}
|
||||
|
||||
static ssize_t skcipher_sendpage_nokey(struct socket *sock, struct page *page,
|
||||
int offset, size_t size, int flags)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = skcipher_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return skcipher_sendpage(sock, page, offset, size, flags);
|
||||
}
|
||||
|
||||
static int skcipher_recvmsg_nokey(struct socket *sock, struct msghdr *msg,
|
||||
size_t ignored, int flags)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = skcipher_check_key(sock);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return skcipher_recvmsg(sock, msg, ignored, flags);
|
||||
}
|
||||
|
||||
static struct proto_ops algif_skcipher_ops_nokey = {
|
||||
.family = PF_ALG,
|
||||
|
||||
.connect = sock_no_connect,
|
||||
.socketpair = sock_no_socketpair,
|
||||
.getname = sock_no_getname,
|
||||
.ioctl = sock_no_ioctl,
|
||||
.listen = sock_no_listen,
|
||||
.shutdown = sock_no_shutdown,
|
||||
.getsockopt = sock_no_getsockopt,
|
||||
.mmap = sock_no_mmap,
|
||||
.bind = sock_no_bind,
|
||||
.accept = sock_no_accept,
|
||||
.setsockopt = sock_no_setsockopt,
|
||||
|
||||
.release = af_alg_release,
|
||||
.sendmsg = skcipher_sendmsg_nokey,
|
||||
.sendpage = skcipher_sendpage_nokey,
|
||||
.recvmsg = skcipher_recvmsg_nokey,
|
||||
.poll = skcipher_poll,
|
||||
};
|
||||
|
||||
static void *skcipher_bind(const char *name, u32 type, u32 mask)
|
||||
{
|
||||
return crypto_alloc_skcipher(name, type, mask);
|
||||
struct skcipher_tfm *tfm;
|
||||
struct crypto_skcipher *skcipher;
|
||||
|
||||
tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
|
||||
if (!tfm)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
skcipher = crypto_alloc_skcipher(name, type, mask);
|
||||
if (IS_ERR(skcipher)) {
|
||||
kfree(tfm);
|
||||
return ERR_CAST(skcipher);
|
||||
}
|
||||
|
||||
tfm->skcipher = skcipher;
|
||||
|
||||
return tfm;
|
||||
}
|
||||
|
||||
static void skcipher_release(void *private)
|
||||
{
|
||||
crypto_free_skcipher(private);
|
||||
struct skcipher_tfm *tfm = private;
|
||||
|
||||
crypto_free_skcipher(tfm->skcipher);
|
||||
kfree(tfm);
|
||||
}
|
||||
|
||||
static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen)
|
||||
{
|
||||
return crypto_skcipher_setkey(private, key, keylen);
|
||||
struct skcipher_tfm *tfm = private;
|
||||
int err;
|
||||
|
||||
err = crypto_skcipher_setkey(tfm->skcipher, key, keylen);
|
||||
tfm->has_key = !err;
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static void skcipher_wait(struct sock *sk)
|
||||
|
@ -788,24 +917,26 @@ static void skcipher_sock_destruct(struct sock *sk)
|
|||
af_alg_release_parent(sk);
|
||||
}
|
||||
|
||||
static int skcipher_accept_parent(void *private, struct sock *sk)
|
||||
static int skcipher_accept_parent_nokey(void *private, struct sock *sk)
|
||||
{
|
||||
struct skcipher_ctx *ctx;
|
||||
struct alg_sock *ask = alg_sk(sk);
|
||||
unsigned int len = sizeof(*ctx) + crypto_skcipher_reqsize(private);
|
||||
struct skcipher_tfm *tfm = private;
|
||||
struct crypto_skcipher *skcipher = tfm->skcipher;
|
||||
unsigned int len = sizeof(*ctx) + crypto_skcipher_reqsize(skcipher);
|
||||
|
||||
ctx = sock_kmalloc(sk, len, GFP_KERNEL);
|
||||
if (!ctx)
|
||||
return -ENOMEM;
|
||||
|
||||
ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(private),
|
||||
ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(skcipher),
|
||||
GFP_KERNEL);
|
||||
if (!ctx->iv) {
|
||||
sock_kfree_s(sk, ctx, len);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
memset(ctx->iv, 0, crypto_skcipher_ivsize(private));
|
||||
memset(ctx->iv, 0, crypto_skcipher_ivsize(skcipher));
|
||||
|
||||
INIT_LIST_HEAD(&ctx->tsgl);
|
||||
ctx->len = len;
|
||||
|
@ -818,8 +949,9 @@ static int skcipher_accept_parent(void *private, struct sock *sk)
|
|||
|
||||
ask->private = ctx;
|
||||
|
||||
skcipher_request_set_tfm(&ctx->req, private);
|
||||
skcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
|
||||
skcipher_request_set_tfm(&ctx->req, skcipher);
|
||||
skcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_SLEEP |
|
||||
CRYPTO_TFM_REQ_MAY_BACKLOG,
|
||||
af_alg_complete, &ctx->completion);
|
||||
|
||||
sk->sk_destruct = skcipher_sock_destruct;
|
||||
|
@ -827,12 +959,24 @@ static int skcipher_accept_parent(void *private, struct sock *sk)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int skcipher_accept_parent(void *private, struct sock *sk)
|
||||
{
|
||||
struct skcipher_tfm *tfm = private;
|
||||
|
||||
if (!tfm->has_key && crypto_skcipher_has_setkey(tfm->skcipher))
|
||||
return -ENOKEY;
|
||||
|
||||
return skcipher_accept_parent_nokey(private, sk);
|
||||
}
|
||||
|
||||
static const struct af_alg_type algif_type_skcipher = {
|
||||
.bind = skcipher_bind,
|
||||
.release = skcipher_release,
|
||||
.setkey = skcipher_setkey,
|
||||
.accept = skcipher_accept_parent,
|
||||
.accept_nokey = skcipher_accept_parent_nokey,
|
||||
.ops = &algif_skcipher_ops,
|
||||
.ops_nokey = &algif_skcipher_ops_nokey,
|
||||
.name = "skcipher",
|
||||
.owner = THIS_MODULE
|
||||
};
|
||||
|
|
|
@@ -172,4 +172,3 @@ MODULE_DESCRIPTION("CRC32c (Castagnoli) calculations wrapper for lib/crc32c");
MODULE_LICENSE("GPL");
MODULE_ALIAS_CRYPTO("crc32c");
MODULE_ALIAS_CRYPTO("crc32c-generic");
MODULE_SOFTDEP("pre: crc32c");

@@ -499,6 +499,7 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
		if (link->dump == NULL)
			return -EINVAL;

		down_read(&crypto_alg_sem);
		list_for_each_entry(alg, &crypto_alg_list, cra_list)
			dump_alloc += CRYPTO_REPORT_MAXSIZE;

@@ -508,8 +509,11 @@ static int crypto_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
				.done = link->done,
				.min_dump_alloc = dump_alloc,
			};
			return netlink_dump_start(crypto_nlsk, skb, nlh, &c);
			err = netlink_dump_start(crypto_nlsk, skb, nlh, &c);
		}
		up_read(&crypto_alg_sem);

		return err;
	}

	err = nlmsg_parse(nlh, crypto_msg_min[type], attrs, CRYPTOCFGA_MAX,

@@ -354,9 +354,10 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
	crt->final = shash_async_final;
	crt->finup = shash_async_finup;
	crt->digest = shash_async_digest;
	crt->setkey = shash_async_setkey;

	crt->has_setkey = alg->setkey != shash_no_setkey;

	if (alg->setkey)
		crt->setkey = shash_async_setkey;
	if (alg->export)
		crt->export = shash_async_export;
	if (alg->import)

@@ -118,6 +118,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
	skcipher->decrypt = skcipher_decrypt_blkcipher;

	skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
	skcipher->has_setkey = calg->cra_blkcipher.max_keysize;

	return 0;
}

@@ -210,6 +211,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
	skcipher->ivsize = crypto_ablkcipher_ivsize(ablkcipher);
	skcipher->reqsize = crypto_ablkcipher_reqsize(ablkcipher) +
			    sizeof(struct ablkcipher_request);
	skcipher->has_setkey = calg->cra_ablkcipher.max_keysize;

	return 0;
}

@@ -264,6 +264,26 @@ static const struct pci_device_id ahci_pci_tbl[] = {
	{ PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
	{ PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH RAID */
	{ PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b0), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b1), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b2), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b3), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b4), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b5), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b6), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19b7), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19bE), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19bF), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c0), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c1), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c2), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c3), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c4), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c5), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c6), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19c7), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19cE), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x19cF), board_ahci }, /* DNV AHCI */
	{ PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
	{ PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT AHCI */
	{ PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */

@@ -495,8 +495,8 @@ void ahci_save_initial_config(struct device *dev, struct ahci_host_priv *hpriv)
		}
	}

	/* fabricate port_map from cap.nr_ports */
	if (!port_map) {
	/* fabricate port_map from cap.nr_ports for < AHCI 1.3 */
	if (!port_map && vers < 0x10300) {
		port_map = (1 << ahci_nr_ports(cap)) - 1;
		dev_warn(dev, "forcing PORTS_IMPL to 0x%x\n", port_map);

@@ -513,10 +513,15 @@ static int platform_drv_probe(struct device *_dev)
		return ret;

	ret = dev_pm_domain_attach(_dev, true);
	if (ret != -EPROBE_DEFER && drv->probe) {
		ret = drv->probe(dev);
		if (ret)
			dev_pm_domain_detach(_dev, true);
	if (ret != -EPROBE_DEFER) {
		if (drv->probe) {
			ret = drv->probe(dev);
			if (ret)
				dev_pm_domain_detach(_dev, true);
		} else {
			/* don't fail if just dev_pm_domain_attach failed */
			ret = 0;
		}
	}

	if (drv->prevent_deferred_probe && ret == -EPROBE_DEFER) {

@@ -76,7 +76,7 @@ static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *zstrm)
 */
static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
{
	struct zcomp_strm *zstrm = kmalloc(sizeof(*zstrm), GFP_KERNEL);
	struct zcomp_strm *zstrm = kmalloc(sizeof(*zstrm), GFP_NOIO);
	if (!zstrm)
		return NULL;

@@ -85,7 +85,7 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
	 * allocate 2 pages. 1 for compressed data, plus 1 extra for the
	 * case when compressed size is larger than the original one
	 */
	zstrm->buffer = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
	zstrm->buffer = (void *)__get_free_pages(GFP_NOIO | __GFP_ZERO, 1);
	if (!zstrm->private || !zstrm->buffer) {
		zcomp_strm_free(comp, zstrm);
		zstrm = NULL;

@ -10,17 +10,36 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/lz4.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/mm.h>
|
||||
|
||||
#include "zcomp_lz4.h"
|
||||
|
||||
static void *zcomp_lz4_create(void)
|
||||
{
|
||||
return kzalloc(LZ4_MEM_COMPRESS, GFP_KERNEL);
|
||||
void *ret;
|
||||
|
||||
/*
|
||||
* This function can be called in swapout/fs write path
|
||||
* so we can't use GFP_FS|IO. And it assumes we already
|
||||
* have at least one stream in zram initialization so we
|
||||
* don't do best effort to allocate more stream in here.
|
||||
* A default stream will work well without further multiple
|
||||
* streams. That's why we use NORETRY | NOWARN.
|
||||
*/
|
||||
ret = kzalloc(LZ4_MEM_COMPRESS, GFP_NOIO | __GFP_NORETRY |
|
||||
__GFP_NOWARN);
|
||||
if (!ret)
|
||||
ret = __vmalloc(LZ4_MEM_COMPRESS,
|
||||
GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN |
|
||||
__GFP_ZERO | __GFP_HIGHMEM,
|
||||
PAGE_KERNEL);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void zcomp_lz4_destroy(void *private)
|
||||
{
|
||||
kfree(private);
|
||||
kvfree(private);
|
||||
}
|
||||
|
||||
static int zcomp_lz4_compress(const unsigned char *src, unsigned char *dst,
|
||||
|
|
|
@ -10,17 +10,36 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/lzo.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/mm.h>
|
||||
|
||||
#include "zcomp_lzo.h"
|
||||
|
||||
static void *lzo_create(void)
|
||||
{
|
||||
return kzalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL);
|
||||
void *ret;
|
||||
|
||||
/*
|
||||
* This function can be called in swapout/fs write path
|
||||
* so we can't use GFP_FS|IO. And it assumes we already
|
||||
* have at least one stream in zram initialization so we
|
||||
* don't do best effort to allocate more stream in here.
|
||||
* A default stream will work well without further multiple
|
||||
* streams. That's why we use NORETRY | NOWARN.
|
||||
*/
|
||||
ret = kzalloc(LZO1X_MEM_COMPRESS, GFP_NOIO | __GFP_NORETRY |
|
||||
__GFP_NOWARN);
|
||||
if (!ret)
|
||||
ret = __vmalloc(LZO1X_MEM_COMPRESS,
|
||||
GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN |
|
||||
__GFP_ZERO | __GFP_HIGHMEM,
|
||||
PAGE_KERNEL);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void lzo_destroy(void *private)
|
||||
{
|
||||
kfree(private);
|
||||
kvfree(private);
|
||||
}
|
||||
|
||||
static int lzo_compress(const unsigned char *src, unsigned char *dst,
|
||||
|
|
|
@@ -1326,7 +1326,6 @@ static int zram_remove(struct zram *zram)

	pr_info("Removed device: %s\n", zram->disk->disk_name);

	idr_remove(&zram_index_idr, zram->disk->first_minor);
	blk_cleanup_queue(zram->disk->queue);
	del_gendisk(zram->disk);
	put_disk(zram->disk);

@@ -1368,10 +1367,12 @@ static ssize_t hot_remove_store(struct class *class,
	mutex_lock(&zram_index_mutex);

	zram = idr_find(&zram_index_idr, dev_id);
	if (zram)
	if (zram) {
		ret = zram_remove(zram);
	else
		idr_remove(&zram_index_idr, dev_id);
	} else {
		ret = -ENODEV;
	}

	mutex_unlock(&zram_index_mutex);
	return ret ? ret : count;

@ -783,7 +783,7 @@ static void atmel_sha_finish_req(struct ahash_request *req, int err)
|
|||
dd->flags &= ~(SHA_FLAGS_BUSY | SHA_FLAGS_FINAL | SHA_FLAGS_CPU |
|
||||
SHA_FLAGS_DMA_READY | SHA_FLAGS_OUTPUT_READY);
|
||||
|
||||
clk_disable_unprepare(dd->iclk);
|
||||
clk_disable(dd->iclk);
|
||||
|
||||
if (req->base.complete)
|
||||
req->base.complete(&req->base, err);
|
||||
|
@ -796,7 +796,7 @@ static int atmel_sha_hw_init(struct atmel_sha_dev *dd)
|
|||
{
|
||||
int err;
|
||||
|
||||
err = clk_prepare_enable(dd->iclk);
|
||||
err = clk_enable(dd->iclk);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
|
@ -823,7 +823,7 @@ static void atmel_sha_hw_version_init(struct atmel_sha_dev *dd)
|
|||
dev_info(dd->dev,
|
||||
"version: 0x%x\n", dd->hw_version);
|
||||
|
||||
clk_disable_unprepare(dd->iclk);
|
||||
clk_disable(dd->iclk);
|
||||
}
|
||||
|
||||
static int atmel_sha_handle_queue(struct atmel_sha_dev *dd,
|
||||
|
@ -1411,6 +1411,10 @@ static int atmel_sha_probe(struct platform_device *pdev)
|
|||
goto res_err;
|
||||
}
|
||||
|
||||
err = clk_prepare(sha_dd->iclk);
|
||||
if (err)
|
||||
goto res_err;
|
||||
|
||||
atmel_sha_hw_version_init(sha_dd);
|
||||
|
||||
atmel_sha_get_cap(sha_dd);
|
||||
|
@ -1422,12 +1426,12 @@ static int atmel_sha_probe(struct platform_device *pdev)
|
|||
if (IS_ERR(pdata)) {
|
||||
dev_err(&pdev->dev, "platform data not available\n");
|
||||
err = PTR_ERR(pdata);
|
||||
goto res_err;
|
||||
goto iclk_unprepare;
|
||||
}
|
||||
}
|
||||
if (!pdata->dma_slave) {
|
||||
err = -ENXIO;
|
||||
goto res_err;
|
||||
goto iclk_unprepare;
|
||||
}
|
||||
err = atmel_sha_dma_init(sha_dd, pdata);
|
||||
if (err)
|
||||
|
@ -1458,6 +1462,8 @@ err_algs:
|
|||
if (sha_dd->caps.has_dma)
|
||||
atmel_sha_dma_cleanup(sha_dd);
|
||||
err_sha_dma:
|
||||
iclk_unprepare:
|
||||
clk_unprepare(sha_dd->iclk);
|
||||
res_err:
|
||||
tasklet_kill(&sha_dd->done_task);
|
||||
sha_dd_err:
|
||||
|
@ -1484,12 +1490,7 @@ static int atmel_sha_remove(struct platform_device *pdev)
|
|||
if (sha_dd->caps.has_dma)
|
||||
atmel_sha_dma_cleanup(sha_dd);
|
||||
|
||||
iounmap(sha_dd->io_base);
|
||||
|
||||
clk_put(sha_dd->iclk);
|
||||
|
||||
if (sha_dd->irq >= 0)
|
||||
free_irq(sha_dd->irq, sha_dd);
|
||||
clk_unprepare(sha_dd->iclk);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@@ -534,8 +534,8 @@ static int caam_probe(struct platform_device *pdev)
	 * long pointers in master configuration register
	 */
	clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK, MCFGR_AWCACHE_CACH |
		      MCFGR_WDENABLE | (sizeof(dma_addr_t) == sizeof(u64) ?
				      MCFGR_LONG_PTR : 0));
		      MCFGR_AWCACHE_BUFF | MCFGR_WDENABLE |
		      (sizeof(dma_addr_t) == sizeof(u64) ? MCFGR_LONG_PTR : 0));

	/*
	 * Read the Compile Time paramters and SCFGR to determine

@@ -306,7 +306,7 @@ static int mv_cesa_dev_dma_init(struct mv_cesa_dev *cesa)
		return -ENOMEM;

	dma->padding_pool = dmam_pool_create("cesa_padding", dev, 72, 1, 0);
	if (!dma->cache_pool)
	if (!dma->padding_pool)
		return -ENOMEM;

	cesa->dma = dma;

@@ -39,6 +39,7 @@ static struct sun4i_ss_alg_template ss_algs[] = {
	.import = sun4i_hash_import_md5,
	.halg = {
		.digestsize = MD5_DIGEST_SIZE,
		.statesize = sizeof(struct md5_state),
		.base = {
			.cra_name = "md5",
			.cra_driver_name = "md5-sun4i-ss",

@@ -66,6 +67,7 @@ static struct sun4i_ss_alg_template ss_algs[] = {
	.import = sun4i_hash_import_sha1,
	.halg = {
		.digestsize = SHA1_DIGEST_SIZE,
		.statesize = sizeof(struct sha1_state),
		.base = {
			.cra_name = "sha1",
			.cra_driver_name = "sha1-sun4i-ss",

@@ -357,8 +357,19 @@ static void mt_feature_mapping(struct hid_device *hdev,
			break;
		}

		td->inputmode = field->report->id;
		td->inputmode_index = usage->usage_index;
		if (td->inputmode < 0) {
			td->inputmode = field->report->id;
			td->inputmode_index = usage->usage_index;
		} else {
			/*
			 * Some elan panels wrongly declare 2 input mode
			 * features, and silently ignore when we set the
			 * value in the second field. Skip the second feature
			 * and hope for the best.
			 */
			dev_info(&hdev->dev,
				 "Ignoring the extra HID_DG_INPUTMODE\n");
		}

		break;
	case HID_DG_CONTACTMAX:

@@ -477,8 +477,6 @@ static void hid_ctrl(struct urb *urb)
	struct usbhid_device *usbhid = hid->driver_data;
	int unplug = 0, status = urb->status;

	spin_lock(&usbhid->lock);

	switch (status) {
	case 0: /* success */
		if (usbhid->ctrl[usbhid->ctrltail].dir == USB_DIR_IN)

@@ -498,6 +496,8 @@ static void hid_ctrl(struct urb *urb)
		hid_warn(urb->dev, "ctrl urb status %d received\n", status);
	}

	spin_lock(&usbhid->lock);

	if (unplug) {
		usbhid->ctrltail = usbhid->ctrlhead;
	} else {

@@ -313,6 +313,10 @@ int of_hwspin_lock_get_id(struct device_node *np, int index)
		hwlock = radix_tree_deref_slot(slot);
		if (unlikely(!hwlock))
			continue;
		if (radix_tree_is_indirect_ptr(hwlock)) {
			slot = radix_tree_iter_retry(&iter);
			continue;
		}

		if (hwlock->bank->dev->of_node == args.np) {
			ret = 0;

@@ -173,6 +173,7 @@ config STK8312
config STK8BA50
	tristate "Sensortek STK8BA50 3-Axis Accelerometer Driver"
	depends on I2C
	depends on IIO_TRIGGER
	help
	  Say yes here to get support for the Sensortek STK8BA50 3-axis
	  accelerometer.

@@ -372,6 +372,7 @@ config TWL6030_GPADC
config VF610_ADC
	tristate "Freescale vf610 ADC driver"
	depends on OF
	depends on HAS_IOMEM
	select IIO_BUFFER
	select IIO_TRIGGERED_BUFFER
	help

@@ -289,7 +289,7 @@ static int tiadc_iio_buffered_hardware_setup(struct iio_dev *indio_dev,
		goto error_kfifo_free;

	indio_dev->setup_ops = setup_ops;
	indio_dev->modes |= INDIO_BUFFER_HARDWARE;
	indio_dev->modes |= INDIO_BUFFER_SOFTWARE;

	return 0;

@@ -300,6 +300,7 @@ static int mcp4725_probe(struct i2c_client *client,
	data->client = client;

	indio_dev->dev.parent = &client->dev;
	indio_dev->name = id->name;
	indio_dev->info = &mcp4725_info;
	indio_dev->channels = &mcp4725_channel;
	indio_dev->num_channels = 1;

@@ -43,7 +43,7 @@ int adis_update_scan_mode(struct iio_dev *indio_dev,
		return -ENOMEM;

	rx = adis->buffer;
	tx = rx + indio_dev->scan_bytes;
	tx = rx + scan_count;

	spi_message_init(&adis->msg);

@@ -351,6 +351,8 @@ EXPORT_SYMBOL_GPL(iio_channel_get);

void iio_channel_release(struct iio_channel *channel)
{
	if (!channel)
		return;
	iio_device_put(channel->indio_dev);
	kfree(channel);
}

@@ -54,7 +54,9 @@ static const struct iio_chan_spec acpi_als_channels[] = {
			.realbits = 32,
			.storagebits = 32,
		},
		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
		/* _RAW is here for backward ABI compatibility */
		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
				      BIT(IIO_CHAN_INFO_PROCESSED),
	},
};

@@ -152,7 +154,7 @@ static int acpi_als_read_raw(struct iio_dev *indio_dev,
	s32 temp_val;
	int ret;

	if (mask != IIO_CHAN_INFO_RAW)
	if ((mask != IIO_CHAN_INFO_PROCESSED) && (mask != IIO_CHAN_INFO_RAW))
		return -EINVAL;

	/* we support only illumination (_ALI) so far. */

@@ -180,7 +180,7 @@ static const struct ltr501_samp_table ltr501_ps_samp_table[] = {
	{500000, 2000000}
};

static unsigned int ltr501_match_samp_freq(const struct ltr501_samp_table *tab,
static int ltr501_match_samp_freq(const struct ltr501_samp_table *tab,
					   int len, int val, int val2)
{
	int i, freq;
@@ -117,7 +117,7 @@ static int mpl115_read_raw(struct iio_dev *indio_dev,
		*val = ret >> 6;
		return IIO_VAL_INT;
	case IIO_CHAN_INFO_OFFSET:
		*val = 605;
		*val = -605;
		*val2 = 750000;
		return IIO_VAL_INT_PLUS_MICRO;
	case IIO_CHAN_INFO_SCALE:
@@ -756,7 +756,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
	int uninitialized_var(index);
	int uninitialized_var(inlen);
	int cqe_size;
	int irqn;
	unsigned int irqn;
	int eqn;
	int err;
@@ -1222,7 +1222,7 @@ static int elantech_set_input_params(struct psmouse *psmouse)
			input_set_abs_params(dev, ABS_TOOL_WIDTH, ETP_WMIN_V2,
					     ETP_WMAX_V2, 0, 0);
		}
		input_mt_init_slots(dev, 2, 0);
		input_mt_init_slots(dev, 2, INPUT_MT_SEMI_MT);
		input_set_abs_params(dev, ABS_MT_POSITION_X, x_min, x_max, 0, 0);
		input_set_abs_params(dev, ABS_MT_POSITION_Y, y_min, y_max, 0, 0);
		break;
@@ -458,8 +458,6 @@ int vmmouse_init(struct psmouse *psmouse)
	priv->abs_dev = abs_dev;
	psmouse->private = priv;

	input_set_capability(rel_dev, EV_REL, REL_WHEEL);

	/* Set up and register absolute device */
	snprintf(priv->phys, sizeof(priv->phys), "%s/input1",
		 psmouse->ps2dev.serio->phys);

@@ -475,10 +473,6 @@ int vmmouse_init(struct psmouse *psmouse)
	abs_dev->id.version = psmouse->model;
	abs_dev->dev.parent = &psmouse->ps2dev.serio->dev;

	error = input_register_device(priv->abs_dev);
	if (error)
		goto init_fail;

	/* Set absolute device capabilities */
	input_set_capability(abs_dev, EV_KEY, BTN_LEFT);
	input_set_capability(abs_dev, EV_KEY, BTN_RIGHT);

@@ -488,6 +482,13 @@ int vmmouse_init(struct psmouse *psmouse)
	input_set_abs_params(abs_dev, ABS_X, 0, VMMOUSE_MAX_X, 0, 0);
	input_set_abs_params(abs_dev, ABS_Y, 0, VMMOUSE_MAX_Y, 0, 0);

	error = input_register_device(priv->abs_dev);
	if (error)
		goto init_fail;

	/* Add wheel capability to the relative device */
	input_set_capability(rel_dev, EV_REL, REL_WHEEL);

	psmouse->protocol_handler = vmmouse_process_byte;
	psmouse->disconnect = vmmouse_disconnect;
	psmouse->reconnect = vmmouse_reconnect;
@@ -257,6 +257,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
		},
	},
	{
		/* Fujitsu Lifebook U745 */
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U745"),
		},
	},
	{
		/* Fujitsu T70H */
		.matches = {
@@ -1905,7 +1905,7 @@ static void do_attach(struct iommu_dev_data *dev_data,
	/* Update device table */
	set_dte_entry(dev_data->devid, domain, ats);
	if (alias != dev_data->devid)
		set_dte_entry(dev_data->devid, domain, ats);
		set_dte_entry(alias, domain, ats);

	device_flush_dte(dev_data);
}
@@ -1347,7 +1347,7 @@ void dmar_disable_qi(struct intel_iommu *iommu)

	raw_spin_lock_irqsave(&iommu->register_lock, flags);

	sts = dmar_readq(iommu->reg + DMAR_GSTS_REG);
	sts = readl(iommu->reg + DMAR_GSTS_REG);
	if (!(sts & DMA_GSTS_QIES))
		goto end;
@@ -1489,7 +1489,7 @@ static void iommu_disable_dev_iotlb(struct device_domain_info *info)
{
	struct pci_dev *pdev;

	if (dev_is_pci(info->dev))
	if (!dev_is_pci(info->dev))
		return;

	pdev = to_pci_dev(info->dev);
@@ -249,12 +249,30 @@ static void intel_flush_pasid_dev(struct intel_svm *svm, struct intel_svm_dev *s
static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
	struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
	struct intel_svm_dev *sdev;

	/* This might end up being called from exit_mmap(), *before* the page
	 * tables are cleared. And __mmu_notifier_release() will delete us from
	 * the list of notifiers so that our invalidate_range() callback doesn't
	 * get called when the page tables are cleared. So we need to protect
	 * against hardware accessing those page tables.
	 *
	 * We do it by clearing the entry in the PASID table and then flushing
	 * the IOTLB and the PASID table caches. This might upset hardware;
	 * perhaps we'll want to point the PASID to a dummy PGD (like the zero
	 * page) so that we end up taking a fault that the hardware really
	 * *has* to handle gracefully without affecting other processes.
	 */
	svm->iommu->pasid_table[svm->pasid].val = 0;
	wmb();

	rcu_read_lock();
	list_for_each_entry_rcu(sdev, &svm->devs, list) {
		intel_flush_pasid_dev(svm, sdev, svm->pasid);
		intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
	}
	rcu_read_unlock();

	/* There's no need to do any flush because we can't get here if there
	 * are any devices left anyway. */
	WARN_ON(!list_empty(&svm->devs));
}

static const struct mmu_notifier_ops intel_mmuops = {

@@ -379,7 +397,6 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
				goto out;
			}
			iommu->pasid_table[svm->pasid].val = (u64)__pa(mm->pgd) | 1;
			mm = NULL;
		} else
			iommu->pasid_table[svm->pasid].val = (u64)__pa(init_mm.pgd) | 1 | (1ULL << 11);
		wmb();

@@ -442,11 +459,11 @@ int intel_svm_unbind_mm(struct device *dev, int pasid)
			kfree_rcu(sdev, rcu);

			if (list_empty(&svm->devs)) {
				mmu_notifier_unregister(&svm->notifier, svm->mm);

				idr_remove(&svm->iommu->pasid_idr, svm->pasid);
				if (svm->mm)
					mmput(svm->mm);
				mmu_notifier_unregister(&svm->notifier, svm->mm);

				/* We mandate that no page faults may be outstanding
				 * for the PASID when intel_svm_unbind_mm() is called.
				 * If that is not obeyed, subtle errors will happen.

@@ -507,6 +524,10 @@ static irqreturn_t prq_event_thread(int irq, void *d)
	struct intel_svm *svm = NULL;
	int head, tail, handled = 0;

	/* Clear PPR bit before reading head/tail registers, to
	 * ensure that we get a new interrupt if needed. */
	writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);

	tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
	head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
	while (head != tail) {

@@ -551,6 +572,9 @@ static irqreturn_t prq_event_thread(int irq, void *d)
		 * any faults on kernel addresses. */
		if (!svm->mm)
			goto bad_req;
		/* If the mm is already defunct, don't handle faults. */
		if (!atomic_inc_not_zero(&svm->mm->mm_users))
			goto bad_req;
		down_read(&svm->mm->mmap_sem);
		vma = find_extend_vma(svm->mm, address);
		if (!vma || address < vma->vm_start)

@@ -567,6 +591,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
		result = QI_RESP_SUCCESS;
	invalid:
		up_read(&svm->mm->mmap_sem);
		mmput(svm->mm);
	bad_req:
		/* Accounting for major/minor faults? */
		rcu_read_lock();
@@ -629,7 +629,7 @@ static void iommu_disable_irq_remapping(struct intel_iommu *iommu)

	raw_spin_lock_irqsave(&iommu->register_lock, flags);

	sts = dmar_readq(iommu->reg + DMAR_GSTS_REG);
	sts = readl(iommu->reg + DMAR_GSTS_REG);
	if (!(sts & DMA_GSTS_IRES))
		goto end;
@@ -628,7 +628,12 @@ static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
	table_size = 1UL << data->pg_shift;

	start = ptep;
	end = (void *)ptep + table_size;

	/* Only leaf entries at the last level */
	if (lvl == ARM_LPAE_MAX_LEVELS - 1)
		end = ptep;
	else
		end = (void *)ptep + table_size;

	/* Only leaf entries at the last level */
	if (lvl == ARM_LPAE_MAX_LEVELS - 1)
@@ -2017,28 +2017,32 @@ int md_integrity_register(struct mddev *mddev)
}
EXPORT_SYMBOL(md_integrity_register);

/* Disable data integrity if non-capable/non-matching disk is being added */
void md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev)
/*
 * Attempt to add an rdev, but only if it is consistent with the current
 * integrity profile
 */
int md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev)
{
	struct blk_integrity *bi_rdev;
	struct blk_integrity *bi_mddev;
	char name[BDEVNAME_SIZE];

	if (!mddev->gendisk)
		return;
		return 0;

	bi_rdev = bdev_get_integrity(rdev->bdev);
	bi_mddev = blk_get_integrity(mddev->gendisk);

	if (!bi_mddev) /* nothing to do */
		return;
	if (rdev->raid_disk < 0) /* skip spares */
		return;
	if (bi_rdev && blk_integrity_compare(mddev->gendisk,
					     rdev->bdev->bd_disk) >= 0)
		return;
	WARN_ON_ONCE(!mddev->suspended);
	printk(KERN_NOTICE "disabling data integrity on %s\n", mdname(mddev));
	blk_integrity_unregister(mddev->gendisk);
		return 0;

	if (blk_integrity_compare(mddev->gendisk, rdev->bdev->bd_disk) != 0) {
		printk(KERN_NOTICE "%s: incompatible integrity profile for %s\n",
		       mdname(mddev), bdevname(rdev->bdev, name));
		return -ENXIO;
	}

	return 0;
}
EXPORT_SYMBOL(md_integrity_add_rdev);
@@ -657,7 +657,7 @@ extern void md_wait_for_blocked_rdev(struct md_rdev *rdev, struct mddev *mddev);
extern void md_set_array_sectors(struct mddev *mddev, sector_t array_sectors);
extern int md_check_no_bitmap(struct mddev *mddev);
extern int md_integrity_register(struct mddev *mddev);
extern void md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev);
extern int md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev);
extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);

extern void mddev_init(struct mddev *mddev);
@@ -257,6 +257,9 @@ static int multipath_add_disk(struct mddev *mddev, struct md_rdev *rdev)
			disk_stack_limits(mddev->gendisk, rdev->bdev,
					  rdev->data_offset << 9);

			err = md_integrity_add_rdev(rdev, mddev);
			if (err)
				break;
			spin_lock_irq(&conf->device_lock);
			mddev->degraded--;
			rdev->raid_disk = path;

@@ -264,9 +267,6 @@ static int multipath_add_disk(struct mddev *mddev, struct md_rdev *rdev)
			spin_unlock_irq(&conf->device_lock);
			rcu_assign_pointer(p->rdev, rdev);
			err = 0;
			mddev_suspend(mddev);
			md_integrity_add_rdev(rdev, mddev);
			mddev_resume(mddev);
			break;
		}
@@ -1589,6 +1589,9 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
	if (mddev->recovery_disabled == conf->recovery_disabled)
		return -EBUSY;

	if (md_integrity_add_rdev(rdev, mddev))
		return -ENXIO;

	if (rdev->raid_disk >= 0)
		first = last = rdev->raid_disk;

@@ -1632,9 +1635,6 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
			break;
		}
	}
	mddev_suspend(mddev);
	md_integrity_add_rdev(rdev, mddev);
	mddev_resume(mddev);
	if (mddev->queue && blk_queue_discard(bdev_get_queue(rdev->bdev)))
		queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
	print_conf(conf);
@@ -1698,6 +1698,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
	if (rdev->saved_raid_disk < 0 && !_enough(conf, 1, -1))
		return -EINVAL;

	if (md_integrity_add_rdev(rdev, mddev))
		return -ENXIO;

	if (rdev->raid_disk >= 0)
		first = last = rdev->raid_disk;

@@ -1739,9 +1742,6 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
		rcu_assign_pointer(p->rdev, rdev);
		break;
	}
	mddev_suspend(mddev);
	md_integrity_add_rdev(rdev, mddev);
	mddev_resume(mddev);
	if (mddev->queue && blk_queue_discard(bdev_get_queue(rdev->bdev)))
		queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
@@ -478,7 +478,6 @@ static const struct i2c_device_id ir_kbd_id[] = {
	{ "ir_rx_z8f0811_hdpvr", 0 },
	{ }
};
MODULE_DEVICE_TABLE(i2c, ir_kbd_id);

static struct i2c_driver ir_kbd_driver = {
	.driver = {