Merge tag v4.4.55 into branch 'msm-4.4'
refs/heads/tmp-28ec98b:
Linux 4.4.55
ext4: don't BUG when truncating encrypted inodes on the orphan list
dm: flush queued bios when process blocks to avoid deadlock
nfit, libnvdimm: fix interleave set cookie calculation
s390/kdump: Use "LINUX" ELF note name instead of "CORE"
KVM: s390: Fix guest migration for huge guests resulting in panic
mvsas: fix misleading indentation
serial: samsung: Continue to work if DMA request fails
USB: serial: io_ti: fix information leak in completion handler
USB: serial: io_ti: fix NULL-deref in interrupt callback
USB: iowarrior: fix NULL-deref in write
USB: iowarrior: fix NULL-deref at probe
USB: serial: omninet: fix reference leaks at open
USB: serial: safe_serial: fix information leak in completion handler
usb: host: xhci-plat: Fix timeout on removal of hot pluggable xhci controllers
usb: host: xhci-dbg: HCIVERSION should be a binary number
usb: gadget: function: f_fs: pass companion descriptor along
usb: dwc3: gadget: make Set Endpoint Configuration macros safe
usb: gadget: dummy_hcd: clear usb_gadget region before registration
powerpc: Emulation support for load/store instructions on LE
tracing: Add #undef to fix compile error
MIPS: Netlogic: Fix CP0_EBASE redefinition warnings
MIPS: DEC: Avoid la pseudo-instruction in delay slots
mm: memcontrol: avoid unused function warning
cpmac: remove hopeless #warning
MIPS: ralink: Remove unused rt*_wdt_reset functions
MIPS: ralink: Cosmetic change to prom_init().
mtd: pmcmsp: use kstrndup instead of kmalloc+strncpy
MIPS: Update lemote2f_defconfig for CPU_FREQ_STAT change
MIPS: ip22: Fix ip28 build for modern gcc
MIPS: Update ip27_defconfig for SCSI_DH change
MIPS: ip27: Disable qlge driver in defconfig
MIPS: Update defconfigs for NF_CT_PROTO_DCCP/UDPLITE change
crypto: improve gcc optimization flags for serpent and wp512
USB: serial: digi_acceleport: fix OOB-event processing
USB: serial: digi_acceleport: fix OOB data sanity check
Linux 4.4.54
drivers: hv: Turn off write permission on the hypercall page
fat: fix using uninitialized fields of fat_inode/fsinfo_inode
libceph: use BUG() instead of BUG_ON(1)
drm/i915/dsi: Do not clear DPOUNIT_CLOCK_GATE_DISABLE from vlv_init_display_clock_gating
fakelb: fix schedule while atomic
drm/atomic: fix an error code in mode_fixup()
drm/ttm: Make sure BOs being swapped out are cacheable
drm/edid: Add EDID_QUIRK_FORCE_8BPC quirk for Rotel RSX-1058
drm/ast: Fix AST2400 POST failure without BMC FW or VBIOS
drm/ast: Call open_key before enable_mmio in POST code
drm/ast: Fix test for VGA enabled
drm/amdgpu: add more cases to DCE11 possible crtc mask setup
mac80211: flush delayed work when entering suspend
xtensa: move parse_tag_fdt out of #ifdef CONFIG_BLK_DEV_INITRD
pwm: pca9685: Fix period change with same duty cycle
nlm: Ensure callback code also checks that the files match
target: Fix NULL dereference during LUN lookup + active I/O shutdown
ceph: remove req from unsafe list when unregistering it
ktest: Fix child exit code processing
IB/srp: Fix race conditions related to task management
IB/srp: Avoid that duplicate responses trigger a kernel bug
IB/IPoIB: Add destination address when re-queue packet
IB/ipoib: Fix deadlock between rmmod and set_mode
mnt: Tuck mounts under others instead of creating shadow/side mounts.
net: mvpp2: fix DMA address calculation in mvpp2_txq_inc_put()
s390: use correct input data address for setup_randomness
s390: make setup_randomness work
s390: TASK_SIZE for kernel threads
s390/dcssblk: fix device size calculation in dcssblk_direct_access()
s390/qdio: clear DSCI prior to scanning multiple input queues
Bluetooth: Add another AR3012 04ca:3018 device
KVM: VMX: use correct vmcs_read/write for guest segment selector/base
KVM: s390: Disable dirty log retrieval for UCONTROL guests
serial: 8250_pci: Add MKS Tenta SCOM-0800 and SCOM-0801 cards
tty: n_hdlc: get rid of racy n_hdlc.tbuf
TTY: n_hdlc, fix lockdep false positive
Linux 4.4.53
scsi: lpfc: Correct WQ creation for pagesize
MIPS: IP22: Fix build error due to binutils 2.25 uselessnes.
MIPS: IP22: Reformat inline assembler code to modern standards.
powerpc/xmon: Fix data-breakpoint
dmaengine: ipu: Make sure the interrupt routine checks all interrupts.
bcma: use (get|put)_device when probing/removing device driver
md linear: fix a race between linear_add() and linear_congested()
rtc: sun6i: Switch to the external oscillator
rtc: sun6i: Add some locking
NFSv4: fix getacl ERANGE for some ACL buffer sizes
NFSv4: fix getacl head length estimation
NFSv4: Fix memory and state leak in _nfs4_open_and_get_state
nfsd: special case truncates some more
nfsd: minor nfsd_setattr cleanup
rtlwifi: rtl8192c-common: Fix "BUG: KASAN:
rtlwifi: Fix alignment issues
gfs2: Add missing rcu locking for glock lookup
rdma_cm: fail iwarp accepts w/o connection params
RDMA/core: Fix incorrect structure packing for booleans
Drivers: hv: util: Backup: Fix a rescind processing issue
Drivers: hv: util: Fcopy: Fix a rescind processing issue
Drivers: hv: util: kvp: Fix a rescind processing issue
hv: init percpu_list in hv_synic_alloc()
hv: allocate synic pages for all present CPUs
usb: gadget: udc: fsl: Add missing complete function.
usb: host: xhci: plat: check hcc_params after add hcd
usb: musb: da8xx: Remove CPPI 3.0 quirk and methods
w1: ds2490: USB transfer buffers need to be DMAable
w1: don't leak refcount on slave attach failure in w1_attach_slave_device()
can: usb_8dev: Fix memory leak of priv->cmd_msg_buffer
iio: pressure: mpl3115: do not rely on structure field ordering
iio: pressure: mpl115: do not rely on structure field ordering
arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2
fuse: add missing FR_FORCE
crypto: testmgr - Pad aes_ccm_enc_tv_template vector
ath9k: use correct OTP register offsets for the AR9340 and AR9550
ath9k: fix race condition in enabling/disabling IRQs
ath5k: drop bogus warning on drv_set_key with unsupported cipher
target: Fix multi-session dynamic se_node_acl double free OOPs
target: Obtain se_node_acl->acl_kref during get_initiator_node_acl
samples/seccomp: fix 64-bit comparison macros
ext4: return EROFS if device is r/o and journal replay is needed
ext4: preserve the needs_recovery flag when the journal is aborted
ext4: fix inline data error paths
ext4: fix data corruption in data=journal mode
ext4: trim allocation requests to group size
ext4: do not polute the extents cache while shifting extents
ext4: Include forgotten start block on fallocate insert range
loop: fix LO_FLAGS_PARTSCAN hang
block/loop: fix race between I/O and set_status
jbd2: don't leak modified metadata buffers on an aborted journal
Fix: Disable sys_membarrier when nohz_full is enabled
sd: get disk reference in sd_check_events()
scsi: use 'scsi_device_from_queue()' for scsi_dh
scsi: aacraid: Reorder Adapter status check
scsi: storvsc: properly set residual data length on errors
scsi: storvsc: properly handle SRB_ERROR when sense message is present
scsi: storvsc: use tagged SRB requests if supported by the device
dm stats: fix a leaked s->histogram_boundaries array
dm cache: fix corruption seen when using cache > 2TB
ipc/shm: Fix shmat mmap nil-page protection
mm: do not access page->mapping directly on page_endio
mm: vmpressure: fix sending wrong events on underflow
mm/page_alloc: fix nodes for reclaim in fast path
iommu/vt-d: Tylersburg isoch identity map check is done too late.
iommu/vt-d: Fix some macros that are incorrectly specified in intel-iommu
regulator: Fix regulator_summary for deviceless consumers
staging: rtl: fix possible NULL pointer dereference
ALSA: hda - Fix micmute hotkey problem for a lenovo AIO machine
ALSA: hda - Add subwoofer support for Dell Inspiron 17 7000 Gaming
ALSA: seq: Fix link corruption by event error handling
ALSA: ctxfi: Fallback DMA mask to 32bit
ALSA: timer: Reject user params with too small ticks
ALSA: hda - fix Lewisburg audio issue
ALSA: hda/realtek - Cannot adjust speaker's volume on a Dell AIO
ARM: dts: at91: Enable DMA on sama5d2_xplained console
ARM: dts: at91: Enable DMA on sama5d4_xplained console
ARM: at91: define LPDDR types
media: fix dm1105.c build error
uvcvideo: Fix a wrong macro
am437x-vpfe: always assign bpp variable
MIPS: Handle microMIPS jumps in the same way as MIPS32/MIPS64 jumps
MIPS: Calculate microMIPS ra properly when unwinding the stack
MIPS: Fix is_jump_ins() handling of 16b microMIPS instructions
MIPS: Fix get_frame_info() handling of microMIPS function size
MIPS: Prevent unaligned accesses during stack unwinding
MIPS: Clear ISA bit correctly in get_frame_info()
MIPS: Lantiq: Keep ethernet enabled during boot
MIPS: OCTEON: Fix copy_from_user fault handling for large buffers
MIPS: BCM47XX: Fix button inversion for Asus WL-500W
MIPS: Fix special case in 64 bit IP checksumming.
samples: move mic/mpssd example code from Documentation
Linux 4.4.52
kvm: vmx: ensure VMCS is current while enabling PML
Revert "usb: chipidea: imx: enable CI_HDRC_SET_NON_ZERO_TTHA"
rtlwifi: rtl_usb: Fix for URB leaking when doing ifconfig up/down
block: fix double-free in the failure path of cgwb_bdi_init()
goldfish: Sanitize the broken interrupt handler
x86/platform/goldfish: Prevent unconditional loading
USB: serial: ark3116: fix register-accessor error handling
USB: serial: opticon: fix CTS retrieval at open
USB: serial: spcp8x5: fix modem-status handling
USB: serial: ftdi_sio: fix line-status over-reporting
USB: serial: ftdi_sio: fix extreme low-latency setting
USB: serial: ftdi_sio: fix modem-status error handling
USB: serial: cp210x: add new IDs for GE Bx50v3 boards
USB: serial: mos7840: fix another NULL-deref at open
tty: serial: msm: Fix module autoload
net: socket: fix recvmmsg not returning error from sock_error
ip: fix IP_CHECKSUM handling
irda: Fix lockdep annotations in hashbin_delete().
dccp: fix freeing skb too early for IPV6_RECVPKTINFO
packet: Do not call fanout_release from atomic contexts
packet: fix races in fanout_add()
net/llc: avoid BUG_ON() in skb_orphan()
blk-mq: really fix plug list flushing for nomerge queues
rtc: interface: ignore expired timers when enqueuing new timers
rtlwifi: rtl_usb: Fix missing entry in USB driver's private data
Linux 4.4.51
mmc: core: fix multi-bit bus width without high-speed mode
bcache: Make gc wakeup sane, remove set_task_state()
ntb_transport: Pick an unused queue
NTB: ntb_transport: fix debugfs_remove_recursive
printk: use rcuidle console tracepoint
ARM: 8658/1: uaccess: fix zeroing of 64-bit get_user()
futex: Move futex_init() to core_initcall
drm/dp/mst: fix kernel oops when turning off secondary monitor
drm/radeon: Use mode h/vdisplay fields to hide out of bounds HW cursor
Input: elan_i2c - add ELAN0605 to the ACPI table
Fix missing sanity check in /dev/sg
scsi: don't BUG_ON() empty DMA transfers
fuse: fix use after free issue in fuse_dev_do_read()
siano: make it work again with CONFIG_VMAP_STACK
vfs: fix uninitialized flags in splice_to_pipe()
Linux 4.4.50
l2tp: do not use udp_ioctl()
ping: fix a null pointer dereference
packet: round up linear to header len
net: introduce device min_header_len
sit: fix a double free on error path
sctp: avoid BUG_ON on sctp_wait_for_sndbuf
mlx4: Invoke softirqs after napi_reschedule
macvtap: read vnet_hdr_size once
tun: read vnet_hdr_sz once
tcp: avoid infinite loop in tcp_splice_read()
ipv6: tcp: add a missing tcp_v6_restore_cb()
ip6_gre: fix ip6gre_err() invalid reads
netlabel: out of bound access in cipso_v4_validate()
ipv4: keep skb->dst around in presence of IP options
net: use a work queue to defer net_disable_timestamp() work
tcp: fix 0 divide in __tcp_select_window()
ipv6: pointer math error in ip6_tnl_parse_tlv_enc_lim()
ipv6: fix ip6_tnl_parse_tlv_enc_lim()
can: Fix kernel panic at security_sock_rcv_skb

Conflicts:
	drivers/scsi/sd.c
	drivers/usb/gadget/function/f_fs.c
	drivers/usb/host/xhci-plat.c

CRs-Fixed: 2023471
Change-Id: I396051a8de30271af77b3890d4b19787faa1c31e
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
commit a4b9c109c2
239 changed files with 2053 additions and 1140 deletions
@@ -1,4 +1,4 @@
 subdir-y := accounting auxdisplay blackfin connector \
-	filesystems filesystems ia64 laptops mic misc-devices \
+	filesystems filesystems ia64 laptops misc-devices \
 	networking pcmcia prctl ptp spi timers vDSO video4linux \
 	watchdog
@@ -1265,6 +1265,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			When zero, profiling data is discarded and associated
 			debugfs files are removed at module unload time.

+	goldfish	[X86] Enable the goldfish android emulator platform.
+			Don't use this when you are not running on the
+			android emulator
+
 	gpt		[EFI] Forces disk with valid GPT signature but
 			invalid Protective MBR to be treated as GPT. If the
 			primary GPT is corrupted, it enables the backup/alternate
@@ -1 +0,0 @@
-subdir-y := mpssd
@@ -1,21 +0,0 @@
-ifndef CROSS_COMPILE
-# List of programs to build
-hostprogs-$(CONFIG_X86_64) := mpssd
-
-mpssd-objs := mpssd.o sysfs.o
-
-# Tell kbuild to always build the programs
-always := $(hostprogs-y)
-
-HOSTCFLAGS += -I$(objtree)/usr/include -I$(srctree)/tools/include
-
-ifdef DEBUG
-HOSTCFLAGS += -DDEBUG=$(DEBUG)
-endif
-
-HOSTLOADLIBES_mpssd := -lpthread
-
-install:
-	install mpssd /usr/sbin/mpssd
-	install micctrl /usr/sbin/micctrl
-endif
Makefile
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 4
-SUBLEVEL = 49
+SUBLEVEL = 55
 EXTRAVERSION =
 NAME = Blurry Fish Butt

@@ -122,6 +122,8 @@
 	uart1: serial@f8020000 {
 		pinctrl-names = "default";
 		pinctrl-0 = <&pinctrl_uart1_default>;
+		atmel,use-dma-rx;
+		atmel,use-dma-tx;
 		status = "okay";
 	};

@@ -110,6 +110,8 @@
 	};

 	usart3: serial@fc00c000 {
+		atmel,use-dma-rx;
+		atmel,use-dma-tx;
 		status = "okay";
 	};

@ -205,18 +205,12 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
|
|||
* and iterate over the range.
|
||||
*/
|
||||
|
||||
bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
|
||||
|
||||
VM_BUG_ON(size & ~PAGE_MASK);
|
||||
|
||||
if (!need_flush && !icache_is_pipt())
|
||||
goto vipt_cache;
|
||||
|
||||
while (size) {
|
||||
void *va = kmap_atomic_pfn(pfn);
|
||||
|
||||
if (need_flush)
|
||||
kvm_flush_dcache_to_poc(va, PAGE_SIZE);
|
||||
kvm_flush_dcache_to_poc(va, PAGE_SIZE);
|
||||
|
||||
if (icache_is_pipt())
|
||||
__cpuc_coherent_user_range((unsigned long)va,
|
||||
|
@ -228,7 +222,6 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
|
|||
kunmap_atomic(va);
|
||||
}
|
||||
|
||||
vipt_cache:
|
||||
if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
|
||||
/* any kind of VIPT cache */
|
||||
__flush_icache_all();
|
||||
|
|
|
@ -67,7 +67,7 @@ ENTRY(__get_user_4)
|
|||
ENDPROC(__get_user_4)
|
||||
|
||||
ENTRY(__get_user_8)
|
||||
check_uaccess r0, 8, r1, r2, __get_user_bad
|
||||
check_uaccess r0, 8, r1, r2, __get_user_bad8
|
||||
#ifdef CONFIG_THUMB2_KERNEL
|
||||
5: TUSER(ldr) r2, [r0]
|
||||
6: TUSER(ldr) r3, [r0, #4]
|
||||
|
|
|
@ -237,8 +237,7 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
|
|||
{
|
||||
void *va = page_address(pfn_to_page(pfn));
|
||||
|
||||
if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
|
||||
kvm_flush_dcache_to_poc(va, size);
|
||||
kvm_flush_dcache_to_poc(va, size);
|
||||
|
||||
if (!icache_is_aliasing()) { /* PIPT */
|
||||
flush_icache_range((unsigned long)va,
|
||||
|
|
|
@ -17,6 +17,12 @@
|
|||
.active_low = 1, \
|
||||
}
|
||||
|
||||
#define BCM47XX_GPIO_KEY_H(_gpio, _code) \
|
||||
{ \
|
||||
.code = _code, \
|
||||
.gpio = _gpio, \
|
||||
}
|
||||
|
||||
/* Asus */
|
||||
|
||||
static const struct gpio_keys_button
|
||||
|
@ -79,8 +85,8 @@ bcm47xx_buttons_asus_wl500gpv2[] __initconst = {
|
|||
|
||||
static const struct gpio_keys_button
|
||||
bcm47xx_buttons_asus_wl500w[] __initconst = {
|
||||
BCM47XX_GPIO_KEY(6, KEY_RESTART),
|
||||
BCM47XX_GPIO_KEY(7, KEY_WPS_BUTTON),
|
||||
BCM47XX_GPIO_KEY_H(6, KEY_RESTART),
|
||||
BCM47XX_GPIO_KEY_H(7, KEY_WPS_BUTTON),
|
||||
};
|
||||
|
||||
static const struct gpio_keys_button
|
||||
|
|
|
@ -208,18 +208,18 @@ EXC( STORE t2, UNIT(6)(dst), s_exc_p10u)
|
|||
ADD src, src, 16*NBYTES
|
||||
EXC( STORE t3, UNIT(7)(dst), s_exc_p9u)
|
||||
ADD dst, dst, 16*NBYTES
|
||||
EXC( LOAD t0, UNIT(-8)(src), l_exc_copy)
|
||||
EXC( LOAD t1, UNIT(-7)(src), l_exc_copy)
|
||||
EXC( LOAD t2, UNIT(-6)(src), l_exc_copy)
|
||||
EXC( LOAD t3, UNIT(-5)(src), l_exc_copy)
|
||||
EXC( LOAD t0, UNIT(-8)(src), l_exc_copy_rewind16)
|
||||
EXC( LOAD t1, UNIT(-7)(src), l_exc_copy_rewind16)
|
||||
EXC( LOAD t2, UNIT(-6)(src), l_exc_copy_rewind16)
|
||||
EXC( LOAD t3, UNIT(-5)(src), l_exc_copy_rewind16)
|
||||
EXC( STORE t0, UNIT(-8)(dst), s_exc_p8u)
|
||||
EXC( STORE t1, UNIT(-7)(dst), s_exc_p7u)
|
||||
EXC( STORE t2, UNIT(-6)(dst), s_exc_p6u)
|
||||
EXC( STORE t3, UNIT(-5)(dst), s_exc_p5u)
|
||||
EXC( LOAD t0, UNIT(-4)(src), l_exc_copy)
|
||||
EXC( LOAD t1, UNIT(-3)(src), l_exc_copy)
|
||||
EXC( LOAD t2, UNIT(-2)(src), l_exc_copy)
|
||||
EXC( LOAD t3, UNIT(-1)(src), l_exc_copy)
|
||||
EXC( LOAD t0, UNIT(-4)(src), l_exc_copy_rewind16)
|
||||
EXC( LOAD t1, UNIT(-3)(src), l_exc_copy_rewind16)
|
||||
EXC( LOAD t2, UNIT(-2)(src), l_exc_copy_rewind16)
|
||||
EXC( LOAD t3, UNIT(-1)(src), l_exc_copy_rewind16)
|
||||
EXC( STORE t0, UNIT(-4)(dst), s_exc_p4u)
|
||||
EXC( STORE t1, UNIT(-3)(dst), s_exc_p3u)
|
||||
EXC( STORE t2, UNIT(-2)(dst), s_exc_p2u)
|
||||
|
@ -383,6 +383,10 @@ done:
|
|||
nop
|
||||
END(memcpy)
|
||||
|
||||
l_exc_copy_rewind16:
|
||||
/* Rewind src and dst by 16*NBYTES for l_exc_copy */
|
||||
SUB src, src, 16*NBYTES
|
||||
SUB dst, dst, 16*NBYTES
|
||||
l_exc_copy:
|
||||
/*
|
||||
* Copy bytes from src until faulting load address (or until a
|
||||
|
|
|
@ -68,8 +68,8 @@ CONFIG_NETFILTER_NETLINK_QUEUE=m
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_DCCP=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_DCCP=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -134,7 +134,7 @@ CONFIG_LIBFC=m
|
|||
CONFIG_SCSI_QLOGIC_1280=y
|
||||
CONFIG_SCSI_PMCRAID=m
|
||||
CONFIG_SCSI_BFA_FC=m
|
||||
CONFIG_SCSI_DH=m
|
||||
CONFIG_SCSI_DH=y
|
||||
CONFIG_SCSI_DH_RDAC=m
|
||||
CONFIG_SCSI_DH_HP_SW=m
|
||||
CONFIG_SCSI_DH_EMC=m
|
||||
|
@ -206,7 +206,6 @@ CONFIG_MLX4_EN=m
|
|||
# CONFIG_MLX4_DEBUG is not set
|
||||
CONFIG_TEHUTI=m
|
||||
CONFIG_BNX2X=m
|
||||
CONFIG_QLGE=m
|
||||
CONFIG_SFC=m
|
||||
CONFIG_BE2NET=m
|
||||
CONFIG_LIBERTAS_THINFIRM=m
|
||||
|
|
|
@ -39,7 +39,7 @@ CONFIG_HIBERNATION=y
|
|||
CONFIG_PM_STD_PARTITION="/dev/hda3"
|
||||
CONFIG_CPU_FREQ=y
|
||||
CONFIG_CPU_FREQ_DEBUG=y
|
||||
CONFIG_CPU_FREQ_STAT=m
|
||||
CONFIG_CPU_FREQ_STAT=y
|
||||
CONFIG_CPU_FREQ_STAT_DETAILS=y
|
||||
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
|
||||
CONFIG_CPU_FREQ_GOV_POWERSAVE=m
|
||||
|
|
|
@ -59,8 +59,8 @@ CONFIG_NETFILTER=y
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_DCCP=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_DCCP=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -60,8 +60,8 @@ CONFIG_NETFILTER=y
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_DCCP=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_DCCP=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -59,8 +59,8 @@ CONFIG_NETFILTER=y
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_DCCP=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_DCCP=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -61,8 +61,8 @@ CONFIG_NETFILTER=y
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_DCCP=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_DCCP=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -111,7 +111,7 @@ CONFIG_NETFILTER=y
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -91,7 +91,7 @@ CONFIG_NETFILTER=y
|
|||
CONFIG_NF_CONNTRACK=m
|
||||
CONFIG_NF_CONNTRACK_SECMARK=y
|
||||
CONFIG_NF_CONNTRACK_EVENTS=y
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=m
|
||||
CONFIG_NF_CT_PROTO_UDPLITE=y
|
||||
CONFIG_NF_CONNTRACK_AMANDA=m
|
||||
CONFIG_NF_CONNTRACK_FTP=m
|
||||
CONFIG_NF_CONNTRACK_H323=m
|
||||
|
|
|
@ -146,7 +146,25 @@
|
|||
/*
|
||||
* Find irq with highest priority
|
||||
*/
|
||||
PTR_LA t1,cpu_mask_nr_tbl
|
||||
# open coded PTR_LA t1, cpu_mask_nr_tbl
|
||||
#if (_MIPS_SZPTR == 32)
|
||||
# open coded la t1, cpu_mask_nr_tbl
|
||||
lui t1, %hi(cpu_mask_nr_tbl)
|
||||
addiu t1, %lo(cpu_mask_nr_tbl)
|
||||
|
||||
#endif
|
||||
#if (_MIPS_SZPTR == 64)
|
||||
# open coded dla t1, cpu_mask_nr_tbl
|
||||
.set push
|
||||
.set noat
|
||||
lui t1, %highest(cpu_mask_nr_tbl)
|
||||
lui AT, %hi(cpu_mask_nr_tbl)
|
||||
daddiu t1, t1, %higher(cpu_mask_nr_tbl)
|
||||
daddiu AT, AT, %lo(cpu_mask_nr_tbl)
|
||||
dsll t1, 32
|
||||
daddu t1, t1, AT
|
||||
.set pop
|
||||
#endif
|
||||
1: lw t2,(t1)
|
||||
nop
|
||||
and t2,t0
|
||||
|
@ -195,7 +213,25 @@
|
|||
/*
|
||||
* Find irq with highest priority
|
||||
*/
|
||||
PTR_LA t1,asic_mask_nr_tbl
|
||||
# open coded PTR_LA t1,asic_mask_nr_tbl
|
||||
#if (_MIPS_SZPTR == 32)
|
||||
# open coded la t1, asic_mask_nr_tbl
|
||||
lui t1, %hi(asic_mask_nr_tbl)
|
||||
addiu t1, %lo(asic_mask_nr_tbl)
|
||||
|
||||
#endif
|
||||
#if (_MIPS_SZPTR == 64)
|
||||
# open coded dla t1, asic_mask_nr_tbl
|
||||
.set push
|
||||
.set noat
|
||||
lui t1, %highest(asic_mask_nr_tbl)
|
||||
lui AT, %hi(asic_mask_nr_tbl)
|
||||
daddiu t1, t1, %higher(asic_mask_nr_tbl)
|
||||
daddiu AT, AT, %lo(asic_mask_nr_tbl)
|
||||
dsll t1, 32
|
||||
daddu t1, t1, AT
|
||||
.set pop
|
||||
#endif
|
||||
2: lw t2,(t1)
|
||||
nop
|
||||
and t2,t0
|
||||
|
|
|
@ -186,7 +186,9 @@ static inline __wsum csum_tcpudp_nofold(__be32 saddr,
|
|||
" daddu %0, %4 \n"
|
||||
" dsll32 $1, %0, 0 \n"
|
||||
" daddu %0, $1 \n"
|
||||
" sltu $1, %0, $1 \n"
|
||||
" dsra32 %0, %0, 0 \n"
|
||||
" addu %0, $1 \n"
|
||||
#endif
|
||||
" .set pop"
|
||||
: "=r" (sum)
|
||||
|
|
|
@ -191,11 +191,9 @@ struct mips_frame_info {
|
|||
#define J_TARGET(pc,target) \
|
||||
(((unsigned long)(pc) & 0xf0000000) | ((target) << 2))
|
||||
|
||||
static inline int is_ra_save_ins(union mips_instruction *ip)
|
||||
static inline int is_ra_save_ins(union mips_instruction *ip, int *poff)
|
||||
{
|
||||
#ifdef CONFIG_CPU_MICROMIPS
|
||||
union mips_instruction mmi;
|
||||
|
||||
/*
|
||||
* swsp ra,offset
|
||||
* swm16 reglist,offset(sp)
|
||||
|
@ -205,29 +203,71 @@ static inline int is_ra_save_ins(union mips_instruction *ip)
|
|||
*
|
||||
* microMIPS is way more fun...
|
||||
*/
|
||||
if (mm_insn_16bit(ip->halfword[0])) {
|
||||
mmi.word = (ip->halfword[0] << 16);
|
||||
return (mmi.mm16_r5_format.opcode == mm_swsp16_op &&
|
||||
mmi.mm16_r5_format.rt == 31) ||
|
||||
(mmi.mm16_m_format.opcode == mm_pool16c_op &&
|
||||
mmi.mm16_m_format.func == mm_swm16_op);
|
||||
if (mm_insn_16bit(ip->halfword[1])) {
|
||||
switch (ip->mm16_r5_format.opcode) {
|
||||
case mm_swsp16_op:
|
||||
if (ip->mm16_r5_format.rt != 31)
|
||||
return 0;
|
||||
|
||||
*poff = ip->mm16_r5_format.simmediate;
|
||||
*poff = (*poff << 2) / sizeof(ulong);
|
||||
return 1;
|
||||
|
||||
case mm_pool16c_op:
|
||||
switch (ip->mm16_m_format.func) {
|
||||
case mm_swm16_op:
|
||||
*poff = ip->mm16_m_format.imm;
|
||||
*poff += 1 + ip->mm16_m_format.rlist;
|
||||
*poff = (*poff << 2) / sizeof(ulong);
|
||||
return 1;
|
||||
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
else {
|
||||
mmi.halfword[0] = ip->halfword[1];
|
||||
mmi.halfword[1] = ip->halfword[0];
|
||||
return (mmi.mm_m_format.opcode == mm_pool32b_op &&
|
||||
mmi.mm_m_format.rd > 9 &&
|
||||
mmi.mm_m_format.base == 29 &&
|
||||
mmi.mm_m_format.func == mm_swm32_func) ||
|
||||
(mmi.i_format.opcode == mm_sw32_op &&
|
||||
mmi.i_format.rs == 29 &&
|
||||
mmi.i_format.rt == 31);
|
||||
|
||||
switch (ip->i_format.opcode) {
|
||||
case mm_sw32_op:
|
||||
if (ip->i_format.rs != 29)
|
||||
return 0;
|
||||
if (ip->i_format.rt != 31)
|
||||
return 0;
|
||||
|
||||
*poff = ip->i_format.simmediate / sizeof(ulong);
|
||||
return 1;
|
||||
|
||||
case mm_pool32b_op:
|
||||
switch (ip->mm_m_format.func) {
|
||||
case mm_swm32_func:
|
||||
if (ip->mm_m_format.rd < 0x10)
|
||||
return 0;
|
||||
if (ip->mm_m_format.base != 29)
|
||||
return 0;
|
||||
|
||||
*poff = ip->mm_m_format.simmediate;
|
||||
*poff += (ip->mm_m_format.rd & 0xf) * sizeof(u32);
|
||||
*poff /= sizeof(ulong);
|
||||
return 1;
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
#else
|
||||
/* sw / sd $ra, offset($sp) */
|
||||
return (ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op) &&
|
||||
ip->i_format.rs == 29 &&
|
||||
ip->i_format.rt == 31;
|
||||
if ((ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op) &&
|
||||
ip->i_format.rs == 29 && ip->i_format.rt == 31) {
|
||||
*poff = ip->i_format.simmediate / sizeof(ulong);
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
#endif
|
||||
}
|
||||
|
||||
|
@ -242,13 +282,16 @@ static inline int is_jump_ins(union mips_instruction *ip)
|
|||
*
|
||||
* microMIPS is kind of more fun...
|
||||
*/
|
||||
union mips_instruction mmi;
|
||||
if (mm_insn_16bit(ip->halfword[1])) {
|
||||
if ((ip->mm16_r5_format.opcode == mm_pool16c_op &&
|
||||
(ip->mm16_r5_format.rt & mm_jr16_op) == mm_jr16_op))
|
||||
return 1;
|
||||
return 0;
|
||||
}
|
||||
|
||||
mmi.word = (ip->halfword[0] << 16);
|
||||
|
||||
if ((mmi.mm16_r5_format.opcode == mm_pool16c_op &&
|
||||
(mmi.mm16_r5_format.rt & mm_jr16_op) == mm_jr16_op) ||
|
||||
ip->j_format.opcode == mm_jal32_op)
|
||||
if (ip->j_format.opcode == mm_j32_op)
|
||||
return 1;
|
||||
if (ip->j_format.opcode == mm_jal32_op)
|
||||
return 1;
|
||||
if (ip->r_format.opcode != mm_pool32a_op ||
|
||||
ip->r_format.func != mm_pool32axf_op)
|
||||
|
@ -276,15 +319,13 @@ static inline int is_sp_move_ins(union mips_instruction *ip)
|
|||
*
|
||||
* microMIPS is not more fun...
|
||||
*/
|
||||
if (mm_insn_16bit(ip->halfword[0])) {
|
||||
union mips_instruction mmi;
|
||||
|
||||
mmi.word = (ip->halfword[0] << 16);
|
||||
return (mmi.mm16_r3_format.opcode == mm_pool16d_op &&
|
||||
mmi.mm16_r3_format.simmediate && mm_addiusp_func) ||
|
||||
(mmi.mm16_r5_format.opcode == mm_pool16d_op &&
|
||||
mmi.mm16_r5_format.rt == 29);
|
||||
if (mm_insn_16bit(ip->halfword[1])) {
|
||||
return (ip->mm16_r3_format.opcode == mm_pool16d_op &&
|
||||
ip->mm16_r3_format.simmediate && mm_addiusp_func) ||
|
||||
(ip->mm16_r5_format.opcode == mm_pool16d_op &&
|
||||
ip->mm16_r5_format.rt == 29);
|
||||
}
|
||||
|
||||
return ip->mm_i_format.opcode == mm_addiu32_op &&
|
||||
ip->mm_i_format.rt == 29 && ip->mm_i_format.rs == 29;
|
||||
#else
|
||||
|
@ -299,30 +340,36 @@ static inline int is_sp_move_ins(union mips_instruction *ip)
|
|||
|
||||
static int get_frame_info(struct mips_frame_info *info)
|
||||
{
|
||||
#ifdef CONFIG_CPU_MICROMIPS
|
||||
union mips_instruction *ip = (void *) (((char *) info->func) - 1);
|
||||
#else
|
||||
union mips_instruction *ip = info->func;
|
||||
#endif
|
||||
unsigned max_insns = info->func_size / sizeof(union mips_instruction);
|
||||
unsigned i;
|
||||
bool is_mmips = IS_ENABLED(CONFIG_CPU_MICROMIPS);
|
||||
union mips_instruction insn, *ip, *ip_end;
|
||||
const unsigned int max_insns = 128;
|
||||
unsigned int i;
|
||||
|
||||
info->pc_offset = -1;
|
||||
info->frame_size = 0;
|
||||
|
||||
ip = (void *)msk_isa16_mode((ulong)info->func);
|
||||
if (!ip)
|
||||
goto err;
|
||||
|
||||
if (max_insns == 0)
|
||||
max_insns = 128U; /* unknown function size */
|
||||
max_insns = min(128U, max_insns);
|
||||
ip_end = (void *)ip + info->func_size;
|
||||
|
||||
for (i = 0; i < max_insns; i++, ip++) {
|
||||
for (i = 0; i < max_insns && ip < ip_end; i++, ip++) {
|
||||
if (is_mmips && mm_insn_16bit(ip->halfword[0])) {
|
||||
insn.halfword[0] = 0;
|
||||
insn.halfword[1] = ip->halfword[0];
|
||||
} else if (is_mmips) {
|
||||
insn.halfword[0] = ip->halfword[1];
|
||||
insn.halfword[1] = ip->halfword[0];
|
||||
} else {
|
||||
insn.word = ip->word;
|
||||
}
|
||||
|
||||
if (is_jump_ins(ip))
|
||||
if (is_jump_ins(&insn))
|
||||
break;
|
||||
|
||||
if (!info->frame_size) {
|
||||
if (is_sp_move_ins(ip))
|
||||
if (is_sp_move_ins(&insn))
|
||||
{
|
||||
#ifdef CONFIG_CPU_MICROMIPS
|
||||
if (mm_insn_16bit(ip->halfword[0]))
|
||||
|
@ -345,11 +392,9 @@ static int get_frame_info(struct mips_frame_info *info)
|
|||
}
|
||||
continue;
|
||||
}
|
||||
if (info->pc_offset == -1 && is_ra_save_ins(ip)) {
|
||||
info->pc_offset =
|
||||
ip->i_format.simmediate / sizeof(long);
|
||||
if (info->pc_offset == -1 &&
|
||||
is_ra_save_ins(&insn, &info->pc_offset))
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (info->frame_size && info->pc_offset >= 0) /* nested */
|
||||
return 0;
|
||||
|
|
|
@ -545,7 +545,7 @@ void __init ltq_soc_init(void)
|
|||
clkdev_add_pmu("1a800000.pcie", "msi", 1, 1, PMU1_PCIE2_MSI);
|
||||
clkdev_add_pmu("1a800000.pcie", "pdi", 1, 1, PMU1_PCIE2_PDI);
|
||||
clkdev_add_pmu("1a800000.pcie", "ctl", 1, 1, PMU1_PCIE2_CTL);
|
||||
clkdev_add_pmu("1e108000.eth", NULL, 1, 0, PMU_SWITCH | PMU_PPE_DP);
|
||||
clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DP);
|
||||
clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
|
||||
clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
|
||||
} else if (of_machine_is_compatible("lantiq,ar10")) {
|
||||
|
@ -553,7 +553,7 @@ void __init ltq_soc_init(void)
|
|||
ltq_ar10_fpi_hz(), ltq_ar10_pp32_hz());
|
||||
clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
|
||||
clkdev_add_pmu("1e106000.usb", "ctl", 1, 0, PMU_USB1);
|
||||
clkdev_add_pmu("1e108000.eth", NULL, 1, 0, PMU_SWITCH |
|
||||
clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH |
|
||||
PMU_PPE_DP | PMU_PPE_TC);
|
||||
clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
|
||||
clkdev_add_pmu("1f203000.rcu", "gphy", 1, 0, PMU_GPHY);
|
||||
|
@ -575,11 +575,11 @@ void __init ltq_soc_init(void)
|
|||
clkdev_add_pmu(NULL, "ahb", 1, 0, PMU_AHBM | PMU_AHBS);
|
||||
|
||||
clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
|
||||
clkdev_add_pmu("1e108000.eth", NULL, 1, 0,
|
||||
clkdev_add_pmu("1e108000.eth", NULL, 0, 0,
|
||||
PMU_SWITCH | PMU_PPE_DPLUS | PMU_PPE_DPLUM |
|
||||
PMU_PPE_EMA | PMU_PPE_TC | PMU_PPE_SLL01 |
|
||||
PMU_PPE_QSB | PMU_PPE_TOP);
|
||||
clkdev_add_pmu("1f203000.rcu", "gphy", 1, 0, PMU_GPHY);
|
||||
clkdev_add_pmu("1f203000.rcu", "gphy", 0, 0, PMU_GPHY);
|
||||
clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
|
||||
clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
|
||||
clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
|
||||
|
|
|
@ -31,26 +31,40 @@ static inline void indy_sc_wipe(unsigned long first, unsigned long last)
|
|||
unsigned long tmp;
|
||||
|
||||
__asm__ __volatile__(
|
||||
".set\tpush\t\t\t# indy_sc_wipe\n\t"
|
||||
".set\tnoreorder\n\t"
|
||||
".set\tmips3\n\t"
|
||||
".set\tnoat\n\t"
|
||||
"mfc0\t%2, $12\n\t"
|
||||
"li\t$1, 0x80\t\t\t# Go 64 bit\n\t"
|
||||
"mtc0\t$1, $12\n\t"
|
||||
|
||||
"dli\t$1, 0x9000000080000000\n\t"
|
||||
"or\t%0, $1\t\t\t# first line to flush\n\t"
|
||||
"or\t%1, $1\t\t\t# last line to flush\n\t"
|
||||
".set\tat\n\t"
|
||||
|
||||
"1:\tsw\t$0, 0(%0)\n\t"
|
||||
"bne\t%0, %1, 1b\n\t"
|
||||
" daddu\t%0, 32\n\t"
|
||||
|
||||
"mtc0\t%2, $12\t\t\t# Back to 32 bit\n\t"
|
||||
"nop; nop; nop; nop;\n\t"
|
||||
".set\tpop"
|
||||
" .set push # indy_sc_wipe \n"
|
||||
" .set noreorder \n"
|
||||
" .set mips3 \n"
|
||||
" .set noat \n"
|
||||
" mfc0 %2, $12 \n"
|
||||
" li $1, 0x80 # Go 64 bit \n"
|
||||
" mtc0 $1, $12 \n"
|
||||
" \n"
|
||||
" # \n"
|
||||
" # Open code a dli $1, 0x9000000080000000 \n"
|
||||
" # \n"
|
||||
" # Required because binutils 2.25 will happily accept \n"
|
||||
" # 64 bit instructions in .set mips3 mode but puke on \n"
|
||||
" # 64 bit constants when generating 32 bit ELF \n"
|
||||
" # \n"
|
||||
" lui $1,0x9000 \n"
|
||||
" dsll $1,$1,0x10 \n"
|
||||
" ori $1,$1,0x8000 \n"
|
||||
" dsll $1,$1,0x10 \n"
|
||||
" \n"
|
||||
" or %0, $1 # first line to flush \n"
|
||||
" or %1, $1 # last line to flush \n"
|
||||
" .set at \n"
|
||||
" \n"
|
||||
"1: sw $0, 0(%0) \n"
|
||||
" bne %0, %1, 1b \n"
|
||||
" daddu %0, 32 \n"
|
||||
" \n"
|
||||
" mtc0 %2, $12 # Back to 32 bit \n"
|
||||
" nop # pipeline hazard \n"
|
||||
" nop \n"
|
||||
" nop \n"
|
||||
" nop \n"
|
||||
" .set pop \n"
|
||||
: "=r" (first), "=r" (last), "=&r" (tmp)
|
||||
: "0" (first), "1" (last));
|
||||
}
|
||||
|
|
|
@ -50,7 +50,6 @@
|
|||
#include <asm/netlogic/xlp-hal/sys.h>
|
||||
#include <asm/netlogic/xlp-hal/cpucontrol.h>
|
||||
|
||||
#define CP0_EBASE $15
|
||||
#define SYS_CPU_COHERENT_BASE CKSEG1ADDR(XLP_DEFAULT_IO_BASE) + \
|
||||
XLP_IO_SYS_OFFSET(0) + XLP_IO_PCI_HDRSZ + \
|
||||
SYS_CPU_NONCOHERENT_MODE * 4
|
||||
|
@ -92,7 +91,7 @@
|
|||
* registers. On XLPII CPUs, usual cache instructions work.
|
||||
*/
|
||||
.macro xlp_flush_l1_dcache
|
||||
mfc0 t0, CP0_EBASE, 0
|
||||
mfc0 t0, CP0_PRID
|
||||
andi t0, t0, PRID_IMP_MASK
|
||||
slt t1, t0, 0x1200
|
||||
beqz t1, 15f
|
||||
|
@ -171,7 +170,7 @@ FEXPORT(nlm_reset_entry)
|
|||
nop
|
||||
|
||||
1: /* Entry point on core wakeup */
|
||||
mfc0 t0, CP0_EBASE, 0 /* processor ID */
|
||||
mfc0 t0, CP0_PRID /* processor ID */
|
||||
andi t0, PRID_IMP_MASK
|
||||
li t1, 0x1500 /* XLP 9xx */
|
||||
beq t0, t1, 2f /* does not need to set coherent */
|
||||
|
@ -182,8 +181,8 @@ FEXPORT(nlm_reset_entry)
|
|||
nop
|
||||
|
||||
/* set bit in SYS coherent register for the core */
|
||||
mfc0 t0, CP0_EBASE, 1
|
||||
mfc0 t1, CP0_EBASE, 1
|
||||
mfc0 t0, CP0_EBASE
|
||||
mfc0 t1, CP0_EBASE
|
||||
srl t1, 5
|
||||
andi t1, 0x3 /* t1 <- node */
|
||||
li t2, 0x40000
|
||||
|
@ -232,7 +231,7 @@ EXPORT(nlm_boot_siblings)
|
|||
|
||||
* NOTE: All GPR contents are lost after the mtcr above!
|
||||
*/
|
||||
mfc0 v0, CP0_EBASE, 1
|
||||
mfc0 v0, CP0_EBASE
|
||||
andi v0, 0x3ff /* v0 <- node/core */
|
||||
|
||||
/*
|
||||
|
|
|
@ -48,8 +48,6 @@
|
|||
#include <asm/netlogic/xlp-hal/sys.h>
|
||||
#include <asm/netlogic/xlp-hal/cpucontrol.h>
|
||||
|
||||
#define CP0_EBASE $15
|
||||
|
||||
.set noreorder
|
||||
.set noat
|
||||
.set arch=xlr /* for mfcr/mtcr, XLR is sufficient */
|
||||
|
@ -86,7 +84,7 @@ NESTED(nlm_boot_secondary_cpus, 16, sp)
|
|||
PTR_L gp, 0(t1)
|
||||
|
||||
/* a0 has the processor id */
|
||||
mfc0 a0, CP0_EBASE, 1
|
||||
mfc0 a0, CP0_EBASE
|
||||
andi a0, 0x3ff /* a0 <- node/core */
|
||||
PTR_LA t0, nlm_early_init_secondary
|
||||
jalr t0
|
||||
|
|
|
@ -30,8 +30,10 @@ const char *get_system_type(void)
|
|||
return soc_info.sys_type;
|
||||
}
|
||||
|
||||
static __init void prom_init_cmdline(int argc, char **argv)
|
||||
static __init void prom_init_cmdline(void)
|
||||
{
|
||||
int argc;
|
||||
char **argv;
|
||||
int i;
|
||||
|
||||
pr_debug("prom: fw_arg0=%08x fw_arg1=%08x fw_arg2=%08x fw_arg3=%08x\n",
|
||||
|
@ -60,14 +62,11 @@ static __init void prom_init_cmdline(int argc, char **argv)
|
|||
|
||||
void __init prom_init(void)
|
||||
{
|
||||
int argc;
|
||||
char **argv;
|
||||
|
||||
prom_soc_init(&soc_info);
|
||||
|
||||
pr_info("SoC Type: %s\n", get_system_type());
|
||||
|
||||
prom_init_cmdline(argc, argv);
|
||||
prom_init_cmdline();
|
||||
}
|
||||
|
||||
void __init prom_free_prom_memory(void)
|
||||
|
|
|
@ -40,16 +40,6 @@ static struct rt2880_pmx_group rt2880_pinmux_data_act[] = {
|
|||
{ 0 }
|
||||
};
|
||||
|
||||
static void rt288x_wdt_reset(void)
|
||||
{
|
||||
u32 t;
|
||||
|
||||
/* enable WDT reset output on pin SRAM_CS_N */
|
||||
t = rt_sysc_r32(SYSC_REG_CLKCFG);
|
||||
t |= CLKCFG_SRAM_CS_N_WDT;
|
||||
rt_sysc_w32(t, SYSC_REG_CLKCFG);
|
||||
}
|
||||
|
||||
void __init ralink_clk_init(void)
|
||||
{
|
||||
unsigned long cpu_rate, wmac_rate = 40000000;
|
||||
|
|
|
@ -89,17 +89,6 @@ static struct rt2880_pmx_group rt5350_pinmux_data[] = {
|
|||
{ 0 }
|
||||
};
|
||||
|
||||
static void rt305x_wdt_reset(void)
|
||||
{
|
||||
u32 t;
|
||||
|
||||
/* enable WDT reset output on pin SRAM_CS_N */
|
||||
t = rt_sysc_r32(SYSC_REG_SYSTEM_CONFIG);
|
||||
t |= RT305X_SYSCFG_SRAM_CS0_MODE_WDT <<
|
||||
RT305X_SYSCFG_SRAM_CS0_MODE_SHIFT;
|
||||
rt_sysc_w32(t, SYSC_REG_SYSTEM_CONFIG);
|
||||
}
|
||||
|
||||
static unsigned long rt5350_get_mem_size(void)
|
||||
{
|
||||
void __iomem *sysc = (void __iomem *) KSEG1ADDR(RT305X_SYSC_BASE);
|
||||
|
|
|
@ -63,16 +63,6 @@ static struct rt2880_pmx_group rt3883_pinmux_data[] = {
|
|||
{ 0 }
|
||||
};
|
||||
|
||||
static void rt3883_wdt_reset(void)
|
||||
{
|
||||
u32 t;
|
||||
|
||||
/* enable WDT reset output on GPIO 2 */
|
||||
t = rt_sysc_r32(RT3883_SYSC_REG_SYSCFG1);
|
||||
t |= RT3883_SYSCFG1_GPIO2_AS_WDT_OUT;
|
||||
rt_sysc_w32(t, RT3883_SYSC_REG_SYSCFG1);
|
||||
}
|
||||
|
||||
void __init ralink_clk_init(void)
|
||||
{
|
||||
unsigned long cpu_rate, sys_rate;
|
||||
|
|
|
@ -25,7 +25,7 @@ endif
|
|||
# Simplified: what IP22 does at 128MB+ in ksegN, IP28 does at 512MB+ in xkphys
|
||||
#
|
||||
ifdef CONFIG_SGI_IP28
|
||||
ifeq ($(call cc-option-yn,-mr10k-cache-barrier=store), n)
|
||||
ifeq ($(call cc-option-yn,-march=r10000 -mr10k-cache-barrier=store), n)
|
||||
$(error gcc doesn't support needed option -mr10k-cache-barrier=store)
|
||||
endif
|
||||
endif
|
||||
|
|
|
@ -227,8 +227,10 @@ int __kprobes hw_breakpoint_handler(struct die_args *args)
|
|||
rcu_read_lock();
|
||||
|
||||
bp = __this_cpu_read(bp_per_reg);
|
||||
if (!bp)
|
||||
if (!bp) {
|
||||
rc = NOTIFY_DONE;
|
||||
goto out;
|
||||
}
|
||||
info = counter_arch_bp(bp);
|
||||
|
||||
/*
|
||||
|
|
|
@ -1806,8 +1806,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
goto instr_done;
|
||||
|
||||
case LARX:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
if (op.ea & (size - 1))
|
||||
break; /* can't handle misaligned */
|
||||
err = -EFAULT;
|
||||
|
@ -1829,8 +1827,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
goto ldst_done;
|
||||
|
||||
case STCX:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
if (op.ea & (size - 1))
|
||||
break; /* can't handle misaligned */
|
||||
err = -EFAULT;
|
||||
|
@ -1854,8 +1850,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
goto ldst_done;
|
||||
|
||||
case LOAD:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
err = read_mem(®s->gpr[op.reg], op.ea, size, regs);
|
||||
if (!err) {
|
||||
if (op.type & SIGNEXT)
|
||||
|
@ -1867,8 +1861,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
|
||||
#ifdef CONFIG_PPC_FPU
|
||||
case LOAD_FP:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
if (size == 4)
|
||||
err = do_fp_load(op.reg, do_lfs, op.ea, size, regs);
|
||||
else
|
||||
|
@ -1877,15 +1869,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
#endif
|
||||
#ifdef CONFIG_ALTIVEC
|
||||
case LOAD_VMX:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
err = do_vec_load(op.reg, do_lvx, op.ea & ~0xfUL, regs);
|
||||
goto ldst_done;
|
||||
#endif
|
||||
#ifdef CONFIG_VSX
|
||||
case LOAD_VSX:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
err = do_vsx_load(op.reg, do_lxvd2x, op.ea, regs);
|
||||
goto ldst_done;
|
||||
#endif
|
||||
|
@ -1908,8 +1896,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
goto instr_done;
|
||||
|
||||
case STORE:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
if ((op.type & UPDATE) && size == sizeof(long) &&
|
||||
op.reg == 1 && op.update_reg == 1 &&
|
||||
!(regs->msr & MSR_PR) &&
|
||||
|
@ -1922,8 +1908,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
|
||||
#ifdef CONFIG_PPC_FPU
|
||||
case STORE_FP:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
if (size == 4)
|
||||
err = do_fp_store(op.reg, do_stfs, op.ea, size, regs);
|
||||
else
|
||||
|
@ -1932,15 +1916,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
|
|||
#endif
|
||||
#ifdef CONFIG_ALTIVEC
|
||||
case STORE_VMX:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
err = do_vec_store(op.reg, do_stvx, op.ea & ~0xfUL, regs);
|
||||
goto ldst_done;
|
||||
#endif
|
||||
#ifdef CONFIG_VSX
|
||||
case STORE_VSX:
|
||||
if (regs->msr & MSR_LE)
|
||||
return 0;
|
||||
err = do_vsx_store(op.reg, do_stxvd2x, op.ea, regs);
|
||||
goto ldst_done;
|
||||
#endif
|
||||
|
|
|
@ -74,7 +74,8 @@ extern void execve_tail(void);
|
|||
* User space process size: 2GB for 31 bit, 4TB or 8PT for 64 bit.
|
||||
*/
|
||||
|
||||
#define TASK_SIZE_OF(tsk) ((tsk)->mm->context.asce_limit)
|
||||
#define TASK_SIZE_OF(tsk) ((tsk)->mm ? \
|
||||
(tsk)->mm->context.asce_limit : TASK_MAX_SIZE)
|
||||
#define TASK_UNMAPPED_BASE (test_thread_flag(TIF_31BIT) ? \
|
||||
(1UL << 30) : (1UL << 41))
|
||||
#define TASK_SIZE TASK_SIZE_OF(current)
|
||||
|
|
|
@ -23,6 +23,8 @@
|
|||
#define PTR_SUB(x, y) (((char *) (x)) - ((unsigned long) (y)))
|
||||
#define PTR_DIFF(x, y) ((unsigned long)(((char *) (x)) - ((unsigned long) (y))))
|
||||
|
||||
#define LINUX_NOTE_NAME "LINUX"
|
||||
|
||||
static struct memblock_region oldmem_region;
|
||||
|
||||
static struct memblock_type oldmem_type = {
|
||||
|
@ -312,7 +314,7 @@ static void *nt_fpregset(void *ptr, struct save_area *sa)
|
|||
static void *nt_s390_timer(void *ptr, struct save_area *sa)
|
||||
{
|
||||
return nt_init(ptr, NT_S390_TIMER, &sa->timer, sizeof(sa->timer),
|
||||
KEXEC_CORE_NOTE_NAME);
|
||||
LINUX_NOTE_NAME);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -321,7 +323,7 @@ static void *nt_s390_timer(void *ptr, struct save_area *sa)
|
|||
static void *nt_s390_tod_cmp(void *ptr, struct save_area *sa)
|
||||
{
|
||||
return nt_init(ptr, NT_S390_TODCMP, &sa->clk_cmp,
|
||||
sizeof(sa->clk_cmp), KEXEC_CORE_NOTE_NAME);
|
||||
sizeof(sa->clk_cmp), LINUX_NOTE_NAME);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -330,7 +332,7 @@ static void *nt_s390_tod_cmp(void *ptr, struct save_area *sa)
|
|||
static void *nt_s390_tod_preg(void *ptr, struct save_area *sa)
|
||||
{
|
||||
return nt_init(ptr, NT_S390_TODPREG, &sa->tod_reg,
|
||||
sizeof(sa->tod_reg), KEXEC_CORE_NOTE_NAME);
|
||||
sizeof(sa->tod_reg), LINUX_NOTE_NAME);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -339,7 +341,7 @@ static void *nt_s390_tod_preg(void *ptr, struct save_area *sa)
|
|||
static void *nt_s390_ctrs(void *ptr, struct save_area *sa)
|
||||
{
|
||||
return nt_init(ptr, NT_S390_CTRS, &sa->ctrl_regs,
|
||||
sizeof(sa->ctrl_regs), KEXEC_CORE_NOTE_NAME);
|
||||
sizeof(sa->ctrl_regs), LINUX_NOTE_NAME);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -348,7 +350,7 @@ static void *nt_s390_ctrs(void *ptr, struct save_area *sa)
|
|||
static void *nt_s390_prefix(void *ptr, struct save_area *sa)
|
||||
{
|
||||
return nt_init(ptr, NT_S390_PREFIX, &sa->pref_reg,
|
||||
sizeof(sa->pref_reg), KEXEC_CORE_NOTE_NAME);
|
||||
sizeof(sa->pref_reg), LINUX_NOTE_NAME);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -357,7 +359,7 @@ static void *nt_s390_prefix(void *ptr, struct save_area *sa)
|
|||
static void *nt_s390_vx_high(void *ptr, __vector128 *vx_regs)
|
||||
{
|
||||
return nt_init(ptr, NT_S390_VXRS_HIGH, &vx_regs[16],
|
||||
16 * sizeof(__vector128), KEXEC_CORE_NOTE_NAME);
|
||||
16 * sizeof(__vector128), LINUX_NOTE_NAME);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -370,12 +372,12 @@ static void *nt_s390_vx_low(void *ptr, __vector128 *vx_regs)
|
|||
int i;
|
||||
|
||||
note = (Elf64_Nhdr *)ptr;
|
||||
note->n_namesz = strlen(KEXEC_CORE_NOTE_NAME) + 1;
|
||||
note->n_namesz = strlen(LINUX_NOTE_NAME) + 1;
|
||||
note->n_descsz = 16 * 8;
|
||||
note->n_type = NT_S390_VXRS_LOW;
|
||||
len = sizeof(Elf64_Nhdr);
|
||||
|
||||
memcpy(ptr + len, KEXEC_CORE_NOTE_NAME, note->n_namesz);
|
||||
memcpy(ptr + len, LINUX_NOTE_NAME, note->n_namesz);
|
||||
len = roundup(len + note->n_namesz, 4);
|
||||
|
||||
ptr += len;
|
||||
|
|
|
@ -805,10 +805,10 @@ static void __init setup_randomness(void)
|
|||
{
|
||||
struct sysinfo_3_2_2 *vmms;
|
||||
|
||||
vmms = (struct sysinfo_3_2_2 *) alloc_page(GFP_KERNEL);
|
||||
if (vmms && stsi(vmms, 3, 2, 2) == 0 && vmms->count)
|
||||
add_device_randomness(&vmms, vmms->count);
|
||||
free_page((unsigned long) vmms);
|
||||
vmms = (struct sysinfo_3_2_2 *) memblock_alloc(PAGE_SIZE, PAGE_SIZE);
|
||||
if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
|
||||
add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
|
||||
memblock_free((unsigned long) vmms, PAGE_SIZE);
|
||||
}
|
||||
|
||||
/*
|
||||
|
|
|
@ -295,6 +295,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
|
|||
struct kvm_memory_slot *memslot;
|
||||
int is_dirty = 0;
|
||||
|
||||
if (kvm_is_ucontrol(kvm))
|
||||
return -EINVAL;
|
||||
|
||||
mutex_lock(&kvm->slots_lock);
|
||||
|
||||
r = -EINVAL;
|
||||
|
|
|
@ -1237,11 +1237,28 @@ EXPORT_SYMBOL_GPL(s390_reset_cmma);
|
|||
*/
|
||||
bool gmap_test_and_clear_dirty(unsigned long address, struct gmap *gmap)
|
||||
{
|
||||
pgd_t *pgd;
|
||||
pud_t *pud;
|
||||
pmd_t *pmd;
|
||||
pte_t *pte;
|
||||
spinlock_t *ptl;
|
||||
bool dirty = false;
|
||||
|
||||
pte = get_locked_pte(gmap->mm, address, &ptl);
|
||||
pgd = pgd_offset(gmap->mm, address);
|
||||
pud = pud_alloc(gmap->mm, pgd, address);
|
||||
if (!pud)
|
||||
return false;
|
||||
pmd = pmd_alloc(gmap->mm, pud, address);
|
||||
if (!pmd)
|
||||
return false;
|
||||
/* We can't run guests backed by huge pages, but userspace can
|
||||
* still set them up and then try to migrate them without any
|
||||
* migration support.
|
||||
*/
|
||||
if (pmd_large(*pmd))
|
||||
return true;
|
||||
|
||||
pte = pte_alloc_map_lock(gmap->mm, pmd, address, &ptl);
|
||||
if (unlikely(!pte))
|
||||
return false;
|
||||
|
||||
|
|
|
@ -3499,7 +3499,7 @@ static void fix_rmode_seg(int seg, struct kvm_segment *save)
|
|||
}
|
||||
|
||||
vmcs_write16(sf->selector, var.selector);
|
||||
vmcs_write32(sf->base, var.base);
|
||||
vmcs_writel(sf->base, var.base);
|
||||
vmcs_write32(sf->limit, var.limit);
|
||||
vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(&var));
|
||||
}
|
||||
|
@ -4867,6 +4867,12 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
|
|||
if (vmx_xsaves_supported())
|
||||
vmcs_write64(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
|
||||
|
||||
if (enable_pml) {
|
||||
ASSERT(vmx->pml_pg);
|
||||
vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
|
||||
vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -7839,22 +7845,6 @@ static void vmx_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
|
|||
*info2 = vmcs_read32(VM_EXIT_INTR_INFO);
|
||||
}
|
||||
|
||||
static int vmx_create_pml_buffer(struct vcpu_vmx *vmx)
|
||||
{
|
||||
struct page *pml_pg;
|
||||
|
||||
pml_pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
|
||||
if (!pml_pg)
|
||||
return -ENOMEM;
|
||||
|
||||
vmx->pml_pg = pml_pg;
|
||||
|
||||
vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
|
||||
vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void vmx_destroy_pml_buffer(struct vcpu_vmx *vmx)
|
||||
{
|
||||
if (vmx->pml_pg) {
|
||||
|
@ -7915,7 +7905,7 @@ static void kvm_flush_pml_buffers(struct kvm *kvm)
|
|||
static void vmx_dump_sel(char *name, uint32_t sel)
|
||||
{
|
||||
pr_err("%s sel=0x%04x, attr=0x%05x, limit=0x%08x, base=0x%016lx\n",
|
||||
name, vmcs_read32(sel),
|
||||
name, vmcs_read16(sel),
|
||||
vmcs_read32(sel + GUEST_ES_AR_BYTES - GUEST_ES_SELECTOR),
|
||||
vmcs_read32(sel + GUEST_ES_LIMIT - GUEST_ES_SELECTOR),
|
||||
vmcs_readl(sel + GUEST_ES_BASE - GUEST_ES_SELECTOR));
|
||||
|
@ -8789,14 +8779,26 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
|
|||
if (err)
|
||||
goto free_vcpu;
|
||||
|
||||
err = -ENOMEM;
|
||||
|
||||
/*
|
||||
* If PML is turned on, failure on enabling PML just results in failure
|
||||
* of creating the vcpu, therefore we can simplify PML logic (by
|
||||
* avoiding dealing with cases, such as enabling PML partially on vcpus
|
||||
* for the guest, etc.
|
||||
*/
|
||||
if (enable_pml) {
|
||||
vmx->pml_pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
|
||||
if (!vmx->pml_pg)
|
||||
goto uninit_vcpu;
|
||||
}
|
||||
|
||||
vmx->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
|
||||
BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) * sizeof(vmx->guest_msrs[0])
|
||||
> PAGE_SIZE);
|
||||
|
||||
err = -ENOMEM;
|
||||
if (!vmx->guest_msrs) {
|
||||
goto uninit_vcpu;
|
||||
}
|
||||
if (!vmx->guest_msrs)
|
||||
goto free_pml;
|
||||
|
||||
vmx->loaded_vmcs = &vmx->vmcs01;
|
||||
vmx->loaded_vmcs->vmcs = alloc_vmcs();
|
||||
|
@ -8840,18 +8842,6 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
|
|||
vmx->nested.current_vmptr = -1ull;
|
||||
vmx->nested.current_vmcs12 = NULL;
|
||||
|
||||
/*
|
||||
* If PML is turned on, failure on enabling PML just results in failure
|
||||
* of creating the vcpu, therefore we can simplify PML logic (by
|
||||
* avoiding dealing with cases, such as enabling PML partially on vcpus
|
||||
* for the guest, etc.
|
||||
*/
|
||||
if (enable_pml) {
|
||||
err = vmx_create_pml_buffer(vmx);
|
||||
if (err)
|
||||
goto free_vmcs;
|
||||
}
|
||||
|
||||
return &vmx->vcpu;
|
||||
|
||||
free_vmcs:
|
||||
|
@ -8859,6 +8849,8 @@ free_vmcs:
|
|||
free_loaded_vmcs(vmx->loaded_vmcs);
|
||||
free_msrs:
|
||||
kfree(vmx->guest_msrs);
|
||||
free_pml:
|
||||
vmx_destroy_pml_buffer(vmx);
|
||||
uninit_vcpu:
|
||||
kvm_vcpu_uninit(&vmx->vcpu);
|
||||
free_vcpu:
|
||||
|
|
|
@ -42,10 +42,22 @@ static struct resource goldfish_pdev_bus_resources[] = {
|
|||
}
|
||||
};
|
||||
|
||||
static bool goldfish_enable __initdata;
|
||||
|
||||
static int __init goldfish_setup(char *str)
|
||||
{
|
||||
goldfish_enable = true;
|
||||
return 0;
|
||||
}
|
||||
__setup("goldfish", goldfish_setup);
|
||||
|
||||
static int __init goldfish_init(void)
|
||||
{
|
||||
if (!goldfish_enable)
|
||||
return -ENODEV;
|
||||
|
||||
platform_device_register_simple("goldfish_pdev_bus", -1,
|
||||
goldfish_pdev_bus_resources, 2);
|
||||
goldfish_pdev_bus_resources, 2);
|
||||
return 0;
|
||||
}
|
||||
device_initcall(goldfish_init);
|
||||
|
|
|
@ -133,6 +133,8 @@ static int __init parse_tag_initrd(const bp_tag_t* tag)
|
|||
|
||||
__tagtable(BP_TAG_INITRD, parse_tag_initrd);
|
||||
|
||||
#endif /* CONFIG_BLK_DEV_INITRD */
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
|
||||
static int __init parse_tag_fdt(const bp_tag_t *tag)
|
||||
|
@ -145,8 +147,6 @@ __tagtable(BP_TAG_FDT, parse_tag_fdt);
|
|||
|
||||
#endif /* CONFIG_OF */
|
||||
|
||||
#endif /* CONFIG_BLK_DEV_INITRD */
|
||||
|
||||
static int __init parse_tag_cmdline(const bp_tag_t* tag)
|
||||
{
|
||||
strlcpy(command_line, (char *)(tag->data), COMMAND_LINE_SIZE);
|
||||
|
|
|
@ -1259,12 +1259,9 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
|
|||
|
||||
blk_queue_split(q, &bio, q->bio_split);
|
||||
|
||||
if (!is_flush_fua && !blk_queue_nomerges(q)) {
|
||||
if (blk_attempt_plug_merge(q, bio, &request_count,
|
||||
&same_queue_rq))
|
||||
return BLK_QC_T_NONE;
|
||||
} else
|
||||
request_count = blk_plug_queued_count(q);
|
||||
if (!is_flush_fua && !blk_queue_nomerges(q) &&
|
||||
blk_attempt_plug_merge(q, bio, &request_count, &same_queue_rq))
|
||||
return BLK_QC_T_NONE;
|
||||
|
||||
rq = blk_mq_map_request(q, bio, &data);
|
||||
if (unlikely(!rq))
|
||||
|
@ -1355,9 +1352,11 @@ static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
|
|||
|
||||
blk_queue_split(q, &bio, q->bio_split);
|
||||
|
||||
if (!is_flush_fua && !blk_queue_nomerges(q) &&
|
||||
blk_attempt_plug_merge(q, bio, &request_count, NULL))
|
||||
return BLK_QC_T_NONE;
|
||||
if (!is_flush_fua && !blk_queue_nomerges(q)) {
|
||||
if (blk_attempt_plug_merge(q, bio, &request_count, NULL))
|
||||
return BLK_QC_T_NONE;
|
||||
} else
|
||||
request_count = blk_plug_queued_count(q);
|
||||
|
||||
rq = blk_mq_map_request(q, bio, &data);
|
||||
if (unlikely(!rq))
|
||||
|
|
|
@ -62,6 +62,7 @@ obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
|
|||
obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
|
||||
obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
|
||||
obj-$(CONFIG_CRYPTO_WP512) += wp512.o
|
||||
CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
|
||||
obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
|
||||
obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
|
||||
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
|
||||
|
@ -85,6 +86,7 @@ obj-$(CONFIG_CRYPTO_BLOWFISH_COMMON) += blowfish_common.o
|
|||
obj-$(CONFIG_CRYPTO_TWOFISH) += twofish_generic.o
|
||||
obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o
|
||||
obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o
|
||||
CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
|
||||
obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
|
||||
obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o
|
||||
obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o
|
||||
|
|
|
@ -21778,7 +21778,7 @@ static struct aead_testvec aes_ccm_enc_tv_template[] = {
|
|||
"\x09\x75\x9a\x9b\x3c\x9b\x27\x39",
|
||||
.klen = 32,
|
||||
.iv = "\x03\xf9\xd9\x4e\x63\xb5\x3d\x9d"
|
||||
"\x43\xf6\x1e\x50",
|
||||
"\x43\xf6\x1e\x50\0\0\0\0",
|
||||
.assoc = "\x57\xf5\x6b\x8b\x57\x5c\x3d\x3b"
|
||||
"\x13\x02\x01\x0c\x83\x4c\x96\x35"
|
||||
"\x8e\xd6\x39\xcf\x7d\x14\x9b\x94"
|
||||
|
|
|
@ -965,7 +965,7 @@ static size_t sizeof_nfit_set_info(int num_mappings)
|
|||
+ num_mappings * sizeof(struct nfit_set_info_map);
|
||||
}
|
||||
|
||||
static int cmp_map(const void *m0, const void *m1)
|
||||
static int cmp_map_compat(const void *m0, const void *m1)
|
||||
{
|
||||
const struct nfit_set_info_map *map0 = m0;
|
||||
const struct nfit_set_info_map *map1 = m1;
|
||||
|
@ -974,6 +974,14 @@ static int cmp_map(const void *m0, const void *m1)
|
|||
sizeof(u64));
|
||||
}
|
||||
|
||||
static int cmp_map(const void *m0, const void *m1)
|
||||
{
|
||||
const struct nfit_set_info_map *map0 = m0;
|
||||
const struct nfit_set_info_map *map1 = m1;
|
||||
|
||||
return map0->region_offset - map1->region_offset;
|
||||
}
|
||||
|
||||
/* Retrieve the nth entry referencing this spa */
|
||||
static struct acpi_nfit_memory_map *memdev_from_spa(
|
||||
struct acpi_nfit_desc *acpi_desc, u16 range_index, int n)
|
||||
|
@ -1029,6 +1037,12 @@ static int acpi_nfit_init_interleave_set(struct acpi_nfit_desc *acpi_desc,
|
|||
sort(&info->mapping[0], nr, sizeof(struct nfit_set_info_map),
|
||||
cmp_map, NULL);
|
||||
nd_set->cookie = nd_fletcher64(info, sizeof_nfit_set_info(nr), 0);
|
||||
|
||||
/* support namespaces created with the wrong sort order */
|
||||
sort(&info->mapping[0], nr, sizeof(struct nfit_set_info_map),
|
||||
cmp_map_compat, NULL);
|
||||
nd_set->altcookie = nd_fletcher64(info, sizeof_nfit_set_info(nr), 0);
|
||||
|
||||
ndr_desc->nd_set = nd_set;
|
||||
devm_kfree(dev, info);
|
||||
|
||||
|
|
|
@@ -640,8 +640,11 @@ static int bcma_device_probe(struct device *dev)
drv);
int err = 0;

get_device(dev);
if (adrv->probe)
err = adrv->probe(core);
if (err)
put_device(dev);

return err;
}
@@ -654,6 +657,7 @@ static int bcma_device_remove(struct device *dev)

if (adrv->remove)
adrv->remove(core);
put_device(dev);

return 0;
}

@@ -1108,9 +1108,12 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
if ((unsigned int) info->lo_encrypt_key_size > LO_KEY_SIZE)
return -EINVAL;

/* I/O need to be drained during transfer transition */
blk_mq_freeze_queue(lo->lo_queue);

err = loop_release_xfer(lo);
if (err)
return err;
goto exit;

if (info->lo_encrypt_type) {
unsigned int type = info->lo_encrypt_type;
@@ -1125,12 +1128,14 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)

err = loop_init_xfer(lo, xfer, info);
if (err)
return err;
goto exit;

if (lo->lo_offset != info->lo_offset ||
lo->lo_sizelimit != info->lo_sizelimit)
if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit))
return -EFBIG;
if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) {
err = -EFBIG;
goto exit;
}

loop_config_discard(lo);

@@ -1148,13 +1153,6 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
(info->lo_flags & LO_FLAGS_AUTOCLEAR))
lo->lo_flags ^= LO_FLAGS_AUTOCLEAR;

if ((info->lo_flags & LO_FLAGS_PARTSCAN) &&
!(lo->lo_flags & LO_FLAGS_PARTSCAN)) {
lo->lo_flags |= LO_FLAGS_PARTSCAN;
lo->lo_disk->flags &= ~GENHD_FL_NO_PART_SCAN;
loop_reread_partitions(lo, lo->lo_device);
}

lo->lo_encrypt_key_size = info->lo_encrypt_key_size;
lo->lo_init[0] = info->lo_init[0];
lo->lo_init[1] = info->lo_init[1];
@@ -1167,7 +1165,17 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
/* update dio if lo_offset or transfer is changed */
__loop_update_dio(lo, lo->use_dio);

return 0;
exit:
blk_mq_unfreeze_queue(lo->lo_queue);

if (!err && (info->lo_flags & LO_FLAGS_PARTSCAN) &&
!(lo->lo_flags & LO_FLAGS_PARTSCAN)) {
lo->lo_flags |= LO_FLAGS_PARTSCAN;
lo->lo_disk->flags &= ~GENHD_FL_NO_PART_SCAN;
loop_reread_partitions(lo, lo->lo_device);
}

return err;
}

static int

@@ -94,6 +94,7 @@ static const struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x04CA, 0x300f) },
{ USB_DEVICE(0x04CA, 0x3010) },
{ USB_DEVICE(0x04CA, 0x3014) },
{ USB_DEVICE(0x04CA, 0x3018) },
{ USB_DEVICE(0x0930, 0x0219) },
{ USB_DEVICE(0x0930, 0x021c) },
{ USB_DEVICE(0x0930, 0x0220) },
@@ -160,6 +161,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3018), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },

@@ -208,6 +208,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3018), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },

@@ -272,7 +272,7 @@ static void ipu_irq_handler(struct irq_desc *desc)
u32 status;
int i, line;

for (i = IPU_IRQ_NR_FN_BANKS; i < IPU_IRQ_NR_BANKS; i++) {
for (i = 0; i < IPU_IRQ_NR_BANKS; i++) {
struct ipu_irq_bank *bank = irq_bank + i;

raw_spin_lock(&bank_lock);

@@ -3704,9 +3704,15 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
default:
encoder->possible_crtcs = 0x3;
break;
case 3:
encoder->possible_crtcs = 0x7;
break;
case 4:
encoder->possible_crtcs = 0xf;
break;
case 5:
encoder->possible_crtcs = 0x1f;
break;
case 6:
encoder->possible_crtcs = 0x3f;
break;

@@ -58,13 +58,9 @@ bool ast_is_vga_enabled(struct drm_device *dev)
/* TODO 1180 */
} else {
ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
if (ch) {
ast_open_key(ast);
ch = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xff);
return ch & 0x04;
}
return !!(ch & 0x01);
}
return 0;
return false;
}

static const u8 extreginfo[] = { 0x0f, 0x04, 0x1c, 0xff };
@@ -375,8 +371,8 @@ void ast_post_gpu(struct drm_device *dev)
pci_write_config_dword(ast->dev->pdev, 0x04, reg);

ast_enable_vga(dev);
ast_enable_mmio(dev);
ast_open_key(ast);
ast_enable_mmio(dev);
ast_set_def_ext_reg(dev);

if (ast->chip == AST2300 || ast->chip == AST2400)
@@ -1630,12 +1626,44 @@ static void ast_init_dram_2300(struct drm_device *dev)
temp |= 0x73;
ast_write32(ast, 0x12008, temp);

param.dram_freq = 396;
param.dram_type = AST_DDR3;
temp = ast_mindwm(ast, 0x1e6e2070);
if (temp & 0x01000000)
param.dram_type = AST_DDR2;
param.dram_chipid = ast->dram_type;
param.dram_freq = ast->mclk;
param.vram_size = ast->vram_size;
switch (temp & 0x18000000) {
case 0:
param.dram_chipid = AST_DRAM_512Mx16;
break;
default:
case 0x08000000:
param.dram_chipid = AST_DRAM_1Gx16;
break;
case 0x10000000:
param.dram_chipid = AST_DRAM_2Gx16;
break;
case 0x18000000:
param.dram_chipid = AST_DRAM_4Gx16;
break;
}
switch (temp & 0x0c) {
default:
case 0x00:
param.vram_size = AST_VIDMEM_SIZE_8M;
break;

case 0x04:
param.vram_size = AST_VIDMEM_SIZE_16M;
break;

case 0x08:
param.vram_size = AST_VIDMEM_SIZE_32M;
break;

case 0x0c:
param.vram_size = AST_VIDMEM_SIZE_64M;
break;
}

if (param.dram_type == AST_DDR3) {
get_ddr3_info(ast, &param);

@@ -265,7 +265,7 @@ mode_fixup(struct drm_atomic_state *state)
struct drm_connector *connector;
struct drm_connector_state *conn_state;
int i;
bool ret;
int ret;

for_each_crtc_in_state(state, crtc, crtc_state, i) {
if (!crtc_state->mode_changed &&

@@ -1812,7 +1812,7 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
mgr->payloads[i].num_slots = req_payload.num_slots;
} else if (mgr->payloads[i].num_slots) {
mgr->payloads[i].num_slots = 0;
drm_dp_destroy_payload_step1(mgr, port, port->vcpi.vcpi, &mgr->payloads[i]);
drm_dp_destroy_payload_step1(mgr, port, mgr->payloads[i].vcpi, &mgr->payloads[i]);
req_payload.payload_state = mgr->payloads[i].payload_state;
mgr->payloads[i].start_slot = 0;
}

@@ -144,6 +144,9 @@ static struct edid_quirk {

/* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
{ "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },

/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
{ "ETR", 13896, EDID_QUIRK_FORCE_8BPC },
};

/*

@@ -6803,7 +6803,18 @@ static void ivybridge_init_clock_gating(struct drm_device *dev)

static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv)
{
I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
u32 val;

/*
* On driver load, a pipe may be active and driving a DSI display.
* Preserve DPOUNIT_CLOCK_GATE_DISABLE to avoid the pipe getting stuck
* (and never recovering) in this case. intel_dsi_post_disable() will
* clear it when we turn off the display.
*/
val = I915_READ(DSPCLK_GATE_D);
val &= DPOUNIT_CLOCK_GATE_DISABLE;
val |= VRHUNIT_CLOCK_GATE_DISABLE;
I915_WRITE(DSPCLK_GATE_D, val);

/*
* Disable trickle feed and enable pnd deadline calculation

@@ -205,8 +205,8 @@ static int radeon_cursor_move_locked(struct drm_crtc *crtc, int x, int y)
}

if (x <= (crtc->x - w) || y <= (crtc->y - radeon_crtc->cursor_height) ||
x >= (crtc->x + crtc->mode.crtc_hdisplay) ||
y >= (crtc->y + crtc->mode.crtc_vdisplay))
x >= (crtc->x + crtc->mode.hdisplay) ||
y >= (crtc->y + crtc->mode.vdisplay))
goto out_of_bounds;

x += xorigin;

@@ -1621,7 +1621,6 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
struct ttm_buffer_object *bo;
int ret = -EBUSY;
int put_count;
uint32_t swap_placement = (TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM);

spin_lock(&glob->lru_lock);
list_for_each_entry(bo, &glob->swap_lru, swap) {
@@ -1657,7 +1656,8 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
if (unlikely(ret != 0))
goto out;

if ((bo->mem.placement & swap_placement) != swap_placement) {
if (bo->mem.mem_type != TTM_PL_SYSTEM ||
bo->ttm->caching_state != tt_cached) {
struct ttm_mem_reg evict_mem;

evict_mem = bo->mem;

@@ -219,7 +219,7 @@ int hv_init(void)
/* See if the hypercall page is already set */
rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);

virtaddr = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_EXEC);
virtaddr = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_RX);

if (!virtaddr)
goto cleanup;
@@ -422,7 +422,7 @@ int hv_synic_alloc(void)
goto err;
}

for_each_online_cpu(cpu) {
for_each_present_cpu(cpu) {
hv_context.event_dpc[cpu] = kmalloc(size, GFP_ATOMIC);
if (hv_context.event_dpc[cpu] == NULL) {
pr_err("Unable to allocate event dpc\n");
@@ -461,6 +461,8 @@ int hv_synic_alloc(void)
pr_err("Unable to allocate post msg page\n");
goto err;
}

INIT_LIST_HEAD(&hv_context.percpu_list[cpu]);
}

return 0;
@@ -485,7 +487,7 @@ void hv_synic_free(void)
int cpu;

kfree(hv_context.hv_numa_map);
for_each_online_cpu(cpu)
for_each_present_cpu(cpu)
hv_synic_free_cpu(cpu);
}

@@ -555,8 +557,6 @@ void hv_synic_init(void *arg)
rdmsrl(HV_X64_MSR_VP_INDEX, vp_index);
hv_context.vp_index[cpu] = (u32)vp_index;

INIT_LIST_HEAD(&hv_context.percpu_list[cpu]);

/*
* Register the per-cpu clockevent source.
*/

@@ -61,6 +61,7 @@ static DECLARE_WORK(fcopy_send_work, fcopy_send_data);
static const char fcopy_devname[] = "vmbus/hv_fcopy";
static u8 *recv_buffer;
static struct hvutil_transport *hvt;
static struct completion release_event;
/*
* This state maintains the version number registered by the daemon.
*/
@@ -312,12 +313,14 @@ static void fcopy_on_reset(void)

if (cancel_delayed_work_sync(&fcopy_timeout_work))
fcopy_respond_to_host(HV_E_FAIL);
complete(&release_event);
}

int hv_fcopy_init(struct hv_util_service *srv)
{
recv_buffer = srv->recv_buffer;

init_completion(&release_event);
/*
* When this driver loads, the user level daemon that
* processes the host requests may not yet be running.
@@ -339,4 +342,5 @@ void hv_fcopy_deinit(void)
fcopy_transaction.state = HVUTIL_DEVICE_DYING;
cancel_delayed_work_sync(&fcopy_timeout_work);
hvutil_transport_destroy(hvt);
wait_for_completion(&release_event);
}

@@ -86,6 +86,7 @@ static DECLARE_WORK(kvp_sendkey_work, kvp_send_key);
static const char kvp_devname[] = "vmbus/hv_kvp";
static u8 *recv_buffer;
static struct hvutil_transport *hvt;
static struct completion release_event;
/*
* Register the kernel component with the user-level daemon.
* As part of this registration, pass the LIC version number.
@@ -682,6 +683,7 @@ static void kvp_on_reset(void)
if (cancel_delayed_work_sync(&kvp_timeout_work))
kvp_respond_to_host(NULL, HV_E_FAIL);
kvp_transaction.state = HVUTIL_DEVICE_INIT;
complete(&release_event);
}

int
@@ -689,6 +691,7 @@ hv_kvp_init(struct hv_util_service *srv)
{
recv_buffer = srv->recv_buffer;

init_completion(&release_event);
/*
* When this driver loads, the user level daemon that
* processes the host requests may not yet be running.
@@ -711,4 +714,5 @@ void hv_kvp_deinit(void)
cancel_delayed_work_sync(&kvp_timeout_work);
cancel_work_sync(&kvp_sendkey_work);
hvutil_transport_destroy(hvt);
wait_for_completion(&release_event);
}

@@ -66,6 +66,7 @@ static int dm_reg_value;
static const char vss_devname[] = "vmbus/hv_vss";
static __u8 *recv_buffer;
static struct hvutil_transport *hvt;
static struct completion release_event;

static void vss_send_op(struct work_struct *dummy);
static void vss_timeout_func(struct work_struct *dummy);
@@ -326,11 +327,13 @@ static void vss_on_reset(void)
if (cancel_delayed_work_sync(&vss_timeout_work))
vss_respond_to_host(HV_E_FAIL);
vss_transaction.state = HVUTIL_DEVICE_INIT;
complete(&release_event);
}

int
hv_vss_init(struct hv_util_service *srv)
{
init_completion(&release_event);
if (vmbus_proto_version < VERSION_WIN8_1) {
pr_warn("Integration service 'Backup (volume snapshot)'"
" not supported on this host version.\n");
@@ -360,4 +363,5 @@ void hv_vss_deinit(void)
cancel_delayed_work_sync(&vss_timeout_work);
cancel_work_sync(&vss_send_op_work);
hvutil_transport_destroy(hvt);
wait_for_completion(&release_event);
}

@@ -136,6 +136,7 @@ static const struct iio_chan_spec mpl115_channels[] = {
{
.type = IIO_TEMP,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
.info_mask_shared_by_type =
BIT(IIO_CHAN_INFO_OFFSET) | BIT(IIO_CHAN_INFO_SCALE),
},
};

@@ -182,7 +182,7 @@ static const struct iio_chan_spec mpl3115_channels[] = {
{
.type = IIO_PRESSURE,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
BIT(IIO_CHAN_INFO_SCALE),
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.scan_index = 0,
.scan_type = {
.sign = 'u',
@@ -195,7 +195,7 @@ static const struct iio_chan_spec mpl3115_channels[] = {
{
.type = IIO_TEMP,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
BIT(IIO_CHAN_INFO_SCALE),
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
.scan_index = 1,
.scan_type = {
.sign = 's',

@@ -3349,6 +3349,9 @@ static int cma_accept_iw(struct rdma_id_private *id_priv,
struct iw_cm_conn_param iw_param;
int ret;

if (!conn_param)
return -EINVAL;

ret = cma_modify_qp_rtr(id_priv, conn_param);
if (ret)
return ret;

@@ -1488,12 +1488,14 @@ static ssize_t set_mode(struct device *d, struct device_attribute *attr,

ret = ipoib_set_mode(dev, buf);

rtnl_unlock();
/* The assumption is that the function ipoib_set_mode returned
* with the rtnl held by it, if not the value -EBUSY returned,
* then no need to rtnl_unlock
*/
if (ret != -EBUSY)
rtnl_unlock();

if (!ret)
return count;

return ret;
return (!ret || ret == -EBUSY) ? count : ret;
}

static DEVICE_ATTR(mode, S_IWUSR | S_IRUGO, show_mode, set_mode);

@ -464,8 +464,7 @@ int ipoib_set_mode(struct net_device *dev, const char *buf)
|
|||
priv->tx_wr.wr.send_flags &= ~IB_SEND_IP_CSUM;
|
||||
|
||||
ipoib_flush_paths(dev);
|
||||
rtnl_lock();
|
||||
return 0;
|
||||
return (!rtnl_trylock()) ? -EBUSY : 0;
|
||||
}
|
||||
|
||||
if (!strcmp(buf, "datagram\n")) {
|
||||
|
@ -474,8 +473,7 @@ int ipoib_set_mode(struct net_device *dev, const char *buf)
|
|||
dev_set_mtu(dev, min(priv->mcast_mtu, dev->mtu));
|
||||
rtnl_unlock();
|
||||
ipoib_flush_paths(dev);
|
||||
rtnl_lock();
|
||||
return 0;
|
||||
return (!rtnl_trylock()) ? -EBUSY : 0;
|
||||
}
|
||||
|
||||
return -EINVAL;
|
||||
|
@ -628,6 +626,14 @@ void ipoib_mark_paths_invalid(struct net_device *dev)
|
|||
spin_unlock_irq(&priv->lock);
|
||||
}
|
||||
|
||||
static void push_pseudo_header(struct sk_buff *skb, const char *daddr)
|
||||
{
|
||||
struct ipoib_pseudo_header *phdr;
|
||||
|
||||
phdr = (struct ipoib_pseudo_header *)skb_push(skb, sizeof(*phdr));
|
||||
memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
|
||||
}
|
||||
|
||||
void ipoib_flush_paths(struct net_device *dev)
|
||||
{
|
||||
struct ipoib_dev_priv *priv = netdev_priv(dev);
|
||||
|
@ -852,8 +858,7 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr,
|
|||
}
|
||||
if (skb_queue_len(&neigh->queue) <
|
||||
IPOIB_MAX_PATH_REC_QUEUE) {
|
||||
/* put pseudoheader back on for next time */
|
||||
skb_push(skb, IPOIB_PSEUDO_LEN);
|
||||
push_pseudo_header(skb, neigh->daddr);
|
||||
__skb_queue_tail(&neigh->queue, skb);
|
||||
} else {
|
||||
ipoib_warn(priv, "queue length limit %d. Packet drop.\n",
|
||||
|
@ -871,10 +876,12 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr,
|
|||
|
||||
if (!path->query && path_rec_start(dev, path))
|
||||
goto err_path;
|
||||
if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE)
|
||||
if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
|
||||
push_pseudo_header(skb, neigh->daddr);
|
||||
__skb_queue_tail(&neigh->queue, skb);
|
||||
else
|
||||
} else {
|
||||
goto err_drop;
|
||||
}
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&priv->lock, flags);
|
||||
|
@ -910,8 +917,7 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
|
|||
}
|
||||
if (path) {
|
||||
if (skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
|
||||
/* put pseudoheader back on for next time */
|
||||
skb_push(skb, IPOIB_PSEUDO_LEN);
|
||||
push_pseudo_header(skb, phdr->hwaddr);
|
||||
__skb_queue_tail(&path->queue, skb);
|
||||
} else {
|
||||
++dev->stats.tx_dropped;
|
||||
|
@ -943,8 +949,7 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
|
|||
return;
|
||||
} else if ((path->query || !path_rec_start(dev, path)) &&
|
||||
skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
|
||||
/* put pseudoheader back on for next time */
|
||||
skb_push(skb, IPOIB_PSEUDO_LEN);
|
||||
push_pseudo_header(skb, phdr->hwaddr);
|
||||
__skb_queue_tail(&path->queue, skb);
|
||||
} else {
|
||||
++dev->stats.tx_dropped;
|
||||
|
@ -1025,8 +1030,7 @@ send_using_neigh:
|
|||
}
|
||||
|
||||
if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
|
||||
/* put pseudoheader back on for next time */
|
||||
skb_push(skb, sizeof(*phdr));
|
||||
push_pseudo_header(skb, phdr->hwaddr);
|
||||
spin_lock_irqsave(&priv->lock, flags);
|
||||
__skb_queue_tail(&neigh->queue, skb);
|
||||
spin_unlock_irqrestore(&priv->lock, flags);
|
||||
|
@ -1058,7 +1062,6 @@ static int ipoib_hard_header(struct sk_buff *skb,
|
|||
unsigned short type,
|
||||
const void *daddr, const void *saddr, unsigned len)
|
||||
{
|
||||
struct ipoib_pseudo_header *phdr;
|
||||
struct ipoib_header *header;
|
||||
|
||||
header = (struct ipoib_header *) skb_push(skb, sizeof *header);
|
||||
|
@ -1071,8 +1074,7 @@ static int ipoib_hard_header(struct sk_buff *skb,
|
|||
* destination address into skb hard header so we can figure out where
|
||||
* to send the packet later.
|
||||
*/
|
||||
phdr = (struct ipoib_pseudo_header *) skb_push(skb, sizeof(*phdr));
|
||||
memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
|
||||
push_pseudo_header(skb, daddr);
|
||||
|
||||
return IPOIB_HARD_LEN;
|
||||
}
|
||||
|
|
|
@ -1787,17 +1787,24 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
|
|||
if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) {
|
||||
spin_lock_irqsave(&ch->lock, flags);
|
||||
ch->req_lim += be32_to_cpu(rsp->req_lim_delta);
|
||||
if (rsp->tag == ch->tsk_mgmt_tag) {
|
||||
ch->tsk_mgmt_status = -1;
|
||||
if (be32_to_cpu(rsp->resp_data_len) >= 4)
|
||||
ch->tsk_mgmt_status = rsp->data[3];
|
||||
complete(&ch->tsk_mgmt_done);
|
||||
} else {
|
||||
shost_printk(KERN_ERR, target->scsi_host,
|
||||
"Received tsk mgmt response too late for tag %#llx\n",
|
||||
rsp->tag);
|
||||
}
|
||||
spin_unlock_irqrestore(&ch->lock, flags);
|
||||
|
||||
ch->tsk_mgmt_status = -1;
|
||||
if (be32_to_cpu(rsp->resp_data_len) >= 4)
|
||||
ch->tsk_mgmt_status = rsp->data[3];
|
||||
complete(&ch->tsk_mgmt_done);
|
||||
} else {
|
||||
scmnd = scsi_host_find_tag(target->scsi_host, rsp->tag);
|
||||
if (scmnd) {
|
||||
if (scmnd && scmnd->host_scribble) {
|
||||
req = (void *)scmnd->host_scribble;
|
||||
scmnd = srp_claim_req(ch, req, NULL, scmnd);
|
||||
} else {
|
||||
scmnd = NULL;
|
||||
}
|
||||
if (!scmnd) {
|
||||
shost_printk(KERN_ERR, target->scsi_host,
|
||||
|
@ -2469,19 +2476,18 @@ srp_change_queue_depth(struct scsi_device *sdev, int qdepth)
|
|||
}
|
||||
|
||||
static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
|
||||
u8 func)
|
||||
u8 func, u8 *status)
|
||||
{
|
||||
struct srp_target_port *target = ch->target;
|
||||
struct srp_rport *rport = target->rport;
|
||||
struct ib_device *dev = target->srp_host->srp_dev->dev;
|
||||
struct srp_iu *iu;
|
||||
struct srp_tsk_mgmt *tsk_mgmt;
|
||||
int res;
|
||||
|
||||
if (!ch->connected || target->qp_in_error)
|
||||
return -1;
|
||||
|
||||
init_completion(&ch->tsk_mgmt_done);
|
||||
|
||||
/*
|
||||
* Lock the rport mutex to avoid that srp_create_ch_ib() is
|
||||
* invoked while a task management function is being sent.
|
||||
|
@ -2504,10 +2510,16 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
|
|||
|
||||
tsk_mgmt->opcode = SRP_TSK_MGMT;
|
||||
int_to_scsilun(lun, &tsk_mgmt->lun);
|
||||
tsk_mgmt->tag = req_tag | SRP_TAG_TSK_MGMT;
|
||||
tsk_mgmt->tsk_mgmt_func = func;
|
||||
tsk_mgmt->task_tag = req_tag;
|
||||
|
||||
spin_lock_irq(&ch->lock);
|
||||
ch->tsk_mgmt_tag = (ch->tsk_mgmt_tag + 1) | SRP_TAG_TSK_MGMT;
|
||||
tsk_mgmt->tag = ch->tsk_mgmt_tag;
|
||||
spin_unlock_irq(&ch->lock);
|
||||
|
||||
init_completion(&ch->tsk_mgmt_done);
|
||||
|
||||
ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt,
|
||||
DMA_TO_DEVICE);
|
||||
if (srp_post_send(ch, iu, sizeof(*tsk_mgmt))) {
|
||||
|
@ -2516,13 +2528,15 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
|
|||
|
||||
return -1;
|
||||
}
|
||||
res = wait_for_completion_timeout(&ch->tsk_mgmt_done,
|
||||
msecs_to_jiffies(SRP_ABORT_TIMEOUT_MS));
|
||||
if (res > 0 && status)
|
||||
*status = ch->tsk_mgmt_status;
|
||||
mutex_unlock(&rport->mutex);
|
||||
|
||||
if (!wait_for_completion_timeout(&ch->tsk_mgmt_done,
|
||||
msecs_to_jiffies(SRP_ABORT_TIMEOUT_MS)))
|
||||
return -1;
|
||||
WARN_ON_ONCE(res < 0);
|
||||
|
||||
return 0;
|
||||
return res > 0 ? 0 : -1;
|
||||
}
|
||||
|
||||
static int srp_abort(struct scsi_cmnd *scmnd)
|
||||
|
@ -2548,7 +2562,7 @@ static int srp_abort(struct scsi_cmnd *scmnd)
|
|||
shost_printk(KERN_ERR, target->scsi_host,
|
||||
"Sending SRP abort for tag %#x\n", tag);
|
||||
if (srp_send_tsk_mgmt(ch, tag, scmnd->device->lun,
|
||||
SRP_TSK_ABORT_TASK) == 0)
|
||||
SRP_TSK_ABORT_TASK, NULL) == 0)
|
||||
ret = SUCCESS;
|
||||
else if (target->rport->state == SRP_RPORT_LOST)
|
||||
ret = FAST_IO_FAIL;
|
||||
|
@ -2566,14 +2580,15 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
|
|||
struct srp_target_port *target = host_to_target(scmnd->device->host);
|
||||
struct srp_rdma_ch *ch;
|
||||
int i;
|
||||
u8 status;
|
||||
|
||||
shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
|
||||
|
||||
ch = &target->ch[0];
|
||||
if (srp_send_tsk_mgmt(ch, SRP_TAG_NO_REQ, scmnd->device->lun,
|
||||
SRP_TSK_LUN_RESET))
|
||||
SRP_TSK_LUN_RESET, &status))
|
||||
return FAILED;
|
||||
if (ch->tsk_mgmt_status)
|
||||
if (status)
|
||||
return FAILED;
|
||||
|
||||
for (i = 0; i < target->ch_count; i++) {
|
||||
|
|
|
@@ -168,6 +168,7 @@ struct srp_rdma_ch {
int max_ti_iu_len;
int comp_vector;

u64 tsk_mgmt_tag;
struct completion tsk_mgmt_done;
u8 tsk_mgmt_status;
bool connected;

@@ -1232,6 +1232,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
{ "ELAN0000", 0 },
{ "ELAN0100", 0 },
{ "ELAN0600", 0 },
{ "ELAN0605", 0 },
{ "ELAN1000", 0 },
{ }
};

@@ -3238,13 +3238,14 @@ static int __init init_dmars(void)
iommu_identity_mapping |= IDENTMAP_GFX;
#endif

check_tylersburg_isoch();

if (iommu_identity_mapping) {
ret = si_domain_init(hw_pass_through);
if (ret)
goto free_iommu;
}

check_tylersburg_isoch();

/*
* If we copied translations from a previous kernel in the kdump

@@ -425,7 +425,7 @@ struct cache {
* until a gc finishes - otherwise we could pointlessly burn a ton of
* cpu
*/
unsigned invalidate_needs_gc:1;
unsigned invalidate_needs_gc;

bool discard; /* Get rid of? */

@@ -593,8 +593,8 @@ struct cache_set {

/* Counts how many sectors bio_insert has added to the cache */
atomic_t sectors_to_gc;
wait_queue_head_t gc_wait;

wait_queue_head_t moving_gc_wait;
struct keybuf moving_gc_keys;
/* Number of moving GC bios in flight */
struct semaphore moving_in_flight;

@@ -1762,33 +1762,34 @@ static void bch_btree_gc(struct cache_set *c)
bch_moving_gc(c);
}

static int bch_gc_thread(void *arg)
static bool gc_should_run(struct cache_set *c)
{
struct cache_set *c = arg;
struct cache *ca;
unsigned i;

while (1) {
again:
bch_btree_gc(c);
for_each_cache(ca, c, i)
if (ca->invalidate_needs_gc)
return true;

if (atomic_read(&c->sectors_to_gc) < 0)
return true;

return false;
}

static int bch_gc_thread(void *arg)
{
struct cache_set *c = arg;

while (1) {
wait_event_interruptible(c->gc_wait,
kthread_should_stop() || gc_should_run(c));

set_current_state(TASK_INTERRUPTIBLE);
if (kthread_should_stop())
break;

mutex_lock(&c->bucket_lock);

for_each_cache(ca, c, i)
if (ca->invalidate_needs_gc) {
mutex_unlock(&c->bucket_lock);
set_current_state(TASK_RUNNING);
goto again;
}

mutex_unlock(&c->bucket_lock);

try_to_freeze();
schedule();
set_gc_sectors(c);
bch_btree_gc(c);
}

return 0;
@@ -1796,11 +1797,10 @@ again:

int bch_gc_thread_start(struct cache_set *c)
{
c->gc_thread = kthread_create(bch_gc_thread, c, "bcache_gc");
c->gc_thread = kthread_run(bch_gc_thread, c, "bcache_gc");
if (IS_ERR(c->gc_thread))
return PTR_ERR(c->gc_thread);

set_task_state(c->gc_thread, TASK_INTERRUPTIBLE);
return 0;
}

@@ -260,8 +260,7 @@ void bch_initial_mark_key(struct cache_set *, int, struct bkey *);

static inline void wake_up_gc(struct cache_set *c)
{
if (c->gc_thread)
wake_up_process(c->gc_thread);
wake_up(&c->gc_wait);
}

#define MAP_DONE 0

@@ -196,10 +196,8 @@ static void bch_data_insert_start(struct closure *cl)
struct data_insert_op *op = container_of(cl, struct data_insert_op, cl);
struct bio *bio = op->bio, *n;

if (atomic_sub_return(bio_sectors(bio), &op->c->sectors_to_gc) < 0) {
set_gc_sectors(op->c);
if (atomic_sub_return(bio_sectors(bio), &op->c->sectors_to_gc) < 0)
wake_up_gc(op->c);
}

if (op->bypass)
return bch_data_invalidate(cl);

@@ -1489,6 +1489,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
mutex_init(&c->bucket_lock);
init_waitqueue_head(&c->btree_cache_wait);
init_waitqueue_head(&c->bucket_wait);
init_waitqueue_head(&c->gc_wait);
sema_init(&c->uuid_write_mutex, 1);

spin_lock_init(&c->btree_gc_time.lock);
@@ -1547,6 +1548,7 @@ static void run_cache_set(struct cache_set *c)

for_each_cache(ca, c, i)
c->nbuckets += ca->sb.nbuckets;
set_gc_sectors(c);

if (CACHE_SYNC(&c->sb)) {
LIST_HEAD(journal);

@@ -248,7 +248,7 @@ struct cache {
/*
* Fields for converting from sectors to blocks.
*/
uint32_t sectors_per_block;
sector_t sectors_per_block;
int sectors_per_block_shift;

spinlock_t lock;
@@ -3544,11 +3544,11 @@ static void cache_status(struct dm_target *ti, status_type_t type,

residency = policy_residency(cache->policy);

DMEMIT("%u %llu/%llu %u %llu/%llu %u %u %u %u %u %u %lu ",
DMEMIT("%u %llu/%llu %llu %llu/%llu %u %u %u %u %u %u %lu ",
(unsigned)DM_CACHE_METADATA_BLOCK_SIZE,
(unsigned long long)(nr_blocks_metadata - nr_free_blocks_metadata),
(unsigned long long)nr_blocks_metadata,
cache->sectors_per_block,
(unsigned long long)cache->sectors_per_block,
(unsigned long long) from_cblock(residency),
(unsigned long long) from_cblock(cache->cache_size),
(unsigned) atomic_read(&cache->stats.read_hit),

@@ -175,6 +175,7 @@ static void dm_stat_free(struct rcu_head *head)
int cpu;
struct dm_stat *s = container_of(head, struct dm_stat, rcu_head);

kfree(s->histogram_boundaries);
kfree(s->program_id);
kfree(s->aux_data);
for_each_possible_cpu(cpu) {

@@ -1467,11 +1467,62 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
}
EXPORT_SYMBOL_GPL(dm_accept_partial_bio);

/*
* Flush current->bio_list when the target map method blocks.
* This fixes deadlocks in snapshot and possibly in other targets.
*/
struct dm_offload {
struct blk_plug plug;
struct blk_plug_cb cb;
};

static void flush_current_bio_list(struct blk_plug_cb *cb, bool from_schedule)
{
struct dm_offload *o = container_of(cb, struct dm_offload, cb);
struct bio_list list;
struct bio *bio;

INIT_LIST_HEAD(&o->cb.list);

if (unlikely(!current->bio_list))
return;

list = *current->bio_list;
bio_list_init(current->bio_list);

while ((bio = bio_list_pop(&list))) {
struct bio_set *bs = bio->bi_pool;
if (unlikely(!bs) || bs == fs_bio_set) {
bio_list_add(current->bio_list, bio);
continue;
}

spin_lock(&bs->rescue_lock);
bio_list_add(&bs->rescue_list, bio);
queue_work(bs->rescue_workqueue, &bs->rescue_work);
spin_unlock(&bs->rescue_lock);
}
}

static void dm_offload_start(struct dm_offload *o)
{
blk_start_plug(&o->plug);
o->cb.callback = flush_current_bio_list;
list_add(&o->cb.list, &current->plug->cb_list);
}

static void dm_offload_end(struct dm_offload *o)
{
list_del(&o->cb.list);
blk_finish_plug(&o->plug);
}

static void __map_bio(struct dm_target_io *tio)
{
int r;
sector_t sector;
struct mapped_device *md;
struct dm_offload o;
struct bio *clone = &tio->clone;
struct dm_target *ti = tio->ti;

@@ -1484,7 +1535,11 @@ static void __map_bio(struct dm_target_io *tio)
*/
atomic_inc(&tio->io->io_count);
sector = clone->bi_iter.bi_sector;

dm_offload_start(&o);
r = ti->type->map(ti, clone);
dm_offload_end(&o);

if (r == DM_MAPIO_REMAPPED) {
/* the bio has been remapped so dispatch it */

@@ -52,18 +52,26 @@ static inline struct dev_info *which_dev(struct mddev *mddev, sector_t sector)
return conf->disks + lo;
}

/*
* In linear_congested() conf->raid_disks is used as a copy of
* mddev->raid_disks to iterate conf->disks[], because conf->raid_disks
* and conf->disks[] are created in linear_conf(), they are always
* consitent with each other, but mddev->raid_disks does not.
*/
static int linear_congested(struct mddev *mddev, int bits)
{
struct linear_conf *conf;
int i, ret = 0;

conf = mddev->private;
rcu_read_lock();
conf = rcu_dereference(mddev->private);

for (i = 0; i < mddev->raid_disks && !ret ; i++) {
for (i = 0; i < conf->raid_disks && !ret ; i++) {
struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);
ret |= bdi_congested(&q->backing_dev_info, bits);
}

rcu_read_unlock();
return ret;
}

@@ -143,6 +151,19 @@ static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
conf->disks[i-1].end_sector +
conf->disks[i].rdev->sectors;

/*
* conf->raid_disks is copy of mddev->raid_disks. The reason to
* keep a copy of mddev->raid_disks in struct linear_conf is,
* mddev->raid_disks may not be consistent with pointers number of
* conf->disks[] when it is updated in linear_add() and used to
* iterate old conf->disks[] earray in linear_congested().
* Here conf->raid_disks is always consitent with number of
* pointers in conf->disks[] array, and mddev->private is updated
* with rcu_assign_pointer() in linear_addr(), such race can be
* avoided.
*/
conf->raid_disks = raid_disks;

return conf;

out:
@@ -195,15 +216,23 @@ static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
if (!newconf)
return -ENOMEM;

/* newconf->raid_disks already keeps a copy of * the increased
* value of mddev->raid_disks, WARN_ONCE() is just used to make
* sure of this. It is possible that oldconf is still referenced
* in linear_congested(), therefore kfree_rcu() is used to free
* oldconf until no one uses it anymore.
*/
mddev_suspend(mddev);
oldconf = mddev->private;
oldconf = rcu_dereference(mddev->private);
mddev->raid_disks++;
mddev->private = newconf;
WARN_ONCE(mddev->raid_disks != newconf->raid_disks,
"copied raid_disks doesn't match mddev->raid_disks");
rcu_assign_pointer(mddev->private, newconf);
md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
set_capacity(mddev->gendisk, mddev->array_sectors);
mddev_resume(mddev);
revalidate_disk(mddev->gendisk);
kfree(oldconf);
kfree_rcu(oldconf, rcu);
return 0;
}

@@ -10,6 +10,7 @@ struct linear_conf
{
struct rcu_head rcu;
sector_t array_sectors;
int raid_disks; /* a copy of mddev->raid_disks */
struct dev_info disks[0];
};
#endif

@@ -1,6 +1,6 @@
config DVB_DM1105
tristate "SDMC DM1105 based PCI cards"
depends on DVB_CORE && PCI && I2C
depends on DVB_CORE && PCI && I2C && I2C_ALGOBIT
select DVB_PLL if MEDIA_SUBDRV_AUTOSELECT
select DVB_STV0299 if MEDIA_SUBDRV_AUTOSELECT
select DVB_STV0288 if MEDIA_SUBDRV_AUTOSELECT

@@ -1576,7 +1576,7 @@ static int vpfe_s_fmt(struct file *file, void *priv,
return -EBUSY;
}

ret = vpfe_try_fmt(file, priv, &format);
ret = __vpfe_get_format(vpfe, &format, &bpp);
if (ret)
return ret;

@@ -200,22 +200,30 @@ static int smsusb_start_streaming(struct smsusb_device_t *dev)
static int smsusb_sendrequest(void *context, void *buffer, size_t size)
{
struct smsusb_device_t *dev = (struct smsusb_device_t *) context;
struct sms_msg_hdr *phdr = (struct sms_msg_hdr *) buffer;
int dummy;
struct sms_msg_hdr *phdr;
int dummy, ret;

if (dev->state != SMSUSB_ACTIVE) {
pr_debug("Device not active yet\n");
return -ENOENT;
}

phdr = kmalloc(size, GFP_KERNEL);
if (!phdr)
return -ENOMEM;
memcpy(phdr, buffer, size);

pr_debug("sending %s(%d) size: %d\n",
smscore_translate_msg(phdr->msg_type), phdr->msg_type,
phdr->msg_length);

smsendian_handle_tx_message((struct sms_msg_data *) phdr);
smsendian_handle_message_header((struct sms_msg_hdr *)buffer);
return usb_bulk_msg(dev->udev, usb_sndbulkpipe(dev->udev, 2),
buffer, size, &dummy, 1000);
smsendian_handle_message_header((struct sms_msg_hdr *)phdr);
ret = usb_bulk_msg(dev->udev, usb_sndbulkpipe(dev->udev, 2),
phdr, size, &dummy, 1000);

kfree(phdr);
return ret;
}

static char *smsusb1_fw_lkup[] = {

@@ -416,7 +416,7 @@ struct uvc_buffer *uvc_queue_next_buffer(struct uvc_video_queue *queue,
nextbuf = NULL;
spin_unlock_irqrestore(&queue->irqlock, flags);

buf->state = buf->error ? VB2_BUF_STATE_ERROR : UVC_BUF_STATE_DONE;
buf->state = buf->error ? UVC_BUF_STATE_ERROR : UVC_BUF_STATE_DONE;
vb2_set_plane_payload(&buf->buf.vb2_buf, 0, buf->bytesused);
vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);

@@ -2032,10 +2032,10 @@ reinit:
err = mmc_select_hs400(card);
if (err)
goto free_card;
} else if (mmc_card_hs(card)) {
} else {
/* Select the desired bus width optionally */
err = mmc_select_bus_width(card);
if (!IS_ERR_VALUE(err)) {
if (!IS_ERR_VALUE(err) && mmc_card_hs(card)) {
err = mmc_select_hs_ddr(card);
if (err)
goto free_card;

@@ -139,15 +139,13 @@ static int __init init_msp_flash(void)
}

msp_maps[i].bankwidth = 1;
msp_maps[i].name = kmalloc(7, GFP_KERNEL);
msp_maps[i].name = kstrndup(flash_name, 7, GFP_KERNEL);
if (!msp_maps[i].name) {
iounmap(msp_maps[i].virt);
kfree(msp_parts[i]);
goto cleanup_loop;
}

msp_maps[i].name = strncpy(msp_maps[i].name, flash_name, 7);

for (j = 0; j < pcnt; j++) {
part_name[5] = '0' + i;
part_name[7] = '0' + j;

@@ -954,8 +954,8 @@ static int usb_8dev_probe(struct usb_interface *intf,
for (i = 0; i < MAX_TX_URBS; i++)
priv->tx_contexts[i].echo_index = MAX_TX_URBS;

priv->cmd_msg_buffer = kzalloc(sizeof(struct usb_8dev_cmd_msg),
GFP_KERNEL);
priv->cmd_msg_buffer = devm_kzalloc(&intf->dev, sizeof(struct usb_8dev_cmd_msg),
GFP_KERNEL);
if (!priv->cmd_msg_buffer)
goto cleanup_candev;

@@ -969,7 +969,7 @@ static int usb_8dev_probe(struct usb_interface *intf,
if (err) {
netdev_err(netdev,
"couldn't register CAN device: %d\n", err);
goto cleanup_cmd_msg_buffer;
goto cleanup_candev;
}

err = usb_8dev_cmd_version(priv, &version);
@@ -990,9 +990,6 @@ static int usb_8dev_probe(struct usb_interface *intf,
cleanup_unregister_candev:
unregister_netdev(priv->netdev);

cleanup_cmd_msg_buffer:
kfree(priv->cmd_msg_buffer);

cleanup_candev:
free_candev(netdev);

@@ -993,7 +993,7 @@ static void mvpp2_txq_inc_put(struct mvpp2_txq_pcpu *txq_pcpu,
txq_pcpu->buffs + txq_pcpu->txq_put_index;
tx_buf->skb = skb;
tx_buf->size = tx_desc->data_size;
tx_buf->phys = tx_desc->buf_phys_addr;
tx_buf->phys = tx_desc->buf_phys_addr + tx_desc->packet_offset;
txq_pcpu->txq_put_index++;
if (txq_pcpu->txq_put_index == txq_pcpu->size)
txq_pcpu->txq_put_index = 0;

@@ -502,8 +502,11 @@ void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
return;

for (ring = 0; ring < priv->rx_ring_num; ring++) {
if (mlx4_en_is_ring_empty(priv->rx_ring[ring]))
if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) {
local_bh_disable();
napi_reschedule(&priv->rx_cq[ring]->napi);
local_bh_enable();
}
}
}

@@ -1237,7 +1237,7 @@ int cpmac_init(void)
goto fail_alloc;
}

#warning FIXME: unhardcode gpio&reset bits
/* FIXME: unhardcode gpio&reset bits */
ar7_gpio_disable(26);
ar7_gpio_disable(27);
ar7_device_reset(AR7_RESET_BIT_CPMAC_LO);

@@ -30,7 +30,7 @@
static int numlbs = 2;

static LIST_HEAD(fakelb_phys);
static DEFINE_SPINLOCK(fakelb_phys_lock);
static DEFINE_MUTEX(fakelb_phys_lock);

static LIST_HEAD(fakelb_ifup_phys);
static DEFINE_RWLOCK(fakelb_ifup_phys_lock);
@@ -180,9 +180,9 @@ static int fakelb_add_one(struct device *dev)
if (err)
goto err_reg;

spin_lock(&fakelb_phys_lock);
mutex_lock(&fakelb_phys_lock);
list_add_tail(&phy->list, &fakelb_phys);
spin_unlock(&fakelb_phys_lock);
mutex_unlock(&fakelb_phys_lock);

return 0;

@@ -214,10 +214,10 @@ static int fakelb_probe(struct platform_device *pdev)
return 0;

err_slave:
spin_lock(&fakelb_phys_lock);
mutex_lock(&fakelb_phys_lock);
list_for_each_entry_safe(phy, tmp, &fakelb_phys, list)
fakelb_del(phy);
spin_unlock(&fakelb_phys_lock);
mutex_unlock(&fakelb_phys_lock);
return err;
}

@@ -225,10 +225,10 @@ static int fakelb_remove(struct platform_device *pdev)
{
struct fakelb_phy *phy, *tmp;

spin_lock(&fakelb_phys_lock);
mutex_lock(&fakelb_phys_lock);
list_for_each_entry_safe(phy, tmp, &fakelb_phys, list)
fakelb_del(phy);
spin_unlock(&fakelb_phys_lock);
mutex_unlock(&fakelb_phys_lock);
return 0;
}

@@ -164,6 +164,7 @@ static void loopback_setup(struct net_device *dev)
{
dev->mtu = 64 * 1024;
dev->hard_header_len = ETH_HLEN; /* 14 */
dev->min_header_len = ETH_HLEN; /* 14 */
dev->addr_len = ETH_ALEN; /* 6 */
dev->type = ARPHRD_LOOPBACK; /* 0x0001*/
dev->flags = IFF_LOOPBACK;

@@ -725,7 +725,7 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
ssize_t n;

if (q->flags & IFF_VNET_HDR) {
vnet_hdr_len = q->vnet_hdr_sz;
vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);

err = -EINVAL;
if (len < vnet_hdr_len)
@@ -865,7 +865,7 @@ static ssize_t macvtap_put_user(struct macvtap_queue *q,

if (q->flags & IFF_VNET_HDR) {
struct virtio_net_hdr vnet_hdr;
vnet_hdr_len = q->vnet_hdr_sz;
vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);
if (iov_iter_count(iter) < vnet_hdr_len)
return -EINVAL;

@@ -1105,9 +1105,11 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
}

if (tun->flags & IFF_VNET_HDR) {
if (len < tun->vnet_hdr_sz)
int vnet_hdr_sz = READ_ONCE(tun->vnet_hdr_sz);

if (len < vnet_hdr_sz)
return -EINVAL;
len -= tun->vnet_hdr_sz;
len -= vnet_hdr_sz;

n = copy_from_iter(&gso, sizeof(gso), from);
if (n != sizeof(gso))
@@ -1119,7 +1121,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,

if (tun16_to_cpu(tun, gso.hdr_len) > len)
return -EINVAL;
iov_iter_advance(from, tun->vnet_hdr_sz - sizeof(gso));
iov_iter_advance(from, vnet_hdr_sz - sizeof(gso));
}

if ((tun->flags & TUN_TYPE_MASK) == IFF_TAP) {
@@ -1302,7 +1304,7 @@ static ssize_t tun_put_user(struct tun_struct *tun,
vlan_hlen = VLAN_HLEN;

if (tun->flags & IFF_VNET_HDR)
vnet_hdr_sz = tun->vnet_hdr_sz;
vnet_hdr_sz = READ_ONCE(tun->vnet_hdr_sz);

total = skb->len + vlan_hlen + vnet_hdr_sz;

@@ -502,8 +502,7 @@ ath5k_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
break;
return -EOPNOTSUPP;
default:
WARN_ON(1);
return -EINVAL;
return -EOPNOTSUPP;
}

mutex_lock(&ah->lock);

@@ -73,13 +73,13 @@
#define AR9300_OTP_BASE \
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x30000 : 0x14000)
#define AR9300_OTP_STATUS \
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x30018 : 0x15f18)
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x31018 : 0x15f18)
#define AR9300_OTP_STATUS_TYPE 0x7
#define AR9300_OTP_STATUS_VALID 0x4
#define AR9300_OTP_STATUS_ACCESS_BUSY 0x2
#define AR9300_OTP_STATUS_SM_BUSY 0x1
#define AR9300_OTP_READ_DATA \
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x3001c : 0x15f1c)
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x3101c : 0x15f1c)

enum targetPowerHTRates {
HT_TARGET_RATE_0_8_16,

@@ -959,6 +959,7 @@ struct ath_softc {
struct survey_info *cur_survey;
struct survey_info survey[ATH9K_NUM_CHANNELS];

spinlock_t intr_lock;
struct tasklet_struct intr_tq;
struct tasklet_struct bcon_tasklet;
struct ath_hw *sc_ah;