This is the 4.4.124 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlq2IVkACgkQONu9yGCS
aT5SGQ//eDH59qGQJFy8GwDQULnYV8JEeBZT2tIzx2LDVJvKKz/8BkJmcGZrQ0gH
9EnMCiPkbxRH6HV6Cu1SHcyLFK+iVXGEw28Uk/sEcesQSZvrRxUKIgRLvyWhANgP
E1FiPbI6V1tXTROmUk/lrGZYgHMVUMAa+kYRndpbijtB++7qO25mVngP0Yg6OveO
oyw0MRx+v9mwQS/SID2EtlXnZOVGM+7kd3gg5fLY5w0qJT1jHjhrtZNWyKH2SMrS
QnzkFDMGDIWk5TrHJY0LEvpZI/toKNtPrG/Mt5gOvtSgjmQ4+EEVndrlutPAOa3K
xF6ERjtyBIRRG2ftk122fm6brjpxyCoqycD8JgCm1BxalNN7Bg+1ogQCGwE0EWNp
yYEsxLmSbLllX/4rkZtx+WsgiG4rwNWG1/IsPsEo80La3M53WzA3My8E0aVLpbIj
aqkzR62lZ1TSRgtbFjm6Bl4DtpJH4f9uELbO0VBi7b8LRTUI99Qcz4bOoKH+WKOC
umvHsuyMwDm8wc0KhQ4hdyGb1le0nS4Y1xydp1p0pcASLfeA2VOIg20ZvxQVBTZA
rlHo7zEQVkVYvphoPONUodOUVZ8/AGxoVvim3HlI6s5VvtdF6k/hQUWaOuWDh59J
bjw2vnMWOLWSqmmkpK9HmU6ohIV310TJewPu8BShzAuZRSNDxl8=
=OCjx
-----END PGP SIGNATURE-----

Merge 4.4.124 into android-4.4

Changes in 4.4.124
    tpm: fix potential buffer overruns caused by bit glitches on the bus
    tpm_tis: fix potential buffer overruns caused by bit glitches on the bus
    SMB3: Validate negotiate request must always be signed
    CIFS: Enable encryption during session setup phase
    staging: android: ashmem: Fix possible deadlock in ashmem_ioctl
    platform/x86: asus-nb-wmi: Add wapf4 quirk for the X302UA
    regulator: anatop: set default voltage selector for pcie
    x86: i8259: export legacy_pic symbol
    rtc: cmos: Do not assume irq 8 for rtc when there are no legacy irqs
    Input: ar1021_i2c - fix too long name in driver's device table
    time: Change posix clocks ops interfaces to use timespec64
    ACPI/processor: Fix error handling in __acpi_processor_start()
    ACPI/processor: Replace racy task affinity logic
    cpufreq/sh: Replace racy task affinity logic
    genirq: Use irqd_get_trigger_type to compare the trigger type for shared IRQs
    i2c: i2c-scmi: add a MS HID
    net: ipv6: send unsolicited NA on admin up
    media/dvb-core: Race condition when writing to CAM
    spi: dw: Disable clock after unregistering the host
    ath: Fix updating radar flags for coutry code India
    clk: ns2: Correct SDIO bits
    scsi: virtio_scsi: Always try to read VPD pages
    KVM: PPC: Book3S PR: Exit KVM on failed mapping
    ARM: 8668/1: ftrace: Fix dynamic ftrace with DEBUG_RODATA and !FRAME_POINTER
    iommu/omap: Register driver before setting IOMMU ops
    md/raid10: wait up frozen array in handle_write_completed
    NFS: Fix missing pg_cleanup after nfs_pageio_cond_complete()
    tcp: remove poll() flakes with FastOpen
    e1000e: fix timing for 82579 Gigabit Ethernet controller
    ALSA: hda - Fix headset microphone detection for ASUS N551 and N751
    IB/ipoib: Fix deadlock between ipoib_stop and mcast join flow
    IB/ipoib: Update broadcast object if PKey value was changed in index 0
    HSI: ssi_protocol: double free in ssip_pn_xmit()
    IB/mlx4: Take write semaphore when changing the vma struct
    IB/mlx4: Change vma from shared to private
    ASoC: Intel: Skylake: Uninitialized variable in probe_codec()
    Fix driver usage of 128B WQEs when WQ_CREATE is V1.
    netfilter: xt_CT: fix refcnt leak on error path
    openvswitch: Delete conntrack entry clashing with an expectation.
    mmc: host: omap_hsmmc: checking for NULL instead of IS_ERR()
    wan: pc300too: abort path on failure
    qlcnic: fix unchecked return value
    scsi: mac_esp: Replace bogus memory barrier with spinlock
    infiniband/uverbs: Fix integer overflows
    NFS: don't try to cross a mountpount when there isn't one there.
    iio: st_pressure: st_accel: Initialise sensor platform data properly
    mt7601u: check return value of alloc_skb
    rndis_wlan: add return value validation
    Btrfs: send, fix file hole not being preserved due to inline extent
    mac80211: don't parse encrypted management frames in ieee80211_frame_acked
    mfd: palmas: Reset the POWERHOLD mux during power off
    mtip32xx: use runtime tag to initialize command header
    staging: unisys: visorhba: fix s-Par to boot with option CONFIG_VMAP_STACK set to y
    staging: wilc1000: fix unchecked return value
    mmc: sdhci-of-esdhc: limit SD clock for ls1012a/ls1046a
    ARM: DRA7: clockdomain: Change the CLKTRCTRL of CM_PCIE_CLKSTCTRL to SW_WKUP
    ipmi/watchdog: fix wdog hang on panic waiting for ipmi response
    ACPI / PMIC: xpower: Fix power_table addresses
    drm/nouveau/kms: Increase max retries in scanout position queries.
    bnx2x: Align RX buffers
    power: supply: pda_power: move from timer to delayed_work
    Input: twl4030-pwrbutton - use correct device for irq request
    md/raid10: skip spare disk as 'first' disk
    ia64: fix module loading for gcc-5.4
    tcm_fileio: Prevent information leak for short reads
    video: fbdev: udlfb: Fix buffer on stack
    sm501fb: don't return zero on failure path in sm501fb_start()
    net: hns: fix ethtool_get_strings overflow in hns driver
    cifs: small underflow in cnvrtDosUnixTm()
    rtc: ds1374: wdt: Fix issue with timeout scaling from secs to wdt ticks
    rtc: ds1374: wdt: Fix stop/start ioctl always returning -EINVAL
    perf tests kmod-path: Don't fail if compressed modules aren't supported
    Bluetooth: hci_qca: Avoid setup failure on missing rampatch
    media: c8sectpfe: fix potential NULL pointer dereference in c8sectpfe_timer_interrupt
    drm/msm: fix leak in failed get_pages
    RDMA/iwpm: Fix uninitialized error code in iwpm_send_mapinfo()
    rtlwifi: rtl_pci: Fix the bug when inactiveps is enabled.
    media: bt8xx: Fix err 'bt878_probe()'
    media: [RESEND] media: dvb-frontends: Add delay to Si2168 restart
    cros_ec: fix nul-termination for firmware build info
    platform/chrome: Use proper protocol transfer function
    mmc: avoid removing non-removable hosts during suspend
    IB/ipoib: Avoid memory leak if the SA returns a different DGID
    RDMA/cma: Use correct size when writing netlink stats
    IB/umem: Fix use of npages/nmap fields
    vgacon: Set VGA struct resource types
    drm/omap: DMM: Check for DMM readiness after successful transaction commit
    pty: cancel pty slave port buf's work in tty_release
    coresight: Fix disabling of CoreSight TPIU
    pinctrl: Really force states during suspend/resume
    iommu/vt-d: clean up pr_irq if request_threaded_irq fails
    ip6_vti: adjust vti mtu according to mtu of lower device
    RDMA/ocrdma: Fix permissions for OCRDMA_RESET_STATS
    nfsd4: permit layoutget of executable-only files
    clk: si5351: Rename internal plls to avoid name collisions
    dmaengine: ti-dma-crossbar: Fix event mapping for TPCC_EVT_MUX_60_63
    RDMA/ucma: Fix access to non-initialized CM_ID object
    Linux 4.4.124

Change-Id: Iac6f5bda7941f032c5b1f58750e084140b0e3f23
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 851fb4da32
102 changed files with 825 additions and 315 deletions
Makefile

@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 4
-SUBLEVEL = 123
+SUBLEVEL = 124
 EXTRAVERSION =
 NAME = Blurry Fish Butt
@@ -20,6 +20,7 @@
 struct pci_controller *pci_vga_hose;
 static struct resource alpha_vga = {
 	.name	= "alpha-vga+",
+	.flags	= IORESOURCE_IO,
 	.start	= 0x3C0,
 	.end	= 0x3DF
 };
@@ -29,11 +29,6 @@
 #endif

 #ifdef CONFIG_DYNAMIC_FTRACE
-#ifdef CONFIG_OLD_MCOUNT
-#define OLD_MCOUNT_ADDR	((unsigned long) mcount)
-#define OLD_FTRACE_ADDR ((unsigned long) ftrace_caller_old)
-
-#define OLD_NOP		0xe1a00000	/* mov r0, r0 */

 static int __ftrace_modify_code(void *data)
 {
@@ -51,6 +46,12 @@ void arch_ftrace_update_code(int command)
 	stop_machine(__ftrace_modify_code, &command, NULL);
 }

+#ifdef CONFIG_OLD_MCOUNT
+#define OLD_MCOUNT_ADDR	((unsigned long) mcount)
+#define OLD_FTRACE_ADDR ((unsigned long) ftrace_caller_old)
+
+#define OLD_NOP		0xe1a00000	/* mov r0, r0 */
+
 static unsigned long ftrace_nop_replace(struct dyn_ftrace *rec)
 {
 	return rec->arch.old_mcount ? OLD_NOP : NOP;
@@ -524,7 +524,7 @@ static struct clockdomain pcie_7xx_clkdm = {
 	.dep_bit	  = DRA7XX_PCIE_STATDEP_SHIFT,
 	.wkdep_srcs	  = pcie_wkup_sleep_deps,
 	.sleepdep_srcs	  = pcie_wkup_sleep_deps,
-	.flags		  = CLKDM_CAN_HWSUP_SWSUP,
+	.flags		  = CLKDM_CAN_SWSUP,
 };

 static struct clockdomain atl_7xx_clkdm = {
@@ -153,7 +153,7 @@ slot (const struct insn *insn)
 static int
 apply_imm64 (struct module *mod, struct insn *insn, uint64_t val)
 {
-	if (slot(insn) != 2) {
+	if (slot(insn) != 1 && slot(insn) != 2) {
 		printk(KERN_ERR "%s: invalid slot number %d for IMM64\n",
 		       mod->name, slot(insn));
 		return 0;
@@ -165,7 +165,7 @@ apply_imm64 (struct module *mod, struct insn *insn, uint64_t val)
 static int
 apply_imm60 (struct module *mod, struct insn *insn, uint64_t val)
 {
-	if (slot(insn) != 2) {
+	if (slot(insn) != 1 && slot(insn) != 2) {
 		printk(KERN_ERR "%s: invalid slot number %d for IMM60\n",
 		       mod->name, slot(insn));
 		return 0;
@@ -177,12 +177,15 @@ map_again:
 	ret = ppc_md.hpte_insert(hpteg, vpn, hpaddr, rflags, vflags,
 				 hpsize, hpsize, MMU_SEGSIZE_256M);

-	if (ret < 0) {
+	if (ret == -1) {
 		/* If we couldn't map a primary PTE, try a secondary */
 		hash = ~hash;
 		vflags ^= HPTE_V_SECONDARY;
 		attempt++;
 		goto map_again;
+	} else if (ret < 0) {
+		r = -EIO;
+		goto out_unlock;
 	} else {
 		trace_kvm_book3s_64_mmu_map(rflags, hpteg,
 					    vpn, hpaddr, orig_pte);
@@ -625,7 +625,11 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			kvmppc_mmu_unmap_page(vcpu, &pte);
 		}
 		/* The guest's PTE is not mapped yet. Map on the host */
-		kvmppc_mmu_map_page(vcpu, &pte, iswrite);
+		if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) == -EIO) {
+			/* Exit KVM if mapping failed */
+			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			return RESUME_HOST;
+		}
 		if (data)
 			vcpu->stat.sp_storage++;
 		else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
@@ -418,6 +418,7 @@ struct legacy_pic default_legacy_pic = {
 };

 struct legacy_pic *legacy_pic = &default_legacy_pic;
+EXPORT_SYMBOL(legacy_pic);

 static int __init i8259A_init_ops(void)
 {
@@ -28,97 +28,97 @@ static struct pmic_table power_table[] = {
 		.address = 0x00,
 		.reg = 0x13,
 		.bit = 0x05,
-	},
+	}, /* ALD1 */
 	{
 		.address = 0x04,
 		.reg = 0x13,
 		.bit = 0x06,
-	},
+	}, /* ALD2 */
 	{
 		.address = 0x08,
 		.reg = 0x13,
 		.bit = 0x07,
-	},
+	}, /* ALD3 */
 	{
 		.address = 0x0c,
 		.reg = 0x12,
 		.bit = 0x03,
-	},
+	}, /* DLD1 */
 	{
 		.address = 0x10,
 		.reg = 0x12,
 		.bit = 0x04,
-	},
+	}, /* DLD2 */
 	{
 		.address = 0x14,
 		.reg = 0x12,
 		.bit = 0x05,
-	},
+	}, /* DLD3 */
 	{
 		.address = 0x18,
 		.reg = 0x12,
 		.bit = 0x06,
-	},
+	}, /* DLD4 */
 	{
 		.address = 0x1c,
 		.reg = 0x12,
 		.bit = 0x00,
-	},
+	}, /* ELD1 */
 	{
 		.address = 0x20,
 		.reg = 0x12,
 		.bit = 0x01,
-	},
+	}, /* ELD2 */
 	{
 		.address = 0x24,
 		.reg = 0x12,
 		.bit = 0x02,
-	},
+	}, /* ELD3 */
 	{
 		.address = 0x28,
 		.reg = 0x13,
 		.bit = 0x02,
-	},
+	}, /* FLD1 */
 	{
 		.address = 0x2c,
 		.reg = 0x13,
 		.bit = 0x03,
-	},
+	}, /* FLD2 */
 	{
 		.address = 0x30,
 		.reg = 0x13,
 		.bit = 0x04,
-	},
+	}, /* FLD3 */
 	{
 		.address = 0x34,
 		.reg = 0x10,
 		.bit = 0x03,
-	},
+	}, /* BUC1 */
 	{
 		.address = 0x38,
 		.reg = 0x10,
-		.bit = 0x03,
-	},
+		.bit = 0x06,
+	}, /* BUC2 */
 	{
 		.address = 0x3c,
 		.reg = 0x10,
-		.bit = 0x06,
-	},
+		.bit = 0x05,
+	}, /* BUC3 */
 	{
 		.address = 0x40,
 		.reg = 0x10,
-		.bit = 0x05,
-	},
+		.bit = 0x04,
+	}, /* BUC4 */
 	{
 		.address = 0x44,
 		.reg = 0x10,
-		.bit = 0x04,
-	},
+		.bit = 0x01,
+	}, /* BUC5 */
 	{
 		.address = 0x48,
 		.reg = 0x10,
-		.bit = 0x01,
-	},
-	{
-		.address = 0x4c,
-		.reg = 0x10,
 		.bit = 0x00
-	},
+	}, /* BUC6 */
 };

 /* TMP0 - TMP5 are the same, all from GPADC */
@@ -259,6 +259,9 @@ static int __acpi_processor_start(struct acpi_device *device)
 	if (ACPI_SUCCESS(status))
 		return 0;

+	result = -ENODEV;
+	acpi_pss_perf_exit(pr, device);
+
 err_power_exit:
 	acpi_processor_power_exit(pr);
 	return result;
@@ -267,11 +270,16 @@ err_power_exit:
 static int acpi_processor_start(struct device *dev)
 {
 	struct acpi_device *device = ACPI_COMPANION(dev);
+	int ret;

 	if (!device)
 		return -ENODEV;

-	return __acpi_processor_start(device);
+	/* Protect against concurrent CPU hotplug operations */
+	get_online_cpus();
+	ret = __acpi_processor_start(device);
+	put_online_cpus();
+	return ret;
 }

 static int acpi_processor_stop(struct device *dev)
@@ -62,8 +62,8 @@ struct acpi_processor_throttling_arg {
 #define THROTTLING_POSTCHANGE      (2)

 static int acpi_processor_get_throttling(struct acpi_processor *pr);
-int acpi_processor_set_throttling(struct acpi_processor *pr,
-		int state, bool force);
+static int __acpi_processor_set_throttling(struct acpi_processor *pr,
+		int state, bool force, bool direct);

 static int acpi_processor_update_tsd_coord(void)
 {
@@ -891,7 +891,8 @@ static int acpi_processor_get_throttling_ptc(struct acpi_processor *pr)
 		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 			"Invalid throttling state, reset\n"));
 		state = 0;
-		ret = acpi_processor_set_throttling(pr, state, true);
+		ret = __acpi_processor_set_throttling(pr, state, true,
+						      true);
 		if (ret)
 			return ret;
 	}
@@ -901,36 +902,31 @@ static int acpi_processor_get_throttling_ptc(struct acpi_processor *pr)
 	return 0;
 }

+static long __acpi_processor_get_throttling(void *data)
+{
+	struct acpi_processor *pr = data;
+
+	return pr->throttling.acpi_processor_get_throttling(pr);
+}
+
 static int acpi_processor_get_throttling(struct acpi_processor *pr)
 {
-	cpumask_var_t saved_mask;
-	int ret;
-
 	if (!pr)
 		return -EINVAL;

 	if (!pr->flags.throttling)
 		return -ENODEV;

-	if (!alloc_cpumask_var(&saved_mask, GFP_KERNEL))
-		return -ENOMEM;
-
 	/*
-	 * Migrate task to the cpu pointed by pr.
+	 * This is either called from the CPU hotplug callback of
+	 * processor_driver or via the ACPI probe function. In the latter
+	 * case the CPU is not guaranteed to be online. Both call sites are
+	 * protected against CPU hotplug.
 	 */
-	cpumask_copy(saved_mask, &current->cpus_allowed);
-	/* FIXME: use work_on_cpu() */
-	if (set_cpus_allowed_ptr(current, cpumask_of(pr->id))) {
-		/* Can't migrate to the target pr->id CPU. Exit */
-		free_cpumask_var(saved_mask);
-		return -ENODEV;
-	}
-	ret = pr->throttling.acpi_processor_get_throttling(pr);
-	/* restore the previous state */
-	set_cpus_allowed_ptr(current, saved_mask);
-	free_cpumask_var(saved_mask);
+	if (!cpu_online(pr->id))
+		return -ENODEV;

-	return ret;
+	return work_on_cpu(pr->id, __acpi_processor_get_throttling, pr);
 }

 static int acpi_processor_get_fadt_info(struct acpi_processor *pr)
@@ -1080,8 +1076,15 @@ static long acpi_processor_throttling_fn(void *data)
 			arg->target_state, arg->force);
 }

-int acpi_processor_set_throttling(struct acpi_processor *pr,
-		int state, bool force)
+static int call_on_cpu(int cpu, long (*fn)(void *), void *arg, bool direct)
+{
+	if (direct)
+		return fn(arg);
+	return work_on_cpu(cpu, fn, arg);
+}
+
+static int __acpi_processor_set_throttling(struct acpi_processor *pr,
+		int state, bool force, bool direct)
 {
 	int ret = 0;
 	unsigned int i;
@@ -1130,7 +1133,8 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
 		arg.pr = pr;
 		arg.target_state = state;
 		arg.force = force;
-		ret = work_on_cpu(pr->id, acpi_processor_throttling_fn, &arg);
+		ret = call_on_cpu(pr->id, acpi_processor_throttling_fn, &arg,
+				  direct);
 	} else {
 		/*
 		 * When the T-state coordination is SW_ALL or HW_ALL,
@@ -1163,8 +1167,8 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
 			arg.pr = match_pr;
 			arg.target_state = state;
 			arg.force = force;
-			ret = work_on_cpu(pr->id, acpi_processor_throttling_fn,
-					  &arg);
+			ret = call_on_cpu(pr->id, acpi_processor_throttling_fn,
					  &arg, direct);
 		}
 	}
 	/*
@@ -1182,6 +1186,12 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
 	return ret;
 }

+int acpi_processor_set_throttling(struct acpi_processor *pr, int state,
+		bool force)
+{
+	return __acpi_processor_set_throttling(pr, state, force, false);
+}
+
 int acpi_processor_get_throttling_info(struct acpi_processor *pr)
 {
 	int result = 0;
@@ -169,6 +169,25 @@ static bool mtip_check_surprise_removal(struct pci_dev *pdev)
 	return false; /* device present */
 }

+/* we have to use runtime tag to setup command header */
+static void mtip_init_cmd_header(struct request *rq)
+{
+	struct driver_data *dd = rq->q->queuedata;
+	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	u32 host_cap_64 = readl(dd->mmio + HOST_CAP) & HOST_CAP_64;
+
+	/* Point the command headers at the command tables. */
+	cmd->command_header = dd->port->command_list +
+				(sizeof(struct mtip_cmd_hdr) * rq->tag);
+	cmd->command_header_dma = dd->port->command_list_dma +
+				(sizeof(struct mtip_cmd_hdr) * rq->tag);
+
+	if (host_cap_64)
+		cmd->command_header->ctbau = __force_bit2int cpu_to_le32((cmd->command_dma >> 16) >> 16);
+
+	cmd->command_header->ctba = __force_bit2int cpu_to_le32(cmd->command_dma & 0xFFFFFFFF);
+}
+
 static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
 {
 	struct request *rq;
@@ -180,6 +199,9 @@ static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
 	if (IS_ERR(rq))
 		return NULL;

+	/* Internal cmd isn't submitted via .queue_rq */
+	mtip_init_cmd_header(rq);
+
 	return blk_mq_rq_to_pdu(rq);
 }

@@ -3818,6 +3840,8 @@ static int mtip_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct request *rq = bd->rq;
 	int ret;

+	mtip_init_cmd_header(rq);
+
 	if (unlikely(mtip_check_unal_depth(hctx, rq)))
 		return BLK_MQ_RQ_QUEUE_BUSY;

@@ -3849,7 +3873,6 @@ static int mtip_init_cmd(void *data, struct request *rq, unsigned int hctx_idx,
 {
 	struct driver_data *dd = data;
 	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
-	u32 host_cap_64 = readl(dd->mmio + HOST_CAP) & HOST_CAP_64;

 	/*
 	 * For flush requests, request_idx starts at the end of the
@@ -3866,17 +3889,6 @@ static int mtip_init_cmd(void *data, struct request *rq, unsigned int hctx_idx,

 	memset(cmd->command, 0, CMD_DMA_ALLOC_SZ);

-	/* Point the command headers at the command tables. */
-	cmd->command_header = dd->port->command_list +
-				(sizeof(struct mtip_cmd_hdr) * request_idx);
-	cmd->command_header_dma = dd->port->command_list_dma +
-				(sizeof(struct mtip_cmd_hdr) * request_idx);
-
-	if (host_cap_64)
-		cmd->command_header->ctbau = __force_bit2int cpu_to_le32((cmd->command_dma >> 16) >> 16);
-
-	cmd->command_header->ctba = __force_bit2int cpu_to_le32(cmd->command_dma & 0xFFFFFFFF);
-
 	sg_init_table(cmd->sg, MTIP_MAX_SG);
 	return 0;
 }
@@ -936,6 +936,9 @@ static int qca_setup(struct hci_uart *hu)
 	if (!ret) {
 		set_bit(STATE_IN_BAND_SLEEP_ENABLED, &qca->flags);
 		qca_debugfs_init(hdev);
+	} else if (ret == -ENOENT) {
+		/* No patch/nvm-config found, run with original fw/config */
+		ret = 0;
 	}

 	/* Setup bdaddr */
@@ -515,7 +515,7 @@ static void panic_halt_ipmi_heartbeat(void)
 	msg.cmd = IPMI_WDOG_RESET_TIMER;
 	msg.data = NULL;
 	msg.data_len = 0;
-	atomic_add(2, &panic_done_count);
+	atomic_add(1, &panic_done_count);
 	rv = ipmi_request_supply_msgs(watchdog_user,
 				      (struct ipmi_addr *) &addr,
 				      0,
@@ -525,7 +525,7 @@ static void panic_halt_ipmi_heartbeat(void)
 				      &panic_halt_heartbeat_recv_msg,
 				      1);
 	if (rv)
-		atomic_sub(2, &panic_done_count);
+		atomic_sub(1, &panic_done_count);
 }

 static struct ipmi_smi_msg panic_halt_smi_msg = {
@@ -549,12 +549,12 @@ static void panic_halt_ipmi_set_timeout(void)
 	/* Wait for the messages to be free. */
 	while (atomic_read(&panic_done_count) != 0)
 		ipmi_poll_interface(watchdog_user);
-	atomic_add(2, &panic_done_count);
+	atomic_add(1, &panic_done_count);
 	rv = i_ipmi_set_timeout(&panic_halt_smi_msg,
 				&panic_halt_recv_msg,
 				&send_heartbeat_now);
 	if (rv) {
-		atomic_sub(2, &panic_done_count);
+		atomic_sub(1, &panic_done_count);
 		printk(KERN_WARNING PFX
 		       "Unable to extend the watchdog timeout.");
 	} else {
@@ -1040,6 +1040,11 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
 			break;

 		recd = be32_to_cpu(tpm_cmd.params.getrandom_out.rng_data_len);
+		if (recd > num_bytes) {
+			total = -EFAULT;
+			break;
+		}
+
 		memcpy(dest, tpm_cmd.params.getrandom_out.rng_data, recd);

 		dest += recd;
@@ -622,6 +622,11 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
 	if (!rc) {
 		data_len = be16_to_cpup(
 			(__be16 *) &buf.data[TPM_HEADER_SIZE + 4]);
+		if (data_len < MIN_KEY_SIZE || data_len > MAX_KEY_SIZE + 1) {
+			rc = -EFAULT;
+			goto out;
+		}
+
 		data = &buf.data[TPM_HEADER_SIZE + 6];

 		memcpy(payload->key, data, data_len - 1);
@@ -629,6 +634,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
 		payload->migratable = data[data_len - 1];
 	}

+out:
 	tpm_buf_destroy(&buf);
 	return rc;
 }
@@ -283,7 +283,8 @@ static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
 static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
 {
 	int size = 0;
-	int expected, status;
+	int status;
+	u32 expected;

 	if (count < TPM_HEADER_SIZE) {
 		size = -EIO;
@@ -298,7 +299,7 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
 	}

 	expected = be32_to_cpu(*(__be32 *) (buf + 2));
-	if (expected > count) {
+	if (expected > count || expected < TPM_HEADER_SIZE) {
 		size = -EIO;
 		goto out;
 	}
@@ -103,7 +103,7 @@ CLK_OF_DECLARE(ns2_genpll_src_clk, "brcm,ns2-genpll-scr",

 static const struct iproc_pll_ctrl genpll_sw = {
 	.flags = IPROC_CLK_AON | IPROC_CLK_PLL_SPLIT_STAT_CTRL,
-	.aon = AON_VAL(0x0, 2, 9, 8),
+	.aon = AON_VAL(0x0, 1, 11, 10),
 	.reset = RESET_VAL(0x4, 2, 1),
 	.dig_filter = DF_VAL(0x0, 9, 3, 5, 4, 2, 3),
 	.ndiv_int = REG_VAL(0x8, 4, 10),
@@ -72,7 +72,7 @@ static const char * const si5351_input_names[] = {
 	"xtal", "clkin"
 };
 static const char * const si5351_pll_names[] = {
-	"plla", "pllb", "vxco"
+	"si5351_plla", "si5351_pllb", "si5351_vxco"
 };
 static const char * const si5351_msynth_names[] = {
 	"ms0", "ms1", "ms2", "ms3", "ms4", "ms5", "ms6", "ms7"
@@ -30,11 +30,51 @@

 static DEFINE_PER_CPU(struct clk, sh_cpuclk);

+struct cpufreq_target {
+	struct cpufreq_policy	*policy;
+	unsigned int		freq;
+};
+
 static unsigned int sh_cpufreq_get(unsigned int cpu)
 {
 	return (clk_get_rate(&per_cpu(sh_cpuclk, cpu)) + 500) / 1000;
 }

+static long __sh_cpufreq_target(void *arg)
+{
+	struct cpufreq_target *target = arg;
+	struct cpufreq_policy *policy = target->policy;
+	int cpu = policy->cpu;
+	struct clk *cpuclk = &per_cpu(sh_cpuclk, cpu);
+	struct cpufreq_freqs freqs;
+	struct device *dev;
+	long freq;
+
+	if (smp_processor_id() != cpu)
+		return -ENODEV;
+
+	dev = get_cpu_device(cpu);
+
+	/* Convert target_freq from kHz to Hz */
+	freq = clk_round_rate(cpuclk, target->freq * 1000);
+
+	if (freq < (policy->min * 1000) || freq > (policy->max * 1000))
+		return -EINVAL;
+
+	dev_dbg(dev, "requested frequency %u Hz\n", target->freq * 1000);
+
+	freqs.old	= sh_cpufreq_get(cpu);
+	freqs.new	= (freq + 500) / 1000;
+	freqs.flags	= 0;
+
+	cpufreq_freq_transition_begin(target->policy, &freqs);
+	clk_set_rate(cpuclk, freq);
+	cpufreq_freq_transition_end(target->policy, &freqs, 0);
+
+	dev_dbg(dev, "set frequency %lu Hz\n", freq);
+	return 0;
+}
+
 /*
  * Here we notify other drivers of the proposed change and the final change.
  */
@@ -42,40 +82,9 @@ static int sh_cpufreq_target(struct cpufreq_policy *policy,
 			     unsigned int target_freq,
 			     unsigned int relation)
 {
-	unsigned int cpu = policy->cpu;
-	struct clk *cpuclk = &per_cpu(sh_cpuclk, cpu);
-	cpumask_t cpus_allowed;
-	struct cpufreq_freqs freqs;
-	struct device *dev;
-	long freq;
-
-	cpus_allowed = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of(cpu));
-
-	BUG_ON(smp_processor_id() != cpu);
-
-	dev = get_cpu_device(cpu);
-
-	/* Convert target_freq from kHz to Hz */
-	freq = clk_round_rate(cpuclk, target_freq * 1000);
-
-	if (freq < (policy->min * 1000) || freq > (policy->max * 1000))
-		return -EINVAL;
-
-	dev_dbg(dev, "requested frequency %u Hz\n", target_freq * 1000);
-
-	freqs.old	= sh_cpufreq_get(cpu);
-	freqs.new	= (freq + 500) / 1000;
-	freqs.flags	= 0;
-
-	cpufreq_freq_transition_begin(policy, &freqs);
-	set_cpus_allowed_ptr(current, &cpus_allowed);
-	clk_set_rate(cpuclk, freq);
-	cpufreq_freq_transition_end(policy, &freqs, 0);
-
-	dev_dbg(dev, "set frequency %lu Hz\n", freq);
-
-	return 0;
+	struct cpufreq_target data = { .policy = policy, .freq = target_freq };
+
+	return work_on_cpu(policy->cpu, __sh_cpufreq_target, &data);
 }

 static int sh_cpufreq_verify(struct cpufreq_policy *policy)
@@ -51,7 +51,15 @@ struct ti_am335x_xbar_map {

 static inline void ti_am335x_xbar_write(void __iomem *iomem, int event, u8 val)
 {
-	writeb_relaxed(val, iomem + event);
+	/*
+	 * TPCC_EVT_MUX_60_63 register layout is different than the
+	 * rest, in the sense, that event 63 is mapped to lowest byte
+	 * and event 60 is mapped to highest, handle it separately.
+	 */
+	if (event >= 60 && event <= 63)
+		writeb_relaxed(val, iomem + (63 - event % 4));
+	else
+		writeb_relaxed(val, iomem + event);
 }

 static void ti_am335x_xbar_free(struct device *dev, void *route_data)
@@ -89,13 +89,16 @@ static struct page **get_pages(struct drm_gem_object *obj)
 			return p;
 		}

+		msm_obj->pages = p;
+
 		msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
 		if (IS_ERR(msm_obj->sgt)) {
+			void *ptr = ERR_CAST(msm_obj->sgt);
+
 			dev_err(dev->dev, "failed to allocate sgt\n");
-			return ERR_CAST(msm_obj->sgt);
+			msm_obj->sgt = NULL;
+			return ptr;
 		}

-		msm_obj->pages = p;
-
 		/* For non-cached buffers, ensure the new pages are clean
 		 * because display controller, GPU, etc. are not coherent:
@@ -119,7 +122,10 @@ static void put_pages(struct drm_gem_object *obj)
 		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
 			dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
 					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-		sg_free_table(msm_obj->sgt);
+
+		if (msm_obj->sgt)
+			sg_free_table(msm_obj->sgt);
+
 		kfree(msm_obj->sgt);

 		if (use_pages(obj))
@@ -104,7 +104,7 @@ nouveau_display_scanoutpos_head(struct drm_crtc *crtc, int *vpos, int *hpos,
 	};
 	struct nouveau_display *disp = nouveau_display(crtc->dev);
 	struct drm_vblank_crtc *vblank = &crtc->dev->vblank[drm_crtc_index(crtc)];
-	int ret, retry = 1;
+	int ret, retry = 20;

 	do {
 		ret = nvif_mthd(&disp->disp, 0, &args, sizeof(args));
@@ -288,7 +288,12 @@ static int dmm_txn_commit(struct dmm_txn *txn, bool wait)
 					msecs_to_jiffies(100))) {
 			dev_err(dmm->dev, "timed out waiting for done\n");
 			ret = -ETIMEDOUT;
+			goto cleanup;
 		}
+
+		/* Check the engine status before continue */
+		ret = wait_status(engine, DMM_PATSTATUS_READY |
+				  DMM_PATSTATUS_VALID | DMM_PATSTATUS_DONE);
 	}

 cleanup:
@@ -976,7 +976,7 @@ static int ssip_pn_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	/* Pad to 32-bits - FIXME: Revisit*/
 	if ((skb->len & 3) && skb_pad(skb, 4 - (skb->len & 3)))
-		goto drop;
+		goto inc_dropped;

 	/*
 	 * Modem sends Phonet messages over SSI with its own endianess...
@@ -1028,8 +1028,9 @@ static int ssip_pn_xmit(struct sk_buff *skb, struct net_device *dev)
 drop2:
 	hsi_free_msg(msg);
 drop:
-	dev->stats.tx_dropped++;
 	dev_kfree_skb(skb);
+inc_dropped:
+	dev->stats.tx_dropped++;

 	return 0;
 }
@@ -45,8 +45,11 @@
 #define TPIU_ITATBCTR0		0xef8

 /** register definition **/
+/* FFSR - 0x300 */
+#define FFSR_FT_STOPPED		BIT(1)
 /* FFCR - 0x304 */
 #define FFCR_FON_MAN		BIT(6)
+#define FFCR_STOP_FI		BIT(12)

 /**
  * @base: memory mapped base address for this component.
@@ -85,10 +88,14 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
 {
 	CS_UNLOCK(drvdata->base);

-	/* Clear formatter controle reg. */
-	writel_relaxed(0x0, drvdata->base + TPIU_FFCR);
+	/* Clear formatter and stop on flush */
+	writel_relaxed(FFCR_STOP_FI, drvdata->base + TPIU_FFCR);
 	/* Generate manual flush */
-	writel_relaxed(FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
+	writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
+	/* Wait for flush to complete */
+	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
+	/* Wait for formatter to stop */
+	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);

 	CS_LOCK(drvdata->base);
 }
@@ -18,6 +18,9 @@
 #define ACPI_SMBUS_HC_CLASS		"smbus"
 #define ACPI_SMBUS_HC_DEVICE_NAME	"cmi"

+/* SMBUS HID definition as supported by Microsoft Windows */
+#define ACPI_SMBUS_MS_HID		"SMB0001"
+
 ACPI_MODULE_NAME("smbus_cmi");

 struct smbus_methods_t {
@@ -51,6 +54,7 @@ static const struct smbus_methods_t ibm_smbus_methods = {
 static const struct acpi_device_id acpi_smbus_cmi_ids[] = {
 	{"SMBUS01", (kernel_ulong_t)&smbus_methods},
 	{ACPI_SMBUS_IBM_HID, (kernel_ulong_t)&ibm_smbus_methods},
+	{ACPI_SMBUS_MS_HID, (kernel_ulong_t)&smbus_methods},
 	{"", 0}
 };
 MODULE_DEVICE_TABLE(acpi, acpi_smbus_cmi_ids);
@@ -628,6 +628,8 @@ static const struct iio_trigger_ops st_accel_trigger_ops = {
 int st_accel_common_probe(struct iio_dev *indio_dev)
 {
 	struct st_sensor_data *adata = iio_priv(indio_dev);
+	struct st_sensors_platform_data *pdata =
+		(struct st_sensors_platform_data *)adata->dev->platform_data;
 	int irq = adata->get_irq_data_ready(indio_dev);
 	int err;

@@ -652,9 +654,8 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
 		&adata->sensor_settings->fs.fs_avl[0];
 	adata->odr = adata->sensor_settings->odr.odr_avl[0].hz;

-	if (!adata->dev->platform_data)
-		adata->dev->platform_data =
-			(struct st_sensors_platform_data *)&default_accel_pdata;
+	if (!pdata)
+		pdata = (struct st_sensors_platform_data *)&default_accel_pdata;

 	err = st_sensors_init_sensor(indio_dev, adata->dev->platform_data);
 	if (err < 0)
@@ -436,6 +436,8 @@ static const struct iio_trigger_ops st_press_trigger_ops = {
 int st_press_common_probe(struct iio_dev *indio_dev)
 {
 	struct st_sensor_data *press_data = iio_priv(indio_dev);
+	struct st_sensors_platform_data *pdata =
+		(struct st_sensors_platform_data *)press_data->dev->platform_data;
 	int irq = press_data->get_irq_data_ready(indio_dev);
 	int err;

@@ -464,10 +466,8 @@ int st_press_common_probe(struct iio_dev *indio_dev)
 	press_data->odr = press_data->sensor_settings->odr.odr_avl[0].hz;

 	/* Some devices don't support a data ready pin. */
-	if (!press_data->dev->platform_data &&
-	    press_data->sensor_settings->drdy_irq.addr)
-		press_data->dev->platform_data =
-			(struct st_sensors_platform_data *)&default_press_pdata;
+	if (!pdata && press_data->sensor_settings->drdy_irq.addr)
+		pdata = (struct st_sensors_platform_data *)&default_press_pdata;

 	err = st_sensors_init_sensor(indio_dev, press_data->dev->platform_data);
 	if (err < 0)
@@ -3743,6 +3743,9 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
 	struct cma_multicast *mc;
 	int ret;

+	if (!id->device)
+		return -EINVAL;
+
 	id_priv = container_of(id, struct rdma_id_private, id);
 	if (!cma_comp(id_priv, RDMA_CM_ADDR_BOUND) &&
 	    !cma_comp(id_priv, RDMA_CM_ADDR_RESOLVED))
@@ -4007,7 +4010,7 @@ static int cma_get_id_stats(struct sk_buff *skb, struct netlink_callback *cb)
 				RDMA_NL_RDMA_CM_ATTR_SRC_ADDR))
 			goto out;
 		if (ibnl_put_attr(skb, nlh,
-				  rdma_addr_size(cma_src_addr(id_priv)),
+				  rdma_addr_size(cma_dst_addr(id_priv)),
 				  cma_dst_addr(id_priv),
 				  RDMA_NL_RDMA_CM_ATTR_DST_ADDR))
 			goto out;
@@ -663,6 +663,7 @@ int iwpm_send_mapinfo(u8 nl_client, int iwpm_pid)
 	}
 	skb_num++;
 	spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
+	ret = -EINVAL;
 	for (i = 0; i < IWPM_MAPINFO_HASH_SIZE; i++) {
 		hlist_for_each_entry(map_info, &iwpm_hash_bucket[i],
 				     hlist_node) {
@@ -354,7 +354,7 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
 		return -EINVAL;
 	}

-	ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->nmap, dst, length,
+	ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->npages, dst, length,
				 offset + ib_umem_offset(umem));

 	if (ret < 0)
@@ -2436,9 +2436,13 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file,

 static void *alloc_wr(size_t wr_size, __u32 num_sge)
 {
+	if (num_sge >= (U32_MAX - ALIGN(wr_size, sizeof (struct ib_sge))) /
+		       sizeof (struct ib_sge))
+		return NULL;
+
 	return kmalloc(ALIGN(wr_size, sizeof (struct ib_sge)) +
		       num_sge * sizeof (struct ib_sge), GFP_KERNEL);
-};
+}

 ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
			    struct ib_device *ib_dev,
@@ -2665,6 +2669,13 @@ static struct ib_recv_wr *ib_uverbs_unmarshall_recv(const char __user *buf,
 			goto err;
 		}

+		if (user_wr->num_sge >=
+		    (U32_MAX - ALIGN(sizeof *next, sizeof (struct ib_sge))) /
+		    sizeof (struct ib_sge)) {
+			ret = -EINVAL;
+			goto err;
+		}
+
 		next = kmalloc(ALIGN(sizeof *next, sizeof (struct ib_sge)) +
			       user_wr->num_sge * sizeof (struct ib_sge),
			       GFP_KERNEL);
@@ -1041,7 +1041,7 @@ static void mlx4_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
 	/* need to protect from a race on closing the vma as part of
 	 * mlx4_ib_vma_close().
 	 */
-	down_read(&owning_mm->mmap_sem);
+	down_write(&owning_mm->mmap_sem);
 	for (i = 0; i < HW_BAR_COUNT; i++) {
 		vma = context->hw_bar_info[i].vma;
 		if (!vma)
@@ -1055,11 +1055,13 @@ static void mlx4_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
 			BUG_ON(1);
 		}

+		context->hw_bar_info[i].vma->vm_flags &=
+			~(VM_SHARED | VM_MAYSHARE);
 		/* context going to be destroyed, should not access ops any more */
 		context->hw_bar_info[i].vma->vm_ops = NULL;
 	}

-	up_read(&owning_mm->mmap_sem);
+	up_write(&owning_mm->mmap_sem);
 	mmput(owning_mm);
 	put_task_struct(owning_process);
 }
@@ -834,7 +834,7 @@ void ocrdma_add_port_stats(struct ocrdma_dev *dev)

 	dev->reset_stats.type = OCRDMA_RESET_STATS;
 	dev->reset_stats.dev = dev;
-	if (!debugfs_create_file("reset_stats", S_IRUSR, dev->dir,
+	if (!debugfs_create_file("reset_stats", 0200, dev->dir,
				 &dev->reset_stats, &ocrdma_dbg_ops))
 		goto err;

@@ -945,6 +945,19 @@ static inline int update_parent_pkey(struct ipoib_dev_priv *priv)
 		 */
 		priv->dev->broadcast[8] = priv->pkey >> 8;
 		priv->dev->broadcast[9] = priv->pkey & 0xff;
+
+		/*
+		 * Update the broadcast address in the priv->broadcast object,
+		 * in case it already exists, otherwise no one will do that.
+		 */
+		if (priv->broadcast) {
+			spin_lock_irq(&priv->lock);
+			memcpy(priv->broadcast->mcmember.mgid.raw,
+			       priv->dev->broadcast + 4,
+			       sizeof(union ib_gid));
+			spin_unlock_irq(&priv->lock);
+		}
+
 		return 0;
 	}

@@ -724,6 +724,22 @@ static void path_rec_completion(int status,
 	spin_lock_irqsave(&priv->lock, flags);

 	if (!IS_ERR_OR_NULL(ah)) {
+		/*
+		 * pathrec.dgid is used as the database key from the LLADDR,
+		 * it must remain unchanged even if the SA returns a different
+		 * GID to use in the AH.
+		 */
+		if (memcmp(pathrec->dgid.raw, path->pathrec.dgid.raw,
+			   sizeof(union ib_gid))) {
+			ipoib_dbg(
+				priv,
+				"%s got PathRec for gid %pI6 while asked for %pI6\n",
+				dev->name, pathrec->dgid.raw,
+				path->pathrec.dgid.raw);
+			memcpy(pathrec->dgid.raw, path->pathrec.dgid.raw,
+			       sizeof(union ib_gid));
+		}
+
 		path->pathrec = *pathrec;

 		old_ah = path->ah;
@@ -473,6 +473,9 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
 	    !test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
 		return -EINVAL;

+	init_completion(&mcast->done);
+	set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+
 	ipoib_dbg_mcast(priv, "joining MGID %pI6\n", mcast->mcmember.mgid.raw);

 	rec.mgid = mcast->mcmember.mgid;
@@ -631,8 +634,6 @@ void ipoib_mcast_join_task(struct work_struct *work)
 			if (mcast->backoff == 1 ||
 			    time_after_eq(jiffies, mcast->delay_until)) {
 				/* Found the next unjoined group */
-				init_completion(&mcast->done);
-				set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
 				if (ipoib_mcast_join(dev, mcast)) {
 					spin_unlock_irq(&priv->lock);
 					return;
@@ -652,11 +653,9 @@ out:
 		queue_delayed_work(priv->wq, &priv->mcast_task,
				   delay_until - jiffies);
 	}
-	if (mcast) {
-		init_completion(&mcast->done);
-		set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+	if (mcast)
 		ipoib_mcast_join(dev, mcast);
-	}

 	spin_unlock_irq(&priv->lock);
 }
@@ -70,7 +70,7 @@ static int twl4030_pwrbutton_probe(struct platform_device *pdev)
 	pwr->phys = "twl4030_pwrbutton/input0";
 	pwr->dev.parent = &pdev->dev;

-	err = devm_request_threaded_irq(&pwr->dev, irq, NULL, powerbutton_irq,
+	err = devm_request_threaded_irq(&pdev->dev, irq, NULL, powerbutton_irq,
			IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
			IRQF_ONESHOT,
			"twl4030_pwrbutton", pwr);
@@ -152,7 +152,7 @@ static int __maybe_unused ar1021_i2c_resume(struct device *dev)
 static SIMPLE_DEV_PM_OPS(ar1021_i2c_pm, ar1021_i2c_suspend, ar1021_i2c_resume);

 static const struct i2c_device_id ar1021_i2c_id[] = {
-	{ "MICROCHIP_AR1021_I2C", 0 },
+	{ "ar1021", 0 },
 	{ },
 };
 MODULE_DEVICE_TABLE(i2c, ar1021_i2c_id);
@@ -127,6 +127,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
 		pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n",
		       iommu->name);
 		dmar_free_hwirq(irq);
+		iommu->pr_irq = 0;
 		goto err;
 	}
 	dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
@@ -142,9 +143,11 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
 	dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
 	dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL);

-	free_irq(iommu->pr_irq, iommu);
-	dmar_free_hwirq(iommu->pr_irq);
-	iommu->pr_irq = 0;
+	if (iommu->pr_irq) {
+		free_irq(iommu->pr_irq, iommu);
+		dmar_free_hwirq(iommu->pr_irq);
+		iommu->pr_irq = 0;
+	}

 	free_pages((unsigned long)iommu->prq, PRQ_ORDER);
 	iommu->prq = NULL;
@@ -1295,6 +1295,7 @@ static int __init omap_iommu_init(void)
 	const unsigned long flags = SLAB_HWCACHE_ALIGN;
 	size_t align = 1 << 10; /* L2 pagetable alignement */
 	struct device_node *np;
+	int ret;

 	np = of_find_matching_node(NULL, omap_iommu_of_match);
 	if (!np)
@@ -1308,11 +1309,25 @@ static int __init omap_iommu_init(void)
 		return -ENOMEM;
 	iopte_cachep = p;

-	bus_set_iommu(&platform_bus_type, &omap_iommu_ops);
-
 	omap_iommu_debugfs_init();

-	return platform_driver_register(&omap_iommu_driver);
+	ret = platform_driver_register(&omap_iommu_driver);
+	if (ret) {
+		pr_err("%s: failed to register driver\n", __func__);
+		goto fail_driver;
+	}
+
+	ret = bus_set_iommu(&platform_bus_type, &omap_iommu_ops);
+	if (ret)
+		goto fail_bus;
+
+	return 0;
+
+fail_bus:
+	platform_driver_unregister(&omap_iommu_driver);
+fail_driver:
+	kmem_cache_destroy(iopte_cachep);
+	return ret;
 }
 subsys_initcall(omap_iommu_init);
 /* must be ready before omap3isp is probed */
@@ -2698,6 +2698,11 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
 		list_add(&r10_bio->retry_list, &conf->bio_end_io_list);
 		conf->nr_queued++;
 		spin_unlock_irq(&conf->device_lock);
+		/*
+		 * In case freeze_array() is waiting for condition
+		 * nr_pending == nr_queued + extra to be true.
+		 */
+		wake_up(&conf->wait_barrier);
 		md_wakeup_thread(conf->mddev->thread);
 	} else {
 		if (test_bit(R10BIO_WriteError,
@@ -4039,6 +4044,7 @@ static int raid10_start_reshape(struct mddev *mddev)
 				diff = 0;
 			if (first || diff < min_offset_diff)
 				min_offset_diff = diff;
+			first = 0;
 		}
 	}

@@ -750,6 +750,29 @@ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot, u8 * b
 		goto exit;
 	}

+	/*
+	 * It may need some time for the CAM to settle down, or there might
+	 * be a race condition between the CAM, writing HC and our last
+	 * check for DA. This happens, if the CAM asserts DA, just after
+	 * checking DA before we are setting HC. In this case it might be
+	 * a bug in the CAM to keep the FR bit, the lower layer/HW
+	 * communication requires a longer timeout or the CAM needs more
+	 * time internally. But this happens in reality!
+	 * We need to read the status from the HW again and do the same
+	 * we did for the previous check for DA
+	 */
+	status = ca->pub->read_cam_control(ca->pub, slot, CTRLIF_STATUS);
+	if (status < 0)
+		goto exit;
+
+	if (status & (STATUSREG_DA | STATUSREG_RE)) {
+		if (status & STATUSREG_DA)
+			dvb_ca_en50221_thread_wakeup(ca);
+
+		status = -EAGAIN;
+		goto exit;
+	}
+
 	/* send the amount of data */
 	if ((status = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_SIZE_HIGH, bytes_write >> 8)) != 0)
 		goto exit;
@@ -14,6 +14,8 @@
  * GNU General Public License for more details.
  */

+#include <linux/delay.h>
+
 #include "si2168_priv.h"

 static const struct dvb_frontend_ops si2168_ops;
@@ -420,6 +422,7 @@ static int si2168_init(struct dvb_frontend *fe)
 	if (ret)
 		goto err;

+	udelay(100);
 	memcpy(cmd.args, "\x85", 1);
 	cmd.wlen = 1;
 	cmd.rlen = 1;
@@ -422,8 +422,7 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
	       bt878_num);
 	if (bt878_num >= BT878_MAX) {
 		printk(KERN_ERR "bt878: Too many devices inserted\n");
-		result = -ENOMEM;
-		goto fail0;
+		return -ENOMEM;
 	}
 	if (pci_enable_device(dev))
 		return -EIO;
@@ -83,7 +83,7 @@ static void c8sectpfe_timer_interrupt(unsigned long ac8sectpfei)
 static void channel_swdemux_tsklet(unsigned long data)
 {
 	struct channel_info *channel = (struct channel_info *)data;
-	struct c8sectpfei *fei = channel->fei;
+	struct c8sectpfei *fei;
 	unsigned long wp, rp;
 	int pos, num_packets, n, size;
 	u8 *buf;
@@ -91,6 +91,8 @@ static void channel_swdemux_tsklet(unsigned long data)
 	if (unlikely(!channel || !channel->irec))
 		return;

+	fei = channel->fei;
+
 	wp = readl(channel->irec + DMA_PRDS_BUSWP_TP(0));
 	rp = readl(channel->irec + DMA_PRDS_BUSRP_TP(0));

@@ -430,6 +430,20 @@ static void palmas_power_off(void)
 {
 	unsigned int addr;
 	int ret, slave;
+	struct device_node *np = palmas_dev->dev->of_node;
+
+	if (of_property_read_bool(np, "ti,palmas-override-powerhold")) {
+		addr = PALMAS_BASE_TO_REG(PALMAS_PU_PD_OD_BASE,
+					  PALMAS_PRIMARY_SECONDARY_PAD2);
+		slave = PALMAS_BASE_TO_SLAVE(PALMAS_PU_PD_OD_BASE);
+
+		ret = regmap_update_bits(palmas_dev->regmap[slave], addr,
+				PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_7_MASK, 0);
+		if (ret)
+			dev_err(palmas_dev->dev,
+				"Unable to write PRIMARY_SECONDARY_PAD2 %d\n",
+				ret);
+	}

 	if (!palmas_dev)
 		return;
@@ -2831,6 +2831,14 @@ int mmc_pm_notify(struct notifier_block *notify_block,
 		if (!err)
 			break;

+		if (!mmc_card_is_removable(host)) {
+			dev_warn(mmc_dev(host),
+				 "pre_suspend failed for non-removable host: "
+				 "%d\n", err);
+			/* Avoid removing non-removable hosts */
+			break;
+		}
+
 		/* Calling bus_ops->remove() with a claimed host can deadlock */
 		host->bus_ops->remove(host);
 		mmc_claim_host(host);
@@ -1776,8 +1776,8 @@ static int omap_hsmmc_configure_wake_irq(struct omap_hsmmc_host *host)
 	 */
 	if (host->pdata->controller_flags & OMAP_HSMMC_SWAKEUP_MISSING) {
 		struct pinctrl *p = devm_pinctrl_get(host->dev);
-		if (!p) {
-			ret = -ENODEV;
+		if (IS_ERR(p)) {
+			ret = PTR_ERR(p);
 			goto err_free_irq;
 		}
 		if (IS_ERR(pinctrl_lookup_state(p, PINCTRL_STATE_DEFAULT))) {
@@ -418,6 +418,20 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
 	if (esdhc->vendor_ver < VENDOR_V_23)
 		pre_div = 2;

+	/*
+	 * Limit SD clock to 167MHz for ls1046a according to its datasheet
+	 */
+	if (clock > 167000000 &&
+	    of_find_compatible_node(NULL, NULL, "fsl,ls1046a-esdhc"))
+		clock = 167000000;
+
+	/*
+	 * Limit SD clock to 125MHz for ls1012a according to its datasheet
+	 */
+	if (clock > 125000000 &&
+	    of_find_compatible_node(NULL, NULL, "fsl,ls1012a-esdhc"))
+		clock = 125000000;
+
 	/* Workaround to reduce the clock frequency for p1010 esdhc */
 	if (of_find_compatible_node(NULL, NULL, "fsl,p1010-esdhc")) {
 		if (clock > 20000000)
@@ -2044,6 +2044,7 @@ static void bnx2x_set_rx_buf_size(struct bnx2x *bp)
			  ETH_OVREHEAD +
			  mtu +
			  BNX2X_FW_RX_ALIGN_END;
+		fp->rx_buf_size = SKB_DATA_ALIGN(fp->rx_buf_size);
 		/* Note : rx_buf_size doesn't take into account NET_SKB_PAD */
 		if (fp->rx_buf_size + NET_SKB_PAD <= PAGE_SIZE)
 			fp->rx_frag_size = fp->rx_buf_size + NET_SKB_PAD;
@@ -648,7 +648,7 @@ static void hns_gmac_get_strings(u32 stringset, u8 *data)

 static int hns_gmac_get_sset_count(int stringset)
 {
-	if (stringset == ETH_SS_STATS)
+	if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
 		return ARRAY_SIZE(g_gmac_stats_string);

 	return 0;
@@ -384,7 +384,7 @@ void hns_ppe_update_stats(struct hns_ppe_cb *ppe_cb)

 int hns_ppe_get_sset_count(int stringset)
 {
-	if (stringset == ETH_SS_STATS)
+	if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
 		return ETH_PPE_STATIC_NUM;
 	return 0;
 }
@@ -807,7 +807,7 @@ void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data)
  */
 int hns_rcb_get_ring_sset_count(int stringset)
 {
-	if (stringset == ETH_SS_STATS)
+	if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
 		return HNS_RING_STATIC_REG_NUM;

 	return 0;
@@ -776,7 +776,7 @@ static void hns_xgmac_get_strings(u32 stringset, u8 *data)
  */
 static int hns_xgmac_get_sset_count(int stringset)
 {
-	if (stringset == ETH_SS_STATS)
+	if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
 		return ARRAY_SIZE(g_xgmac_stats_string);

 	return 0;
@@ -3526,6 +3526,12 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)

 	switch (hw->mac.type) {
 	case e1000_pch2lan:
+		/* Stable 96MHz frequency */
+		incperiod = INCPERIOD_96MHz;
+		incvalue = INCVALUE_96MHz;
+		shift = INCVALUE_SHIFT_96MHz;
+		adapter->cc.shift = shift + INCPERIOD_SHIFT_96MHz;
+		break;
 	case e1000_pch_lpt:
 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
 			/* Stable 96MHz frequency */
@@ -127,6 +127,8 @@ static int qlcnic_sriov_virtid_fn(struct qlcnic_adapter *adapter, int vf_id)
 		return 0;

 	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);
+	if (!pos)
+		return 0;
 	pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &offset);
 	pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &stride);

@@ -347,6 +347,7 @@ static int pc300_pci_init_one(struct pci_dev *pdev,
	    card->rambase == NULL) {
 		pr_err("ioremap() failed\n");
 		pc300_pci_remove_one(pdev);
+		return -ENOMEM;
 	}

 	/* PLX PCI 9050 workaround for local configuration register read bug */
@@ -254,8 +254,12 @@ bool ath_is_49ghz_allowed(u16 regdomain)
 EXPORT_SYMBOL(ath_is_49ghz_allowed);

 /* Frequency is one where radar detection is required */
-static bool ath_is_radar_freq(u16 center_freq)
+static bool ath_is_radar_freq(u16 center_freq,
+			      struct ath_regulatory *reg)
+
 {
+	if (reg->country_code == CTRY_INDIA)
+		return (center_freq >= 5500 && center_freq <= 5700);
 	return (center_freq >= 5260 && center_freq <= 5700);
 }

@@ -306,7 +310,7 @@ __ath_reg_apply_beaconing_flags(struct wiphy *wiphy,
				enum nl80211_reg_initiator initiator,
				struct ieee80211_channel *ch)
 {
-	if (ath_is_radar_freq(ch->center_freq) ||
+	if (ath_is_radar_freq(ch->center_freq, reg) ||
	    (ch->flags & IEEE80211_CHAN_RADAR))
 		return;

@@ -395,8 +399,9 @@ ath_reg_apply_ir_flags(struct wiphy *wiphy,
 	}
 }

-/* Always apply Radar/DFS rules on freq range 5260 MHz - 5700 MHz */
-static void ath_reg_apply_radar_flags(struct wiphy *wiphy)
+/* Always apply Radar/DFS rules on freq range 5500 MHz - 5700 MHz */
+static void ath_reg_apply_radar_flags(struct wiphy *wiphy,
+				      struct ath_regulatory *reg)
 {
 	struct ieee80211_supported_band *sband;
 	struct ieee80211_channel *ch;
@@ -409,7 +414,7 @@ static void ath_reg_apply_radar_flags(struct wiphy *wiphy)

 	for (i = 0; i < sband->n_channels; i++) {
 		ch = &sband->channels[i];
-		if (!ath_is_radar_freq(ch->center_freq))
+		if (!ath_is_radar_freq(ch->center_freq, reg))
 			continue;
 		/* We always enable radar detection/DFS on this
 		 * frequency range. Additionally we also apply on
@@ -505,7 +510,7 @@ void ath_reg_notifier_apply(struct wiphy *wiphy,
 	struct ath_common *common = container_of(reg, struct ath_common,
						 regulatory);
 	/* We always apply this */
-	ath_reg_apply_radar_flags(wiphy);
+	ath_reg_apply_radar_flags(wiphy, reg);

 	/*
 	 * This would happen when we have sent a custom regulatory request
@@ -653,7 +658,7 @@ ath_regd_init_wiphy(struct ath_regulatory *reg,
 	}

 	wiphy_apply_custom_regulatory(wiphy, regd);
-	ath_reg_apply_radar_flags(wiphy);
+	ath_reg_apply_radar_flags(wiphy, reg);
 	ath_reg_apply_world_flags(wiphy, NL80211_REGDOM_SET_BY_DRIVER, reg);
 	return 0;
 }
|
@ -66,8 +66,10 @@ mt7601u_mcu_msg_alloc(struct mt7601u_dev *dev, const void *data, int len)
|
|||
WARN_ON(len % 4); /* if length is not divisible by 4 we need to pad */
|
||||
|
||||
skb = alloc_skb(len + MT_DMA_HDR_LEN + 4, GFP_KERNEL);
|
||||
skb_reserve(skb, MT_DMA_HDR_LEN);
|
||||
memcpy(skb_put(skb, len), data, len);
|
||||
if (skb) {
|
||||
skb_reserve(skb, MT_DMA_HDR_LEN);
|
||||
memcpy(skb_put(skb, len), data, len);
|
||||
}
|
||||
|
||||
return skb;
|
||||
}
|
||||
|
@ -170,6 +172,8 @@ static int mt7601u_mcu_function_select(struct mt7601u_dev *dev,
|
|||
};
|
||||
|
||||
skb = mt7601u_mcu_msg_alloc(dev, &msg, sizeof(msg));
|
||||
if (!skb)
|
||||
return -ENOMEM;
|
||||
return mt7601u_mcu_msg_send(dev, skb, CMD_FUN_SET_OP, func == 5);
|
||||
}
|
||||
|
||||
|
@ -205,6 +209,8 @@ mt7601u_mcu_calibrate(struct mt7601u_dev *dev, enum mcu_calibrate cal, u32 val)
|
|||
};
|
||||
|
||||
skb = mt7601u_mcu_msg_alloc(dev, &msg, sizeof(msg));
|
||||
if (!skb)
|
||||
return -ENOMEM;
|
||||
return mt7601u_mcu_msg_send(dev, skb, CMD_CALIBRATION_OP, true);
|
||||
}
|
||||
|
||||
|
|
|
@ -1572,7 +1572,14 @@ int rtl_pci_reset_trx_ring(struct ieee80211_hw *hw)
|
|||
dev_kfree_skb_irq(skb);
|
||||
ring->idx = (ring->idx + 1) % ring->entries;
|
||||
}
|
||||
|
||||
if (rtlpriv->use_new_trx_flow) {
|
||||
rtlpci->tx_ring[i].cur_tx_rp = 0;
|
||||
rtlpci->tx_ring[i].cur_tx_wp = 0;
|
||||
}
|
||||
|
||||
ring->idx = 0;
|
||||
ring->entries = rtlpci->txringcount[i];
|
||||
}
|
||||
}
|
||||
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
|
||||
|
|
|
@ -3425,6 +3425,10 @@ static int rndis_wlan_bind(struct usbnet *usbdev, struct usb_interface *intf)
|
|||
|
||||
/* because rndis_command() sleeps we need to use workqueue */
|
||||
priv->workqueue = create_singlethread_workqueue("rndis_wlan");
|
||||
if (!priv->workqueue) {
|
||||
wiphy_free(wiphy);
|
||||
return -ENOMEM;
|
||||
}
|
||||
INIT_WORK(&priv->work, rndis_wlan_worker);
|
||||
INIT_DELAYED_WORK(&priv->dev_poller_work, rndis_device_poller);
|
||||
INIT_DELAYED_WORK(&priv->scan_work, rndis_get_scan_results);
|
||||
|
|
|
@@ -979,19 +979,16 @@ struct pinctrl_state *pinctrl_lookup_state(struct pinctrl *p,
 EXPORT_SYMBOL_GPL(pinctrl_lookup_state);

 /**
- * pinctrl_select_state() - select/activate/program a pinctrl state to HW
+ * pinctrl_commit_state() - select/activate/program a pinctrl state to HW
  * @p: the pinctrl handle for the device that requests configuration
  * @state: the state handle to select/activate/program
  */
-int pinctrl_select_state(struct pinctrl *p, struct pinctrl_state *state)
+static int pinctrl_commit_state(struct pinctrl *p, struct pinctrl_state *state)
 {
 	struct pinctrl_setting *setting, *setting2;
 	struct pinctrl_state *old_state = p->state;
 	int ret;

-	if (p->state == state)
-		return 0;
-
 	if (p->state) {
 		/*
 		 * For each pinmux setting in the old state, forget SW's record

@@ -1055,6 +1052,19 @@ unapply_new_state:

 	return ret;
 }

+/**
+ * pinctrl_select_state() - select/activate/program a pinctrl state to HW
+ * @p: the pinctrl handle for the device that requests configuration
+ * @state: the state handle to select/activate/program
+ */
+int pinctrl_select_state(struct pinctrl *p, struct pinctrl_state *state)
+{
+	if (p->state == state)
+		return 0;
+
+	return pinctrl_commit_state(p, state);
+}
 EXPORT_SYMBOL_GPL(pinctrl_select_state);

 static void devm_pinctrl_release(struct device *dev, void *res)

@@ -1223,7 +1233,7 @@ void pinctrl_unregister_map(struct pinctrl_map const *map)
 int pinctrl_force_sleep(struct pinctrl_dev *pctldev)
 {
 	if (!IS_ERR(pctldev->p) && !IS_ERR(pctldev->hog_sleep))
-		return pinctrl_select_state(pctldev->p, pctldev->hog_sleep);
+		return pinctrl_commit_state(pctldev->p, pctldev->hog_sleep);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(pinctrl_force_sleep);

@@ -1235,7 +1245,7 @@ EXPORT_SYMBOL_GPL(pinctrl_force_sleep);
 int pinctrl_force_default(struct pinctrl_dev *pctldev)
 {
 	if (!IS_ERR(pctldev->p) && !IS_ERR(pctldev->hog_default))
-		return pinctrl_select_state(pctldev->p, pctldev->hog_default);
+		return pinctrl_commit_state(pctldev->p, pctldev->hog_default);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(pinctrl_force_default);
@@ -59,12 +59,14 @@ static int send_command(struct cros_ec_device *ec_dev,
 			struct cros_ec_command *msg)
 {
 	int ret;
+	int (*xfer_fxn)(struct cros_ec_device *ec, struct cros_ec_command *msg);

 	if (ec_dev->proto_version > 2)
-		ret = ec_dev->pkt_xfer(ec_dev, msg);
+		xfer_fxn = ec_dev->pkt_xfer;
 	else
-		ret = ec_dev->cmd_xfer(ec_dev, msg);
+		xfer_fxn = ec_dev->cmd_xfer;

+	ret = (*xfer_fxn)(ec_dev, msg);
 	if (msg->result == EC_RES_IN_PROGRESS) {
 		int i;
 		struct cros_ec_command *status_msg;

@@ -87,7 +89,7 @@ static int send_command(struct cros_ec_device *ec_dev,
 		for (i = 0; i < EC_COMMAND_RETRIES; i++) {
 			usleep_range(10000, 11000);

-			ret = ec_dev->cmd_xfer(ec_dev, status_msg);
+			ret = (*xfer_fxn)(ec_dev, status_msg);
 			if (ret < 0)
 				break;

@@ -187,7 +187,7 @@ static ssize_t show_ec_version(struct device *dev,
 		count += scnprintf(buf + count, PAGE_SIZE - count,
 				   "Build info: EC error %d\n", msg->result);
 	else {
-		msg->data[sizeof(msg->data) - 1] = '\0';
+		msg->data[EC_HOST_PARAM_SIZE - 1] = '\0';
 		count += scnprintf(buf + count, PAGE_SIZE - count,
 				   "Build info: %s\n", msg->data);
 	}
@@ -99,6 +99,15 @@ static const struct dmi_system_id asus_quirks[] = {
 		 */
 		.driver_data = &quirk_asus_wapf4,
 	},
+	{
+		.callback = dmi_matched,
+		.ident = "ASUSTeK COMPUTER INC. X302UA",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "X302UA"),
+		},
+		.driver_data = &quirk_asus_wapf4,
+	},
 	{
 		.callback = dmi_matched,
 		.ident = "ASUSTeK COMPUTER INC. X401U",
@@ -30,9 +30,9 @@ static inline unsigned int get_irq_flags(struct resource *res)
 static struct device *dev;
 static struct pda_power_pdata *pdata;
 static struct resource *ac_irq, *usb_irq;
-static struct timer_list charger_timer;
-static struct timer_list supply_timer;
-static struct timer_list polling_timer;
+static struct delayed_work charger_work;
+static struct delayed_work polling_work;
+static struct delayed_work supply_work;
 static int polling;
 static struct power_supply *pda_psy_ac, *pda_psy_usb;

@@ -140,7 +140,7 @@ static void update_charger(void)
 	}
 }

-static void supply_timer_func(unsigned long unused)
+static void supply_work_func(struct work_struct *work)
 {
 	if (ac_status == PDA_PSY_TO_CHANGE) {
 		ac_status = new_ac_status;

@@ -161,11 +161,12 @@ static void psy_changed(void)
 	 * Okay, charger set. Now wait a bit before notifying supplicants,
 	 * charge power should stabilize.
 	 */
-	mod_timer(&supply_timer,
-		  jiffies + msecs_to_jiffies(pdata->wait_for_charger));
+	cancel_delayed_work(&supply_work);
+	schedule_delayed_work(&supply_work,
+			      msecs_to_jiffies(pdata->wait_for_charger));
 }

-static void charger_timer_func(unsigned long unused)
+static void charger_work_func(struct work_struct *work)
 {
 	update_status();
 	psy_changed();

@@ -184,13 +185,14 @@ static irqreturn_t power_changed_isr(int irq, void *power_supply)
 	 * Wait a bit before reading ac/usb line status and setting charger,
 	 * because ac/usb status readings may lag from irq.
 	 */
-	mod_timer(&charger_timer,
-		  jiffies + msecs_to_jiffies(pdata->wait_for_status));
+	cancel_delayed_work(&charger_work);
+	schedule_delayed_work(&charger_work,
+			      msecs_to_jiffies(pdata->wait_for_status));

 	return IRQ_HANDLED;
 }

-static void polling_timer_func(unsigned long unused)
+static void polling_work_func(struct work_struct *work)
 {
 	int changed = 0;

@@ -211,8 +213,9 @@ static void polling_timer_func(unsigned long unused)
 	if (changed)
 		psy_changed();

-	mod_timer(&polling_timer,
-		  jiffies + msecs_to_jiffies(pdata->polling_interval));
+	cancel_delayed_work(&polling_work);
+	schedule_delayed_work(&polling_work,
+			      msecs_to_jiffies(pdata->polling_interval));
 }

 #if IS_ENABLED(CONFIG_USB_PHY)

@@ -250,8 +253,9 @@ static int otg_handle_notification(struct notifier_block *nb,
 	 * Wait a bit before reading ac/usb line status and setting charger,
 	 * because ac/usb status readings may lag from irq.
 	 */
-	mod_timer(&charger_timer,
-		  jiffies + msecs_to_jiffies(pdata->wait_for_status));
+	cancel_delayed_work(&charger_work);
+	schedule_delayed_work(&charger_work,
+			      msecs_to_jiffies(pdata->wait_for_status));

 	return NOTIFY_OK;
 }

@@ -300,8 +304,8 @@ static int pda_power_probe(struct platform_device *pdev)
 	if (!pdata->ac_max_uA)
 		pdata->ac_max_uA = 500000;

-	setup_timer(&charger_timer, charger_timer_func, 0);
-	setup_timer(&supply_timer, supply_timer_func, 0);
+	INIT_DELAYED_WORK(&charger_work, charger_work_func);
+	INIT_DELAYED_WORK(&supply_work, supply_work_func);

 	ac_irq = platform_get_resource_byname(pdev, IORESOURCE_IRQ, "ac");
 	usb_irq = platform_get_resource_byname(pdev, IORESOURCE_IRQ, "usb");

@@ -385,9 +389,10 @@ static int pda_power_probe(struct platform_device *pdev)

 	if (polling) {
 		dev_dbg(dev, "will poll for status\n");
-		setup_timer(&polling_timer, polling_timer_func, 0);
-		mod_timer(&polling_timer,
-			  jiffies + msecs_to_jiffies(pdata->polling_interval));
+		INIT_DELAYED_WORK(&polling_work, polling_work_func);
+		cancel_delayed_work(&polling_work);
+		schedule_delayed_work(&polling_work,
+				      msecs_to_jiffies(pdata->polling_interval));
 	}

 	if (ac_irq || usb_irq)

@@ -433,9 +438,9 @@ static int pda_power_remove(struct platform_device *pdev)
 		free_irq(ac_irq->start, pda_psy_ac);

 	if (polling)
-		del_timer_sync(&polling_timer);
-	del_timer_sync(&charger_timer);
-	del_timer_sync(&supply_timer);
+		cancel_delayed_work_sync(&polling_work);
+	cancel_delayed_work_sync(&charger_work);
+	cancel_delayed_work_sync(&supply_work);

 	if (pdata->is_usb_online)
 		power_supply_unregister(pda_psy_usb);
@@ -97,30 +97,26 @@ static s32 scaled_ppm_to_ppb(long ppm)

 /* posix clock implementation */

-static int ptp_clock_getres(struct posix_clock *pc, struct timespec *tp)
+static int ptp_clock_getres(struct posix_clock *pc, struct timespec64 *tp)
 {
 	tp->tv_sec = 0;
 	tp->tv_nsec = 1;
 	return 0;
 }

-static int ptp_clock_settime(struct posix_clock *pc, const struct timespec *tp)
+static int ptp_clock_settime(struct posix_clock *pc, const struct timespec64 *tp)
 {
 	struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
-	struct timespec64 ts = timespec_to_timespec64(*tp);

-	return ptp->info->settime64(ptp->info, &ts);
+	return ptp->info->settime64(ptp->info, tp);
 }

-static int ptp_clock_gettime(struct posix_clock *pc, struct timespec *tp)
+static int ptp_clock_gettime(struct posix_clock *pc, struct timespec64 *tp)
 {
 	struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
-	struct timespec64 ts;
 	int err;

-	err = ptp->info->gettime64(ptp->info, &ts);
-	if (!err)
-		*tp = timespec64_to_timespec(ts);
+	err = ptp->info->gettime64(ptp->info, tp);
 	return err;
 }

@@ -133,7 +129,7 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct timex *tx)
 	ops = ptp->info;

 	if (tx->modes & ADJ_SETOFFSET) {
-		struct timespec ts;
+		struct timespec64 ts;
 		ktime_t kt;
 		s64 delta;

@@ -146,7 +142,7 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct timex *tx)
 		if ((unsigned long) ts.tv_nsec >= NSEC_PER_SEC)
 			return -EINVAL;

-		kt = timespec_to_ktime(ts);
+		kt = timespec64_to_ktime(ts);
 		delta = ktime_to_ns(kt);
 		err = ops->adjtime(ops, delta);
 	} else if (tx->modes & ADJ_FREQUENCY) {
@@ -296,6 +296,11 @@ static int anatop_regulator_probe(struct platform_device *pdev)
 		if (!sreg->sel && !strcmp(sreg->name, "vddpu"))
 			sreg->sel = 22;

+		/* set the default voltage of the pcie phy to be 1.100v */
+		if (!sreg->sel && rdesc->name &&
+		    !strcmp(rdesc->name, "vddpcie"))
+			sreg->sel = 0x10;
+
 		if (!sreg->bypass && !sreg->sel) {
 			dev_err(&pdev->dev, "Failed to read a valid default voltage selector.\n");
 			return -EINVAL;
@@ -41,6 +41,9 @@
 #include <linux/pm.h>
 #include <linux/of.h>
 #include <linux/of_platform.h>
+#ifdef CONFIG_X86
+#include <asm/i8259.h>
+#endif

 /* this is for "generic access to PC-style RTC" using CMOS_READ/CMOS_WRITE */
 #include <asm-generic/rtc.h>

@@ -1058,17 +1061,23 @@ static int cmos_pnp_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
 {
 	cmos_wake_setup(&pnp->dev);

-	if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0))
+	if (pnp_port_start(pnp, 0) == 0x70 && !pnp_irq_valid(pnp, 0)) {
+		unsigned int irq = 0;
+#ifdef CONFIG_X86
 		/* Some machines contain a PNP entry for the RTC, but
 		 * don't define the IRQ. It should always be safe to
-		 * hardcode it in these cases
+		 * hardcode it on systems with a legacy PIC.
 		 */
+		if (nr_legacy_irqs())
+			irq = 8;
+#endif
 		return cmos_do_probe(&pnp->dev,
-				pnp_get_resource(pnp, IORESOURCE_IO, 0), 8);
-	else
+				pnp_get_resource(pnp, IORESOURCE_IO, 0), irq);
+	} else {
 		return cmos_do_probe(&pnp->dev,
 				pnp_get_resource(pnp, IORESOURCE_IO, 0),
 				pnp_irq(pnp, 0));
+	}
 }

 static void __exit cmos_pnp_remove(struct pnp_dev *pnp)
@@ -527,6 +527,10 @@ static long ds1374_wdt_ioctl(struct file *file, unsigned int cmd,
 		if (get_user(new_margin, (int __user *)arg))
 			return -EFAULT;

+		/* the hardware's tick rate is 4096 Hz, so
+		 * the counter value needs to be scaled accordingly
+		 */
+		new_margin <<= 12;
 		if (new_margin < 1 || new_margin > 16777216)
 			return -EINVAL;

@@ -535,7 +539,8 @@ static long ds1374_wdt_ioctl(struct file *file, unsigned int cmd,
 		ds1374_wdt_ping();
 		/* fallthrough */
 	case WDIOC_GETTIMEOUT:
-		return put_user(wdt_margin, (int __user *)arg);
+		/* when returning ... inverse is true */
+		return put_user((wdt_margin >> 12), (int __user *)arg);
 	case WDIOC_SETOPTIONS:
 		if (copy_from_user(&options, (int __user *)arg, sizeof(int)))
 			return -EFAULT;

@@ -543,14 +548,15 @@ static long ds1374_wdt_ioctl(struct file *file, unsigned int cmd,
 		if (options & WDIOS_DISABLECARD) {
 			pr_info("disable watchdog\n");
 			ds1374_wdt_disable();
+			return 0;
 		}

 		if (options & WDIOS_ENABLECARD) {
 			pr_info("enable watchdog\n");
 			ds1374_wdt_settimeout(wdt_margin);
 			ds1374_wdt_ping();
+			return 0;
 		}

 		return -EINVAL;
 	}
 	return -ENOTTY;
@@ -13493,6 +13493,9 @@ lpfc_wq_create(struct lpfc_hba *phba, struct lpfc_queue *wq,
 	case LPFC_Q_CREATE_VERSION_1:
 		bf_set(lpfc_mbx_wq_create_wqe_count, &wq_create->u.request_1,
 		       wq->entry_count);
+		bf_set(lpfc_mbox_hdr_version, &shdr->request,
+		       LPFC_Q_CREATE_VERSION_1);
+
 		switch (wq->entry_size) {
 		default:
 		case 64:
@@ -55,6 +55,7 @@ struct mac_esp_priv {
 	int error;
 };
 static struct esp *esp_chips[2];
+static DEFINE_SPINLOCK(esp_chips_lock);

 #define MAC_ESP_GET_PRIV(esp) ((struct mac_esp_priv *) \
 			       platform_get_drvdata((struct platform_device *) \

@@ -562,15 +563,18 @@ static int esp_mac_probe(struct platform_device *dev)
 	}

 	host->irq = IRQ_MAC_SCSI;
-	esp_chips[dev->id] = esp;
-	mb();
-	if (esp_chips[!dev->id] == NULL) {
-		err = request_irq(host->irq, mac_scsi_esp_intr, 0, "ESP", NULL);
-		if (err < 0) {
-			esp_chips[dev->id] = NULL;
-			goto fail_free_priv;
-		}
+
+	/* The request_irq() call is intended to succeed for the first device
+	 * and fail for the second device.
+	 */
+	err = request_irq(host->irq, mac_scsi_esp_intr, 0, "ESP", NULL);
+	spin_lock(&esp_chips_lock);
+	if (err < 0 && esp_chips[!dev->id] == NULL) {
+		spin_unlock(&esp_chips_lock);
+		goto fail_free_priv;
 	}
+	esp_chips[dev->id] = esp;
+	spin_unlock(&esp_chips_lock);

 	err = scsi_esp_register(esp, &dev->dev);
 	if (err)

@@ -579,8 +583,13 @@ static int esp_mac_probe(struct platform_device *dev)
 	return 0;

 fail_free_irq:
-	if (esp_chips[!dev->id] == NULL)
+	spin_lock(&esp_chips_lock);
+	esp_chips[dev->id] = NULL;
+	if (esp_chips[!dev->id] == NULL) {
+		spin_unlock(&esp_chips_lock);
 		free_irq(host->irq, esp);
+	} else
+		spin_unlock(&esp_chips_lock);
 fail_free_priv:
 	kfree(mep);
 fail_free_command_block:

@@ -599,9 +608,13 @@ static int esp_mac_remove(struct platform_device *dev)

 	scsi_esp_unregister(esp);

+	spin_lock(&esp_chips_lock);
 	esp_chips[dev->id] = NULL;
-	if (!(esp_chips[0] || esp_chips[1]))
+	if (esp_chips[!dev->id] == NULL) {
+		spin_unlock(&esp_chips_lock);
 		free_irq(irq, NULL);
+	} else
+		spin_unlock(&esp_chips_lock);

 	kfree(mep);
@@ -28,6 +28,7 @@
 #include <scsi/scsi_device.h>
 #include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_tcq.h>
+#include <scsi/scsi_devinfo.h>
 #include <linux/seqlock.h>

 #define VIRTIO_SCSI_MEMPOOL_SZ 64

@@ -704,6 +705,28 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc)
 	return virtscsi_tmf(vscsi, cmd);
 }

+static int virtscsi_device_alloc(struct scsi_device *sdevice)
+{
+	/*
+	 * Passed through SCSI targets (e.g. with qemu's 'scsi-block')
+	 * may have transfer limits which come from the host SCSI
+	 * controller or something on the host side other than the
+	 * target itself.
+	 *
+	 * To make this work properly, the hypervisor can adjust the
+	 * target's VPD information to advertise these limits. But
+	 * for that to work, the guest has to look at the VPD pages,
+	 * which we won't do by default if it is an SPC-2 device, even
+	 * if it does actually support it.
+	 *
+	 * So, set the blist to always try to read the VPD pages.
+	 */
+	sdevice->sdev_bflags = BLIST_TRY_VPD_PAGES;
+
+	return 0;
+}
+

 /**
  * virtscsi_change_queue_depth() - Change a virtscsi target's queue depth
  * @sdev: Virtscsi target whose queue depth to change

@@ -775,6 +798,7 @@ static struct scsi_host_template virtscsi_host_template_single = {
 	.change_queue_depth = virtscsi_change_queue_depth,
 	.eh_abort_handler = virtscsi_abort,
 	.eh_device_reset_handler = virtscsi_device_reset,
+	.slave_alloc = virtscsi_device_alloc,

 	.can_queue = 1024,
 	.dma_boundary = UINT_MAX,
@@ -120,8 +120,8 @@ static int dw_spi_mmio_remove(struct platform_device *pdev)
 {
 	struct dw_spi_mmio *dwsmmio = platform_get_drvdata(pdev);

-	clk_disable_unprepare(dwsmmio->clk);
 	dw_spi_remove_host(&dwsmmio->dws);
+	clk_disable_unprepare(dwsmmio->clk);

 	return 0;
 }
@@ -792,7 +792,7 @@ static void
 do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)
 {
 	struct scsi_device *scsidev;
-	unsigned char buf[36];
+	unsigned char *buf;
 	struct scatterlist *sg;
 	unsigned int i;
 	char *this_page;

@@ -807,6 +807,10 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)
 	if (cmdrsp->scsi.no_disk_result == 0)
 		return;

+	buf = kzalloc(sizeof(char) * 36, GFP_KERNEL);
+	if (!buf)
+		return;
+
 	/* Linux scsi code wants a device at Lun 0
 	 * to issue report luns, but we don't want
 	 * a disk there so we'll present a processor

@@ -820,6 +824,7 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)
 	if (scsi_sg_count(scsicmd) == 0) {
 		memcpy(scsi_sglist(scsicmd), buf,
 		       cmdrsp->scsi.bufflen);
+		kfree(buf);
 		return;
 	}

@@ -831,6 +836,7 @@ do_scsi_nolinuxstat(struct uiscmdrsp *cmdrsp, struct scsi_cmnd *scsicmd)
 			memcpy(this_page, buf + bufind, sg[i].length);
 			kunmap_atomic(this_page_orig);
 		}
+		kfree(buf);
 	} else {
 		devdata = (struct visorhba_devdata *)scsidev->host->hostdata;
 		for_each_vdisk_match(vdisk, devdata, scsidev) {
@@ -251,6 +251,8 @@ static netdev_tx_t WILC_WFI_mon_xmit(struct sk_buff *skb,

 	if (skb->data[0] == 0xc0 && (!(memcmp(broadcast, &skb->data[4], 6)))) {
 		skb2 = dev_alloc_skb(skb->len + sizeof(struct wilc_wfi_radiotap_cb_hdr));
+		if (!skb2)
+			return -ENOMEM;

 		memcpy(skb_put(skb2, skb->len), skb->data, skb->len);
@@ -276,12 +276,11 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
 	else
 		ret = vfs_iter_read(fd, &iter, &pos);

-	kfree(bvec);
-
 	if (is_write) {
 		if (ret < 0 || ret != data_length) {
 			pr_err("%s() write returned %d\n", __func__, ret);
-			return (ret < 0 ? ret : -EINVAL);
+			if (ret >= 0)
+				ret = -EINVAL;
 		}
 	} else {
 		/*

@@ -294,17 +293,29 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
 				pr_err("%s() returned %d, expecting %u for "
 				       "S_ISBLK\n", __func__, ret,
 				       data_length);
-				return (ret < 0 ? ret : -EINVAL);
+				if (ret >= 0)
+					ret = -EINVAL;
 			}
 		} else {
 			if (ret < 0) {
 				pr_err("%s() returned %d for non S_ISBLK\n",
 				       __func__, ret);
 				return ret;
+			} else if (ret != data_length) {
+				/*
+				 * Short read case:
+				 * Probably some one truncate file under us.
+				 * We must explicitly zero sg-pages to prevent
+				 * expose uninizialized pages to userspace.
+				 */
+				if (ret < data_length)
+					ret += iov_iter_zero(data_length - ret, &iter);
+				else
+					ret = -EINVAL;
 			}
 		}
 	}
-	return 1;
+	kfree(bvec);
+	return ret;
 }

 static sense_reason_t
@@ -1694,6 +1694,8 @@ static void release_tty(struct tty_struct *tty, int idx)
 	if (tty->link)
 		tty->link->port->itty = NULL;
 	tty_buffer_cancel_work(tty->port);
+	if (tty->link)
+		tty_buffer_cancel_work(tty->link->port);

 	tty_kref_put(tty->link);
 	tty_kref_put(tty);
@@ -409,7 +409,10 @@ static const char *vgacon_startup(void)
 			vga_video_port_val = VGA_CRT_DM;
 			if ((screen_info.orig_video_ega_bx & 0xff) != 0x10) {
 				static struct resource ega_console_resource =
-					{ .name = "ega", .start = 0x3B0, .end = 0x3BF };
+					{ .name = "ega",
+					  .flags = IORESOURCE_IO,
+					  .start = 0x3B0,
+					  .end = 0x3BF };
 				vga_video_type = VIDEO_TYPE_EGAM;
 				vga_vram_size = 0x8000;
 				display_desc = "EGA+";

@@ -417,9 +420,15 @@ static const char *vgacon_startup(void)
 						 &ega_console_resource);
 			} else {
 				static struct resource mda1_console_resource =
-					{ .name = "mda", .start = 0x3B0, .end = 0x3BB };
+					{ .name = "mda",
+					  .flags = IORESOURCE_IO,
+					  .start = 0x3B0,
+					  .end = 0x3BB };
 				static struct resource mda2_console_resource =
-					{ .name = "mda", .start = 0x3BF, .end = 0x3BF };
+					{ .name = "mda",
+					  .flags = IORESOURCE_IO,
+					  .start = 0x3BF,
+					  .end = 0x3BF };
 				vga_video_type = VIDEO_TYPE_MDA;
 				vga_vram_size = 0x2000;
 				display_desc = "*MDA";

@@ -441,15 +450,21 @@ static const char *vgacon_startup(void)
 			vga_vram_size = 0x8000;

 			if (!screen_info.orig_video_isVGA) {
-				static struct resource ega_console_resource
-					= { .name = "ega", .start = 0x3C0, .end = 0x3DF };
+				static struct resource ega_console_resource =
+					{ .name = "ega",
+					  .flags = IORESOURCE_IO,
+					  .start = 0x3C0,
+					  .end = 0x3DF };
 				vga_video_type = VIDEO_TYPE_EGAC;
 				display_desc = "EGA";
 				request_resource(&ioport_resource,
 						 &ega_console_resource);
 			} else {
-				static struct resource vga_console_resource
-					= { .name = "vga+", .start = 0x3C0, .end = 0x3DF };
+				static struct resource vga_console_resource =
+					{ .name = "vga+",
+					  .flags = IORESOURCE_IO,
+					  .start = 0x3C0,
+					  .end = 0x3DF };
 				vga_video_type = VIDEO_TYPE_VGAC;
 				display_desc = "VGA+";
 				request_resource(&ioport_resource,

@@ -493,7 +508,10 @@ static const char *vgacon_startup(void)
 		}
 	} else {
 		static struct resource cga_console_resource =
-			{ .name = "cga", .start = 0x3D4, .end = 0x3D5 };
+			{ .name = "cga",
+			  .flags = IORESOURCE_IO,
+			  .start = 0x3D4,
+			  .end = 0x3D5 };
 		vga_video_type = VIDEO_TYPE_CGA;
 		vga_vram_size = 0x2000;
 		display_desc = "*CGA";
@@ -1600,6 +1600,7 @@ static int sm501fb_start(struct sm501fb_info *info,
 	info->fbmem = ioremap(res->start, resource_size(res));
 	if (info->fbmem == NULL) {
 		dev_err(dev, "cannot remap framebuffer\n");
+		ret = -ENXIO;
 		goto err_mem_res;
 	}
@@ -1487,15 +1487,25 @@ static struct device_attribute fb_device_attrs[] = {
 static int dlfb_select_std_channel(struct dlfb_data *dev)
 {
 	int ret;
-	u8 set_def_chn[] = { 0x57, 0xCD, 0xDC, 0xA7,
+	void *buf;
+	static const u8 set_def_chn[] = {
+			0x57, 0xCD, 0xDC, 0xA7,
 			0x1C, 0x88, 0x5E, 0x15,
 			0x60, 0xFE, 0xC6, 0x97,
 			0x16, 0x3D, 0x47, 0xF2 };

+	buf = kmemdup(set_def_chn, sizeof(set_def_chn), GFP_KERNEL);
+
+	if (!buf)
+		return -ENOMEM;
+
 	ret = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0),
 			NR_USB_REQUEST_CHANNEL,
 			(USB_DIR_OUT | USB_TYPE_VENDOR), 0, 0,
-			set_def_chn, sizeof(set_def_chn), USB_CTRL_SET_TIMEOUT);
+			buf, sizeof(set_def_chn), USB_CTRL_SET_TIMEOUT);
+
+	kfree(buf);

 	return ret;
 }
@@ -5008,13 +5008,19 @@ static int is_extent_unchanged(struct send_ctx *sctx,
 	while (key.offset < ekey->offset + left_len) {
 		ei = btrfs_item_ptr(eb, slot, struct btrfs_file_extent_item);
 		right_type = btrfs_file_extent_type(eb, ei);
-		if (right_type != BTRFS_FILE_EXTENT_REG) {
+		if (right_type != BTRFS_FILE_EXTENT_REG &&
+		    right_type != BTRFS_FILE_EXTENT_INLINE) {
 			ret = 0;
 			goto out;
 		}

 		right_disknr = btrfs_file_extent_disk_bytenr(eb, ei);
-		right_len = btrfs_file_extent_num_bytes(eb, ei);
+		if (right_type == BTRFS_FILE_EXTENT_INLINE) {
+			right_len = btrfs_file_extent_inline_len(eb, slot, ei);
+			right_len = PAGE_ALIGN(right_len);
+		} else {
+			right_len = btrfs_file_extent_num_bytes(eb, ei);
+		}
 		right_offset = btrfs_file_extent_offset(eb, ei);
 		right_gen = btrfs_file_extent_generation(eb, ei);

@@ -5028,6 +5034,19 @@ static int is_extent_unchanged(struct send_ctx *sctx,
 			goto out;
 		}

+		/*
+		 * We just wanted to see if when we have an inline extent, what
+		 * follows it is a regular extent (wanted to check the above
+		 * condition for inline extents too). This should normally not
+		 * happen but it's possible for example when we have an inline
+		 * compressed extent representing data with a size matching
+		 * the page size (currently the same as sector size).
+		 */
+		if (right_type == BTRFS_FILE_EXTENT_INLINE) {
+			ret = 0;
+			goto out;
+		}
+
 		left_offset_fixed = left_offset;
 		if (key.offset < ekey->offset) {
 			/* Fix the right offset for 2a and 7. */
@@ -980,10 +980,10 @@ struct timespec cnvrtDosUnixTm(__le16 le_date, __le16 le_time, int offset)
 		cifs_dbg(VFS, "illegal hours %d\n", st->Hours);
 	days = sd->Day;
 	month = sd->Month;
-	if ((days > 31) || (month > 12)) {
+	if (days < 1 || days > 31 || month < 1 || month > 12) {
 		cifs_dbg(VFS, "illegal date, month %d day: %d\n", month, days);
-		if (month > 12)
-			month = 12;
+		days = clamp(days, 1, 31);
+		month = clamp(month, 1, 12);
 	}
 	month -= 1;
 	days += total_days_of_prev_months[month];
@@ -344,13 +344,12 @@ void build_ntlmssp_negotiate_blob(unsigned char *pbuffer,
 	/* BB is NTLMV2 session security format easier to use here? */
 	flags = NTLMSSP_NEGOTIATE_56 | NTLMSSP_REQUEST_TARGET |
 		NTLMSSP_NEGOTIATE_128 | NTLMSSP_NEGOTIATE_UNICODE |
-		NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC;
-	if (ses->server->sign) {
+		NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC |
+		NTLMSSP_NEGOTIATE_SEAL;
+	if (ses->server->sign)
 		flags |= NTLMSSP_NEGOTIATE_SIGN;
-		if (!ses->server->session_estab ||
-		    ses->ntlmssp->sesskey_per_smbsess)
-			flags |= NTLMSSP_NEGOTIATE_KEY_XCH;
-	}
+	if (!ses->server->session_estab || ses->ntlmssp->sesskey_per_smbsess)
+		flags |= NTLMSSP_NEGOTIATE_KEY_XCH;

 	sec_blob->NegotiateFlags = cpu_to_le32(flags);

@@ -407,13 +406,12 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer,
 	flags = NTLMSSP_NEGOTIATE_56 |
 		NTLMSSP_REQUEST_TARGET | NTLMSSP_NEGOTIATE_TARGET_INFO |
 		NTLMSSP_NEGOTIATE_128 | NTLMSSP_NEGOTIATE_UNICODE |
-		NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC;
-	if (ses->server->sign) {
+		NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC |
+		NTLMSSP_NEGOTIATE_SEAL;
+	if (ses->server->sign)
 		flags |= NTLMSSP_NEGOTIATE_SIGN;
-		if (!ses->server->session_estab ||
-		    ses->ntlmssp->sesskey_per_smbsess)
-			flags |= NTLMSSP_NEGOTIATE_KEY_XCH;
-	}
+	if (!ses->server->session_estab || ses->ntlmssp->sesskey_per_smbsess)
+		flags |= NTLMSSP_NEGOTIATE_KEY_XCH;

 	tmp = *pbuffer + sizeof(AUTHENTICATE_MESSAGE);
 	sec_blob->NegotiateFlags = cpu_to_le32(flags);

@@ -832,10 +832,8 @@ ssetup_exit:

 	if (!rc) {
 		mutex_lock(&server->srv_mutex);
-		if (server->sign && server->ops->generate_signingkey) {
+		if (server->ops->generate_signingkey) {
 			rc = server->ops->generate_signingkey(ses);
-			kfree(ses->auth_key.response);
-			ses->auth_key.response = NULL;
 			if (rc) {
 				cifs_dbg(FYI,
 					"SMB3 session key generation failed\n");

@@ -857,10 +855,6 @@ ssetup_exit:
 	}

 keygen_exit:
-	if (!server->sign) {
-		kfree(ses->auth_key.response);
-		ses->auth_key.response = NULL;
-	}
 	if (spnego_key) {
 		key_invalidate(spnego_key);
 		key_put(spnego_key);

@@ -1558,6 +1552,9 @@ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
 	} else
 		iov[0].iov_len = get_rfc1002_length(req) + 4;

+	/* validate negotiate request must be signed - see MS-SMB2 3.2.5.5 */
+	if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO)
+		req->hdr.Flags |= SMB2_FLAGS_SIGNED;

 	rc = SendReceive2(xid, ses, iov, num_iovecs, &resp_buftype, 0);
 	rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base;
@@ -1273,8 +1273,10 @@ void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *desc, pgoff_t index)
 		mirror = &desc->pg_mirrors[midx];
 		if (!list_empty(&mirror->pg_list)) {
 			prev = nfs_list_entry(mirror->pg_list.prev);
-			if (index != prev->wb_index + 1)
-				nfs_pageio_complete_mirror(desc, midx);
+			if (index != prev->wb_index + 1) {
+				nfs_pageio_complete(desc);
+				break;
+			}
 		}
 	}
 }
@@ -1245,14 +1245,14 @@ nfsd4_layoutget(struct svc_rqst *rqstp,
 	const struct nfsd4_layout_ops *ops;
 	struct nfs4_layout_stateid *ls;
 	__be32 nfserr;
-	int accmode;
+	int accmode = NFSD_MAY_READ_IF_EXEC;

 	switch (lgp->lg_seg.iomode) {
 	case IOMODE_READ:
-		accmode = NFSD_MAY_READ;
+		accmode |= NFSD_MAY_READ;
 		break;
 	case IOMODE_RW:
-		accmode = NFSD_MAY_READ | NFSD_MAY_WRITE;
+		accmode |= NFSD_MAY_READ | NFSD_MAY_WRITE;
 		break;
 	default:
 		dprintk("%s: invalid iomode %d\n",
@@ -92,6 +92,12 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct dentry **dpp,
 	err = follow_down(&path);
 	if (err < 0)
 		goto out;
+	if (path.mnt == exp->ex_path.mnt && path.dentry == dentry &&
+	    nfsd_mountpoint(dentry, exp) == 2) {
+		/* This is only a mountpoint in some other namespace */
+		path_put(&path);
+		goto out;
+	}

 	exp2 = rqst_exp_get_by_name(rqstp, &path);
 	if (IS_ERR(exp2)) {

@@ -165,16 +171,26 @@ static int nfsd_lookup_parent(struct svc_rqst *rqstp, struct dentry *dparent, st
 /*
  * For nfsd purposes, we treat V4ROOT exports as though there was an
  * export at *every* directory.
+ * We return:
+ * '1' if this dentry *must* be an export point,
+ * '2' if it might be, if there is really a mount here, and
+ * '0' if there is no chance of an export point here.
  */
 int nfsd_mountpoint(struct dentry *dentry, struct svc_export *exp)
 {
-	if (d_mountpoint(dentry))
+	if (!d_inode(dentry))
+		return 0;
+	if (exp->ex_flags & NFSEXP_V4ROOT)
 		return 1;
 	if (nfsd4_is_junction(dentry))
 		return 1;
-	if (!(exp->ex_flags & NFSEXP_V4ROOT))
-		return 0;
-	return d_inode(dentry) != NULL;
+	if (d_mountpoint(dentry))
+		/*
+		 * Might only be a mountpoint in a different namespace,
+		 * but we need to check.
+		 */
+		return 2;
+	return 0;
 }

 __be32
@@ -59,23 +59,23 @@ struct posix_clock_operations {

 	int (*clock_adjtime)(struct posix_clock *pc, struct timex *tx);

-	int (*clock_gettime)(struct posix_clock *pc, struct timespec *ts);
+	int (*clock_gettime)(struct posix_clock *pc, struct timespec64 *ts);

-	int (*clock_getres) (struct posix_clock *pc, struct timespec *ts);
+	int (*clock_getres) (struct posix_clock *pc, struct timespec64 *ts);

 	int (*clock_settime)(struct posix_clock *pc,
-			     const struct timespec *ts);
+			     const struct timespec64 *ts);

 	int (*timer_create) (struct posix_clock *pc, struct k_itimer *kit);

 	int (*timer_delete) (struct posix_clock *pc, struct k_itimer *kit);

 	void (*timer_gettime)(struct posix_clock *pc,
-			      struct k_itimer *kit, struct itimerspec *tsp);
+			      struct k_itimer *kit, struct itimerspec64 *tsp);

 	int (*timer_settime)(struct posix_clock *pc,
 			     struct k_itimer *kit, int flags,
-			     struct itimerspec *tsp, struct itimerspec *old);
+			     struct itimerspec64 *tsp, struct itimerspec64 *old);
 	/*
 	 * Optional character device methods:
 	 */
@@ -1189,8 +1189,10 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		 * set the trigger type must match. Also all must
 		 * agree on ONESHOT.
 		 */
+		unsigned int oldtype = irqd_get_trigger_type(&desc->irq_data);
+
 		if (!((old->flags & new->flags) & IRQF_SHARED) ||
-		    ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK) ||
+		    (oldtype != (new->flags & IRQF_TRIGGER_MASK)) ||
 		    ((old->flags ^ new->flags) & IRQF_ONESHOT))
 			goto mismatch;
@@ -300,14 +300,17 @@ out:
 static int pc_clock_gettime(clockid_t id, struct timespec *ts)
 {
 	struct posix_clock_desc cd;
+	struct timespec64 ts64;
 	int err;

 	err = get_clock_desc(id, &cd);
 	if (err)
 		return err;

-	if (cd.clk->ops.clock_gettime)
-		err = cd.clk->ops.clock_gettime(cd.clk, ts);
+	if (cd.clk->ops.clock_gettime) {
+		err = cd.clk->ops.clock_gettime(cd.clk, &ts64);
+		*ts = timespec64_to_timespec(ts64);
+	}
 	else
 		err = -EOPNOTSUPP;

@@ -319,14 +322,17 @@ static int pc_clock_gettime(clockid_t id, struct timespec *ts)
 static int pc_clock_getres(clockid_t id, struct timespec *ts)
 {
 	struct posix_clock_desc cd;
+	struct timespec64 ts64;
 	int err;

 	err = get_clock_desc(id, &cd);
 	if (err)
 		return err;

-	if (cd.clk->ops.clock_getres)
-		err = cd.clk->ops.clock_getres(cd.clk, ts);
+	if (cd.clk->ops.clock_getres) {
+		err = cd.clk->ops.clock_getres(cd.clk, &ts64);
+		*ts = timespec64_to_timespec(ts64);
+	}
 	else
 		err = -EOPNOTSUPP;

@@ -337,6 +343,7 @@ static int pc_clock_getres(clockid_t id, struct timespec *ts)

 static int pc_clock_settime(clockid_t id, const struct timespec *ts)
 {
+	struct timespec64 ts64 = timespec_to_timespec64(*ts);
 	struct posix_clock_desc cd;
 	int err;

@@ -350,7 +357,7 @@ static int pc_clock_settime(clockid_t id, const struct timespec *ts)
 	}

 	if (cd.clk->ops.clock_settime)
-		err = cd.clk->ops.clock_settime(cd.clk, ts);
+		err = cd.clk->ops.clock_settime(cd.clk, &ts64);
 	else
 		err = -EOPNOTSUPP;
 out:

@@ -403,29 +410,36 @@ static void pc_timer_gettime(struct k_itimer *kit, struct itimerspec *ts)
 {
 	clockid_t id = kit->it_clock;
 	struct posix_clock_desc cd;
+	struct itimerspec64 ts64;

 	if (get_clock_desc(id, &cd))
 		return;

-	if (cd.clk->ops.timer_gettime)
-		cd.clk->ops.timer_gettime(cd.clk, kit, ts);
-
+	if (cd.clk->ops.timer_gettime) {
+		cd.clk->ops.timer_gettime(cd.clk, kit, &ts64);
+		*ts = itimerspec64_to_itimerspec(&ts64);
+	}
 	put_clock_desc(&cd);
 }

 static int pc_timer_settime(struct k_itimer *kit, int flags,
 			    struct itimerspec *ts, struct itimerspec *old)
 {
+	struct itimerspec64 ts64 = itimerspec_to_itimerspec64(ts);
 	clockid_t id = kit->it_clock;
 	struct posix_clock_desc cd;
+	struct itimerspec64 old64;
 	int err;

 	err = get_clock_desc(id, &cd);
 	if (err)
 		return err;

-	if (cd.clk->ops.timer_settime)
-		err = cd.clk->ops.timer_settime(cd.clk, kit, flags, ts, old);
+	if (cd.clk->ops.timer_settime) {
+		err = cd.clk->ops.timer_settime(cd.clk, kit, flags, &ts64, &old64);
+		if (old)
+			*old = itimerspec64_to_itimerspec(&old64);
+	}
 	else
 		err = -EOPNOTSUPP;
@@ -5465,10 +5465,6 @@ void tcp_finish_connect(struct sock *sk, struct sk_buff *skb)
 	else
 		tp->pred_flags = 0;

-	if (!sock_flag(sk, SOCK_DEAD)) {
-		sk->sk_state_change(sk);
-		sk_wake_async(sk, SOCK_WAKE_IO, POLL_OUT);
-	}
 }

 static bool tcp_rcv_fastopen_synack(struct sock *sk, struct sk_buff *synack,

@@ -5532,6 +5528,7 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct tcp_fastopen_cookie foc = { .len = -1 };
 	int saved_clamp = tp->rx_opt.mss_clamp;
+	bool fastopen_fail;

 	tcp_parse_options(skb, &tp->rx_opt, 0, &foc);
 	if (tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr)

@@ -5634,10 +5631,15 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,

 		tcp_finish_connect(sk, skb);

-		if ((tp->syn_fastopen || tp->syn_data) &&
-		    tcp_rcv_fastopen_synack(sk, skb, &foc))
-			return -1;
+		fastopen_fail = (tp->syn_fastopen || tp->syn_data) &&
+				tcp_rcv_fastopen_synack(sk, skb, &foc);

+		if (!sock_flag(sk, SOCK_DEAD)) {
+			sk->sk_state_change(sk);
+			sk_wake_async(sk, SOCK_WAKE_IO, POLL_OUT);
+		}
+		if (fastopen_fail)
+			return -1;
 		if (sk->sk_write_pending ||
 		    icsk->icsk_accept_queue.rskq_defer_accept ||
 		    icsk->icsk_ack.pingpong) {
@@ -615,6 +615,7 @@ static void vti6_link_config(struct ip6_tnl *t)
 {
 	struct net_device *dev = t->dev;
 	struct __ip6_tnl_parm *p = &t->parms;
+	struct net_device *tdev = NULL;

 	memcpy(dev->dev_addr, &p->laddr, sizeof(struct in6_addr));
 	memcpy(dev->broadcast, &p->raddr, sizeof(struct in6_addr));

@@ -627,6 +628,25 @@ static void vti6_link_config(struct ip6_tnl *t)
 		dev->flags |= IFF_POINTOPOINT;
 	else
 		dev->flags &= ~IFF_POINTOPOINT;

+	if (p->flags & IP6_TNL_F_CAP_XMIT) {
+		int strict = (ipv6_addr_type(&p->raddr) &
+			      (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL));
+		struct rt6_info *rt = rt6_lookup(t->net,
+						 &p->raddr, &p->laddr,
+						 p->link, strict);
+
+		if (rt)
+			tdev = rt->dst.dev;
+		ip6_rt_put(rt);
+	}
+
+	if (!tdev && p->link)
+		tdev = __dev_get_by_index(t->net, p->link);
+
+	if (tdev)
+		dev->mtu = max_t(int, tdev->mtu - dev->hard_header_len,
+				 IPV6_MIN_MTU);
 }

 /**
@@ -1688,6 +1688,8 @@ static int ndisc_netdev_event(struct notifier_block *this, unsigned long event,
 	case NETDEV_CHANGEADDR:
 		neigh_changeaddr(&nd_tbl, dev);
 		fib6_run_gc(0, net, false);
+		/* fallthrough */
+	case NETDEV_UP:
 		idev = in6_dev_get(dev);
 		if (!idev)
 			break;
@@ -194,6 +194,7 @@ static void ieee80211_frame_acked(struct sta_info *sta, struct sk_buff *skb)
 	}

 	if (ieee80211_is_action(mgmt->frame_control) &&
+	    !ieee80211_has_protected(mgmt->frame_control) &&
 	    mgmt->u.action.category == WLAN_CATEGORY_HT &&
 	    mgmt->u.action.u.ht_smps.action == WLAN_HT_ACTION_SMPS &&
 	    ieee80211_sdata_running(sdata)) {
|
|||
goto err_put_timeout;
|
||||
}
|
||||
timeout_ext = nf_ct_timeout_ext_add(ct, timeout, GFP_ATOMIC);
|
||||
if (timeout_ext == NULL)
|
||||
if (!timeout_ext) {
|
||||
ret = -ENOMEM;
|
||||
goto err_put_timeout;
|
||||
}
|
||||
|
||||
rcu_read_unlock();
|
||||
return ret;
|
||||
|
@ -201,6 +203,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par,
|
|||
struct xt_ct_target_info_v1 *info)
|
||||
{
|
||||
struct nf_conntrack_zone zone;
|
||||
struct nf_conn_help *help;
|
||||
struct nf_conn *ct;
|
||||
int ret = -EOPNOTSUPP;
|
||||
|
||||
|
@ -249,7 +252,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par,
|
|||
if (info->timeout[0]) {
|
||||
ret = xt_ct_set_timeout(ct, par, info->timeout);
|
||||
if (ret < 0)
|
||||
goto err3;
|
||||
goto err4;
|
||||
}
|
||||
__set_bit(IPS_CONFIRMED_BIT, &ct->status);
|
||||
nf_conntrack_get(&ct->ct_general);
|
||||
|
@ -257,6 +260,10 @@ out:
|
|||
info->ct = ct;
|
||||
return 0;
|
||||
|
||||
err4:
|
||||
help = nfct_help(ct);
|
||||
if (help)
|
||||
module_put(help->helper->me);
|
||||
err3:
|
||||
nf_ct_tmpl_free(ct);
|
||||
err2:
|
||||
|
|
|
@@ -361,10 +361,38 @@ ovs_ct_expect_find(struct net *net, const struct nf_conntrack_zone *zone,
 		   u16 proto, const struct sk_buff *skb)
 {
 	struct nf_conntrack_tuple tuple;
+	struct nf_conntrack_expect *exp;

 	if (!nf_ct_get_tuplepr(skb, skb_network_offset(skb), proto, net, &tuple))
 		return NULL;
-	return __nf_ct_expect_find(net, zone, &tuple);
+
+	exp = __nf_ct_expect_find(net, zone, &tuple);
+	if (exp) {
+		struct nf_conntrack_tuple_hash *h;
+
+		/* Delete existing conntrack entry, if it clashes with the
+		 * expectation. This can happen since conntrack ALGs do not
+		 * check for clashes between (new) expectations and existing
+		 * conntrack entries. nf_conntrack_in() will check the
+		 * expectations only if a conntrack entry can not be found,
+		 * which can lead to OVS finding the expectation (here) in the
+		 * init direction, but which will not be removed by the
+		 * nf_conntrack_in() call, if a matching conntrack entry is
+		 * found instead. In this case all init direction packets
+		 * would be reported as new related packets, while reply
+		 * direction packets would be reported as un-related
+		 * established packets.
+		 */
+		h = nf_conntrack_find_get(net, zone, &tuple);
+		if (h) {
+			struct nf_conn *ct = nf_ct_tuplehash_to_ctrack(h);
+
+			nf_ct_delete(ct, 0, 0);
+			nf_conntrack_put(&ct->ct_general);
+		}
+	}
+
+	return exp;
 }

 /* Determine whether skb->nfct is equal to the result of conntrack lookup. */
@@ -6717,6 +6717,7 @@ enum {
 	ALC668_FIXUP_DELL_DISABLE_AAMIX,
 	ALC668_FIXUP_DELL_XPS13,
 	ALC662_FIXUP_ASUS_Nx50,
+	ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
 	ALC668_FIXUP_ASUS_Nx51,
 };

@@ -6964,14 +6965,21 @@ static const struct hda_fixup alc662_fixups[] = {
 		.chained = true,
 		.chain_id = ALC662_FIXUP_BASS_1A
 	},
+	[ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = alc_fixup_headset_mode_alc668,
+		.chain_id = ALC662_FIXUP_BASS_CHMAP
+	},
 	[ALC668_FIXUP_ASUS_Nx51] = {
 		.type = HDA_FIXUP_PINS,
 		.v.pins = (const struct hda_pintbl[]) {
-			{0x1a, 0x90170151}, /* bass speaker */
+			{ 0x19, 0x03a1913d }, /* use as headphone mic, without its own jack detect */
+			{ 0x1a, 0x90170151 }, /* bass speaker */
+			{ 0x1b, 0x03a1113c }, /* use as headset mic, without its own jack detect */
 			{}
 		},
 		.chained = true,
-		.chain_id = ALC662_FIXUP_BASS_CHMAP,
+		.chain_id = ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
 	},
 };