Commit graph

739 commits

Greg Kroah-Hartman
20ddb25b3e This is the 4.4.116 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlqHLN8ACgkQONu9yGCS
 aT7eyQ/+NGK3/MPgoqRtg8sEvr1CVk8VhH1BiBfiQPGXe/D4nqPrKQzQBBzsW8QX
 6Z9PY7wDz9RgFkw+FoOyG0eLuYdgNYOelASdQ4kJzteVH8pB2GxxTbX0drttzV+F
 liNy0w39YLYxbjR4FavOSuDekd46dNQsHBvzTawaFKh0BEtQO+1uUGMg1LjMKVPn
 F9ry0mEPrOoC2+nRvU6QXIUZy6y4+Pgdda0sfGcO3yXwQev9HoW5h9qMCnGah30J
 D3Glt86dtpQcuqeIaXrfX+HnkvAOxTHjP8uRn3O7A7h8+WYBWq5Xms6A7EE9duNV
 0UA8OZpvq0r0YSTmBFzrDexAcf/cXW8ajd/VKseI/d53iIauLV5FUaGldLJ3IQMc
 gYZ2uNxGTI4z3V+nIiVQ0NCm4kmqogVY8PvMlgUwiFVG2B088iYGZ7iTOQ9b7wBO
 VgDo0ouC/yDA8Lmz/A0l3SuvkJDNIPJit5lWzqCGRjk1F8WdPpI5C3ONfp8R3Lko
 sTllldOo982KW5up/fg5HfuMg1OjgXZtzO+/NlTtyTpSr9bb1OoniSROG8eEcMqO
 lKI1MB8Xx/pqqW1E8OOtb7A/8JPCBFzVV9xVGKwI0uZa2XOQeAwGruOe8Ub6nEpU
 8w30DlSgy8MB1BPL6UGC6k+001k8jkohdl/qjpYb6aK55CfbhlA=
 =a3k5
 -----END PGP SIGNATURE-----

Merge 4.4.116 into android-4.4

Changes in 4.4.116
	powerpc/bpf/jit: Disable classic BPF JIT on ppc64le
	powerpc/64: Fix flush_(d|i)cache_range() called from modules
	powerpc: Fix VSX enabling/flushing to also test MSR_FP and MSR_VEC
	powerpc: Simplify module TOC handling
	powerpc/pseries: Add H_GET_CPU_CHARACTERISTICS flags & wrapper
	powerpc/64: Add macros for annotating the destination of rfid/hrfid
	powerpc/64s: Simple RFI macro conversions
	powerpc/64: Convert fast_exception_return to use RFI_TO_USER/KERNEL
	powerpc/64: Convert the syscall exit path to use RFI_TO_USER/KERNEL
	powerpc/64s: Convert slb_miss_common to use RFI_TO_USER/KERNEL
	powerpc/64s: Add support for RFI flush of L1-D cache
	powerpc/64s: Support disabling RFI flush with no_rfi_flush and nopti
	powerpc/pseries: Query hypervisor for RFI flush settings
	powerpc/powernv: Check device-tree for RFI flush settings
	powerpc/64s: Wire up cpu_show_meltdown()
	powerpc/64s: Allow control of RFI flush via debugfs
	ASoC: pcm512x: add missing MODULE_DESCRIPTION/AUTHOR/LICENSE
	usbip: vhci_hcd: clear just the USB_PORT_STAT_POWER bit
	usbip: fix 3eee23c3ec14 tcp_socket address still in the status file
	net: cdc_ncm: initialize drvflags before usage
	ASoC: simple-card: Fix misleading error message
	ASoC: rsnd: don't call free_irq() on Parent SSI
	ASoC: rsnd: avoid duplicate free_irq()
	drm: rcar-du: Use the VBK interrupt for vblank events
	drm: rcar-du: Fix race condition when disabling planes at CRTC stop
	x86/asm: Fix inline asm call constraints for GCC 4.4
	ip6mr: fix stale iterator
	net: igmp: add a missing rcu locking section
	qlcnic: fix deadlock bug
	r8169: fix RTL8168EP take too long to complete driver initialization.
	tcp: release sk_frag.page in tcp_disconnect
	vhost_net: stop device during reset owner
	media: soc_camera: soc_scale_crop: add missing MODULE_DESCRIPTION/AUTHOR/LICENSE
	KEYS: encrypted: fix buffer overread in valid_master_desc()
	don't put symlink bodies in pagecache into highmem
	crypto: tcrypt - fix S/G table for test_aead_speed()
	x86/microcode/AMD: Do not load when running on a hypervisor
	x86/microcode: Do the family check first
	powerpc/pseries: include linux/types.h in asm/hvcall.h
	cifs: Fix missing put_xid in cifs_file_strict_mmap
	cifs: Fix autonegotiate security settings mismatch
	CIFS: zero sensitive data when freeing
	dmaengine: dmatest: fix container_of member in dmatest_callback
	x86/kaiser: fix build error with KASAN && !FUNCTION_GRAPH_TRACER
	kaiser: fix compile error without vsyscall
	netfilter: nf_queue: Make the queue_handler pernet
	posix-timer: Properly check sigevent->sigev_notify
	usb: gadget: uvc: Missing files for configfs interface
	sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
	sched/rt: Up the root domain ref count when passing it around via IPIs
	dccp: CVE-2017-8824: use-after-free in DCCP code
	media: dvb-usb-v2: lmedm04: Improve logic checking of warm start
	media: dvb-usb-v2: lmedm04: move ts2020 attach to dm04_lme2510_tuner
	mtd: cfi: convert inline functions to macros
	mtd: nand: brcmnand: Disable prefetch by default
	mtd: nand: Fix nand_do_read_oob() return value
	mtd: nand: sunxi: Fix ECC strength choice
	ubi: block: Fix locking for idr_alloc/idr_remove
	nfs/pnfs: fix nfs_direct_req ref leak when i/o falls back to the mds
	NFS: Add a cond_resched() to nfs_commit_release_pages()
	NFS: commit direct writes even if they fail partially
	NFS: reject request for id_legacy key without auxdata
	kernfs: fix regression in kernfs_fop_write caused by wrong type
	ahci: Annotate PCI ids for mobile Intel chipsets as such
	ahci: Add PCI ids for Intel Bay Trail, Cherry Trail and Apollo Lake AHCI
	ahci: Add Intel Cannon Lake PCH-H PCI ID
	crypto: hash - introduce crypto_hash_alg_has_setkey()
	crypto: cryptd - pass through absence of ->setkey()
	crypto: poly1305 - remove ->setkey() method
	nsfs: mark dentry with DCACHE_RCUACCESS
	media: v4l2-ioctl.c: don't copy back the result for -ENOTTY
	vb2: V4L2_BUF_FLAG_DONE is set after DQBUF
	media: v4l2-compat-ioctl32.c: add missing VIDIOC_PREPARE_BUF
	media: v4l2-compat-ioctl32.c: fix the indentation
	media: v4l2-compat-ioctl32.c: move 'helper' functions to __get/put_v4l2_format32
	media: v4l2-compat-ioctl32.c: avoid sizeof(type)
	media: v4l2-compat-ioctl32.c: copy m.userptr in put_v4l2_plane32
	media: v4l2-compat-ioctl32.c: fix ctrl_is_pointer
	media: v4l2-compat-ioctl32.c: make ctrl_is_pointer work for subdevs
	media: v4l2-compat-ioctl32: Copy v4l2_window->global_alpha
	media: v4l2-compat-ioctl32.c: copy clip list in put_v4l2_window32
	media: v4l2-compat-ioctl32.c: drop pr_info for unknown buffer type
	media: v4l2-compat-ioctl32.c: don't copy back the result for certain errors
	media: v4l2-compat-ioctl32.c: refactor compat ioctl32 logic
	crypto: caam - fix endless loop when DECO acquire fails
	arm: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls
	KVM: nVMX: Fix races when sending nested PI while dest enters/leaves L2
	watchdog: imx2_wdt: restore previous timeout after suspend+resume
	media: ts2020: avoid integer overflows on 32 bit machines
	media: cxusb, dib0700: ignore XC2028_I2C_FLUSH
	kernel/async.c: revert "async: simplify lowest_in_progress()"
	HID: quirks: Fix keyboard + touchpad on Toshiba Click Mini not working
	Bluetooth: btsdio: Do not bind to non-removable BCM43341
	Revert "Bluetooth: btusb: fix QCA Rome suspend/resume"
	Bluetooth: btusb: Restore QCA Rome suspend/resume fix with a "rewritten" version
	signal/openrisc: Fix do_unaligned_access to send the proper signal
	signal/sh: Ensure si_signo is initialized in do_divide_error
	alpha: fix crash if pthread_create races with signal delivery
	alpha: fix reboot on Avanti platform
	xtensa: fix futex_atomic_cmpxchg_inatomic
	EDAC, octeon: Fix an uninitialized variable warning
	pktcdvd: Fix pkt_setup_dev() error path
	btrfs: Handle btrfs_set_extent_delalloc failure in fixup worker
	nvme: Fix managing degraded controllers
	ACPI: sbshc: remove raw pointer from printk() message
	ovl: fix failure to fsync lower dir
	mn10300/misalignment: Use SIGSEGV SEGV_MAPERR to report a failed user copy
	ftrace: Remove incorrect setting of glob search field
	Linux 4.4.116

Change-Id: Id000cb8d59b74de063902e9ad24dd07fe1b1694b
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2018-02-20 16:23:06 +01:00
Steven Rostedt (VMware)
911357aed6 sched/rt: Up the root domain ref count when passing it around via IPIs
commit 364f56653708ba8bcdefd4f0da2a42904baa8eeb upstream.

When issuing an IPI RT push, where an IPI is sent to each CPU that has more
than one RT task scheduled on it, it references the root domain's rto_mask,
which contains all the CPUs within the root domain that have more than one RT
task in the runnable state. The problem is that, after the IPIs are initiated,
the rq->lock is released. This means that the root domain associated with the
run queue could be freed while the IPIs are going around.

Add a sched_get_rd() and a sched_put_rd() that will increment and decrement
the root domain's ref count respectively. This way when initiating the IPIs,
the scheduler will up the root domain's ref count before releasing the
rq->lock, ensuring that the root domain does not go away until the IPI round
is complete.
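
In sketch form (simplified; assuming the root domain's existing
refcount field), the helpers amount to:

  void sched_get_rd(struct root_domain *rd)
  {
  	atomic_inc(&rd->refcount);
  }

  void sched_put_rd(struct root_domain *rd)
  {
  	if (!atomic_dec_and_test(&rd->refcount))
  		return;

  	call_rcu_sched(&rd->rcu, free_rootdomain);
  }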

Reported-by: Pavan Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-02-16 20:09:40 +01:00
Joel Fernandes
a81d322647 ANDROID: sched/rt: schedtune: Add boost retention to RT
Boosted RT tasks can be deboosted quickly; this makes boost useless
for RT tasks and causes lots of glitching. Use timers to prevent
de-boosting too soon, and wait long enough that the next enqueue
happens after a threshold.

While this can be solved in the governor, this approach has the
following advantages:
- The approach used is governor-independent
- Reduces boost group lock contention for frequent sleepers/wakers

Note:
Fixed build breakage due to schedfreq dependency which isn't used
for RT anymore.
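
A minimal sketch of the idea (hypothetical names, not the actual
patch; boost_timer and retention_ns are illustrative):

  /* On dequeue: keep the boost and arm a retention timer. */
  hrtimer_start(&rt_task->boost_timer, ns_to_ktime(retention_ns),
  	      HRTIMER_MODE_REL);

  /* On enqueue before it fires: cancel it, the boost is retained. */
  hrtimer_cancel(&rt_task->boost_timer);

  /* The timer callback is what finally drops the boost. */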

Bug: 30210506

Change-Id: I428a2695cac06cc3458cdde0dea72315e4e66c00
Signed-off-by: Joel Fernandes <joelaf@google.com>
2018-02-01 11:19:48 -08:00
Greg Kroah-Hartman
fe09418d6f This is the 4.4.114 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlpxo0gACgkQONu9yGCS
 aT78EhAAs+LNVHZzqBuwgiuB/7Fsx5RvnzetpCstjWQnJHUCPjU9iCc4oTgpTGeC
 jLZeQeUlwAguL87+GLEhKEflSqKd5O/3VozLd4Xw7tGTUSqkV/0yUbmKuXzMwqTP
 ZlUDtM8eK4nfQ9ci/9yF6D3jMcpboVzFSlfu+HYLFxNUhr3NOf8jpPrMqDqTWEbP
 ncT4habS87sQSDtZLFVsGLq2rtOg91NkiXSJEwyDeioTwR9kUju5eJGhF1yhmJZh
 GEBOddmpD+RndL/Q0SN9poThWEFtWHwaBKeittHYzwnn5J7+ov9pjMmXkvGf8Slc
 pWVx7WADcPkmyx18x53szI05uR0VycPB8YhwQW28yB9+4LabPzLSz9KNVDNcs7Tf
 1GpP7Au0YVJBMjbUuJfZVe0MgSM6pRsw+I/etz47O27zsm/HEoRqHNwTk7T6B6jd
 W0vjw2HohpQUxVa6AAgqVqCgzw4ALCmlIcepaOxtU6l3XEWLrMe8OwwUl6pQY+Fr
 8dLk87SnFMgWVMyQf6M4Bse5EGHwfVEvA8z82HOlGNbynycexYDFWWxI/0P4CjPx
 VCRg3XZF4OyoRWmy/9NKgpeQBXS+fiIGIGp0opjeMfpw6t6IJqeeFik6DzXZ3Hhe
 FHYlCCtc45TAr3kJwSgfIoS7PcorBK93MDoEV58yJa6kcu0OOFQ=
 =vx5p
 -----END PGP SIGNATURE-----

Merge 4.4.114 into android-4.4

Changes in 4.4.114
	x86/asm/32: Make sync_core() handle missing CPUID on all 32-bit kernels
	usbip: prevent vhci_hcd driver from leaking a socket pointer address
	usbip: Fix implicit fallthrough warning
	usbip: Fix potential format overflow in userspace tools
	x86/microcode/intel: Fix BDW late-loading revision check
	x86/cpu/intel: Introduce macros for Intel family numbers
	x86/retpoline: Fill RSB on context switch for affected CPUs
	sched/deadline: Use the revised wakeup rule for suspending constrained dl tasks
	can: af_can: can_rcv(): replace WARN_ONCE by pr_warn_once
	can: af_can: canfd_rcv(): replace WARN_ONCE by pr_warn_once
	PM / sleep: declare __tracedata symbols as char[] rather than char
	time: Avoid undefined behaviour in ktime_add_safe()
	timers: Plug locking race vs. timer migration
	Prevent timer value 0 for MWAITX
	drivers: base: cacheinfo: fix x86 with CONFIG_OF enabled
	drivers: base: cacheinfo: fix boot error message when acpi is enabled
	PCI: layerscape: Add "fsl,ls2085a-pcie" compatible ID
	PCI: layerscape: Fix MSG TLP drop setting
	mmc: sdhci-of-esdhc: add/remove some quirks according to vendor version
	fs/select: add vmalloc fallback for select(2)
	mm/mmap.c: do not blow on PROT_NONE MAP_FIXED holes in the stack
	hwpoison, memcg: forcibly uncharge LRU pages
	cma: fix calculation of aligned offset
	mm, page_alloc: fix potential false positive in __zone_watermark_ok
	ipc: msg, make msgrcv work with LONG_MIN
	x86/ioapic: Fix incorrect pointers in ioapic_setup_resources()
	ACPI / processor: Avoid reserving IO regions too early
	ACPI / scan: Prefer devices without _HID/_CID for _ADR matching
	ACPICA: Namespace: fix operand cache leak
	netfilter: x_tables: speed up jump target validation
	netfilter: arp_tables: fix invoking 32bit "iptable -P INPUT ACCEPT" failed in 64bit kernel
	netfilter: nf_dup_ipv6: set again FLOWI_FLAG_KNOWN_NH at flowi6_flags
	netfilter: nf_ct_expect: remove the redundant slash when policy name is empty
	netfilter: nfnetlink_queue: reject verdict request from different portid
	netfilter: restart search if moved to other chain
	netfilter: nf_conntrack_sip: extend request line validation
	netfilter: use fwmark_reflect in nf_send_reset
	netfilter: fix IS_ERR_VALUE usage
	netfilter: nfnetlink_cthelper: Add missing permission checks
	netfilter: xt_osf: Add missing permission checks
	ext2: Don't clear SGID when inheriting ACLs
	reiserfs: fix race in prealloc discard
	reiserfs: don't preallocate blocks for extended attributes
	reiserfs: Don't clear SGID when inheriting ACLs
	fs/fcntl: f_setown, avoid undefined behaviour
	scsi: libiscsi: fix shifting of DID_REQUEUE host byte
	Revert "module: Add retpoline tag to VERMAGIC"
	Input: trackpoint - force 3 buttons if 0 button is reported
	usb: usbip: Fix possible deadlocks reported by lockdep
	usbip: fix stub_rx: get_pipe() to validate endpoint number
	usbip: fix stub_rx: harden CMD_SUBMIT path to handle malicious input
	usbip: prevent leaking socket pointer address in messages
	um: link vmlinux with -no-pie
	vsyscall: Fix permissions for emulate mode with KAISER/PTI
	eventpoll.h: add missing epoll event masks
	x86/microcode/intel: Extend BDW late-loading further with LLC size check
	hrtimer: Reset hrtimer cpu base proper on CPU hotplug
	dccp: don't restart ccid2_hc_tx_rto_expire() if sk in closed state
	ipv6: Fix getsockopt() for sockets with default IPV6_AUTOFLOWLABEL
	ipv6: fix udpv6 sendmsg crash caused by too small MTU
	ipv6: ip6_make_skb() needs to clear cork.base.dst
	lan78xx: Fix failure in USB Full Speed
	net: igmp: fix source address check for IGMPv3 reports
	tcp: __tcp_hdrlen() helper
	net: qdisc_pkt_len_init() should be more robust
	pppoe: take ->needed_headroom of lower device into account on xmit
	r8169: fix memory corruption on retrieval of hardware statistics.
	sctp: do not allow the v4 socket to bind a v4mapped v6 address
	sctp: return error if the asoc has been peeled off in sctp_wait_for_sndbuf
	vmxnet3: repair memory leak
	net: Allow neigh contructor functions ability to modify the primary_key
	ipv4: Make neigh lookup keys for loopback/point-to-point devices be INADDR_ANY
	flow_dissector: properly cap thoff field
	net: tcp: close sock if net namespace is exiting
	nfsd: auth: Fix gid sorting when rootsquash enabled
	Linux 4.4.114

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2018-01-31 14:08:55 +01:00
Daniel Bristot de Oliveira
1d00e3d9b7 sched/deadline: Use the revised wakeup rule for suspending constrained dl tasks
commit 3effcb4247e74a51f5d8b775a1ee4abf87cc089a upstream.

We have been facing some problems with self-suspending constrained
deadline tasks. The main reason is that the original CBS was not
designed for this sort of task.

One problem reported by Xunlei Pang takes place when a task
suspends, and then is awakened before the deadline, but so close
to the deadline that its remaining runtime can cause the task
to have an absolute density higher than allowed. In such a situation,
the original CBS assumes that the task is facing an early activation,
and so it replenishes the task and sets another deadline, one deadline
in the future. This rule works fine for implicit deadline tasks.
Moreover, it allows the system to adapt the period of a task whose
external event source suffered from a clock drift.

However, this opens the window for bandwidth leakage for constrained
deadline tasks. For instance, a task with the following parameters:

  runtime   = 5 ms
  deadline  = 7 ms
  [density] = 5 / 7 = 0.71
  period    = 1000 ms

If the task runs for 1 ms, and then suspends for another 1 ms,
it will be awakened with the following parameters:

  remaining runtime = 4
  laxity = 5

presenting an absolute density of 4 / 5 = 0.80.

In this case, the original CBS would assume the task had an early
wakeup. Then, CBS will reset the runtime, and the absolute deadline will
be postponed by one relative deadline, allowing the task to run.

The problem is that, if the task runs this pattern forever, it will keep
receiving bandwidth, being able to run 1 ms every 2 ms. Following this
behavior, the task would be able to run 500 ms in 1 sec, thus running
more than the 5 ms / 1 sec that the admission control allowed it to run.

Trying to address the self-suspending case, Luca Abeni, Giuseppe
Lipari, and Juri Lelli [1] revisited the CBS in order to deal with
self-suspending tasks. In the new approach, rather than
replenishing/postponing the absolute deadline, the revised wakeup rule
adjusts the remaining runtime, reducing it to fit into the allowed
density.

A revised version of the idea is:

At a given time t, the maximum absolute density of a task cannot be
higher than its relative density, that is:

  runtime / (deadline - t) <= dl_runtime / dl_deadline

Knowing the laxity of a task (deadline - t), it is possible to move
it to the other side of the inequality, thus enabling us to define the
maximum remaining runtime a task can use within the absolute deadline,
without over-running the allowed density:

  runtime = (dl_runtime / dl_deadline) * (deadline - t)

For instance, in our previous example, the task could still run:

  runtime = ( 5 / 7 ) * 5
  runtime = 3.57 ms

Without causing damage to other deadline tasks. It is noteworthy
that the laxity cannot be negative, because that would cause a negative
runtime. Thus, this patch depends on the patch:

  df8eac8cafce ("sched/deadline: Throttle a constrained deadline task activated after the deadline")

Which throttles a constrained deadline task activated after the
deadline.

Finally, it is also possible to use the revised wakeup rule for
all other tasks, but that would require some more discussions
about pros and cons.

[The main difference from the original commit is that
 the BW_SHIFT define was not present yet. As BW_SHIFT was
 introduced in a new feature, I just used the value (20),
 as we used to before the #define existed.
 Other changes were required because of comments. - bristot]
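
In sketch form, the wakeup-time adjustment amounts to (simplified;
field names as in struct sched_dl_entity, using the fixed-point shift
of 20 mentioned above in place of BW_SHIFT):

  u64 laxity = dl_se->deadline - rq_clock(rq);
  u64 dl_density = (dl_se->dl_runtime << 20) / dl_se->dl_deadline;

  /* runtime = (dl_runtime / dl_deadline) * (deadline - t) */
  dl_se->runtime = (dl_density * laxity) >> 20;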

Reported-by: Xunlei Pang <xpang@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
[peterz: replaced dl_is_constrained with dl_is_implicit]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luca Abeni <luca.abeni@santannapisa.it>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Romulo Silva de Oliveira <romulo.deoliveira@ufsc.br>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tommaso Cucinotta <tommaso.cucinotta@sssup.it>
Link: http://lkml.kernel.org/r/5c800ab3a74a168a84ee5f3f84d12a02e11383be.1495803804.git.bristot@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-01-31 12:06:07 +01:00
Greg Kroah-Hartman
55b3b8c2b5 This is the 4.4.108 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlpA+4sACgkQONu9yGCS
 aT4RphAAkoRI16GF7c2U1aLVGcYA5zezxnYEjtFXjoM7sLwmfEBeSTTR8OqCZGaa
 hLvhrqCbF6qjgu0dKAKaLasnoUk+/ZEpxSE0JAlQ0ZdmCP9YH+Sd2PqjD48QGJ80
 O2JktsYT3MjaNKFHNeSElf2ffk3mDzRpeeHEtSduJE3/Pqvoch2qV36qJWstOpLR
 QKEQCKVP9Xa71V5Zu5tDAMFI0IQ09HRjBjyjlAxnJL+/wgYM/NTraZ1/3Ju8nGDU
 Ohamr80SdIZ0+36Gl9mFTDQRl5yuh2SllqaD4Hq4P41HPIWaXuTuKQwA0LD7U8U6
 N7I5byN7QVmOszERc0jzkQPd4aN1FWfDqAR5i6S+fDbianDiHvBNsybihayWqzay
 MgX3VnzhFrnkh5UcijKOjsRHp/CkDIAfUseb5dFekAjuctWtNK7qkE+bGySZJ9ET
 XO2b8xlRz/nYuWodsAXT6FhVaVd45ba+VR/nDwSzijf+7SY2Ub9/lwChePbI8VJj
 vSiBURRtZhEgwXXJYH3JT4L+MSqfKA+1Jd+G7BqvZaSQ6S8RLvts/JZAX1nGs0JK
 B2a84CD0lXd5RcMBRFXkfzCEZDA1hB/oLpmVVXxNROseSlSc/ExQ3xZ9UPXlWCnN
 dB7XCV5GoV9Fqx5FX0lg6eEkNzUrkFV/My/FKaJtZR8U0TJB1nk=
 =uslW
 -----END PGP SIGNATURE-----

Merge 4.4.108 into android-4.4

Changes in 4.4.108
	arm64: Initialise high_memory global variable earlier
	cxl: Check if vphb exists before iterating over AFU devices
	x86/mm: Add INVPCID helpers
	x86/mm: Fix INVPCID asm constraint
	x86/mm: Add a 'noinvpcid' boot option to turn off INVPCID
	x86/mm: If INVPCID is available, use it to flush global mappings
	mm/rmap: batched invalidations should use existing api
	mm/mmu_context, sched/core: Fix mmu_context.h assumption
	sched/core: Add switch_mm_irqs_off() and use it in the scheduler
	x86/mm: Build arch/x86/mm/tlb.c even on !SMP
	x86/mm, sched/core: Uninline switch_mm()
	x86/mm, sched/core: Turn off IRQs in switch_mm()
	ARM: Hide finish_arch_post_lock_switch() from modules
	sched/core: Idle_task_exit() shouldn't use switch_mm_irqs_off()
	x86/irq: Do not substract irq_tlb_count from irq_call_count
	ALSA: hda - add support for docking station for HP 820 G2
	ALSA: hda - add support for docking station for HP 840 G3
	arm: kprobes: Fix the return address of multiple kretprobes
	arm: kprobes: Align stack to 8-bytes in test code
	cpuidle: Validate cpu_dev in cpuidle_add_sysfs()
	r8152: fix the list rx_done may be used without initialization
	crypto: deadlock between crypto_alg_sem/rtnl_mutex/genl_mutex
	sch_dsmark: fix invalid skb_cow() usage
	bna: integer overflow bug in debugfs
	net: qmi_wwan: Add USB IDs for MDM6600 modem on Motorola Droid 4
	usb: gadget: f_uvc: Sanity check wMaxPacketSize for SuperSpeed
	usb: gadget: udc: remove pointer dereference after free
	netfilter: nfnl_cthelper: fix runtime expectation policy updates
	netfilter: nfnl_cthelper: Fix memory leak
	inet: frag: release spinlock before calling icmp_send()
	pinctrl: st: add irq_request/release_resources callbacks
	scsi: lpfc: Fix PT2PT PRLI reject
	KVM: x86: correct async page present tracepoint
	KVM: VMX: Fix enable VPID conditions
	ARM: dts: ti: fix PCI bus dtc warnings
	hwmon: (asus_atk0110) fix uninitialized data access
	HID: xinmo: fix for out of range for THT 2P arcade controller.
	r8152: prevent the driver from transmitting packets with carrier off
	s390/qeth: no ETH header for outbound AF_IUCV
	bna: avoid writing uninitialized data into hw registers
	net: Do not allow negative values for busy_read and busy_poll sysctl interfaces
	i40e: Do not enable NAPI on q_vectors that have no rings
	RDMA/iser: Fix possible mr leak on device removal event
	irda: vlsi_ir: fix check for DMA mapping errors
	netfilter: nfnl_cthelper: fix a race when walk the nf_ct_helper_hash table
	netfilter: nf_nat_snmp: Fix panic when snmp_trap_helper fails to register
	ARM: dts: am335x-evmsk: adjust mmc2 param to allow suspend
	KVM: pci-assign: do not map smm memory slot pages in vt-d page tables
	isdn: kcapi: avoid uninitialized data
	xhci: plat: Register shutdown for xhci_plat
	netfilter: nfnetlink_queue: fix secctx memory leak
	ARM: dma-mapping: disallow dma_get_sgtable() for non-kernel managed memory
	cpuidle: powernv: Pass correct drv->cpumask for registration
	bnxt_en: Fix NULL pointer dereference in reopen failure path
	backlight: pwm_bl: Fix overflow condition
	crypto: crypto4xx - increase context and scatter ring buffer elements
	rtc: pl031: make interrupt optional
	net: phy: at803x: Change error to EINVAL for invalid MAC
	PCI: Avoid bus reset if bridge itself is broken
	scsi: cxgb4i: fix Tx skb leak
	scsi: mpt3sas: Fix IO error occurs on pulling out a drive from RAID1 volume created on two SATA drive
	PCI: Create SR-IOV virtfn/physfn links before attaching driver
	igb: check memory allocation failure
	ixgbe: fix use of uninitialized padding
	PCI/AER: Report non-fatal errors only to the affected endpoint
	scsi: lpfc: Fix secure firmware updates
	scsi: lpfc: PLOGI failures during NPIV testing
	fm10k: ensure we process SM mbx when processing VF mbx
	tcp: fix under-evaluated ssthresh in TCP Vegas
	rtc: set the alarm to the next expiring timer
	cpuidle: fix broadcast control when broadcast can not be entered
	thermal: hisilicon: Handle return value of clk_prepare_enable
	MIPS: math-emu: Fix final emulation phase for certain instructions
	Revert "Bluetooth: btusb: driver to enable the usb-wakeup feature"
	ALSA: hda - Clear the leftover component assignment at snd_hdac_i915_exit()
	ALSA: hda - Degrade i915 binding failure message
	ALSA: hda - Fix yet another i915 pointer leftover in error path
	alpha: fix build failures
	Linux 4.4.108

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-12-27 13:36:00 +01:00
Andy Lutomirski
18a5348d49 sched/core: Idle_task_exit() shouldn't use switch_mm_irqs_off()
commit 252d2a4117bc181b287eeddf848863788da733ae upstream.

idle_task_exit() can be called on x86 with IRQs on, and therefore
should use switch_mm(), not switch_mm_irqs_off().

This doesn't seem to cause any problems right now, but it will
confuse my upcoming TLB flush changes.  Nonetheless, I think it
should be backported because it's trivial.  There won't be any
meaningful performance impact because idle_task_exit() is only
used when offlining a CPU.
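
The change itself is a one-liner in idle_task_exit(), roughly:

  -	switch_mm_irqs_off(mm, &init_mm, current);
  +	switch_mm(mm, &init_mm, current);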

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: f98db6013c55 ("sched/core: Add switch_mm_irqs_off() and use it in the scheduler")
Link: http://lkml.kernel.org/r/ca3d1a9fa93a0b49f5a8ff729eda3640fb6abdf9.1497034141.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-25 14:22:09 +01:00
Andy Lutomirski
425f13a366 sched/core: Add switch_mm_irqs_off() and use it in the scheduler
commit f98db6013c557c216da5038d9c52045be55cd039 upstream.

By default, this is the same thing as switch_mm().

x86 will override it as an optimization.
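 
A sketch of the header-side default; the generic fallback is just an
alias:

  /* Architectures that care about IRQ state can override this. */
  #ifndef switch_mm_irqs_off
  # define switch_mm_irqs_off switch_mm
  #endif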

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/df401df47bdd6be3e389c6f1e3f5310d70e81b2c.1461688545.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-25 14:22:09 +01:00
Greg Kroah-Hartman
9fbf3d7374 This is the 4.4.103 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlofw0sACgkQONu9yGCS
 aT4MPBAAo85uk2d6CXKRkNl3qKWtiStKXUet+NJFVr4GotOeg6ul9yul5jcs4pvl
 BJYnBh2LE77oDCOUKaSKI/0nDOHJs9n5m8GxjvG6cAvfn9RdgNm6kCCxNQFEhpNT
 IrmRrmCMd3aKPNrdz2Cbd4qHzNr0JuIv/bykNHDA/rw+PkQeLzZgiGIw9ftg1yHJ
 npzNLCjfVDPRy4qUCDYSS7+p83oHpWq3tHfha7M1S5HphsjVWjG79ABIKkN8w86z
 5KnY3dqt5tqO4w0gZzKXv0gg4IJS62YqeJbF/dSefASvnBkINIzxBOEu0+xOFQ5t
 ezKkukpe8ivX4eUP2ruF9jAjVLCPYCm6UaWbYQZBAAf04KHC09uXDjB4wdGCINt6
 tdOgfm60OsPHUFjx9KBn8M81Iabq8DYNubp+naG2U/j7lGzh3+mvyAlzQKetXMct
 b69skOxrjfT+2cCYeqz0UupHJigi5VLjX8hjpraXJA9oEwdS5gr9CfckEN3aUysu
 YmQ2LtgGuglUdV3Lc4QptFxRDoKna3E/Gx6rzMDPtRdV1L6dn9CULRz+Pw4T+nWl
 m6Ly9QXJVmC+d6fPW7cOEytPKRIqAUHSXQZxcPNPEcaPxD9CPWGO6TJLanc0BNYS
 g7u9kLA2fWmWnAkvEosP8lxJlQvgorhkXdCpEWuL+mAbnaImpts=
 =2wPT
 -----END PGP SIGNATURE-----

Merge 4.4.103 into android-4.4

Changes in 4.4.103
	s390: fix transactional execution control register handling
	s390/runtime instrumention: fix possible memory corruption
	s390/disassembler: add missing end marker for e7 table
	s390/disassembler: increase show_code buffer size
	ipv6: only call ip6_route_dev_notify() once for NETDEV_UNREGISTER
	AF_VSOCK: Shrink the area influenced by prepare_to_wait
	vsock: use new wait API for vsock_stream_sendmsg()
	sched: Make resched_cpu() unconditional
	lib/mpi: call cond_resched() from mpi_powm() loop
	x86/decoder: Add new TEST instruction pattern
	ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE
	ARM: 8721/1: mm: dump: check hardware RO bit for LPAE
	MIPS: ralink: Fix MT7628 pinmux
	MIPS: ralink: Fix typo in mt7628 pinmux function
	ALSA: hda: Add Raven PCI ID
	dm bufio: fix integer overflow when limiting maximum cache size
	dm: fix race between dm_get_from_kobject() and __dm_destroy()
	MIPS: Fix an n32 core file generation regset support regression
	MIPS: BCM47XX: Fix LED inversion for WRT54GSv1
	autofs: don't fail mount for transient error
	nilfs2: fix race condition that causes file system corruption
	eCryptfs: use after free in ecryptfs_release_messaging()
	bcache: check ca->alloc_thread initialized before wake up it
	isofs: fix timestamps beyond 2027
	NFS: Fix typo in nomigration mount option
	nfs: Fix ugly referral attributes
	nfsd: deal with revoked delegations appropriately
	rtlwifi: rtl8192ee: Fix memory leak when loading firmware
	rtlwifi: fix uninitialized rtlhal->last_suspend_sec time
	ata: fixes kernel crash while tracing ata_eh_link_autopsy event
	ext4: fix interaction between i_size, fallocate, and delalloc after a crash
	ALSA: pcm: update tstamp only if audio_tstamp changed
	ALSA: usb-audio: Add sanity checks to FE parser
	ALSA: usb-audio: Fix potential out-of-bound access at parsing SU
	ALSA: usb-audio: Add sanity checks in v2 clock parsers
	ALSA: timer: Remove kernel warning at compat ioctl error paths
	ALSA: hda/realtek - Fix ALC700 family no sound issue
	fix a page leak in vhost_scsi_iov_to_sgl() error recovery
	fs/9p: Compare qid.path in v9fs_test_inode
	iscsi-target: Fix non-immediate TMR reference leak
	target: Fix QUEUE_FULL + SCSI task attribute handling
	KVM: nVMX: set IDTR and GDTR limits when loading L1 host state
	KVM: SVM: obey guest PAT
	SUNRPC: Fix tracepoint storage issues with svc_recv and svc_rqst_status
	clk: ti: dra7-atl-clock: Fix of_node reference counting
	clk: ti: dra7-atl-clock: fix child-node lookups
	libnvdimm, namespace: fix label initialization to use valid seq numbers
	libnvdimm, namespace: make 'resource' attribute only readable by root
	IB/srpt: Do not accept invalid initiator port names
	IB/srp: Avoid that a cable pull can trigger a kernel crash
	NFC: fix device-allocation error return
	i40e: Use smp_rmb rather than read_barrier_depends
	igb: Use smp_rmb rather than read_barrier_depends
	igbvf: Use smp_rmb rather than read_barrier_depends
	ixgbevf: Use smp_rmb rather than read_barrier_depends
	i40evf: Use smp_rmb rather than read_barrier_depends
	fm10k: Use smp_rmb rather than read_barrier_depends
	ixgbe: Fix skb list corruption on Power systems
	parisc: Fix validity check of pointer size argument in new CAS implementation
	powerpc/signal: Properly handle return value from uprobe_deny_signal()
	media: Don't do DMA on stack for firmware upload in the AS102 driver
	media: rc: check for integer overflow
	cx231xx-cards: fix NULL-deref on missing association descriptor
	media: v4l2-ctrl: Fix flags field on Control events
	sched/rt: Simplify the IPI based RT balancing logic
	fscrypt: lock mutex before checking for bounce page pool
	net/9p: Switch to wait_event_killable()
	PM / OPP: Add missing of_node_put(np)
	e1000e: Fix error path in link detection
	e1000e: Fix return value test
	e1000e: Separate signaling for link check/link up
	RDS: RDMA: return appropriate error on rdma map failures
	PCI: Apply _HPX settings only to relevant devices
	dmaengine: zx: set DMA_CYCLIC cap_mask bit
	net: Allow IP_MULTICAST_IF to set index to L3 slave
	net: 3com: typhoon: typhoon_init_one: make return values more specific
	net: 3com: typhoon: typhoon_init_one: fix incorrect return values
	drm/armada: Fix compile fail
	ath10k: fix incorrect txpower set by P2P_DEVICE interface
	ath10k: ignore configuring the incorrect board_id
	ath10k: fix potential memory leak in ath10k_wmi_tlv_op_pull_fw_stats()
	ath10k: set CTS protection VDEV param only if VDEV is up
	ALSA: hda - Apply ALC269_FIXUP_NO_SHUTUP on HDA_FIXUP_ACT_PROBE
	drm: Apply range restriction after color adjustment when allocation
	mac80211: Remove invalid flag operations in mesh TSF synchronization
	mac80211: Suppress NEW_PEER_CANDIDATE event if no room
	iio: light: fix improper return value
	staging: iio: cdc: fix improper return value
	spi: SPI_FSL_DSPI should depend on HAS_DMA
	netfilter: nft_queue: use raw_smp_processor_id()
	netfilter: nf_tables: fix oob access
	ASoC: rsnd: don't double free kctrl
	btrfs: return the actual error value from from btrfs_uuid_tree_iterate
	ASoC: wm_adsp: Don't overrun firmware file buffer when reading region data
	s390/kbuild: enable modversions for symbols exported from asm
	xen: xenbus driver must not accept invalid transaction ids
	Revert "sctp: do not peel off an assoc from one netns to another one"
	Linux 4.4.103

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-11-30 15:43:08 +00:00
Steven Rostedt (Red Hat)
cb1831a83e sched/rt: Simplify the IPI based RT balancing logic
commit 4bdced5c9a2922521e325896a7bbbf0132c94e56 upstream.

When a CPU lowers its priority (schedules out a high priority task for a
lower priority one), a check is made to see if any other CPU is overloaded
with RT tasks (has more than one). It checks the rto_mask to determine this,
and if so it will request to pull one of those tasks to itself if the
non-running RT task is of higher priority than the new priority of the next
task to run on the current CPU.

When we deal with large number of CPUs, the original pull logic suffered
from large lock contention on a single CPU run queue, which caused a huge
latency across all CPUs. This was caused by only having one CPU having
overloaded RT tasks and a bunch of other CPUs lowering their priority. To
solve this issue, commit:

  b6366f048e ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")

changed the way to request a pull. Instead of grabbing the lock of the
overloaded CPU's runqueue, it simply sent an IPI to that CPU to do the work.

Although the IPI logic worked very well in removing the large latency build
up, it still could suffer from a large number of IPIs being sent to a single
CPU. On an 80 CPU box, I measured over 200us of processing IPIs. Worse yet,
when I tested this on a 120 CPU box, with a stress test that had lots of
RT tasks scheduling on all CPUs, it actually triggered the hard lockup
detector! One CPU had so many IPIs sent to it, and due to the restart
mechanism that is triggered when the source run queue has a priority status
change, the CPU spent minutes! processing the IPIs.

Thinking about this further, I realized there's no reason for each run queue
to send its own IPI. All CPUs with overloaded tasks must be scanned
regardless of whether one or many CPUs are lowering their priority, because
there's no current way to find the CPU with the highest priority task that
can schedule to one of these CPUs, so there really only needs to be one IPI
being sent around at a time.

This greatly simplifies the code!

The new approach is to have each root domain have its own irq work, as the
rto_mask is per root domain. The root domain has the following fields
attached to it:

  rto_push_work	 - the irq work to process each CPU set in rto_mask
  rto_lock	 - the lock to protect some of the other rto fields
  rto_loop_start - an atomic that keeps contention down on rto_lock;
		    the first CPU scheduling in a lower priority task
		    is the one to kick off the process.
  rto_loop_next	 - an atomic that gets incremented for each CPU that
		    schedules in a lower priority task.
  rto_loop	 - a variable protected by rto_lock that is used to
		    compare against rto_loop_next
  rto_cpu	 - The cpu to send the next IPI to, also protected by
		    the rto_lock.

When a CPU schedules in a lower priority task and wants to make sure
overloaded CPUs know about it, it increments rto_loop_next. Then it
atomically sets rto_loop_start with a cmpxchg. If the old value is not "0",
then it is done, as another CPU is kicking off the IPI loop. If the old
value is "0", then it will take the rto_lock to synchronize with a possible
IPI being sent around to the overloaded CPUs.

If rto_cpu is greater than or equal to nr_cpu_ids, then there's either no
IPI being sent around, or one is about to finish. Then rto_cpu is set to the
first CPU in rto_mask and an IPI is sent to that CPU. If there's no CPUs set
in rto_mask, then there's nothing to be done.

When the CPU receives the IPI, it will first try to push any RT tasks that are
queued on the CPU but can't run because a higher priority RT task is
currently running on that CPU.

Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it
finds one, it simply sends an IPI to that CPU and the process continues.

If there's no more CPUs in the rto_mask, then rto_loop is compared with
rto_loop_next. If they match, everything is done and the process is over. If
they do not match, then a CPU scheduled in a lower priority task as the IPI
was being passed around, and the process needs to start again. The first CPU
in rto_mask is sent the IPI.
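
In sketch form, the kick-off side of the flow described above looks
roughly like (simplified from the actual patch):

  struct root_domain *rd = rq->rd;
  int cpu = -1;

  atomic_inc(&rd->rto_loop_next);     /* note that a new round is needed */
  if (!rto_start_trylock(&rd->rto_loop_start))
  	return;                     /* another CPU is kicking off the loop */

  raw_spin_lock(&rd->rto_lock);
  if (rd->rto_cpu < 0)                /* no IPI round currently in flight */
  	cpu = rto_next_cpu(rq);     /* picks the first CPU set in rto_mask */
  raw_spin_unlock(&rd->rto_lock);

  rto_start_unlock(&rd->rto_loop_start);

  if (cpu >= 0)
  	irq_work_queue_on(&rd->rto_push_work, cpu);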

This change removes this duplication of work in the IPI logic, and greatly
lowers the latency caused by the IPIs. This removed the lockup happening on
the 120 CPU machine. It also simplifies the code tremendously. What else
could anyone ask for?

Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and
supplying me with the rto_start_trylock() and rto_start_unlock() helper
functions.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott Wood <swood@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30 08:37:25 +00:00
Paul E. McKenney
8ff3471878 sched: Make resched_cpu() unconditional
commit 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 upstream.

The current implementation of synchronize_sched_expedited() incorrectly
assumes that resched_cpu() is unconditional, which it is not.  This means
that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
fails as follows (analysis by Neeraj Upadhyay):

o	CPU1 is waiting for expedited wait to complete:

	sync_rcu_exp_select_cpus
	     rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
	     IPI sent to CPU5

	synchronize_sched_expedited_wait
		 ret = swait_event_timeout(rsp->expedited_wq,
					   sync_rcu_preempt_exp_done(rnp_root),
					   jiffies_stall);

	expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())

o	CPU5 handles IPI and fails to acquire rq lock.

	Handles IPI
	     sync_sched_exp_handler
		 resched_cpu
		     returns while failing to try lock acquire rq->lock
		 need_resched is not set

o	CPU5 calls  rcu_idle_enter() and as need_resched is not set, goes to
	idle (schedule() is not called).

o	CPU 1 reports RCU stall.

Given that resched_cpu() is now used only by RCU, this commit fixes the
assumption by making resched_cpu() unconditional.
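
The change is essentially the conditional trylock becoming an
unconditional lock, roughly:

  -	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
  -		return;
  +	raw_spin_lock_irqsave(&rq->lock, flags);
  	resched_curr(rq);
  	raw_spin_unlock_irqrestore(&rq->lock, flags);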

Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30 08:37:19 +00:00
Todd Kjos
3822fe484c Revert "ANDROID: sched/rt: schedtune: Add boost retention to RT"
This reverts commit d194ba5d71.

Reason for revert: Broke some builds. Will fix and resubmit.

Change-Id: I4e6fa1562346eda1bbf058f1d5ace5ba6256ce07
2017-11-08 00:43:53 +00:00
Viresh Kumar
df147c9e33 cpufreq: Drop schedfreq governor
We all should be using (and improving) the schedutil governor now. Get
rid of the non-upstream governor.

Tested on Hikey.

Change-Id: Ic660756536e5da51952738c3c18b94e31f58cd57
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2017-11-07 23:57:47 +00:00
Joel Fernandes
d194ba5d71 ANDROID: sched/rt: schedtune: Add boost retention to RT
Boosted RT tasks can be deboosted quickly; this makes boost useless
for RT tasks and causes lots of glitching. Use timers to prevent
de-boosting too soon, and wait long enough that the next enqueue
happens after a threshold.

While this can be solved in the governor, this approach has the
following advantages:
- The approach used is governor-independent
- Reduces boost group lock contention for frequent sleepers/wakers
- Works with schedfreq without any other schedfreq hacks.

Bug: 30210506

Change-Id: I41788b235586988be446505deb7c0529758a9898
Signed-off-by: Joel Fernandes <joelaf@google.com>
2017-11-07 23:47:42 +00:00
Joonwoo Park
9e293db052 sched: EAS: upmigrate misfit current task
Upmigrate misfit current task upon scheduler tick with stopper.

We could kick a random (not necessarily big) NOHZ-idle CPU when a
CPU-bound task is in need of upmigration, but that is not efficient
because it needs the following unnecessary wakeups:

  1. Busy little CPU A to kick idle B
  2. B runs idle balancer and enqueue migration/A
  3. B goes idle
  4. A runs migration/A, enqueues busy task on B.
  5. B wakes up again.

This change makes active upmigration more efficient by doing:

  1. Busy little CPU A find target CPU B upon tick.
  2. CPU A enqueues migration/A.

Change-Id: Ie865738054ea3296f28e6ba01710635efa7193c0
[joonwoop: The original version had logic to reserve CPU.  The logic is
 omitted in this version.]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-11-01 15:09:35 -07:00
Srivatsa Vaddagiri
2da014c0d8 sched: Extend active balance to accept 'push_task' argument
Active balance currently picks one task to migrate from busy cpu to
a chosen cpu (push_cpu). This patch extends active load balance to
recognize a particular task ('push_task') that needs to be migrated to
'push_cpu'. This capability will be leveraged by HMP-aware task
placement in a subsequent patch.
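
A rough sketch of the extension (simplified; surrounding checks and
locking omitted):

  /* request migration of one specific task to push_cpu */
  get_task_struct(p);
  rq->push_task = p;           /* new field: the task to migrate */
  rq->push_cpu = dest_cpu;     /* existing: where to move it */
  rq->active_balance = 1;
  stop_one_cpu_nowait(cpu_of(rq), active_load_balance_cpu_stop, rq,
  		    &rq->active_balance_work);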

Change-Id: If31320111e6cc7044e617b5c3fd6d8e0c0e16952
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org]: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2017-11-01 15:09:34 -07:00
Joel Fernandes
3a353d6cea Revert "sched/core: Warn if ENERGY_AWARE is enabled but data is missing"
This reverts commit a21299785a.

Change-Id: Idb707c80788d8b6d26d400a59d9b14f854cce89f
2017-10-30 21:15:39 +00:00
Joel Fernandes
c2f18159a2 Revert "sched/core: fix have_sched_energy_data build warning"
This reverts commit a899b9085c.

Reverting for now due to suspend warning issues.

Change-Id: I9387297b52552a73d5a253b9e8e4467b366479a5
2017-10-30 21:14:39 +00:00
Joel Fernandes
a899b9085c sched/core: fix have_sched_energy_data build warning
have_sched_energy_data is defined only for CONFIG_SMP, so declare it
only with CONFIG_SMP.
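
That is, the forward declaration becomes:

  #ifdef CONFIG_SMP
  static bool have_sched_energy_data(void);
  #endif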

Fixes warning from intel bot:

tree:   https://android.googlesource.com/kernel/msm android-4.4
head:   a21299785a
commit: a21299785a [5/5] sched/core: Warn
if ENERGY_AWARE is enabled but data is missing
config: i386-randconfig-x002-201743 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        git checkout a21299785a
        # save the attached .config to linux build tree
        make ARCH=i386

All warnings (new ones prefixed by >>):

>> kernel//sched/core.c:94:13: warning: 'have_sched_energy_data' used
but never defined
    static bool have_sched_energy_data(void);
                ^~~~~~~~~~~~~~~~~~~~~~

vim +/have_sched_energy_data +94 kernel//sched/core.c

    93
  > 94  static bool have_sched_energy_data(void);
    95

Change-Id: I266b63ece6fb31d2b5b11821a8244e147ba6d3a4
Signed-off-by: Joel Fernandes <joelaf@google.com>
2017-10-27 13:24:40 -07:00
Brendan Jackman
a21299785a sched/core: Warn if ENERGY_AWARE is enabled but data is missing
If the EAS energy model is missing or incomplete, i.e. sd_scs is NULL, then
sched_group_energy will return -EINVAL on the assumption that it raced with a
CPU hotplug event. In that case, energy_diff will return 0 and the energy-aware
wake path will silently fail to trigger any migrations.

This case can be triggered by disabling CONFIG_SCHED_MC on existing platforms,
so that there are no sched_groups with the SD_SHARE_CAP_STATES flag, leaving
sd_scs NULL.

Add checks so that a warning is printed if EAS is ever enabled while the
necessary data is not present.
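
A minimal sketch of such a check (simplified; placement in the actual
patch may differ):

  if (sched_feat(ENERGY_AWARE))
  	WARN(!have_sched_energy_data(),
  	     "ENERGY_AWARE enabled but energy model data is missing\n");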

Change-Id: Id233a510b5ad8b7fcecac0b1d789e730bbfc7c4a
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
2017-10-27 19:13:36 +00:00
Brendan Jackman
38ddcff85a FROMLIST: sched/fair: Use wake_q length as a hint for wake_wide
(from https://patchwork.kernel.org/patch/9895261/)

This patch adds a parameter to select_task_rq, sibling_count_hint,
allowing the caller, where it has this information, to inform the
sched_class of the number of tasks that are being woken up as part of
the same event.

The wake_q mechanism is one case where this information is available.

select_task_rq_fair can then use the information to detect that it
needs to widen the search space for task placement in order to avoid
overloading the last-level cache domain's CPUs.
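
Roughly, the sched_class hook gains one extra parameter (a sketch of
the interface change, not the full patch):

  int (*select_task_rq)(struct task_struct *p, int task_cpu,
  		      int sd_flag, int flags,
  		      int sibling_count_hint); /* new: wakees in this event */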

                               * * *

The reason I am investigating this change is the following use case
on ARM big.LITTLE (asymmetrical CPU capacity): one task per CPU, where
each task repeatedly does X amount of work and then calls
pthread_barrier_wait (i.e. sleeps until the last task finishes its X
and hits the barrier). On big.LITTLE, the tasks which get a "big" CPU
finish faster, and then those CPUs pull over the tasks that are still
running:

     v CPU v           ->time->

                    -------------
   0  (big)         11111  /333
                    -------------
   1  (big)         22222   /444|
                    -------------
   2  (LITTLE)      333333/
                    -------------
   3  (LITTLE)      444444/
                    -------------

Now when task 4 hits the barrier (at |) and wakes the others up,
there are 4 tasks with prev_cpu=<big> and 0 tasks with
prev_cpu=<little>. want_affine therefore means that we'll only look
in CPUs 0 and 1 (sd_llc), so tasks will be unnecessarily coscheduled
on the bigs until the next load balance, something like this:

     v CPU v           ->time->

                    ------------------------
   0  (big)         11111  /333  31313\33333
                    ------------------------
   1  (big)         22222   /444|424\4444444
                    ------------------------
   2  (LITTLE)      333333/          \222222
                    ------------------------
   3  (LITTLE)      444444/            \1111
                    ------------------------
                                 ^^^
                           underutilization

So, I'm trying to get want_affine = 0 for these tasks.

I don't _think_ any incarnation of the wakee_flips mechanism can help
us here because which task is waker and which tasks are wakees
generally changes with each iteration.

However pthread_barrier_wait (or more accurately FUTEX_WAKE) has the
nice property that we know exactly how many tasks are being woken, so
we can cheat.

It might be a disadvantage that we "widen" _every_ task that's woken in
an event, while select_idle_sibling would work fine for the first
sd_llc_size - 1 tasks.

IIUC, if wake_affine() behaves correctly this trick wouldn't be
necessary on SMP systems, so it might be best guarded by the presence
of SD_ASYM_CPUCAPACITY?

                               * * *

Final note..

In order to observe "perfect" behaviour for this use case, I also had
to disable the TTWU_QUEUE sched feature. Suppose during the wakeup
above we are working through the work queue and have placed tasks 3
and 2, and are about to place task 1:

     v CPU v           ->time->

                    --------------
   0  (big)         11111  /333  3
                    --------------
   1  (big)         22222   /444|4
                    --------------
   2  (LITTLE)      333333/      2
                    --------------
   3  (LITTLE)      444444/          <- Task 1 should go here
                    --------------

If TTWU_QUEUE is enabled, we will not yet have enqueued task
2 (having instead sent a reschedule IPI) or attached its load to CPU
2. So we are likely to also place task 1 on cpu 2. Disabling
TTWU_QUEUE means that we enqueue task 2 before placing task 1,
solving this issue. TTWU_QUEUE is there to minimise rq lock
contention, and I guess that this contention is less of an issue on
big.LITTLE systems since they have relatively few CPUs, which
suggests the trade-off makes sense here.

Change-Id: I2080302839a263e0841a89efea8589ea53bbda9c
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
2017-10-27 18:50:59 +00:00
Joonwoo Park
43bd960dfe sched: WALT: account cumulative window demand
Energy cost estimation has been a long-standing challenge for WALT,
because WALT guides CPU frequency based on the CPU utilization of the
previous window.  Consequently it's not possible to know a newly
waking-up task's energy cost until the end of WALT's current window.

The WALT already tracks 'Previous Runnable Sum' (prev_runnable_sum)
and 'Cumulative Runnable Average' (cr_avg).  They are designed for
CPU frequency guidance and task placement but unfortunately both
are not suitable for the energy cost estimation.

This is because using prev_runnable_sum for energy cost calculation would
make us account CPU and task energy solely based on activity in the
previous window, so, for example, any task that had no activity in the
previous window would be accounted as a 'zero energy cost' task.
Energy estimation with cr_avg is what energy_diff() relies on at present.
However, cr_avg can only represent an instantaneous picture of energy cost;
thus, for example, if a CPU was fully occupied for an entire WALT window
and became idle just before the window boundary, and there is a wake-up,
energy_diff() accounts that CPU as a 'zero energy cost' CPU.

As a result, introduce a new accounting unit, 'Cumulative Window Demand'.
The cumulative window demand tracks all the task demands seen in the
current window, which is neither instantaneous nor actual execution time.
Because task demand represents the estimated scaled execution time when the
task runs a full window, the accumulation of all the demands represents the
predicted CPU load at the end of the window.

Thus we can estimate the CPU's frequency at the end of the current WALT
window with the cumulative window demand.

The use of prev_runnable_sum for CPU frequency guidance and of cr_avg
for task placement has not changed, and they will continue to be used
for those purposes; this patch only aims to add an additional statistic.
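
In sketch form, the accounting adjusts a per-rq sum of task demands
(names approximate; simplified from the actual WALT patch):

  /* task becomes runnable on this rq: */
  rq->cum_window_demand += p->ravg.demand;

  /* task is removed from this rq: */
  rq->cum_window_demand -= p->ravg.demand;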

Change-Id: I9908c77ead9973a26dea2b36c001c2baf944d4f5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2017-10-27 18:10:15 +00:00
Peter Zijlstra
fd4a95dab8 UPSTREAM: sched/core: Add missing update_rq_clock() call in set_user_nice()
Address this rq-clock update bug:

  WARNING: CPU: 30 PID: 195 at ../kernel/sched/sched.h:797 set_next_entity()
  rq->clock_update_flags < RQCF_ACT_SKIP

  Call Trace:
    dump_stack()
    __warn()
    warn_slowpath_fmt()
    set_next_entity()
    ? _raw_spin_lock()
    set_curr_task_fair()
    set_user_nice.part.85()
    set_user_nice()
    create_worker()
    worker_thread()
    kthread()
    ret_from_fork()
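
The fix amounts to one added call right after taking the rq lock in
set_user_nice() (sketch):

  rq = task_rq_lock(p, &flags);
  update_rq_clock(rq);        /* the added call */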

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 2fb8d36787affe26f3536c3d8ec094995a48037d)
Change-Id: I53ba056e72820c7fadb3f022e4ee3b821c0de17d
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:33 +01:00
Peter Zijlstra
bea1b621d9 UPSTREAM: sched/core: Add missing update_rq_clock() in detach_task_cfs_rq()
Instead of adding the update_rq_clock() call all the way at the bottom of
the callstack, add one at the top; this is to aid the later effort to
minimize update_rq_clock() calls.

  WARNING: CPU: 0 PID: 1 at ../kernel/sched/sched.h:797 detach_task_cfs_rq()
  rq->clock_update_flags < RQCF_ACT_SKIP

  Call Trace:
    dump_stack()
    __warn()
    warn_slowpath_fmt()
    detach_task_cfs_rq()
    switched_from_fair()
    __sched_setscheduler()
    _sched_setscheduler()
    sched_set_stop_task()
    cpu_stop_create()
    __smpboot_create_thread.part.2()
    smpboot_register_percpu_thread_cpumask()
    cpu_stop_init()
    do_one_initcall()
    ? print_cpu_info()
    kernel_init_freeable()
    ? rest_init()
    kernel_init()
    ret_from_fork()

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 80f5c1b84baa8180c3c27b7e227429712cd967b6)
Change-Id: Ibffde077d18eabec4c2984158bd9d6d73bd0fb96
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:33 +01:00
Peter Zijlstra
4863faf5e4 UPSTREAM: sched/core: Add missing update_rq_clock() in post_init_entity_util_avg()
Address this rq-clock update bug:

  WARNING: CPU: 0 PID: 0 at ../kernel/sched/sched.h:797 post_init_entity_util_avg()
  rq->clock_update_flags < RQCF_ACT_SKIP

  Call Trace:
    __warn()
    post_init_entity_util_avg()
    wake_up_new_task()
    _do_fork()
    kernel_thread()
    rest_init()
    start_kernel()
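
A sketch of the fix in wake_up_new_task(), so that
post_init_entity_util_avg() sees a freshly updated clock:

  rq = __task_rq_lock(p);
  update_rq_clock(rq);        /* the added call */
  post_init_entity_util_avg(&p->se);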

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 4126bad6717336abe5d666440ae15555563ca53f)
Change-Id: Ibe9a73386896377f96483d195e433259218755a5
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:33 +01:00
Peter Zijlstra
97cb74f485 BACKPORT: sched/fair: Fix PELT integrity for new tasks
Vincent and Yuyang found another few scenarios in which entity
tracking goes wobbly.

The scenarios are basically due to the fact that new tasks are not
immediately attached and thereby differ from the normal situation -- a
task is always attached to a cfs_rq load average (such that it
includes its blocked contribution) and is explicitly
detached/attached on migration to another cfs_rq.

Scenario 1: switch to fair class

  p->sched_class = fair_class;
  if (queued)
    enqueue_task(p);
      ...
        enqueue_entity()
	  enqueue_entity_load_avg()
	    migrated = !sa->last_update_time (true)
	    if (migrated)
	      attach_entity_load_avg()
  check_class_changed()
    switched_from() (!fair)
    switched_to()   (fair)
      switched_to_fair()
        attach_entity_load_avg()

If @p is a new task that hasn't been fair before, it will have
!last_update_time and, per the above, end up in
attach_entity_load_avg() _twice_.

Scenario 2: change between cgroups

  sched_move_group(p)
    if (queued)
      dequeue_task()
    task_move_group_fair()
      detach_task_cfs_rq()
        detach_entity_load_avg()
      set_task_rq()
      attach_task_cfs_rq()
        attach_entity_load_avg()
    if (queued)
      enqueue_task();
        ...
          enqueue_entity()
	    enqueue_entity_load_avg()
	      migrated = !sa->last_update_time (true)
	      if (migrated)
	        attach_entity_load_avg()

As with scenario 1, if @p is a new task, it will have
!last_update_time and we'll end up in attach_entity_load_avg()
_twice_.

Furthermore, notice how we do a detach_entity_load_avg() on something
that wasn't attached to begin with.

As stated above, the problem is that the new task isn't yet attached
to the load tracking and thereby violates the invariant assumption.

This patch remedies this by ensuring a new task is indeed properly
attached to the load tracking on creation, through
post_init_entity_util_avg().

Of course, this isn't entirely as straightforward as one might think,
since the task is hashed before we call wake_up_new_task() and thus
can be poked at. We avoid this by adding TASK_NEW and teaching
cpu_cgroup_can_attach() to refuse such tasks.
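
The refusal itself is small; a sketch close to the upstream diff
(4.4-era field names, where the task state lives in task->state):

  raw_spin_lock_irq(&task->pi_lock);
  /*
   * Avoid calling sched_move_task() before wake_up_new_task() has
   * happened; PELT would otherwise detach+attach a task that was
   * never attached in the first place.
   */
  if (task->state == TASK_NEW)
          ret = -EINVAL;
  raw_spin_unlock_irq(&task->pi_lock);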

.:: BACKPORT

Complicated by the fact that many of the lines changed by the original
of this commit were then changed by:

df217913e72e sched/fair: Factorize attach/detach entity <Vincent Guittot>

and then

d31b1a66cbe0 sched/fair: Factorize PELT update <Vincent Guittot>

Both of these have already been backported here.

Reported-by: Yuyang Du <yuyang.du@intel.com>
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 7dc603c9028ea5d4354e0e317e8481df99b06d7e)
Change-Id: Ibc59eb52310a62709d49a744bd5a24e8b97c4ae8
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:32 +01:00
Vincent Guittot
138a670d97 BACKPORT: sched/cgroup: Fix cpu_cgroup_fork() handling
A new fair task is detached and attached from/to task_group with:

  cgroup_post_fork()
    ss->fork(child) := cpu_cgroup_fork()
      sched_move_task()
        task_move_group_fair()

Which is wrong, because at this point in fork() the task isn't fully
initialized and it cannot 'move' to another group, because it's not
attached to any group yet.

In fact, cpu_cgroup_fork() needs only a small part of
sched_move_task(), so we can just call that small part directly
instead of sched_move_task(). And the task doesn't really migrate
because it is not yet attached, so we need the following sequence:

  do_fork()
    sched_fork()
      __set_task_cpu()

    cgroup_post_fork()
      set_task_rq() # set task group and runqueue

    wake_up_new_task()
      select_task_rq() can select a new cpu
      __set_task_cpu
      post_init_entity_util_avg
        attach_task_cfs_rq()
      activate_task
        enqueue_task

This patch makes that happen.
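
In outline, cpu_cgroup_fork() now performs only the "set group" step;
a sketch following the upstream commit (the fork callback signature is
abbreviated here):

  static void cpu_cgroup_fork(struct task_struct *task, ...)
  {
          rcu_read_lock();
          /* set task group and runqueue only; no detach/attach */
          sched_change_group(task, TASK_SET_GROUP);
          rcu_read_unlock();
  }

  static void task_set_group_fair(struct task_struct *p)
  {
          struct sched_entity *se = &p->se;

          set_task_rq(p, task_cpu(p));
          se->depth = se->parent ? se->parent->depth + 1 : 0;
  }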

BACKPORT: Difference from original commit:

- Removed use of DEQUEUE_MOVE (which isn't defined in 4.4) in
  dequeue_task flags
- Replaced "struct rq_flags rf" with "unsigned long flags".

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
[ Added TASK_SET_GROUP to set depth properly. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit ea86cb4b7621e1298a37197005bf0abcc86348d4)
Change-Id: I8126fd923288acf961218431ffd29d6bf6fd8d72
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:32 +01:00
Peter Zijlstra
6b02ab68ec UPSTREAM: sched/fair: Fix and optimize the fork() path
The task_fork_fair() callback already calls __set_task_cpu() and takes
rq->lock.

If we move the sched_class::task_fork callback in sched_fork() under
the existing p->pi_lock, right after its set_task_cpu() call, we can
avoid doing two such calls and omit the IRQ disabling on the rq->lock.

Change to __set_task_cpu() to skip the migration bits; this is a new
task, not a migration. Similarly, make wake_up_new_task() use
__set_task_cpu() for the same reason: the task hasn't actually
migrated, as it has never run.

This cures the problem of calling migrate_task_rq_fair(), which does
remove_entity_load_avg() on tasks that have never been added to
the load avg to begin with.

This bug would result in transiently messed up load_avg values, averaged
out after a few dozen milliseconds. This is probably the reason why
this bug was not found for such a long time.
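
The resulting fork path, roughly (a sketch after the upstream diff):

  raw_spin_lock_irqsave(&p->pi_lock, flags);
  /*
   * We're setting the CPU for the first time, we don't migrate,
   * so use __set_task_cpu().
   */
  __set_task_cpu(p, cpu);
  if (p->sched_class->task_fork)
          p->sched_class->task_fork(p);
  raw_spin_unlock_irqrestore(&p->pi_lock, flags);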

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit e210bffd39d01b649c94b820c28ff112673266dd)
Change-Id: Icbddbaa6e8c1071859673d8685bc3f38955cf144
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:32 +01:00
Chris Redpath
792510d9b3 BACKPORT: sched/fair: Make it possible to account fair load avg consistently
While set_task_rq_fair() is introduced in mainline by commit ad936d8658fd
("sched/fair: Make it possible to account fair load avg consistently"),
the function actually gets introduced here by the backport of
commit 09a43ace1f98 ("sched/fair: Propagate load during synchronous
attach/detach"). The problem (apart from the confusion introduced by
the backport) is that set_task_rq_fair() is currently not called at
all.

Fix the problem by backporting again commit ad936d8658fd
("sched/fair: Make it possible to account fair load avg consistently").

Original change log:

The current code accounts for the time a task was absent from the fair
class (per ATTACH_AGE_LOAD). However, it does not work correctly when a
task got migrated or moved to another cgroup while outside of the fair
class.

This patch tries to address that by aging on migration. We locklessly
read the 'last_update_time' stamp from both the old and new cfs_rq,
age the load up to the old time, and set it to the new time.

These timestamps should in general not be more than 1 tick apart from
one another, so there is a definite bound on things.
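
In outline (simplified from the original commit; the !CONFIG_64BIT
handling of last_update_time is omitted):

  static void set_task_rq_fair(struct sched_entity *se,
                               struct cfs_rq *prev, struct cfs_rq *next)
  {
          if (!sched_feat(ATTACH_AGE_LOAD))
                  return;

          if (se->avg.last_update_time && prev) {
                  u64 p_last = prev->avg.last_update_time;
                  u64 n_last = next->avg.last_update_time;

                  /* age the load up to the old cfs_rq's clock ... */
                  __update_load_avg(p_last, cpu_of(rq_of(prev)),
                                    &se->avg, 0, 0, NULL);
                  /* ... then stamp it with the new cfs_rq's clock */
                  se->avg.last_update_time = n_last;
          }
  }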

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
[ Changelog, a few edits and !SMP build fix ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1445616981-29904-2-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry-picked from ad936d8658fd348338cb7d42c577dac77892b074)
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Change-Id: I17294ab0ada3901d35895014715fd60952949358
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
2017-10-27 13:30:32 +01:00
Chris Redpath
fac311be26 cpufreq/sched: Consider max cpu capacity when choosing frequencies
When using schedfreq on cpus with max capacity significantly smaller than
1024, the tick update uses non-normalised capacities - this leads to
selecting an incorrect OPP as we were scaling the frequency as if the
max capacity achievable was 1024 rather than the max for that particular
cpu or group. This could result in a cpu being stuck at the lowest OPP
and unable to generate enough utilisation to climb out if the max
capacity is significantly smaller than 1024.

Instead, normalize the capacity to be in the range 0-1024 in the tick
so that when we later select a frequency, we get the correct one.
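
A minimal sketch of the normalisation, assuming capacity_orig_of()
returns the cpu's maximum capacity on the usual 0-1024
(SCHED_CAPACITY_SCALE) scale; the helper name is illustrative:

  /* scale a raw utilisation into this cpu's own 0-1024 range */
  static inline unsigned long normalize_capacity(unsigned long util,
                                                 int cpu)
  {
          return util * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
  }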

Comments are also updated to be clearer about what is needed.

Change-Id: Id84391c7ac015311002ada21813a353ee13bee60
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-10-27 13:30:32 +01:00
Joonwoo Park
ed9e749668 sched: EAS/WALT: finish accounting prior to task_tick
In order to set rq->misfit_task in time, call update_task_ravg() prior
to task_tick.  This reduces upmigration delay by 1 scheduler window.

Change-Id: I7cc80badd423f2e7684125fbfd853b0a3610f0e8
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-10-19 11:56:50 -07:00
Joonwoo Park
40c3aaa56a cpufreq: sched: update capacity request upon tick always
At present, sched_freq_tick() skips the capacity update when the
current frequency is fmax.  This can cause an incorrect frequency drop
when a CPU bound task goes to sleep, for example:
  1) A task (A) enqueues onto CPU 0 and executes for a long time.
  2) A new task (B) which has low task demand enqueues onto CPU 1 and
     executes long enough to become a CPU bound task.
  3) Both CPU 0 and 1 get a scheduler tick but skip sched_freq_tick()
     since the current frequency is fmax.
  4) Task (A) sleeps and lowers CPU 0's capacity request.
  5) Because task (B) voted CPU capacity at step 2 with low demand and
     skipped requesting afterwards, the cluster frequency for both CPU 0
     and 1 drops to match the capacity voted by CPU 1 at step 2, even
     though task (B) on CPU 1 requires max capacity.

Fix this by not skipping CPU capacity voting in the tick path.
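
Schematically, with helper names assumed from this tree's schedfreq
code, the tick changes from a conditional vote to an unconditional
one:

  static void sched_freq_tick(int cpu)
  {
          if (!sched_freq())
                  return;

          /*
           * Previously: bail out early when already at fmax, leaving
           * a stale vote behind.  Now: always refresh this cpu's vote.
           */
          set_cfs_cpu_capacity(cpu, true, cpu_util_freq(cpu));
  }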

Change-Id: Ieb46af1ac96ffce7a5532c58c7f07bf1ada06b86
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-10-19 11:56:50 -07:00
Vikram Mulukutla
be832f69a9 sched: walt: Leverage existing helper APIs to apply invariance
There's no need for a separate hierarchy of notifiers, APIs
and variables in walt.c for the purpose of applying frequency
and IPC invariance. Let's just use capacity_curr_of and get
rid of a lot of the infrastructure relating to capacity,
load_scale_factor etc.
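
For reference, capacity_curr_of() already folds in the
frequency-invariance scaling that the removed infrastructure
duplicated; a sketch of its 4.4-era shape:

  static inline unsigned long capacity_curr_of(int cpu)
  {
          /* max capacity scaled by the current frequency */
          return cpu_rq(cpu)->cpu_capacity_orig *
                 arch_scale_freq_capacity(NULL, cpu)
                 >> SCHED_CAPACITY_SHIFT;
  }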

Change-Id: Ia220e2c896373fa535db05bff60f9aa33aefc978
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2017-10-19 11:56:50 -07:00
Olav Haugan
ec888d46d8 sched: Update task->on_rq when tasks are moving between runqueues
Task->on_rq has three states:
	0 - Task is not on runqueue (rq)
	1 (TASK_ON_RQ_QUEUED) - Task is on rq
	2 (TASK_ON_RQ_MIGRATING) - Task is on rq but in the
	process of being migrated to another rq

When a task is moving between rqs, its task->on_rq state should be
TASK_ON_RQ_MIGRATING in order for WALT to account the rq's cumulative
runnable average correctly.  Without such state marking for all the
classes, WALT's update_history() would try to fix up the task's
demand, which was never contributed to any CPU during migration.
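
The pattern the affected classes now follow matches mainline's
move_queued_task(); abbreviated:

  p->on_rq = TASK_ON_RQ_MIGRATING;
  dequeue_task(src_rq, p, 0);
  set_task_cpu(p, new_cpu);
  /* ... drop src_rq->lock, take dst_rq->lock ... */
  enqueue_task(dst_rq, p, 0);
  p->on_rq = TASK_ON_RQ_QUEUED;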

Change-Id: Iced3428f3924fe8ab5d0075698273ead04f12d5b
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop: Reinforced changelog to explain why this is needed by WALT.
           Fixed conflicts in deadline.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2017-10-16 21:07:06 +00:00
Greg Kroah-Hartman
73a2b70bdf This is the 4.4.92 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlnfNZMACgkQONu9yGCS
 aT4KcxAAg3C7QW8Nm30Ds1ZgUrkFB78YAFLdgZNKzTBfQFwFHhkPR8fTGyCboMV6
 rL56FUDepHpGfnX8yT8PeoWNBzBprwFtCfMKFUi1MKRFpNEde4pLDVoVKQ3ie0dm
 ZMLoRsrzF4UOktRidEmxQr5yNv6qmPwMw6P68Giguu9QDabADjDJMY5Fv/2JQc0q
 UHefFWbA7WDwaNhBpgF0/P3IBgNF781sdYm7lOjiyMCJZoJyBGjyH6jeBSyJy9Qg
 bUhr7jALOe45PU0Lo3KpVjr14OEb78k/c2Iw1M4RuiNQgBrPOdmpGckstbRn1vtz
 yCmaUimLRB+Lu/7RSxI94gOduEt9t2NLQNhNKbbSJvEdU1L8o+MOhCLyaT+VQ2Dg
 xwKpXUefDRcZbetTNBCP4lvivSLo9mwY6U74ZmDUXZhVCJj+v7AriCB8y8k2uFdY
 uRf0Dv3Wf0H7LDFbjuVKirv5rILVGFL8gcg421yFUJQvQ1k0Mx1pBlUI3srjZCMu
 tRP3ObZta32ArGEvIKJAn3uhrguPgy8FWRsG3akh000jwDCsXHLohSX/5FDcnlUH
 q2N9wHoEYBe4aPBaDm5ll2wOvpxubwBbt9xdbj6I5AUPCUfYcAni9D1+3cKDSLK8
 hbSbkKrjK9zFT4qi7QNvKoU8IscTb7lRGqjoT0ee+ws15SvxSVk=
 =oa8l
 -----END PGP SIGNATURE-----

Merge 4.4.92 into android-4.4

Changes in 4.4.92
	usb: gadget: inode.c: fix unbalanced spin_lock in ep0_write
	USB: gadgetfs: Fix crash caused by inadequate synchronization
	USB: gadgetfs: fix copy_to_user while holding spinlock
	usb: gadget: udc: atmel: set vbus irqflags explicitly
	usb-storage: unusual_devs entry to fix write-access regression for Seagate external drives
	usb: renesas_usbhs: fix the BCLR setting condition for non-DCP pipe
	usb: renesas_usbhs: fix usbhsf_fifo_clear() for RX direction
	ALSA: usb-audio: Check out-of-bounds access by corrupted buffer descriptor
	usb: pci-quirks.c: Corrected timeout values used in handshake
	USB: dummy-hcd: fix connection failures (wrong speed)
	USB: dummy-hcd: fix infinite-loop resubmission bug
	USB: dummy-hcd: Fix erroneous synchronization change
	USB: devio: Don't corrupt user memory
	usb: gadget: mass_storage: set msg_registered after msg registered
	USB: g_mass_storage: Fix deadlock when driver is unbound
	lsm: fix smack_inode_removexattr and xattr_getsecurity memleak
	ALSA: compress: Remove unused variable
	ALSA: usx2y: Suppress kernel warning at page allocation failures
	driver core: platform: Don't read past the end of "driver_override" buffer
	Drivers: hv: fcopy: restore correct transfer length
	stm class: Fix a use-after-free
	ftrace: Fix kmemleak in unregister_ftrace_graph
	HID: i2c-hid: allocate hid buffers for real worst case
	iwlwifi: add workaround to disable wide channels in 5GHz
	scsi: sd: Do not override max_sectors_kb sysfs setting
	USB: uas: fix bug in handling of alternate settings
	USB: core: harden cdc_parse_cdc_header
	usb: Increase quirk delay for USB devices
	USB: fix out-of-bounds in usb_set_configuration
	xhci: fix finding correct bus_state structure for USB 3.1 hosts
	iio: adc: twl4030: Fix an error handling path in 'twl4030_madc_probe()'
	iio: adc: twl4030: Disable the vusb3v1 regulator in the error handling path of 'twl4030_madc_probe()'
	iio: ad_sigma_delta: Implement a dedicated reset function
	staging: iio: ad7192: Fix - use the dedicated reset function avoiding dma from stack.
	iio: core: Return error for failed read_reg
	iio: ad7793: Fix the serial interface reset
	iio: adc: mcp320x: Fix readout of negative voltages
	iio: adc: mcp320x: Fix oops on module unload
	uwb: properly check kthread_run return value
	uwb: ensure that endpoint is interrupt
	brcmfmac: setup passive scan if requested by user-space
	drm/i915/bios: ignore HDMI on port A
	nvme: protect against simultaneous shutdown invocations
	sched/cpuset/pm: Fix cpuset vs. suspend-resume bugs
	ext4: fix data corruption for mmap writes
	ext4: Don't clear SGID when inheriting ACLs
	ext4: don't allow encrypted operations without keys
	Linux 4.4.92

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-10-12 12:05:45 +02:00
Peter Zijlstra
90fd673873 sched/cpuset/pm: Fix cpuset vs. suspend-resume bugs
commit 50e76632339d4655859523a39249dd95ee5e93e7 upstream.

Cpusets vs. suspend-resume is _completely_ broken. And it got noticed
because it now resulted in non-cpuset usage breaking too.

On suspend cpuset_cpu_inactive() doesn't call into
cpuset_update_active_cpus() because it doesn't want to move tasks about,
there is no need, all tasks are frozen and won't run again until after
we've resumed everything.

But this means that when we finally do call into
cpuset_update_active_cpus() after resuming the last frozen cpu in
cpuset_cpu_active(), the top_cpuset will not have any difference with
the cpu_active_mask and thus it will not in fact do _anything_.

So the cpuset configuration will not be restored. This was largely
hidden because we would unconditionally create identity domains and
mobile users would not in fact use cpusets much. And servers that do
use cpusets tend not to suspend-resume much.

An additional problem is that we'd not in fact wait for the cpuset work to
finish before resuming the tasks, allowing spurious migrations outside
of the specified domains.

Fix the rebuild by introducing cpuset_force_rebuild() and fix the
ordering with cpuset_wait_for_hotplug().
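
In outline, using the two helpers this patch introduces:

  /* resume: bringing the last frozen cpu back up */
  cpuset_force_rebuild();         /* make the next update rebuild */
  cpuset_update_active_cpus(true);

  /* before thawing user space tasks */
  cpuset_wait_for_hotplug();      /* flush the async cpuset work */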

Reported-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: deb7aa308e ("cpuset: reorganize CPU / memory hotplug handling")
Link: http://lkml.kernel.org/r/20170907091338.orwxrqkbfkki3c24@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12 11:27:35 +02:00
Joonwoo Park
f94958ffa7 cpufreq: sched: WALT: don't apply capacity margin twice
With WALT, all the scheduler classes' load is accounted in scr->cfs
and update_cpu_capacity_request() adds the capacity margin.  At
present, the scheduler also adds the capacity margin in the tick path,
so the margin is applied twice.

Fix this error by using the margin-applied cpu utilization only for
checking whether a frequency increase is needed.
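
Schematically (helper names assumed from this tree; the exact call
flow is simplified): the margin now only gates the ramp-up check,
while the vote itself stays unmargined and lets
update_cpu_capacity_request() apply the margin exactly once:

  if (add_capacity_margin(cpu_util_freq(cpu)) > capacity_curr_of(cpu))
          update_cpu_capacity_request(cpu, true);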

Change-Id: Id7d8cc73b2e4eec70b274ca66e09bb0b16bf6f09
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
(trivial rebase conflict)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-09-01 17:23:40 -07:00
Joonwoo Park
ee4cebd75e sched: EAS/WALT: use cr_avg instead of prev_runnable_sum
WALT accounts two major statistics: CPU load and cumulative task
demand.

CPU load, which accounts each CPU's accumulated absolute execution
time, is used for CPU frequency guidance.  Cumulative task demand,
which reflects each CPU's instantaneous load at a given time, is used
for task placement decisions.

Use cumulative task demand in cpu_util() for task placement and
introduce cpu_util_freq() for frequency guidance.
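
A sketch of the resulting split, assuming the WALT rq fields used in
this tree (scaling factors omitted):

  static inline unsigned long cpu_util(int cpu)
  {
          /* task placement: instantaneous cumulative task demand */
          return cpu_rq(cpu)->cumulative_runnable_avg;
  }

  static inline unsigned long cpu_util_freq(int cpu)
  {
          /* frequency guidance: accumulated absolute execution time */
          return cpu_rq(cpu)->prev_runnable_sum;
  }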

Change-Id: Id928f01dbc8cb2a617cdadc584c1f658022565c5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2017-09-01 17:20:59 -07:00
Greg Kroah-Hartman
9f764bbe06 This is the 4.4.80 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlmHzogACgkQONu9yGCS
 aT72Kg/9Ea02hrf7SCaEmReH0CNBsZiWBp0u/4b6QtXt3TrPDXK0oteIB4SUIVi/
 zOzjU5SkssMLL9RoRQob81DLFJlL0b9ME5nLXxAACe2P74DaRSxA3DDmrYILgerH
 Gnv4k9xjbVMXMjdk6qAZ/SahCFfYPfnPCRO/zPeb3+6EZk8UQpaaB/GNxVCsGFTZ
 AfThsAHYzfFOg2fYdK0T09eDtAFqAokwGY6O8uaigkJt3u5mbMXcgxSp4o322OcG
 V3jxCUPzSk/78QtoSqQErXDCj/30451oLVByMBuRpBJAilsDf6VaURuz1dVfKFW8
 PdkLiy397sir696HwPU0HwHz++kRnZK2u2z//TRDE5wmgsC9VSq9fkggZdmNBol5
 N4ekCWjhYyyJzxf9hTxK/fA4t4KRFtOcdRiEkJj9RDIhT9jxsxPMr3TGJ25LJaUH
 8Qae+nNlYVe7lmaojckGa+AjIMm5HRB7LZnf4VQr1E8kvWpWpwA/0YtnduzPsXhH
 6xqT0rL/1/Z1Jz63/zPAtZ9OSL/ne0hJs+xOuUhKHGwH3oWBKrgmxAH8CAxYq0x9
 Y6ALkDweS3e+vVt+4BcHpUz8JTNTlspMcebt4VvjqvmERpKwmVsl7tEY242Uw4LQ
 wMF50vA9Cc0bVkVS7w2Ns/dn6XEWYpqS4a/MninjaBOMbtMia78=
 =l+tE
 -----END PGP SIGNATURE-----

Merge 4.4.80 into android-4.4

Changes in 4.4.80
	af_key: Add lock to key dump
	pstore: Make spinlock per zone instead of global
	net: reduce skb_warn_bad_offload() noise
	powerpc/pseries: Fix of_node_put() underflow during reconfig remove
	crypto: authencesn - Fix digest_null crash
	md/raid5: add thread_group worker async_tx_issue_pending_all
	drm/vmwgfx: Fix gcc-7.1.1 warning
	drm/nouveau/bar/gf100: fix access to upper half of BAR2
	KVM: PPC: Book3S HV: Context-switch EBB registers properly
	KVM: PPC: Book3S HV: Restore critical SPRs to host values on guest exit
	KVM: PPC: Book3S HV: Reload HTM registers explicitly
	KVM: PPC: Book3S HV: Save/restore host values of debug registers
	Revert "powerpc/numa: Fix percpu allocations to be NUMA aware"
	Staging: comedi: comedi_fops: Avoid orphaned proc entry
	drm/rcar: Nuke preclose hook
	drm: rcar-du: Perform initialization/cleanup at probe/remove time
	drm: rcar-du: Simplify and fix probe error handling
	perf intel-pt: Fix ip compression
	perf intel-pt: Fix last_ip usage
	perf intel-pt: Use FUP always when scanning for an IP
	perf intel-pt: Ensure never to set 'last_ip' when packet 'count' is zero
	xfs: don't BUG() on mixed direct and mapped I/O
	nfc: fdp: fix NULL pointer dereference
	net: phy: Do not perform software reset for Generic PHY
	isdn: Fix a sleep-in-atomic bug
	isdn/i4l: fix buffer overflow
	ath10k: fix null deref on wmi-tlv when trying spectral scan
	wil6210: fix deadlock when using fw_no_recovery option
	mailbox: always wait in mbox_send_message for blocking Tx mode
	mailbox: skip complete wait event if timer expired
	mailbox: handle empty message in tx_tick
	mpt3sas: Don't overreach ioc->reply_post[] during initialization
	kaweth: fix firmware download
	kaweth: fix oops upon failed memory allocation
	sched/cgroup: Move sched_online_group() back into css_online() to fix crash
	PM / Domains: defer dev_pm_domain_set() until genpd->attach_dev succeeds if present
	RDMA/uverbs: Fix the check for port number
	libnvdimm, btt: fix btt_rw_page not returning errors
	ipmi/watchdog: fix watchdog timeout set on reboot
	dentry name snapshots
	v4l: s5c73m3: fix negation operator
	Make file credentials available to the seqfile interfaces
	/proc/iomem: only expose physical resource addresses to privileged users
	vlan: Propagate MAC address to VLANs
	pstore: Allow prz to control need for locking
	pstore: Correctly initialize spinlock and flags
	pstore: Use dynamic spinlock initializer
	net: skb_needs_check() accepts CHECKSUM_NONE for tx
	sched/cputime: Fix prev steal time accounting during CPU hotplug
	xen/blkback: don't free be structure too early
	xen/blkback: don't use xen_blkif_get() in xen-blkback kthread
	tpm: fix a kernel memory leak in tpm-sysfs.c
	tpm: Replace device number bitmap with IDR
	x86/mce/AMD: Make the init code more robust
	r8169: add support for RTL8168 series add-on card.
	ARM: dts: n900: Mark eMMC slot with no-sdio and no-sd flags
	ipv6: Should use consistent conditional judgement for ip6 fragment between __ip6_append_data and ip6_finish_output
	net/mlx4: Remove BUG_ON from ICM allocation routine
	drm/msm: Ensure that the hardware write pointer is valid
	drm/msm: Verify that MSM_SUBMIT_BO_FLAGS are set
	vfio-pci: use 32-bit comparisons for register address for gcc-4.5
	irqchip/keystone: Fix "scheduling while atomic" on rt
	ASoC: tlv320aic3x: Mark the RESET register as volatile
	spi: dw: Make debugfs name unique between instances
	ASoC: nau8825: fix invalid configuration in Pre-Scalar of FLL
	irqchip/mxs: Enable SKIP_SET_WAKE and MASK_ON_SUSPEND
	openrisc: Add _text symbol to fix ksym build error
	dmaengine: ioatdma: Add Skylake PCI Dev ID
	dmaengine: ioatdma: workaround SKX ioatdma version
	dmaengine: ti-dma-crossbar: Add some 'of_node_put()' in error path.
	ARM64: zynqmp: Fix W=1 dtc 1.4 warnings
	ARM64: zynqmp: Fix i2c node's compatible string
	ARM: s3c2410_defconfig: Fix invalid values for NF_CT_PROTO_*
	ACPI / scan: Prefer devices without _HID/_CID for _ADR matching
	usb: gadget: Fix copy/pasted error message
	Btrfs: adjust outstanding_extents counter properly when dio write is split
	tools lib traceevent: Fix prev/next_prio for deadline tasks
	xfrm: Don't use sk_family for socket policy lookups
	perf tools: Install tools/lib/traceevent plugins with install-bin
	perf symbols: Robustify reading of build-id from sysfs
	video: fbdev: cobalt_lcdfb: Handle return NULL error from devm_ioremap
	vfio-pci: Handle error from pci_iomap
	arm64: mm: fix show_pte KERN_CONT fallout
	nvmem: imx-ocotp: Fix wrong register size
	sh_eth: enable RX descriptor word 0 shift on SH7734
	ALSA: usb-audio: test EP_FLAG_RUNNING at urb completion
	HID: ignore Petzl USB headlamp
	scsi: fnic: Avoid sending reset to firmware when another reset is in progress
	scsi: snic: Return error code on memory allocation failure
	ASoC: dpcm: Avoid putting stream state to STOP when FE stream is paused
	Linux 4.4.80

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-08-07 14:29:16 -07:00
Wanpeng Li
62208707b4 sched/cputime: Fix prev steal time accounting during CPU hotplug
commit 3d89e5478bf550a50c99e93adf659369798263b0 upstream.

Commit:

  e9532e69b8d1 ("sched/cputime: Fix steal time accounting vs. CPU hotplug")

... set rq->prev_* to 0 after a CPU hotplug comes back, in order to
fix the case where (after CPU hotplug) steal time is smaller than
rq->prev_steal_time.

However, this should never happen. Steal time was only smaller because of the
KVM-specific bug fixed by the previous patch.  Worse, the previous patch
triggers a bug on CPU hot-unplug/plug operation: because
rq->prev_steal_time is cleared, all of the CPU's past steal time will be
accounted again on hot-plug.

Since the root cause has been fixed, we can just revert commit e9532e69b8d1.

Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 'commit e9532e69b8d1 ("sched/cputime: Fix steal time accounting vs. CPU hotplug")'
Link: http://lkml.kernel.org/r/1465813966-3116-3-git-send-email-wanpeng.li@hotmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andres Oportus <andresoportus@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-06 19:19:43 -07:00
Konstantin Khlebnikov
0e0967e262 sched/cgroup: Move sched_online_group() back into css_online() to fix crash
commit 96b777452d8881480fd5be50112f791c17db4b6b upstream.

Commit:

  2f5177f0fd7e ("sched/cgroup: Fix/cleanup cgroup teardown/init")

.. moved sched_online_group() from css_online() to css_alloc().
It exposes a half-baked task group to the global lists before the
generic cgroup machinery has been initialized.

An LTP testcase (the third in cgroup_regression_test), written for
testing a similar race in kernels 2.6.26-2.6.28, easily triggers this
oops:

  BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
  IP: kernfs_path_from_node_locked+0x260/0x320
  CPU: 1 PID: 30346 Comm: cat Not tainted 4.10.0-rc5-test #4
  Call Trace:
  ? kernfs_path_from_node+0x4f/0x60
  kernfs_path_from_node+0x3e/0x60
  print_rt_rq+0x44/0x2b0
  print_rt_stats+0x7a/0xd0
  print_cpu+0x2fc/0xe80
  ? __might_sleep+0x4a/0x80
  sched_debug_show+0x17/0x30
  seq_read+0xf2/0x3b0
  proc_reg_read+0x42/0x70
  __vfs_read+0x28/0x130
  ? security_file_permission+0x9b/0xc0
  ? rw_verify_area+0x4e/0xb0
  vfs_read+0xa5/0x170
  SyS_read+0x46/0xa0
  entry_SYSCALL_64_fastpath+0x1e/0xad

Here the task group is already linked into the global RCU-protected 'task_groups'
list, but the css->cgroup pointer is still NULL.

This patch reverts this chunk and moves online back to css_online().
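
The reintroduced callback, close to the upstream diff:

  static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
  {
          struct task_group *tg = css_tg(css);
          struct task_group *parent = css_tg(css->parent);

          if (parent)
                  sched_online_group(tg, parent);
          return 0;
  }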

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2f5177f0fd7e ("sched/cgroup: Fix/cleanup cgroup teardown/init")
Link: http://lkml.kernel.org/r/148655324740.424917.5302984537258726349.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-06 19:19:42 -07:00
Greg Kroah-Hartman
59ff2e15be This is the 4.4.78 stable release
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAllxlOsACgkQONu9yGCS
 aT4naw//VrWIuIf523IP6egcS+iprNM1lt8HZIzTQ0b2x0qv82UFA9FSs3e97dbG
 LJAUEmHW+oBbm6AekAUrNFF62qaMM5HYxVgGYeiniA1dRvLCDG/OxzxPc0hv6Kmm
 VsbZBlPQEj2B5tqUOkpvQNcqbKpIVglBsK4tjeHE/mRziUkIoPXWKS6Tt7VKGtBa
 gIGY7VASyKhPtxGCdbKzHsO/IGi3cmpwAyhqQDRBR/5Dxy3p9xlQ4gTpobeL+x8z
 A7WHqVNbgfdz21LBii5xE8+GUwiRjlYhxeFJTjhM6wo2/XCtw1FJgc7EMmsbVtbZ
 xYu1/tkYaZGoxOQ7sH5jNgVjE8IcyVimDa5eJ7p7fbh3AzsyXzAJIRc8cS7cL40F
 jLDWrYDnm5t7ziITrAf0uoMmRZLHqS2Bv5sqaoCxR3D51r6LVaNdavD6hYN1CLRA
 fjlDvnvwxbLPI4YPr7PZNu4oSJiawKx9jEBOTFSYu9L1XLvdvdcVU6ULAdpShnLn
 80a+YJmYsNg54im7sxFUw6z87AScznzirIpXEJoUO+Hs0SN9Rq/BQx8cKvVeaMEI
 z53c4Hci45Go/ozZVqCTxGbkQ6tKWIKBSo6kl3xchwms4lmijpWuwbAkDUzECNvv
 0RgyQvNx4d2cRJ8hIVcdVFzp+h+kgNqVQQ2nDld7/1QSZHLPhFo=
 =vrgw
 -----END PGP SIGNATURE-----

Merge 4.4.78 into android-4.4

Changes in 4.4.78
	net_sched: fix error recovery at qdisc creation
	net: sched: Fix one possible panic when no destroy callback
	net/phy: micrel: configure interrupts after autoneg workaround
	ipv6: avoid unregistering inet6_dev for loopback
	net: dp83640: Avoid NULL pointer dereference.
	tcp: reset sk_rx_dst in tcp_disconnect()
	net: prevent sign extension in dev_get_stats()
	bpf: prevent leaking pointer via xadd on unprivileged
	net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()
	ipv6: dad: don't remove dynamic addresses if link is down
	net: ipv6: Compare lwstate in detecting duplicate nexthops
	vrf: fix bug_on triggered by rx when destroying a vrf
	rds: tcp: use sock_create_lite() to create the accept socket
	brcmfmac: fix possible buffer overflow in brcmf_cfg80211_mgmt_tx()
	cfg80211: Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE
	cfg80211: Validate frequencies nested in NL80211_ATTR_SCAN_FREQUENCIES
	cfg80211: Check if PMKID attribute is of expected size
	irqchip/gic-v3: Fix out-of-bound access in gic_set_affinity
	parisc: Report SIGSEGV instead of SIGBUS when running out of stack
	parisc: use compat_sys_keyctl()
	parisc: DMA API: return error instead of BUG_ON for dma ops on non dma devs
	parisc/mm: Ensure IRQs are off in switch_mm()
	tools/lib/lockdep: Reduce MAX_LOCK_DEPTH to avoid overflowing lock_chain/: Depth
	kernel/extable.c: mark core_kernel_text notrace
	mm/list_lru.c: fix list_lru_count_node() to be race free
	fs/dcache.c: fix spin lockup issue on nlru->lock
	checkpatch: silence perl 5.26.0 unescaped left brace warnings
	binfmt_elf: use ELF_ET_DYN_BASE only for PIE
	arm: move ELF_ET_DYN_BASE to 4MB
	arm64: move ELF_ET_DYN_BASE to 4GB / 4MB
	powerpc: move ELF_ET_DYN_BASE to 4GB / 4MB
	s390: reduce ELF_ET_DYN_BASE
	exec: Limit arg stack to at most 75% of _STK_LIM
	vt: fix unchecked __put_user() in tioclinux ioctls
	mnt: In umount propagation reparent in a separate pass
	mnt: In propagate_umount handle visiting mounts in any order
	mnt: Make propagate_umount less slow for overlapping mount propagation trees
	selftests/capabilities: Fix the test_execve test
	tpm: Get rid of chip->pdev
	tpm: Provide strong locking for device removal
	Add "shutdown" to "struct class".
	tpm: Issue a TPM2_Shutdown for TPM2 devices.
	mm: fix overflow check in expand_upwards()
	crypto: talitos - Extend max key length for SHA384/512-HMAC and AEAD
	crypto: atmel - only treat EBUSY as transient if backlog
	crypto: sha1-ssse3 - Disable avx2
	crypto: caam - fix signals handling
	sched/topology: Fix overlapping sched_group_mask
	sched/topology: Optimize build_group_mask()
	PM / wakeirq: Convert to SRCU
	PM / QoS: return -EINVAL for bogus strings
	tracing: Use SOFTIRQ_OFFSET for softirq detection for more accurate results
	KVM: x86: disable MPX if host did not enable MPX XSAVE features
	kvm: vmx: Do not disable intercepts for BNDCFGS
	kvm: x86: Guest BNDCFGS requires guest MPX support
	kvm: vmx: Check value written to IA32_BNDCFGS
	kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS
	Linux 4.4.78

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2017-07-21 09:14:57 +02:00
Lauro Ramos Venancio
988067ec96 sched/topology: Optimize build_group_mask()
commit f32d782e31bf079f600dcec126ed117b0577e85c upstream.

The group mask is always used in intersection with the group CPUs. So,
when building the group mask, we don't have to care about CPUs that are
not part of the group.
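
The loop then iterates the group's own CPUs rather than the whole
domain span; a sketch close to the resulting code:

  static void build_group_mask(struct sched_domain *sd,
                               struct sched_group *sg)
  {
          const struct cpumask *sg_span = sched_group_cpus(sg);
          struct sd_data *sdd = sd->private;
          struct sched_domain *sibling;
          int i;

          for_each_cpu(i, sg_span) {      /* was: the whole sd span */
                  sibling = *per_cpu_ptr(sdd->sd, i);
                  if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
                          continue;
                  cpumask_set_cpu(i, sched_group_mask(sg));
          }
  }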

Signed-off-by: Lauro Ramos Venancio <lvenanci@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: lwang@redhat.com
Cc: riel@redhat.com
Link: http://lkml.kernel.org/r/1492717903-5195-2-git-send-email-lvenanci@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-21 07:44:59 +02:00
Peter Zijlstra
5c34f49776 sched/topology: Fix overlapping sched_group_mask
commit 73bb059f9b8a00c5e1bf2f7ca83138c05d05e600 upstream.

The point of sched_group_mask is to select those CPUs from
sched_group_cpus that can actually arrive at this balance domain.

The current code gets it wrong, as can be readily demonstrated with a
topology like:

  node   0   1   2   3
    0:  10  20  30  20
    1:  20  10  20  30
    2:  30  20  10  20
    3:  20  30  20  10

Where (for example) domain 1 on CPU1 ends up with a mask that includes
CPU0:

  [] CPU1 attaching sched-domain:
  []  domain 0: span 0-2 level NUMA
  []   groups: 1 (mask: 1), 2, 0
  []   domain 1: span 0-3 level NUMA
  []    groups: 0-2 (mask: 0-2) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)

This causes sched_balance_cpu() to compute the wrong CPU and
consequently should_we_balance() will terminate early resulting in
missed load-balance opportunities.

The fixed topology looks like:

  [] CPU1 attaching sched-domain:
  []  domain 0: span 0-2 level NUMA
  []   groups: 1 (mask: 1), 2, 0
  []   domain 1: span 0-3 level NUMA
  []    groups: 0-2 (mask: 1) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)

(note: this relies on OVERLAP domains to always have children, this is
 true because the regular topology domains are still here -- this is
 before degenerate trimming)

Debugged-by: Lauro Ramos Venancio <lvenanci@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Fixes: e3589f6c81 ("sched: Allow for overlapping sched_domain spans")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-21 07:44:59 +02:00
Andres Oportus
aa8882923a sched/core: Fix PELT jump to max OPP upon util increase
Change-Id: Ic80b588ec466ef707f658dcea039fd0d6b384b63
Signed-off-by: Andres Oportus <andresoportus@google.com>
2017-06-02 08:01:54 -07:00
Dietmar Eggemann
55af384815 sched: EAS & 'single cpu per cluster'/cpu hotplug interoperability
For Energy-Aware Scheduling (EAS) to work properly, even in the
case that there is only one cpu per cluster or that cpus are hot-plugged
out, the Energy Model (EM) data on all energy-aware sched domains (sd)
has to be present for all online cpus.

Mainline sd hierarchy setup code will remove sd's which are not useful
for task scheduling, e.g. in the following situations:

1. Only 1 cpu is/remains in one cluster of a multi cluster system.

   This remaining cpu only has DIE and no MC sd.

2. A complete cluster in a two cluster system is hot-plugged out.

   The cpus of the remaining cluster only have MC and no DIE sd.

To make sure that all online cpus keep all their energy-aware sd's,
the sd degenerate functionality has been changed to not free a sd if
its first sched group (sg) contains EM data in case:

1. There is only 1 cpu left in the sd.

2. Certain sd flags are set that would require at least 2 sg's, but
   only one is present.

Instead of freeing such a sd it now clears only its SD_LOAD_BALANCE
flag. This will make sure that the EAS functionality will always see
all energy-aware sd's for all online cpus.

It will introduce a tiny performance degradation for operations on
affected cpus since the hot-path macro for_each_domain() has to deal
with sd's not contributing to task scheduling at all now.

In most cases the existing code makes sure that task scheduling is not
invoked on a sd with !SD_LOAD_BALANCE.

However, a small change is necessary in update_sd_lb_stats() to make
sure that sd->parent is only initialized to !NULL in case the parent sd
contains more than 1 sg.

The handling of newidle decay values before the SD_LOAD_BALANCE check in
rebalance_domains() stays unchanged.
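
Schematically (hypothetical helper name; the real test inspects the
first sg's EM data and the sd's group count as described above):

  if (sd_degenerate(sd) && first_sg_has_em_data(sd)) {
          /* keep the sd for EAS, just stop balancing on it */
          sd->flags &= ~SD_LOAD_BALANCE;
          return false;                   /* do not free this sd */
  }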

Test (w/ CONFIG_SCHED_DEBUG):

JUNO r0 default system:

$ cat /proc/cpuinfo | grep "^CPU part"
CPU part        : 0xd03
CPU part        : 0xd07
CPU part        : 0xd07
CPU part        : 0xd03
CPU part        : 0xd03
CPU part        : 0xd03

SD names and flags:

$ cat /proc/sys/kernel/sched_domain/cpu*/domain*/name
MC
DIE
MC
DIE
MC
DIE
MC
DIE
MC
DIE
MC
DIE

$ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu*/domain*/flags`
832f
102f
832f
102f
832f
102f
832f
102f
832f
102f
832f
102f

Test 1: Hotplug-out one A57 (CPU part 0xd07) cpu:

$ echo 0 > /sys/devices/system/cpu/cpu1/online

$ cat /proc/cpuinfo | grep "^CPU part"
CPU part        : 0xd03
CPU part        : 0xd07
CPU part        : 0xd03
CPU part        : 0xd03
CPU part        : 0xd03

SD names and flags for remaining A57 (cpu2) cpu:

$ cat /proc/sys/kernel/sched_domain/cpu2/domain*/name
MC
DIE

$ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu2/domain*/flags`
832e <-- MC SD with !SD_LOAD_BALANCE
102f

Test 2: Hotplug-out the entire A57 cluster:

$ echo 0 > /sys/devices/system/cpu/cpu1/online
$ echo 0 > /sys/devices/system/cpu/cpu2/online

$ cat /proc/cpuinfo | grep "^CPU part"
CPU part        : 0xd03
CPU part        : 0xd03
CPU part        : 0xd03
CPU part        : 0xd03

SD names and flags for the remaining A53 (CPU part 0xd03) cluster:

$ cat /proc/sys/kernel/sched_domain/cpu*/domain*/name
MC
DIE
MC
DIE
MC
DIE
MC
DIE

$ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu*/domain*/flags`
832f
102e <-- DIE SD with !SD_LOAD_BALANCE
832f
102e
832f
102e
832f
102e

Change-Id: If24aa2b2628f334abbf0207d39e2a86168d9d673
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
2017-06-02 08:01:54 -07:00
Vincent Guittot
8370e07d82 UPSTREAM: sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list
Fix the insertion of cfs_rq in rq->leaf_cfs_rq_list to ensure that a
child will always be called before its parent.

The hierarchical order in shares update list has been introduced by
commit:

  67e86250f8 ("sched: Introduce hierarchal order on shares update list")

With the current implementation a child can be still put after its
parent.

Let's take the example of:

       root
        \
         b
         /\
         c d*
           |
           e*

with root -> b -> c already enqueued but not d -> e so the
leaf_cfs_rq_list looks like: head -> c -> b -> root -> tail

The branch d -> e will be added the first time that they are enqueued,
starting with e then d.

When e is added, its parent is not already on the list, so e is put at
the tail: head -> c -> b -> root -> e -> tail

Then, d is added at the head because its parent is already on the
list: head -> d -> c -> b -> root -> e -> tail

e is not placed at the right position and will be called last, whereas
it should be called at the beginning.

Because enqueueing follows the bottom-up sequence, we are sure to
finish by adding either a cfs_rq without a parent or a cfs_rq whose
parent is already on the list. We can use this event to detect when a
new branch has been fully added. For the others, whose parents are not
yet added, we have to ensure that they are added after their children
that have just been inserted in the steps before, and after any
potential parents that are already in the list. The easiest way is to
put each cfs_rq just after the last inserted one and to keep track of
that position until the branch is fully added.
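
A sketch of the bookkeeping, following the mainline fix: the rq keeps
a cursor (tmp_alone_branch) to where the branch being built currently
ends; the parent-on-list condition is abbreviated here:

  if (parent_on_list) {
          /* branch complete: insert after the parent, reset cursor */
          list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
                            &parent_cfs_rq->leaf_cfs_rq_list);
          rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
  } else {
          /* branch still growing: insert at the cursor, advance it */
          list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
          rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
  }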

Change-Id: I4fe0b8502ea628c13d14e8e5c5279bce67fb8845
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-3-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 9c2791f936ef5fd04a118b5c284f2c9a95f4a647)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:54 -07:00
Peter Zijlstra
3fd734a8f9 UPSTREAM: sched/fair: Fix post_init_entity_util_avg() serialization
Chris Wilson reported a divide by 0 at:

 post_init_entity_util_avg():

 >    725	if (cfs_rq->avg.util_avg != 0) {
 >    726		sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
 > -> 727		sa->util_avg /= (cfs_rq->avg.load_avg + 1);
 >    728
 >    729		if (sa->util_avg > cap)
 >    730			sa->util_avg = cap;
 >    731	} else {

Which, given the lack of serialization and the code generated from
update_cfs_rq_load_avg(), is entirely possible:

	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
		sa->load_avg = max_t(long, sa->load_avg - r, 0);
		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
		removed_load = 1;
	}

turns into:

  ffffffff81087064:       49 8b 85 98 00 00 00    mov    0x98(%r13),%rax
  ffffffff8108706b:       48 85 c0                test   %rax,%rax
  ffffffff8108706e:       74 40                   je     ffffffff810870b0
  ffffffff81087070:       4c 89 f8                mov    %r15,%rax
  ffffffff81087073:       49 87 85 98 00 00 00    xchg   %rax,0x98(%r13)
  ffffffff8108707a:       49 29 45 70             sub    %rax,0x70(%r13)
  ffffffff8108707e:       4c 89 f9                mov    %r15,%rcx
  ffffffff81087081:       bb 01 00 00 00          mov    $0x1,%ebx
  ffffffff81087086:       49 83 7d 70 00          cmpq   $0x0,0x70(%r13)
  ffffffff8108708b:       49 0f 49 4d 70          cmovns 0x70(%r13),%rcx

Which you'll note ends up with 'sa->load_avg - r' in memory at
ffffffff8108707a.

By calling post_init_entity_util_avg() under rq->lock we're sure to be
fully serialized against PELT updates and cannot observe intermediate
state like this.
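
The fix, roughly (4.4-era locking signatures), moves the call below
the lock acquisition in wake_up_new_task():

  rq = __task_rq_lock(p);
  post_init_entity_util_avg(&p->se);      /* now under rq->lock */
  activate_task(rq, p, 0);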

Change-Id: I56c11886102b7859df82e26c88b1b7c200a39f6e
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yuyang Du <yuyang.du@intel.com>
Cc: bsegall@google.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: steve.muckle@linaro.org
Fixes: 2b8c41daba32 ("sched/fair: Initiate a new task's util avg to a bounded value")
Link: http://lkml.kernel.org/r/20160609130750.GQ30909@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit b7fa30c9cc48c4f55663420472505d3b4f6e1705)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:53 -07:00
Yuyang Du
9de438d27c BACKPORT: sched/fair: Initiate a new task's util avg to a bounded value
A new task's util_avg is set to full utilization of a CPU (100% time
running). This accelerates a new task's utilization ramp-up, useful
for boosting its execution early on. However, it may result in
(insanely) high utilization for a transient time period when a flood
of tasks is spawned. Importantly, it violates the "fundamentally
bounded" CPU utilization, and its side effect is negative if we don't
take any measure to bound it.

This patch proposes an algorithm to address this issue. It has
two methods to approach a sensible initial util_avg:

(1) An expected (or average) util_avg based on its cfs_rq's util_avg:

  util_avg = cfs_rq->util_avg / (cfs_rq->load_avg + 1) * se.load.weight

(2) A trajectory of how successive new tasks' util develops, which
gives 1/2 of the left utilization budget to a new task such that
the additional util is noticeably large (when overall util is low) or
unnoticeably small (when overall util is high enough). In the meantime,
the aggregate utilization is well bounded:

  util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n

where n denotes the nth task.

If util_avg is larger than util_avg_cap, then the effective util is
clamped to the util_avg_cap.
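
The two methods combine roughly as follows (lightly simplified from
the upstream implementation; the cap is written directly in
SCHED_CAPACITY_SCALE units):

  void post_init_entity_util_avg(struct sched_entity *se)
  {
          struct cfs_rq *cfs_rq = cfs_rq_of(se);
          struct sched_avg *sa = &se->avg;
          long cap = (long)(SCHED_CAPACITY_SCALE -
                            cfs_rq->avg.util_avg) / 2;

          if (cap > 0) {
                  if (cfs_rq->avg.util_avg != 0) {
                          /* method (1): cfs_rq average, weight scaled */
                          sa->util_avg  = cfs_rq->avg.util_avg *
                                          se->load.weight;
                          sa->util_avg /= (cfs_rq->avg.load_avg + 1);
                          if (sa->util_avg > cap)
                                  sa->util_avg = cap;
                  } else {
                          /* method (2): half the remaining budget */
                          sa->util_avg = cap;
                  }
                  sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
          }
  }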

Change-Id: Idafe989b24d9e70911666f09800bf1d5a011e1f4
Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: steve.muckle@linaro.org
Link: http://lkml.kernel.org/r/1459283456-21682-1-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 2b8c41daba327c633228169e8bd8ec067ab443f8)
[integrate with schedfreq - schedfreq has a tuneable for init task util
 but this commit removes the use of the tuneable since we have a new
 algorithm for calculating an initial utilisation. I've left the tuneable
 in place, but it is no longer used even when schedfreq is the CPUFreq
 governor]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:53 -07:00
Dietmar Eggemann
633b98b651 sched/core: Add first cpu w/ max/min orig capacity to root domain
This will allow iteration in the wakeup path to start from a cpu with
max or min original capacity, regardless of which cpu the scheduler is
currently running on (smp_processor_id()) or the previous cpu of the
task (task_cpu(p)). This iteration has to happen on a sched_domain
spanning all cpus, in the order of the sched_groups of this
sched_domain as seen by the starting cpu.

In an SMP system, the first cpu with max orig capacity and the one
with min orig capacity are the same. This can temporarily happen on a
big.LITTLE system with hotplug as well.

E.g. the different order of cpu iteration can be used to map schedtune
task parameter 'boosted' into the cpu iteration order in
find_best_target().

Use of READ_ONCE()/WRITE_ONCE() to avoid load/store tearing.
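
In outline (field names as introduced by this commit):

  /* publish, while building the root domain */
  WRITE_ONCE(rd->max_cap_orig_cpu, max_cpu);
  WRITE_ONCE(rd->min_cap_orig_cpu, min_cpu);

  /* consume, in the wakeup path */
  int start_cpu = boosted ? READ_ONCE(rd->max_cap_orig_cpu)
                          : READ_ONCE(rd->min_cap_orig_cpu);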

Change-Id: I812fbd9c7e5f506617e456c0eec3edcd2c016e92
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit fd6e9543c1fd8971a5e2e68e39b2f6e591d46114)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2017-06-02 08:01:53 -07:00