* origin/tmp-da9a92f:
  arm64: kaslr: increase randomization granularity
  arm64: relocatable: deal with physically misaligned kernel images
  arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it
  arm64: kernel: replace early 64-bit literal loads with move-immediates
  arm64: introduce mov_q macro to move a constant into a 64-bit register
  arm64: kernel: perform relocation processing from ID map
  arm64: kernel: use literal for relocated address of __secondary_switched
  arm64: kernel: don't export local symbols from head.S
  arm64: simplify kernel segment mapping granularity
  arm64: cover the .head.text section in the .text segment mapping
  arm64: move early boot code to the .init segment
  arm64: use 'segment' rather than 'chunk' to describe mapped kernel regions
  arm64: mm: Mark .rodata as RO
  Linux 4.4.16
  ovl: verify upper dentry before unlink and rename
  drm/i915: Revert DisplayPort fast link training feature
  tmpfs: fix regression hang in fallocate undo
  tmpfs: don't undo fallocate past its last page
  crypto: qat - make qat_asym_algs.o depend on asn1 headers
  xen/acpi: allow xen-acpi-processor driver to load on Xen 4.7
  File names with trailing period or space need special case conversion
  cifs: dynamic allocation of ntlmssp blob
  Fix reconnect to not defer smb3 session reconnect long after socket reconnect
  53c700: fix BUG on untagged commands
  s390: fix test_fp_ctl inline assembly contraints
  scsi: fix race between simultaneous decrements of ->host_failed
  ovl: verify upper dentry in ovl_remove_and_whiteout()
  ovl: Copy up underlying inode's ->i_mode to overlay inode
  ARM: mvebu: fix HW I/O coherency related deadlocks
  ARM: dts: armada-38x: fix MBUS_ID for crypto SRAM on Armada 385 Linksys
  ARM: sunxi/dt: make the CHIP inherit from allwinner,sun5i-a13
  ALSA: hda: add AMD Stoney PCI ID with proper driver caps
  ALSA: hda - fix use-after-free after module unload
  ALSA: ctl: Stop notification after disconnection
  ALSA: pcm: Free chmap at PCM free callback, too
  ALSA: hda/realtek - add new pin definition in alc225 pin quirk table
  ALSA: hda - fix read before array start
  ALSA: hda - Add PCI ID for Kabylake-H
  ALSA: hda/realtek: Add Lenovo L460 to docking unit fixup
  ALSA: timer: Fix negative queue usage by racy accesses
  ALSA: echoaudio: Fix memory allocation
  ALSA: au88x0: Fix calculation in vortex_wtdma_bufshift()
  ALSA: hda / realtek - add two more Thinkpad IDs (5050,5053) for tpt460 fixup
  ALSA: hda - Fix the headset mic jack detection on Dell machine
  ALSA: dummy: Fix a use-after-free at closing
  hwmon: (dell-smm) Cache fan_type() calls and change fan detection
  hwmon: (dell-smm) Disallow fan_type() calls on broken machines
  hwmon: (dell-smm) Restrict fan control and serial number to CAP_SYS_ADMIN by default
  tty/vt/keyboard: fix OOB access in do_compute_shiftstate()
  tty: vt: Fix soft lockup in fbcon cursor blink timer.
  iio:ad7266: Fix probe deferral for vref
  iio:ad7266: Fix support for optional regulators
  iio:ad7266: Fix broken regulator error handling
  iio: accel: kxsd9: fix the usage of spi_w8r8()
  staging: iio: accel: fix error check
  iio: hudmidity: hdc100x: fix incorrect shifting and scaling
  iio: humidity: hdc100x: fix IIO_TEMP channel reporting
  iio: humidity: hdc100x: correct humidity integration time mask
  iio: proximity: as3935: fix buffer stack trashing
  iio: proximity: as3935: remove triggered buffer processing
  iio: proximity: as3935: correct IIO_CHAN_INFO_RAW output
  iio: light apds9960: Add the missing dev.parent
  iio:st_pressure: fix sampling gains (bring inline with ABI)
  iio: Fix error handling in iio_trigger_attach_poll_func
  xen/balloon: Fix declared-but-not-defined warning
  perf/x86: Fix undefined shift on 32-bit kernels
  memory: omap-gpmc: Fix omap gpmc EXTRADELAY timing
  drm/vmwgfx: Fix error paths when mapping framebuffer
  drm/vmwgfx: Delay pinning fbdev framebuffer until after mode set
  drm/vmwgfx: Check pin count before attempting to move a buffer
  drm/vmwgfx: Work around mode set failure in 2D VMs
  drm/vmwgfx: Add an option to change assumed FB bpp
  drm/ttm: Make ttm_bo_mem_compat available
  drm: atmel-hlcdc: actually disable scaling when no scaling is required
  drm: make drm_atomic_set_mode_prop_for_crtc() more reliable
  drm: add missing drm_mode_set_crtcinfo call
  drm/i915: Update CDCLK_FREQ register on BDW after changing cdclk frequency
  drm/i915: Update ifdeffery for mutex->owner
  drm/i915: Refresh cached DP port register value on resume
  drm/i915/ilk: Don't disable SSC source if it's in use
  drm/nouveau/disp/sor/gf119: select correct sor when poking training pattern
  drm/nouveau: fix for disabled fbdev emulation
  drm/nouveau/fbcon: fix out-of-bounds memory accesses
  drm/nouveau/gr/gf100-: update sm error decoding from gk20a nvgpu headers
  drm/nouveau/disp/sor/gf119: both links use the same training register
  virtio_balloon: fix PFN format for virtio-1
  drm/dp/mst: Always clear proposed vcpi table for port.
  drm/amdkfd: destroy dbgmgr in notifier release
  drm/amdkfd: unbind only existing processes
  ubi: Make recover_peb power cut aware
  drm/amdgpu/gfx7: fix broken condition check
  drm/radeon: fix asic initialization for virtualized environments
  btrfs: account for non-CoW'd blocks in btrfs_abort_transaction
  percpu: fix synchronization between synchronous map extension and chunk destruction
  percpu: fix synchronization between chunk->map_extend_work and chunk destruction
  af_unix: fix hard linked sockets on overlay
  vfs: add d_real_inode() helper
  arm64: Rework valid_user_regs
  ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg()
  drm/mgag200: Black screen fix for G200e rev 4
  iommu/amd: Fix unity mapping initialization race
  iommu/vt-d: Enable QI on all IOMMUs before setting root entry
  iommu/arm-smmu: Wire up map_sg for arm-smmu-v3
  base: make module_create_drivers_dir race-free
  tracing: Handle NULL formats in hold_module_trace_bprintk_format()
  HID: multitouch: enable palm rejection for Windows Precision Touchpad
  HID: hiddev: validate num_values for HIDIOCGUSAGES, HIDIOCSUSAGES commands
  HID: elo: kill not flush the work
  KVM: nVMX: VMX instructions: fix segment checks when L1 is in long mode.
  kvm: Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES
  KEYS: potential uninitialized variable
  ARCv2: LLSC: software backoff is NOT needed starting HS2.1c
  ARCv2: Check for LL-SC livelock only if LLSC is enabled
  ipv6: Fix mem leak in rt6i_pcpu
  cdc_ncm: workaround for EM7455 "silent" data interface
  net_sched: fix mirrored packets checksum
  packet: Use symmetric hash for PACKET_FANOUT_HASH.
  sched/fair: Fix cfs_rq avg tracking underflow
  UBIFS: Implement ->migratepage()
  mm: Export migrate_page_move_mapping and migrate_page_copy
  MIPS: KVM: Fix modular KVM under QEMU
  ARM: 8579/1: mm: Fix definition of pmd_mknotpresent
  ARM: 8578/1: mm: ensure pmd_present only checks the valid bit
  ARM: imx6ul: Fix Micrel PHY mask
  NFS: Fix another OPEN_DOWNGRADE bug
  make nfs_atomic_open() call d_drop() on all ->open_context() errors.
  nfsd: check permissions when setting ACLs
  posix_acl: Add set_posix_acl
  nfsd: Extend the mutex holding region around in nfsd4_process_open2()
  nfsd: Always lock state exclusively.
  nfsd4/rpc: move backchannel create logic into rpc code
  writeback: use higher precision calculation in domain_dirty_limits()
  thermal: cpu_cooling: fix improper order during initialization
  uvc: Forward compat ioctls to their handlers directly
  Revert "gpiolib: Split GPIO flags parsing and GPIO configuration"
  x86/amd_nb: Fix boot crash on non-AMD systems
  kprobes/x86: Clear TF bit in fault on single-stepping
  x86, build: copy ldlinux.c32 to image.iso
  locking/static_key: Fix concurrent static_key_slow_inc()
  locking/qspinlock: Fix spin_unlock_wait() some more
  locking/ww_mutex: Report recursive ww_mutex locking early
  of: irq: fix of_irq_get[_byname]() kernel-doc
  of: fix autoloading due to broken modalias with no 'compatible'
  mnt: If fs_fully_visible fails call put_filesystem.
  mnt: Account for MS_RDONLY in fs_fully_visible
  mnt: fs_fully_visible test the proper mount for MNT_LOCKED
  usb: common: otg-fsm: add license to usb-otg-fsm
  USB: EHCI: declare hostpc register as zero-length array
  usb: dwc2: fix regression on big-endian PowerPC/ARM systems
  powerpc/tm: Always reclaim in start_thread() for exec() class syscalls
  powerpc/pseries: Fix IBM_ARCH_VEC_NRCORES_OFFSET since POWER8NVL was added
  powerpc/pseries: Fix PCI config address for DDW
  powerpc/iommu: Remove the dependency on EEH struct in DDW mechanism
  IB/mlx4: Properly initialize GRH TClass and FlowLabel in AHs
  IB/cm: Fix a recently introduced locking bug
  EDAC, sb_edac: Fix rank lookup on Broadwell
  mac80211: Fix mesh estab_plinks counting in STA removal case
  mac80211_hwsim: Add missing check for HWSIM_ATTR_SIGNAL
  mac80211: mesh: flush mesh paths unconditionally
  mac80211: fix fast_tx header alignment
  Linux 4.4.15
  usb: dwc3: exynos: Fix deferred probing storm.
  usb: host: ehci-tegra: Grab the correct UTMI pads reset
  usb: gadget: fix spinlock dead lock in gadgetfs
  USB: mos7720: delete parport
  xhci: Fix handling timeouted commands on hosts in weird states.
  USB: xhci: Add broken streams quirk for Frescologic device id 1009
  usb: xhci-plat: properly handle probe deferral for devm_clk_get()
  xhci: Cleanup only when releasing primary hcd
  usb: musb: host: correct cppi dma channel for isoch transfer
  usb: musb: Ensure rx reinit occurs for shared_fifo endpoints
  usb: musb: Stop bulk endpoint while queue is rotated
  usb: musb: only restore devctl when session was set in backup
  usb: quirks: Add no-lpm quirk for Acer C120 LED Projector
  usb: quirks: Fix sorting
  USB: uas: Fix slave queue_depth not being set
  crypto: user - re-add size check for CRYPTO_MSG_GETALG
  crypto: ux500 - memmove the right size
  crypto: vmx - Increase priority of aes-cbc cipher
  AX.25: Close socket connection on session completion
  bpf: try harder on clones when writing into skb
  net: alx: Work around the DMA RX overflow issue
  net: macb: fix default configuration for GMAC on AT91
  neigh: Explicitly declare RCU-bh read side critical section in neigh_xmit()
  bpf, perf: delay release of BPF prog after grace period
  sock_diag: do not broadcast raw socket destruction
  Bridge: Fix ipv6 mc snooping if bridge has no ipv6 address
  ipmr/ip6mr: Initialize the last assert time of mfc entries.
  netem: fix a use after free
  esp: Fix ESN generation under UDP encapsulation
  sit: correct IP protocol used in ipip6_err
  net: Don't forget pr_fmt on net_dbg_ratelimited for CONFIG_DYNAMIC_DEBUG
  net_sched: fix pfifo_head_drop behavior vs backlog
  sdcardfs: Truncate packages_gid.list on overflow
  UPSTREAM: cdc_ncm: do not call usbnet_link_change from cdc_ncm_bind
  BACKPORT: proc: add /proc/<pid>/timerslack_ns interface
  BACKPORT: timer: convert timer_slack_ns from unsigned long to u64
  netfilter: xt_quota2: make quota2_log work well
  Revert "usb: gadget: prevent change of Host MAC address of 'usb0' interface"
  BACKPORT: PM / sleep: Go direct_complete if driver has no callbacks
  ANDROID: base-cfg: enable UID_CPUTIME
  UPSTREAM: USB: usbfs: fix potential infoleak in devio
  UPSTREAM: ALSA: timer: Fix leak in events via snd_timer_user_ccallback
  UPSTREAM: ALSA: timer: Fix leak in events via snd_timer_user_tinterrupt
  UPSTREAM: ALSA: timer: Fix leak in SNDRV_TIMER_IOCTL_PARAMS
  ANDROID: configs: remove unused configs
  ANDROID: cpu: send KOBJ_ONLINE event when enabling cpus
  ANDROID: dm verity fec: initialize recursion level
  ANDROID: dm verity fec: fix RS block calculation
  Linux 4.4.14
  netfilter: x_tables: introduce and use xt_copy_counters_from_user
  netfilter: x_tables: do compat validation via translate_table
  netfilter: x_tables: xt_compat_match_from_user doesn't need a retval
  netfilter: ip6_tables: simplify translate_compat_table args
  netfilter: ip_tables: simplify translate_compat_table args
  netfilter: arp_tables: simplify translate_compat_table args
  netfilter: x_tables: don't reject valid target size on some architectures
  netfilter: x_tables: validate all offsets and sizes in a rule
  netfilter: x_tables: check for bogus target offset
  netfilter: x_tables: check standard target size too
  netfilter: x_tables: add compat version of xt_check_entry_offsets
  netfilter: x_tables: assert minimum target size
  netfilter: x_tables: kill check_entry helper
  netfilter: x_tables: add and use xt_check_entry_offsets
  netfilter: x_tables: validate targets of jumps
  netfilter: x_tables: don't move to non-existent next rule
  drm/core: Do not preserve framebuffer on rmfb, v4.
  crypto: qat - fix adf_ctl_drv.c:undefined reference to adf_init_pf_wq
  netfilter: x_tables: fix unconditional helper
  netfilter: x_tables: make sure e->next_offset covers remaining blob size
  netfilter: x_tables: validate e->target_offset early
  MIPS: Fix 64k page support for 32 bit kernels.
  sparc64: Fix return from trap window fill crashes.
  sparc: Harden signal return frame checks.
  sparc64: Take ctx_alloc_lock properly in hugetlb_setup().
  sparc64: Reduce TLB flushes during hugepte changes
  sparc/PCI: Fix for panic while enabling SR-IOV
  sparc64: Fix sparc64_set_context stack handling.
  sparc64: Fix numa node distance initialization
  sparc64: Fix bootup regressions on some Kconfig combinations.
  sparc: Fix system call tracing register handling.
  fix d_walk()/non-delayed __d_free() race
  sched: panic on corrupted stack end
  proc: prevent stacking filesystems on top
  x86/entry/traps: Don't force in_interrupt() to return true in IST handlers
  wext: Fix 32 bit iwpriv compatibility issue with 64 bit Kernel
  ecryptfs: forbid opening files without mmap handler
  memcg: add RCU locking around css_for_each_descendant_pre() in memcg_offline_kmem()
  parisc: Fix pagefault crash in unaligned __get_user() call
  pinctrl: mediatek: fix dual-edge code defect
  powerpc/pseries: Add POWER8NVL support to ibm,client-architecture-support call
  powerpc: Use privileged SPR number for MMCR2
  powerpc: Fix definition of SIAR and SDAR registers
  powerpc/pseries/eeh: Handle RTAS delay requests in configure_bridge
  arm64: mm: always take dirty state from new pte in ptep_set_access_flags
  arm64: Provide "model name" in /proc/cpuinfo for PER_LINUX32 tasks
  crypto: ccp - Fix AES XTS error for request sizes above 4096
  crypto: public_key: select CRYPTO_AKCIPHER
  irqchip/gic-v3: Fix ICC_SGI1R_EL1.INTID decoding mask
  s390/bpf: reduce maximum program size to 64 KB
  s390/bpf: fix recache skb->data/hlen for skb_vlan_push/pop
  gpio: bcm-kona: fix bcm_kona_gpio_reset() warnings
  ARM: fix PTRACE_SETVFPREGS on SMP systems
  ALSA: hda/realtek: Add T560 docking unit fixup
  ALSA: hda/realtek - Add support for new codecs ALC700/ALC701/ALC703
  ALSA: hda/realtek - ALC256 speaker noise issue
  ALSA: hda - Fix headset mic detection problem for Dell machine
  ALSA: hda - Add PCI ID for Kabylake
  KVM: irqfd: fix NULL pointer dereference in kvm_irq_map_gsi
  KVM: x86: fix OOPS after invalid KVM_SET_DEBUGREGS
  vxlan, gre, geneve: Set a large MTU on ovs-created tunnel devices
  geneve: Relax MTU constraints
  vxlan: Relax MTU constraints
  ipv6: Skip XFRM lookup if dst_entry in socket cache is valid
  l2tp: fix configuration passed to setup_udp_tunnel_sock()
  bridge: Don't insert unnecessary local fdb entry on changing mac address
  tcp: record TLP and ER timer stats in v6 stats
  vxlan: Accept user specified MTU value when create new vxlan link
  team: don't call netdev_change_features under team->lock
  sfc: on MC reset, clear PIO buffer linkage in TXQs
  bpf, inode: disallow userns mounts
  uapi glibc compat: fix compilation when !__USE_MISC in glibc
  udp: prevent skbs lingering in tunnel socket queues
  bpf: Use mount_nodev not mount_ns to mount the bpf filesystem
  tuntap: correctly wake up process during uninit
  switchdev: pass pointer to fib_info instead of copy
  tipc: fix nametable publication field in nl compat
  netlink: Fix dump skb leak/double free
  tipc: check nl sock before parsing nested attributes
  scsi: Add QEMU CD-ROM to VPD Inquiry Blacklist
  scsi_lib: correctly retry failed zero length REQ_TYPE_FS commands
  cs-etm: associating output packet with CPU they executed on
  cs-etm: removing unecessary structure field
  cs-etm: account for each trace buffer in the queue
  cs-etm: avoid casting variable
  perf tools: fixing Makefile problems
  perf tools: new naming convention for openCSD
  perf scripts: Add python scripts for CoreSight traces
  perf tools: decoding capailitity for CoreSight traces
  perf symbols: Check before overwriting build_id
  perf tools: pushing driver configuration down to the kernel
  perf tools: add infrastructure for PMU specific configuration
  coresight: etm-perf: incorporating sink definition from the cmd line
  coresight: adding sink parameter to function coresight_build_path()
  perf: passing struct perf_event to function setup_aux()
  perf/core: adding PMU driver specific configuration
  perf tools: adding coresight etm PMU record capabilities
  perf tools: making coresight PMU listable
  coresight: tmc: implementing TMC-ETR AUX space API
  coresight: Add support for Juno platform
  coresight: Handle build path error
  coresight: Fix erroneous memset in tmc_read_unprepare_etr
  coresight: Fix tmc_read_unprepare_etr
  coresight: Fix NULL pointer dereference in _coresight_build_path
  ANDROID: dm verity fec: add missing release from fec_ktype
  ANDROID: dm verity fec: limit error correction recursion
  ANDROID: restrict access to perf events
  FROMLIST: security,perf: Allow further restriction of perf_event_open
  BACKPORT: perf tools: Document the perf sysctls
  Revert "armv6 dcc tty driver"
  Revert "arm: dcc_tty: fix armv6 dcc tty build failure"
  ARM64: Ignore Image-dtb from git point of view
  arm64: add option to build Image-dtb
  ANDROID: usb: gadget: f_midi: set fi->f to NULL when free f_midi function
  Linux 4.4.13
  xfs: handle dquot buffer readahead in log recovery correctly
  xfs: print name of verifier if it fails
  xfs: skip stale inodes in xfs_iflush_cluster
  xfs: fix inode validity check in xfs_iflush_cluster
  xfs: xfs_iflush_cluster fails to abort on error
  xfs: Don't wrap growfs AGFL indexes
  xfs: disallow rw remount on fs with unknown ro-compat features
  gcov: disable tree-loop-im to reduce stack usage
  scripts/package/Makefile: rpmbuild add support of RPMOPTS
  dma-debug: avoid spinlock recursion when disabling dma-debug
  PM / sleep: Handle failures in device_suspend_late() consistently
  ext4: silence UBSAN in ext4_mb_init()
  ext4: address UBSAN warning in mb_find_order_for_block()
  ext4: fix oops on corrupted filesystem
  ext4: clean up error handling when orphan list is corrupted
  ext4: fix hang when processing corrupted orphaned inode list
  drm/imx: Match imx-ipuv3-crtc components using device node in platform data
  drm/i915: Don't leave old junk in ilk active watermarks on readout
  drm/atomic: Verify connector->funcs != NULL when clearing states
  drm/fb_helper: Fix references to dev->mode_config.num_connector
  drm/i915/fbdev: Fix num_connector references in intel_fb_initial_config()
  drm/amdgpu: Fix hdmi deep color support.
  drm/amdgpu: use drm_mode_vrefresh() rather than mode->vrefresh
  drm/vmwgfx: Fix order of operation
  drm/vmwgfx: use vmw_cmd_dx_cid_check for query commands.
  drm/vmwgfx: Enable SVGA_3D_CMD_DX_SET_PREDICATION
  drm/gma500: Fix possible out of bounds read
  sunrpc: fix stripping of padded MIC tokens
  xen: use same main loop for counting and remapping pages
  xen/events: Don't move disabled irqs
  powerpc/eeh: Restore initial state in eeh_pe_reset_and_recover()
  Revert "powerpc/eeh: Fix crash in eeh_add_device_early() on Cell"
  powerpc/eeh: Don't report error in eeh_pe_reset_and_recover()
  powerpc/book3s64: Fix branching to OOL handlers in relocatable kernel
  pipe: limit the per-user amount of pages allocated in pipes
  QE-UART: add "fsl,t1040-ucc-uart" to of_device_id
  wait/ptrace: assume __WALL if the child is traced
  mm: use phys_addr_t for reserve_bootmem_region() arguments
  media: v4l2-compat-ioctl32: fix missing reserved field copy in put_v4l2_create32
  PCI: Disable all BAR sizing for devices with non-compliant BARs
  pinctrl: exynos5440: Use off-stack memory for pinctrl_gpio_range
  clk: bcm2835: divider value has to be 1 or more
  clk: bcm2835: pll_off should only update CM_PLL_ANARST
  clk: at91: fix check of clk_register() returned value
  clk: bcm2835: Fix PLL poweron
  cpuidle: Fix cpuidle_state_is_coupled() argument in cpuidle_enter()
  cpuidle: Indicate when a device has been unregistered
  PM / Runtime: Fix error path in pm_runtime_force_resume()
  mfd: intel_soc_pmic_core: Terminate panel control GPIO lookup table correctly
  mfd: intel-lpss: Save register context on suspend
  hwmon: (ads7828) Enable internal reference
  aacraid: Fix for KDUMP driver hang
  aacraid: Fix for aac_command_thread hang
  aacraid: Relinquish CPU during timeout wait
  rtlwifi: pci: use dev_kfree_skb_irq instead of kfree_skb in rtl_pci_reset_trx_ring
  rtlwifi: Fix logic error in enter/exit power-save mode
  rtlwifi: btcoexist: Implement antenna selection
  rtlwifi: rtl8723be: Add antenna select module parameter
  hwrng: exynos - Fix unbalanced PM runtime put on timeout error path
  ath5k: Change led pin configuration for compaq c700 laptop
  ath10k: fix kernel panic, move arvifs list head init before htt init
  ath10k: fix rx_channel during hw reconfigure
  ath10k: fix firmware assert in monitor mode
  ath10k: fix debugfs pktlog_filter write
  ath9k: Fix LED polarity for some Mini PCI AR9220 MB92 cards.
  ath9k: Add a module parameter to invert LED polarity.
  ARM: dts: imx35: restore existing used clock enumeration
  ARM: dts: exynos: Add interrupt line to MAX8997 PMIC on exynos4210-trats
  ARM: dts: at91: fix typo in sama5d2 PIN_PD24 description
  ARM: mvebu: fix GPIO config on the Linksys boards
  Input: uinput - handle compat ioctl for UI_SET_PHYS
  ASoC: ak4642: Enable cache usage to fix crashes on resume
  affs: fix remount failure when there are no options changed
  MIPS: VDSO: Build with `-fno-strict-aliasing'
  MIPS: lib: Mark intrinsics notrace
  MIPS: Build microMIPS VDSO for microMIPS kernels
  MIPS: Fix sigreturn via VDSO on microMIPS kernel
  MIPS: ptrace: Prevent writes to read-only FCSR bits
  MIPS: ptrace: Fix FP context restoration FCSR regression
  MIPS: Disable preemption during prctl(PR_SET_FP_MODE, ...)
  MIPS: Prevent "restoration" of MSA context in non-MSA kernels
  MIPS: Fix MSA ld_*/st_* asm macros to use PTR_ADDU
  MIPS: Use copy_s.fmt rather than copy_u.fmt
  MIPS: Loongson-3: Reserve 32MB for RS780E integrated GPU
  MIPS: Reserve nosave data for hibernation
  MIPS: ath79: make bootconsole wait for both THRE and TEMT
  MIPS: Sync icache & dcache in set_pte_at
  MIPS: Handle highmem pages in __update_cache
  MIPS: Flush highmem pages in __flush_dcache_page
  MIPS: Fix watchpoint restoration
  MIPS: Fix uapi include in exported asm/siginfo.h
  MIPS: Fix siginfo.h to use strict posix types
  MIPS: Avoid using unwind_stack() with usermode
  MIPS: Don't unwind to user mode with EVA
  MIPS: MSA: Fix a link error on `_init_msa_upper' with older GCC
  MIPS: math-emu: Fix jalr emulation when rd == $0
  MIPS64: R6: R2 emulation bugfix
  coresight: etb10: adjust read pointer only when needed
  coresight: configuring ETF in FIFO mode when acting as link
  coresight: tmc: implementing TMC-ETF AUX space API
  coresight: moving struct cs_buffers to header file
  coresight: tmc: keep track of memory width
  coresight: tmc: make sysFS and Perf mode mutually exclusive
  coresight: tmc: dump system memory content only when needed
  coresight: tmc: adding mode of operation for link/sinks
  coresight: tmc: getting rid of multiple read access
  coresight: tmc: allocating memory when needed
  coresight: tmc: making prepare/unprepare functions generic
  coresight: tmc: splitting driver in ETB/ETF and ETR components
  coresight: tmc: cleaning up header file
  coresight: tmc: introducing new header file
  coresight: tmc: clearly define number of transfers per burst
  coresight: tmc: re-implementing tmc_read_prepare/unprepare() functions
  coresight: tmc: waiting for TMCReady bit before programming
  coresight: tmc: modifying naming convention
  coresight: tmc: adding sysFS management entries
  coresight: etm4x: add tracer ID for A72 Maia processor.
  coresight: etb10: fixing the right amount of words to read
  coresight: stm: adding driver for CoreSight STM component
  coresight: adding path for STM device
  coresight: etm4x: modify q_support type
  coresight: no need to do the forced type conversion
  coresight: removing gratuitous boot time log messages
  coresight: etb10: splitting sysFS "status" entry
  coresight: moving coresight_simple_func() to header file
  coresight: etm4x: implementing the perf PMU API
  coresight: etm4x: implementing user/kernel mode tracing
  coresight: etm4x: moving etm_drvdata::enable to atomic field
  coresight: etm4x: unlocking tracers in default arch init
  coresight: etm4x: splitting etmv4 default configuration
  coresight: etm4x: splitting struct etmv4_drvdata
  coresight: etm4x: adding config and traceid registers
  coresight: etm4x: moving sysFS entries to a dedicated file
  stm class: Support devices that override software assigned masters
  stm class: Remove unnecessary pointer increment
  stm class: Fix stm device initialization order
  stm class: Do not leak the chrdev in error path
  stm class: Remove a pointless line
  stm class: stm_heartbeat: Make nr_devs parameter read-only
  stm class: dummy_stm: Make nr_dummies parameter read-only
  MAINTAINERS: Add a git tree for the stm class
  perf/ring_buffer: Document AUX API usage
  perf/core: Free AUX pages in unmap path
  perf/ring_buffer: Refuse to begin AUX transaction after rb->aux_mmap_count drops
  perf auxtrace: Add perf_evlist pointer to *info_priv_size()
  perf session: Simplify tool stubs
  perf inject: Hit all DSOs for AUX data in JIT and other cases
  perf tools: tracepoint_error() can receive e=NULL, robustify it
  perf evlist: Make perf_evlist__open() open evsels with their cpus and threads (like perf record does)
  perf evsel: Introduce disable() method
  perf cpumap: Auto initialize cpu__max_{node,cpu}
  drivers/hwtracing: make coresight-etm-perf.c explicitly non-modular
  drivers/hwtracing: make coresight-* explicitly non-modular
  coresight: introducing a global trace ID function
  coresight: etm-perf: new PMU driver for ETM tracers
  coresight: etb10: implementing AUX API
  coresight: etb10: adding operation mode for sink->enable()
  coresight: etb10: moving to local atomic operations
  coresight: etm3x: implementing perf_enable/disable() API
  coresight: etm3x: implementing user/kernel mode tracing
  coresight: etm3x: consolidating initial config
  coresight: etm3x: changing default trace configuration
  coresight: etm3x: set progbit to stop trace collection
  coresight: etm3x: adding operation mode for etm_enable()
  coresight: etm3x: splitting struct etm_drvdata
  coresight: etm3x: unlocking tracers in default arch init
  coresight: etm3x: moving sysFS entries to dedicated file
  coresight: etm3x: moving etm_readl/writel to header file
  coresight: moving PM runtime operations to core framework
  coresight: add API to get sink from path
  coresight: associating path with session rather than tracer
  coresight: etm4x: Check every parameter used by dma_xx_coherent.
  coresight: "DEVICE_ATTR_RO" should defined as static.
  coresight: implementing 'cpu_id()' API
  coresight: removing bind/unbind options from sysfs
  coresight: remove csdev's link from topology
  coresight: release reference taken by 'bus_find_device()'
  coresight: coresight_unregister() function cleanup
  coresight: fixing lockdep error
  coresight: fixing indentation problem
  coresight: Fix a typo in Kconfig
  coresight: checking for NULL string in coresight_name_match()
  perf/core: Disable the event on a truncated AUX record
  perf/core: Don't leak event in the syscall error path
  perf/core: Fix perf_sched_count derailment
  stm class: dummy_stm: Add link callback for fault injection
  stm class: Plug stm device's unlink callback
  stm class: Fix a race in unlinking
  stm class: Fix unbalanced module/device refcounting
  stm class: Guard output assignment against concurrency
  stm class: Fix unlocking braino in the error path
  stm class: Add heartbeat stm source device
  stm class: dummy_stm: Create multiple devices
  stm class: Support devices with multiple instances
  stm class: Use driver's packet callback return value
  stm class: Prevent user-controllable allocations
  stm class: Fix link list locking
  stm class: Fix locking in unbinding policy path
  stm class: Select CONFIG_SRCU
  stm class: Hide STM-specific options if STM is disabled
  perf: Synchronously free aux pages in case of allocation failure
  Linux 4.4.12
  kbuild: move -Wunused-const-variable to W=1 warning level
  Revert "scsi: fix soft lockup in scsi_remove_target() on module removal"
  scsi: Add intermediate STARGET_REMOVE state to scsi_target_state
  hpfs: implement the show_options method
  hpfs: fix remount failure when there are no options changed
  UBI: Fix static volume checks when Fastmap is used
  SIGNAL: Move generic copy_siginfo() to signal.h
  thunderbolt: Fix double free of drom buffer
  IB/srp: Fix a debug kernel crash
  ALSA: hda - Fix headset mic detection problem for one Dell machine
  ALSA: hda/realtek - Add support for ALC295/ALC3254
  ALSA: hda - Fix headphone noise on Dell XPS 13 9360
  ALSA: hda/realtek - New codecs support for ALC234/ALC274/ALC294
  mcb: Fixed bar number assignment for the gdd
  clk: bcm2835: add locking to pll*_on/off methods
  locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()
  serial: samsung: Reorder the sequence of clock control when call s3c24xx_serial_set_termios()
  serial: 8250_mid: recognize interrupt source in handler
  serial: 8250_mid: use proper bar for DNV platform
  serial: 8250_pci: fix divide error bug if baud rate is 0
  Fix OpenSSH pty regression on close
  tty/serial: atmel: fix hardware handshake selection
  TTY: n_gsm, fix false positive WARN_ON
  tty: vt, return error when con_startup fails
  xen/x86: actually allocate legacy interrupts on PV guests
  KVM: x86: mask CPUID(0xD,0x1).EAX against host value
  MIPS: KVM: Fix timer IRQ race when writing CP0_Compare
  MIPS: KVM: Fix timer IRQ race when freezing timer
  KVM: x86: fix ordering of cr0 initialization code in vmx_cpu_reset
  KVM: MTRR: remove MSR 0x2f8
  staging: comedi: das1800: fix possible NULL dereference
  usb: gadget: udc: core: Fix argument of dev_err() in usb_gadget_map_request()
  USB: leave LPM alone if possible when binding/unbinding interface drivers
  usb: misc: usbtest: fix pattern tests for scatterlists.
  usb: f_mass_storage: test whether thread is running before starting another
  usb: gadget: f_fs: Fix EFAULT generation for async read operations
  USB: serial: option: add even more ZTE device ids
  USB: serial: option: add more ZTE device ids
  USB: serial: option: add support for Cinterion PH8 and AHxx
  USB: serial: io_edgeport: fix memory leaks in probe error path
  USB: serial: io_edgeport: fix memory leaks in attach error path
  USB: serial: quatech2: fix use-after-free in probe error path
  USB: serial: keyspan: fix use-after-free in probe error path
  USB: serial: mxuport: fix use-after-free in probe error path
  mei: bus: call mei_cl_read_start under device lock
  mei: amthif: discard not read messages
  mei: fix NULL dereferencing during FW initiated disconnection
  Bluetooth: vhci: Fix race at creating hci device
  Bluetooth: vhci: purge unhandled skbs
  Bluetooth: vhci: fix open_timeout vs. hdev race
  mmc: sdhci-pci: Remove MMC_CAP_BUS_WIDTH_TEST for Intel controllers
  mmc: longer timeout for long read time quirk
  dell-rbtn: Ignore ACPI notifications if device is suspended
  ACPI / osi: Fix an issue that acpi_osi=!* cannot disable ACPICA internal strings
  mmc: sdhci-acpi: Remove MMC_CAP_BUS_WIDTH_TEST for Intel controllers
  mmc: mmc: Fix partition switch timeout for some eMMCs
  can: fix handling of unmodifiable configuration options
  irqchip/gic-v3: Configure all interrupts as non-secure Group-1
  irqchip/gic: Ensure ordering between read of INTACK and shared data
  Input: pwm-beeper - fix - scheduling while atomic
  mfd: omap-usb-tll: Fix scheduling while atomic BUG
  sched/loadavg: Fix loadavg artifacts on fully idle and on fully loaded systems
  clk: qcom: msm8916: Fix crypto clock flags
  crypto: sun4i-ss - Replace spinlock_bh by spin_lock_irq{save|restore}
  crypto: talitos - fix ahash algorithms registration
  crypto: caam - fix caam_jr_alloc() ret code
  ring-buffer: Prevent overflow of size in ring_buffer_resize()
  ring-buffer: Use long for nr_pages to avoid overflow failures
  asix: Fix offset calculation in asix_rx_fixup() causing slow transmissions
  fs/cifs: correctly to anonymous authentication for the NTLM(v2) authentication
  fs/cifs: correctly to anonymous authentication for the NTLM(v1) authentication
  fs/cifs: correctly to anonymous authentication for the LANMAN authentication
  fs/cifs: correctly to anonymous authentication via NTLMSSP
  remove directory incorrectly tries to set delete on close on non-empty directories
  kvm: arm64: Fix EC field in inject_abt64
  arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables
  arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str
  arm64: Implement pmdp_set_access_flags() for hardware AF/DBM
  arm64: Implement ptep_set_access_flags() for hardware AF/DBM
  arm64: Ensure pmd_present() returns false after pmd_mknotpresent()
  arm64: Fix typo in the pmdp_huge_get_and_clear() definition
  ext4: iterate over buffer heads correctly in move_extent_per_page()
  perf test: Fix build of BPF and LLVM on older glibc libraries
  perf/core: Fix perf_event_open() vs. execve() race
  perf/x86/intel/pt: Generate PMI in the STOP region as well
  Btrfs: don't use src fd for printk
  UPSTREAM: mac80211: fix "warning: ‘target_metric’ may be used uninitialized"
  Revert "drivers: power: use 'current' instead of 'get_current()'"
  cpufreq: interactive: drop cpufreq_{get,put}_global_kobject func calls
  Revert "cpufreq: interactive: build fixes for 4.4"
  xt_qtaguid: Fix panic caused by processing non-full socket.
  fiq_debugger: Add fiq_debugger.disable option
  UPSTREAM: procfs: fixes pthread cross-thread naming if !PR_DUMPABLE
  FROMLIST: wlcore: Disable filtering in AP role
  Revert "drivers: power: Add watchdog timer to catch drivers which lockup during suspend."
  fiq_debugger: Add option to apply uart overlay by FIQ_DEBUGGER_UART_OVERLAY
  Revert "Recreate asm/mach/mmc.h include file"
  Revert "ARM: Add 'card_present' state to mmc_platfrom_data"
  usb: dual-role: make stub functions inline
  Revert "mmc: Add status IRQ and status callback function to mmc platform data"
  quick selinux support for tracefs
  Revert "hid-multitouch: Filter collections by application usage."
  Revert "HID: steelseries: validate output report details"
  xt_qtaguid: Fix panic caused by synack processing
  Revert "mm: vmscan: Add a debug file for shrinkers"
  Revert "SELinux: Enable setting security contexts on rootfs inodes."
  Revert "SELinux: build fix for 4.1"
  fuse: Add support for d_canonical_path
  vfs: change d_canonical_path to take two paths
  android: recommended.cfg: remove CONFIG_UID_STAT
  netfilter: xt_qtaguid: seq_printf fixes
  Revert "misc: uidstat: Adding uid stat driver to collect network statistics."
  Revert "net: activity_stats: Add statistics for network transmission activity"
  Revert "net: activity_stats: Stop using obsolete create_proc_read_entry api"
  Revert "misc: uidstat: avoid create_stat() race and blockage."
  Revert "misc: uidstat: Remove use of obsolete create_proc_read_entry api"
  Revert "misc seq_printf fixes for 4.4"
  Revert "misc: uid_stat: Include linux/atomic.h instead of asm/atomic.h"
  Revert "net: socket ioctl to reset connections matching local address"
  Revert "net: fix iterating over hashtable in tcp_nuke_addr()"
  Revert "net: fix crash in tcp_nuke_addr()"
  Revert "Don't kill IPv4 sockets when killing IPv6 sockets was requested."
  Revert "tcp: Fix IPV6 module build errors"
  android: base-cfg: remove CONFIG_SWITCH
  Revert "switch: switch class and GPIO drivers."
  Revert "drivers: switch: remove S_IWUSR from dev_attr"
  ANDROID: base-cfg: enable CONFIG_IP_NF_NAT
  BACKPORT: selinux: restrict kernel module loading
  android: base-cfg: enable CONFIG_QUOTA

Conflicts:
	Documentation/sysctl/kernel.txt
	drivers/cpufreq/cpufreq_interactive.c
	drivers/hwtracing/coresight/Kconfig
	drivers/hwtracing/coresight/Makefile
	drivers/hwtracing/coresight/coresight-etm4x.c
	drivers/hwtracing/coresight/coresight-etm4x.h
	drivers/hwtracing/coresight/coresight-priv.h
	drivers/hwtracing/coresight/coresight-stm.c
	drivers/hwtracing/coresight/coresight-tmc.c
	drivers/mmc/core/core.c
	include/linux/coresight-stm.h
	include/linux/coresight.h
	include/linux/msm_mdp.h
	include/uapi/linux/coresight-stm.h
	kernel/events/core.c
	kernel/sched/fair.c
	net/Makefile
	net/ipv4/netfilter/arp_tables.c
	net/ipv4/netfilter/ip_tables.c
	net/ipv4/tcp.c
	net/ipv6/netfilter/ip6_tables.c
	net/netfilter/xt_quota2.c
	sound/core/pcm.c

Change-Id: I17aa0002815014e9bddc47e67769a53c15768a99
Signed-off-by: Runmin Wang <runminw@codeaurora.org>
/*
 * Memory Migration functionality - linux/mm/migrate.c
 *
 * Copyright (C) 2006 Silicon Graphics, Inc., Christoph Lameter
 *
 * Page migration was first developed in the context of the memory hotplug
 * project. The main authors of the migration code are:
 *
 * IWAMOTO Toshihiro <iwamoto@valinux.co.jp>
 * Hirokazu Takahashi <taka@valinux.co.jp>
 * Dave Hansen <haveblue@us.ibm.com>
 * Christoph Lameter
 */

#include <linux/migrate.h>
#include <linux/export.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/pagemap.h>
#include <linux/buffer_head.h>
#include <linux/mm_inline.h>
#include <linux/nsproxy.h>
#include <linux/pagevec.h>
#include <linux/ksm.h>
#include <linux/rmap.h>
#include <linux/topology.h>
#include <linux/cpu.h>
#include <linux/cpuset.h>
#include <linux/writeback.h>
#include <linux/mempolicy.h>
#include <linux/vmalloc.h>
#include <linux/security.h>
#include <linux/backing-dev.h>
#include <linux/syscalls.h>
#include <linux/hugetlb.h>
#include <linux/hugetlb_cgroup.h>
#include <linux/gfp.h>
#include <linux/balloon_compaction.h>
#include <linux/mmu_notifier.h>
#include <linux/page_idle.h>

#include <asm/tlbflush.h>

#define CREATE_TRACE_POINTS
#include <trace/events/migrate.h>

#include "internal.h"
/*
 * migrate_prep() needs to be called before we start compiling a list of pages
 * to be migrated using isolate_lru_page(). If scheduling work on other CPUs is
 * undesirable, use migrate_prep_local().
 */
int migrate_prep(void)
{
	/*
	 * Clear the LRU lists so pages can be isolated.
	 * Note that pages may be moved off the LRU after we have
	 * drained them. Those pages will fail to migrate like other
	 * pages that may be busy.
	 */
	lru_add_drain_all();

	return 0;
}

/* Do the necessary work of migrate_prep but not if it involves other CPUs */
int migrate_prep_local(void)
{
	lru_add_drain();

	return 0;
}
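/*
 * Illustrative sketch, not part of upstream migrate.c (kept under "#if 0" so
 * it cannot affect the build): how a caller is expected to pair
 * migrate_prep() with isolate_lru_page() and migrate_pages(), per the comment
 * above.  example_alloc_target() and example_migrate_one() are hypothetical
 * names; the migrate/isolate APIs are the real v4.4 ones.
 */
#if 0
static struct page *example_alloc_target(struct page *page,
					 unsigned long private, int **result)
{
	/* Allocate a destination page; real callers often honour mempolicy. */
	return alloc_page(GFP_HIGHUSER_MOVABLE);
}

static int example_migrate_one(struct page *page)
{
	LIST_HEAD(pagelist);
	int err;

	migrate_prep();			/* drain per-CPU LRU pagevecs first */

	err = isolate_lru_page(page);	/* page must be on the LRU */
	if (err)
		return err;
	list_add_tail(&page->lru, &pagelist);
	inc_zone_page_state(page, NR_ISOLATED_ANON +
			    page_is_file_cache(page));

	err = migrate_pages(&pagelist, example_alloc_target, NULL, 0,
			    MIGRATE_SYNC, MR_SYSCALL);
	if (err)
		putback_movable_pages(&pagelist);
	return err;
}
#endif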
/*
 * Put previously isolated pages back onto the appropriate lists
 * from where they were once taken off for compaction/migration.
 *
 * This function shall be used whenever the isolated pageset has been
 * built from lru, balloon or hugetlbfs pages. See isolate_migratepages_range()
 * and isolate_huge_page().
 */
void putback_movable_pages(struct list_head *l)
{
	struct page *page;
	struct page *page2;

	list_for_each_entry_safe(page, page2, l, lru) {
		if (unlikely(PageHuge(page))) {
			putback_active_hugepage(page);
			continue;
		}
		list_del(&page->lru);
		dec_zone_page_state(page, NR_ISOLATED_ANON +
				page_is_file_cache(page));
		if (unlikely(isolated_balloon_page(page)))
			balloon_page_putback(page);
		else
			putback_lru_page(page);
	}
}
/*
 * Restore a potential migration pte to a working pte entry
 */
static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
				 unsigned long addr, void *old)
{
	struct mm_struct *mm = vma->vm_mm;
	swp_entry_t entry;
	pmd_t *pmd;
	pte_t *ptep, pte;
	spinlock_t *ptl;

	if (unlikely(PageHuge(new))) {
		ptep = huge_pte_offset(mm, addr);
		if (!ptep)
			goto out;
		ptl = huge_pte_lockptr(hstate_vma(vma), mm, ptep);
	} else {
		pmd = mm_find_pmd(mm, addr);
		if (!pmd)
			goto out;

		ptep = pte_offset_map(pmd, addr);

		/*
		 * Peek to check is_swap_pte() before taking ptlock? No, we
		 * can race mremap's move_ptes(), which skips anon_vma lock.
		 */

		ptl = pte_lockptr(mm, pmd);
	}

	spin_lock(ptl);
	pte = *ptep;
	if (!is_swap_pte(pte))
		goto unlock;

	entry = pte_to_swp_entry(pte);

	if (!is_migration_entry(entry) ||
	    migration_entry_to_page(entry) != old)
		goto unlock;

	get_page(new);
	pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
	if (pte_swp_soft_dirty(*ptep))
		pte = pte_mksoft_dirty(pte);

	/* Recheck VMA as permissions can change since migration started */
	if (is_write_migration_entry(entry))
		pte = maybe_mkwrite(pte, vma);

#ifdef CONFIG_HUGETLB_PAGE
	if (PageHuge(new)) {
		pte = pte_mkhuge(pte);
		pte = arch_make_huge_pte(pte, vma, new, 0);
	}
#endif
	flush_dcache_page(new);
	set_pte_at(mm, addr, ptep, pte);

	if (PageHuge(new)) {
		if (PageAnon(new))
			hugepage_add_anon_rmap(new, vma, addr);
		else
			page_dup_rmap(new);
	} else if (PageAnon(new))
		page_add_anon_rmap(new, vma, addr);
	else
		page_add_file_rmap(new);

	if (vma->vm_flags & VM_LOCKED)
		mlock_vma_page(new);

	/* No need to invalidate - it was non-present before */
	update_mmu_cache(vma, addr, ptep);
unlock:
	pte_unmap_unlock(ptep, ptl);
out:
	return SWAP_AGAIN;
}
/*
 * Get rid of all migration entries and replace them by
 * references to the indicated page.
 */
static void remove_migration_ptes(struct page *old, struct page *new)
{
	struct rmap_walk_control rwc = {
		.rmap_one = remove_migration_pte,
		.arg = old,
	};

	rmap_walk(new, &rwc);
}
/*
 * Something used the pte of a page under migration. We need to
 * get to the page and wait until migration is finished.
 * When we return from this function the fault will be retried.
 */
void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
				spinlock_t *ptl)
{
	pte_t pte;
	swp_entry_t entry;
	struct page *page;

	spin_lock(ptl);
	pte = *ptep;
	if (!is_swap_pte(pte))
		goto out;

	entry = pte_to_swp_entry(pte);
	if (!is_migration_entry(entry))
		goto out;

	page = migration_entry_to_page(entry);

	/*
	 * Once radix-tree replacement of page migration started, page_count
	 * *must* be zero. And, we don't want to call wait_on_page_locked()
	 * against a page without get_page().
	 * So, we use get_page_unless_zero() here. Even if it fails, the
	 * page fault will occur again.
	 */
	if (!get_page_unless_zero(page))
		goto out;
	pte_unmap_unlock(ptep, ptl);
	wait_on_page_locked(page);
	put_page(page);
	return;
out:
	pte_unmap_unlock(ptep, ptl);
}
void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
				unsigned long address)
{
	spinlock_t *ptl = pte_lockptr(mm, pmd);
	pte_t *ptep = pte_offset_map(pmd, address);
	__migration_entry_wait(mm, ptep, ptl);
}

void migration_entry_wait_huge(struct vm_area_struct *vma,
		struct mm_struct *mm, pte_t *pte)
{
	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), mm, pte);
	__migration_entry_wait(mm, pte, ptl);
}
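/*
 * Illustrative sketch, not part of upstream migrate.c (under "#if 0"): how
 * the fault path uses migration_entry_wait().  This mirrors the check done
 * in mm/memory.c:do_swap_page(); example_wait_if_migrating() is a
 * hypothetical name.
 */
#if 0
static void example_wait_if_migrating(struct mm_struct *mm, pmd_t *pmd,
				      unsigned long address, pte_t orig_pte)
{
	swp_entry_t entry = pte_to_swp_entry(orig_pte);

	/*
	 * A migration entry looks like a swap pte but is not a real swap
	 * slot.  Block until migration completes; the fault is then retried
	 * and finds the new page installed by remove_migration_pte().
	 */
	if (non_swap_entry(entry) && is_migration_entry(entry))
		migration_entry_wait(mm, pmd, address);
}
#endif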
#ifdef CONFIG_BLOCK
/* Returns true if all buffers are successfully locked */
static bool buffer_migrate_lock_buffers(struct buffer_head *head,
							enum migrate_mode mode)
{
	struct buffer_head *bh = head;

	/* Simple case, sync compaction */
	if (mode != MIGRATE_ASYNC) {
		do {
			get_bh(bh);
			lock_buffer(bh);
			bh = bh->b_this_page;

		} while (bh != head);

		return true;
	}

	/* async case, we cannot block on lock_buffer so use trylock_buffer */
	do {
		get_bh(bh);
		if (!trylock_buffer(bh)) {
			/*
			 * We failed to lock the buffer and cannot stall in
			 * async migration. Release the taken locks
			 */
			struct buffer_head *failed_bh = bh;
			put_bh(failed_bh);
			bh = head;
			while (bh != failed_bh) {
				unlock_buffer(bh);
				put_bh(bh);
				bh = bh->b_this_page;
			}
			return false;
		}

		bh = bh->b_this_page;
	} while (bh != head);
	return true;
}
#else
static inline bool buffer_migrate_lock_buffers(struct buffer_head *head,
							enum migrate_mode mode)
{
	return true;
}
#endif /* CONFIG_BLOCK */
/*
 * Replace the page in the mapping.
 *
 * The number of remaining references must be:
 * 1 for anonymous pages without a mapping
 * 2 for pages with a mapping
 * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
 */
int migrate_page_move_mapping(struct address_space *mapping,
		struct page *newpage, struct page *page,
		struct buffer_head *head, enum migrate_mode mode,
		int extra_count)
{
	struct zone *oldzone, *newzone;
	int dirty;
	int expected_count = 1 + extra_count;
	void **pslot;

	if (!mapping) {
		/* Anonymous page without mapping */
		if (page_count(page) != expected_count)
			return -EAGAIN;

		/* No turning back from here */
		set_page_memcg(newpage, page_memcg(page));
		newpage->index = page->index;
		newpage->mapping = page->mapping;
		if (PageSwapBacked(page))
			SetPageSwapBacked(newpage);

		return MIGRATEPAGE_SUCCESS;
	}

	oldzone = page_zone(page);
	newzone = page_zone(newpage);

	spin_lock_irq(&mapping->tree_lock);

	pslot = radix_tree_lookup_slot(&mapping->page_tree,
					page_index(page));

	expected_count += 1 + page_has_private(page);
	if (page_count(page) != expected_count ||
		radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
		spin_unlock_irq(&mapping->tree_lock);
		return -EAGAIN;
	}

	if (!page_freeze_refs(page, expected_count)) {
		spin_unlock_irq(&mapping->tree_lock);
		return -EAGAIN;
	}

	/*
	 * In the async migration case of moving a page with buffers, lock the
	 * buffers using trylock before the mapping is moved. If the mapping
	 * were moved and we then failed to lock the buffers, we could not
	 * move the mapping back due to an elevated page count, and would have
	 * to block waiting on other references to be dropped.
	 */
	if (mode == MIGRATE_ASYNC && head &&
			!buffer_migrate_lock_buffers(head, mode)) {
		page_unfreeze_refs(page, expected_count);
		spin_unlock_irq(&mapping->tree_lock);
		return -EAGAIN;
	}

	/*
	 * Now we know that no one else is looking at the page:
	 * no turning back from here.
	 */
	set_page_memcg(newpage, page_memcg(page));
	newpage->index = page->index;
	newpage->mapping = page->mapping;
	if (PageSwapBacked(page))
		SetPageSwapBacked(newpage);

	get_page(newpage);	/* add cache reference */
	if (PageSwapCache(page)) {
		SetPageSwapCache(newpage);
		set_page_private(newpage, page_private(page));
	}

	/* Move dirty while page refs frozen and newpage not yet exposed */
	dirty = PageDirty(page);
	if (dirty) {
		ClearPageDirty(page);
		SetPageDirty(newpage);
	}

	radix_tree_replace_slot(pslot, newpage);

	/*
	 * Drop cache reference from old page by unfreezing
	 * to one less reference.
	 * We know this isn't the last reference.
	 */
	page_unfreeze_refs(page, expected_count - 1);

	spin_unlock(&mapping->tree_lock);
	/* Leave irq disabled to prevent preemption while updating stats */

	/*
	 * If moved to a different zone then also account
	 * the page for that zone. Other VM counters will be
	 * taken care of when we establish references to the
	 * new page and drop references to the old page.
	 *
	 * Note that anonymous pages are accounted for
	 * via NR_FILE_PAGES and NR_ANON_PAGES if they
	 * are mapped to swap space.
	 */
	if (newzone != oldzone) {
		__dec_zone_state(oldzone, NR_FILE_PAGES);
		__inc_zone_state(newzone, NR_FILE_PAGES);
		if (PageSwapBacked(page) && !PageSwapCache(page)) {
			__dec_zone_state(oldzone, NR_SHMEM);
			__inc_zone_state(newzone, NR_SHMEM);
		}
		if (dirty && mapping_cap_account_dirty(mapping)) {
			__dec_zone_state(oldzone, NR_FILE_DIRTY);
			__inc_zone_state(newzone, NR_FILE_DIRTY);
		}
	}
	local_irq_enable();

	return MIGRATEPAGE_SUCCESS;
}
EXPORT_SYMBOL(migrate_page_move_mapping);
/*
 * The expected number of remaining references is the same as that
 * of migrate_page_move_mapping().
 */
int migrate_huge_page_move_mapping(struct address_space *mapping,
				   struct page *newpage, struct page *page)
{
	int expected_count;
	void **pslot;

	spin_lock_irq(&mapping->tree_lock);

	pslot = radix_tree_lookup_slot(&mapping->page_tree,
					page_index(page));

	expected_count = 2 + page_has_private(page);
	if (page_count(page) != expected_count ||
		radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
		spin_unlock_irq(&mapping->tree_lock);
		return -EAGAIN;
	}

	if (!page_freeze_refs(page, expected_count)) {
		spin_unlock_irq(&mapping->tree_lock);
		return -EAGAIN;
	}

	set_page_memcg(newpage, page_memcg(page));
	newpage->index = page->index;
	newpage->mapping = page->mapping;
	get_page(newpage);

	radix_tree_replace_slot(pslot, newpage);

	page_unfreeze_refs(page, expected_count - 1);

	spin_unlock_irq(&mapping->tree_lock);
	return MIGRATEPAGE_SUCCESS;
}
/*
 * Gigantic pages are so large that we do not guarantee that page++ pointer
 * arithmetic will work across the entire page. We need something more
 * specialized.
 */
static void __copy_gigantic_page(struct page *dst, struct page *src,
				int nr_pages)
{
	int i;
	struct page *dst_base = dst;
	struct page *src_base = src;

	for (i = 0; i < nr_pages; ) {
		cond_resched();
		copy_highpage(dst, src);

		i++;
		dst = mem_map_next(dst, dst_base, i);
		src = mem_map_next(src, src_base, i);
	}
}

static void copy_huge_page(struct page *dst, struct page *src)
{
	int i;
	int nr_pages;

	if (PageHuge(src)) {
		/* hugetlbfs page */
		struct hstate *h = page_hstate(src);
		nr_pages = pages_per_huge_page(h);

		if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) {
			__copy_gigantic_page(dst, src, nr_pages);
			return;
		}
	} else {
		/* thp page */
		BUG_ON(!PageTransHuge(src));
		nr_pages = hpage_nr_pages(src);
	}

	for (i = 0; i < nr_pages; i++) {
		cond_resched();
		copy_highpage(dst + i, src + i);
	}
}
/*
 * Copy the page to its new location
 */
void migrate_page_copy(struct page *newpage, struct page *page)
{
	int cpupid;

	if (PageHuge(page) || PageTransHuge(page))
		copy_huge_page(newpage, page);
	else
		copy_highpage(newpage, page);

	if (PageError(page))
		SetPageError(newpage);
	if (PageReferenced(page))
		SetPageReferenced(newpage);
	if (PageUptodate(page))
		SetPageUptodate(newpage);
	if (TestClearPageActive(page)) {
		VM_BUG_ON_PAGE(PageUnevictable(page), page);
		SetPageActive(newpage);
	} else if (TestClearPageUnevictable(page))
		SetPageUnevictable(newpage);
	if (PageChecked(page))
		SetPageChecked(newpage);
	if (PageMappedToDisk(page))
		SetPageMappedToDisk(newpage);

	/* Move dirty on pages not done by migrate_page_move_mapping() */
	if (PageDirty(page))
		SetPageDirty(newpage);

	if (page_is_young(page))
		set_page_young(newpage);
	if (page_is_idle(page))
		set_page_idle(newpage);

	/*
	 * Copy NUMA information to the new page, to prevent over-eager
	 * future migrations of this same page.
	 */
	cpupid = page_cpupid_xchg_last(page, -1);
	page_cpupid_xchg_last(newpage, cpupid);

	ksm_migrate_page(newpage, page);
	/*
	 * Please do not reorder this without considering how mm/ksm.c's
	 * get_ksm_page() depends upon ksm_migrate_page() and PageSwapCache().
	 */
	if (PageSwapCache(page))
		ClearPageSwapCache(page);
	ClearPagePrivate(page);
	set_page_private(page, 0);

	/*
	 * If any waiters have accumulated on the new page then
	 * wake them up.
	 */
	if (PageWriteback(newpage))
		end_page_writeback(newpage);
}
EXPORT_SYMBOL(migrate_page_copy);
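/*
 * Illustrative sketch, not part of upstream migrate.c (under "#if 0"): the
 * two EXPORT_SYMBOLs above let a filesystem build its own ->migratepage()
 * when it keeps private state attached to the page.  This is modeled on the
 * UBIFS implementation referenced in the merge log above ("UBIFS: Implement
 * ->migratepage()"); example_migrate_page() is a hypothetical name.
 */
#if 0
static int example_migrate_page(struct address_space *mapping,
		struct page *newpage, struct page *page, enum migrate_mode mode)
{
	int rc;

	rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	/* Carry the filesystem's private flag over to the new page. */
	if (PagePrivate(page)) {
		ClearPagePrivate(page);
		SetPagePrivate(newpage);
	}

	migrate_page_copy(newpage, page);
	return MIGRATEPAGE_SUCCESS;
}
#endif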
/************************************************************
 *                    Migration functions
 ***********************************************************/

/*
 * Common logic to directly migrate a single page suitable for
 * pages that do not use PagePrivate/PagePrivate2.
 *
 * Pages are locked upon entry and exit.
 */
int migrate_page(struct address_space *mapping,
		struct page *newpage, struct page *page,
		enum migrate_mode mode)
{
	int rc;

	BUG_ON(PageWriteback(page));	/* Writeback must be complete */

	rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0);

	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	migrate_page_copy(newpage, page);
	return MIGRATEPAGE_SUCCESS;
}
EXPORT_SYMBOL(migrate_page);
#ifdef CONFIG_BLOCK
/*
 * Migration function for pages with buffers. This function can only be used
 * if the underlying filesystem guarantees that no other references to "page"
 * exist.
 */
int buffer_migrate_page(struct address_space *mapping,
		struct page *newpage, struct page *page, enum migrate_mode mode)
{
	struct buffer_head *bh, *head;
	int rc;

	if (!page_has_buffers(page))
		return migrate_page(mapping, newpage, page, mode);

	head = page_buffers(page);

	rc = migrate_page_move_mapping(mapping, newpage, page, head, mode, 0);

	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	/*
	 * In the async case, migrate_page_move_mapping locked the buffers
	 * with an IRQ-safe spinlock held. In the sync case, the buffers
	 * need to be locked now
	 */
	if (mode != MIGRATE_ASYNC)
		BUG_ON(!buffer_migrate_lock_buffers(head, mode));

	ClearPagePrivate(page);
	set_page_private(newpage, page_private(page));
	set_page_private(page, 0);
	put_page(page);
	get_page(newpage);

	bh = head;
	do {
		set_bh_page(bh, newpage, bh_offset(bh));
		bh = bh->b_this_page;

	} while (bh != head);

	SetPagePrivate(newpage);

	migrate_page_copy(newpage, page);

	bh = head;
	do {
		unlock_buffer(bh);
		put_bh(bh);
		bh = bh->b_this_page;

	} while (bh != head);

	return MIGRATEPAGE_SUCCESS;
}
EXPORT_SYMBOL(buffer_migrate_page);
#endif
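/*
 * Illustrative sketch, not part of upstream migrate.c (under "#if 0"):
 * filesystems opt into migration by wiring one of the helpers above into
 * their address_space_operations, typically migrate_page() for mappings
 * without private data and buffer_migrate_page() for block-backed mappings.
 * example_aops is a hypothetical name.
 */
#if 0
static const struct address_space_operations example_aops = {
	/* block-backed mappings commonly use buffer_migrate_page() here */
	.migratepage	= buffer_migrate_page,
};
#endif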
/*
 * Writeback a page to clean the dirty state
 */
static int writeout(struct address_space *mapping, struct page *page)
{
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_NONE,
		.nr_to_write = 1,
		.range_start = 0,
		.range_end = LLONG_MAX,
		.for_reclaim = 1
	};
	int rc;

	if (!mapping->a_ops->writepage)
		/* No write method for the address space */
		return -EINVAL;

	if (!clear_page_dirty_for_io(page))
		/* Someone else already triggered a write */
		return -EAGAIN;

	/*
	 * A dirty page may imply that the underlying filesystem has
	 * the page on some queue. So the page must be clean for
	 * migration. Writeout may mean we lose the lock and the
	 * page state is no longer what we checked for earlier.
	 * At this point we know that the migration attempt cannot
	 * be successful.
	 */
	remove_migration_ptes(page, page);

	rc = mapping->a_ops->writepage(page, &wbc);

	if (rc != AOP_WRITEPAGE_ACTIVATE)
		/* unlocked. Relock */
		lock_page(page);

	return (rc < 0) ? -EIO : -EAGAIN;
}
/*
 * Default handling if a filesystem does not provide a migration function.
 */
static int fallback_migrate_page(struct address_space *mapping,
	struct page *newpage, struct page *page, enum migrate_mode mode)
{
	if (PageDirty(page)) {
		/* Only writeback pages in full synchronous migration */
		if (mode != MIGRATE_SYNC)
			return -EBUSY;
		return writeout(mapping, page);
	}

	/*
	 * Buffers may be managed in a filesystem specific way.
	 * We must have no buffers or drop them.
	 */
	if (page_has_private(page) &&
	    !try_to_release_page(page, GFP_KERNEL))
		return -EAGAIN;

	return migrate_page(mapping, newpage, page, mode);
}
/*
|
|
* Move a page to a newly allocated page
|
|
* The page is locked and all ptes have been successfully removed.
|
|
*
|
|
* The new page will have replaced the old page if this function
|
|
* is successful.
|
|
*
|
|
* Return value:
|
|
* < 0 - error code
|
|
* MIGRATEPAGE_SUCCESS - success
|
|
*/
|
|
static int move_to_new_page(struct page *newpage, struct page *page,
|
|
enum migrate_mode mode)
|
|
{
|
|
struct address_space *mapping;
|
|
int rc;
|
|
|
|
VM_BUG_ON_PAGE(!PageLocked(page), page);
|
|
VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
|
|
|
|
mapping = page_mapping(page);
|
|
if (!mapping)
|
|
rc = migrate_page(mapping, newpage, page, mode);
|
|
else if (mapping->a_ops->migratepage)
|
|
/*
|
|
* Most pages have a mapping and most filesystems provide a
|
|
* migratepage callback. Anonymous pages are part of swap
|
|
* space which also has its own migratepage callback. This
|
|
* is the most common path for page migration.
|
|
*/
|
|
rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
|
|
else
|
|
rc = fallback_migrate_page(mapping, newpage, page, mode);
|
|
|
|
/*
|
|
* When successful, old pagecache page->mapping must be cleared before
|
|
* page is freed; but stats require that PageAnon be left as PageAnon.
|
|
*/
|
|
if (rc == MIGRATEPAGE_SUCCESS) {
|
|
set_page_memcg(page, NULL);
|
|
if (!PageAnon(page))
|
|
page->mapping = NULL;
|
|
}
|
|
return rc;
|
|
}
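
/*
 * Sketch (compiled out) of the simplest ->migratepage implementation
 * the dispatch above can call: a filesystem whose pages carry no
 * private state just forwards to migrate_page(), as several in-tree
 * filesystems do. The examplefs name is hypothetical.
 */
#if 0
static int examplefs_migratepage(struct address_space *mapping,
		struct page *newpage, struct page *page,
		enum migrate_mode mode)
{
	return migrate_page(mapping, newpage, page, mode);
}
#endif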

static int __unmap_and_move(struct page *page, struct page *newpage,
				int force, enum migrate_mode mode)
{
	int rc = -EAGAIN;
	int page_was_mapped = 0;
	struct anon_vma *anon_vma = NULL;

	if (!trylock_page(page)) {
		if (!force || mode == MIGRATE_ASYNC)
			goto out;

		/*
		 * It's not safe for direct compaction to call lock_page.
		 * For example, during page readahead pages are added locked
		 * to the LRU. Later, when the IO completes the pages are
		 * marked uptodate and unlocked. However, the queueing
		 * could be merging multiple pages for one bio (e.g.
		 * mpage_readpages). If an allocation happens for the
		 * second or third page, the process can end up locking
		 * the same page twice and deadlocking. Rather than
		 * trying to be clever about what pages can be locked,
		 * avoid the use of lock_page for direct compaction
		 * altogether.
		 */
		if (current->flags & PF_MEMALLOC)
			goto out;

		lock_page(page);
	}

	if (PageWriteback(page)) {
		/*
		 * Only in the case of a full synchronous migration is it
		 * necessary to wait for PageWriteback. In the async case,
		 * the retry loop is too short and in the sync-light case,
		 * the overhead of stalling is too much.
		 */
		if (mode != MIGRATE_SYNC) {
			rc = -EBUSY;
			goto out_unlock;
		}
		if (!force)
			goto out_unlock;
		wait_on_page_writeback(page);
	}

	/*
	 * By try_to_unmap(), page->mapcount goes down to 0 here. In this case,
	 * we cannot notice that anon_vma is freed while we migrate a page.
	 * This get_anon_vma() delays freeing the anon_vma pointer until the
	 * end of migration. File cache pages are no problem because of
	 * page_lock(): file caches may use writepage() or lock_page() in
	 * migration, so only anon pages need care here.
	 *
	 * Only page_get_anon_vma() understands the subtleties of
	 * getting a hold on an anon_vma from outside one of its mms.
	 * But if we cannot get anon_vma, then we won't need it anyway,
	 * because that implies that the anon page is no longer mapped
	 * (and cannot be remapped so long as we hold the page lock).
	 */
	if (PageAnon(page) && !PageKsm(page))
		anon_vma = page_get_anon_vma(page);

	/*
	 * Block others from accessing the new page when we get around to
	 * establishing additional references. We are usually the only one
	 * holding a reference to newpage at this point. We used to have a BUG
	 * here if trylock_page(newpage) fails, but would like to allow for
	 * cases where there might be a race with the previous use of newpage.
	 * This is much like races on refcount of oldpage: just don't BUG().
	 */
	if (unlikely(!trylock_page(newpage)))
		goto out_unlock;

	if (unlikely(isolated_balloon_page(page))) {
		/*
		 * A ballooned page does not need any special attention from
		 * physical to virtual reverse mapping procedures.
		 * Skip any attempt to unmap PTEs or to remap swap cache,
		 * in order to avoid burning cycles at rmap level, and perform
		 * the page migration right away (protected by page lock).
		 */
		rc = balloon_page_migrate(newpage, page, mode);
		goto out_unlock_both;
	}

	/*
	 * Corner case handling:
	 * 1. When a new swap-cache page is read in, it is added to the LRU
	 * and treated as swapcache but it has no rmap yet.
	 * Calling try_to_unmap() against a page->mapping==NULL page will
	 * trigger a BUG. So handle it here.
	 * 2. An orphaned page (see truncate_complete_page) might have
	 * fs-private metadata. The page can be picked up due to memory
	 * offlining. Everywhere else except page reclaim, the page is
	 * invisible to the vm, so the page can not be migrated. So try to
	 * free the metadata, so the page can be freed.
	 */
	if (!page->mapping) {
		VM_BUG_ON_PAGE(PageAnon(page), page);
		if (page_has_private(page)) {
			try_to_free_buffers(page);
			goto out_unlock_both;
		}
	} else if (page_mapped(page)) {
		/* Establish migration ptes */
		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
				page);
		try_to_unmap(page,
			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS, NULL);
		page_was_mapped = 1;
	}

	if (!page_mapped(page))
		rc = move_to_new_page(newpage, page, mode);

	if (page_was_mapped)
		remove_migration_ptes(page,
			rc == MIGRATEPAGE_SUCCESS ? newpage : page);

out_unlock_both:
	unlock_page(newpage);
out_unlock:
	/* Drop an anon_vma reference if we took one */
	if (anon_vma)
		put_anon_vma(anon_vma);
	unlock_page(page);
out:
	return rc;
}
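
/*
 * The trylock dance at the top of __unmap_and_move() boils down to one
 * rule, captured here as an illustrative helper (hypothetical example_
 * name, not kernel code): only a forced, non-async caller that is not
 * itself in reclaim (PF_MEMALLOC) may sleep on the page lock.
 */
static inline bool example_may_block_on_page_lock(int force,
		enum migrate_mode mode)
{
	return force && mode != MIGRATE_ASYNC &&
	       !(current->flags & PF_MEMALLOC);
}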

/*
 * gcc 4.7 and 4.8 on arm get an ICE (internal compiler error) when
 * inlining unmap_and_move(). Work around it.
 */
#if (GCC_VERSION >= 40700 && GCC_VERSION < 40900) && defined(CONFIG_ARM)
#define ICE_noinline noinline
#else
#define ICE_noinline
#endif

/*
 * Obtain the lock on page, remove all ptes and migrate the page
 * to the newly allocated page in newpage.
 */
static ICE_noinline int unmap_and_move(new_page_t get_new_page,
				   free_page_t put_new_page,
				   unsigned long private, struct page *page,
				   int force, enum migrate_mode mode,
				   enum migrate_reason reason)
{
	int rc = MIGRATEPAGE_SUCCESS;
	int *result = NULL;
	struct page *newpage;

	newpage = get_new_page(page, private, &result);
	if (!newpage)
		return -ENOMEM;

	if (page_count(page) == 1) {
		/* page was freed from under us. So we are done. */
		goto out;
	}

	if (unlikely(PageTransHuge(page)))
		if (unlikely(split_huge_page(page)))
			goto out;

	rc = __unmap_and_move(page, newpage, force, mode);
	if (rc == MIGRATEPAGE_SUCCESS)
		put_new_page = NULL;

out:
	if (rc != -EAGAIN) {
		/*
		 * A page that has been migrated has all references
		 * removed and will be freed. A page that has not been
		 * migrated will have kept its references and be
		 * restored.
		 */
		list_del(&page->lru);
		dec_zone_page_state(page, NR_ISOLATED_ANON +
				page_is_file_cache(page));
		/* Soft-offlined page shouldn't go through lru cache list */
		if (reason == MR_MEMORY_FAILURE && rc == MIGRATEPAGE_SUCCESS) {
			/*
			 * With this release, we free successfully migrated
			 * page and set PG_HWPoison on just freed page
			 * intentionally. Although it's rather weird, it's how
			 * HWPoison flag works at the moment.
			 */
			put_page(page);
			if (!test_set_page_hwpoison(page))
				num_poisoned_pages_inc();
		} else
			putback_lru_page(page);
	}

	/*
	 * If migration was not successful and there's a freeing callback, use
	 * it. Otherwise, putback_lru_page() will drop the reference grabbed
	 * during isolation.
	 */
	if (put_new_page)
		put_new_page(newpage, private);
	else if (unlikely(__is_movable_balloon_page(newpage))) {
		/* drop our reference, page already in the balloon */
		put_page(newpage);
	} else
		putback_lru_page(newpage);

	if (result) {
		if (rc)
			*result = rc;
		else
			*result = page_to_nid(newpage);
	}
	return rc;
}

/*
 * Counterpart of unmap_and_move() for hugepage migration.
 *
 * This function doesn't wait for the completion of hugepage I/O
 * because there is no race between I/O and migration for hugepage.
 * Note that currently hugepage I/O occurs only in direct I/O
 * where no lock is held and PG_writeback is irrelevant,
 * and the writeback status of all subpages is counted in the reference
 * count of the head page (i.e. if all subpages of a 2MB hugepage are
 * under direct I/O, the reference of the head page is 512 and a bit more.)
 * This means that when we try to migrate a hugepage whose subpages are
 * doing direct I/O, some references remain after try_to_unmap() and
 * hugepage migration fails without data corruption.
 *
 * There is also no race when direct I/O is issued on the page under migration,
 * because then pte is replaced with migration swap entry and direct I/O code
 * will wait in the page fault for migration to complete.
 */
static int unmap_and_move_huge_page(new_page_t get_new_page,
				free_page_t put_new_page, unsigned long private,
				struct page *hpage, int force,
				enum migrate_mode mode)
{
	int rc = -EAGAIN;
	int *result = NULL;
	int page_was_mapped = 0;
	struct page *new_hpage;
	struct anon_vma *anon_vma = NULL;

	/*
	 * Movability of hugepages depends on architectures and hugepage size.
	 * This check is necessary because some callers of hugepage migration
	 * like soft offline and memory hotremove don't walk through page
	 * tables or check whether the hugepage is pmd-based or not before
	 * kicking migration.
	 */
	if (!hugepage_migration_supported(page_hstate(hpage))) {
		putback_active_hugepage(hpage);
		return -ENOSYS;
	}

	new_hpage = get_new_page(hpage, private, &result);
	if (!new_hpage)
		return -ENOMEM;

	if (!trylock_page(hpage)) {
		if (!force || mode != MIGRATE_SYNC)
			goto out;
		lock_page(hpage);
	}

	if (PageAnon(hpage))
		anon_vma = page_get_anon_vma(hpage);

	if (unlikely(!trylock_page(new_hpage)))
		goto put_anon;

	if (page_mapped(hpage)) {
		try_to_unmap(hpage,
			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS, NULL);
		page_was_mapped = 1;
	}

	if (!page_mapped(hpage))
		rc = move_to_new_page(new_hpage, hpage, mode);

	if (page_was_mapped)
		remove_migration_ptes(hpage,
			rc == MIGRATEPAGE_SUCCESS ? new_hpage : hpage);

	unlock_page(new_hpage);

put_anon:
	if (anon_vma)
		put_anon_vma(anon_vma);

	if (rc == MIGRATEPAGE_SUCCESS) {
		hugetlb_cgroup_migrate(hpage, new_hpage);
		put_new_page = NULL;
	}

	unlock_page(hpage);
out:
	if (rc != -EAGAIN)
		putback_active_hugepage(hpage);

	/*
	 * If migration was not successful and there's a freeing callback, use
	 * it. Otherwise, put_page() will drop the reference grabbed during
	 * isolation.
	 */
	if (put_new_page)
		put_new_page(new_hpage, private);
	else
		putback_active_hugepage(new_hpage);

	if (result) {
		if (rc)
			*result = rc;
		else
			*result = page_to_nid(new_hpage);
	}
	return rc;
}
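
/*
 * Worked numbers for the reference-count comment above (assuming a
 * typical x86_64 config with 4KiB base pages): a 2MiB hugepage spans
 * 2MiB / 4KiB = 512 subpages, so direct I/O in flight on every subpage
 * pins at least 512 references on the head page. try_to_unmap()
 * therefore cannot bring the count down far enough, and the migration
 * attempt fails cleanly rather than corrupting data.
 */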

/*
 * migrate_pages - migrate the pages specified in a list, to the free pages
 *		   supplied as the target for the page migration
 *
 * @from:		The list of pages to be migrated.
 * @get_new_page:	The function used to allocate free pages to be used
 *			as the target of the page migration.
 * @put_new_page:	The function used to free target pages if migration
 *			fails, or NULL if no special handling is necessary.
 * @private:		Private data to be passed on to get_new_page()
 * @mode:		The migration mode that specifies the constraints for
 *			page migration, if any.
 * @reason:		The reason for page migration.
 *
 * The function returns after 10 attempts or if no pages are movable any more
 * because the list has become empty or no retryable pages exist any more.
 * The caller should call putback_movable_pages() to return pages to the LRU
 * or free list only if ret != 0.
 *
 * Returns the number of pages that were not migrated, or an error code.
 */
int migrate_pages(struct list_head *from, new_page_t get_new_page,
		free_page_t put_new_page, unsigned long private,
		enum migrate_mode mode, int reason)
{
	int retry = 1;
	int nr_failed = 0;
	int nr_succeeded = 0;
	int pass = 0;
	struct page *page;
	struct page *page2;
	int swapwrite = current->flags & PF_SWAPWRITE;
	int rc;

	trace_mm_migrate_pages_start(mode, reason);

	if (!swapwrite)
		current->flags |= PF_SWAPWRITE;

	for (pass = 0; pass < 10 && retry; pass++) {
		retry = 0;

		list_for_each_entry_safe(page, page2, from, lru) {
			cond_resched();

			if (PageHuge(page))
				rc = unmap_and_move_huge_page(get_new_page,
						put_new_page, private, page,
						pass > 2, mode);
			else
				rc = unmap_and_move(get_new_page, put_new_page,
						private, page, pass > 2, mode,
						reason);

			switch (rc) {
			case -ENOMEM:
				goto out;
			case -EAGAIN:
				retry++;
				break;
			case MIGRATEPAGE_SUCCESS:
				nr_succeeded++;
				break;
			default:
				/*
				 * Permanent failure (-EBUSY, -ENOSYS, etc.):
				 * unlike -EAGAIN case, the failed page is
				 * removed from migration page list and not
				 * retried in the next outer loop.
				 */
				nr_failed++;
				break;
			}
		}
	}
	nr_failed += retry;
	rc = nr_failed;
out:
	if (nr_succeeded)
		count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
	if (nr_failed)
		count_vm_events(PGMIGRATE_FAIL, nr_failed);
	trace_mm_migrate_pages(nr_succeeded, nr_failed, mode, reason);

	if (!swapwrite)
		current->flags &= ~PF_SWAPWRITE;

	return rc;
}
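
/*
 * Sketch of a migrate_pages() caller (compiled out), mirroring the
 * pattern used by do_move_page_to_node_array() below: allocate targets
 * on a fixed node, and put back whatever could not be moved. The
 * example_ names are hypothetical, not kernel code.
 */
#if 0
static struct page *example_alloc_target(struct page *page,
		unsigned long private, int **result)
{
	return __alloc_pages_node((int)private,
			GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0);
}

static int example_migrate_list(struct list_head *pages, int nid)
{
	int err = migrate_pages(pages, example_alloc_target, NULL,
				(unsigned long)nid, MIGRATE_SYNC, MR_SYSCALL);
	if (err)
		putback_movable_pages(pages);
	return err;
}
#endif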

#ifdef CONFIG_NUMA
/*
 * Move a list of individual pages
 */
struct page_to_node {
	unsigned long addr;
	struct page *page;
	int node;
	int status;
};

static struct page *new_page_node(struct page *p, unsigned long private,
		int **result)
{
	struct page_to_node *pm = (struct page_to_node *)private;

	while (pm->node != MAX_NUMNODES && pm->page != p)
		pm++;

	if (pm->node == MAX_NUMNODES)
		return NULL;

	*result = &pm->status;

	if (PageHuge(p))
		return alloc_huge_page_node(page_hstate(compound_head(p)),
					pm->node);
	else
		return __alloc_pages_node(pm->node,
				GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0);
}

/*
 * Move a set of pages as indicated in the pm array. The addr
 * field must be set to the virtual address of the page to be moved
 * and the node number must contain a valid target node.
 * The pm array ends with node = MAX_NUMNODES.
 */
static int do_move_page_to_node_array(struct mm_struct *mm,
				      struct page_to_node *pm,
				      int migrate_all)
{
	int err;
	struct page_to_node *pp;
	LIST_HEAD(pagelist);

	down_read(&mm->mmap_sem);

	/*
	 * Build a list of pages to migrate
	 */
	for (pp = pm; pp->node != MAX_NUMNODES; pp++) {
		struct vm_area_struct *vma;
		struct page *page;

		err = -EFAULT;
		vma = find_vma(mm, pp->addr);
		if (!vma || pp->addr < vma->vm_start || !vma_migratable(vma))
			goto set_status;

		/* FOLL_DUMP to ignore special (like zero) pages */
		page = follow_page(vma, pp->addr,
				FOLL_GET | FOLL_SPLIT | FOLL_DUMP);

		err = PTR_ERR(page);
		if (IS_ERR(page))
			goto set_status;

		err = -ENOENT;
		if (!page)
			goto set_status;

		pp->page = page;
		err = page_to_nid(page);

		if (err == pp->node)
			/*
			 * Node already in the right place
			 */
			goto put_and_set;

		err = -EACCES;
		if (page_mapcount(page) > 1 &&
				!migrate_all)
			goto put_and_set;

		if (PageHuge(page)) {
			if (PageHead(page))
				isolate_huge_page(page, &pagelist);
			goto put_and_set;
		}

		err = isolate_lru_page(page);
		if (!err) {
			list_add_tail(&page->lru, &pagelist);
			inc_zone_page_state(page, NR_ISOLATED_ANON +
					    page_is_file_cache(page));
		}
put_and_set:
		/*
		 * Either remove the duplicate refcount from
		 * isolate_lru_page() or drop the page ref if it was
		 * not isolated.
		 */
		put_page(page);
set_status:
		pp->status = err;
	}

	err = 0;
	if (!list_empty(&pagelist)) {
		err = migrate_pages(&pagelist, new_page_node, NULL,
				(unsigned long)pm, MIGRATE_SYNC, MR_SYSCALL);
		if (err)
			putback_movable_pages(&pagelist);
	}

	up_read(&mm->mmap_sem);
	return err;
}

/*
 * Migrate an array of page addresses onto an array of nodes and fill
 * the corresponding array of status.
 */
static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
			 unsigned long nr_pages,
			 const void __user * __user *pages,
			 const int __user *nodes,
			 int __user *status, int flags)
{
	struct page_to_node *pm;
	unsigned long chunk_nr_pages;
	unsigned long chunk_start;
	int err;

	err = -ENOMEM;
	pm = (struct page_to_node *)__get_free_page(GFP_KERNEL);
	if (!pm)
		goto out;

	migrate_prep();

	/*
	 * Store a chunk of page_to_node array in a page,
	 * but keep the last one as a marker
	 */
	chunk_nr_pages = (PAGE_SIZE / sizeof(struct page_to_node)) - 1;

	for (chunk_start = 0;
	     chunk_start < nr_pages;
	     chunk_start += chunk_nr_pages) {
		int j;

		if (chunk_start + chunk_nr_pages > nr_pages)
			chunk_nr_pages = nr_pages - chunk_start;

		/* fill the chunk pm with addrs and nodes from user-space */
		for (j = 0; j < chunk_nr_pages; j++) {
			const void __user *p;
			int node;

			err = -EFAULT;
			if (get_user(p, pages + j + chunk_start))
				goto out_pm;
			pm[j].addr = (unsigned long) p;

			if (get_user(node, nodes + j + chunk_start))
				goto out_pm;

			err = -ENODEV;
			if (node < 0 || node >= MAX_NUMNODES)
				goto out_pm;

			if (!node_state(node, N_MEMORY))
				goto out_pm;

			err = -EACCES;
			if (!node_isset(node, task_nodes))
				goto out_pm;

			pm[j].node = node;
		}

		/* End marker for this chunk */
		pm[chunk_nr_pages].node = MAX_NUMNODES;

		/* Migrate this chunk */
		err = do_move_page_to_node_array(mm, pm,
						 flags & MPOL_MF_MOVE_ALL);
		if (err < 0)
			goto out_pm;

		/* Return status information */
		for (j = 0; j < chunk_nr_pages; j++)
			if (put_user(pm[j].status, status + j + chunk_start)) {
				err = -EFAULT;
				goto out_pm;
			}
	}
	err = 0;

out_pm:
	free_page((unsigned long)pm);
out:
	return err;
}
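
/*
 * Worked sizing for the chunking above (assuming a 64-bit build with
 * 4KiB pages; other configs differ): struct page_to_node is
 * 8 + 8 + 4 + 4 = 24 bytes, so one page holds 4096 / 24 = 170 entries,
 * and chunk_nr_pages is 169 with the last slot reserved for the
 * MAX_NUMNODES end marker.
 */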

/*
 * Determine the nodes of an array of pages and store them in an array
 * of status values.
 */
static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
				const void __user **pages, int *status)
{
	unsigned long i;

	down_read(&mm->mmap_sem);

	for (i = 0; i < nr_pages; i++) {
		unsigned long addr = (unsigned long)(*pages);
		struct vm_area_struct *vma;
		struct page *page;
		int err = -EFAULT;

		vma = find_vma(mm, addr);
		if (!vma || addr < vma->vm_start)
			goto set_status;

		/* FOLL_DUMP to ignore special (like zero) pages */
		page = follow_page(vma, addr, FOLL_DUMP);

		err = PTR_ERR(page);
		if (IS_ERR(page))
			goto set_status;

		err = page ? page_to_nid(page) : -ENOENT;
set_status:
		*status = err;

		pages++;
		status++;
	}

	up_read(&mm->mmap_sem);
}

/*
 * Determine the nodes of a user array of pages and store it in
 * a user array of status.
 */
static int do_pages_stat(struct mm_struct *mm, unsigned long nr_pages,
			 const void __user * __user *pages,
			 int __user *status)
{
#define DO_PAGES_STAT_CHUNK_NR 16
	const void __user *chunk_pages[DO_PAGES_STAT_CHUNK_NR];
	int chunk_status[DO_PAGES_STAT_CHUNK_NR];

	while (nr_pages) {
		unsigned long chunk_nr;

		chunk_nr = nr_pages;
		if (chunk_nr > DO_PAGES_STAT_CHUNK_NR)
			chunk_nr = DO_PAGES_STAT_CHUNK_NR;

		if (copy_from_user(chunk_pages, pages, chunk_nr * sizeof(*chunk_pages)))
			break;

		do_pages_stat_array(mm, chunk_nr, chunk_pages, chunk_status);

		if (copy_to_user(status, chunk_status, chunk_nr * sizeof(*status)))
			break;

		pages += chunk_nr;
		status += chunk_nr;
		nr_pages -= chunk_nr;
	}
	return nr_pages ? -EFAULT : 0;
}

/*
 * Move a list of pages in the address space of the currently executing
 * process.
 */
SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
		const void __user * __user *, pages,
		const int __user *, nodes,
		int __user *, status, int, flags)
{
	const struct cred *cred = current_cred(), *tcred;
	struct task_struct *task;
	struct mm_struct *mm;
	int err;
	nodemask_t task_nodes;

	/* Check flags */
	if (flags & ~(MPOL_MF_MOVE|MPOL_MF_MOVE_ALL))
		return -EINVAL;

	if ((flags & MPOL_MF_MOVE_ALL) && !capable(CAP_SYS_NICE))
		return -EPERM;

	/* Find the mm_struct */
	rcu_read_lock();
	task = pid ? find_task_by_vpid(pid) : current;
	if (!task) {
		rcu_read_unlock();
		return -ESRCH;
	}
	get_task_struct(task);

	/*
	 * Check if this process has the right to modify the specified
	 * process. The right exists if the process has administrative
	 * capabilities, superuser privileges or the same
	 * userid as the target process.
	 */
	tcred = __task_cred(task);
	if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) &&
	    !uid_eq(cred->uid,  tcred->suid) && !uid_eq(cred->uid,  tcred->uid) &&
	    !capable(CAP_SYS_NICE)) {
		rcu_read_unlock();
		err = -EPERM;
		goto out;
	}
	rcu_read_unlock();

	err = security_task_movememory(task);
	if (err)
		goto out;

	task_nodes = cpuset_mems_allowed(task);
	mm = get_task_mm(task);
	put_task_struct(task);

	if (!mm)
		return -EINVAL;

	if (nodes)
		err = do_pages_move(mm, task_nodes, nr_pages, pages,
				    nodes, status, flags);
	else
		err = do_pages_stat(mm, nr_pages, pages, status);

	mmput(mm);
	return err;

out:
	put_task_struct(task);
	return err;
}
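
/*
 * Userspace view of the syscall above (sketch, compiled out, using the
 * move_pages(2) wrapper from libnuma's <numaif.h>; the example_ name
 * is hypothetical). Passing nodes == NULL turns the call into a pure
 * status query, i.e. the do_pages_stat() path.
 */
#if 0
#include <numaif.h>

static int example_query_node(void *addr)
{
	void *pages[1] = { addr };
	int status[1];

	if (move_pages(0 /* self */, 1, pages, NULL, status, 0))
		return -1;
	return status[0];	/* node id, or a negative errno value */
}
#endif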

#ifdef CONFIG_NUMA_BALANCING
/*
 * Returns true if this is a safe migration target node for misplaced NUMA
 * pages. Currently it only checks the watermarks, which is crude.
 */
static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
				   unsigned long nr_migrate_pages)
{
	int z;
	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
		struct zone *zone = pgdat->node_zones + z;

		if (!populated_zone(zone))
			continue;

		if (!zone_reclaimable(zone))
			continue;

		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
		if (!zone_watermark_ok(zone, 0,
				       high_wmark_pages(zone) +
				       nr_migrate_pages,
				       0, 0))
			continue;
		return true;
	}
	return false;
}

static struct page *alloc_misplaced_dst_page(struct page *page,
					   unsigned long data,
					   int **result)
{
	int nid = (int) data;
	struct page *newpage;

	newpage = __alloc_pages_node(nid,
					 (GFP_HIGHUSER_MOVABLE |
					  __GFP_THISNODE | __GFP_NOMEMALLOC |
					  __GFP_NORETRY | __GFP_NOWARN) &
					 ~__GFP_RECLAIM, 0);

	return newpage;
}

/*
 * Page migration rate limiting control.
 * Do not migrate more than @ratelimit_pages in a @migrate_interval_millisecs
 * window of time. Default here says do not migrate more than 1280M per second.
 */
static unsigned int migrate_interval_millisecs __read_mostly = 100;
static unsigned int ratelimit_pages __read_mostly = 128 << (20 - PAGE_SHIFT);
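
/*
 * Where 1280M/s comes from: ratelimit_pages is 128 << (20 - PAGE_SHIFT)
 * pages, which is 128MiB worth of pages for any PAGE_SHIFT (with 4KiB
 * pages: 128 << 8 = 32768 pages). 128MiB per 100ms window works out to
 * 1280MiB per second.
 */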

/* Returns true if the node is migrate rate-limited after the update */
static bool numamigrate_update_ratelimit(pg_data_t *pgdat,
					unsigned long nr_pages)
{
	/*
	 * Rate-limit the amount of data that is being migrated to a node.
	 * Optimal placement is no good if the memory bus is saturated and
	 * all the time is being spent migrating!
	 */
	if (time_after(jiffies, pgdat->numabalancing_migrate_next_window)) {
		spin_lock(&pgdat->numabalancing_migrate_lock);
		pgdat->numabalancing_migrate_nr_pages = 0;
		pgdat->numabalancing_migrate_next_window = jiffies +
			msecs_to_jiffies(migrate_interval_millisecs);
		spin_unlock(&pgdat->numabalancing_migrate_lock);
	}
	if (pgdat->numabalancing_migrate_nr_pages > ratelimit_pages) {
		trace_mm_numa_migrate_ratelimit(current, pgdat->node_id,
								nr_pages);
		return true;
	}

	/*
	 * This is an unlocked non-atomic update so errors are possible.
	 * The consequence is failing to migrate when we potentially should
	 * have, which is not severe enough to warrant locking. If it is ever
	 * a problem, it can be converted to a per-cpu counter.
	 */
	pgdat->numabalancing_migrate_nr_pages += nr_pages;
	return false;
}

static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
{
	int page_lru;

	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);

	/* Avoid migrating to a node that is nearly full */
	if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
		return 0;

	if (isolate_lru_page(page))
		return 0;

	/*
	 * migrate_misplaced_transhuge_page() skips page migration's usual
	 * check on page_count(), so we must do it here, now that the page
	 * has been isolated: a GUP pin, or any other pin, prevents migration.
	 * The expected page count is 3: 1 for the page's mapcount, 1 for the
	 * caller's pin and 1 for the reference taken by isolate_lru_page().
	 */
	if (PageTransHuge(page) && page_count(page) != 3) {
		putback_lru_page(page);
		return 0;
	}

	page_lru = page_is_file_cache(page);
	mod_zone_page_state(page_zone(page), NR_ISOLATED_ANON + page_lru,
				hpage_nr_pages(page));

	/*
	 * Isolating the page has taken another reference, so the
	 * caller's reference can be safely dropped without the page
	 * disappearing underneath us during migration.
	 */
	put_page(page);
	return 1;
}

bool pmd_trans_migrating(pmd_t pmd)
{
	struct page *page = pmd_page(pmd);
	return PageLocked(page);
}

/*
 * Attempt to migrate a misplaced page to the specified destination
 * node. Caller is expected to have an elevated reference count on
 * the page that will be dropped by this function before returning.
 */
int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
			   int node)
{
	pg_data_t *pgdat = NODE_DATA(node);
	int isolated;
	int nr_remaining;
	LIST_HEAD(migratepages);

	/*
	 * Don't migrate file pages that are mapped in multiple processes
	 * with execute permissions as they are probably shared libraries.
	 */
	if (page_mapcount(page) != 1 && page_is_file_cache(page) &&
	    (vma->vm_flags & VM_EXEC))
		goto out;

	/*
	 * Rate-limit the amount of data that is being migrated to a node.
	 * Optimal placement is no good if the memory bus is saturated and
	 * all the time is being spent migrating!
	 */
	if (numamigrate_update_ratelimit(pgdat, 1))
		goto out;

	isolated = numamigrate_isolate_page(pgdat, page);
	if (!isolated)
		goto out;

	list_add(&page->lru, &migratepages);
	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
				     NULL, node, MIGRATE_ASYNC,
				     MR_NUMA_MISPLACED);
	if (nr_remaining) {
		if (!list_empty(&migratepages)) {
			list_del(&page->lru);
			dec_zone_page_state(page, NR_ISOLATED_ANON +
					page_is_file_cache(page));
			putback_lru_page(page);
		}
		isolated = 0;
	} else
		count_vm_numa_event(NUMA_PAGE_MIGRATE);
	BUG_ON(!list_empty(&migratepages));
	return isolated;

out:
	put_page(page);
	return 0;
}
#endif /* CONFIG_NUMA_BALANCING */

#if defined(CONFIG_NUMA_BALANCING) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
/*
 * Migrates a THP to a given target node. page must be locked and is unlocked
 * before returning.
 */
int migrate_misplaced_transhuge_page(struct mm_struct *mm,
				struct vm_area_struct *vma,
				pmd_t *pmd, pmd_t entry,
				unsigned long address,
				struct page *page, int node)
{
	spinlock_t *ptl;
	pg_data_t *pgdat = NODE_DATA(node);
	int isolated = 0;
	struct page *new_page = NULL;
	int page_lru = page_is_file_cache(page);
	unsigned long mmun_start = address & HPAGE_PMD_MASK;
	unsigned long mmun_end = mmun_start + HPAGE_PMD_SIZE;
	pmd_t orig_entry;

	/*
	 * Rate-limit the amount of data that is being migrated to a node.
	 * Optimal placement is no good if the memory bus is saturated and
	 * all the time is being spent migrating!
	 */
	if (numamigrate_update_ratelimit(pgdat, HPAGE_PMD_NR))
		goto out_dropref;

	new_page = alloc_pages_node(node,
		(GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_RECLAIM,
		HPAGE_PMD_ORDER);
	if (!new_page)
		goto out_fail;

	isolated = numamigrate_isolate_page(pgdat, page);
	if (!isolated) {
		put_page(new_page);
		goto out_fail;
	}

	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, mmun_start, mmun_end);

	/* Prepare a page as a migration target */
	__set_page_locked(new_page);
	SetPageSwapBacked(new_page);

	/* anon mapping, we can simply copy page->mapping to the new page: */
	new_page->mapping = page->mapping;
	new_page->index = page->index;
	migrate_page_copy(new_page, page);
	WARN_ON(PageLRU(new_page));

	/* Recheck the target PMD */
	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
	ptl = pmd_lock(mm, pmd);
	if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
fail_putback:
		spin_unlock(ptl);
		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);

		/* Reverse changes made by migrate_page_copy() */
		if (TestClearPageActive(new_page))
			SetPageActive(page);
		if (TestClearPageUnevictable(new_page))
			SetPageUnevictable(page);

		unlock_page(new_page);
		put_page(new_page);		/* Free it */

		/* Retake the callers reference and putback on LRU */
		get_page(page);
		putback_lru_page(page);
		mod_zone_page_state(page_zone(page),
			 NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR);

		goto out_unlock;
	}

	orig_entry = *pmd;
	entry = mk_pmd(new_page, vma->vm_page_prot);
	entry = pmd_mkhuge(entry);
	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);

	/*
	 * Clear the old entry under pagetable lock and establish the new PTE.
	 * Any parallel GUP will either observe the old page blocking on the
	 * page lock, block on the page table lock or observe the new page.
	 * The SetPageUptodate on the new page and page_add_new_anon_rmap
	 * guarantee the copy is visible before the pagetable update.
	 */
	flush_cache_range(vma, mmun_start, mmun_end);
	page_add_anon_rmap(new_page, vma, mmun_start);
	pmdp_huge_clear_flush_notify(vma, mmun_start, pmd);
	set_pmd_at(mm, mmun_start, pmd, entry);
	flush_tlb_range(vma, mmun_start, mmun_end);
	update_mmu_cache_pmd(vma, address, &entry);

	if (page_count(page) != 2) {
		set_pmd_at(mm, mmun_start, pmd, orig_entry);
		flush_tlb_range(vma, mmun_start, mmun_end);
		mmu_notifier_invalidate_range(mm, mmun_start, mmun_end);
		update_mmu_cache_pmd(vma, address, &entry);
		page_remove_rmap(new_page);
		goto fail_putback;
	}

	mlock_migrate_page(new_page, page);
	set_page_memcg(new_page, page_memcg(page));
	set_page_memcg(page, NULL);
	page_remove_rmap(page);

	spin_unlock(ptl);
	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);

	/* Take an "isolate" reference and put new page on the LRU. */
	get_page(new_page);
	putback_lru_page(new_page);

	unlock_page(new_page);
	unlock_page(page);
	put_page(page);			/* Drop the rmap reference */
	put_page(page);			/* Drop the LRU isolation reference */

	count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
	count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);

	mod_zone_page_state(page_zone(page),
			NR_ISOLATED_ANON + page_lru,
			-HPAGE_PMD_NR);
	return isolated;

out_fail:
	count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR);
out_dropref:
	ptl = pmd_lock(mm, pmd);
	if (pmd_same(*pmd, entry)) {
		entry = pmd_modify(entry, vma->vm_page_prot);
		set_pmd_at(mm, mmun_start, pmd, entry);
		update_mmu_cache_pmd(vma, address, &entry);
	}
	spin_unlock(ptl);

out_unlock:
	unlock_page(page);
	put_page(page);
	return 0;
}
#endif /* CONFIG_NUMA_BALANCING && CONFIG_TRANSPARENT_HUGEPAGE */

#endif /* CONFIG_NUMA */