* refs/heads/tmp-8ec9fd8
ANDROID: sdcardfs: Check stacked filesystem depth
Fix backport of "tcp: detect malicious patterns in tcp_collapse_ofo_queue()"
tcp: detect malicious patterns in tcp_collapse_ofo_queue()
tcp: avoid collapses in tcp_prune_queue() if possible
x86_64_cuttlefish_defconfig: Enable android-verity
x86_64_cuttlefish_defconfig: enable verity cert
Linux 4.4.142
perf tools: Move syscall number fallbacks from perf-sys.h to tools/arch/x86/include/asm/
x86/cpu: Probe CPUID leaf 6 even when cpuid_level == 6
Kbuild: fix # escaping in .cmd files for future Make
ANDROID: Fix massive cpufreq_times memory leaks
ANDROID: Reduce use of #ifdef CONFIG_CPU_FREQ_TIMES
UPSTREAM: binder: replace "%p" with "%pK"
UPSTREAM: binder: free memory on error
UPSTREAM: binder: fix proc->files use-after-free
UPSTREAM: Revert "FROMLIST: binder: fix proc->files use-after-free"
UPSTREAM: ANDROID: binder: change down_write to down_read
UPSTREAM: ANDROID: binder: correct the cmd print for BINDER_WORK_RETURN_ERROR
UPSTREAM: ANDROID: binder: remove 32-bit binder interface.
UPSTREAM: ANDROID: binder: re-order some conditions
UPSTREAM: android: binder: use VM_ALLOC to get vm area
UPSTREAM: android: binder: Use true and false for boolean values
UPSTREAM: android: binder: Use octal permissions
UPSTREAM: android: binder: Prefer __func__ to using hardcoded function name
UPSTREAM: ANDROID: binder: make binder_alloc_new_buf_locked static and indent its arguments
UPSTREAM: android: binder: Check for errors in binder_alloc_shrinker_init().
treewide: Use array_size in f2fs_kvzalloc()
treewide: Use array_size() in f2fs_kzalloc()
treewide: Use array_size() in f2fs_kmalloc()
overflow.h: Add allocation size calculation helpers
f2fs: fix to clear FI_VOLATILE_FILE correctly
f2fs: let sync node IO interrupt async one
f2fs: don't change wbc->sync_mode
f2fs: fix to update mtime correctly
fs: f2fs: insert space around that ':' and ', '
fs: f2fs: add missing blank lines after declarations
fs: f2fs: change variable type of offset from "unsigned" to "loff_t"
f2fs: clean up symbol namespace
f2fs: make set_de_type() static
f2fs: make __f2fs_write_data_pages() static
f2fs: fix to avoid accessing cross the boundary
f2fs: fix to let caller retry allocating block address
disable loading f2fs module on PAGE_SIZE > 4KB
f2fs: fix error path of move_data_page
f2fs: don't drop dentry pages after fs shutdown
f2fs: fix to avoid race during access gc_thread pointer
f2fs: clean up with clear_radix_tree_dirty_tag
f2fs: fix to not trigger writeback during recovery
f2fs: clear discard_wake earlier
f2fs: let discard thread wait a little longer if dev is busy
f2fs: avoid GC getting stuck due to atomic write
f2fs: introduce sbi->gc_mode to determine the policy
f2fs: keep migration IO order in LFS mode
f2fs: fix to wait page writeback during revoking atomic write
f2fs: Fix deadlock in shutdown ioctl
f2fs: detect synchronous writeback earlier
mm: remove nr_pages argument from pagevec_lookup_{,range}_tag()
ceph: use pagevec_lookup_range_nr_tag()
mm: add variant of pagevec_lookup_range_tag() taking number of pages
mm: use pagevec_lookup_range_tag() in write_cache_pages()
mm: use pagevec_lookup_range_tag() in __filemap_fdatawait_range()
nilfs2: use pagevec_lookup_range_tag()
gfs2: use pagevec_lookup_range_tag()
f2fs: use find_get_pages_tag() for looking up single page
f2fs: simplify page iteration loops
f2fs: use pagevec_lookup_range_tag()
ext4: use pagevec_lookup_range_tag()
ceph: use pagevec_lookup_range_tag()
btrfs: use pagevec_lookup_range_tag()
mm: implement find_get_pages_range_tag()
f2fs: clean up with is_valid_blkaddr()
f2fs: fix to initialize min_mtime with ULLONG_MAX
f2fs: fix to let checkpoint guarantee atomic page persistence
f2fs: fix to initialize i_current_depth according to inode type
Revert "f2fs: add ovp valid_blocks check for bg gc victim to fg_gc"
f2fs: don't drop any page on f2fs_cp_error() case
f2fs: fix spelling mistake: "extenstion" -> "extension"
f2fs: enhance sanity_check_raw_super() to avoid potential overflows
f2fs: treat volatile file's data as hot one
f2fs: introduce release_discard_addr() for cleanup
f2fs: fix potential overflow
f2fs: rename dio_rwsem to i_gc_rwsem
f2fs: move mnt_want_write_file after range check
f2fs: fix missing clear FI_NO_PREALLOC in some error case
f2fs: enforce fsync_mode=strict for renamed directory
f2fs: sanity check for total valid node blocks
f2fs: sanity check on sit entry
f2fs: avoid bug_on on corrupted inode
f2fs: give message and set need_fsck given broken node id
f2fs: clean up commit_inmem_pages()
f2fs: do not check F2FS_INLINE_DOTS in recover
f2fs: remove duplicated dquot_initialize and fix error handling
f2fs: stop issue discard if something wrong with f2fs
f2fs: fix return value in f2fs_ioc_commit_atomic_write
f2fs: allocate hot_data for atomic write more strictly
f2fs: check if inmem_pages list is empty correctly
f2fs: fix race in between GC and atomic open
f2fs: change le32 to le16 of f2fs_inode->i_extra_size
f2fs: check cur_valid_map_mir & raw_sit block count when flush sit entries
f2fs: correct return value of f2fs_trim_fs
f2fs: fix to show missing bits in FS_IOC_GETFLAGS
f2fs: remove unneeded F2FS_PROJINHERIT_FL
f2fs: don't use GFP_ZERO for page caches
f2fs: issue all big range discards in umount process
f2fs: remove redundant block plug
f2fs: remove unmatched zero_user_segment when convert inline dentry
f2fs: introduce private inode status mapping
fscrypt: log the crypto algorithm implementations
crypto: api - Add crypto_type_has_alg helper
crypto: skcipher - Add low-level skcipher interface
crypto: skcipher - Add helper to retrieve driver name
crypto: skcipher - Add default key size helper
fscrypt: add Speck128/256 support
fscrypt: only derive the needed portion of the key
fscrypt: separate key lookup from key derivation
fscrypt: use a common logging function
fscrypt: remove internal key size constants
fscrypt: remove unnecessary check for non-logon key type
fscrypt: make fscrypt_operations.max_namelen an integer
fscrypt: drop empty name check from fname_decrypt()
fscrypt: drop max_namelen check from fname_decrypt()
fscrypt: don't special-case EOPNOTSUPP from fscrypt_get_encryption_info()
fscrypt: don't clear flags on crypto transform
fscrypt: remove stale comment from fscrypt_d_revalidate()
fscrypt: remove error messages for skcipher_request_alloc() failure
fscrypt: remove unnecessary NULL check when allocating skcipher
fscrypt: clean up after fscrypt_prepare_lookup() conversions
fscrypt: use unbound workqueue for decryption
f2fs: run fstrim asynchronously if runtime discard is on
f2fs: turn down IO priority of discard from background
f2fs: don't split checkpoint in fstrim
f2fs: issue discard commands proactively in high fs utilization
f2fs: add fsync_mode=nobarrier for non-atomic files
f2fs: let fstrim issue discard commands in lower priority
f2fs: avoid fsync() failure caused by EAGAIN in writepage()
f2fs: clear PageError on writepage - part 2
f2fs: check cap_resource only for data blocks
Revert "f2fs: introduce f2fs_set_page_dirty_nobuffer"
f2fs: clear PageError on writepage
f2fs: call unlock_new_inode() before d_instantiate()
f2fs: refactor read path to allow multiple postprocessing steps
fscrypt: allow synchronous bio decryption
f2fs: remain written times to update inode during fsync
f2fs: make assignment of t->dentry_bitmap more readable
f2fs: truncate preallocated blocks in error case
f2fs: fix a wrong condition in f2fs_skip_inode_update
f2fs: reserve bits for fs-verity
f2fs: Add a segment type check in inplace write
f2fs: no need to initialize zero value for GFP_F2FS_ZERO
f2fs: don't track new nat entry in nat set
f2fs: clean up with F2FS_BLK_ALIGN
f2fs: check blkaddr more accuratly before issue a bio
f2fs: Set GF_NOFS in read_cache_page_gfp while doing f2fs_quota_read
f2fs: introduce a new mount option test_dummy_encryption
f2fs: introduce F2FS_FEATURE_LOST_FOUND feature
f2fs: release locks before return in f2fs_ioc_gc_range()
f2fs: align memory boundary for bitops
f2fs: remove unneeded set_cold_node()
f2fs: add nowait aio support
f2fs: wrap all options with f2fs_sb_info.mount_opt
f2fs: Don't overwrite all types of node to keep node chain
f2fs: introduce mount option for fsync mode
f2fs: fix to restore old mount option in ->remount_fs
f2fs: wrap sb_rdonly with f2fs_readonly
f2fs: avoid selinux denial on CAP_SYS_RESOURCE
f2fs: support hot file extension
f2fs: fix to avoid race in between atomic write and background GC
f2fs: do gc in greedy mode for whole range if gc_urgent mode is set
f2fs: issue discard aggressively in the gc_urgent mode
f2fs: set readdir_ra by default
f2fs: add auto tuning for small devices
f2fs: add mount option for segment allocation policy
f2fs: don't stop GC if GC is contended
f2fs: expose extension_list sysfs entry
f2fs: fix to set KEEP_SIZE bit in f2fs_zero_range
f2fs: introduce sb_lock to make encrypt pwsalt update exclusive
f2fs: remove redundant initialization of pointer 'p'
f2fs: flush cp pack except cp pack 2 page at first
f2fs: clean up f2fs_sb_has_xxx functions
f2fs: remove redundant check of page type when submit bio
f2fs: fix to handle looped node chain during recovery
f2fs: handle quota for orphan inodes
f2fs: support passing down write hints to block layer with F2FS policy
f2fs: support passing down write hints given by users to block layer
f2fs: fix to clear CP_TRIMMED_FLAG
f2fs: support large nat bitmap
f2fs: fix to check extent cache in f2fs_drop_extent_tree
f2fs: restrict inline_xattr_size configuration
f2fs: fix heap mode to reset it back
f2fs: fix potential corruption in area before F2FS_SUPER_OFFSET
fscrypt: fix build with pre-4.6 gcc versions
fscrypt: fix up fscrypt_fname_encrypted_size() for internal use
fscrypt: define fscrypt_fname_alloc_buffer() to be for presented names
fscrypt: calculate NUL-padding length in one place only
fscrypt: move fscrypt_symlink_data to fscrypt_private.h
fscrypt: remove fscrypt_fname_usr_to_disk()
f2fs: switch to fscrypt_get_symlink()
f2fs: switch to fscrypt ->symlink() helper functions
fscrypt: new helper function - fscrypt_get_symlink()
fscrypt: new helper functions for ->symlink()
fscrypt: trim down fscrypt.h includes
fscrypt: move fscrypt_is_dot_dotdot() to fs/crypto/fname.c
fscrypt: move fscrypt_valid_enc_modes() to fscrypt_private.h
fscrypt: move fscrypt_operations declaration to fscrypt_supp.h
fscrypt: split fscrypt_dummy_context_enabled() into supp/notsupp versions
fscrypt: move fscrypt_ctx declaration to fscrypt_supp.h
fscrypt: move fscrypt_info_cachep declaration to fscrypt_private.h
fscrypt: move fscrypt_control_page() to supp/notsupp headers
fscrypt: move fscrypt_has_encryption_key() to supp/notsupp headers
f2fs: don't put dentry page in pagecache into highmem
f2fs: support inode creation time
f2fs: rebuild sit page from sit info in mem
f2fs: stop issuing discard if fs is readonly
f2fs: clean up duplicated assignment in init_discard_policy
f2fs: use GFP_F2FS_ZERO for cleanup
f2fs: allow to recover node blocks given updated checkpoint
f2fs: recover some i_inline flags
f2fs: correct removexattr behavior for null valued extended attribute
f2fs: drop page cache after fs shutdown
f2fs: stop gc/discard thread after fs shutdown
f2fs: handle error case in f2fs_ioc_shutdown
f2fs: split need_inplace_update
f2fs: fix to update last_disk_size correctly
f2fs: kill F2FS_INLINE_XATTR_ADDRS for cleanup
f2fs: clean up error path of fill_super
f2fs: avoid hungtask when GC encrypted block if io_bits is set
f2fs: allow quota to use reserved blocks
f2fs: fix to drop all inmem pages correctly
f2fs: speed up defragment on sparse file
f2fs: support F2FS_IOC_PRECACHE_EXTENTS
f2fs: add an ioctl to disable GC for specific file
f2fs: prevent newly created inode from being dirtied incorrectly
f2fs: support FIEMAP_FLAG_XATTR
f2fs: fix to cover f2fs_inline_data_fiemap with inode_lock
f2fs: check node page again in write end io
f2fs: fix to calculate required free section correctly
f2fs: handle newly created page when revoking inmem pages
f2fs: add resgid and resuid to reserve root blocks
f2fs: implement cgroup writeback support
f2fs: remove unused pend_list_tag
f2fs: avoid high cpu usage in discard thread
f2fs: make local functions static
f2fs: add reserved blocks for root user
f2fs: check segment type in __f2fs_replace_block
f2fs: update inode info to inode page for new file
f2fs: show precise # of blocks that user/root can use
f2fs: clean up unneeded declaration
f2fs: continue to do direct IO if we only preallocate partial blocks
f2fs: enable quota at remount from r to w
f2fs: skip stop_checkpoint for user data writes
f2fs: fix missing error number for xattr operation
f2fs: recover directory operations by fsync
f2fs: return error during fill_super
f2fs: fix an error case of missing update inode page
f2fs: fix potential hangtask in f2fs_trace_pid
f2fs: no need return value in restore summary process
f2fs: use unlikely for release case
f2fs: don't return value in truncate_data_blocks_range
f2fs: clean up f2fs_map_blocks
f2fs: clean up hash codes
f2fs: fix error handling in fill_super
f2fs: spread f2fs_k{m,z}alloc
f2fs: inject fault to kvmalloc
f2fs: inject fault to kzalloc
f2fs: remove a redundant conditional expression
f2fs: apply write hints to select the type of segment for direct write
f2fs: switch to fscrypt_prepare_setattr()
f2fs: switch to fscrypt_prepare_lookup()
f2fs: switch to fscrypt_prepare_rename()
f2fs: switch to fscrypt_prepare_link()
f2fs: switch to fscrypt_file_open()
f2fs: remove repeated f2fs_bug_on
f2fs: remove an excess variable
f2fs: fix lock dependency in between dio_rwsem & i_mmap_sem
f2fs: remove unused parameter
f2fs: still write data if preallocate only partial blocks
f2fs: introduce sysfs readdir_ra to readahead inode block in readdir
f2fs: fix concurrent problem for updating free bitmap
f2fs: remove unneeded memory footprint accounting
f2fs: no need to read nat block if nat_block_bitmap is set
f2fs: reserve nid resource for quota sysfile
fscrypt: resolve some cherry-pick bugs
fscrypt: move to generic async completion
crypto: introduce crypto wait for async op
fscrypt: lock mutex before checking for bounce page pool
fscrypt: new helper function - fscrypt_prepare_setattr()
fscrypt: new helper function - fscrypt_prepare_lookup()
fscrypt: new helper function - fscrypt_prepare_rename()
fscrypt: new helper function - fscrypt_prepare_link()
fscrypt: new helper function - fscrypt_file_open()
fscrypt: new helper function - fscrypt_require_key()
fscrypt: remove unneeded empty fscrypt_operations structs
fscrypt: remove ->is_encrypted()
fscrypt: switch from ->is_encrypted() to IS_ENCRYPTED()
fs, fscrypt: add an S_ENCRYPTED inode flag
fscrypt: clean up include file mess
fscrypt: fix dereference of NULL user_key_payload
fscrypt: make ->dummy_context() return bool
f2fs: deny accessing encryption policy if encryption is off
f2fs: inject fault in inc_valid_node_count
f2fs: fix to clear FI_NO_PREALLOC
f2fs: expose quota information in debugfs
f2fs: separate nat entry mem alloc from nat_tree_lock
f2fs: validate before set/clear free nat bitmap
f2fs: avoid open-coded loops in __add_ino_entry
f2fs: apply write hints to select the type of segments for buffered write
f2fs: introduce scan_curseg_cache for cleanup
f2fs: optimize the way of traversing free_nid_bitmap
f2fs: keep scanning until enough free nids are acquired
f2fs: trace checkpoint reason in fsync()
f2fs: keep isize once block is reserved cross EOF
f2fs: avoid race in between GC and block exchange
f2fs: save a multiplication for last_nid calculation
f2fs: fix summary info corruption
f2fs: remove dead code in update_meta_page
f2fs: remove unneeded semicolon
f2fs: don't bother with inode->i_version
f2fs: check curseg space before foreground GC
f2fs: use rw_semaphore to protect SIT cache
f2fs: support quota sys files
f2fs: add quota_ino feature infra
f2fs: optimize __update_nat_bits
f2fs: modify for accurate fggc node io stat
Revert "f2fs: handle dirty segments inside refresh_sit_entry"
f2fs: add a function to move nid
f2fs: export SSR allocation threshold
f2fs: give correct trimmed blocks in fstrim
f2fs: support bio allocation error injection
f2fs: support get_page error injection
f2fs: add missing sysfs description
f2fs: support soft block reservation
f2fs: handle error case when adding xattr entry
f2fs: support flexible inline xattr size
f2fs: show current cp state
f2fs: add missing quota_initialize
f2fs: show # of dirty segments via sysfs
f2fs: stop all the operations by cp_error flag
f2fs: remove several redundant assignments
f2fs: avoid using timespec
f2fs: fix to correct no_fggc_candidate
Revert "f2fs: return wrong error number on f2fs_quota_write"
f2fs: remove obsolete pointer for truncate_xattr_node
f2fs: retry ENOMEM for quota_read|write
f2fs: limit # of inmemory pages
f2fs: update ctx->pos correctly when hitting hole in directory
f2fs: relocate readahead codes in readdir()
f2fs: allow readdir() to be interrupted
f2fs: trace f2fs_readdir
f2fs: trace f2fs_lookup
f2fs: skip searching non-exist range in truncate_hole
f2fs: expose some sectors to user in inline data or dentry case
f2fs: avoid stale fi->gdirty_list pointer
f2fs/crypto: drop crypto key at evict_inode only
f2fs: fix to avoid race when accessing last_disk_size
f2fs: Fix bool initialization/comparison
f2fs: give up CP_TRIMMED_FLAG if it drops discards
f2fs: trace f2fs_remove_discard
f2fs: reduce cmd_lock coverage in __issue_discard_cmd
f2fs: split discard policy
f2fs: wrap discard policy
f2fs: support issuing/waiting discard in range
f2fs: fix to flush multiple device in checkpoint
f2fs: enhance multiple device flush
f2fs: fix to show ino management cache size correctly
f2fs: drop FI_UPDATE_WRITE tag after f2fs_issue_flush
f2fs: obsolete ALLOC_NID_LIST list
f2fs: convert inline data for direct I/O & FI_NO_PREALLOC
f2fs: allow readpages with NULL file pointer
f2fs: show flush list status in sysfs
f2fs: introduce read_xattr_block
f2fs: introduce read_inline_xattr
Revert "f2fs: reuse nids more aggressively"
Revert "f2fs: node segment is prior to data segment selected victim"
f2fs: fix potential panic during fstrim
f2fs: hurry up to issue discard after io interruption
f2fs: fix to show correct discard_granularity in sysfs
f2fs: detect dirty inode in evict_inode
f2fs: clear radix tree dirty tag of pages whose dirty flag is cleared
f2fs: speed up gc_urgent mode with SSR
f2fs: better to wait for fstrim completion
f2fs: avoid race in between read xattr & write xattr
f2fs: make get_lock_data_page to handle encrypted inode
f2fs: use generic terms used for encrypted block management
f2fs: introduce f2fs_encrypted_file for clean-up
Revert "f2fs: add a new function get_ssr_cost"
f2fs: constify super_operations
f2fs: fix to wake up all sleeping flusher
f2fs: avoid race in between atomic_read & atomic_inc
f2fs: remove unneeded parameter of change_curseg
f2fs: update i_flags correctly
f2fs: don't check inode's checksum if it was dirtied or writebacked
f2fs: don't need to update inode checksum for recovery
f2fs: trigger fdatasync for non-atomic_write file
f2fs: fix to avoid race in between aio and gc
f2fs: wake up discard_thread iff there is a candidate
f2fs: return error when accessing insane file offset
f2fs: trigger normal fsync for non-atomic_write file
f2fs: clear FI_HOT_DATA correctly
f2fs: fix out-of-order execution in f2fs_issue_flush
f2fs: issue discard commands if gc_urgent is set
f2fs: introduce discard_granularity sysfs entry
f2fs: remove unused function overprovision_sections
f2fs: check hot_data for roll-forward recovery
f2fs: add tracepoint for f2fs_gc
f2fs: retry to revoke atomic commit in -ENOMEM case
f2fs: let fill_super handle roll-forward errors
f2fs: merge equivalent flags F2FS_GET_BLOCK_[READ|DIO]
f2fs: support journalled quota
f2fs: fix potential overflow when adjusting GC cycle
f2fs: avoid unneeded sync on quota file
f2fs: introduce gc_urgent mode for background GC
f2fs: use IPU for cold files
f2fs: fix the size value in __check_sit_bitmap
f2fs: add app/fs io stat
f2fs: do not change the valid_block value if cur_valid_map was wrongly set or cleared
f2fs: update cur_valid_map_mir together with cur_valid_map
f2fs: use printk_ratelimited for f2fs_msg
f2fs: expose features to sysfs entry
f2fs: support inode checksum
f2fs: return wrong error number on f2fs_quota_write
f2fs: provide f2fs_balance_fs to __write_node_page
f2fs: introduce f2fs_statfs_project
f2fs: don't need to wait for node writes for atomic write
f2fs: avoid naming confusion of sysfs init
f2fs: support project quota
f2fs: record quota during dot{,dot} recovery
f2fs: enhance on-disk inode structure scalability
f2fs: make max inline size changeable
f2fs: add ioctl to expose current features
f2fs: make background threads of f2fs being aware of freezing
f2fs: don't give partially written atomic data from process crash
f2fs: give a try to do atomic write in -ENOMEM case
f2fs: preserve i_mode if __f2fs_set_acl() fails
f2fs: alloc new nids for xattr block in recovery
f2fs: spread struct f2fs_dentry_ptr for inline path
f2fs: remove unused input parameter
f2fs: avoid cpu lockup
f2fs: include seq_file.h for sysfs.c
f2fs: Don't clear SGID when inheriting ACLs
f2fs: remove extra inode_unlock() in error path
fscrypt: add support for AES-128-CBC
fscrypt: inline fscrypt_free_filename()
f2fs: make more close to v4.13-rc1
f2fs: support plain user/group quota
f2fs: avoid deadlock caused by lock order of page and lock_op
f2fs: use spin_{,un}lock_irq{save,restore}
f2fs: relax migratepage for atomic written page
f2fs: don't count inode block in in-memory inode.i_blocks
Revert "f2fs: fix to clean previous mount option when remount_fs"
f2fs: do not set LOST_PINO for renamed dir
f2fs: do not set LOST_PINO for newly created dir
f2fs: skip ->writepages for {meta,node}_inode during recovery
f2fs: introduce __check_sit_bitmap
f2fs: stop gc/discard thread in prior during umount
f2fs: introduce reserved_blocks in sysfs
f2fs: avoid redundant f2fs_flush after remount
f2fs: report # of free inodes more precisely
f2fs: add ioctl to do gc with target block address
f2fs: don't need to check encrypted inode for partial truncation
f2fs: measure inode.i_blocks as generic filesystem
f2fs: set CP_TRIMMED_FLAG correctly
f2fs: require key for truncate(2) of encrypted file
f2fs: move sysfs code from super.c to fs/f2fs/sysfs.c
f2fs: clean up sysfs codes
f2fs: fix wrong error number of fill_super
f2fs: fix to show injection rate in ->show_options
f2fs: Fix a return value in case of error in 'f2fs_fill_super'
f2fs: use proper variable name
f2fs: fix to avoid panic when encountering corrupt node
f2fs: don't track newly allocated nat entry in list
f2fs: add f2fs_bug_on in __remove_discard_cmd
f2fs: introduce __wait_one_discard_bio
f2fs: dax: fix races between page faults and truncating pages
f2fs: simplify the way of calculating next nat address
f2fs: sanity check size of nat and sit cache
f2fs: fix a panic caused by NULL flush_cmd_control
f2fs: remove the unnecessary cast for PTR_ERR
f2fs: remove false-positive bug_on
f2fs: Do not issue small discards in LFS mode
f2fs: don't bother checking for encryption key in ->write_iter()
f2fs: don't bother checking for encryption key in ->mmap()
f2fs: wait discard IO completion without cmd_lock held
f2fs: wake up all waiters in f2fs_submit_discard_endio
f2fs: show more info if fail to issue discard
f2fs: introduce io_list for serialize data/node IOs
f2fs: split wio_mutex
f2fs: combine huge num of discard rb tree consistence checks
f2fs: fix a bug caused by NULL extent tree
f2fs: try to freeze in gc and discard threads
f2fs: add a new function get_ssr_cost
f2fs: declare load_free_nid_bitmap static
f2fs: avoid f2fs_lock_op for IPU writes
f2fs: split bio cache
f2fs: use fio instead of multiple parameters
f2fs: remove unnecessary read cases in merged IO flow
f2fs: use f2fs_submit_page_bio for ra_meta_pages
f2fs: make sure f2fs_gc returns consistent errno
f2fs: load inode's flag from disk
f2fs: sanity check checkpoint segno and blkoff
f2fs, block_dump: give WRITE direction to submit_bio
fscrypt: correct collision claim for digested names
f2fs: switch to using fscrypt_match_name()
fscrypt: introduce helper function for filename matching
fscrypt: fix context consistency check when key(s) unavailable
fscrypt: Move key structure and constants to uapi
fscrypt: remove fscrypt_symlink_data_len()
fscrypt: remove unnecessary checks for NULL operations
fscrypt: eliminate ->prepare_context() operation
fscrypt: remove broken support for detecting keyring key revocation
fscrypt: avoid collisions when presenting long encrypted filenames
f2fs: check entire encrypted bigname when finding a dentry
f2fs: sync f2fs_lookup() with ext4_lookup()
f2fs: fix a mount fail for wrong next_scan_nid
f2fs: relocate inode_{,un}lock in F2FS_IOC_SETFLAGS
f2fs: show available_nids in f2fs/status
f2fs: flush dirty nats periodically
f2fs: introduce CP_TRIMMED_FLAG to avoid unneeded discard
f2fs: allow cpc->reason to indicate more than one reason
f2fs: release cp and dnode lock before IPU
f2fs: shrink size of struct discard_cmd
f2fs: don't hold cmd_lock during waiting discard command
f2fs: nullify fio->encrypted_page for each writes
f2fs: sanity check segment count
f2fs: introduce valid_ipu_blkaddr to clean up
f2fs: lookup extent cache first under IPU scenario
f2fs: reconstruct code to write a data page
f2fs: introduce __wait_discard_cmd
f2fs: introduce __issue_discard_cmd
f2fs: enable small discard by default
f2fs: delay awaking discard thread
f2fs: separate read nat page from nat_tree_lock
f2fs: fix multiple f2fs_add_link() having same name for inline dentry
f2fs: skip encrypted inode in ASYNC IPU policy
f2fs: fix out-of free segments
f2fs: improve definition of statistic macros
f2fs: assign allocation hint for warm/cold data
f2fs: fix _IOW usage
f2fs: add ioctl to flush data from faster device to cold area
f2fs: introduce async IPU policy
f2fs: add undiscard blocks stat
f2fs: unlock cp_rwsem early for IPU writes
f2fs: introduce __check_rb_tree_consistence
f2fs: trace __submit_discard_cmd
f2fs: in prior to issue big discard
f2fs: clean up discard_cmd_control structure
f2fs: use rb-tree to track pending discard commands
f2fs: avoid dirty node pages in check_only recovery
f2fs: fix not to set fsync/dentry mark
f2fs: allocate hot_data for atomic writes
f2fs: give time to flush dirty pages for checkpoint
f2fs: fix fs corruption due to zero inode page
f2fs: shrink blk plug region
f2fs: extract rb-tree operation infrastructure
f2fs: avoid frequent checkpoint during f2fs_gc
f2fs: clean up some macros in terms of GET_SEGNO
f2fs: clean up get_valid_blocks with consistent parameter
f2fs: use segment number for get_valid_blocks
f2fs: guard macro variables with braces
f2fs: fix comment on f2fs_flush_merged_bios() after 86531d6b
f2fs: prevent waiter encountering incorrect discard states
f2fs: introduce f2fs_wait_discard_bios
f2fs: split discard_cmd_list
Revert "f2fs: put allocate_segment after refresh_sit_entry"
f2fs: split make_dentry_ptr() into block and inline versions
f2fs: submit bio of in-place-update pages
f2fs: remove the redundant variable definition
f2fs: avoid IO split due to mixed WB_SYNC_ALL and WB_SYNC_NONE
f2fs: write small sized IO to hot log
f2fs: use bitmap in discard_entry
f2fs: clean up destroy_discard_cmd_control
f2fs: count discard command entry
f2fs: show issued flush/discard count
f2fs: relax node version check for victim data in gc
f2fs: start SSR much earlier to avoid FG_GC
f2fs: allocate node and hot data in the beginning of partition
f2fs: fix wrong max cost initialization
f2fs: allow write page cache when writing cp
f2fs: don't reserve additional space in xattr block
f2fs: clean up xattr operation
f2fs: don't track volatile file in dirty inode list
f2fs: show the max number of volatile operations
f2fs: fix race condition in between free nid allocator/initializer
f2fs: use set_page_private macro in f2fs_trace_pid
f2fs: fix recording invalid last_victim
f2fs: more reasonable mem_size calculating of ino_entry
f2fs: calculate the f2fs_stat_info into base_mem
f2fs: avoid stat_inc_atomic_write for non-atomic file
f2fs: sanity check of crc_offset from raw checkpoint
f2fs: cleanup the disk level filename updating
f2fs: cover update_free_nid_bitmap with nid_list_lock
f2fs: fix bad prefetchw of NULL page
f2fs: clear FI_DATA_EXIST flag in truncate_inline_inode
f2fs: move mnt_want_write_file after arguments checking
f2fs: check new size by inode_newsize_ok in f2fs_insert_range
f2fs: avoid copying data to user-space if move file range fails
f2fs: drop duplicate new_size assign in f2fs_zero_range
f2fs: adjust the way of calculating nat block
f2fs: add fault injection on f2fs_truncate
f2fs: check range before defragment
f2fs: use parameter max_items instead of PIDVEC_SIZE
f2fs: add a punch discard command function
f2fs: allocate a bio for discarding when actually issuing it
f2fs: skip writeback meta pages if cp_mutex acquire failed
f2fs: show more precise message on orphan recovery failure
f2fs: remove dead macro PGOFS_OF_NEXT_DNODE
f2fs: drop duplicate radix tree lookup of nat_entry_set
f2fs: make sure trace all f2fs_issue_flush
f2fs: don't allow volatile writes for non-regular file
f2fs: don't allow atomic writes for non-regular files
f2fs: fix stale ATOMIC_WRITTEN_PAGE private pointer
f2fs: build stat_info before orphan inode recovery
f2fs: fix the fault of calculating blkstart twice
f2fs: fix the fault of checking F2FS_LINK_MAX for rename inode
f2fs: don't allow to get pino when filename is encrypted
f2fs: fix wrong error injection for evict_inode
f2fs: le32_to_cpu for ckpt->cp_pack_total_block_count
f2fs: le16_to_cpu for xattr->e_value_size
f2fs: don't need to invalidate wrong node page
f2fs: fix an error return value in truncate_partial_data_page
f2fs: combine nat_bits and free_nid_bitmap cache
f2fs: skip scanning free nid bitmap of full NAT blocks
f2fs: use __set{__clear}_bit_le
f2fs: update_free_nid_bitmap() can be static
f2fs: __update_nat_bits() can be static
f2fs: le16_to_cpu for xattr->e_value_size
f2fs: don't overwrite node block by SSR
f2fs: don't need to invalidate wrong node page
f2fs: fix an error return value in truncate_partial_data_page
fscrypt: catch up to v4.11-rc1
f2fs: avoid to flush nat journal entries
f2fs: avoid to issue redundant discard commands
f2fs: fix a PC-Lint compile warning
f2fs: add f2fs_drop_inode tracepoint
f2fs: Fix zoned block device support
f2fs: remove redundant set_page_dirty()
f2fs: fix to enlarge size of write_io_dummy mempool
f2fs: fix memory leak of write_io_dummy mempool during umount
f2fs: fix to update F2FS_{CP_}WB_DATA count correctly
f2fs: use MAX_FREE_NIDS for the free nids target
f2fs: introduce free nid bitmap
f2fs: new helper cur_cp_crc() getting crc in f2fs_checkpoint
f2fs: update the comment of default nr_pages to skipping
f2fs: drop the duplicate pval in f2fs_getxattr
f2fs: Don't update the xattr data that same as the exist
f2fs: kill __is_extent_same
f2fs: avoid bggc->fggc when enough free segments are available after cp
f2fs: select target segment with closer temperature in SSR mode
f2fs: show simple call stack in fault injection message
fscrypt: catch fscrypto_get_policy in v4.10-rc6
f2fs: use __clear_bit_le
f2fs: no need lock_op in f2fs_write_inline_data
f2fs: add bitmaps for empty or full NAT blocks
f2fs: replace rw semaphore extent_tree_lock with mutex lock
f2fs: avoid m_flags overlay when allocating more data blocks
f2fs: remove unsafe bitmap checking
f2fs: init local extent_info to avoid stale stack info in tp
f2fs: remove unnecessary condition check for write_checkpoint in f2fs_gc
f2fs: do SSR for node segments more aggressively
f2fs: check discard alignment only for SEQWRITE zones
f2fs: wait for discard completion after submission
f2fs: much larger batched trim_fs job
f2fs: avoid very large discard command
f2fs: find data segments across all the types
f2fs: do SSR in higher priority
f2fs: do SSR for data when there is enough free space
f2fs: node segment is prior to data segment selected victim
f2fs: put allocate_segment after refresh_sit_entry
f2fs: add ovp valid_blocks check for bg gc victim to fg_gc
f2fs: do not wait for writeback in write_begin
f2fs: replace __get_victim by dirty_segments in FG_GC
f2fs: fix multiple f2fs_add_link() calls having same name
f2fs: show actual device info in tracepoints
f2fs: use SSR for warm node as well
f2fs: enable inline_xattr by default
f2fs: introduce noinline_xattr mount option
f2fs: avoid reading NAT page by get_node_info
f2fs: remove build_free_nids() during checkpoint
f2fs: change recovery policy of xattr node block
f2fs: super: constify fscrypt_operations structure
f2fs: show checkpoint version at mount time
f2fs: remove preflush for nobarrier case
f2fs: check last page index in cached bio to decide submission
f2fs: check io submission more precisely
f2fs: fix trim_fs assignment
Revert "f2fs: remove batched discard in f2fs_trim_fs"
f2fs: fix missing bio_alloc(1)
f2fs: call internal __write_data_page directly
f2fs: avoid out-of-order execution of atomic writes
f2fs: move write_node_page above fsync_node_pages
f2fs: move flush tracepoint
f2fs: show # of APPEND and UPDATE inodes
f2fs: fix 446 coding style warnings in f2fs.h
f2fs: fix 3 coding style errors in f2fs.h
f2fs: declare missing static function
f2fs: show the fault injection mount option
f2fs: fix null pointer dereference when issuing flush in ->fsync
f2fs: fix to avoid overflow when left shifting page offset
f2fs: enhance lookup xattr
f2fs: fix a dead loop in f2fs_fiemap()
f2fs: do not preallocate blocks which has wrong buffer
f2fs: show # of on-going flush and discard bios
f2fs: add a kernel thread to issue discard commands asynchronously
f2fs: factor out discard command info into discard_cmd_control
f2fs: remove batched discard in f2fs_trim_fs
f2fs: reorganize stat information
f2fs: clean up flush/discard command namings
f2fs: check in-memory sit version bitmap
f2fs: check in-memory nat version bitmap
f2fs: check in-memory block bitmap
f2fs: introduce FI_ATOMIC_COMMIT
f2fs: clean up with list_{first, last}_entry
f2fs: return fs_trim if there is no candidate
f2fs: avoid needless checkpoint in f2fs_trim_fs
f2fs: relax async discard commands more
f2fs: drop exist_data for inline_data when truncated to 0
f2fs: don't allow encrypted operations without keys
f2fs: show the max number of atomic operations
f2fs: get io size bit from mount option
f2fs: support IO alignment for DATA and NODE writes
f2fs: add submit_bio tracepoint
f2fs: reassign new segment for mode=lfs
f2fs: fix a missing discard prefree segments
f2fs: use rb_entry_safe
f2fs: add a case of no need to read a page in write begin
f2fs: fix a problem of using memory after free
f2fs: remove unneeded condition
f2fs: don't cache nat entry if out of memory
f2fs: remove unused values in recover_fsync_data
f2fs: support async discard based on v4.9
f2fs: resolve op and op_flags conflicts
f2fs: remove wrong backported codes
f2fs: fix a missing size change in f2fs_setattr
fs/super.c: fix race between freeze_super() and thaw_super()
scripts/tags.sh: catch 4.9-rc6
f2fs: fix to access nullified flush_cmd_control pointer
f2fs: free meta pages if sanity check for ckpt is failed
f2fs: detect wrong layout
f2fs: call sync_fs when f2fs is idle
Revert "f2fs: use percpu_counter for # of dirty pages in inode"
f2fs: return AOP_WRITEPAGE_ACTIVATE for writepage
f2fs: do not activate auto_recovery for fallocated i_size
f2fs: fix 32-bit build
f2fs: set ->owner for debugfs status file's file_operations
f2fs: fix incorrect free inode count in ->statfs
f2fs: drop duplicate header timer.h
f2fs: fix wrong AUTO_RECOVER condition
f2fs: do not recover i_size if it's valid
f2fs: fix fdatasync
f2fs: fix to account total free nid correctly
f2fs: fix an infinite loop when flush nodes in cp
f2fs: don't wait writeback for datas during checkpoint
f2fs: fix wrong written_valid_blocks counting
f2fs: avoid BG_GC in f2fs_balance_fs
f2fs: fix redundant block allocation
f2fs: use err for f2fs_preallocate_blocks
f2fs: support multiple devices
f2fs: allow dio read for LFS mode
f2fs: revert segment allocation for direct IO
f2fs: return directly if block has been removed from the victim
Revert "f2fs: do not recover from previous remained wrong dnodes"
f2fs: remove checkpoint in f2fs_freeze
f2fs: assign segments correctly for direct_io
f2fs: fix wrong i_atime recovery
f2fs: record inode updating status correctly
f2fs: Trace reset zone events
f2fs: Reset sequential zones on zoned block devices
f2fs: Cache zoned block devices zone type
f2fs: Do not allow adaptive mode for host-managed zoned block devices
f2fs: Always enable discard for zoned blocks devices
f2fs: Suppress discard warning message for zoned block devices
f2fs: Check zoned block feature for host-managed zoned block devices
f2fs: Use generic zoned block device terminology
f2fs: Add missing break in switch-case
f2fs: avoid infinite loop in the EIO case on recover_orphan_inodes
f2fs: report error of f2fs_fill_dentries
fs/crypto: catch up 4.9-rc6
f2fs: hide a maybe-uninitialized warning
f2fs: remove percpu_count due to performance regression
f2fs: make clean inodes when flushing inode page
f2fs: keep dirty inodes selectively for checkpoint
f2fs: Replace CURRENT_TIME_SEC with current_time() for inode timestamps
f2fs: use BIO_MAX_PAGES for bio allocation
f2fs: declare static function for __build_free_nids
f2fs: call f2fs_balance_fs for setattr
f2fs: count dirty inodes to flush node pages during checkpoint
f2fs: avoid casted negative value as shrink count
f2fs: don't interrupt free nids building during nid allocation
f2fs: clean up free nid list operations
f2fs: split free nid list
f2fs: clear nlink if fail to add_link
f2fs: fix sparse warnings
f2fs: fix error handling in fsync_node_pages
f2fs: fix to update largest extent under lock
f2fs: be aware of extent beyond EOF in fiemap
f2fs: don't miss any f2fs_balance_fs cases
f2fs: add missing f2fs_balance_fs in f2fs_zero_range
f2fs: give a chance to detach from dirty list
f2fs: fix to release discard entries during checkpoint
f2fs: exclude free nids building and allocation
f2fs: fix to determine start_cp_addr by sbi->cur_cp_pack
f2fs: fix overflow due to condition check order
posix_acl: Clear SGID bit when setting file permissions
f2fs: fix wrong sum_page pointer in f2fs_gc
f2fs: backport from (4c1fad64 - Merge tag 'for-f2fs-4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs)
Change-Id: I6c7208efc63ce7b13f26f0ec1cd3c8aef410eff0
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
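
Several entries above ("mm: implement find_get_pages_range_tag()" through the
pagevec_lookup_range_tag() conversions) backport Jan Kara's tagged-lookup
rework, which the mm/filemap.c listing below depends on. A minimal
before/after sketch of the helper, with signatures assumed from the mainline
series rather than taken from this tree:

        /* old helper: grab up to nr_pages tagged pages; the caller must
         * re-check every returned page against its own end index */
        nr = pagevec_lookup_tag(&pvec, mapping, &index,
                                PAGECACHE_TAG_WRITEBACK, PAGEVEC_SIZE);

        /* new helper: the end index is part of the lookup, so range
         * handling and loop termination live in one place */
        nr = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
                                      PAGECACHE_TAG_WRITEBACK);

__filemap_fdatawait_range() in the listing below is one of the converted
call sites.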
/*
 *      linux/mm/filemap.c
 *
 * Copyright (C) 1994-1999  Linus Torvalds
 */

/*
 * This file handles the generic file mmap semantics used by
 * most "normal" filesystems (but you don't /have/ to use this:
 * the NFS filesystem used to do this differently, for example)
 */
#include <linux/export.h>
#include <linux/compiler.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/capability.h>
#include <linux/kernel_stat.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/mman.h>
#include <linux/pagemap.h>
#include <linux/file.h>
#include <linux/uio.h>
#include <linux/hash.h>
#include <linux/writeback.h>
#include <linux/backing-dev.h>
#include <linux/pagevec.h>
#include <linux/blkdev.h>
#include <linux/security.h>
#include <linux/cpuset.h>
#include <linux/hardirq.h>              /* for BUG_ON(!in_atomic()) only */
#include <linux/hugetlb.h>
#include <linux/memcontrol.h>
#include <linux/cleancache.h>
#include <linux/rmap.h>
#include "internal.h"

#define CREATE_TRACE_POINTS
#include <trace/events/filemap.h>

/*
 * FIXME: remove all knowledge of the buffer layer from the core VM
 */
#include <linux/buffer_head.h>          /* for try_to_free_buffers */

#include <asm/mman.h>

/*
 * Shared mappings implemented 30.11.1994. It's not fully working yet,
 * though.
 *
 * Shared mappings now work. 15.8.1995  Bruno.
 *
 * finished 'unifying' the page and buffer cache and SMP-threaded the
 * page-cache, 21.05.1999, Ingo Molnar <mingo@redhat.com>
 *
 * SMP-threaded pagemap-LRU 1999, Andrea Arcangeli <andrea@suse.de>
 */

/*
 * Lock ordering:
 *
 *  ->i_mmap_rwsem              (truncate_pagecache)
 *    ->private_lock            (__free_pte->__set_page_dirty_buffers)
 *      ->swap_lock             (exclusive_swap_page, others)
 *        ->mapping->tree_lock
 *
 *  ->i_mutex
 *    ->i_mmap_rwsem            (truncate->unmap_mapping_range)
 *
 *  ->mmap_sem
 *    ->i_mmap_rwsem
 *      ->page_table_lock or pte_lock   (various, mainly in memory.c)
 *        ->mapping->tree_lock  (arch-dependent flush_dcache_mmap_lock)
 *
 *  ->mmap_sem
 *    ->lock_page               (access_process_vm)
 *
 *  ->i_mutex                   (generic_perform_write)
 *    ->mmap_sem                (fault_in_pages_readable->do_page_fault)
 *
 *  bdi->wb.list_lock
 *    sb_lock                   (fs/fs-writeback.c)
 *    ->mapping->tree_lock      (__sync_single_inode)
 *
 *  ->i_mmap_rwsem
 *    ->anon_vma.lock           (vma_adjust)
 *
 *  ->anon_vma.lock
 *    ->page_table_lock or pte_lock     (anon_vma_prepare and various)
 *
 *  ->page_table_lock or pte_lock
 *    ->swap_lock               (try_to_unmap_one)
 *    ->private_lock            (try_to_unmap_one)
 *    ->tree_lock               (try_to_unmap_one)
 *    ->zone.lru_lock           (follow_page->mark_page_accessed)
 *    ->zone.lru_lock           (check_pte_range->isolate_lru_page)
 *    ->private_lock            (page_remove_rmap->set_page_dirty)
 *    ->tree_lock               (page_remove_rmap->set_page_dirty)
 *    bdi.wb->list_lock         (page_remove_rmap->set_page_dirty)
 *    ->inode->i_lock           (page_remove_rmap->set_page_dirty)
 *    ->memcg->move_lock        (page_remove_rmap->mem_cgroup_begin_page_stat)
 *    bdi.wb->list_lock         (zap_pte_range->set_page_dirty)
 *    ->inode->i_lock           (zap_pte_range->set_page_dirty)
 *    ->private_lock            (zap_pte_range->__set_page_dirty_buffers)
 *
 * ->i_mmap_rwsem
 *   ->tasklist_lock            (memory_failure, collect_procs_ao)
 */

static int page_cache_tree_insert(struct address_space *mapping,
                                  struct page *page, void **shadowp)
{
        struct radix_tree_node *node;
        void **slot;
        int error;

        error = __radix_tree_create(&mapping->page_tree, page->index,
                                    &node, &slot);
        if (error)
                return error;
        if (*slot) {
                void *p;

                p = radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
                if (!radix_tree_exceptional_entry(p))
                        return -EEXIST;
                if (shadowp)
                        *shadowp = p;
                mapping->nrshadows--;
                if (node)
                        workingset_node_shadows_dec(node);
        }
        radix_tree_replace_slot(slot, page);
        mapping->nrpages++;
        if (node) {
                workingset_node_pages_inc(node);
                /*
                 * Don't track node that contains actual pages.
                 *
                 * Avoid acquiring the list_lru lock if already
                 * untracked.  The list_empty() test is safe as
                 * node->private_list is protected by
                 * mapping->tree_lock.
                 */
                if (!list_empty(&node->private_list))
                        list_lru_del(&workingset_shadow_nodes,
                                     &node->private_list);
        }
        return 0;
}

static void page_cache_tree_delete(struct address_space *mapping,
                                   struct page *page, void *shadow)
{
        struct radix_tree_node *node;
        unsigned long index;
        unsigned int offset;
        unsigned int tag;
        void **slot;

        VM_BUG_ON(!PageLocked(page));

        __radix_tree_lookup(&mapping->page_tree, page->index, &node, &slot);

        if (!node) {
                /*
                 * We need a node to properly account shadow
                 * entries. Don't plant any without. XXX
                 */
                shadow = NULL;
        }

        if (shadow) {
                mapping->nrshadows++;
                /*
                 * Make sure the nrshadows update is committed before
                 * the nrpages update so that final truncate racing
                 * with reclaim does not see both counters 0 at the
                 * same time and miss a shadow entry.
                 */
                smp_wmb();
        }
        mapping->nrpages--;

        if (!node) {
                /* Clear direct pointer tags in root node */
                mapping->page_tree.gfp_mask &= __GFP_BITS_MASK;
                radix_tree_replace_slot(slot, shadow);
                return;
        }

        /* Clear tree tags for the removed page */
        index = page->index;
        offset = index & RADIX_TREE_MAP_MASK;
        for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++) {
                if (test_bit(offset, node->tags[tag]))
                        radix_tree_tag_clear(&mapping->page_tree, index, tag);
        }

        /* Delete page, swap shadow entry */
        radix_tree_replace_slot(slot, shadow);
        workingset_node_pages_dec(node);
        if (shadow)
                workingset_node_shadows_inc(node);
        else
                if (__radix_tree_delete_node(&mapping->page_tree, node))
                        return;

        /*
         * Track node that only contains shadow entries.
         *
         * Avoid acquiring the list_lru lock if already tracked.  The
         * list_empty() test is safe as node->private_list is
         * protected by mapping->tree_lock.
         */
        if (!workingset_node_pages(node) &&
            list_empty(&node->private_list)) {
                node->private_data = mapping;
                list_lru_add(&workingset_shadow_nodes, &node->private_list);
        }
}

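/*
 * Editorial note (not in the original file): the "shadow" entries
 * handled above are exceptional radix tree entries left behind when a
 * page is reclaimed.  They record roughly when the eviction happened,
 * so that workingset_refault() in add_to_page_cache_lru() further down
 * can tell a thrashing page from a cold one and re-activate it.
 */
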
/*
 * Delete a page from the page cache and free it. Caller has to make
 * sure the page is locked and that nobody else uses it - or that usage
 * is safe.  The caller must hold the mapping's tree_lock and
 * mem_cgroup_begin_page_stat().
 */
void __delete_from_page_cache(struct page *page, void *shadow,
                              struct mem_cgroup *memcg)
{
        struct address_space *mapping = page->mapping;

        trace_mm_filemap_delete_from_page_cache(page);
        /*
         * if we're uptodate, flush out into the cleancache, otherwise
         * invalidate any existing cleancache entries.  We can't leave
         * stale data around in the cleancache once our page is gone
         */
        if (PageUptodate(page) && PageMappedToDisk(page)) {
                count_vm_event(PGPGOUTCLEAN);
                cleancache_put_page(page);
        } else {
                cleancache_invalidate_page(mapping, page);
        }

        page_cache_tree_delete(mapping, page, shadow);

        page->mapping = NULL;
        /* Leave page->index set: truncation lookup relies upon it */

        /* hugetlb pages do not participate in page cache accounting. */
        if (!PageHuge(page))
                __dec_zone_page_state(page, NR_FILE_PAGES);
        if (PageSwapBacked(page))
                __dec_zone_page_state(page, NR_SHMEM);
        BUG_ON(page_mapped(page));

        /*
         * At this point page must be either written or cleaned by truncate.
         * Dirty page here signals a bug and loss of unwritten data.
         *
         * This fixes dirty accounting after removing the page entirely but
         * leaves PageDirty set: it has no effect for truncated page and
         * anyway will be cleared before returning page into buddy allocator.
         */
        if (WARN_ON_ONCE(PageDirty(page)))
                account_page_cleaned(page, mapping, memcg,
                                     inode_to_wb(mapping->host));
}

/**
 * delete_from_page_cache - delete page from page cache
 * @page: the page which the kernel is trying to remove from page cache
 *
 * This must be called only on pages that have been verified to be in the page
 * cache and locked.  It will never put the page into the free list, the caller
 * has a reference on the page.
 */
void delete_from_page_cache(struct page *page)
{
        struct address_space *mapping = page->mapping;
        struct mem_cgroup *memcg;
        unsigned long flags;

        void (*freepage)(struct page *);

        BUG_ON(!PageLocked(page));

        freepage = mapping->a_ops->freepage;

        memcg = mem_cgroup_begin_page_stat(page);
        spin_lock_irqsave(&mapping->tree_lock, flags);
        __delete_from_page_cache(page, NULL, memcg);
        spin_unlock_irqrestore(&mapping->tree_lock, flags);
        mem_cgroup_end_page_stat(memcg);

        if (freepage)
                freepage(page);
        page_cache_release(page);
}
EXPORT_SYMBOL(delete_from_page_cache);

static int filemap_check_errors(struct address_space *mapping)
{
        int ret = 0;
        /* Check for outstanding write errors */
        if (test_bit(AS_ENOSPC, &mapping->flags) &&
            test_and_clear_bit(AS_ENOSPC, &mapping->flags))
                ret = -ENOSPC;
        if (test_bit(AS_EIO, &mapping->flags) &&
            test_and_clear_bit(AS_EIO, &mapping->flags))
                ret = -EIO;
        return ret;
}

/**
 * __filemap_fdatawrite_range - start writeback on mapping dirty pages in range
 * @mapping:    address space structure to write
 * @start:      offset in bytes where the range starts
 * @end:        offset in bytes where the range ends (inclusive)
 * @sync_mode:  enable synchronous operation
 *
 * Start writeback against all of a mapping's dirty pages that lie
 * within the byte offsets <start, end> inclusive.
 *
 * If sync_mode is WB_SYNC_ALL then this is a "data integrity" operation, as
 * opposed to a regular memory cleansing writeback.  The difference between
 * these two operations is that if a dirty page/buffer is encountered, it must
 * be waited upon, and not just skipped over.
 */
int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
                               loff_t end, int sync_mode)
{
        int ret;
        struct writeback_control wbc = {
                .sync_mode = sync_mode,
                .nr_to_write = LONG_MAX,
                .range_start = start,
                .range_end = end,
        };

        if (!mapping_cap_writeback_dirty(mapping))
                return 0;

        wbc_attach_fdatawrite_inode(&wbc, mapping->host);
        ret = do_writepages(mapping, &wbc);
        wbc_detach_inode(&wbc);
        return ret;
}

static inline int __filemap_fdatawrite(struct address_space *mapping,
                                       int sync_mode)
{
        return __filemap_fdatawrite_range(mapping, 0, LLONG_MAX, sync_mode);
}

int filemap_fdatawrite(struct address_space *mapping)
{
        return __filemap_fdatawrite(mapping, WB_SYNC_ALL);
}
EXPORT_SYMBOL(filemap_fdatawrite);

int filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
                             loff_t end)
{
        return __filemap_fdatawrite_range(mapping, start, end, WB_SYNC_ALL);
}
EXPORT_SYMBOL(filemap_fdatawrite_range);

/**
 * filemap_flush - mostly a non-blocking flush
 * @mapping:    target address_space
 *
 * This is a mostly non-blocking flush.  Not suitable for data-integrity
 * purposes - I/O may not be started against all dirty pages.
 */
int filemap_flush(struct address_space *mapping)
{
        return __filemap_fdatawrite(mapping, WB_SYNC_NONE);
}
EXPORT_SYMBOL(filemap_flush);

static int __filemap_fdatawait_range(struct address_space *mapping,
                                     loff_t start_byte, loff_t end_byte)
{
        pgoff_t index = start_byte >> PAGE_CACHE_SHIFT;
        pgoff_t end = end_byte >> PAGE_CACHE_SHIFT;
        struct pagevec pvec;
        int nr_pages;
        int ret = 0;

        if (end_byte < start_byte)
                goto out;

        pagevec_init(&pvec, 0);
        while (index <= end) {
                unsigned i;

                nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index,
                                end, PAGECACHE_TAG_WRITEBACK);
                if (!nr_pages)
                        break;

                for (i = 0; i < nr_pages; i++) {
                        struct page *page = pvec.pages[i];

                        wait_on_page_writeback(page);
                        if (TestClearPageError(page))
                                ret = -EIO;
                }
                pagevec_release(&pvec);
                cond_resched();
        }
out:
        return ret;
}

/**
 * filemap_fdatawait_range - wait for writeback to complete
 * @mapping:    address space structure to wait for
 * @start_byte: offset in bytes where the range starts
 * @end_byte:   offset in bytes where the range ends (inclusive)
 *
 * Walk the list of under-writeback pages of the given address space
 * in the given range and wait for all of them.  Check error status of
 * the address space and return it.
 *
 * Since the error status of the address space is cleared by this function,
 * callers are responsible for checking the return value and handling and/or
 * reporting the error.
 */
int filemap_fdatawait_range(struct address_space *mapping, loff_t start_byte,
                            loff_t end_byte)
{
        int ret, ret2;

        ret = __filemap_fdatawait_range(mapping, start_byte, end_byte);
        ret2 = filemap_check_errors(mapping);
        if (!ret)
                ret = ret2;

        return ret;
}
EXPORT_SYMBOL(filemap_fdatawait_range);

/**
 * filemap_fdatawait_keep_errors - wait for writeback without clearing errors
 * @mapping: address space structure to wait for
 *
 * Walk the list of under-writeback pages of the given address space
 * and wait for all of them.  Unlike filemap_fdatawait(), this function
 * does not clear error status of the address space.
 *
 * Use this function if callers don't handle errors themselves.  Expected
 * call sites are system-wide / filesystem-wide data flushers: e.g. sync(2),
 * fsfreeze(8)
 */
void filemap_fdatawait_keep_errors(struct address_space *mapping)
{
        loff_t i_size = i_size_read(mapping->host);

        if (i_size == 0)
                return;

        __filemap_fdatawait_range(mapping, 0, i_size - 1);
}

/**
 * filemap_fdatawait - wait for all under-writeback pages to complete
 * @mapping: address space structure to wait for
 *
 * Walk the list of under-writeback pages of the given address space
 * and wait for all of them.  Check error status of the address space
 * and return it.
 *
 * Since the error status of the address space is cleared by this function,
 * callers are responsible for checking the return value and handling and/or
 * reporting the error.
 */
int filemap_fdatawait(struct address_space *mapping)
{
        loff_t i_size = i_size_read(mapping->host);

        if (i_size == 0)
                return 0;

        return filemap_fdatawait_range(mapping, 0, i_size - 1);
}
EXPORT_SYMBOL(filemap_fdatawait);

int filemap_write_and_wait(struct address_space *mapping)
{
        int err = 0;

        if (mapping->nrpages) {
                err = filemap_fdatawrite(mapping);
                /*
                 * Even if the above returned error, the pages may be
                 * written partially (e.g. -ENOSPC), so we wait for it.
                 * But the -EIO is special case, it may indicate the worst
                 * thing (e.g. bug) happened, so we avoid waiting for it.
                 */
                if (err != -EIO) {
                        int err2 = filemap_fdatawait(mapping);
                        if (!err)
                                err = err2;
                }
        } else {
                err = filemap_check_errors(mapping);
        }
        return err;
}
EXPORT_SYMBOL(filemap_write_and_wait);

/**
 * filemap_write_and_wait_range - write out & wait on a file range
 * @mapping:    the address_space for the pages
 * @lstart:     offset in bytes where the range starts
 * @lend:       offset in bytes where the range ends (inclusive)
 *
 * Write out and wait upon file offsets lstart->lend, inclusive.
 *
 * Note that `lend' is inclusive (describes the last byte to be written) so
 * that this function can be used to write to the very end-of-file (end = -1).
 */
int filemap_write_and_wait_range(struct address_space *mapping,
                                 loff_t lstart, loff_t lend)
{
        int err = 0;

        if (mapping->nrpages) {
                err = __filemap_fdatawrite_range(mapping, lstart, lend,
                                                 WB_SYNC_ALL);
                /* See comment of filemap_write_and_wait() */
                if (err != -EIO) {
                        int err2 = filemap_fdatawait_range(mapping,
                                                           lstart, lend);
                        if (!err)
                                err = err2;
                }
        } else {
                err = filemap_check_errors(mapping);
        }
        return err;
}
EXPORT_SYMBOL(filemap_write_and_wait_range);

/**
|
|
* replace_page_cache_page - replace a pagecache page with a new one
|
|
* @old: page to be replaced
|
|
* @new: page to replace with
|
|
* @gfp_mask: allocation mode
|
|
*
|
|
* This function replaces a page in the pagecache with a new one. On
|
|
* success it acquires the pagecache reference for the new page and
|
|
* drops it for the old page. Both the old and new pages must be
|
|
* locked. This function does not add the new page to the LRU, the
|
|
* caller must do that.
|
|
*
|
|
* The remove + add is atomic. The only way this function can fail is
|
|
* memory allocation failure.
|
|
*/
|
|
int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
|
|
{
|
|
int error;
|
|
|
|
VM_BUG_ON_PAGE(!PageLocked(old), old);
|
|
VM_BUG_ON_PAGE(!PageLocked(new), new);
|
|
VM_BUG_ON_PAGE(new->mapping, new);
|
|
|
|
error = radix_tree_preload(gfp_mask & GFP_RECLAIM_MASK);
|
|
if (!error) {
|
|
struct address_space *mapping = old->mapping;
|
|
void (*freepage)(struct page *);
|
|
struct mem_cgroup *memcg;
|
|
unsigned long flags;
|
|
|
|
pgoff_t offset = old->index;
|
|
freepage = mapping->a_ops->freepage;
|
|
|
|
page_cache_get(new);
|
|
new->mapping = mapping;
|
|
new->index = offset;
|
|
|
|
memcg = mem_cgroup_begin_page_stat(old);
|
|
spin_lock_irqsave(&mapping->tree_lock, flags);
|
|
__delete_from_page_cache(old, NULL, memcg);
|
|
error = page_cache_tree_insert(mapping, new, NULL);
|
|
BUG_ON(error);
|
|
|
|
/*
|
|
* hugetlb pages do not participate in page cache accounting.
|
|
*/
|
|
if (!PageHuge(new))
|
|
__inc_zone_page_state(new, NR_FILE_PAGES);
|
|
if (PageSwapBacked(new))
|
|
__inc_zone_page_state(new, NR_SHMEM);
|
|
spin_unlock_irqrestore(&mapping->tree_lock, flags);
|
|
mem_cgroup_end_page_stat(memcg);
|
|
mem_cgroup_replace_page(old, new);
|
|
radix_tree_preload_end();
|
|
if (freepage)
|
|
freepage(old);
|
|
page_cache_release(old);
|
|
}
|
|
|
|
return error;
|
|
}
|
|
EXPORT_SYMBOL_GPL(replace_page_cache_page);

static int __add_to_page_cache_locked(struct page *page,
				      struct address_space *mapping,
				      pgoff_t offset, gfp_t gfp_mask,
				      void **shadowp)
{
	int huge = PageHuge(page);
	struct mem_cgroup *memcg;
	int error;

	VM_BUG_ON_PAGE(!PageLocked(page), page);
	VM_BUG_ON_PAGE(PageSwapBacked(page), page);

	if (!huge) {
		error = mem_cgroup_try_charge(page, current->mm,
					      gfp_mask, &memcg);
		if (error)
			return error;
	}

	error = radix_tree_maybe_preload(gfp_mask & GFP_RECLAIM_MASK);
	if (error) {
		if (!huge)
			mem_cgroup_cancel_charge(page, memcg);
		return error;
	}

	page_cache_get(page);
	page->mapping = mapping;
	page->index = offset;

	spin_lock_irq(&mapping->tree_lock);
	error = page_cache_tree_insert(mapping, page, shadowp);
	radix_tree_preload_end();
	if (unlikely(error))
		goto err_insert;

	/* hugetlb pages do not participate in page cache accounting. */
	if (!huge)
		__inc_zone_page_state(page, NR_FILE_PAGES);
	spin_unlock_irq(&mapping->tree_lock);
	if (!huge)
		mem_cgroup_commit_charge(page, memcg, false);
	trace_mm_filemap_add_to_page_cache(page);
	return 0;
err_insert:
	page->mapping = NULL;
	/* Leave page->index set: truncation relies upon it */
	spin_unlock_irq(&mapping->tree_lock);
	if (!huge)
		mem_cgroup_cancel_charge(page, memcg);
	page_cache_release(page);
	return error;
}

/**
 * add_to_page_cache_locked - add a locked page to the pagecache
 * @page: page to add
 * @mapping: the page's address_space
 * @offset: page index
 * @gfp_mask: page allocation mode
 *
 * This function is used to add a page to the pagecache; the page must be
 * locked. This function does not add the page to the LRU. The caller must
 * do that.
 */
int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
			     pgoff_t offset, gfp_t gfp_mask)
{
	return __add_to_page_cache_locked(page, mapping, offset,
					  gfp_mask, NULL);
}
EXPORT_SYMBOL(add_to_page_cache_locked);

int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
			  pgoff_t offset, gfp_t gfp_mask)
{
	void *shadow = NULL;
	int ret;

	__set_page_locked(page);
	ret = __add_to_page_cache_locked(page, mapping, offset,
					 gfp_mask, &shadow);
	if (unlikely(ret))
		__clear_page_locked(page);
	else {
		/*
		 * The page might have been evicted from cache only
		 * recently, in which case it should be activated like
		 * any other repeatedly accessed page.
		 */
		if (shadow && workingset_refault(shadow)) {
			SetPageActive(page);
			workingset_activation(page);
		} else
			ClearPageActive(page);
		lru_cache_add(page);
	}
	return ret;
}
EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
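
/*
 * A sketch of the usual calling pattern, mirroring the read paths later in
 * this file: allocate a page, try to insert it, and treat -EEXIST as losing
 * a benign race with another inserter. "example_add_new_page" is a
 * hypothetical helper, not an API from this file.
 */
static struct page *example_add_new_page(struct address_space *mapping,
					 pgoff_t index)
{
	gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
	struct page *page = __page_cache_alloc(gfp);

	if (!page)
		return NULL;
	if (add_to_page_cache_lru(page, mapping, index, gfp)) {
		/* -EEXIST (lost the race) or -ENOMEM; drop our reference. */
		page_cache_release(page);
		return NULL;
	}
	return page;	/* locked, referenced, and on the LRU */
}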

#ifdef CONFIG_NUMA
struct page *__page_cache_alloc(gfp_t gfp)
{
	int n;
	struct page *page;

	if (cpuset_do_page_mem_spread()) {
		unsigned int cpuset_mems_cookie;
		do {
			cpuset_mems_cookie = read_mems_allowed_begin();
			n = cpuset_mem_spread_node();
			page = __alloc_pages_node(n, gfp, 0);
		} while (!page && read_mems_allowed_retry(cpuset_mems_cookie));

		return page;
	}
	return alloc_pages(gfp, 0);
}
EXPORT_SYMBOL(__page_cache_alloc);
#endif

/*
 * In order to wait for pages to become available there must be
 * waitqueues associated with pages. Rather than one per page, we use
 * a hash table of waitqueues: all waiters for pages that hash to the
 * same bucket share one queue, and all of them are woken when any of
 * those pages becomes available. Each woken context must then check
 * that the page it cares about actually became available. This saves
 * space at the cost of "thundering herd" phenomena during rare hash
 * collisions.
 */
wait_queue_head_t *page_waitqueue(struct page *page)
{
	const struct zone *zone = page_zone(page);

	return &zone->wait_table[hash_ptr(page, zone->wait_table_bits)];
}
EXPORT_SYMBOL(page_waitqueue);

void wait_on_page_bit(struct page *page, int bit_nr)
{
	DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);

	if (test_bit(bit_nr, &page->flags))
		__wait_on_bit(page_waitqueue(page), &wait, bit_wait_io,
			      TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL(wait_on_page_bit);

int wait_on_page_bit_killable(struct page *page, int bit_nr)
{
	DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);

	if (!test_bit(bit_nr, &page->flags))
		return 0;

	return __wait_on_bit(page_waitqueue(page), &wait,
			     bit_wait_io, TASK_KILLABLE);
}

int wait_on_page_bit_killable_timeout(struct page *page,
				      int bit_nr, unsigned long timeout)
{
	DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);

	wait.key.timeout = jiffies + timeout;
	if (!test_bit(bit_nr, &page->flags))
		return 0;
	return __wait_on_bit(page_waitqueue(page), &wait,
			     bit_wait_io_timeout, TASK_KILLABLE);
}
EXPORT_SYMBOL_GPL(wait_on_page_bit_killable_timeout);
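
/*
 * A sketch of a bounded wait, assuming a caller that can give up on
 * writeback after a while; "example_wait_writeback" is hypothetical.
 */
static int example_wait_writeback(struct page *page)
{
	/* Wait up to one second for PG_writeback to clear; 0 on success. */
	return wait_on_page_bit_killable_timeout(page, PG_writeback, HZ);
}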

/**
 * add_page_wait_queue - Add an arbitrary waiter to a page's wait queue
 * @page: Page defining the wait queue of interest
 * @waiter: Waiter to add to the queue
 *
 * Add an arbitrary @waiter to the wait queue for the nominated @page.
 */
void add_page_wait_queue(struct page *page, wait_queue_t *waiter)
{
	wait_queue_head_t *q = page_waitqueue(page);
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	__add_wait_queue(q, waiter);
	spin_unlock_irqrestore(&q->lock, flags);
}
EXPORT_SYMBOL_GPL(add_page_wait_queue);

/**
 * unlock_page - unlock a locked page
 * @page: the page
 *
 * Unlocks the page and wakes up sleepers in ___wait_on_page_locked().
 * Also wakes sleepers in wait_on_page_writeback() because the wakeup
 * mechanism between PageLocked pages and PageWriteback pages is shared.
 * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
 *
 * The mb is necessary to enforce ordering between the clear_bit and the read
 * of the waitqueue (to avoid SMP races with a parallel wait_on_page_locked()).
 */
void unlock_page(struct page *page)
{
	VM_BUG_ON_PAGE(!PageLocked(page), page);
	clear_bit_unlock(PG_locked, &page->flags);
	smp_mb__after_atomic();
	wake_up_page(page, PG_locked);
}
EXPORT_SYMBOL(unlock_page);
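
/*
 * The canonical pairing with lock_page()/__lock_page() below, assuming the
 * caller already holds a reference on the page; sketch only, and
 * "example_with_page_locked" is hypothetical.
 */
static void example_with_page_locked(struct page *page)
{
	lock_page(page);	/* sleeps in __lock_page() if contended */
	/* ... page->mapping is stable here ... */
	unlock_page(page);	/* clears PG_locked and wakes any waiters */
}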

/**
 * end_page_writeback - end writeback against a page
 * @page: the page
 */
void end_page_writeback(struct page *page)
{
	/*
	 * TestClearPageReclaim could be used here but it is an atomic
	 * operation and overkill in this particular case. Failing to
	 * shuffle a page marked for immediate reclaim is too mild to
	 * justify taking an atomic operation penalty at the end of
	 * every page writeback.
	 */
	if (PageReclaim(page)) {
		ClearPageReclaim(page);
		rotate_reclaimable_page(page);
	}

	if (!test_clear_page_writeback(page))
		BUG();

	smp_mb__after_atomic();
	wake_up_page(page, PG_writeback);
}
EXPORT_SYMBOL(end_page_writeback);

/*
 * After completing I/O on a page, call this routine to update the page
 * flags appropriately.
 */
void page_endio(struct page *page, int rw, int err)
{
	if (rw == READ) {
		if (!err) {
			SetPageUptodate(page);
		} else {
			ClearPageUptodate(page);
			SetPageError(page);
		}
		unlock_page(page);
	} else { /* rw == WRITE */
		if (err) {
			struct address_space *mapping;

			SetPageError(page);
			mapping = page_mapping(page);
			if (mapping)
				mapping_set_error(mapping, err);
		}
		end_page_writeback(page);
	}
}
EXPORT_SYMBOL_GPL(page_endio);

/**
 * __lock_page - get a lock on the page, assuming we need to sleep to get it
 * @page: the page to lock
 */
void __lock_page(struct page *page)
{
	DEFINE_WAIT_BIT(wait, &page->flags, PG_locked);

	__wait_on_bit_lock(page_waitqueue(page), &wait, bit_wait_io,
			   TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL(__lock_page);

int __lock_page_killable(struct page *page)
{
	DEFINE_WAIT_BIT(wait, &page->flags, PG_locked);

	return __wait_on_bit_lock(page_waitqueue(page), &wait,
				  bit_wait_io, TASK_KILLABLE);
}
EXPORT_SYMBOL_GPL(__lock_page_killable);

/*
 * Return values:
 * 1 - page is locked; mmap_sem is still held.
 * 0 - page is not locked.
 *     mmap_sem has been released (up_read()), unless flags had both
 *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
 *     which case mmap_sem is still held.
 *
 * If neither ALLOW_RETRY nor KILLABLE are set, will always return 1
 * with the page locked and the mmap_sem unperturbed.
 */
int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
			 unsigned int flags)
{
	if (flags & FAULT_FLAG_ALLOW_RETRY) {
		/*
		 * CAUTION! In this case, mmap_sem is not released
		 * even though we return 0.
		 */
		if (flags & FAULT_FLAG_RETRY_NOWAIT)
			return 0;

		up_read(&mm->mmap_sem);
		if (flags & FAULT_FLAG_KILLABLE)
			wait_on_page_locked_killable(page);
		else
			wait_on_page_locked(page);
		return 0;
	} else {
		if (flags & FAULT_FLAG_KILLABLE) {
			int ret;

			ret = __lock_page_killable(page);
			if (ret) {
				up_read(&mm->mmap_sem);
				return 0;
			}
		} else
			__lock_page(page);
		return 1;
	}
}

/**
 * page_cache_next_hole - find the next hole (not-present entry)
 * @mapping: mapping
 * @index: index
 * @max_scan: maximum range to search
 *
 * Search the set [index, min(index+max_scan-1, MAX_INDEX)] for the
 * lowest indexed hole.
 *
 * Returns: the index of the hole if found, otherwise returns an index
 * outside of the set specified (in which case 'return - index >=
 * max_scan' will be true). In rare cases of index wrap-around, 0 will
 * be returned.
 *
 * page_cache_next_hole may be called under rcu_read_lock. However,
 * like radix_tree_gang_lookup, this will not atomically search a
 * snapshot of the tree at a single point in time. For example, if a
 * hole is created at index 5, then subsequently a hole is created at
 * index 10, page_cache_next_hole covering both indexes may return 10
 * if called under rcu_read_lock.
 */
pgoff_t page_cache_next_hole(struct address_space *mapping,
			     pgoff_t index, unsigned long max_scan)
{
	unsigned long i;

	for (i = 0; i < max_scan; i++) {
		struct page *page;

		page = radix_tree_lookup(&mapping->page_tree, index);
		if (!page || radix_tree_exceptional_entry(page))
			break;
		index++;
		if (index == 0)
			break;
	}

	return index;
}
EXPORT_SYMBOL(page_cache_next_hole);

/**
 * page_cache_prev_hole - find the prev hole (not-present entry)
 * @mapping: mapping
 * @index: index
 * @max_scan: maximum range to search
 *
 * Search backwards in the range [max(index-max_scan+1, 0), index] for
 * the first hole.
 *
 * Returns: the index of the hole if found, otherwise returns an index
 * outside of the set specified (in which case 'index - return >=
 * max_scan' will be true). In rare cases of wrap-around, ULONG_MAX
 * will be returned.
 *
 * page_cache_prev_hole may be called under rcu_read_lock. However,
 * like radix_tree_gang_lookup, this will not atomically search a
 * snapshot of the tree at a single point in time. For example, if a
 * hole is created at index 10, then subsequently a hole is created at
 * index 5, page_cache_prev_hole covering both indexes may return 5 if
 * called under rcu_read_lock.
 */
pgoff_t page_cache_prev_hole(struct address_space *mapping,
			     pgoff_t index, unsigned long max_scan)
{
	unsigned long i;

	for (i = 0; i < max_scan; i++) {
		struct page *page;

		page = radix_tree_lookup(&mapping->page_tree, index);
		if (!page || radix_tree_exceptional_entry(page))
			break;
		index--;
		if (index == ULONG_MAX)
			break;
	}

	return index;
}
EXPORT_SYMBOL(page_cache_prev_hole);

/**
 * find_get_entry - find and get a page cache entry
 * @mapping: the address_space to search
 * @offset: the page cache index
 *
 * Looks up the page cache slot at @mapping & @offset. If there is a
 * page cache page, it is returned with an increased refcount.
 *
 * If the slot holds a shadow entry of a previously evicted page, or a
 * swap entry from shmem/tmpfs, it is returned.
 *
 * Otherwise, %NULL is returned.
 */
struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
{
	void **pagep;
	struct page *page;

	rcu_read_lock();
repeat:
	page = NULL;
	pagep = radix_tree_lookup_slot(&mapping->page_tree, offset);
	if (pagep) {
		page = radix_tree_deref_slot(pagep);
		if (unlikely(!page))
			goto out;
		if (radix_tree_exception(page)) {
			if (radix_tree_deref_retry(page))
				goto repeat;
			/*
			 * A shadow entry of a recently evicted page,
			 * or a swap entry from shmem/tmpfs. Return
			 * it without attempting to raise page count.
			 */
			goto out;
		}
		if (!page_cache_get_speculative(page))
			goto repeat;

		/*
		 * Has the page moved?
		 * This is part of the lockless pagecache protocol. See
		 * include/linux/pagemap.h for details.
		 */
		if (unlikely(page != *pagep)) {
			page_cache_release(page);
			goto repeat;
		}
	}
out:
	rcu_read_unlock();

	return page;
}
EXPORT_SYMBOL(find_get_entry);

/**
 * find_lock_entry - locate, pin and lock a page cache entry
 * @mapping: the address_space to search
 * @offset: the page cache index
 *
 * Looks up the page cache slot at @mapping & @offset. If there is a
 * page cache page, it is returned locked and with an increased
 * refcount.
 *
 * If the slot holds a shadow entry of a previously evicted page, or a
 * swap entry from shmem/tmpfs, it is returned.
 *
 * Otherwise, %NULL is returned.
 *
 * find_lock_entry() may sleep.
 */
struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
{
	struct page *page;

repeat:
	page = find_get_entry(mapping, offset);
	if (page && !radix_tree_exception(page)) {
		lock_page(page);
		/* Has the page been truncated? */
		if (unlikely(page->mapping != mapping)) {
			unlock_page(page);
			page_cache_release(page);
			goto repeat;
		}
		VM_BUG_ON_PAGE(page->index != offset, page);
	}
	return page;
}
EXPORT_SYMBOL(find_lock_entry);

/**
 * pagecache_get_page - find and get a page reference
 * @mapping: the address_space to search
 * @offset: the page index
 * @fgp_flags: FGP flags
 * @gfp_mask: gfp mask to use for the page cache data page allocation
 *
 * Looks up the page cache slot at @mapping & @offset.
 *
 * FGP flags modify how the page is returned.
 *
 * FGP_ACCESSED: the page will be marked accessed
 * FGP_LOCK: the page is returned locked
 * FGP_CREAT: if the page is not present, a new page is allocated using
 *	@gfp_mask and added to the page cache and the VM's LRU
 *	list. The page is returned locked and with an increased
 *	refcount. Otherwise, %NULL is returned.
 *
 * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
 * if the GFP flags specified for FGP_CREAT are atomic.
 *
 * If there is a page cache page, it is returned with an increased refcount.
 */
struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
				int fgp_flags, gfp_t gfp_mask)
{
	struct page *page;

repeat:
	page = find_get_entry(mapping, offset);
	if (radix_tree_exceptional_entry(page))
		page = NULL;
	if (!page)
		goto no_page;

	if (fgp_flags & FGP_LOCK) {
		if (fgp_flags & FGP_NOWAIT) {
			if (!trylock_page(page)) {
				page_cache_release(page);
				return NULL;
			}
		} else {
			lock_page(page);
		}

		/* Has the page been truncated? */
		if (unlikely(page->mapping != mapping)) {
			unlock_page(page);
			page_cache_release(page);
			goto repeat;
		}
		VM_BUG_ON_PAGE(page->index != offset, page);
	}

	if (page && (fgp_flags & FGP_ACCESSED))
		mark_page_accessed(page);

no_page:
	if (!page && (fgp_flags & FGP_CREAT)) {
		int err;
		if ((fgp_flags & FGP_WRITE) && mapping_cap_account_dirty(mapping))
			gfp_mask |= __GFP_WRITE;
		if (fgp_flags & FGP_NOFS)
			gfp_mask &= ~__GFP_FS;

		page = __page_cache_alloc(gfp_mask);
		if (!page)
			return NULL;

		if (WARN_ON_ONCE(!(fgp_flags & FGP_LOCK)))
			fgp_flags |= FGP_LOCK;

		/* Init accessed so we avoid atomic mark_page_accessed later */
		if (fgp_flags & FGP_ACCESSED)
			__SetPageReferenced(page);

		err = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
		if (unlikely(err)) {
			page_cache_release(page);
			page = NULL;
			if (err == -EEXIST)
				goto repeat;
		}
	}

	return page;
}
EXPORT_SYMBOL(pagecache_get_page);
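
/*
 * A sketch of a find-or-create lookup built on pagecache_get_page(); this is
 * essentially what grab_cache_page_write_begin() later in this file does,
 * minus the stable-page wait. "example_find_or_create" is hypothetical.
 */
static struct page *example_find_or_create(struct address_space *mapping,
					   pgoff_t index)
{
	/* Returns a locked, referenced page, or NULL on allocation failure. */
	return pagecache_get_page(mapping, index,
				  FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
				  mapping_gfp_mask(mapping));
}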

/**
 * find_get_entries - gang pagecache lookup
 * @mapping: The address_space to search
 * @start: The starting page cache index
 * @nr_entries: The maximum number of entries
 * @entries: Where the resulting entries are placed
 * @indices: The cache indices corresponding to the entries in @entries
 *
 * find_get_entries() will search for and return a group of up to
 * @nr_entries entries in the mapping. The entries are placed at
 * @entries. find_get_entries() takes a reference against any actual
 * pages it returns.
 *
 * The search returns a group of mapping-contiguous page cache entries
 * with ascending indexes. There may be holes in the indices due to
 * not-present pages.
 *
 * Any shadow entries of evicted pages, or swap entries from
 * shmem/tmpfs, are included in the returned array.
 *
 * find_get_entries() returns the number of pages and shadow entries
 * which were found.
 */
unsigned find_get_entries(struct address_space *mapping,
			  pgoff_t start, unsigned int nr_entries,
			  struct page **entries, pgoff_t *indices)
{
	void **slot;
	unsigned int ret = 0;
	struct radix_tree_iter iter;

	if (!nr_entries)
		return 0;

	rcu_read_lock();
restart:
	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
		struct page *page;
repeat:
		page = radix_tree_deref_slot(slot);
		if (unlikely(!page))
			continue;
		if (radix_tree_exception(page)) {
			if (radix_tree_deref_retry(page))
				goto restart;
			/*
			 * A shadow entry of a recently evicted page,
			 * or a swap entry from shmem/tmpfs. Return
			 * it without attempting to raise page count.
			 */
			goto export;
		}
		if (!page_cache_get_speculative(page))
			goto repeat;

		/* Has the page moved? */
		if (unlikely(page != *slot)) {
			page_cache_release(page);
			goto repeat;
		}
export:
		indices[ret] = iter.index;
		entries[ret] = page;
		if (++ret == nr_entries)
			break;
	}
	rcu_read_unlock();
	return ret;
}

/**
 * find_get_pages - gang pagecache lookup
 * @mapping: The address_space to search
 * @start: The starting page index
 * @nr_pages: The maximum number of pages
 * @pages: Where the resulting pages are placed
 *
 * find_get_pages() will search for and return a group of up to
 * @nr_pages pages in the mapping. The pages are placed at @pages.
 * find_get_pages() takes a reference against the returned pages.
 *
 * The search returns a group of mapping-contiguous pages with ascending
 * indexes. There may be holes in the indices due to not-present pages.
 *
 * find_get_pages() returns the number of pages which were found.
 */
unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
			unsigned int nr_pages, struct page **pages)
{
	struct radix_tree_iter iter;
	void **slot;
	unsigned ret = 0;

	if (unlikely(!nr_pages))
		return 0;

	rcu_read_lock();
restart:
	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
		struct page *page;
repeat:
		page = radix_tree_deref_slot(slot);
		if (unlikely(!page))
			continue;

		if (radix_tree_exception(page)) {
			if (radix_tree_deref_retry(page)) {
				/*
				 * Transient condition which can only trigger
				 * when entry at index 0 moves out of or back
				 * to root: none yet gotten, safe to restart.
				 */
				WARN_ON(iter.index);
				goto restart;
			}
			/*
			 * A shadow entry of a recently evicted page,
			 * or a swap entry from shmem/tmpfs. Skip
			 * over it.
			 */
			continue;
		}

		if (!page_cache_get_speculative(page))
			goto repeat;

		/* Has the page moved? */
		if (unlikely(page != *slot)) {
			page_cache_release(page);
			goto repeat;
		}

		pages[ret] = page;
		if (++ret == nr_pages)
			break;
	}

	rcu_read_unlock();
	return ret;
}

/**
 * find_get_pages_contig - gang contiguous pagecache lookup
 * @mapping: The address_space to search
 * @index: The starting page index
 * @nr_pages: The maximum number of pages
 * @pages: Where the resulting pages are placed
 *
 * find_get_pages_contig() works exactly like find_get_pages(), except
 * that the pages returned are guaranteed to be contiguous.
 *
 * find_get_pages_contig() returns the number of pages which were found.
 */
unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
			       unsigned int nr_pages, struct page **pages)
{
	struct radix_tree_iter iter;
	void **slot;
	unsigned int ret = 0;

	if (unlikely(!nr_pages))
		return 0;

	rcu_read_lock();
restart:
	radix_tree_for_each_contig(slot, &mapping->page_tree, &iter, index) {
		struct page *page;
repeat:
		page = radix_tree_deref_slot(slot);
		/* A hole: there is no reason to continue */
		if (unlikely(!page))
			break;

		if (radix_tree_exception(page)) {
			if (radix_tree_deref_retry(page)) {
				/*
				 * Transient condition which can only trigger
				 * when entry at index 0 moves out of or back
				 * to root: none yet gotten, safe to restart.
				 */
				goto restart;
			}
			/*
			 * A shadow entry of a recently evicted page,
			 * or a swap entry from shmem/tmpfs. Stop
			 * looking for contiguous pages.
			 */
			break;
		}

		if (!page_cache_get_speculative(page))
			goto repeat;

		/* Has the page moved? */
		if (unlikely(page != *slot)) {
			page_cache_release(page);
			goto repeat;
		}

		/*
		 * Must check mapping and index after taking the ref;
		 * otherwise we can get both false positives and false
		 * negatives, which is just confusing to the caller.
		 */
		if (page->mapping == NULL || page->index != iter.index) {
			page_cache_release(page);
			break;
		}

		pages[ret] = page;
		if (++ret == nr_pages)
			break;
	}
	rcu_read_unlock();
	return ret;
}
EXPORT_SYMBOL(find_get_pages_contig);
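
/*
 * A sketch of consuming a batch from find_get_pages_contig(); the references
 * it takes must be dropped by the caller. "example_scan_contig" is a
 * hypothetical helper.
 */
static void example_scan_contig(struct address_space *mapping, pgoff_t start)
{
	struct page *pages[16];
	unsigned int i, nr;

	nr = find_get_pages_contig(mapping, start, 16, pages);
	for (i = 0; i < nr; i++) {
		/* ... pages[0..nr-1] are index-contiguous here ... */
		page_cache_release(pages[i]);
	}
}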

/**
 * find_get_pages_range_tag - find and return pages in given range matching @tag
 * @mapping: the address_space to search
 * @index: the starting page index
 * @end: the final page index (inclusive)
 * @tag: the tag index
 * @nr_pages: the maximum number of pages
 * @pages: where the resulting pages are placed
 *
 * Like find_get_pages, except we only return pages which are tagged with
 * @tag. We update @index to index the next page for the traversal.
 */
unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
				  pgoff_t end, int tag, unsigned int nr_pages,
				  struct page **pages)
{
	struct radix_tree_iter iter;
	void **slot;
	unsigned ret = 0;

	if (unlikely(!nr_pages))
		return 0;

	rcu_read_lock();
restart:
	radix_tree_for_each_tagged(slot, &mapping->page_tree,
				   &iter, *index, tag) {
		struct page *page;

		if (iter.index > end)
			break;
repeat:
		page = radix_tree_deref_slot(slot);
		if (unlikely(!page))
			continue;

		if (radix_tree_exception(page)) {
			if (radix_tree_deref_retry(page)) {
				/*
				 * Transient condition which can only trigger
				 * when entry at index 0 moves out of or back
				 * to root: none yet gotten, safe to restart.
				 */
				goto restart;
			}
			/*
			 * A shadow entry of a recently evicted page.
			 *
			 * Those entries should never be tagged, but
			 * this tree walk is lockless and the tags are
			 * looked up in bulk, one radix tree node at a
			 * time, so there is a sizable window for page
			 * reclaim to evict a page we saw tagged.
			 *
			 * Skip over it.
			 */
			continue;
		}

		if (!page_cache_get_speculative(page))
			goto repeat;

		/* Has the page moved? */
		if (unlikely(page != *slot)) {
			page_cache_release(page);
			goto repeat;
		}

		pages[ret] = page;
		if (++ret == nr_pages) {
			*index = pages[ret - 1]->index + 1;
			goto out;
		}
	}

	/*
	 * We come here when we reached @end. We take care to not overflow
	 * the index @index as it confuses some of the callers. This breaks
	 * the iteration when there is a page at index -1 but that is already
	 * broken anyway.
	 */
	if (end == (pgoff_t)-1)
		*index = (pgoff_t)-1;
	else
		*index = end + 1;
out:
	rcu_read_unlock();

	return ret;
}
EXPORT_SYMBOL(find_get_pages_range_tag);
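
/*
 * A sketch of the intended calling convention for the function above:
 * @index is advanced by the function itself, so a writeback-style loop
 * simply repeats until a batch comes back empty. "example_walk_dirty_pages"
 * is a hypothetical helper.
 */
static void example_walk_dirty_pages(struct address_space *mapping,
				     pgoff_t index, pgoff_t end)
{
	struct page *pages[16];
	unsigned int i, nr;

	while ((nr = find_get_pages_range_tag(mapping, &index, end,
					PAGECACHE_TAG_DIRTY, 16, pages))) {
		for (i = 0; i < nr; i++) {
			/* ... write back pages[i] ... */
			page_cache_release(pages[i]);
		}
		cond_resched();
	}
}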

/*
 * CD/DVDs are error prone. When a medium error occurs, the driver may fail
 * a _large_ part of the i/o request. Imagine the worst scenario:
 *
 *      ---R__________________________________________B__________
 *         ^ reading here                              ^ bad block (assume 4k)
 *
 * read(R) => miss => readahead(R...B) => media error => frustrating retries
 * => failing the whole request => read(R) => read(R+1) =>
 * readahead(R+1...B+1) => bang => read(R+2) => read(R+3) =>
 * readahead(R+3...B+2) => bang => read(R+3) => read(R+4) =>
 * readahead(R+4...B+3) => bang => read(R+4) => read(R+5) => ......
 *
 * It is going insane. Fix it by quickly scaling down the readahead size.
 */
static void shrink_readahead_size_eio(struct file *filp,
				      struct file_ra_state *ra)
{
	ra->ra_pages /= 4;
}

/**
 * do_generic_file_read - generic file read routine
 * @filp: the file to read
 * @ppos: current file position
 * @iter: data destination
 * @written: already copied
 *
 * This is a generic file read routine, and uses the
 * mapping->a_ops->readpage() function for the actual low-level stuff.
 *
 * This is really ugly. But the goto's actually try to clarify some
 * of the logic when it comes to error handling etc.
 */
static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
		struct iov_iter *iter, ssize_t written)
{
	struct address_space *mapping = filp->f_mapping;
	struct inode *inode = mapping->host;
	struct file_ra_state *ra = &filp->f_ra;
	pgoff_t index;
	pgoff_t last_index;
	pgoff_t prev_index;
	unsigned long offset;      /* offset into pagecache page */
	unsigned int prev_offset;
	int error = 0;

	index = *ppos >> PAGE_CACHE_SHIFT;
	prev_index = ra->prev_pos >> PAGE_CACHE_SHIFT;
	prev_offset = ra->prev_pos & (PAGE_CACHE_SIZE-1);
	last_index = (*ppos + iter->count + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT;
	offset = *ppos & ~PAGE_CACHE_MASK;

	for (;;) {
		struct page *page;
		pgoff_t end_index;
		loff_t isize;
		unsigned long nr, ret;

		cond_resched();
find_page:
		if (fatal_signal_pending(current)) {
			error = -EINTR;
			goto out;
		}

		page = find_get_page(mapping, index);
		if (!page) {
			page_cache_sync_readahead(mapping,
					ra, filp,
					index, last_index - index);
			page = find_get_page(mapping, index);
			if (unlikely(page == NULL))
				goto no_cached_page;
		}
		if (PageReadahead(page)) {
			page_cache_async_readahead(mapping,
					ra, filp, page,
					index, last_index - index);
		}
		if (!PageUptodate(page)) {
			/*
			 * See comment in do_read_cache_page on why
			 * wait_on_page_locked is used to avoid unnecessary
			 * serialisations and why it's safe.
			 */
			wait_on_page_locked_killable(page);
			if (PageUptodate(page))
				goto page_ok;

			if (inode->i_blkbits == PAGE_CACHE_SHIFT ||
					!mapping->a_ops->is_partially_uptodate)
				goto page_not_up_to_date;
			if (!trylock_page(page))
				goto page_not_up_to_date;
			/* Did it get truncated before we got the lock? */
			if (!page->mapping)
				goto page_not_up_to_date_locked;
			if (!mapping->a_ops->is_partially_uptodate(page,
							offset, iter->count))
				goto page_not_up_to_date_locked;
			unlock_page(page);
		}
page_ok:
		/*
		 * i_size must be checked after we know the page is Uptodate.
		 *
		 * Checking i_size after the check allows us to calculate
		 * the correct value for "nr", which means the zero-filled
		 * part of the page is not copied back to userspace (unless
		 * another truncate extends the file - this is desired though).
		 */

		isize = i_size_read(inode);
		end_index = (isize - 1) >> PAGE_CACHE_SHIFT;
		if (unlikely(!isize || index > end_index)) {
			page_cache_release(page);
			goto out;
		}

		/* nr is the maximum number of bytes to copy from this page */
		nr = PAGE_CACHE_SIZE;
		if (index == end_index) {
			nr = ((isize - 1) & ~PAGE_CACHE_MASK) + 1;
			if (nr <= offset) {
				page_cache_release(page);
				goto out;
			}
		}
		nr = nr - offset;

		/* If users can be writing to this page using arbitrary
		 * virtual addresses, take care about potential aliasing
		 * before reading the page on the kernel side.
		 */
		if (mapping_writably_mapped(mapping))
			flush_dcache_page(page);

		/*
		 * When a sequential read accesses a page several times,
		 * only mark it as accessed the first time.
		 */
		if (prev_index != index || offset != prev_offset)
			mark_page_accessed(page);
		prev_index = index;

		/*
		 * Ok, we have the page, and it's up-to-date, so
		 * now we can copy it to user space...
		 */

		ret = copy_page_to_iter(page, offset, nr, iter);
		offset += ret;
		index += offset >> PAGE_CACHE_SHIFT;
		offset &= ~PAGE_CACHE_MASK;
		prev_offset = offset;

		page_cache_release(page);
		written += ret;
		if (!iov_iter_count(iter))
			goto out;
		if (ret < nr) {
			error = -EFAULT;
			goto out;
		}
		continue;

page_not_up_to_date:
		/* Get exclusive access to the page ... */
		error = lock_page_killable(page);
		if (unlikely(error))
			goto readpage_error;

page_not_up_to_date_locked:
		/* Did it get truncated before we got the lock? */
		if (!page->mapping) {
			unlock_page(page);
			page_cache_release(page);
			continue;
		}

		/* Did somebody else fill it already? */
		if (PageUptodate(page)) {
			unlock_page(page);
			goto page_ok;
		}

readpage:
		/*
		 * A previous I/O error may have been due to temporary
		 * failures, eg. multipath errors.
		 * PG_error will be set again if readpage fails.
		 */
		ClearPageError(page);
		/* Start the actual read. The read will unlock the page. */
		error = mapping->a_ops->readpage(filp, page);

		if (unlikely(error)) {
			if (error == AOP_TRUNCATED_PAGE) {
				page_cache_release(page);
				error = 0;
				goto find_page;
			}
			goto readpage_error;
		}

		if (!PageUptodate(page)) {
			error = lock_page_killable(page);
			if (unlikely(error))
				goto readpage_error;
			if (!PageUptodate(page)) {
				if (page->mapping == NULL) {
					/*
					 * invalidate_mapping_pages got it
					 */
					unlock_page(page);
					page_cache_release(page);
					goto find_page;
				}
				unlock_page(page);
				shrink_readahead_size_eio(filp, ra);
				error = -EIO;
				goto readpage_error;
			}
			unlock_page(page);
		}

		goto page_ok;

readpage_error:
		/* UHHUH! A synchronous read error occurred. Report it */
		page_cache_release(page);
		goto out;

no_cached_page:
		/*
		 * Ok, it wasn't cached, so we need to create a new
		 * page.
		 */
		page = page_cache_alloc_cold(mapping);
		if (!page) {
			error = -ENOMEM;
			goto out;
		}
		error = add_to_page_cache_lru(page, mapping, index,
				mapping_gfp_constraint(mapping, GFP_KERNEL));
		if (error) {
			page_cache_release(page);
			if (error == -EEXIST) {
				error = 0;
				goto find_page;
			}
			goto out;
		}
		goto readpage;
	}

out:
	ra->prev_pos = prev_index;
	ra->prev_pos <<= PAGE_CACHE_SHIFT;
	ra->prev_pos |= prev_offset;

	*ppos = ((loff_t)index << PAGE_CACHE_SHIFT) + offset;
	file_accessed(filp);
	return written ? written : error;
}

/**
 * generic_file_read_iter - generic filesystem read routine
 * @iocb: kernel I/O control block
 * @iter: destination for the data read
 *
 * This is the "read_iter()" routine for all filesystems
 * that can use the page cache directly.
 */
ssize_t
generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
	struct file *file = iocb->ki_filp;
	ssize_t retval = 0;
	loff_t *ppos = &iocb->ki_pos;
	loff_t pos = *ppos;

	if (iocb->ki_flags & IOCB_DIRECT) {
		struct address_space *mapping = file->f_mapping;
		struct inode *inode = mapping->host;
		size_t count = iov_iter_count(iter);
		loff_t size;

		if (!count)
			goto out; /* skip atime */
		size = i_size_read(inode);
		retval = filemap_write_and_wait_range(mapping, pos,
					pos + count - 1);
		if (!retval) {
			struct iov_iter data = *iter;
			retval = mapping->a_ops->direct_IO(iocb, &data, pos);
		}

		if (retval > 0) {
			*ppos = pos + retval;
			iov_iter_advance(iter, retval);
		}

		/*
		 * Btrfs can have a short DIO read if we encounter
		 * compressed extents, so if there was an error, or if
		 * we've already read everything we wanted to, or if
		 * there was a short read because we hit EOF, go ahead
		 * and return. Otherwise fallthrough to buffered io for
		 * the rest of the read. Buffered reads will not work for
		 * DAX files, so don't bother trying.
		 */
		if (retval < 0 || !iov_iter_count(iter) || *ppos >= size ||
		    IS_DAX(inode)) {
			file_accessed(file);
			goto out;
		}
	}

	retval = do_generic_file_read(file, ppos, iter, retval);
out:
	return retval;
}
EXPORT_SYMBOL(generic_file_read_iter);

#ifdef CONFIG_MMU
/**
 * page_cache_read - adds requested page to the page cache if not already there
 * @file: file to read
 * @offset: page index
 * @gfp_mask: memory allocation flags
 *
 * This adds the requested page to the page cache if it isn't already there,
 * and schedules an I/O to read in its contents from disk.
 */
static int page_cache_read(struct file *file, pgoff_t offset, gfp_t gfp_mask)
{
	struct address_space *mapping = file->f_mapping;
	struct page *page;
	int ret;

	do {
		page = __page_cache_alloc(gfp_mask|__GFP_COLD);
		if (!page)
			return -ENOMEM;

		ret = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
		if (ret == 0)
			ret = mapping->a_ops->readpage(file, page);
		else if (ret == -EEXIST)
			ret = 0; /* losing race to add is OK */

		page_cache_release(page);

	} while (ret == AOP_TRUNCATED_PAGE);

	return ret;
}

#define MMAP_LOTSAMISS  (100)

/*
 * Synchronous readahead happens when we don't even find
 * a page in the page cache at all.
 */
static void do_sync_mmap_readahead(struct vm_area_struct *vma,
				   struct file_ra_state *ra,
				   struct file *file,
				   pgoff_t offset)
{
	struct address_space *mapping = file->f_mapping;

	/* If we don't want any read-ahead, don't bother */
	if (vma->vm_flags & VM_RAND_READ)
		return;
	if (!ra->ra_pages)
		return;

	if (vma->vm_flags & VM_SEQ_READ) {
		page_cache_sync_readahead(mapping, ra, file, offset,
					  ra->ra_pages);
		return;
	}

	/* Avoid banging the cache line if not needed */
	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
		ra->mmap_miss++;

	/*
	 * Do we miss much more than hit in this file? If so,
	 * stop bothering with read-ahead. It will only hurt.
	 */
	if (ra->mmap_miss > MMAP_LOTSAMISS)
		return;

	/*
	 * mmap read-around
	 */
	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
	ra->size = ra->ra_pages;
	ra->async_size = ra->ra_pages / 4;
	ra_submit(ra, mapping, file);
}

/*
 * Asynchronous readahead happens when we find the page and PG_readahead,
 * so we want to possibly extend the readahead further.
 */
static void do_async_mmap_readahead(struct vm_area_struct *vma,
				    struct file_ra_state *ra,
				    struct file *file,
				    struct page *page,
				    pgoff_t offset)
{
	struct address_space *mapping = file->f_mapping;

	/* If we don't want any read-ahead, don't bother */
	if (vma->vm_flags & VM_RAND_READ)
		return;
	if (ra->mmap_miss > 0)
		ra->mmap_miss--;
	if (PageReadahead(page))
		page_cache_async_readahead(mapping, ra, file,
					   page, offset, ra->ra_pages);
}

/**
 * filemap_fault - read in file data for page fault handling
 * @vma: vma in which the fault was taken
 * @vmf: struct vm_fault containing details of the fault
 *
 * filemap_fault() is invoked via the vma operations vector for a
 * mapped memory region to read in file data during a page fault.
 *
 * The goto's are kind of ugly, but this streamlines the normal case of having
 * it in the page cache, and handles the special cases reasonably without
 * having a lot of duplicated code.
 *
 * vma->vm_mm->mmap_sem must be held on entry.
 *
 * If our return value has VM_FAULT_RETRY set, it's because
 * lock_page_or_retry() returned 0.
 * The mmap_sem has usually been released in this case.
 * See __lock_page_or_retry() for the exception.
 *
 * If our return value does not have VM_FAULT_RETRY set, the mmap_sem
 * has not been released.
 *
 * We never return with VM_FAULT_RETRY and a bit from VM_FAULT_ERROR set.
 */
int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	int error;
	struct file *file = vma->vm_file;
	struct address_space *mapping = file->f_mapping;
	struct file_ra_state *ra = &file->f_ra;
	struct inode *inode = mapping->host;
	pgoff_t offset = vmf->pgoff;
	struct page *page;
	loff_t size;
	int ret = 0;

	size = round_up(i_size_read(inode), PAGE_CACHE_SIZE);
	if (offset >= size >> PAGE_CACHE_SHIFT)
		return VM_FAULT_SIGBUS;

	/*
	 * Do we have something in the page cache already?
	 */
	page = find_get_page(mapping, offset);
	if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
		/*
		 * We found the page, so try async readahead before
		 * waiting for the lock.
		 */
		do_async_mmap_readahead(vma, ra, file, page, offset);
	} else if (!page) {
		/* No page in the page cache at all */
		do_sync_mmap_readahead(vma, ra, file, offset);
		count_vm_event(PGMAJFAULT);
		mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
		ret = VM_FAULT_MAJOR;
retry_find:
		page = find_get_page(mapping, offset);
		if (!page)
			goto no_cached_page;
	}

	if (!lock_page_or_retry(page, vma->vm_mm, vmf->flags)) {
		page_cache_release(page);
		return ret | VM_FAULT_RETRY;
	}

	/* Did it get truncated? */
	if (unlikely(page->mapping != mapping)) {
		unlock_page(page);
		put_page(page);
		goto retry_find;
	}
	VM_BUG_ON_PAGE(page->index != offset, page);

	/*
	 * We have a locked page in the page cache, now we need to check
	 * that it's up-to-date. If not, it is going to be due to an error.
	 */
	if (unlikely(!PageUptodate(page)))
		goto page_not_uptodate;

	/*
	 * Found the page and have a reference on it.
	 * We must recheck i_size under page lock.
	 */
	size = round_up(i_size_read(inode), PAGE_CACHE_SIZE);
	if (unlikely(offset >= size >> PAGE_CACHE_SHIFT)) {
		unlock_page(page);
		page_cache_release(page);
		return VM_FAULT_SIGBUS;
	}

	vmf->page = page;
	return ret | VM_FAULT_LOCKED;

no_cached_page:
	/*
	 * We're only likely to ever get here if MADV_RANDOM is in
	 * effect.
	 */
	error = page_cache_read(file, offset, vmf->gfp_mask);

	/*
	 * The page we want has now been added to the page cache.
	 * In the unlikely event that someone removed it in the
	 * meantime, we'll just come back here and read it again.
	 */
	if (error >= 0)
		goto retry_find;

	/*
	 * An error return from page_cache_read can result if the
	 * system is low on memory, or a problem occurs while trying
	 * to schedule I/O.
	 */
	if (error == -ENOMEM)
		return VM_FAULT_OOM;
	return VM_FAULT_SIGBUS;

page_not_uptodate:
	/*
	 * Umm, take care of errors if the page isn't up-to-date.
	 * Try to re-read it _once_. We do this synchronously,
	 * because there really aren't any performance issues here
	 * and we need to check for errors.
	 */
	ClearPageError(page);
	error = mapping->a_ops->readpage(file, page);
	if (!error) {
		wait_on_page_locked(page);
		if (!PageUptodate(page))
			error = -EIO;
	}
	page_cache_release(page);

	if (!error || error == AOP_TRUNCATED_PAGE)
		goto retry_find;

	/* Things didn't work out. Return zero to tell the mm layer so. */
	shrink_readahead_size_eio(file, ra);
	return VM_FAULT_SIGBUS;
}
EXPORT_SYMBOL(filemap_fault);

void filemap_map_pages(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct radix_tree_iter iter;
	void **slot;
	struct file *file = vma->vm_file;
	struct address_space *mapping = file->f_mapping;
	loff_t size;
	struct page *page;
	unsigned long address = (unsigned long) vmf->virtual_address;
	unsigned long addr;
	pte_t *pte;

	rcu_read_lock();
	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, vmf->pgoff) {
		if (iter.index > vmf->max_pgoff)
			break;
repeat:
		page = radix_tree_deref_slot(slot);
		if (unlikely(!page))
			goto next;
		if (radix_tree_exception(page)) {
			if (radix_tree_deref_retry(page))
				break;
			else
				goto next;
		}

		if (!page_cache_get_speculative(page))
			goto repeat;

		/* Has the page moved? */
		if (unlikely(page != *slot)) {
			page_cache_release(page);
			goto repeat;
		}

		if (!PageUptodate(page) ||
				PageReadahead(page) ||
				PageHWPoison(page))
			goto skip;
		if (!trylock_page(page))
			goto skip;

		if (page->mapping != mapping || !PageUptodate(page))
			goto unlock;

		size = round_up(i_size_read(mapping->host), PAGE_CACHE_SIZE);
		if (page->index >= size >> PAGE_CACHE_SHIFT)
			goto unlock;

		pte = vmf->pte + page->index - vmf->pgoff;
		if (!pte_none(*pte))
			goto unlock;

		if (file->f_ra.mmap_miss > 0)
			file->f_ra.mmap_miss--;
		addr = address + (page->index - vmf->pgoff) * PAGE_SIZE;
		do_set_pte(vma, addr, page, pte, false, false);
		unlock_page(page);
		goto next;
unlock:
		unlock_page(page);
skip:
		page_cache_release(page);
next:
		if (iter.index == vmf->max_pgoff)
			break;
	}
	rcu_read_unlock();
}
EXPORT_SYMBOL(filemap_map_pages);

int filemap_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct inode *inode = file_inode(vma->vm_file);
	int ret = VM_FAULT_LOCKED;

	sb_start_pagefault(inode->i_sb);
	file_update_time(vma->vm_file);
	lock_page(page);
	if (page->mapping != inode->i_mapping) {
		unlock_page(page);
		ret = VM_FAULT_NOPAGE;
		goto out;
	}
	/*
	 * We mark the page dirty already here so that when freeze is in
	 * progress, we are guaranteed that writeback during freezing will
	 * see the dirty page and writeprotect it again.
	 */
	set_page_dirty(page);
	wait_for_stable_page(page);
out:
	sb_end_pagefault(inode->i_sb);
	return ret;
}
EXPORT_SYMBOL(filemap_page_mkwrite);

const struct vm_operations_struct generic_file_vm_ops = {
	.fault		= filemap_fault,
	.map_pages	= filemap_map_pages,
	.page_mkwrite	= filemap_page_mkwrite,
};

/* This is used for a general mmap of a disk file */
int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct address_space *mapping = file->f_mapping;

	if (!mapping->a_ops->readpage)
		return -ENOEXEC;
	file_accessed(file);
	vma->vm_ops = &generic_file_vm_ops;
	return 0;
}

/*
 * This is for filesystems which do not implement ->writepage.
 */
int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
{
	if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
		return -EINVAL;
	return generic_file_mmap(file, vma);
}
#else
int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	return -ENOSYS;
}
int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
{
	return -ENOSYS;
}
#endif /* CONFIG_MMU */

EXPORT_SYMBOL(generic_file_mmap);
EXPORT_SYMBOL(generic_file_readonly_mmap);
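
/*
 * A sketch of wiring these generic helpers into a file_operations table for
 * a simple pagecache-backed filesystem; "example_fops" is illustrative and
 * the exact set of methods a real filesystem needs will differ.
 */
static const struct file_operations example_fops = {
	.llseek		= generic_file_llseek,
	.read_iter	= generic_file_read_iter,
	.mmap		= generic_file_mmap,
};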

static struct page *wait_on_page_read(struct page *page)
{
	if (!IS_ERR(page)) {
		wait_on_page_locked(page);
		if (!PageUptodate(page)) {
			page_cache_release(page);
			page = ERR_PTR(-EIO);
		}
	}
	return page;
}

static struct page *do_read_cache_page(struct address_space *mapping,
				       pgoff_t index,
				       int (*filler)(void *, struct page *),
				       void *data,
				       gfp_t gfp)
{
	struct page *page;
	int err;
repeat:
	page = find_get_page(mapping, index);
	if (!page) {
		page = __page_cache_alloc(gfp | __GFP_COLD);
		if (!page)
			return ERR_PTR(-ENOMEM);
		err = add_to_page_cache_lru(page, mapping, index, gfp);
		if (unlikely(err)) {
			page_cache_release(page);
			if (err == -EEXIST)
				goto repeat;
			/* Presumably ENOMEM for radix tree node */
			return ERR_PTR(err);
		}

filler:
		err = filler(data, page);
		if (err < 0) {
			page_cache_release(page);
			return ERR_PTR(err);
		}

		page = wait_on_page_read(page);
		if (IS_ERR(page))
			return page;
		goto out;
	}
	if (PageUptodate(page))
		goto out;

	/*
	 * Page is not up to date and may be locked due to one of the
	 * following cases:
	 * case a: Page is being filled and the page lock is held
	 * case b: Read/write error clearing the page uptodate status
	 * case c: Truncation in progress (page locked)
	 * case d: Reclaim in progress
	 *
	 * Case a, the page will be up to date when the page is unlocked.
	 * There is no need to serialise on the page lock here as the page
	 * is pinned so the lock gives no additional protection. Even if
	 * the page is truncated, the data is still valid if PageUptodate
	 * as it's a read vs truncate race.
	 * Case b, the page will not be up to date.
	 * Case c, the page may be truncated but in itself, the data may still
	 * be valid after IO completes as it's a read vs truncate race. The
	 * operation must restart if the page is not uptodate on unlock but
	 * otherwise serialising on page lock to stabilise the mapping gives
	 * no additional guarantees to the caller as the page lock is
	 * released before return.
	 * Case d, similar to truncation. If reclaim holds the page lock, it
	 * will be a race with remove_mapping that determines if the mapping
	 * is valid on unlock but otherwise the data is valid and there is
	 * no need to serialise with page lock.
	 *
	 * As the page lock gives no additional guarantee, we optimistically
	 * wait on the page to be unlocked and check if it's up to date and
	 * use the page if it is. Otherwise, the page lock is required to
	 * distinguish between the different cases. The motivation is that we
	 * avoid spurious serialisations and wakeups when multiple processes
	 * wait on the same page for IO to complete.
	 */
	wait_on_page_locked(page);
	if (PageUptodate(page))
		goto out;

	/* Distinguish between all the cases under the safety of the lock */
	lock_page(page);

	/* Case c or d, restart the operation */
	if (!page->mapping) {
		unlock_page(page);
		page_cache_release(page);
		goto repeat;
	}

	/* Someone else locked and filled the page in a very small window */
	if (PageUptodate(page)) {
		unlock_page(page);
		goto out;
	}
	goto filler;

out:
	mark_page_accessed(page);
	return page;
}

/**
 * read_cache_page - read into page cache, fill it if needed
 * @mapping: the page's address_space
 * @index: the page index
 * @filler: function to perform the read
 * @data: first arg to filler(data, page) function, often left as NULL
 *
 * Read into the page cache. If a page already exists, and PageUptodate() is
 * not set, try to fill the page and wait for it to become unlocked.
 *
 * If the page does not get brought uptodate, return -EIO.
 */
struct page *read_cache_page(struct address_space *mapping,
			     pgoff_t index,
			     int (*filler)(void *, struct page *),
			     void *data)
{
	return do_read_cache_page(mapping, index, filler, data,
				  mapping_gfp_mask(mapping));
}
EXPORT_SYMBOL(read_cache_page);
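
/*
 * A sketch of reading one page of file data with read_cache_page(), using
 * the mapping's own ->readpage() as the filler via the same filler_t cast
 * that read_cache_page_gfp() below uses; "example_read_index" is a
 * hypothetical helper.
 */
static struct page *example_read_index(struct address_space *mapping,
				       pgoff_t index)
{
	filler_t *filler = (filler_t *)mapping->a_ops->readpage;

	/* On success: an uptodate, referenced, unlocked page. */
	return read_cache_page(mapping, index, filler, NULL);
}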

/**
 * read_cache_page_gfp - read into page cache, using specified page allocation flags.
 * @mapping: the page's address_space
 * @index: the page index
 * @gfp: the page allocator flags to use if allocating
 *
 * This is the same as "read_mapping_page(mapping, index, NULL)", but with
 * any new page allocations done using the specified allocation flags.
 *
 * If the page does not get brought uptodate, return -EIO.
 */
struct page *read_cache_page_gfp(struct address_space *mapping,
				 pgoff_t index,
				 gfp_t gfp)
{
	filler_t *filler = (filler_t *)mapping->a_ops->readpage;

	return do_read_cache_page(mapping, index, filler, NULL, gfp);
}
EXPORT_SYMBOL(read_cache_page_gfp);

/*
 * Performs necessary checks before doing a write
 *
 * Can adjust writing position or amount of bytes to write.
 * Returns appropriate error code that caller should return or
 * zero in case that write should be allowed.
 */
inline ssize_t generic_write_checks(struct kiocb *iocb, struct iov_iter *from)
{
	struct file *file = iocb->ki_filp;
	struct inode *inode = file->f_mapping->host;
	unsigned long limit = rlimit(RLIMIT_FSIZE);
	loff_t pos;

	if (!iov_iter_count(from))
		return 0;

	/* FIXME: this is for backwards compatibility with 2.4 */
	if (iocb->ki_flags & IOCB_APPEND)
		iocb->ki_pos = i_size_read(inode);

	pos = iocb->ki_pos;

	if (limit != RLIM_INFINITY) {
		if (iocb->ki_pos >= limit) {
			send_sig(SIGXFSZ, current, 0);
			return -EFBIG;
		}
		iov_iter_truncate(from, limit - (unsigned long)pos);
	}

	/*
	 * LFS rule
	 */
	if (unlikely(pos + iov_iter_count(from) > MAX_NON_LFS &&
				!(file->f_flags & O_LARGEFILE))) {
		if (pos >= MAX_NON_LFS)
			return -EFBIG;
		iov_iter_truncate(from, MAX_NON_LFS - (unsigned long)pos);
	}

	/*
	 * Are we about to exceed the fs block limit?
	 *
	 * If we have written data it becomes a short write. If we have
	 * exceeded without writing data we send a signal and return EFBIG.
	 * Linus' frestrict idea will clean these up nicely.
	 */
	if (unlikely(pos >= inode->i_sb->s_maxbytes))
		return -EFBIG;

	iov_iter_truncate(from, inode->i_sb->s_maxbytes - pos);
	return iov_iter_count(from);
}
EXPORT_SYMBOL(generic_write_checks);

int pagecache_write_begin(struct file *file, struct address_space *mapping,
			  loff_t pos, unsigned len, unsigned flags,
			  struct page **pagep, void **fsdata)
{
	const struct address_space_operations *aops = mapping->a_ops;

	return aops->write_begin(file, mapping, pos, len, flags,
				 pagep, fsdata);
}
EXPORT_SYMBOL(pagecache_write_begin);

int pagecache_write_end(struct file *file, struct address_space *mapping,
			loff_t pos, unsigned len, unsigned copied,
			struct page *page, void *fsdata)
{
	const struct address_space_operations *aops = mapping->a_ops;

	return aops->write_end(file, mapping, pos, len, copied, page, fsdata);
}
EXPORT_SYMBOL(pagecache_write_end);

ssize_t
generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos)
{
	struct file *file = iocb->ki_filp;
	struct address_space *mapping = file->f_mapping;
	struct inode *inode = mapping->host;
	ssize_t written;
	size_t write_len;
	pgoff_t end;
	struct iov_iter data;

	write_len = iov_iter_count(from);
	end = (pos + write_len - 1) >> PAGE_CACHE_SHIFT;

	written = filemap_write_and_wait_range(mapping, pos, pos + write_len - 1);
	if (written)
		goto out;

	/*
	 * After a write we want buffered reads to be sure to go to disk to get
	 * the new data. We invalidate clean cached pages from the region we're
	 * about to write. We do this *before* the write so that we can return
	 * without clobbering -EIOCBQUEUED from ->direct_IO().
	 */
	if (mapping->nrpages) {
		written = invalidate_inode_pages2_range(mapping,
					pos >> PAGE_CACHE_SHIFT, end);
		/*
		 * If a page can not be invalidated, return 0 to fall back
		 * to buffered write.
		 */
		if (written) {
			if (written == -EBUSY)
				return 0;
			goto out;
		}
	}

	data = *from;
	written = mapping->a_ops->direct_IO(iocb, &data, pos);

	/*
	 * Finally, try again to invalidate clean pages which might have been
	 * cached by non-direct readahead, or faulted in by get_user_pages()
	 * if the source of the write was an mmap'ed region of the file
	 * we're writing. Either one is a pretty crazy thing to do,
	 * so we don't support it 100%. If this invalidation
	 * fails, tough, the write still worked...
	 */
	if (mapping->nrpages) {
		invalidate_inode_pages2_range(mapping,
					      pos >> PAGE_CACHE_SHIFT, end);
	}

	if (written > 0) {
		pos += written;
		iov_iter_advance(from, written);
		if (pos > i_size_read(inode) && !S_ISBLK(inode->i_mode)) {
			i_size_write(inode, pos);
			mark_inode_dirty(inode);
		}
		iocb->ki_pos = pos;
	}
out:
	return written;
}
EXPORT_SYMBOL(generic_file_direct_write);

/*
 * Find or create a page at the given pagecache position. Return the locked
 * page. This function is specifically for buffered writes.
 */
struct page *grab_cache_page_write_begin(struct address_space *mapping,
					 pgoff_t index, unsigned flags)
{
	struct page *page;
	int fgp_flags = FGP_LOCK|FGP_ACCESSED|FGP_WRITE|FGP_CREAT;

	if (flags & AOP_FLAG_NOFS)
		fgp_flags |= FGP_NOFS;

	page = pagecache_get_page(mapping, index, fgp_flags,
				  mapping_gfp_mask(mapping));
	if (page)
		wait_for_stable_page(page);

	return page;
}
EXPORT_SYMBOL(grab_cache_page_write_begin);
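
/*
 * A sketch of a minimal ->write_begin() built on
 * grab_cache_page_write_begin(), much like fs/libfs.c's simple_write_begin();
 * real implementations also prepare blocks or buffers here.
 */
static int example_write_begin(struct file *file, struct address_space *mapping,
			       loff_t pos, unsigned len, unsigned flags,
			       struct page **pagep, void **fsdata)
{
	struct page *page;

	page = grab_cache_page_write_begin(mapping, pos >> PAGE_CACHE_SHIFT,
					   flags);
	if (!page)
		return -ENOMEM;

	*pagep = page;	/* handed back locked; ->write_end() unlocks it */
	return 0;
}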

ssize_t generic_perform_write(struct file *file,
			      struct iov_iter *i, loff_t pos)
{
	struct address_space *mapping = file->f_mapping;
	const struct address_space_operations *a_ops = mapping->a_ops;
	long status = 0;
	ssize_t written = 0;
	unsigned int flags = 0;

	/*
	 * Copies from kernel address space cannot fail (NFSD is a big user).
	 */
	if (!iter_is_iovec(i))
		flags |= AOP_FLAG_UNINTERRUPTIBLE;

	do {
		struct page *page;
		unsigned long offset;	/* Offset into pagecache page */
		unsigned long bytes;	/* Bytes to write to page */
		size_t copied;		/* Bytes copied from user */
		void *fsdata;

		offset = (pos & (PAGE_CACHE_SIZE - 1));
		bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
						iov_iter_count(i));

again:
		/*
		 * Bring in the user page that we will copy from _first_.
		 * Otherwise there's a nasty deadlock on copying from the
		 * same page as we're writing to, without it being marked
		 * up-to-date.
		 *
		 * Not only is this an optimisation, but it is also required
		 * to check that the address is actually valid, when atomic
		 * usercopies are used, below.
		 */
		if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
			status = -EFAULT;
			break;
		}

		if (fatal_signal_pending(current)) {
			status = -EINTR;
			break;
		}

		status = a_ops->write_begin(file, mapping, pos, bytes, flags,
						&page, &fsdata);
		if (unlikely(status < 0))
			break;

		if (mapping_writably_mapped(mapping))
			flush_dcache_page(page);

		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
		flush_dcache_page(page);

		status = a_ops->write_end(file, mapping, pos, bytes, copied,
						page, fsdata);
		if (unlikely(status < 0))
			break;
		copied = status;

		cond_resched();

		iov_iter_advance(i, copied);
		if (unlikely(copied == 0)) {
			/*
			 * If we were unable to copy any data at all, we must
			 * fall back to a single segment length write.
			 *
			 * If we didn't fall back here, we could livelock
			 * because not all segments in the iov can be copied at
			 * once without a pagefault.
			 */
			bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
						iov_iter_single_seg_count(i));
			goto again;
		}
		pos += copied;
		written += copied;

		balance_dirty_pages_ratelimited(mapping);
	} while (iov_iter_count(i));

	return written ? written : status;
}
EXPORT_SYMBOL(generic_perform_write);
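
/*
 * Editor's note: worked example, not part of the original file.  With
 * 4KB pages, a buffered write of 6000 bytes starting at pos = 5000 is
 * split per loop iteration as follows:
 *
 *	pass 1: offset = 5000 & 4095 = 904, bytes = min(4096 - 904, 6000) = 3192
 *	pass 2: offset = 8192 & 4095 = 0,   bytes = min(4096, 2808) = 2808
 *
 * so the loop touches two pagecache pages and never copies across a page
 * boundary within a single write_begin/write_end cycle.
 */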

/**
 * __generic_file_write_iter - write data to a file
 * @iocb:	IO state structure (file, offset, etc.)
 * @from:	iov_iter with data to write
 *
 * This function does all the work needed for actually writing data to a
 * file. It does all basic checks, removes SUID from the file, updates
 * modification times and calls proper subroutines depending on whether we
 * do direct IO or a standard buffered write.
 *
 * It expects i_mutex to be grabbed unless we work on a block device or similar
 * object which does not need locking at all.
 *
 * This function does *not* take care of syncing data in case of O_SYNC write.
 * A caller has to handle it. This is mainly due to the fact that we want to
 * avoid syncing under i_mutex.
 */
ssize_t __generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct file *file = iocb->ki_filp;
	struct address_space *mapping = file->f_mapping;
	struct inode *inode = mapping->host;
	ssize_t written = 0;
	ssize_t err;
	ssize_t status;

	/* We can write back this queue in page reclaim */
	current->backing_dev_info = inode_to_bdi(inode);
	err = file_remove_privs(file);
	if (err)
		goto out;

	err = file_update_time(file);
	if (err)
		goto out;

	if (iocb->ki_flags & IOCB_DIRECT) {
		loff_t pos, endbyte;

		written = generic_file_direct_write(iocb, from, iocb->ki_pos);
		/*
		 * If the write stopped short of completing, fall back to
		 * buffered writes.  Some filesystems do this for writes to
		 * holes, for example.  For DAX files, a buffered write will
		 * not succeed (even if it did, DAX does not handle dirty
		 * page-cache pages correctly).
		 */
		if (written < 0 || !iov_iter_count(from) || IS_DAX(inode))
			goto out;

		status = generic_perform_write(file, from, pos = iocb->ki_pos);
		/*
		 * If generic_perform_write() returned a synchronous error
		 * then we want to return the number of bytes which were
		 * direct-written, or the error code if that was zero.  Note
		 * that this differs from normal direct-io semantics, which
		 * will return -EFOO even if some bytes were written.
		 */
		if (unlikely(status < 0)) {
			err = status;
			goto out;
		}
		/*
		 * We need to ensure that the page cache pages are written to
		 * disk and invalidated to preserve the expected O_DIRECT
		 * semantics.
		 */
		endbyte = pos + status - 1;
		err = filemap_write_and_wait_range(mapping, pos, endbyte);
		if (err == 0) {
			iocb->ki_pos = endbyte + 1;
			written += status;
			invalidate_mapping_pages(mapping,
						 pos >> PAGE_CACHE_SHIFT,
						 endbyte >> PAGE_CACHE_SHIFT);
		} else {
			/*
			 * We don't know how much we wrote, so just return
			 * the number of bytes which were direct-written
			 */
		}
	} else {
		written = generic_perform_write(file, from, iocb->ki_pos);
		if (likely(written > 0))
			iocb->ki_pos += written;
	}
out:
	current->backing_dev_info = NULL;
	return written ? written : err;
}
EXPORT_SYMBOL(__generic_file_write_iter);
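
/*
 * Editor's note: illustrative sketch, not part of the original file.
 * The doc-comment above says i_mutex is not required for block-device-like
 * objects and that O_SYNC handling is the caller's job.  A caller along
 * the lines of the block layer's blkdev_write_iter() therefore looks
 * roughly like this (simplified; the real function also plugs the block
 * queue around the write).  exampledev_write_iter is a hypothetical name.
 */
static ssize_t exampledev_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct file *file = iocb->ki_filp;
	ssize_t ret;

	ret = generic_write_checks(iocb, from);
	if (ret <= 0)
		return ret;

	/* No i_mutex: concurrent writers are allowed on a device node. */
	ret = __generic_file_write_iter(iocb, from);
	if (ret > 0) {
		ssize_t err;

		/* Handle O_SYNC/O_DSYNC ourselves, as the comment demands. */
		err = generic_write_sync(file, iocb->ki_pos - ret, ret);
		if (err < 0)
			ret = err;
	}
	return ret;
}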

/**
 * generic_file_write_iter - write data to a file
 * @iocb:	IO state structure
 * @from:	iov_iter with data to write
 *
 * This is a wrapper around __generic_file_write_iter() to be used by most
 * filesystems. It takes care of syncing the file if it was opened with
 * O_SYNC and acquires i_mutex as needed.
 */
ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct file *file = iocb->ki_filp;
	struct inode *inode = file->f_mapping->host;
	ssize_t ret;

	mutex_lock(&inode->i_mutex);
	ret = generic_write_checks(iocb, from);
	if (ret > 0)
		ret = __generic_file_write_iter(iocb, from);
	mutex_unlock(&inode->i_mutex);

	if (ret > 0) {
		ssize_t err;

		err = generic_write_sync(file, iocb->ki_pos - ret, ret);
		if (err < 0)
			ret = err;
	}
	return ret;
}
EXPORT_SYMBOL(generic_file_write_iter);
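
/*
 * Editor's note: illustrative sketch, not part of the original file.
 * Most filesystems need nothing beyond wiring the wrapper above into
 * their file_operations; examplefs_file_operations is a hypothetical
 * name, and the other generic_* helpers are assumed from the same tree.
 */
static const struct file_operations examplefs_file_operations = {
	.llseek		= generic_file_llseek,
	.read_iter	= generic_file_read_iter,
	.write_iter	= generic_file_write_iter,	/* defined above */
	.mmap		= generic_file_mmap,
	.fsync		= generic_file_fsync,
	.splice_read	= generic_file_splice_read,
};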

/**
 * try_to_release_page() - release old fs-specific metadata on a page
 *
 * @page: the page which the kernel is trying to free
 * @gfp_mask: memory allocation flags (and I/O mode)
 *
 * The address_space is asked to try to release any data held against the
 * page (presumably at page->private). If the release was successful,
 * return `1'. Otherwise return zero.
 *
 * This may also be called if PG_fscache is set on a page, indicating that the
 * page is known to the local caching routines.
 *
 * The @gfp_mask argument specifies whether I/O may be performed to release
 * this page (__GFP_IO), and whether the call may block
 * (__GFP_RECLAIM & __GFP_FS).
 */
int try_to_release_page(struct page *page, gfp_t gfp_mask)
{
	struct address_space * const mapping = page->mapping;

	BUG_ON(!PageLocked(page));
	if (PageWriteback(page))
		return 0;

	if (mapping && mapping->a_ops->releasepage)
		return mapping->a_ops->releasepage(page, gfp_mask);
	return try_to_free_buffers(page);
}
EXPORT_SYMBOL(try_to_release_page);
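
/*
 * Editor's note: illustrative sketch, not part of the original file.
 * A ->releasepage implementation must drop whatever it attached at
 * page->private and report success with 1, or refuse with 0 while the
 * metadata is still in use.  All examplefs_* helpers are hypothetical.
 */
static int examplefs_releasepage(struct page *page, gfp_t gfp_mask)
{
	/* Still referenced by in-flight I/O or a journal?  Say no. */
	if (examplefs_page_metadata_busy(page))
		return 0;

	examplefs_detach_metadata(page);	/* clears PG_private */
	return 1;
}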