commit 8dd33bcb7050dd6f8c1432732f930932c9d3a33e upstream.
One convenient way to erase the trace is "echo > trace". However, this
is currently broken if the current tracer is the irqsoff tracer, because
the irqsoff tracer uses max_buffer as its default trace buffer.
Set max_buffer as the one to be cleared when it is the trace buffer
currently in use.
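A rough sketch of the kind of change this implies in the O_TRUNC open path of
the trace file (names follow the 4.x kernel/trace/trace.c code; this is an
illustration, not the exact patch):

        /* Illustration only: pick the buffer the current tracer is really
         * filling before resetting it on "echo > trace". */
        struct trace_buffer *trace_buf = &tr->trace_buffer;

#ifdef CONFIG_TRACER_MAX_TRACE
        /* Latency tracers such as irqsoff record into max_buffer. */
        if (tr->current_trace->print_max)
                trace_buf = &tr->max_buffer;
#endif

        if (cpu == RING_BUFFER_ALL_CPUS)
                tracing_reset_online_cpus(trace_buf);
        else
                tracing_reset(trace_buf, cpu);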
Link: http://lkml.kernel.org/r/1505754215-29411-1-git-send-email-byan@nvidia.com
Cc: <mingo@redhat.com>
Fixes: 4acd4d00f ("tracing: give easy way to clear trace buffer")
Signed-off-by: Bo Yan <byan@nvidia.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 75df6e688ccd517e339a7c422ef7ad73045b18a2 upstream.
When reading data from trace_pipe, tracing_wait_pipe() performs a
check to see if tracing has been turned off after some data was read.
Currently, this check always looks at the global trace state, but it
should be checking the trace instance where trace_pipe is located.
Because of this bug, cat instances/i1/trace_pipe in the following
script will immediately exit instead of waiting for data:
cd /sys/kernel/debug/tracing
echo 0 > tracing_on
mkdir -p instances/i1
echo 1 > instances/i1/tracing_on
echo 1 > instances/i1/events/sched/sched_process_exec/enable
cat instances/i1/trace_pipe
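The crux of the fix is to consult the per-instance state instead of the global
one in tracing_wait_pipe() (sketch; names as in kernel/trace/trace.c of this
era):

        /* before: global state, wrong for instances */
        if (!tracing_is_on() && iter->pos)
                break;

        /* after: the instance this trace_pipe belongs to */
        if (!tracer_tracing_is_on(iter->tr) && iter->pos)
                break;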
Link: http://lkml.kernel.org/r/20170917102348.1615-1-tahsin@google.com
Fixes: 10246fa35d ("tracing: give easy way to clear trace buffer")
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 47c5310a8dbe7c2cb9f0083daa43ceed76c257fa upstream, with part
of commit edd03602d97236e8fea13cd76886c576186aa307 folded in.
Nixiaoming pointed out that there is a memory leak in
kvm_vm_ioctl_create_spapr_tce() if the call to anon_inode_getfd()
fails; the memory allocated for the kvmppc_spapr_tce_table struct
is not freed, nor are the pages allocated for the iommu tables.
David Hildenbrand pointed out that there is a race in that the
function checks early on that there is not already an entry in the
stt->iommu_tables list with the same LIOBN, but an entry with the
same LIOBN could get added between then and when the new entry is
added to the list.
This fixes both problems. To simplify things, we now call
anon_inode_getfd() before placing the new entry in the list. The
check for an existing entry is done while holding the kvm->lock
mutex, immediately before adding the new entry to the list.
[paulus@ozlabs.org - folded in that part of edd03602d972 ("KVM:
PPC: Book3S HV: Protect updates to spapr_tce_tables list", 2017-08-28)
which restructured the code that 47c5310a8dbe modified, to avoid
a build failure caused by the absence of put_unused_fd().
Also removed the locked memory accounting, since it doesn't exist
in this version, and adjusted the commit message.]
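A rough sketch of the resulting ordering (illustrative only, not the exact
kernel code):

        mutex_lock(&kvm->lock);

        /* Re-check the LIOBN under the lock, right before publishing. */
        ret = -EBUSY;
        list_for_each_entry(siter, &kvm->arch.spapr_tce_tables, list)
                if (siter->liobn == args->liobn)
                        goto unlock;

        /* Get the fd first; only add to the list if that succeeded. */
        ret = anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops,
                               stt, O_RDWR | O_CLOEXEC);
        if (ret >= 0)
                list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
unlock:
        mutex_unlock(&kvm->lock);

        if (ret < 0)
                goto fail;      /* free stt and the pages of its tables */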
Fixes: 54738c0971 ("KVM: PPC: Accelerate H_PUT_TCE by implementing it in real mode")
Reported-by: Nixiaoming <nixiaoming@huawei.com>
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6e46d8ce894374fc135c96a8d1057c6af1fef237 upstream.
When HW ROC is supported, it is possible that after the HW notified
that the ROC has started, the ROC was cancelled and another ROC was
added while the hw_roc_start worker was waiting on the mutex (since
cancelling the ROC and adding another one hold the same mutex).
As a result, the hw_roc_start worker will continue to run after the
new ROC is added but before it is actually started by the HW.
This may result in notifying userspace that the ROC has started before
it actually does, or, in the case of a management tx ROC, in an attempt
to tx while not on the right channel.
In addition, when the driver notifies mac80211 that the second ROC
has started, mac80211 will warn that this ROC has already been
notified.
Fix this by flushing the hw_roc_start work before cancelling an ROC.
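In code terms, the ROC cancel path gains a flush of the start worker before
taking the mutex (sketch, assuming the mac80211 offchannel code of this era):

        /* Make sure a pending hw_roc_start has run (and notified the old
         * ROC) before we cancel it under local->mtx. */
        flush_work(&local->hw_roc_start);

        mutex_lock(&local->mtx);
        /* ... cancel the ROC ... */
        mutex_unlock(&local->mtx);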
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f5c4ba816315d3b813af16f5571f86c8d4e897bd upstream.
There is a race that causes a cifs reconnect in cifs_mount:
- cifs_mount
- cifs_get_tcp_session
- [ start thread cifs_demultiplex_thread
- cifs_read_from_socket: -ECONNABORTED
- DELAY_WORK smb2_reconnect_server ]
- cifs_setup_session
- [ smb2_reconnect_server ]
auth_key.response was allocated in cifs_setup_session, and will be
released when the session is destroyed. So on session reconnect,
auth_key.response should be checked and released.
Tested with my system:
CIFS VFS: Free previous auth_key.response = ffff8800320bbf80
A simple auth_key.response allocation call trace:
- cifs_setup_session
 - SMB2_sess_setup
  - SMB2_sess_auth_rawntlmssp_authenticate
   - build_ntlmssp_auth_blob
    - setup_ntlmv2_rsp
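The fix amounts to freeing any previous blob in cifs_setup_session() before a
new one is allocated (sketch; matches the debug line shown above):

        if (ses->auth_key.response) {
                cifs_dbg(FYI, "Free previous auth_key.response = %p\n",
                         ses->auth_key.response);
                kfree(ses->auth_key.response);
                ses->auth_key.response = NULL;
                ses->auth_key.len = 0;
        }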
Signed-off-by: Shu Wang <shuwang@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The charger driver expects the USB driver to always notify -ETIMEDOUT for
POWER_SUPPLY_PROP_CURRENT_MAX on enumeration failure in the case of a
floating charger. This is to set the ICL to a value based on Rp in the case
of a floating charger. But currently the driver does not notify this when the
floating charger is disconnected and connected back. Due to this, the device
charges only at 100mA instead of a current based on the Rp value. Fix this by
adding a proper check and allowing the notification in the case of the
floating charger type.
Change-Id: Iee0c4d8faa7e25c8445b83784a782751e7148421
Signed-off-by: Vijayavardhan Vennapusa <vvreddy@codeaurora.org>
GPU speed-bin 2 supports fmax of 560MHz and DDR 1555MHz.
Add this config to MSM8996v3 to support required GPU fmax.
Change-Id: Ibdf9bb63c7d8f0e980fbf3c192d536adeaeec52d
Signed-off-by: Dumpeti Sathish Kumar <sathyanov14@codeaurora.org>
secure_pool_list is initialized during domain alloc and freed with domain
free.
Commit e6a18bb617 ("iommu: free io pgtable during domain detach.") frees
the secure_pool_list as part of the iommu detach sequence, and uses the same
list head as part of iommu attach. This uncovers an existing bug where
entries were not deleted from secure_pool_list before their associated memory
was freed. This left the secure_pool_list head pointing to a location that
had already been freed, resulting in a use-after-free kernel BUG during
iommu attach.
Call Trace:
arm_smmu_alloc_pages_exact+0x60/0x110
io_pgtable_alloc_pages_exact+0x48/0xb0
__arm_lpae_alloc_pages+0x48/0x1c0
arm_64_lpae_alloc_pgtable_s1+0x100/0x15c
alloc_io_pgtable_ops+0x54/0x88
arm_smmu_attach_dev+0x8cc/0x1144
__iommu_attach_device+0x3c/0xf4
[...]
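The idiom the fix needs is to unlink each pool entry before its memory is
released, so the list head never points at freed memory on the next attach
(generic sketch; the entry type and member names here are placeholders, not
the driver's actual ones):

        struct secure_pool_chunk *it, *tmp;     /* placeholder type name */

        list_for_each_entry_safe(it, tmp, &smmu_domain->secure_pool_list, list) {
                list_del(&it->list);                    /* unlink first ... */
                free_pages_exact(it->addr, it->size);   /* ... then free    */
                kfree(it);
        }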
Change-Id: I7d1b49030986da7f5d05b7e6cb9dc09079f20a41
Signed-off-by: Prakash Gupta <guptap@codeaurora.org>
There is no check before cache ops whether the memory is marked secure.
This leads to a stage 2 page fault if secure memory is passed to the
IOCTL_KGSL_GPUMEM_SYNC_CACHE ioctl, because the kernel is not allowed to
do cache ops on secure memory. This can be avoided by returning
success immediately if the memory is marked as secure.
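A minimal sketch of the early-out (the helper name is an assumption, not
necessarily the driver's actual one):

        /* The kernel cannot maintain caches of secure buffers; report success. */
        if (kgsl_memdesc_is_secured(&entry->memdesc))
                return 0;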
Change-Id: I215d77d2a488cdb00e8e18cfd38cddd9632fd9f6
Signed-off-by: Akhil P Oommen <akhilpo@codeaurora.org>
Use IS_ERR_OR_NULL instead of IS_ERR to also check for a NULL pointer.
Change-Id: If53a07db52a4d091693a49f9d084df7d4fbf257a
Signed-off-by: Vijay kumar Tumati <vtumati@codeaurora.org>
If the DP cable is disconnected while processing HPD or IRQ_HPD, the
DP driver may continue with failsafe parameters, notify a connection
event, and immediately notify a disconnection event, which may corrupt
the state machine in userspace.
Add changes to avoid reading dpcd caps or edid, performing link training,
or sending connection events if the DP cable is disconnected in between.
Change-Id: I0b59ebdb636c9dc1086673253399b849734d51ee
Signed-off-by: Narender Ankam <nankam@codeaurora.org>
Use the appropriate lock when adding sparse bindings, to protect the
list of sparse bindings from concurrent updates by multiple threads.
Change-Id: Ice9750c96fca42f4049ed352533f4722b5166221
Signed-off-by: Lynus Vaz <lvaz@codeaurora.org>
If we find that a different thread has already added bindings at the
same offset we wanted to add to the sparse object, don't get stuck in
an infinite loop; return with an error instead.
Change-Id: I6b17c91eccb14c07e13cae24135dfe7b13f3301d
Signed-off-by: Lynus Vaz <lvaz@codeaurora.org>
In WAN ioctls, user-supplied data structures contain string members,
but there is no guarantee they are NUL-terminated. Add the string
terminator to prevent string buffer overflow vulnerabilities.
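A typical shape of such a fix, with hypothetical structure and field names
(the real ioctls touch several different structures):

        struct wan_ioctl_msg msg;       /* hypothetical structure */

        if (copy_from_user(&msg, (const void __user *)arg, sizeof(msg)))
                return -EFAULT;

        /* Force-terminate the user-supplied string before using it. */
        msg.name[sizeof(msg.name) - 1] = '\0';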
Change-Id: I17c06c94aa619a2cd3a678c495a31541a65a7741
Acked-by: Ashok Vuyyuru <avuyyuru@qti.qualcomm.com>
Signed-off-by: Mohammed Javid <mjavid@codeaurora.org>
Reset the HDCP 2.2 sink support during a cable disconnect
to avoid stale information during the next cable connect.
This information is populated again from the sink on the next
cable connect.
Change-Id: I54da6e633915718da4be7023027c1d8c68cd6c21
Signed-off-by: Abhinav Kumar <abhinavk@codeaurora.org>
This is cherry-picked from upstream f2fs-stable-linux-4.4.y.
Changes include:
commit c7fd9e2b4a ("f2fs: hurry up to issue discard after io interruption")
commit 603dde3965 ("f2fs: fix to show correct discard_granularity in sysfs")
...
commit 565f0225f9 ("f2fs: factor out discard command info into discard_cmd_control")
commit c4cc29d19e ("f2fs: remove batched discard in f2fs_trim_fs")
Change-Id: Icd8a85ac0c19a8aa25cd2591a12b4e9b85bdf1c5
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Currently, sugov_next_freq_shared() uses last_freq_update_time as a
reference to decide when to start considering CPU contributions as
stale.
However, since last_freq_update_time is set by the last CPU that issued
a frequency transition, this might cause problems in certain cases. In
practice, the detection of stale utilization values fails whenever the
CPU with such values was the last to update the policy. For example (and
please note again that the SCHED_CPUFREQ_RT flag is not the problem
here, but only the detection of how much time must pass before that flag
is considered stale), suppose a policy with 2 CPUs:
     CPU0                             |     CPU1
                                      |
                                      |  RT task scheduled
                                      |  SCHED_CPUFREQ_RT is set
                                      |  CPU1->last_update = now
                                      |  freq transition to max
                                      |  last_freq_update_time = now
                                      |
             more than TICK_NSEC nsecs
                                      |
  a small CFS wakes up                |
  CPU0->last_update = now1            |
  delta_ns(CPU0) < TICK_NSEC*         |
  CPU0's util is considered           |
  delta_ns(CPU1) =                    |
    last_freq_update_time -           |
    CPU1->last_update = 0             |
    < TICK_NSEC                       |
  CPU1 is still considered            |
  CPU1->SCHED_CPUFREQ_RT is set       |
  we stay at max (until CPU1          |
  exits from idle)                    |

  * delta_ns is actually negative as now1 > last_freq_update_time
While last_freq_update_time is a sensible reference for rate limiting,
it doesn't seem to be useful for working around stale CPU states.
Fix the problem by always considering now (time) as the reference for
deciding when CPUs have stale contributions.
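Concretely, the stale-contribution check in sugov_next_freq_shared() changes
its reference (sketch; names as in kernel/sched/cpufreq_schedutil.c of this
era):

        /* before: reference was the last policy-wide frequency update */
        delta_ns = last_freq_update_time - j_sg_cpu->last_update;

        /* after: reference is 'time', i.e. now, passed down from the hook */
        delta_ns = time - j_sg_cpu->last_update;
        if (delta_ns > TICK_NSEC) {
                j_sg_cpu->iowait_boost = 0;
                continue;
        }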
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit d86ab9cff8b936aadde444d0e263a8db5ff0349b)
Currently, when a channel is read in continuous mode and the read
operation fails, the RR_ADC would be left enabled in continuous mode.
Disable continuous mode in such cases so that other read operations
which don't need continuous mode can go through.
Change-Id: I2bf257bd535e1e4a30e18b6257e584a5be96b69d
Signed-off-by: Subbaraman Narayanamurthy <subbaram@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
The Android kernel config fragments now live in a separate repository.
To prevent others from having to search for this location, add a script
to fetch and unpack the fragments.
Update .gitignore to include these fragments.
Change-Id: If2d4a59b86e4573b0a9b3190025dfe4191870b46
Signed-off-by: Steve Muckle <smuckle@google.com>
This change adds a NULL check before initializing the submitqueue.
Change-Id: I9ef6b6506b535d33e585be4988fa6433e11b3cb1
Signed-off-by: Rahul Sharma <sharah@codeaurora.org>
This change modifies the compatible name for smmu_kms_unsec_cb
to use sde terminology.
Change-Id: I31ee9620f8bb54fd582d9c6b21f5df0fda3cb975
Signed-off-by: Rahul Sharma <sharah@codeaurora.org>
commit d4adb30f25f5f2aa9b205891e395251d2a9098be upstream.
This patch modifies the stat information to be shown more clearly.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit b01a92019cac30398ef75b560d2668b399f4e393 upstream.
This patch simply cleans up the names for flush/discard commands.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit ae27d62e6befd3cac4ffa702e644cc52019642e8 upstream.
This patch adds a mirror for the sit version bitmap, and uses it to detect
in-memory bitmap corruption which may be caused by bit-transition of
cache or memory overflow.
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 599a09b2c1ac222e6aad0c22515d1ccde7c3b702 upstream.
This patch adds a mirror for the nat version bitmap, and uses it to detect
in-memory bitmap corruption which may be caused by bit-transition of
cache or memory overflow.
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 355e78913c0d57492076d545b6f44b94fec2bf6b upstream.
This patch adds a mirror for the valid block bitmap, and uses it to detect
in-memory bitmap corruption which may be caused by bit-transition of
cache or memory overflow.
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 5fe457430e554a2f5188f13c1a2e36ad845640c5 upstream.
This patch introduces a new flag to indicate that an inode is in the middle
of committing an atomic write, so that we can keep the atomic write status
for the inode during atomic committing and skip GCing pages of an atomic
write inode. This avoids randomly GCed data being mixed into the current
transaction, so transaction isolation is preserved.
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 25290fa5591d81767713db304e0d567bf991786f upstream.
If there is no candidate to submit a discard command during f2fs_trim_fs,
let's return without a checkpoint.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 0333ad4e4f49e14217256e1db1134a70cf60b2de upstream.
f2fs_trim_fs() doesn't need to do a checkpoint if there are only newly
allocated data blocks, which don't change critical checkpoint data such as
nat and sit entries.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 4e6a8d9b224f886362ea6e8f6046b541437c944f upstream.
This patch relaxes async discard commands to avoid waiting for their end_io
during checkpoint.
Instead of waiting for them during checkpoint, this is done when actually
reusing them.
Tested on the initial partition of an NVMe drive:
# time fstrim /mnt/test
Before : 6.158s
After : 4.822s
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit bb95d9ab2a9d4afd03b59a603cccb2c601f68b78 upstream.
A test program gets two different SEEK_DATA values between a newly created
file and an existing file on an f2fs filesystem.
F2FS filesystem, (the first "test1" is a new file)
SEEK_DATA size != 0 (offset = 8192)
SEEK_DATA size != 0 (offset = 4096)
PNFS filesystem, (the first "test1" is a new file)
SEEK_DATA size != 0 (offset = 4096)
SEEK_DATA size != 0 (offset = 4096)
#define _GNU_SOURCE             /* for SEEK_DATA */
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char *filename = argv[1];
        int offset = 1, i = 0, fd = -1;

        if (argc < 2) {
                printf("Usage: %s f2fsfilename\n", argv[0]);
                return -1;
        }

        /*
        if (!access(filename, F_OK) || errno != ENOENT) {
                printf("Needs a new file for test, %m\n");
                return -1;
        }*/

        fd = open(filename, O_RDWR | O_CREAT, 0777);
        if (fd < 0) {
                printf("Create test file %s failed, %m\n", filename);
                return -1;
        }

        for (i = 0; i < 20; i++) {
                offset = 1 << i;
                ftruncate(fd, 0);
                lseek(fd, offset, SEEK_SET);
                write(fd, "test", 5);

                /* Get the alloc size by seek data equal zero */
                if (lseek(fd, 0, SEEK_DATA)) {
                        printf("SEEK_DATA size != 0 (offset = %d)\n", offset);
                        break;
                }
        }
        close(fd);

        return 0;
}
Reported-and-Tested-by: Kinglong Mee <kinglongmee@gmail.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 363fa4e078cbdc97a172c19d19dc04b41b52ebc8 upstream.
This patch fixes the renaming bug on encrypted filenames, which was pointed
out by ("ext4: don't allow encrypted operations without keys").
Cc: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 26a28a0c1eb756ba18bfb1f93309c4b4406b9cd9 upstream.
This patch adds support for showing the maximum number of atomic operations
being conducted concurrently.
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit ec91538dccd44329ad83d3aae1aa6a8389b5c75f upstream.
This patch adds support for setting io_size_bits via a mount option.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 0a595ebaaa6b53a2226d3fee2a2fd616ea5ba378 upstream.
This patch implements IO alignment by filling dummy blocks in DATA and NODE
write bios. If we can guarantee, for example, 32KB or 64KB for such IOs,
we can eliminate the underlying dummy page problem which the FTL conducts in
order to close MLC or TLC partially written pages.
Note that,
- it requires "-o mode=lfs".
- IO size should be power of 2, not exceed BIO_MAX_PAGES, 256.
- read IO is still 4KB.
- do checkpoint at fsync, if dummy NODE page was written.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 9d52a504db6db9e4e254576130aa867838daff55 upstream.
Otherwise, a wrong curseg->next_blkoff can remain, resulting in fsck failure.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 650d3c4e56e1e92ee6e004648c9deb243e5963e0 upstream.
If userspace issues an fstrim with a range that does not involve prefree
segments, these segments will be reused without being discarded. This patch
fixes it.
Signed-off-by: Yunlei He <heyunlei@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit ed0b56209fe79a1309653d4b03f5c3147f580f6b upstream.
Use rb_entry_safe() instead of open-coding it.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 746e2403927efbd7c7f2e796314e3cfb3cfabaa4 upstream.
If the range we write covers all the valid data in the last page,
we do not need to read it.
Signed-off-by: Yunlei He <heyunlei@huawei.com>
[Jaegeuk Kim: nullify the remaining area (fix: xfstests/f2fs/001)]
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 7855eba4d6102f811b6dd142d6c749f53b591fa3 upstream.
This patch fixes a use-after-free problem in __try_merge_extent_node().
Fixes: 0f825ee6e8 ("f2fs: add new interfaces for extent tree")
Cc: <stable@vger.kernel.org>
Signed-off-by: Yunlei He <heyunlei@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 07fe8d44409f88be8f4a4e8f22b47ee709a22657 upstream.
We checked that "inode" is not an error pointer earlier so there is
no need to check again here.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 5c9e418436f3445d7cc4f3ba2964f231a4b33f17 upstream.
If we run out of memory in cache_nat_entry, it's better to avoid looping to
allocate memory for the cached nat entry; in low-memory scenarios, this
avoids unneeded latency on the node block read path.
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit fed24668482e07421b8e746a4886e7725434050a upstream.
This patch removes unused values in recover_fsync_data().
Signed-off-by: Yunlei He <heyunlei@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 275b66b09e85cf0520dc610dd89706952751a473 upstream.
This patch is based on commit 275b66b09e85 ("f2fs: support async discard").
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
commit 70fd76140a6cb63262bd47b68d57b42e889c10ee upstream.
This patch backports ("block,fs: use REQ_* flags directly").
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>