Pull two nfsd bugfixes from Bruce Fields:
"Just two bugfixes, one for a merge-window-introduced ACL regression,
the other for a longer-standing v4 state bug"
* 'for-3.15' of git://linux-nfs.org/~bfields/linux:
nfsd4: warn on finding lockowner without stateid's
nfsd4: remove lockowner when removing lock stateid
nfsd4: fix corruption on setting an ACL.
In dlm_init, if creating dlm_lockname_cache fails in dlm_init_master_caches,
dlm_lockres_cache (created just before it) ends up being destroyed twice.
This causes the system to die when loading the module.
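A minimal sketch of one way to avoid the double destroy (the cache names come from the message above; the create arguments and the exact shape of the upstream fix are assumptions):

    static struct kmem_cache *dlm_lockres_cache;
    static struct kmem_cache *dlm_lockname_cache;

    static int dlm_init_master_caches(void)
    {
            dlm_lockres_cache = kmem_cache_create("dlm_lockres",
                            sizeof(struct dlm_lock_resource), 0,
                            SLAB_HWCACHE_ALIGN, NULL);
            if (!dlm_lockres_cache)
                    return -ENOMEM;

            dlm_lockname_cache = kmem_cache_create("dlm_lockname",
                            DLM_LOCKID_NAME_MAX, 0,
                            SLAB_HWCACHE_ALIGN, NULL);
            if (!dlm_lockname_cache) {
                    /* Unwind only what this function created and clear the
                     * pointer, so the caller's error path cannot destroy
                     * dlm_lockres_cache a second time. */
                    kmem_cache_destroy(dlm_lockres_cache);
                    dlm_lockres_cache = NULL;
                    return -ENOMEM;
            }
            return 0;
    }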
Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently if the nfs-level part of a reply would be too large, we'll
return an error to the client. But if the nfs-level part fits and
leaves no room for krb5p or krb5i stuff, then we just drop the request
entirely.
That's no good. Instead, reserve some slack space at the end of the
buffer and make sure we fail outright if we'd come close.
The slack space here is a massive overestimate of what's required; we
should probably try for a tighter limit at some point.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Just change the nfsd4_encode_getattr API. No change in behavior and no
new functionality yet.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
This is a mechanical transformation with no change in behavior.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Currently a non-idempotent op reply may be cached if it fails in the
proc code but not if it fails at xdr decoding. I doubt there are any
xdr-decoding-time errors that would make this a problem in practice, so
this probably isn't a serious bug.
The space estimates should also take into account space required for
encoding of error returns. Again, not a practical problem, though it
would become one after future patches which will tighten the space
estimates.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The client is actually asking for 2532 bytes. I suspect that's a
mistake. But maybe we can allow some more. In theory lock needs more
if it might return a maximum-length lockowner in the denied case.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
OP_MODIFIES_SOMETHING flags operations that we should be careful not to
initiate without being sure we have the buffer space to encode a reply.
None of these ops fall into that category.
We could probably remove a few more, but this isn't a very important
problem, at least for ops whose reply size is easy to estimate.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
call->async_workfn() can take an afs_call* argument rather than a work_struct*,
as the functions assigned there are now called from afs_async_workfn(), which
has to call container_of() anyway.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Nathaniel Wesley Filardo <nwf@cs.jhu.edu>
Reviewed-by: Tejun Heo <tj@kernel.org>
At present, it is not possible to successfully unload the kafs module if there
are outstanding async outgoing calls (those made with afs_make_call()). This
appears to be due to the changes introduced by:
commit 059499453a
Author: Tejun Heo <tj@kernel.org>
Date: Fri Mar 7 10:24:50 2014 -0500
Subject: afs: don't use PREPARE_WORK
which didn't go far enough. The problem is due to:
(1) The aforementioned commit introduced a separate handler function pointer
in the call, call->async_workfn, in addition to the original workqueue
item, call->async_work, for asynchronous operations because workqueues
subsystem cannot handle the workqueue item pointer being changed whilst
the item is queued or being processed.
(2) afs_async_workfn() was introduced in that commit to be the callback for
call->async_work. Its sole purpose is to run whatever call->async_workfn
points to.
(3) call->async_workfn is only used from afs_async_workfn(), which is only
set on async_work by afs_collect_incoming_call() - ie. for incoming
calls.
(4) call->async_workfn is *not* set by afs_make_call() when outgoing calls are
made, and call->async_work is set to afs_process_async_call() - not to
afs_async_workfn().
(5) afs_process_async_call() now changes call->async_workfn rather than
call->async_work to point to afs_delete_async_call() to clean up, but this
is only effective for incoming calls because call->async_work does not
point to afs_async_workfn() for outgoing calls.
(6) Because, for outgoing calls, call->async_work remains pointing to
afs_process_async_call(), this results in an infinite loop.
Instead, make the workqueue uniformly vector through call->async_workfn via
afs_async_workfn(), and simply initialise call->async_workfn to point to
afs_process_async_call() in afs_make_call().
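A sketch of the resulting dispatch (based on the description above; the exact INIT_WORK placement in afs_make_call() is an assumption):

    /* Single workqueue callback: always vector through ->async_workfn. */
    static void afs_async_workfn(struct work_struct *work)
    {
            struct afs_call *call =
                    container_of(work, struct afs_call, async_work);

            call->async_workfn(call);
    }

    /* In afs_make_call(), for asynchronous outgoing calls: */
    call->async_workfn = afs_process_async_call;
    INIT_WORK(&call->async_work, afs_async_workfn);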
Signed-off-by: Nathaniel Wesley Filardo <nwf@cs.jhu.edu>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Split afs_end_call() into two pieces, one of which is identical to code in
afs_process_async_call(). Replace the latter with a call to the first part of
afs_end_call().
Signed-off-by: Nathaniel Wesley Filardo <nwf@cs.jhu.edu>
Signed-off-by: David Howells <dhowells@redhat.com>
PF_LESS_THROTTLE has a very specific use case: to avoid deadlocks
and live-locks while writing to the page cache in a loop-back
NFS mount situation.
It therefore makes sense to *only* set PF_LESS_THROTTLE in this
situation.
We now know when a request came from the local-host so it could be a
loop-back mount. We already know when we are handling write requests,
and when we are doing anything else.
So combine those two to allow nfsd to still be throttled (like any
other process) in every situation except when it is known to be
problematic.
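In the nfsd write path this presumably ends up as something like the following sketch (the rq_local flag is assumed from the patch that marks loop-back requests, and the exact save/restore of the flag may differ):

    unsigned int pflags = current->flags;

    if (rqstp->rq_local)
            /* The write comes from a loop-back NFS mount on this host,
             * so avoid throttling that could deadlock on our own dirty
             * pages. */
            current->flags |= PF_LESS_THROTTLE;

    host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset);

    /* Restore the previous throttling behaviour. */
    current->flags = (current->flags & ~PF_LESS_THROTTLE) |
                     (pflags & PF_LESS_THROTTLE);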
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
No need for a kmem_cache_destroy wrapper in nfsd; just do proper
goto-based unwinding.
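The usual pattern looks something like this sketch (slab names and sizes here are illustrative, not the actual nfsd diff):

    static int nfsd4_init_slabs(void)
    {
            openowner_slab = kmem_cache_create("nfsd4_openowners",
                            sizeof(struct nfs4_openowner), 0, 0, NULL);
            if (!openowner_slab)
                    goto out;
            lockowner_slab = kmem_cache_create("nfsd4_lockowners",
                            sizeof(struct nfs4_lockowner), 0, 0, NULL);
            if (!lockowner_slab)
                    goto out_free_openowner;
            return 0;

    out_free_openowner:
            kmem_cache_destroy(openowner_slab);
    out:
            return -ENOMEM;
    }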
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Assignments should not happen inside an if conditional, but in the line
before. This issue was reported by checkpatch.
The semantic patch that makes this change is as follows
(http://coccinelle.lip6.fr/):
// <smpl>
@@
identifier i1;
expression e1;
statement S;
@@
-if(!(i1 = e1)) S
+i1 = e1;
+if(!i1)
+S
// </smpl>
It has been tested by compilation.
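As a concrete (hypothetical) example of what the rule does to a call site:

    /* before */
    if (!(entry = kmalloc(sizeof(*entry), GFP_KERNEL)))
            return -ENOMEM;

    /* after */
    entry = kmalloc(sizeof(*entry), GFP_KERNEL);
    if (!entry)
            return -ENOMEM;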
Signed-off-by: Benoit Taine <benoit.taine@lip6.fr>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
When ext3 is used in data=journal mode, syncing the filesystem makes sure
all the data is committed in the journal, but the data doesn't have to be
checkpointed. ext3_freeze() then takes care of checkpointing all the data,
so all buffer heads are clean but pages can still have dangling dirty bits.
So when the flusher thread comes along later, while the filesystem is
frozen, it tries to write back dirty pages, ext3_journalled_writepage()
tries to start a transaction, and it hangs waiting for the frozen fs. That
causes a deadlock, because a holder of the s_umount semaphore may be
waiting for the flusher thread to complete.
The fix is luckily relatively easy. We don't have to start a transaction
in ext3_journalled_writepage() when a page is just dirty (and doesn't
have PageChecked set), because in that case all buffers should already be
mapped (mapping must happen before writing a buffer to the journal) and
it is enough to write them out. This optimization also solves the deadlock,
because block_write_full_page() will just find there is no buffer to
write and do nothing.
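In ext3_journalled_writepage() the key change is presumably along these lines (a simplified sketch; locking and error handling omitted):

    if (!PageChecked(page))
            /*
             * The page was dirtied but not journalled: its buffers are
             * already mapped, so just write them out without starting a
             * transaction (and without blocking on a frozen fs).
             */
            return block_write_full_page(page, ext3_get_block, wbc);

    /* PageChecked is set: journal the data under a transaction, as before. */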
Signed-off-by: Jan Kara <jack@suse.cz>
This simple patch adds support for raid6 to the ORE.
Most operations and calculations were already done for the general
case. The only things left:
* call async_gen_syndrome() in the case of raid6
(NOTE that the raid6 math is the one supported by the Linux kernel;
see crypto/async_tx/async_pq.c)
* call _ore_add_parity_unit() twice, with only the last call generating
the redundancy pages.
* Fix a couple of bugs in the old code:
a. In reads when parity==2 it can happen that per_dev->length=0
but per_dev->offset was set and adjusted by _ore_add_sg_seg().
Don't let it be overwritten.
b. The whole 'cur_comp > starting_dev' test to determine whether
"per_dev->offset is in the current stripe number or the next one"
was a raid5/4 accident. When parity==2 this is usually not true
at all. All we need to do is increment si->obj_offset once we
pass by the first parity device.
(This also greatly simplifies the code, amen)
c. Calculation of si->dev rotation can overflow when parity==2.
* Finally, enable raid6 in ore_verify_layout()
I want to deeply thank Daniel Gryniewicz, who first found all the
bugs in the old raid code and inspired these patches:
Inspired-by: Daniel Gryniewicz <dang@linuxbox.com>
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Two cleanups:
* si->cur_comp and si->cur_pg were always calculated after
the call to ore_calc_stripe_info() with the help of
_dev_order(...). But these are already calculated by
ore_calc_stripe_info() and can just be set there.
(This is left over from the time when si->cur_comp and si->cur_pg
were only used by the raid code, but now the main loop manages
them anyway even though they are ultimately not used in
non-raid code)
* si->cur_comp - for the very-last-stripe case this was set inside
_ore_add_parity_unit(). This is not clear and would be wrong for
the coming raid6, so move it to the only caller. Now si->cur_comp
is only manipulated within _prepare_for_striping(), always next
to the manipulation of cur_dev, which is much easier to
understand and follow.
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Merge x86/espfix into x86/vdso, due to changes in the vdso setup code
that otherwise cause conflicts.
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Pull two btrfs fixes from Chris Mason:
"This has two fixes that we've been testing for 3.16, but since both
are safe and fix real bugs, it makes sense to send for 3.15 instead"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
Btrfs: send, fix incorrect ref access when using extrefs
Btrfs: fix EIO on reading file after ioctl clone works on it
Prior to commit 0e4f6a791b (Fix reiserfs_file_release()), reiserfs
truncates serialized on i_mutex. They mostly still do, with the exception
of reiserfs_file_release. That blocks out other writers via the tailpack
mutex and the inode openers counter adjusted in reiserfs_file_open.
However, NFS will call reiserfs_setattr without having called ->open, so
we end up with a race when nfs is calling ->setattr while another
process is releasing the file. Ultimately, it triggers the
BUG_ON(inode->i_size != new_file_size) check in maybe_indirect_to_direct.
The solution is to pull the lock into reiserfs_setattr to encompass the
truncate_setsize call as well.
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Merge tag 'xfs-for-linus-3.15-rc6' of git://oss.sgi.com/xfs/xfs
Pull xfs fixes from Dave Chinner:
"Code inspection of the XFS error number sign translations found a
bunch of issues, including returning incorrectly signed errors for
some data integrity operations.
These leak to userspace and result in applications not getting the
errors correctly reported. Hence they need fixing sooner rather than
later.
A couple of the bugs are in data integrity operations, a couple more
are in the new COLLAPSE_RANGE code. One of these came in through a
recent ext4 merge and so I had to update the base tree to 3.15-rc5
before fixing the issues"
* tag 'xfs-for-linus-3.15-rc6' of git://oss.sgi.com/xfs/xfs:
xfs: list_lru_init returns a negative error
xfs: negate xfs_icsb_init_counters error value
xfs: negate mount workqueue init error value
xfs: fix wrong err sign on xfs_set_acl()
xfs: fix wrong errno from xfs_initxattrs
xfs: correct error sign on COLLAPSE_RANGE errors
xfs: xfs_commit_metadata returns wrong errno
xfs: fix incorrect error sign in xfs_file_aio_read
xfs: xfs_dir_fsync() returns positive errno
Dan Carpenter says:
The patch 04febabcf5: "cifs: sanitize username handling" from Jan
17, 2012, leads to the following static checker warning:
fs/cifs/connect.c:2231 match_session()
error: we previously assumed 'vol->username' could be null (see line 2228)
fs/cifs/connect.c
2219        /* NULL username means anonymous session */
2220        if (ses->user_name == NULL) {
2221                if (!vol->nullauth)
2222                        return 0;
2223                break;
2224        }
2225
2226        /* anything else takes username/password */
2227        if (strncmp(ses->user_name,
2228                    vol->username ? vol->username : "",
                        ^^^^^^^^^^^^^
We added this check for vol->username here.

2229                    CIFS_MAX_USERNAME_LEN))
2230                return 0;
2231        if (strlen(vol->username) != 0 &&
                ^^^^^^^^^^^^^
But this dereference is not checked.

2232            ses->password != NULL &&
2233            strncmp(ses->password,
2234                    vol->password ? vol->password : "",
2235                    CIFS_MAX_PASSWORD_LEN))
2236                return 0;
...fix this by ensuring that vol->username is not NULL before running
strlen on it.
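A sketch of the described fix in match_session() (only the relevant condition; whitespace differs from the real diff):

    if (vol->username && strlen(vol->username) != 0 &&
        ses->password != NULL &&
        strncmp(ses->password,
                vol->password ? vol->password : "",
                CIFS_MAX_PASSWORD_LEN))
            return 0;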
Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Clarify the comments for create contexts that we do send, fix a typo
in one create context definition, and add newer SMB3 create contexts
to the list.
Signed-off-by: Steve French <smfrench@gmail.com>
ClientGUID must be zero for SMB2.02 dialect. See section 2.2.3
of MS-SMB2. For SMB2.1 and later it must be non-zero.
Signed-off-by: Steve French <smfrench@gmail.com>
CC: Sachin Prabhu <sprabhu@redhat.com>
When mounting from a Windows 2012R2 server, we hit the following
problem:
1) Mount with any of the following versions - 2.0, 2.1 or 3.0
2) unmount
3) Attempt a mount again using a different SMB version >= 2.0.
You end up with the following failure:
Status code returned 0xc0000203 STATUS_USER_SESSION_DELETED
CIFS VFS: Send error in SessSetup = -5
CIFS VFS: cifs_mount failed w/return code = -5
I cannot reproduce this issue using a Windows 2008 R2 server.
This appears to be caused by using the same client GUID for the
connection on the first mount, which we then disconnect and attempt to
mount again using a different protocol version. By generating a new GUID
each time a new connection is negotiated, we avoid hitting this problem.
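Presumably something along these lines when a new connection is set up (a sketch; where the GUID is stored and when it is generated are assumptions based on the description):

    /* Give each TCP connection its own GUID... */
    generate_random_uuid(server->client_guid);

    /* ...which SMB2_negotiate() then sends: */
    memcpy(req->ClientGUID, server->client_guid, SMB2_CLIENT_GUID_SIZE);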
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Also fixes an array checkpatch warning and converts the array to static const
(suggested by Joe Perches).
Cc: Joe Perches <joe@perches.com>
Cc: Steve French <sfrench@samba.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Steve French <smfrench@gmail.com>
Replace seq_printf where possible
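Presumably this converts calls with constant format strings and no arguments to seq_puts, e.g. (hypothetical line):

    /* before */
    seq_printf(m, "\n\tShares:");
    /* after */
    seq_puts(m, "\n\tShares:");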
Cc: Steve French <sfrench@samba.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Steve French <smfrench@gmail.com>
The handling of the CIFS_INO_INVALID_MAPPING flag is racy. It's possible
for two tasks to attempt to revalidate the mapping at the same time. The
first sees that CIFS_INO_INVALID_MAPPING is set. It clears the flag and
then calls invalidate_inode_pages2 to start shooting down the pagecache.
While that's going on, another task checks the flag and sees that it's
clear. It then ends up trusting the pagecache to satisfy a read when it
shouldn't.
Fix this by adding a bitlock to ensure that the clearing of the flag is
atomic with respect to the actual cache invalidation. Also, move the
other existing users of cifs_invalidate_mapping to use a new
cifs_zap_mapping() function that just sets the INVALID_MAPPING bit and
then uses the standard codepath to handle the invalidation.
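The new helper is presumably little more than this sketch (cifs_revalidate_mapping is assumed to be the "standard codepath" referred to above):

    void cifs_zap_mapping(struct inode *inode)
    {
            set_bit(CIFS_INO_INVALID_MAPPING, &CIFS_I(inode)->flags);
            cifs_revalidate_mapping(inode);
    }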
Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Steve French <smfrench@gmail.com>
Consolidate a bit of code. In a later patch we'll expand this to fix
some races.
Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Steve French <smfrench@gmail.com>
In later patches, we'll need to have a bitlock, so go ahead and convert
these bools to use atomic bitops instead.
Also, clean up the initialization of the flags field. There's no need
to unset each bit individually just after it was zeroed on allocation.
Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Steve French <smfrench@gmail.com>
Currently, when the top and bottom 32-bit words are equivalent and the
host is a 32-bit arch, cifs_uniqueid_to_ino_t returns 0 as the ino_t
value. All we're doing to hash the value down to 32 bits is xor'ing the
top and bottom 32-bit words and that obviously results in 0 if they are
equivalent.
The kernel doesn't really care if it returns this value, but some
userland apps (like "ls") will ignore dirents that have a zero d_ino
value.
Change this function to use hash_64 to convert this value to a 31 bit
value and then add 1 to ensure that it doesn't ever return 0. Also,
there's no need to check sizeof(ino_t) at runtime, so create two
different cifs_uniqueid_to_ino_t functions based on whether
BITS_PER_LONG is 64 or not.
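Roughly, as a sketch of the described approach (the exact placement in the cifs headers may differ):

    #include <linux/hash.h>

    #if BITS_PER_LONG == 64
    static inline ino_t cifs_uniqueid_to_ino_t(u64 fileid)
    {
            return (ino_t)fileid;
    }
    #else
    static inline ino_t cifs_uniqueid_to_ino_t(u64 fileid)
    {
            /* Hash down to 31 bits and add 1 so we never return 0. */
            return (ino_t)hash_64(fileid, (sizeof(ino_t) * 8) - 1) + 1;
    }
    #endif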
This should fix:
https://bugzilla.kernel.org/show_bug.cgi?id=19282
Reported-by: Eric <copet_eric@emc.com>
Reported-by: <per-ola@sadata.se>
Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Steve French <smfrench@gmail.com>
We're not cleaning up everything we need to on error. In particular,
we're not removing our lease. Among other problems this can cause the
struct nfs4_file used as fl_owner to be referenced after it has been
destroyed.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
We're clearing the SUID/SGID bits on write by hand in nfsd_vfs_write,
even though the subsequent vfs_writev() call will end up doing this for
us (through file system write methods eventually calling
file_remove_suid(), e.g., from __generic_file_aio_write).
So, remove the redundant nfsd code.
The only change in behavior is when the write is by root, in which case
we previously cleared SUID/SGID, but will now leave it alone. The new
behavior is the behavior of every filesystem we've checked.
It seems better to be consistent with local filesystem behavior. And
the security advantage seems limited as root could always restore these
bits by hand if it wanted.
SUID/SGID is not cleared after writing data as root to a local ext4 filesystem:
File: ‘test’
Size: 0 Blocks: 0 IO Block: 4096 regular
empty file
Device: 803h/2051d Inode: 1200137 Links: 1
Access: (4777/-rwsrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2014-04-18 21:36:31.016029014 +0800
Modify: 2014-04-18 21:36:31.016029014 +0800
Change: 2014-04-18 21:36:31.026030285 +0800
Birth: -
File: ‘test’
Size: 5 Blocks: 8 IO Block: 4096 regular file
Device: 803h/2051d Inode: 1200137 Links: 1
Access: (4777/-rwsrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2014-04-18 21:36:31.016029014 +0800
Modify: 2014-04-18 21:36:31.040032065 +0800
Change: 2014-04-18 21:36:31.040032065 +0800
Birth: -
With no_root_squash, writing as root to a remote ext4 filesystem over NFS, SUID/SGID are cleared:
File: ‘test’
Size: 0 Blocks: 0 IO Block: 262144 regular
empty file
Device: 24h/36d Inode: 786439 Links: 1
Access: (4777/-rwsrwxrwx) Uid: ( 1000/ test) Gid: ( 1000/ test)
Context: system_u:object_r:nfs_t:s0
Access: 2014-04-18 21:45:32.155805097 +0800
Modify: 2014-04-18 21:45:32.155805097 +0800
Change: 2014-04-18 21:45:32.168806749 +0800
Birth: -
File: ‘test’
Size: 5 Blocks: 8 IO Block: 262144 regular file
Device: 24h/36d Inode: 786439 Links: 1
Access: (0777/-rwxrwxrwx) Uid: ( 1000/ test) Gid: ( 1000/ test)
Context: system_u:object_r:nfs_t:s0
Access: 2014-04-18 21:45:32.155805097 +0800
Modify: 2014-04-18 21:45:32.184808783 +0800
Change: 2014-04-18 21:45:32.184808783 +0800
Birth: -
Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The current code assumes a one-to-one lockowner<->lock stateid
correspondence.
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The nfsv4 state code has always assumed a one-to-one correspondence
between lock stateids and lockowners even if it appears not to in some
places.
We may actually change that, but for now when FREE_STATEID releases a
lock stateid it also needs to release the parent lockowner.
Symptoms were a subsequent LOCK crashing in find_lockowner_str when it
calls same_lockowner_ino on a lockowner that unexpectedly has an empty
so_stateids list.
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Fix the cache manager RPC service handlers. The afs_send_empty_reply() and
afs_send_simple_reply() functions:
(a) Kill the call and free up the buffers associated with it if they fail.
(b) Return with the call intact if they succeed.
However, none of the callers actually check the result or clean up if
successful - and they may use the now non-existent data if the send fails.
This was detected by Dan Carpenter using a static checker:
The patch 08e0e7c82e: "[AF_RXRPC]: Make the in-kernel AFS
filesystem use AF_RXRPC." from Apr 26, 2007, leads to the following
static checker warning:
"fs/afs/cmservice.c:155 SRXAFSCB_CallBack()
warn: 'call' was already freed."
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Merge tag 'driver-core-3.15-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core fixes from Greg KH:
"Here are two driver core (well, sysfs) fixes for 3.15-rc6 that resolve
some reported issues and a regression from 3.13"
* tag 'driver-core-3.15-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
sysfs: make sure read buffer is zeroed
kernfs, sysfs, cgroup: restrict extra perm check on open to sysfs
journal_init_revoke_table() is only called with a positive hash_size
(JOURNAL_REVOKE_DEFAULT_HASH), so we can replace the shift loop with ilog2().
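I.e., something like this sketch of the old loop and its replacement:

    /* before: compute shift by hand */
    shift = 0;
    tmp = hash_size;
    while ((tmp >>= 1UL) != 0UL)
            shift++;

    /* after (ilog2() comes from <linux/log2.h>): */
    shift = ilog2(hash_size);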
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Jan Kara <jack@suse.cz>
arch_vma_name sucks. It's a silly hack, and it's annoying to
implement correctly. In fact, AFAICS, even the straightforward x86
implementation is incorrect (I suspect that it breaks if the vdso
mapping is split or gets remapped).
This adds a new vm_ops->name operation that can replace it. The
followup patches will remove all uses of arch_vma_name on x86,
fixing a couple of annoyances in the process.
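A minimal sketch of what an implementation of the new hook could look like (the vdso naming here is illustrative, not taken from the followup patches):

    static const char *vdso_vma_name(struct vm_area_struct *vma)
    {
            return "[vdso]";
    }

    static const struct vm_operations_struct vdso_vm_ops = {
            .name = vdso_vma_name,
    };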
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/2eee21791bb36a0a408c5c2bdb382a9e6a41ca4a.1400538962.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>