fscrypt currently only supports AES encryption. However, many low-end mobile devices have older CPUs that don't have AES instructions, e.g. the ARMv8 Cryptography Extensions. Currently, user data on such devices is not encrypted at rest because AES is too slow, even when the NEON bit-sliced implementation of AES is used; it is simply infeasible to encrypt these devices at all when AES is the only option.

Therefore, this patch updates fscrypt to support the Speck block cipher, which was recently added to the crypto API. The C implementation of Speck is not especially fast, but Speck can be implemented very efficiently with general-purpose vector instructions, e.g. ARM NEON. For example, on an ARMv7 processor, we measured the NEON-accelerated Speck128/256-XTS at 69 MB/s for both encryption and decryption, while AES-256-XTS with the NEON bit-sliced implementation was only 22 MB/s for encryption and 19 MB/s for decryption.

There are multiple variants of Speck. This patch only adds support for Speck128/256, which is the variant with a 128-bit block size and 256-bit key size -- the same as AES-256. This is believed to be the most secure variant of Speck, and it's only about 6% slower than Speck128/128. Speck64/128 would be at least 20% faster because it has 20% fewer rounds, and it can be even faster on CPUs that can't efficiently do the 64-bit operations needed for Speck128. However, Speck64's 64-bit block size is not preferred security-wise, and ARM NEON supports the needed 64-bit operations even on 32-bit CPUs, so Speck128 is fast enough for the use cases we have targeted so far.

The chosen modes of operation are XTS for contents and CTS-CBC for filenames, the same modes of operation that fscrypt defaults to for AES. Note that as with the other fscrypt modes, Speck will not be used unless userspace chooses to use it; nor are any of the existing (all AES-based) modes being removed, of course.

We intentionally don't make CONFIG_FS_ENCRYPTION select CONFIG_CRYPTO_SPECK, so people will have to enable Speck support themselves if they need it. This is because we shouldn't bloat the FS_ENCRYPTION dependencies with every new cipher, especially ones that aren't recommended for most users. Moreover, CRYPTO_SPECK is just the generic implementation, which won't be fast enough for many users; in practice, they'll need to enable CRYPTO_SPECK_NEON to get acceptable performance.

More details about our choice of Speck can be found in our patches that added Speck to the crypto API, and in the follow-on discussion threads. We're planning a publication that explains the choice in more detail. Briefly, we can't use ChaCha20 as we previously proposed, since it would be insecure to use a stream cipher in this context, given the potential for IV reuse during writes on f2fs and/or on wear-leveling flash storage. We also evaluated many other lightweight and/or ARX-based block ciphers such as Chaskey-LTS, RC5, LEA, CHAM, Threefish, RC6, NOEKEON, SPARX, and XTEA, but all had disadvantages relative to Speck, such as insufficient performance with NEON, much less published cryptanalysis, or an insufficient security level. Various design choices in Speck make it perform better with NEON than competing ciphers while still having a security margin similar to AES, and in the case of Speck128, the same available security levels.
Unfortunately, Speck does have some political baggage attached -- it's an NSA-designed cipher, and it was rejected from an ISO standard (though for context, as far as I know none of the above-mentioned alternatives are ISO standards either). Nevertheless, we believe it is a good solution to the problem from a technical perspective.

Certain algorithms constructed from ChaCha or the ChaCha permutation, such as MEM (Masked Even-Mansour) or HPolyC, may also meet our performance requirements. However, these are new constructions that need more time to receive the cryptographic review and acceptance needed to be confident in their security. HPolyC hasn't been published yet, and we are concerned that MEM makes stronger assumptions about the underlying permutation than the ChaCha stream cipher does. In contrast, the XTS mode of operation is relatively well accepted, and Speck has over 70 cryptanalysis papers. Of course, these ChaCha-based algorithms can still be added later if they become ready.

The best known attack on Speck128/256 is a differential cryptanalysis attack on 25 of 34 rounds with 2^253 time complexity and 2^125 chosen plaintexts, i.e. only marginally faster than brute force. There is no known attack on the full 34 rounds.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
(cherry-picked from commit 12d28f79558f2e987c5f3817f89e1ccc0f11a7b5,
 https://git.kernel.org/pub/scm/linux/kernel/git/tytso/fscrypt.git master)
(dropped Documentation/filesystems/fscrypt.rst change)
(fixed merge conflict in fs/crypto/keyinfo.c)
(also ported change to fs/ext4/, which isn't using fs/crypto/ in this kernel version)
Change-Id: I62c632044dfd06a2c5b74c2fb058f9c3b8af0add
Signed-off-by: Eric Biggers <ebiggers@google.com>
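
For context (not part of this patch), the new modes are selected by userspace through the v1 encryption policy ioctl, just like the existing AES modes. Below is a minimal sketch; it assumes the UAPI <linux/fs.h> in use exposes struct fscrypt_policy, FS_IOC_SET_ENCRYPTION_POLICY, and the new FS_ENCRYPTION_MODE_SPECK128_256_{XTS,CTS} constants, and the helper name set_speck_policy() is made up for illustration.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include <linux/fs.h>

/*
 * Illustrative sketch only: mark an empty directory as encrypted with
 * Speck128/256-XTS for contents and Speck128/256 CTS-CBC for filenames.
 * 'key_desc' must match the descriptor under which the master key was
 * added to the kernel keyring.
 */
static int set_speck_policy(const char *dir,
                            const __u8 key_desc[FS_KEY_DESCRIPTOR_SIZE])
{
        struct fscrypt_policy policy;
        int fd, ret;

        memset(&policy, 0, sizeof(policy));
        policy.version = 0;                      /* v1 policy format */
        policy.contents_encryption_mode = FS_ENCRYPTION_MODE_SPECK128_256_XTS;
        policy.filenames_encryption_mode = FS_ENCRYPTION_MODE_SPECK128_256_CTS;
        policy.flags = FS_POLICY_FLAGS_PAD_32;   /* pad filenames to 32 bytes */
        memcpy(policy.master_key_descriptor, key_desc, FS_KEY_DESCRIPTOR_SIZE);

        fd = open(dir, O_RDONLY | O_DIRECTORY);
        if (fd < 0)
                return -1;
        ret = ioctl(fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
        close(fd);
        return ret;
}

As with the AES modes, the directory must be empty when the policy is set, and the master key must already have been added to the keyring under the matching descriptor (see validate_user_key() in the code below).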
/*
 * key management facility for FS encryption support.
 *
 * Copyright (C) 2015, Google, Inc.
 *
 * This contains encryption key functions.
 *
 * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
 */

#include <keys/user-type.h>
#include <linux/scatterlist.h>
#include <linux/ratelimit.h>
#include <crypto/aes.h>
#include <crypto/sha.h>
#include <crypto/skcipher.h>
#include "fscrypt_private.h"

static struct crypto_shash *essiv_hash_tfm;

/**
 * derive_key_aes() - Derive a key using AES-128-ECB
 * @deriving_key: Encryption key used for derivation.
 * @source_key: Source key to which to apply derivation.
 * @derived_raw_key: Derived raw key.
 *
 * Return: Zero on success; non-zero otherwise.
 */
static int derive_key_aes(u8 deriving_key[FS_AES_128_ECB_KEY_SIZE],
                          const struct fscrypt_key *source_key,
                          u8 derived_raw_key[FS_MAX_KEY_SIZE])
{
        int res = 0;
        struct skcipher_request *req = NULL;
        DECLARE_CRYPTO_WAIT(wait);
        struct scatterlist src_sg, dst_sg;
        struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);

        if (IS_ERR(tfm)) {
                res = PTR_ERR(tfm);
                tfm = NULL;
                goto out;
        }
        crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
        req = skcipher_request_alloc(tfm, GFP_NOFS);
        if (!req) {
                res = -ENOMEM;
                goto out;
        }
        skcipher_request_set_callback(req,
                        CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
                        crypto_req_done, &wait);
        res = crypto_skcipher_setkey(tfm, deriving_key,
                                     FS_AES_128_ECB_KEY_SIZE);
        if (res < 0)
                goto out;

        sg_init_one(&src_sg, source_key->raw, source_key->size);
        sg_init_one(&dst_sg, derived_raw_key, source_key->size);
        skcipher_request_set_crypt(req, &src_sg, &dst_sg, source_key->size,
                                   NULL);
        res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
        skcipher_request_free(req);
        crypto_free_skcipher(tfm);
        return res;
}
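
/*
 * Note: the derived (per-file) key computed above is the master key encrypted
 * with AES-128-ECB, using the 16-byte per-inode nonce from the on-disk
 * context as the AES key.  The output is the same length as the master key;
 * the caller later uses only the first 'keysize' bytes for the chosen mode.
 */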

static int validate_user_key(struct fscrypt_info *crypt_info,
                             struct fscrypt_context *ctx, u8 *raw_key,
                             const char *prefix, int min_keysize)
{
        char *description;
        struct key *keyring_key;
        struct fscrypt_key *master_key;
        const struct user_key_payload *ukp;
        int res;

        description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
                                FS_KEY_DESCRIPTOR_SIZE,
                                ctx->master_key_descriptor);
        if (!description)
                return -ENOMEM;

        keyring_key = request_key(&key_type_logon, description, NULL);
        kfree(description);
        if (IS_ERR(keyring_key))
                return PTR_ERR(keyring_key);
        down_read(&keyring_key->sem);

        if (keyring_key->type != &key_type_logon) {
                printk_once(KERN_WARNING
                            "%s: key type must be logon\n", __func__);
                res = -ENOKEY;
                goto out;
        }
        ukp = user_key_payload(keyring_key);
        if (!ukp) {
                /* key was revoked before we acquired its semaphore */
                res = -EKEYREVOKED;
                goto out;
        }
        if (ukp->datalen != sizeof(struct fscrypt_key)) {
                res = -EINVAL;
                goto out;
        }
        master_key = (struct fscrypt_key *)ukp->data;
        BUILD_BUG_ON(FS_AES_128_ECB_KEY_SIZE != FS_KEY_DERIVATION_NONCE_SIZE);

        if (master_key->size < min_keysize || master_key->size > FS_MAX_KEY_SIZE
            || master_key->size % AES_BLOCK_SIZE != 0) {
                printk_once(KERN_WARNING
                            "%s: key size incorrect: %d\n",
                            __func__, master_key->size);
                res = -ENOKEY;
                goto out;
        }
        res = derive_key_aes(ctx->nonce, master_key, raw_key);
out:
        up_read(&keyring_key->sem);
        key_put(keyring_key);
        return res;
}
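
/*
 * The master key is looked up in the calling process's keyrings by
 * description: the given prefix (FS_KEY_DESC_PREFIX, or a filesystem-specific
 * prefix) followed by the context's key descriptor rendered as hex.  The key
 * must be a "logon" key whose payload is a struct fscrypt_key holding the raw
 * master key.
 */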

static const struct {
        const char *cipher_str;
        int keysize;
} available_modes[] = {
        [FS_ENCRYPTION_MODE_AES_256_XTS] = { "xts(aes)",
                                             FS_AES_256_XTS_KEY_SIZE },
        [FS_ENCRYPTION_MODE_AES_256_CTS] = { "cts(cbc(aes))",
                                             FS_AES_256_CTS_KEY_SIZE },
        [FS_ENCRYPTION_MODE_AES_128_CBC] = { "cbc(aes)",
                                             FS_AES_128_CBC_KEY_SIZE },
        [FS_ENCRYPTION_MODE_AES_128_CTS] = { "cts(cbc(aes))",
                                             FS_AES_128_CTS_KEY_SIZE },
        [FS_ENCRYPTION_MODE_SPECK128_256_XTS] = { "xts(speck128)", 64 },
        [FS_ENCRYPTION_MODE_SPECK128_256_CTS] = { "cts(cbc(speck128))", 32 },
};
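
/*
 * Key sizes above are in bytes.  The XTS entries are double the cipher's key
 * size because XTS splits its key into two halves (64 bytes gives two 256-bit
 * keys for xts(aes) and xts(speck128)), while the CBC and CTS-CBC entries use
 * a single key (e.g. 32 bytes for a 256-bit key).
 */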

static int determine_cipher_type(struct fscrypt_info *ci, struct inode *inode,
                                 const char **cipher_str_ret, int *keysize_ret)
{
        u32 mode;

        if (!fscrypt_valid_enc_modes(ci->ci_data_mode, ci->ci_filename_mode)) {
                pr_warn_ratelimited("fscrypt: inode %lu uses unsupported encryption modes (contents mode %d, filenames mode %d)\n",
                                    inode->i_ino,
                                    ci->ci_data_mode, ci->ci_filename_mode);
                return -EINVAL;
        }

        if (S_ISREG(inode->i_mode)) {
                mode = ci->ci_data_mode;
        } else if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) {
                mode = ci->ci_filename_mode;
        } else {
                WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
                          inode->i_ino, (inode->i_mode & S_IFMT));
                return -EINVAL;
        }

        *cipher_str_ret = available_modes[mode].cipher_str;
        *keysize_ret = available_modes[mode].keysize;
        return 0;
}

static void put_crypt_info(struct fscrypt_info *ci)
{
        if (!ci)
                return;

        crypto_free_skcipher(ci->ci_ctfm);
        crypto_free_cipher(ci->ci_essiv_tfm);
        kmem_cache_free(fscrypt_info_cachep, ci);
}

static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt)
{
        struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm);

        /* init hash transform on demand */
        if (unlikely(!tfm)) {
                struct crypto_shash *prev_tfm;

                tfm = crypto_alloc_shash("sha256", 0, 0);
                if (IS_ERR(tfm)) {
                        pr_warn_ratelimited("fscrypt: error allocating SHA-256 transform: %ld\n",
                                            PTR_ERR(tfm));
                        return PTR_ERR(tfm);
                }
                prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm);
                if (prev_tfm) {
                        crypto_free_shash(tfm);
                        tfm = prev_tfm;
                }
        }

        {
                SHASH_DESC_ON_STACK(desc, tfm);
                desc->tfm = tfm;
                desc->flags = 0;

                return crypto_shash_digest(desc, key, keysize, salt);
        }
}
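
/*
 * The salt computed by derive_essiv_salt() is SHA-256 of the derived contents
 * key; it keys the ESSIV cipher set up below.  With ESSIV, the per-block IV
 * (the logical block number) is encrypted with that cipher elsewhere in the
 * contents en/decryption path, so IVs are unpredictable even though block
 * numbers are not.
 */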

static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key,
                                int keysize)
{
        int err;
        struct crypto_cipher *essiv_tfm;
        u8 salt[SHA256_DIGEST_SIZE];

        essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
        if (IS_ERR(essiv_tfm))
                return PTR_ERR(essiv_tfm);

        ci->ci_essiv_tfm = essiv_tfm;

        err = derive_essiv_salt(raw_key, keysize, salt);
        if (err)
                goto out;

        /*
         * Using SHA256 to derive the salt/key will result in AES-256 being
         * used for IV generation. File contents encryption will still use the
         * configured keysize (AES-128) nevertheless.
         */
        err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt));
        if (err)
                goto out;

out:
        memzero_explicit(salt, sizeof(salt));
        return err;
}

void __exit fscrypt_essiv_cleanup(void)
{
        crypto_free_shash(essiv_hash_tfm);
}

int fscrypt_get_encryption_info(struct inode *inode)
{
        struct fscrypt_info *crypt_info;
        struct fscrypt_context ctx;
        struct crypto_skcipher *ctfm;
        const char *cipher_str;
        int keysize;
        u8 *raw_key = NULL;
        int res;

        if (inode->i_crypt_info)
                return 0;

        res = fscrypt_initialize(inode->i_sb->s_cop->flags);
        if (res)
                return res;

        res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
        if (res < 0) {
                if (!fscrypt_dummy_context_enabled(inode) ||
                    IS_ENCRYPTED(inode))
                        return res;
                /* Fake up a context for an unencrypted directory */
                memset(&ctx, 0, sizeof(ctx));
                ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
                ctx.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
                ctx.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
                memset(ctx.master_key_descriptor, 0x42, FS_KEY_DESCRIPTOR_SIZE);
        } else if (res != sizeof(ctx)) {
                return -EINVAL;
        }

        if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
                return -EINVAL;

        if (ctx.flags & ~FS_POLICY_FLAGS_VALID)
                return -EINVAL;

        crypt_info = kmem_cache_alloc(fscrypt_info_cachep, GFP_NOFS);
        if (!crypt_info)
                return -ENOMEM;

        crypt_info->ci_flags = ctx.flags;
        crypt_info->ci_data_mode = ctx.contents_encryption_mode;
        crypt_info->ci_filename_mode = ctx.filenames_encryption_mode;
        crypt_info->ci_ctfm = NULL;
        crypt_info->ci_essiv_tfm = NULL;
        memcpy(crypt_info->ci_master_key, ctx.master_key_descriptor,
               sizeof(crypt_info->ci_master_key));

        res = determine_cipher_type(crypt_info, inode, &cipher_str, &keysize);
        if (res)
                goto out;

        /*
         * This cannot be a stack buffer because it is passed to the scatterlist
         * crypto API as part of key derivation.
         */
        res = -ENOMEM;
        raw_key = kmalloc(FS_MAX_KEY_SIZE, GFP_NOFS);
        if (!raw_key)
                goto out;

        res = validate_user_key(crypt_info, &ctx, raw_key, FS_KEY_DESC_PREFIX,
                                keysize);
        if (res && inode->i_sb->s_cop->key_prefix) {
                int res2 = validate_user_key(crypt_info, &ctx, raw_key,
                                             inode->i_sb->s_cop->key_prefix,
                                             keysize);
                if (res2) {
                        if (res2 == -ENOKEY)
                                res = -ENOKEY;
                        goto out;
                }
        } else if (res) {
                goto out;
        }
        ctfm = crypto_alloc_skcipher(cipher_str, 0, 0);
        if (!ctfm || IS_ERR(ctfm)) {
                res = ctfm ? PTR_ERR(ctfm) : -ENOMEM;
                pr_debug("%s: error %d (inode %lu) allocating crypto tfm\n",
                         __func__, res, inode->i_ino);
                goto out;
        }
        crypt_info->ci_ctfm = ctfm;
        crypto_skcipher_clear_flags(ctfm, ~0);
        crypto_skcipher_set_flags(ctfm, CRYPTO_TFM_REQ_WEAK_KEY);
        /*
         * if the provided key is longer than keysize, we use the first
         * keysize bytes of the derived key only
         */
        res = crypto_skcipher_setkey(ctfm, raw_key, keysize);
        if (res)
                goto out;

        if (S_ISREG(inode->i_mode) &&
            crypt_info->ci_data_mode == FS_ENCRYPTION_MODE_AES_128_CBC) {
                res = init_essiv_generator(crypt_info, raw_key, keysize);
                if (res) {
                        pr_debug("%s: error %d (inode %lu) allocating essiv tfm\n",
                                 __func__, res, inode->i_ino);
                        goto out;
                }
        }
        if (cmpxchg(&inode->i_crypt_info, NULL, crypt_info) == NULL)
                crypt_info = NULL;
out:
        if (res == -ENOKEY)
                res = 0;
        put_crypt_info(crypt_info);
        kzfree(raw_key);
        return res;
}
EXPORT_SYMBOL(fscrypt_get_encryption_info);

void fscrypt_put_encryption_info(struct inode *inode, struct fscrypt_info *ci)
{
        struct fscrypt_info *prev;

        if (ci == NULL)
                ci = ACCESS_ONCE(inode->i_crypt_info);
        if (ci == NULL)
                return;

        prev = cmpxchg(&inode->i_crypt_info, ci, NULL);
        if (prev != ci)
                return;

        put_crypt_info(ci);
}
EXPORT_SYMBOL(fscrypt_put_encryption_info);