Merge commit 'refs/changes/41/1640241/2' of ssh://review-android.quicinc.com:29418/kernel/msm-4.4 into kernel.lnx.4.4-160606_new

This commit is contained in:
Kyle Yan 2016-06-07 11:54:28 -07:00
commit 25a0e286ba
42 changed files with 16421 additions and 765 deletions


@ -1,55 +1,187 @@
* Qualcomm SDHCI controller (sdhci-msm)
Qualcomm Technologies, Inc. Standard Secure Digital Host Controller (SDHC)
This file documents differences between the core properties in mmc.txt
and the properties used by the sdhci-msm driver.
The Secure Digital Host Controller provides a standard host interface to SD/MMC/SDIO cards.
Required properties:
- compatible: Should contain "qcom,sdhci-msm-v4".
- reg: Base address and length of the register in the following order:
- Host controller register map (required)
- SD Core register map (required)
- interrupts: Should contain interrupt specifiers for the following interrupts:
- Host controller interrupt (required)
- pinctrl-names: Should contain only one value - "default".
- pinctrl-0: Should specify pin control groups used for this controller.
- clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock-names.
- clock-names: Should contain the following:
"iface" - Main peripheral bus clock (PCLK/HCLK - AHB Bus clock) (required)
"core" - SDC MMC clock (MCLK) (required)
"bus" - SDCC bus voter clock (optional)
- compatible : should be "qcom,sdhci-msm"
- reg : should contain SDHC, SD Core register map.
- reg-names : indicates various resources passed to the driver (via the reg property) by name.
Required "reg-names" are "hc_mem" and "core_mem";
the optional one is "tlmm_mem"
- interrupts : should contain SDHC interrupts.
- interrupt-names : indicates interrupts passed to driver (via interrupts property) by name.
Required "interrupt-names" are "hc_irq" and "pwr_irq".
- <supply-name>-supply: phandle to the regulator device tree node
Required "supply-name" are "vdd" and "vdd-io".
- qcom,ice-clk-rates: this is an array that specifies supported Inline
Crypto Engine (ICE) clock frequencies, Units - Hz.
- sdhc-msm-crypto: phandle to SDHC ICE node
Required alias:
- The slot number is specified via an alias with the following format
'sdhc{n}' where n is the slot number.
Optional Properties:
- interrupt-names - "status_irq". This status_irq will be used for card
detection.
- qcom,bus-width - defines the bus I/O width that the controller supports.
Units - number of bits. The valid bus-width values are
1, 4 and 8.
- qcom,nonremovable - specifies whether the card in slot is
hot pluggable or hard wired.
- qcom,nonhotplug - specifies that the card in this slot is not hot pluggable.
If the card is lost or removed manually at runtime, do not
retry detection until the probe on the next reboot.
- qcom,bus-speed-mode - specifies supported bus speed modes by host.
The supported bus speed modes are :
"HS200_1p8v" - indicates that host can support HS200 at 1.8v.
"HS200_1p2v" - indicates that host can support HS200 at 1.2v.
"DDR_1p8v" - indicates that host can support DDR mode at 1.8v.
"DDR_1p2v" - indicates that host can support DDR mode at 1.2v.
- qcom,devfreq,freq-table - specifies supported frequencies for clock scaling.
Clock scaling logic shall toggle between these frequencies based
on card load. In case the defined frequencies are above or below
the frequencies supported by the card, they will be overridden
during card init. In case this entry is not supplied,
the driver will construct one based on the card's
supported max and min frequencies.
The frequencies must be ordered from lowest to highest.
- qcom,pm-qos-irq-type - the PM QoS request type to be used for IRQ voting.
Can be either "affine_cores" or "affine_irq". If not specified, it will default
to "affine_cores". Use the "affine_irq" setting when an IRQ balancer is active
and IRQ affinity changes at runtime.
- qcom,pm-qos-irq-cpu - specifies the CPU for which IRQ voting shall be done.
If "affine_cores" was specified for property 'qcom,pm-qos-irq-type'
then this property must be defined, and is not relevant otherwise.
- qcom,pm-qos-irq-latency - a tuple defining two latency values with which
PM QoS IRQ voting shall be done. The first value is the latency to be used
when load is high (performance mode) and the second is for low loads
(power saving mode).
- qcom,pm-qos-cpu-groups - defines the cpu groups mapping.
Each cell represents a group, which is a cpu bitmask defining which cpus belong
to that group.
- qcom,pm-qos-<mode>-latency-us - where <mode> is either "cmdq" or "legacy".
An array of latency value tuples, each tuple corresponding to a cpu group in the order
defined in property 'qcom,pm-qos-cpu-groups'. The first value is the latency to be used
when load is high (performance mode) and the second is for low loads
(power saving mode). These values will be used for cpu group voting in
command-queueing or legacy mode, respectively.
- qcom,core_3_0v_support: an optional property that is used to fake
3.0V support for SDIO devices.
- qcom,scaling-lower-bus-speed-mode: specifies the lower bus speed mode to be used
during clock scaling. If this property is not
defined, then it falls back to the default HS
bus speed mode to maintain backward compatibility.
In the following, <supply> can be vdd (flash core voltage) or vdd-io (I/O voltage).
- qcom,<supply>-always-on - specifies whether supply should be kept "on" always.
- qcom,<supply>-lpm_sup - specifies whether supply can be kept in low power mode (lpm).
- qcom,<supply>-voltage_level - specifies voltage levels for supply. Should be
specified in pairs (min, max), units uV.
- qcom,<supply>-current_level - specifies load levels for supply in lpm or
high power mode (hpm). Should be specified in
pairs (lpm, hpm), units uA.
- gpios - specifies gpios assigned for sdhc slot.
- qcom,gpio-names - a list of strings that map in order to the list of gpios
A slot has either gpios or dedicated tlmm pins as represented below.
- qcom,pad-pull-on - Active pull configuration for sdc tlmm pins
- qcom,pad-pull-off - Suspend pull configuration for sdc tlmm pins.
- qcom,pad-drv-on - Active drive strength configuration for sdc tlmm pins.
- qcom,pad-drv-off - Suspend drive strength configuration for sdc tlmm pins.
Tlmm pins are specified as <clk cmd data> and starting with eMMC5.0 as
<clk cmd data rclk>
- Refer to "Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt"
for following optional properties:
- pinctrl-names
- pinctrl-0, pinctrl-1,.. pinctrl-n
- qcom,large-address-bus - specifies whether the SoC is capable of
supporting an address bus wider than 32 bits.
- qcom,wakeup-on-idle: if configured, the mmcqd thread will call
set_wake_up_idle(), thereby voting for it to be called on idle CPUs.
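As an illustration of how the 'qcom,pm-qos-cpu-groups' bitmasks above are interpreted, here is a small userspace C sketch (not driver code); the masks 0x03 and 0x0c match the example node in this document, which maps CPUs 0-1 to group 0 and CPUs 2-3 to group 1:

```c
#include <assert.h>
#include <stdint.h>

/* Each cell of qcom,pm-qos-cpu-groups is a CPU bitmask: bit n set means
 * CPU n belongs to that group. 0x03 -> CPUs 0,1; 0x0c -> CPUs 2,3. */
static int group_has_cpu(uint32_t group_mask, int cpu)
{
	return (group_mask >> cpu) & 1;
}

/* Number of CPUs in a group (population count of the mask). */
static int group_cpu_count(uint32_t group_mask)
{
	int n = 0;
	for (; group_mask; group_mask >>= 1)
		n += group_mask & 1;
	return n;
}
```

Each latency tuple in qcom,pm-qos-<mode>-latency-us then applies to the group at the same index as its cell.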
Example:
sdhc_1: sdhci@f9824900 {
compatible = "qcom,sdhci-msm-v4";
aliases {
sdhc1 = &sdhc_1;
sdhc2 = &sdhc_2;
};
sdhc_1: qcom,sdhc@f9824900 {
compatible = "qcom,sdhci-msm";
reg = <0xf9824900 0x11c>, <0xf9824000 0x800>;
interrupts = <0 123 0>;
bus-width = <8>;
non-removable;
reg-names = "hc_mem", "core_mem";
interrupts = <0 123 0>, <0 138 0>;
interrupt-names = "hc_irq", "pwr_irq";
sdhc-msm-crypto = <&sdcc1_ice>;
vmmc-supply = <&pm8941_l20>;
vqmmc-supply = <&pm8941_s3>;
vdd-supply = <&pm8941_l21>;
vdd-io-supply = <&pm8941_l13>;
qcom,vdd-voltage-level = <2950000 2950000>;
qcom,vdd-current-level = <9000 800000>;
pinctrl-names = "default";
pinctrl-0 = <&sdc1_clk &sdc1_cmd &sdc1_data>;
qcom,vdd-io-always-on;
qcom,vdd-io-lpm-sup;
qcom,vdd-io-voltage-level = <1800000 2950000>;
qcom,vdd-io-current-level = <6 22000>;
clocks = <&gcc GCC_SDCC1_APPS_CLK>, <&gcc GCC_SDCC1_AHB_CLK>;
clock-names = "core", "iface";
qcom,devfreq,freq-table = <52000000 200000000>;
pinctrl-names = "active", "sleep";
pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on>;
pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_on &sdc1_data_on>;
qcom,bus-width = <4>;
qcom,nonremovable;
qcom,large-address-bus;
qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
qcom,ice-clk-rates = <300000000>;
qcom,scaling-lower-bus-speed-mode = "DDR52";
gpios = <&msmgpio 40 0>, /* CLK */
<&msmgpio 39 0>, /* CMD */
<&msmgpio 38 0>, /* DATA0 */
<&msmgpio 37 0>, /* DATA1 */
<&msmgpio 36 0>, /* DATA2 */
<&msmgpio 35 0>; /* DATA3 */
qcom,gpio-names = "CLK", "CMD", "DAT0", "DAT1", "DAT2", "DAT3";
qcom,pm-qos-irq-type = "affine_cores";
qcom,pm-qos-irq-cpu = <0>;
qcom,pm-qos-irq-latency = <500 100>;
qcom,pm-qos-cpu-groups = <0x03 0x0c>;
qcom,pm-qos-cmdq-latency-us = <50 100>, <50 100>;
qcom,pm-qos-legacy-latency-us = <50 100>, <50 100>;
};
sdhc_2: sdhci@f98a4900 {
compatible = "qcom,sdhci-msm-v4";
reg = <0xf98a4900 0x11c>, <0xf98a4000 0x800>;
interrupts = <0 125 0>;
bus-width = <4>;
cd-gpios = <&msmgpio 62 0x1>;
sdhc_2: qcom,sdhc@f98a4900 {
compatible = "qcom,sdhci-msm";
reg = <0xf9824900 0x11c>, <0xf9824000 0x800>;
reg-names = "hc_mem", "core_mem";
interrupts = <0 123 0>, <0 138 0>;
interrupt-names = "hc_irq", "pwr_irq";
vmmc-supply = <&pm8941_l21>;
vqmmc-supply = <&pm8941_l13>;
vdd-supply = <&pm8941_l21>;
vdd-io-supply = <&pm8941_l13>;
pinctrl-names = "default";
pinctrl-0 = <&sdc2_clk &sdc2_cmd &sdc2_data>;
pinctrl-names = "active", "sleep";
pinctrl-0 = <&sdc2_clk_on &sdc2_cmd_on &sdc2_data_on>;
pinctrl-1 = <&sdc2_clk_off &sdc2_cmd_on &sdc2_data_on>;
clocks = <&gcc GCC_SDCC2_APPS_CLK>, <&gcc GCC_SDCC2_AHB_CLK>;
clock-names = "core", "iface";
qcom,bus-width = <4>;
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,pm-qos-irq-type = "affine_irq";
qcom,pm-qos-irq-latency = <120 200>;
};


@ -8,12 +8,29 @@ The following attributes are read/write.
force_ro Enforce read-only access even if write protect switch is off.
num_wr_reqs_to_start_packing This attribute is used to determine
the trigger for activating the write packing, in case the write
packing control feature is enabled.
When the MMC manages to reach a point where num_wr_reqs_to_start_packing
write requests could be packed, it enables the write packing feature.
This allows us to start the write packing only when it is beneficial
and has minimal effect on the read latency.
The number of potential packed requests that will trigger the packing
can be configured via sysfs by writing the required value to:
/sys/block/<block_dev_name>/num_wr_reqs_to_start_packing.
The default value of num_wr_reqs_to_start_packing was determined by
running parallel lmdd write and lmdd read operations and calculating
the maximum number of packed write requests.
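The trigger described above can be modeled with a short userspace sketch (a simplified model of the documented behavior, not the kernel implementation; the names are hypothetical):

```c
#include <assert.h>

/* Hypothetical model of the write-packing trigger: a run of consecutive
 * packable write requests enables packing once it reaches the threshold;
 * a read request breaks the run and disables packing again. */
struct packing_model {
	int potential_packed_wr_reqs;     /* current run of packable writes */
	int num_wr_reqs_to_start_packing; /* configurable threshold */
	int packing_enabled;
};

static void model_write_req(struct packing_model *m)
{
	if (++m->potential_packed_wr_reqs >= m->num_wr_reqs_to_start_packing)
		m->packing_enabled = 1;
}

static void model_read_req(struct packing_model *m)
{
	/* A read breaks the run of packable writes. */
	m->potential_packed_wr_reqs = 0;
	m->packing_enabled = 0;
}
```

With this model, packing turns on only after the threshold-length write run completes, which is why raising the threshold protects read latency at the cost of delaying packing.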
SD and MMC Device Attributes
============================
All attributes are read-only.
cid Card Identification Register
csd Card Specific Data Register
scr SD Card Configuration Register (SD only)
date Manufacturing Date (from CID Register)
@ -72,3 +89,51 @@ Note on raw_rpmb_size_mult:
"raw_rpmb_size_mult" is a multiple of 128kB blocks.
RPMB size in bytes is calculated using the following equation:
RPMB partition size = 128kB x raw_rpmb_size_mult
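The equation above translates directly into code; a trivial sketch:

```c
#include <assert.h>
#include <stdint.h>

/* RPMB partition size = 128kB x raw_rpmb_size_mult, per the note above. */
static uint64_t rpmb_partition_size(uint32_t raw_rpmb_size_mult)
{
	return 128ull * 1024 * raw_rpmb_size_mult;
}
```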
SD/MMC/SDIO Clock Gating Attribute
==================================
Read and write access is provided to the following attribute.
This attribute appears only if CONFIG_MMC_CLKGATE is enabled.
clkgate_delay Tune the clock gating delay with desired value in milliseconds.
echo <desired delay> > /sys/class/mmc_host/mmcX/clkgate_delay
SD/MMC/SDIO Clock Scaling Attributes
====================================
Read and write access is provided to the following attributes.
polling_interval Measured in milliseconds, this attribute
defines how often we need to check the card
usage and make decisions on frequency scaling.
up_threshold This attribute defines the average card usage
between polling intervals above which the mmc
core decides to increase the frequency. For
example, when it is set to '35', the card must
be on average more than 35% in use between
checking intervals for the frequency to be
scaled up. The value should be between 0 and
100 so that it can be compared against the
load percentage.
down_threshold Similar to up_threshold, but for lowering the
frequency. For example, when it is set to '2',
the card must be on average less than 2% in
use between checking intervals for the clocks
to be scaled down to the minimum frequency.
The value should be between 0 and 100 so that
it can be compared against the load
percentage.
enable Enable clock scaling for hosts (and cards)
that support ultrahigh speed modes
(SDR104, DDR50, HS200).
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/polling_interval
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/up_threshold
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/down_threshold
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/enable
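A simplified model of the threshold logic described above (a sketch of the documented semantics, not the kernel implementation):

```c
#include <assert.h>

/* Returns +1 to scale the clock up, -1 to scale down, 0 to hold, given
 * the average card load over the last polling interval. load_pct and
 * both thresholds are percentages in 0..100, as the attributes require. */
static int clk_scale_decision(int load_pct, int up_threshold,
			      int down_threshold)
{
	if (load_pct > up_threshold)
		return 1;
	if (load_pct < down_threshold)
		return -1;
	return 0;
}
```

Note that both comparisons are strict, matching the "more than 35%" and "less than 2%" wording of the attribute descriptions.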


@ -19,6 +19,14 @@ config MMC_DEBUG
This is an option for use by developers; most people should
say N here. This enables MMC core and driver debugging.
config MMC_PERF_PROFILING
bool "MMC performance profiling"
depends on MMC != n
default n
help
If you say Y here, support will be added for collecting
performance numbers at the MMC Queue and Host layers.
if MMC
source "drivers/mmc/core/Kconfig"


@ -68,13 +68,3 @@ config MMC_TEST
This driver is only of interest to those developing or
testing a host driver. Most people should say N here.
config MMC_BLOCK_TEST
tristate "MMC block test"
depends on MMC_BLOCK && IOSCHED_TEST
help
MMC block test can be used with test iosched to test the MMC block
device.
Currently used to test eMMC 4.5 features (packed commands, sanitize,
BKOPs).


@ -8,3 +8,4 @@ obj-$(CONFIG_MMC_TEST) += mmc_test.o
obj-$(CONFIG_SDIO_UART) += sdio_uart.o
obj-$(CONFIG_MMC_BLOCK_TEST) += mmc_block_test.o

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -2807,6 +2807,7 @@ static ssize_t mtf_test_write(struct file *file, const char __user *buf,
}
#ifdef CONFIG_HIGHMEM
if (test->highmem)
__free_pages(test->highmem, BUFFER_ORDER);
#endif
kfree(test->buffer);


@ -16,6 +16,8 @@
#include <linux/kthread.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
@ -24,6 +26,13 @@
#define MMC_QUEUE_BOUNCESZ 65536
/*
* Based on benchmark tests, the default number of requests that triggers
* write packing was chosen to keep read latency as low as possible while
* maintaining high write throughput.
*/
#define DEFAULT_NUM_REQS_TO_START_PACK 17
/*
* Prepare a MMC request. This just filters out odd stuff.
*/
@ -47,10 +56,96 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
return BLKPREP_OK;
}
static struct request *mmc_peek_request(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
mq->cmdq_req_peeked = NULL;
spin_lock_irq(q->queue_lock);
if (!blk_queue_stopped(q))
mq->cmdq_req_peeked = blk_peek_request(q);
spin_unlock_irq(q->queue_lock);
return mq->cmdq_req_peeked;
}
static bool mmc_check_blk_queue_start_tag(struct request_queue *q,
struct request *req)
{
int ret;
spin_lock_irq(q->queue_lock);
ret = blk_queue_start_tag(q, req);
spin_unlock_irq(q->queue_lock);
return !!ret;
}
static inline void mmc_cmdq_ready_wait(struct mmc_host *host,
struct mmc_queue *mq)
{
struct mmc_cmdq_context_info *ctx = &host->cmdq_ctx;
struct request_queue *q = mq->queue;
/*
* Wait until all of the following conditions are true:
* 1. There is a request pending in the block layer queue
* to be processed.
* 2. If the peeked request is flush/discard then there shouldn't
* be any other direct command active.
* 3. cmdq state should be unhalted.
* 4. cmdq state shouldn't be in error state.
* 5. free tag available to process the new request.
*/
wait_event(ctx->wait, kthread_should_stop()
|| (mmc_peek_request(mq) &&
!((mq->cmdq_req_peeked->cmd_flags & (REQ_FLUSH | REQ_DISCARD))
&& test_bit(CMDQ_STATE_DCMD_ACTIVE, &ctx->curr_state))
&& !(!host->card->part_curr && !mmc_card_suspended(host->card)
&& mmc_host_halt(host))
&& !(!host->card->part_curr && mmc_host_cq_disable(host) &&
!mmc_card_suspended(host->card))
&& !test_bit(CMDQ_STATE_ERR, &ctx->curr_state)
&& !mmc_check_blk_queue_start_tag(q, mq->cmdq_req_peeked)));
}
static int mmc_cmdq_thread(void *d)
{
struct mmc_queue *mq = d;
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
current->flags |= PF_MEMALLOC;
if (card->host->wakeup_on_idle)
set_wake_up_idle(true);
while (1) {
int ret = 0;
mmc_cmdq_ready_wait(host, mq);
if (kthread_should_stop())
break;
ret = mq->cmdq_issue_fn(mq, mq->cmdq_req_peeked);
/*
* Don't requeue if issue_fn fails, just bug on.
* We don't expect failure here and there is no recovery other
* than fixing the actual issue if there is any.
* Also we end the request if there is a partition switch error,
* so we should not requeue the request here.
*/
if (ret)
BUG_ON(1);
} /* loop */
return 0;
}
static int mmc_queue_thread(void *d)
{
struct mmc_queue *mq = d;
struct request_queue *q = mq->queue;
struct mmc_card *card = mq->card;
struct sched_param scheduler_params = {0};
scheduler_params.sched_priority = 1;
@ -58,6 +153,8 @@ static int mmc_queue_thread(void *d)
sched_setscheduler(current, SCHED_FIFO, &scheduler_params);
current->flags |= PF_MEMALLOC;
if (card->host->wakeup_on_idle)
set_wake_up_idle(true);
down(&mq->thread_sem);
do {
@ -75,8 +172,8 @@ static int mmc_queue_thread(void *d)
cmd_flags = req ? req->cmd_flags : 0;
mq->issue_fn(mq, req);
cond_resched();
if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
if (test_bit(MMC_QUEUE_NEW_REQUEST, &mq->flags)) {
clear_bit(MMC_QUEUE_NEW_REQUEST, &mq->flags);
continue; /* fetch again */
}
@ -108,6 +205,13 @@ static int mmc_queue_thread(void *d)
return 0;
}
static void mmc_cmdq_dispatch_req(struct request_queue *q)
{
struct mmc_queue *mq = q->queuedata;
wake_up(&mq->card->host->cmdq_ctx.wait);
}
/*
* Generic MMC request handler. This is called for any queue on a
* particular host. When the host is not busy, we look for a request
@ -182,6 +286,32 @@ static void mmc_queue_setup_discard(struct request_queue *q,
queue_flag_set_unlocked(QUEUE_FLAG_SECDISCARD, q);
}
/**
* mmc_cmdq_setup_queue
* @mq: mmc queue
* @card: card to attach to this queue
*
* Setup queue for CMDQ supporting MMC card
*/
void mmc_cmdq_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
{
u64 limit = BLK_BOUNCE_HIGH;
struct mmc_host *host = card->host;
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = *mmc_dev(host)->dma_mask;
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
if (mmc_can_erase(card))
mmc_queue_setup_discard(mq->queue, card);
blk_queue_bounce_limit(mq->queue, limit);
blk_queue_max_hw_sectors(mq->queue, min(host->max_blk_count,
host->max_req_size / 512));
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
blk_queue_max_segments(mq->queue, host->max_segs);
}
/**
* mmc_init_queue - initialise a queue structure.
* @mq: mmc queue
@ -192,7 +322,7 @@ static void mmc_queue_setup_discard(struct request_queue *q,
* Initialise a MMC card request queue.
*/
int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
spinlock_t *lock, const char *subname)
spinlock_t *lock, const char *subname, int area_type)
{
struct mmc_host *host = card->host;
u64 limit = BLK_BOUNCE_HIGH;
@ -204,6 +334,37 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
mq->card = card;
if (card->ext_csd.cmdq_support &&
(area_type == MMC_BLK_DATA_AREA_MAIN)) {
mq->queue = blk_init_queue(mmc_cmdq_dispatch_req, lock);
if (!mq->queue)
return -ENOMEM;
mmc_cmdq_setup_queue(mq, card);
ret = mmc_cmdq_init(mq, card);
if (ret) {
pr_err("%s: %d: cmdq: unable to set-up\n",
mmc_hostname(card->host), ret);
blk_cleanup_queue(mq->queue);
} else {
sema_init(&mq->thread_sem, 1);
/* hook for pm qos cmdq init */
if (card->host->cmdq_ops->init)
card->host->cmdq_ops->init(card->host);
mq->queue->queuedata = mq;
mq->thread = kthread_run(mmc_cmdq_thread, mq,
"mmc-cmdqd/%d%s",
host->index,
subname ? subname : "");
if (IS_ERR(mq->thread)) {
pr_err("%s: %d: cmdq: failed to start mmc-cmdqd thread\n",
mmc_hostname(card->host), ret);
ret = PTR_ERR(mq->thread);
}
return ret;
}
}
mq->queue = blk_init_queue(mmc_request_fn, lock);
if (!mq->queue)
return -ENOMEM;
@ -211,6 +372,9 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
mq->mqrq_cur = mqrq_cur;
mq->mqrq_prev = mqrq_prev;
mq->queue->queuedata = mq;
mq->num_wr_reqs_to_start_packing =
min_t(int, (int)card->ext_csd.max_packed_writes,
DEFAULT_NUM_REQS_TO_START_PACK);
blk_queue_prep_rq(mq->queue, mmc_prep_request);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
@ -276,24 +440,49 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
#endif
if (!mqrq_cur->bounce_buf && !mqrq_prev->bounce_buf) {
unsigned int max_segs = host->max_segs;
blk_queue_bounce_limit(mq->queue, limit);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
retry:
blk_queue_max_segments(mq->queue, host->max_segs);
mqrq_cur->sg = mmc_alloc_sg(host->max_segs, &ret);
if (ret)
if (ret == -ENOMEM)
goto cur_sg_alloc_failed;
else if (ret)
goto cleanup_queue;
mqrq_prev->sg = mmc_alloc_sg(host->max_segs, &ret);
if (ret)
if (ret == -ENOMEM)
goto prev_sg_alloc_failed;
else if (ret)
goto cleanup_queue;
goto success;
prev_sg_alloc_failed:
kfree(mqrq_cur->sg);
mqrq_cur->sg = NULL;
cur_sg_alloc_failed:
host->max_segs /= 2;
if (host->max_segs) {
goto retry;
} else {
host->max_segs = max_segs;
goto cleanup_queue;
}
}
success:
sema_init(&mq->thread_sem, 1);
/* hook for pm qos legacy init */
if (card->host->ops->init)
card->host->ops->init(card->host);
mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
host->index, subname ? subname : "");
@ -408,28 +597,188 @@ void mmc_packed_clean(struct mmc_queue *mq)
mqrq_prev->packed = NULL;
}
static void mmc_cmdq_softirq_done(struct request *rq)
{
struct mmc_queue *mq = rq->q->queuedata;
mq->cmdq_complete_fn(rq);
}
static void mmc_cmdq_error_work(struct work_struct *work)
{
struct mmc_queue *mq = container_of(work, struct mmc_queue,
cmdq_err_work);
mq->cmdq_error_fn(mq);
}
enum blk_eh_timer_return mmc_cmdq_rq_timed_out(struct request *req)
{
struct mmc_queue *mq = req->q->queuedata;
pr_err("%s: request with tag: %d flags: 0x%llx timed out\n",
mmc_hostname(mq->card->host), req->tag, req->cmd_flags);
return mq->cmdq_req_timed_out(req);
}
int mmc_cmdq_init(struct mmc_queue *mq, struct mmc_card *card)
{
int i, ret = 0;
/* one slot is reserved for dcmd requests */
int q_depth = card->ext_csd.cmdq_depth - 1;
card->cmdq_init = false;
if (!(card->host->caps2 & MMC_CAP2_CMD_QUEUE)) {
ret = -ENOTSUPP;
goto out;
}
init_waitqueue_head(&card->host->cmdq_ctx.queue_empty_wq);
init_waitqueue_head(&card->host->cmdq_ctx.wait);
mq->mqrq_cmdq = kzalloc(
sizeof(struct mmc_queue_req) * q_depth, GFP_KERNEL);
if (!mq->mqrq_cmdq) {
pr_warn("%s: unable to allocate mqrq's for q_depth %d\n",
mmc_card_name(card), q_depth);
ret = -ENOMEM;
goto out;
}
/* sg is allocated for data request slots only */
for (i = 0; i < q_depth; i++) {
mq->mqrq_cmdq[i].sg = mmc_alloc_sg(card->host->max_segs, &ret);
if (ret) {
pr_warn("%s: unable to allocate cmdq sg of size %d\n",
mmc_card_name(card),
card->host->max_segs);
goto free_mqrq_sg;
}
}
ret = blk_queue_init_tags(mq->queue, q_depth, NULL, BLK_TAG_ALLOC_FIFO);
if (ret) {
pr_warn("%s: unable to allocate cmdq tags %d\n",
mmc_card_name(card), q_depth);
goto free_mqrq_sg;
}
blk_queue_softirq_done(mq->queue, mmc_cmdq_softirq_done);
INIT_WORK(&mq->cmdq_err_work, mmc_cmdq_error_work);
init_completion(&mq->cmdq_shutdown_complete);
init_completion(&mq->cmdq_pending_req_done);
blk_queue_rq_timed_out(mq->queue, mmc_cmdq_rq_timed_out);
blk_queue_rq_timeout(mq->queue, 120 * HZ);
card->cmdq_init = true;
goto out;
free_mqrq_sg:
for (i = 0; i < q_depth; i++)
kfree(mq->mqrq_cmdq[i].sg);
kfree(mq->mqrq_cmdq);
mq->mqrq_cmdq = NULL;
out:
return ret;
}
void mmc_cmdq_clean(struct mmc_queue *mq, struct mmc_card *card)
{
int i;
int q_depth = card->ext_csd.cmdq_depth - 1;
blk_free_tags(mq->queue->queue_tags);
mq->queue->queue_tags = NULL;
blk_queue_free_tags(mq->queue);
for (i = 0; i < q_depth; i++)
kfree(mq->mqrq_cmdq[i].sg);
kfree(mq->mqrq_cmdq);
mq->mqrq_cmdq = NULL;
}
/**
* mmc_queue_suspend - suspend a MMC request queue
* @mq: MMC queue to suspend
* @wait: Wait till MMC request queue is empty
*
* Stop the block request queue, and wait for our thread to
* complete any outstanding requests. This ensures that we
* won't suspend while a request is being processed.
*/
void mmc_queue_suspend(struct mmc_queue *mq)
int mmc_queue_suspend(struct mmc_queue *mq, int wait)
{
struct request_queue *q = mq->queue;
unsigned long flags;
int rc = 0;
struct mmc_card *card = mq->card;
struct request *req;
if (!(mq->flags & MMC_QUEUE_SUSPENDED)) {
mq->flags |= MMC_QUEUE_SUSPENDED;
if (card->cmdq_init && blk_queue_tagged(q)) {
struct mmc_host *host = card->host;
if (test_and_set_bit(MMC_QUEUE_SUSPENDED, &mq->flags))
goto out;
if (wait) {
/*
* After blk_stop_queue is called, wait for all
* active_reqs to complete.
* Then wait for cmdq thread to exit before calling
* cmdq shutdown to avoid race between issuing
* requests and shutdown of cmdq.
*/
spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
down(&mq->thread_sem);
if (host->cmdq_ctx.active_reqs)
wait_for_completion(
&mq->cmdq_shutdown_complete);
kthread_stop(mq->thread);
mq->cmdq_shutdown(mq);
} else {
spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
wake_up(&host->cmdq_ctx.wait);
req = blk_peek_request(q);
if (req || mq->cmdq_req_peeked ||
host->cmdq_ctx.active_reqs) {
clear_bit(MMC_QUEUE_SUSPENDED, &mq->flags);
blk_start_queue(q);
rc = -EBUSY;
}
spin_unlock_irqrestore(q->queue_lock, flags);
}
goto out;
}
if (!(test_and_set_bit(MMC_QUEUE_SUSPENDED, &mq->flags))) {
spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
rc = down_trylock(&mq->thread_sem);
if (rc && !wait) {
/*
* Failed to take the lock so better to abort the
* suspend because mmcqd thread is processing requests.
*/
clear_bit(MMC_QUEUE_SUSPENDED, &mq->flags);
spin_lock_irqsave(q->queue_lock, flags);
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
rc = -EBUSY;
} else if (rc && wait) {
down(&mq->thread_sem);
rc = 0;
}
}
out:
return rc;
}
/**
@ -439,11 +788,12 @@ void mmc_queue_suspend(struct mmc_queue *mq)
void mmc_queue_resume(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
struct mmc_card *card = mq->card;
unsigned long flags;
if (mq->flags & MMC_QUEUE_SUSPENDED) {
mq->flags &= ~MMC_QUEUE_SUSPENDED;
if (test_and_clear_bit(MMC_QUEUE_SUSPENDED, &mq->flags)) {
if (!(card->cmdq_init && blk_queue_tagged(q)))
up(&mq->thread_sem);
spin_lock_irqsave(q->queue_lock, flags);


@ -42,28 +42,47 @@ struct mmc_queue_req {
struct mmc_async_req mmc_active;
enum mmc_packed_type cmd_type;
struct mmc_packed *packed;
struct mmc_cmdq_req cmdq_req;
};
struct mmc_queue {
struct mmc_card *card;
struct task_struct *thread;
struct semaphore thread_sem;
unsigned int flags;
#define MMC_QUEUE_SUSPENDED (1 << 0)
#define MMC_QUEUE_NEW_REQUEST (1 << 1)
unsigned long flags;
#define MMC_QUEUE_SUSPENDED 0
#define MMC_QUEUE_NEW_REQUEST 1
int (*issue_fn)(struct mmc_queue *, struct request *);
int (*cmdq_issue_fn)(struct mmc_queue *,
struct request *);
void (*cmdq_complete_fn)(struct request *);
void (*cmdq_error_fn)(struct mmc_queue *);
enum blk_eh_timer_return (*cmdq_req_timed_out)(struct request *);
void *data;
struct request_queue *queue;
struct mmc_queue_req mqrq[2];
struct mmc_queue_req *mqrq_cur;
struct mmc_queue_req *mqrq_prev;
struct mmc_queue_req *mqrq_cmdq;
bool wr_packing_enabled;
int num_of_potential_packed_wr_reqs;
int num_wr_reqs_to_start_packing;
bool no_pack_for_random;
struct work_struct cmdq_err_work;
struct completion cmdq_pending_req_done;
struct completion cmdq_shutdown_complete;
struct request *cmdq_req_peeked;
int (*err_check_fn) (struct mmc_card *, struct mmc_async_req *);
void (*packed_test_fn) (struct request_queue *, struct mmc_queue_req *);
void (*cmdq_shutdown)(struct mmc_queue *);
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
const char *);
const char *, int);
extern void mmc_cleanup_queue(struct mmc_queue *);
extern void mmc_queue_suspend(struct mmc_queue *);
extern int mmc_queue_suspend(struct mmc_queue *, int);
extern void mmc_queue_resume(struct mmc_queue *);
extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
@ -76,4 +95,9 @@ extern void mmc_packed_clean(struct mmc_queue *);
extern int mmc_access_rpmb(struct mmc_queue *);
extern void print_mmc_packing_stats(struct mmc_card *card);
extern int mmc_cmdq_init(struct mmc_queue *mq, struct mmc_card *card);
extern void mmc_cmdq_clean(struct mmc_queue *mq, struct mmc_card *card);
#endif


@ -16,3 +16,13 @@ config MMC_PARANOID_SD_INIT
about re-trying SD init requests. This can be a useful
work-around for buggy controllers and hardware. Enable
if you are experiencing issues with SD detection.
config MMC_CLKGATE
bool "MMC host clock gating"
help
This will attempt to aggressively gate the clock to the MMC card.
This is done to save power due to gating off the logic and bus
noise when the MMC card is not in use. Your host driver has to
support handling this in order for it to be of any use.
If unsure, say N.


@ -132,6 +132,16 @@ static void mmc_bus_shutdown(struct device *dev)
struct mmc_host *host = card->host;
int ret;
if (!drv) {
pr_debug("%s: %s: drv is NULL\n", dev_name(dev), __func__);
return;
}
if (!card) {
pr_debug("%s: %s: card is NULL\n", dev_name(dev), __func__);
return;
}
if (dev->driver && drv->shutdown)
drv->shutdown(card);
@ -190,6 +200,7 @@ static int mmc_runtime_resume(struct device *dev)
return host->bus_ops->runtime_resume(host);
}
#endif /* !CONFIG_PM */
static const struct dev_pm_ops mmc_bus_pm_ops = {
@ -273,6 +284,9 @@ struct mmc_card *mmc_alloc_card(struct mmc_host *host, struct device_type *type)
card->dev.release = mmc_release_card;
card->dev.type = type;
spin_lock_init(&card->wr_pack_stats.lock);
spin_lock_init(&card->bkops.stats.lock);
return card;
}
@ -349,6 +363,12 @@ int mmc_add_card(struct mmc_card *card)
card->dev.of_node = mmc_of_find_child_device(card->host, 0);
if (mmc_card_sdio(card)) {
ret = device_init_wakeup(&card->dev, true);
if (ret)
pr_err("%s: %s: failed to init wakeup: %d\n",
mmc_hostname(card->host), __func__, ret);
}
ret = device_add(&card->dev);
if (ret)
return ret;
@ -380,6 +400,9 @@ void mmc_remove_card(struct mmc_card *card)
of_node_put(card->dev.of_node);
}
kfree(card->wr_pack_stats.packing_events);
kfree(card->cached_ext_csd);
put_device(&card->dev);
}

File diff suppressed because it is too large


@ -15,21 +15,6 @@
#define MMC_CMD_RETRIES 3
struct mmc_bus_ops {
void (*remove)(struct mmc_host *);
void (*detect)(struct mmc_host *);
int (*pre_suspend)(struct mmc_host *);
int (*suspend)(struct mmc_host *);
int (*resume)(struct mmc_host *);
int (*runtime_suspend)(struct mmc_host *);
int (*runtime_resume)(struct mmc_host *);
int (*power_save)(struct mmc_host *);
int (*power_restore)(struct mmc_host *);
int (*alive)(struct mmc_host *);
int (*shutdown)(struct mmc_host *);
int (*reset)(struct mmc_host *);
};
void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);
void mmc_detach_bus(struct mmc_host *host);
@ -40,6 +25,11 @@ void mmc_init_erase(struct mmc_card *card);
void mmc_set_chip_select(struct mmc_host *host, int mode);
void mmc_set_clock(struct mmc_host *host, unsigned int hz);
int mmc_clk_update_freq(struct mmc_host *host,
unsigned long freq, enum mmc_load state);
void mmc_gate_clock(struct mmc_host *host);
void mmc_ungate_clock(struct mmc_host *host);
void mmc_set_ungated(struct mmc_host *host);
void mmc_set_bus_mode(struct mmc_host *host, unsigned int mode);
void mmc_set_bus_width(struct mmc_host *host, unsigned int width);
u32 mmc_select_voltage(struct mmc_host *host, u32 ocr);
@@ -59,6 +49,8 @@ static inline void mmc_delay(unsigned int ms)
if (ms < 1000 / HZ) {
cond_resched();
mdelay(ms);
} else if (ms < jiffies_to_msecs(2)) {
usleep_range(ms * 1000, (ms + 1) * 1000);
} else {
msleep(ms);
}
@@ -86,6 +78,13 @@ void mmc_remove_card_debugfs(struct mmc_card *card);
void mmc_init_context_info(struct mmc_host *host);
extern bool mmc_can_scale_clk(struct mmc_host *host);
extern int mmc_init_clk_scaling(struct mmc_host *host);
extern int mmc_suspend_clk_scaling(struct mmc_host *host);
extern int mmc_resume_clk_scaling(struct mmc_host *host);
extern int mmc_exit_clk_scaling(struct mmc_host *host);
extern unsigned long mmc_get_max_frequency(struct mmc_host *host);
int mmc_execute_tuning(struct mmc_card *card);
int mmc_hs200_to_hs400(struct mmc_card *card);
int mmc_hs400_to_hs200(struct mmc_card *card);


@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/fault-inject.h>
#include <linux/uaccess.h>
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
@@ -233,6 +234,100 @@ static int mmc_clock_opt_set(void *data, u64 val)
DEFINE_SIMPLE_ATTRIBUTE(mmc_clock_fops, mmc_clock_opt_get, mmc_clock_opt_set,
"%llu\n");
#include <linux/delay.h>
static int mmc_scale_get(void *data, u64 *val)
{
struct mmc_host *host = data;
*val = host->clk_scaling.curr_freq;
return 0;
}
static int mmc_scale_set(void *data, u64 val)
{
int err = 0;
struct mmc_host *host = data;
mmc_claim_host(host);
mmc_host_clk_hold(host);
/* change frequency from sysfs manually */
err = mmc_clk_update_freq(host, val, host->clk_scaling.state);
if (err == -EAGAIN)
err = 0;
else if (err)
pr_err("%s: clock scale to %llu failed with error %d\n",
mmc_hostname(host), val, err);
else
pr_debug("%s: clock change to %llu finished successfully (%s)\n",
mmc_hostname(host), val, current->comm);
mmc_host_clk_release(host);
mmc_release_host(host);
return err;
}
DEFINE_SIMPLE_ATTRIBUTE(mmc_scale_fops, mmc_scale_get, mmc_scale_set,
"%llu\n");
static int mmc_max_clock_get(void *data, u64 *val)
{
struct mmc_host *host = data;
if (!host)
return -EINVAL;
*val = host->f_max;
return 0;
}
static int mmc_max_clock_set(void *data, u64 val)
{
struct mmc_host *host = data;
int err = -EINVAL;
unsigned long freq = val;
unsigned int old_freq;
if (!host || (val < host->f_min))
goto out;
mmc_claim_host(host);
if (host->bus_ops && host->bus_ops->change_bus_speed) {
old_freq = host->f_max;
host->f_max = freq;
err = host->bus_ops->change_bus_speed(host, &freq);
if (err)
host->f_max = old_freq;
}
mmc_release_host(host);
out:
return err;
}
DEFINE_SIMPLE_ATTRIBUTE(mmc_max_clock_fops, mmc_max_clock_get,
mmc_max_clock_set, "%llu\n");
static int mmc_force_err_set(void *data, u64 val)
{
struct mmc_host *host = data;
if (host && host->ops && host->ops->force_err_irq) {
mmc_host_clk_hold(host);
host->ops->force_err_irq(host, val);
mmc_host_clk_release(host);
}
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(mmc_force_err_fops, NULL, mmc_force_err_set, "%llu\n");
void mmc_add_host_debugfs(struct mmc_host *host)
{
struct dentry *root;
@@ -255,6 +350,29 @@ void mmc_add_host_debugfs(struct mmc_host *host)
&mmc_clock_fops))
goto err_node;
if (!debugfs_create_file("max_clock", S_IRUSR | S_IWUSR, root, host,
&mmc_max_clock_fops))
goto err_node;
if (!debugfs_create_file("scale", S_IRUSR | S_IWUSR, root, host,
&mmc_scale_fops))
goto err_node;
if (!debugfs_create_bool("skip_clk_scale_freq_update",
S_IRUSR | S_IWUSR, root,
&host->clk_scaling.skip_clk_scale_freq_update))
goto err_node;
if (!debugfs_create_bool("cmdq_task_history",
S_IRUSR | S_IWUSR, root,
&host->cmdq_thist_enabled))
goto err_node;
#ifdef CONFIG_MMC_CLKGATE
if (!debugfs_create_u32("clk_delay", (S_IRUSR | S_IWUSR),
root, &host->clk_delay))
goto err_node;
#endif
#ifdef CONFIG_FAIL_MMC_REQUEST
if (fail_request)
setup_fault_attr(&fail_default_attr, fail_request);
@@ -264,6 +382,10 @@ void mmc_add_host_debugfs(struct mmc_host *host)
&host->fail_mmc_request)))
goto err_node;
#endif
if (!debugfs_create_file("force_error", S_IWUSR, root, host,
&mmc_force_err_fops))
goto err_node;
return;
err_node:
@@ -285,11 +407,26 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
int ret;
mmc_get_card(card);
if (mmc_card_cmdq(card)) {
ret = mmc_cmdq_halt_on_empty_queue(card->host);
if (ret) {
pr_err("%s: halt failed while doing %s err (%d)\n",
mmc_hostname(card->host), __func__,
ret);
goto out;
}
}
ret = mmc_send_status(data, &status);
if (!ret)
*val = status;
if (mmc_card_cmdq(card)) {
if (mmc_cmdq_halt(card->host, false))
pr_err("%s: %s: cmdq unhalt failed\n",
mmc_hostname(card->host), __func__);
}
out:
mmc_put_card(card);
return ret;
@@ -312,8 +449,18 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
return -ENOMEM;
mmc_get_card(card);
err = mmc_get_ext_csd(card, &ext_csd);
if (mmc_card_cmdq(card)) {
err = mmc_cmdq_halt_on_empty_queue(card->host);
if (err) {
pr_err("%s: halt failed while doing %s err (%d)\n",
mmc_hostname(card->host), __func__,
err);
mmc_put_card(card);
goto out_free_halt;
}
}
err = mmc_get_ext_csd(card, &ext_csd);
if (err)
goto out_free;
@@ -323,10 +470,25 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
BUG_ON(n != EXT_CSD_STR_LEN);
filp->private_data = buf;
if (mmc_card_cmdq(card)) {
if (mmc_cmdq_halt(card->host, false))
pr_err("%s: %s: cmdq unhalt failed\n",
mmc_hostname(card->host), __func__);
}
mmc_put_card(card);
kfree(ext_csd);
return 0;
out_free:
if (mmc_card_cmdq(card)) {
if (mmc_cmdq_halt(card->host, false))
pr_err("%s: %s: cmdq unhalt failed\n",
mmc_hostname(card->host), __func__);
}
mmc_put_card(card);
out_free_halt:
kfree(buf);
return err;
}
@@ -353,6 +515,275 @@ static const struct file_operations mmc_dbg_ext_csd_fops = {
.llseek = default_llseek,
};
static int mmc_wr_pack_stats_open(struct inode *inode, struct file *filp)
{
struct mmc_card *card = inode->i_private;
filp->private_data = card;
card->wr_pack_stats.print_in_read = 1;
return 0;
}
#define TEMP_BUF_SIZE 256
static ssize_t mmc_wr_pack_stats_read(struct file *filp, char __user *ubuf,
size_t cnt, loff_t *ppos)
{
struct mmc_card *card = filp->private_data;
struct mmc_wr_pack_stats *pack_stats;
int i;
int max_num_of_packed_reqs = 0;
char *temp_buf;
if (!card)
return cnt;
if (!access_ok(VERIFY_WRITE, ubuf, cnt))
return cnt;
if (!card->wr_pack_stats.print_in_read)
return 0;
if (!card->wr_pack_stats.enabled) {
pr_info("%s: write packing statistics are disabled\n",
mmc_hostname(card->host));
goto exit;
}
pack_stats = &card->wr_pack_stats;
if (!pack_stats->packing_events) {
pr_info("%s: NULL packing_events\n", mmc_hostname(card->host));
goto exit;
}
max_num_of_packed_reqs = card->ext_csd.max_packed_writes;
temp_buf = kmalloc(TEMP_BUF_SIZE, GFP_KERNEL);
if (!temp_buf)
goto exit;
spin_lock(&pack_stats->lock);
snprintf(temp_buf, TEMP_BUF_SIZE, "%s: write packing statistics:\n",
mmc_hostname(card->host));
strlcat(ubuf, temp_buf, cnt);
for (i = 1 ; i <= max_num_of_packed_reqs ; ++i) {
if (pack_stats->packing_events[i]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: Packed %d reqs - %d times\n",
mmc_hostname(card->host), i,
pack_stats->packing_events[i]);
strlcat(ubuf, temp_buf, cnt);
}
}
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: stopped packing due to the following reasons:\n",
mmc_hostname(card->host));
strlcat(ubuf, temp_buf, cnt);
if (pack_stats->pack_stop_reason[EXCEEDS_SEGMENTS]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: exceed max num of segments\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[EXCEEDS_SEGMENTS]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[EXCEEDS_SECTORS]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: exceed max num of sectors\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[EXCEEDS_SECTORS]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[WRONG_DATA_DIR]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: wrong data direction\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[WRONG_DATA_DIR]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[FLUSH_OR_DISCARD]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: flush or discard\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[FLUSH_OR_DISCARD]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[EMPTY_QUEUE]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: empty queue\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[EMPTY_QUEUE]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[REL_WRITE]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: rel write\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[REL_WRITE]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[THRESHOLD]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: Threshold\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[THRESHOLD]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[LARGE_SEC_ALIGN]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: Large sector alignment\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[LARGE_SEC_ALIGN]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[RANDOM]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: random request\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[RANDOM]);
strlcat(ubuf, temp_buf, cnt);
}
if (pack_stats->pack_stop_reason[FUA]) {
snprintf(temp_buf, TEMP_BUF_SIZE,
"%s: %d times: fua request\n",
mmc_hostname(card->host),
pack_stats->pack_stop_reason[FUA]);
strlcat(ubuf, temp_buf, cnt);
}
spin_unlock(&pack_stats->lock);
kfree(temp_buf);
pr_info("%s", ubuf);
exit:
if (card->wr_pack_stats.print_in_read == 1) {
card->wr_pack_stats.print_in_read = 0;
return strnlen(ubuf, cnt);
}
return 0;
}
static ssize_t mmc_wr_pack_stats_write(struct file *filp,
const char __user *ubuf, size_t cnt,
loff_t *ppos)
{
struct mmc_card *card = filp->private_data;
int value;
if (!card)
return cnt;
if (!access_ok(VERIFY_READ, ubuf, cnt))
return cnt;
sscanf(ubuf, "%d", &value);
if (value) {
mmc_blk_init_packed_statistics(card);
} else {
spin_lock(&card->wr_pack_stats.lock);
card->wr_pack_stats.enabled = false;
spin_unlock(&card->wr_pack_stats.lock);
}
return cnt;
}
static const struct file_operations mmc_dbg_wr_pack_stats_fops = {
.open = mmc_wr_pack_stats_open,
.read = mmc_wr_pack_stats_read,
.write = mmc_wr_pack_stats_write,
};
static int mmc_bkops_stats_read(struct seq_file *file, void *data)
{
struct mmc_card *card = file->private;
struct mmc_bkops_stats *stats;
int i;
if (!card)
return -EINVAL;
stats = &card->bkops.stats;
if (!stats->enabled) {
pr_info("%s: bkops statistics are disabled\n",
mmc_hostname(card->host));
goto exit;
}
spin_lock(&stats->lock);
seq_printf(file, "%s: bkops statistics:\n",
mmc_hostname(card->host));
seq_printf(file, "%s: BKOPS: sent START_BKOPS to device: %u\n",
mmc_hostname(card->host), stats->manual_start);
seq_printf(file, "%s: BKOPS: stopped due to HPI: %u\n",
mmc_hostname(card->host), stats->hpi);
seq_printf(file, "%s: BKOPS: sent AUTO_EN set to 1: %u\n",
mmc_hostname(card->host), stats->auto_start);
seq_printf(file, "%s: BKOPS: sent AUTO_EN set to 0: %u\n",
mmc_hostname(card->host), stats->auto_stop);
for (i = 0 ; i < MMC_BKOPS_NUM_SEVERITY_LEVELS ; ++i)
seq_printf(file, "%s: BKOPS: due to level %d: %u\n",
mmc_hostname(card->host), i, stats->level[i]);
spin_unlock(&stats->lock);
exit:
return 0;
}
static ssize_t mmc_bkops_stats_write(struct file *filp,
const char __user *ubuf, size_t cnt,
loff_t *ppos)
{
struct mmc_card *card = filp->f_mapping->host->i_private;
int value;
struct mmc_bkops_stats *stats;
int err;
if (!card)
return cnt;
stats = &card->bkops.stats;
err = kstrtoint_from_user(ubuf, cnt, 0, &value);
if (err) {
pr_err("%s: %s: error parsing input from user (%d)\n",
mmc_hostname(card->host), __func__, err);
return err;
}
if (value) {
mmc_blk_init_bkops_statistics(card);
} else {
spin_lock(&stats->lock);
stats->enabled = false;
spin_unlock(&stats->lock);
}
return cnt;
}
static int mmc_bkops_stats_open(struct inode *inode, struct file *file)
{
return single_open(file, mmc_bkops_stats_read, inode->i_private);
}
static const struct file_operations mmc_dbg_bkops_stats_fops = {
.open = mmc_bkops_stats_open,
.read = seq_read,
.write = mmc_bkops_stats_write,
};
void mmc_add_card_debugfs(struct mmc_card *card)
{
struct mmc_host *host = card->host;
@@ -385,6 +816,19 @@ void mmc_add_card_debugfs(struct mmc_card *card)
&mmc_dbg_ext_csd_fops))
goto err;
if (mmc_card_mmc(card) && (card->ext_csd.rev >= 6) &&
(card->host->caps2 & MMC_CAP2_PACKED_WR))
if (!debugfs_create_file("wr_pack_stats", S_IRUSR, root, card,
&mmc_dbg_wr_pack_stats_fops))
goto err;
if (mmc_card_mmc(card) && (card->ext_csd.rev >= 5) &&
(mmc_card_configured_auto_bkops(card) ||
mmc_card_configured_manual_bkops(card)))
if (!debugfs_create_file("bkops_stats", S_IRUSR, root, card,
&mmc_dbg_bkops_stats_fops))
goto err;
return;
err:


@@ -4,6 +4,7 @@
* Copyright (C) 2003 Russell King, All Rights Reserved.
* Copyright (C) 2007-2008 Pierre Ossman
* Copyright (C) 2010 Linus Walleij
* Copyright (c) 2012, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -33,6 +34,9 @@
#include "pwrseq.h"
#define cls_dev_to_mmc_host(d) container_of(d, struct mmc_host, class_dev)
#define MMC_DEVFRQ_DEFAULT_UP_THRESHOLD 35
#define MMC_DEVFRQ_DEFAULT_DOWN_THRESHOLD 5
#define MMC_DEVFRQ_DEFAULT_POLLING_MSEC 100
static DEFINE_IDR(mmc_host_idr);
static DEFINE_SPINLOCK(mmc_host_lock);
@@ -61,6 +65,259 @@ void mmc_unregister_host_class(void)
class_unregister(&mmc_host_class);
}
#ifdef CONFIG_MMC_CLKGATE
static ssize_t clkgate_delay_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
return snprintf(buf, PAGE_SIZE, "%lu\n", host->clkgate_delay);
}
static ssize_t clkgate_delay_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long flags, value;
if (kstrtoul(buf, 0, &value))
return -EINVAL;
spin_lock_irqsave(&host->clk_lock, flags);
host->clkgate_delay = value;
spin_unlock_irqrestore(&host->clk_lock, flags);
return count;
}
/*
* Enabling clock gating will make the core call out to the host
* once up and once down when it performs a request or card operation
* intermingled in any fashion. The driver will see this through
* set_ios() operations with ios.clock field set to 0 to gate (disable)
* the block clock, and to the old frequency to enable it again.
*/
static void mmc_host_clk_gate_delayed(struct mmc_host *host)
{
unsigned long tick_ns;
unsigned long freq = host->ios.clock;
unsigned long flags;
if (!freq) {
pr_debug("%s: frequency set to 0 in disable function, "
"this means the clock is already disabled.\n",
mmc_hostname(host));
return;
}
/*
* New requests may have appeared while we were scheduling,
* then there is no reason to delay the check before
* clk_disable().
*/
spin_lock_irqsave(&host->clk_lock, flags);
/*
* Delay n bus cycles (at least 8 from MMC spec) before attempting
* to disable the MCI block clock. The reference count may have
* gone up again after this delay due to rescheduling!
*/
if (!host->clk_requests) {
spin_unlock_irqrestore(&host->clk_lock, flags);
tick_ns = DIV_ROUND_UP(1000000000, freq);
ndelay(host->clk_delay * tick_ns);
} else {
/* New users appeared while waiting for this work */
spin_unlock_irqrestore(&host->clk_lock, flags);
return;
}
mutex_lock(&host->clk_gate_mutex);
spin_lock_irqsave(&host->clk_lock, flags);
if (!host->clk_requests) {
spin_unlock_irqrestore(&host->clk_lock, flags);
/* This will set host->ios.clock to 0 */
mmc_gate_clock(host);
spin_lock_irqsave(&host->clk_lock, flags);
pr_debug("%s: gated MCI clock\n", mmc_hostname(host));
}
spin_unlock_irqrestore(&host->clk_lock, flags);
mutex_unlock(&host->clk_gate_mutex);
}
/*
* Internal work. Work to disable the clock at some later point.
*/
static void mmc_host_clk_gate_work(struct work_struct *work)
{
struct mmc_host *host = container_of(work, struct mmc_host,
clk_gate_work.work);
mmc_host_clk_gate_delayed(host);
}
/**
* mmc_host_clk_hold - ungate hardware MCI clocks
* @host: host to ungate.
*
* Makes sure the host ios.clock is restored to a non-zero value
* past this call. Increase clock reference count and ungate clock
* if we're the first user.
*/
void mmc_host_clk_hold(struct mmc_host *host)
{
unsigned long flags;
/* cancel any clock gating work scheduled by mmc_host_clk_release() */
cancel_delayed_work_sync(&host->clk_gate_work);
mutex_lock(&host->clk_gate_mutex);
spin_lock_irqsave(&host->clk_lock, flags);
if (host->clk_gated) {
spin_unlock_irqrestore(&host->clk_lock, flags);
mmc_ungate_clock(host);
spin_lock_irqsave(&host->clk_lock, flags);
pr_debug("%s: ungated MCI clock\n", mmc_hostname(host));
}
host->clk_requests++;
spin_unlock_irqrestore(&host->clk_lock, flags);
mutex_unlock(&host->clk_gate_mutex);
}
/**
* mmc_host_may_gate_card - check if this card may be gated
* @card: card to check.
*/
bool mmc_host_may_gate_card(struct mmc_card *card)
{
/* If there is no card we may gate it */
if (!card)
return true;
/*
* SDIO3.0 card allows the clock to be gated off so check if
* that is the case or not.
*/
if (mmc_card_sdio(card) && card->cccr.async_intr_sup)
return true;
/*
* Don't gate SDIO cards! These need to be clocked at all times
* since they may be independent systems generating interrupts
* and other events. The clock requests counter from the core will
* go down to zero since the core does not need it, but we will not
* gate the clock, because there is somebody out there that may still
* be using it.
*/
return !(card->quirks & MMC_QUIRK_BROKEN_CLK_GATING);
}
/**
* mmc_host_clk_release - gate off hardware MCI clocks
* @host: host to gate.
*
* Calls the host driver with ios.clock set to zero as often as possible
* in order to gate off hardware MCI clocks. Decrease clock reference
* count and schedule disabling of clock.
*/
void mmc_host_clk_release(struct mmc_host *host)
{
unsigned long flags;
spin_lock_irqsave(&host->clk_lock, flags);
host->clk_requests--;
if (mmc_host_may_gate_card(host->card) &&
!host->clk_requests)
schedule_delayed_work(&host->clk_gate_work,
msecs_to_jiffies(host->clkgate_delay));
spin_unlock_irqrestore(&host->clk_lock, flags);
}
/**
* mmc_host_clk_rate - get current clock frequency setting
* @host: host to get the clock frequency for.
*
* Returns current clock frequency regardless of gating.
*/
unsigned int mmc_host_clk_rate(struct mmc_host *host)
{
unsigned long freq;
unsigned long flags;
spin_lock_irqsave(&host->clk_lock, flags);
if (host->clk_gated)
freq = host->clk_old;
else
freq = host->ios.clock;
spin_unlock_irqrestore(&host->clk_lock, flags);
return freq;
}
/**
* mmc_host_clk_init - set up clock gating code
* @host: host with potential clock to control
*/
static inline void mmc_host_clk_init(struct mmc_host *host)
{
host->clk_requests = 0;
/* Hold MCI clock for 8 cycles by default */
host->clk_delay = 8;
/*
* Default clock gating delay is 0ms to avoid wasting power.
* This value can be tuned by writing into sysfs entry.
*/
host->clkgate_delay = 0;
host->clk_gated = false;
INIT_DELAYED_WORK(&host->clk_gate_work, mmc_host_clk_gate_work);
spin_lock_init(&host->clk_lock);
mutex_init(&host->clk_gate_mutex);
}
/**
* mmc_host_clk_exit - shut down clock gating code
* @host: host with potential clock to control
*/
static inline void mmc_host_clk_exit(struct mmc_host *host)
{
/*
* Wait for any outstanding gate and then make sure we're
* ungated before exiting.
*/
if (cancel_delayed_work_sync(&host->clk_gate_work))
mmc_host_clk_gate_delayed(host);
if (host->clk_gated)
mmc_host_clk_hold(host);
/* There should be only one user now */
WARN_ON(host->clk_requests > 1);
}
static inline void mmc_host_clk_sysfs_init(struct mmc_host *host)
{
host->clkgate_delay_attr.show = clkgate_delay_show;
host->clkgate_delay_attr.store = clkgate_delay_store;
sysfs_attr_init(&host->clkgate_delay_attr.attr);
host->clkgate_delay_attr.attr.name = "clkgate_delay";
host->clkgate_delay_attr.attr.mode = S_IRUGO | S_IWUSR;
if (device_create_file(&host->class_dev, &host->clkgate_delay_attr))
pr_err("%s: Failed to create clkgate_delay sysfs entry\n",
mmc_hostname(host));
}
#else
static inline void mmc_host_clk_init(struct mmc_host *host)
{
}
static inline void mmc_host_clk_exit(struct mmc_host *host)
{
}
static inline void mmc_host_clk_sysfs_init(struct mmc_host *host)
{
}
bool mmc_host_may_gate_card(struct mmc_card *card)
{
return false;
}
#endif
void mmc_retune_enable(struct mmc_host *host)
{
host->can_retune = 1;
@@ -345,6 +602,8 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
return NULL;
}
mmc_host_clk_init(host);
spin_lock_init(&host->lock);
init_waitqueue_head(&host->wq);
INIT_DELAYED_WORK(&host->detect, mmc_rescan);
@@ -369,6 +628,217 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
EXPORT_SYMBOL(mmc_alloc_host);
static ssize_t show_enable(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%d\n", mmc_can_scale_clk(host));
}
static ssize_t store_enable(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value))
return -EINVAL;
mmc_get_card(host->card);
if (!value) {
/*turning off clock scaling*/
mmc_exit_clk_scaling(host);
host->caps2 &= ~MMC_CAP2_CLK_SCALE;
host->clk_scaling.state = MMC_LOAD_HIGH;
/* Set to max. frequency when disabling */
mmc_clk_update_freq(host, host->card->clk_scaling_highest,
host->clk_scaling.state);
} else if (value) {
/* starting clock scaling, will restart in case started */
host->caps2 |= MMC_CAP2_CLK_SCALE;
if (host->clk_scaling.enable)
mmc_exit_clk_scaling(host);
mmc_init_clk_scaling(host);
}
mmc_put_card(host->card);
return count;
}
static ssize_t show_up_threshold(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%d\n", host->clk_scaling.upthreshold);
}
#define MAX_PERCENTAGE 100
static ssize_t store_up_threshold(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value) || (value > MAX_PERCENTAGE))
return -EINVAL;
host->clk_scaling.upthreshold = value;
pr_debug("%s: clkscale_up_thresh set to %lu\n",
mmc_hostname(host), value);
return count;
}
static ssize_t show_down_threshold(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%d\n",
host->clk_scaling.downthreshold);
}
static ssize_t store_down_threshold(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value) || (value > MAX_PERCENTAGE))
return -EINVAL;
host->clk_scaling.downthreshold = value;
pr_debug("%s: clkscale_down_thresh set to %lu\n",
mmc_hostname(host), value);
return count;
}
static ssize_t show_polling(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%lu milliseconds\n",
host->clk_scaling.polling_delay_ms);
}
static ssize_t store_polling(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value))
return -EINVAL;
host->clk_scaling.polling_delay_ms = value;
pr_debug("%s: clkscale_polling_delay_ms set to %lu\n",
mmc_hostname(host), value);
return count;
}
DEVICE_ATTR(enable, S_IRUGO | S_IWUSR,
show_enable, store_enable);
DEVICE_ATTR(polling_interval, S_IRUGO | S_IWUSR,
show_polling, store_polling);
DEVICE_ATTR(up_threshold, S_IRUGO | S_IWUSR,
show_up_threshold, store_up_threshold);
DEVICE_ATTR(down_threshold, S_IRUGO | S_IWUSR,
show_down_threshold, store_down_threshold);
static struct attribute *clk_scaling_attrs[] = {
&dev_attr_enable.attr,
&dev_attr_up_threshold.attr,
&dev_attr_down_threshold.attr,
&dev_attr_polling_interval.attr,
NULL,
};
static struct attribute_group clk_scaling_attr_grp = {
.name = "clk_scaling",
.attrs = clk_scaling_attrs,
};
#ifdef CONFIG_MMC_PERF_PROFILING
static ssize_t
show_perf(struct device *dev, struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
int64_t rtime_drv, wtime_drv;
unsigned long rbytes_drv, wbytes_drv, flags;
spin_lock_irqsave(&host->lock, flags);
rbytes_drv = host->perf.rbytes_drv;
wbytes_drv = host->perf.wbytes_drv;
rtime_drv = ktime_to_us(host->perf.rtime_drv);
wtime_drv = ktime_to_us(host->perf.wtime_drv);
spin_unlock_irqrestore(&host->lock, flags);
return snprintf(buf, PAGE_SIZE, "Write performance at driver Level:"
"%lu bytes in %lld microseconds\n"
"Read performance at driver Level:"
"%lu bytes in %lld microseconds\n",
wbytes_drv, wtime_drv,
rbytes_drv, rtime_drv);
}
static ssize_t
set_perf(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
int64_t value;
unsigned long flags;
sscanf(buf, "%lld", &value);
spin_lock_irqsave(&host->lock, flags);
if (!value) {
memset(&host->perf, 0, sizeof(host->perf));
host->perf_enable = false;
} else {
host->perf_enable = true;
}
spin_unlock_irqrestore(&host->lock, flags);
return count;
}
static DEVICE_ATTR(perf, S_IRUGO | S_IWUSR,
show_perf, set_perf);
#endif
static struct attribute *dev_attrs[] = {
#ifdef CONFIG_MMC_PERF_PROFILING
&dev_attr_perf.attr,
#endif
NULL,
};
static struct attribute_group dev_attr_grp = {
.attrs = dev_attrs,
};
/**
* mmc_add_host - initialise host hardware
* @host: mmc host
@@ -390,9 +860,25 @@ int mmc_add_host(struct mmc_host *host)
led_trigger_register_simple(dev_name(&host->class_dev), &host->led);
host->clk_scaling.upthreshold = MMC_DEVFRQ_DEFAULT_UP_THRESHOLD;
host->clk_scaling.downthreshold = MMC_DEVFRQ_DEFAULT_DOWN_THRESHOLD;
host->clk_scaling.polling_delay_ms = MMC_DEVFRQ_DEFAULT_POLLING_MSEC;
host->clk_scaling.skip_clk_scale_freq_update = false;
#ifdef CONFIG_DEBUG_FS
mmc_add_host_debugfs(host);
#endif
mmc_host_clk_sysfs_init(host);
err = sysfs_create_group(&host->class_dev.kobj, &clk_scaling_attr_grp);
if (err)
pr_err("%s: failed to create clk scale sysfs group with err %d\n",
__func__, err);
err = sysfs_create_group(&host->class_dev.kobj, &dev_attr_grp);
if (err)
pr_err("%s: failed to create sysfs group with err %d\n",
__func__, err);
mmc_start_host(host);
if (!(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
@@ -421,10 +907,14 @@ void mmc_remove_host(struct mmc_host *host)
#ifdef CONFIG_DEBUG_FS
mmc_remove_host_debugfs(host);
#endif
sysfs_remove_group(&host->parent->kobj, &dev_attr_grp);
sysfs_remove_group(&host->class_dev.kobj, &clk_scaling_attr_grp);
device_del(&host->class_dev);
led_trigger_unregister_simple(host->led);
mmc_host_clk_exit(host);
}
EXPORT_SYMBOL(mmc_remove_host);

File diff suppressed because it is too large


@@ -465,6 +465,45 @@ int mmc_switch_status_error(struct mmc_host *host, u32 status)
return 0;
}
/**
* mmc_prepare_switch - helper; prepare to modify EXT_CSD register
* @card: the MMC card associated with the data transfer
* @set: cmd set values
* @index: EXT_CSD register index
* @value: value to program into EXT_CSD register
* @tout_ms: timeout (ms) for operation performed by register write,
* timeout of zero implies maximum possible timeout
* @use_busy_signal: use the busy signal as response type
*
* Helper to prepare to modify EXT_CSD register for selected card.
*/
static inline void mmc_prepare_switch(struct mmc_command *cmd, u8 index,
u8 value, u8 set, unsigned int tout_ms,
bool use_busy_signal)
{
cmd->opcode = MMC_SWITCH;
cmd->arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
(index << 16) |
(value << 8) |
set;
cmd->flags = MMC_CMD_AC;
cmd->busy_timeout = tout_ms;
if (use_busy_signal)
cmd->flags |= MMC_RSP_SPI_R1B | MMC_RSP_R1B;
else
cmd->flags |= MMC_RSP_SPI_R1 | MMC_RSP_R1;
}
int __mmc_switch_cmdq_mode(struct mmc_command *cmd, u8 set, u8 index, u8 value,
unsigned int timeout_ms, bool use_busy_signal,
bool ignore_timeout)
{
mmc_prepare_switch(cmd, index, value, set, timeout_ms, use_busy_signal);
return 0;
}
EXPORT_SYMBOL(__mmc_switch_cmdq_mode);
/**
* __mmc_switch - modify EXT_CSD register
* @card: the MMC card associated with the data transfer
@@ -489,6 +528,7 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
unsigned long timeout;
u32 status = 0;
bool use_r1b_resp = use_busy_signal;
int retries = 5;
mmc_retune_hold(host);
@@ -502,12 +542,8 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
(timeout_ms > host->max_busy_timeout))
use_r1b_resp = false;
cmd.opcode = MMC_SWITCH;
cmd.arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
(index << 16) |
(value << 8) |
set;
cmd.flags = MMC_CMD_AC;
mmc_prepare_switch(&cmd, index, value, set, timeout_ms,
use_r1b_resp);
if (use_r1b_resp) {
cmd.flags |= MMC_RSP_SPI_R1B | MMC_RSP_R1B;
/*
@@ -521,6 +557,8 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
if (index == EXT_CSD_SANITIZE_START)
cmd.sanitize_busy = true;
else if (index == EXT_CSD_BKOPS_START)
cmd.bkops_busy = true;
err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
if (err)
@@ -566,11 +604,18 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
/* Timeout if the device never leaves the program state. */
if (time_after(jiffies, timeout)) {
pr_err("%s: Card stuck in programming state! %s\n",
mmc_hostname(host), __func__);
pr_err("%s: Card stuck in programming state! %s, timeout:%ums, retries:%d\n",
mmc_hostname(host), __func__,
timeout_ms, retries);
if (retries)
timeout = jiffies +
msecs_to_jiffies(timeout_ms);
else {
err = -ETIMEDOUT;
goto out;
}
retries--;
}
} while (R1_CURRENT_STATE(status) == R1_STATE_PRG);
err = mmc_switch_status_error(host, status);
@@ -713,7 +758,10 @@ mmc_send_bus_test(struct mmc_card *card, struct mmc_host *host, u8 opcode,
data.sg = &sg;
data.sg_len = 1;
data.timeout_ns = 1000000;
data.timeout_clks = 0;
mmc_set_data_timeout(&data, card);
sg_init_one(&sg, data_buf, len);
mmc_wait_for_req(host, &mrq);
err = 0;
@@ -762,7 +810,7 @@ int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status)
unsigned int opcode;
int err;
if (!card->ext_csd.hpi) {
if (!card->ext_csd.hpi_en) {
pr_warn("%s: Card didn't support HPI command\n",
mmc_hostname(card->host));
return -EINVAL;
@@ -779,7 +827,7 @@ int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status)
err = mmc_wait_for_cmd(card->host, &cmd, 0);
if (err) {
pr_warn("%s: error %d interrupting operation. "
pr_debug("%s: error %d interrupting operation. "
"HPI command response %#x\n", mmc_hostname(card->host),
err, cmd.resp[0]);
return err;
@@ -794,3 +842,21 @@ int mmc_can_ext_csd(struct mmc_card *card)
{
return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
}
int mmc_discard_queue(struct mmc_host *host, u32 tasks)
{
struct mmc_command cmd = {0};
cmd.opcode = MMC_CMDQ_TASK_MGMT;
if (tasks) {
cmd.arg = DISCARD_TASK;
cmd.arg |= (tasks << 16);
} else {
cmd.arg = DISCARD_QUEUE;
}
cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
return mmc_wait_for_cmd(host, &cmd, 0);
}
EXPORT_SYMBOL(mmc_discard_queue);


@@ -27,6 +27,7 @@ int mmc_spi_set_crc(struct mmc_host *host, int use_crc);
int mmc_bus_test(struct mmc_card *card, u8 bus_width);
int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status);
int mmc_can_ext_csd(struct mmc_card *card);
int mmc_discard_queue(struct mmc_host *host, u32 tasks);
int mmc_switch_status_error(struct mmc_host *host, u32 status);
int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
unsigned int timeout_ms, bool use_busy_signal, bool send_status,


@@ -35,7 +35,85 @@
#define SDIO_DEVICE_ID_MARVELL_8797_F0 0x9128
#endif
#ifndef SDIO_VENDOR_ID_MSM
#define SDIO_VENDOR_ID_MSM 0x0070
#endif
#ifndef SDIO_DEVICE_ID_MSM_WCN1314
#define SDIO_DEVICE_ID_MSM_WCN1314 0x2881
#endif
#ifndef SDIO_VENDOR_ID_MSM_QCA
#define SDIO_VENDOR_ID_MSM_QCA 0x271
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6003_1
#define SDIO_DEVICE_ID_MSM_QCA_AR6003_1 0x300
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6003_2
#define SDIO_DEVICE_ID_MSM_QCA_AR6003_2 0x301
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6004_1
#define SDIO_DEVICE_ID_MSM_QCA_AR6004_1 0x400
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6004_2
#define SDIO_DEVICE_ID_MSM_QCA_AR6004_2 0x401
#endif
#ifndef SDIO_VENDOR_ID_QCA6574
#define SDIO_VENDOR_ID_QCA6574 0x271
#endif
#ifndef SDIO_DEVICE_ID_QCA6574
#define SDIO_DEVICE_ID_QCA6574 0x50a
#endif
#ifndef SDIO_VENDOR_ID_QCA9377
#define SDIO_VENDOR_ID_QCA9377 0x271
#endif
#ifndef SDIO_DEVICE_ID_QCA9377
#define SDIO_DEVICE_ID_QCA9377 0x701
#endif
/*
* This hook just adds a quirk for all sdio devices
*/
static void add_quirk_for_sdio_devices(struct mmc_card *card, int data)
{
if (mmc_card_sdio(card))
card->quirks |= data;
}
static const struct mmc_fixup mmc_fixup_methods[] = {
/* by default sdio devices are considered CLK_GATING broken */
/* good cards will be whitelisted as they are tested */
SDIO_FIXUP(SDIO_ANY_ID, SDIO_ANY_ID,
add_quirk_for_sdio_devices,
MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM, SDIO_DEVICE_ID_MSM_WCN1314,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6003_1,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6003_2,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6004_1,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6004_2,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271,
add_quirk, MMC_QUIRK_NONSTD_FUNC_IF),
@@ -48,6 +126,11 @@ static const struct mmc_fixup mmc_fixup_methods[] = {
SDIO_FIXUP(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8797_F0,
add_quirk, MMC_QUIRK_BROKEN_IRQ_POLLING),
SDIO_FIXUP(SDIO_VENDOR_ID_QCA6574, SDIO_DEVICE_ID_QCA6574,
add_quirk, MMC_QUIRK_QCA6574_SETTINGS),
SDIO_FIXUP(SDIO_VENDOR_ID_QCA9377, SDIO_DEVICE_ID_QCA9377,
add_quirk, MMC_QUIRK_QCA9377_SETTINGS),
END_FIXUP
};
@@ -68,6 +151,8 @@ void mmc_fixup_device(struct mmc_card *card, const struct mmc_fixup *table)
(f->name == CID_NAME_ANY ||
!strncmp(f->name, card->cid.prod_name,
sizeof(card->cid.prod_name))) &&
(f->ext_csd_rev == EXT_CSD_REV_ANY ||
f->ext_csd_rev == card->ext_csd.rev) &&
(f->cis_vendor == card->cis.vendor ||
f->cis_vendor == (u16) SDIO_ANY_ID) &&
(f->cis_device == card->cis.device ||
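The extra `ext_csd_rev` test added above slots into mmc_fixup_device()'s wildcard matching: every field must either equal the card's value or be the ANY wildcard. A reduced sketch of that match, with the field set trimmed and names illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ANY_ID	0xFFFF	/* stand-in for SDIO_ANY_ID / EXT_CSD_REV_ANY */

/* Reduced form of the fixup-table match: a fixup applies only when
 * every field is a wildcard or equals the card's value. */
static bool fixup_matches(uint16_t f_vendor, uint16_t f_device, int f_rev,
			  uint16_t vendor, uint16_t device, int rev)
{
	return (f_vendor == ANY_ID || f_vendor == vendor) &&
	       (f_device == ANY_ID || f_device == device) &&
	       (f_rev == ANY_ID || f_rev == rev);
}
```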


@@ -27,6 +27,12 @@
#include "sd.h"
#include "sd_ops.h"
#define UHS_SDR104_MIN_DTR (100 * 1000 * 1000)
#define UHS_DDR50_MIN_DTR (50 * 1000 * 1000)
#define UHS_SDR50_MIN_DTR (50 * 1000 * 1000)
#define UHS_SDR25_MIN_DTR (25 * 1000 * 1000)
#define UHS_SDR12_MIN_DTR (12.5 * 1000 * 1000)
static const unsigned int tran_exp[] = {
10000, 100000, 1000000, 10000000,
0, 0, 0, 0
@@ -369,9 +375,9 @@ int mmc_sd_switch_hs(struct mmc_card *card)
goto out;
if ((status[16] & 0xF) != 1) {
pr_warn("%s: Problem switching card into high-speed mode!\n",
mmc_hostname(card->host));
err = 0;
pr_warn("%s: Problem switching card into high-speed mode!, status:%x\n",
mmc_hostname(card->host), (status[16] & 0xF));
err = -EBUSY;
} else {
err = 1;
}
@@ -425,18 +431,22 @@ static void sd_update_bus_speed_mode(struct mmc_card *card)
}
if ((card->host->caps & MMC_CAP_UHS_SDR104) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR104)) {
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR104) &&
(card->host->f_max > UHS_SDR104_MIN_DTR)) {
card->sd_bus_speed = UHS_SDR104_BUS_SPEED;
} else if ((card->host->caps & MMC_CAP_UHS_DDR50) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_DDR50)) {
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_DDR50) &&
(card->host->f_max > UHS_DDR50_MIN_DTR)) {
card->sd_bus_speed = UHS_DDR50_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50)) && (card->sw_caps.sd3_bus_mode &
SD_MODE_UHS_SDR50)) {
SD_MODE_UHS_SDR50) &&
(card->host->f_max > UHS_SDR50_MIN_DTR)) {
card->sd_bus_speed = UHS_SDR50_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR25)) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR25)) {
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR25) &&
(card->host->f_max > UHS_SDR25_MIN_DTR)) {
card->sd_bus_speed = UHS_SDR25_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR25 |
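Each branch above now also requires the host clock ceiling (`f_max`) to exceed the mode's minimum transfer rate before that mode is selected. A compact sketch of the selection order for the top grades; the constants mirror the UHS_*_MIN_DTR defines (in Hz) and the helper is illustrative:

```c
#include <assert.h>

enum bus_speed { SDR12, SDR25, SDR50, DDR50, SDR104 };

/* Pick the fastest mode both sides support and the host clock can
 * actually reach; falls back to SDR12, which is always available. */
static enum bus_speed pick_bus_speed(unsigned int f_max,
				     int sdr104, int ddr50, int sdr50)
{
	if (sdr104 && f_max > 100000000)
		return SDR104;
	if (ddr50 && f_max > 50000000)
		return DDR50;
	if (sdr50 && f_max > 50000000)
		return SDR50;
	return SDR12;
}
```

Note the strict `>`: a host whose f_max is exactly 100 MHz will not pick SDR104 and drops to DDR50 instead.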
@@ -480,15 +490,17 @@ static int sd_set_bus_speed_mode(struct mmc_card *card, u8 *status)
if (err)
return err;
if ((status[16] & 0xF) != card->sd_bus_speed)
pr_warn("%s: Problem setting bus speed mode!\n",
mmc_hostname(card->host));
else {
if ((status[16] & 0xF) != card->sd_bus_speed) {
pr_warn("%s: Problem setting bus speed mode(%u)! max_dtr:%u, timing:%u, status:%x\n",
mmc_hostname(card->host), card->sd_bus_speed,
card->sw_caps.uhs_max_dtr, timing, (status[16] & 0xF));
err = -EBUSY;
} else {
mmc_set_timing(card->host, timing);
mmc_set_clock(card->host, card->sw_caps.uhs_max_dtr);
}
return 0;
return err;
}
/* Get host's max current setting at its current voltage */
@@ -569,6 +581,64 @@ static int sd_set_current_limit(struct mmc_card *card, u8 *status)
return 0;
}
/**
* mmc_sd_change_bus_speed() - Change SD card bus frequency at runtime
* @host: pointer to mmc host structure
* @freq: pointer to desired frequency to be set
*
* Change the SD card bus frequency at runtime after the card is
* initialized. Callers are expected to make sure of the card's
* state (DATA/RCV/TRANSFER) before changing the frequency at runtime.
*
* If the frequency to change is greater than max. supported by card,
* *freq is changed to max. supported by card and if it is less than min.
* supported by host, *freq is changed to min. supported by host.
*/
static int mmc_sd_change_bus_speed(struct mmc_host *host, unsigned long *freq)
{
int err = 0;
struct mmc_card *card;
mmc_claim_host(host);
/*
* Assign card pointer after claiming host to avoid race
* conditions that may arise during removal of the card.
*/
card = host->card;
/* sanity checks */
if (!card || !freq) {
err = -EINVAL;
goto out;
}
mmc_set_clock(host, (unsigned int) (*freq));
if (!mmc_host_is_spi(card->host) && mmc_card_uhs(card)
&& card->host->ops->execute_tuning) {
/*
* We probe the host driver for tuning at any
* frequency; it is the host driver's responsibility to
* perform actual tuning only when required.
*/
mmc_host_clk_hold(card->host);
err = card->host->ops->execute_tuning(card->host,
MMC_SEND_TUNING_BLOCK);
mmc_host_clk_release(card->host);
if (err) {
pr_warn("%s: %s: tuning execution failed %d. Restoring to previous clock %lu\n",
mmc_hostname(card->host), __func__, err,
host->clk_scaling.curr_freq);
mmc_set_clock(host, host->clk_scaling.curr_freq);
}
}
out:
mmc_release_host(host);
return err;
}
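The doc comment above says the requested frequency is clamped to the card's maximum and the host's minimum before being applied; the clamp itself reduces to:

```c
#include <assert.h>

/* Clamp as described in the mmc_sd_change_bus_speed() comment above:
 * cap at the card's maximum, then raise to the host's minimum. */
static unsigned long clamp_bus_freq(unsigned long freq,
				    unsigned long host_min,
				    unsigned long card_max)
{
	if (freq > card_max)
		freq = card_max;
	if (freq < host_min)
		freq = host_min;
	return freq;
}
```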
/*
* UHS-I specific initialization procedure
*/
@@ -800,7 +870,9 @@ static int mmc_sd_get_ro(struct mmc_host *host)
if (!host->ops->get_ro)
return -1;
mmc_host_clk_hold(host);
ro = host->ops->get_ro(host);
mmc_host_clk_release(host);
return ro;
}
@@ -895,7 +967,10 @@ unsigned mmc_sd_get_max_clock(struct mmc_card *card)
{
unsigned max_dtr = (unsigned int)-1;
if (mmc_card_hs(card)) {
if (mmc_card_uhs(card)) {
if (max_dtr > card->sw_caps.uhs_max_dtr)
max_dtr = card->sw_caps.uhs_max_dtr;
} else if (mmc_card_hs(card)) {
if (max_dtr > card->sw_caps.hs_max_dtr)
max_dtr = card->sw_caps.hs_max_dtr;
} else if (max_dtr > card->csd.max_dtr) {
@@ -957,6 +1032,7 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
err = mmc_send_relative_addr(host, &card->rca);
if (err)
goto free_card;
host->card = card;
}
if (!oldcard) {
@@ -1020,12 +1096,16 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
}
}
host->card = card;
card->clk_scaling_highest = mmc_sd_get_max_clock(card);
card->clk_scaling_lowest = host->f_min;
return 0;
free_card:
if (!oldcard)
if (!oldcard) {
host->card = NULL;
mmc_remove_card(card);
}
return err;
}
@@ -1038,8 +1118,12 @@ static void mmc_sd_remove(struct mmc_host *host)
BUG_ON(!host);
BUG_ON(!host->card);
mmc_exit_clk_scaling(host);
mmc_remove_card(host->card);
mmc_claim_host(host);
host->card = NULL;
mmc_release_host(host);
}
/*
@@ -1081,6 +1165,7 @@ static void mmc_sd_detect(struct mmc_host *host)
if (!retries) {
printk(KERN_ERR "%s(%s): Unable to re-detect card (%d)\n",
__func__, mmc_hostname(host), err);
err = _mmc_detect_card_removed(host);
}
#else
err = _mmc_detect_card_removed(host);
@@ -1105,6 +1190,13 @@ static int _mmc_sd_suspend(struct mmc_host *host)
BUG_ON(!host);
BUG_ON(!host->card);
err = mmc_suspend_clk_scaling(host);
if (err) {
pr_err("%s: %s: fail to suspend clock scaling (%d)\n",
mmc_hostname(host), __func__, err);
return err;
}
mmc_claim_host(host);
if (mmc_card_suspended(host->card))
@@ -1167,8 +1259,11 @@ static int _mmc_sd_resume(struct mmc_host *host)
if (err) {
printk(KERN_ERR "%s: Re-init card rc = %d (retries = %d)\n",
mmc_hostname(host), err, retries);
mdelay(5);
retries--;
mmc_power_off(host);
usleep_range(5000, 5500);
mmc_power_up(host, host->card->ocr);
mmc_select_voltage(host, host->card->ocr);
continue;
}
break;
@@ -1178,6 +1273,13 @@ static int _mmc_sd_resume(struct mmc_host *host)
#endif
mmc_card_clr_suspended(host->card);
err = mmc_resume_clk_scaling(host);
if (err) {
pr_err("%s: %s: fail to resume clock scaling (%d)\n",
mmc_hostname(host), __func__, err);
goto out;
}
out:
mmc_release_host(host);
return err;
@@ -1250,7 +1352,7 @@ static const struct mmc_bus_ops mmc_sd_ops = {
.suspend = mmc_sd_suspend,
.resume = mmc_sd_resume,
.alive = mmc_sd_alive,
.shutdown = mmc_sd_suspend,
.change_bus_speed = mmc_sd_change_bus_speed,
.reset = mmc_sd_reset,
};
@@ -1306,6 +1408,10 @@ int mmc_attach_sd(struct mmc_host *host)
err = mmc_sd_init_card(host, rocr, NULL);
if (err) {
retries--;
mmc_power_off(host);
usleep_range(5000, 5500);
mmc_power_up(host, rocr);
mmc_select_voltage(host, rocr);
continue;
}
break;
@@ -1328,6 +1434,13 @@ int mmc_attach_sd(struct mmc_host *host)
goto remove_card;
mmc_claim_host(host);
err = mmc_init_clk_scaling(host);
if (err) {
mmc_release_host(host);
goto remove_card;
}
return 0;
remove_card:


@@ -187,6 +187,23 @@ static int sdio_read_cccr(struct mmc_card *card, u32 ocr)
card->sw_caps.sd3_drv_type |= SD_DRIVER_TYPE_C;
if (data & SDIO_DRIVE_SDTD)
card->sw_caps.sd3_drv_type |= SD_DRIVER_TYPE_D;
ret = mmc_io_rw_direct(card, 0, 0,
SDIO_CCCR_INTERRUPT_EXTENSION, 0, &data);
if (ret)
goto out;
if (data & SDIO_SUPPORT_ASYNC_INTR) {
if (card->host->caps2 &
MMC_CAP2_ASYNC_SDIO_IRQ_4BIT_MODE) {
data |= SDIO_ENABLE_ASYNC_INTR;
ret = mmc_io_rw_direct(card, 1, 0,
SDIO_CCCR_INTERRUPT_EXTENSION,
data, NULL);
if (ret)
goto out;
card->cccr.async_intr_sup = 1;
}
}
}
/* if no uhs mode ensure we check for high speed */
@@ -205,12 +222,60 @@ out:
return ret;
}
static void sdio_enable_vendor_specific_settings(struct mmc_card *card)
{
int ret;
u8 settings;
if (mmc_enable_qca6574_settings(card) ||
mmc_enable_qca9377_settings(card)) {
ret = mmc_io_rw_direct(card, 1, 0, 0xF2, 0x0F, NULL);
if (ret) {
pr_crit("%s: failed to write to fn 0xf2 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
ret = mmc_io_rw_direct(card, 0, 0, 0xF1, 0, &settings);
if (ret) {
pr_crit("%s: failed to read fn 0xf1 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
settings |= 0x80;
ret = mmc_io_rw_direct(card, 1, 0, 0xF1, settings, NULL);
if (ret) {
pr_crit("%s: failed to write to fn 0xf1 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
ret = mmc_io_rw_direct(card, 0, 0, 0xF0, 0, &settings);
if (ret) {
pr_crit("%s: failed to read fn 0xf0 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
settings |= 0x20;
ret = mmc_io_rw_direct(card, 1, 0, 0xF0, settings, NULL);
if (ret) {
pr_crit("%s: failed to write to fn 0xf0 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
}
out:
return;
}
static int sdio_enable_wide(struct mmc_card *card)
{
int ret;
u8 ctrl;
if (!(card->host->caps & MMC_CAP_4_BIT_DATA))
if (!(card->host->caps & (MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA)))
return 0;
if (card->cccr.low_speed && !card->cccr.wide_bus)
@@ -226,6 +291,9 @@ static int sdio_enable_wide(struct mmc_card *card)
/* set as 4-bit bus width */
ctrl &= ~SDIO_BUS_WIDTH_MASK;
if (card->host->caps & MMC_CAP_8_BIT_DATA)
ctrl |= SDIO_BUS_WIDTH_8BIT;
else if (card->host->caps & MMC_CAP_4_BIT_DATA)
ctrl |= SDIO_BUS_WIDTH_4BIT;
ret = mmc_io_rw_direct(card, 1, 0, SDIO_CCCR_IF, ctrl, NULL);
@@ -267,7 +335,7 @@ static int sdio_disable_wide(struct mmc_card *card)
int ret;
u8 ctrl;
if (!(card->host->caps & MMC_CAP_4_BIT_DATA))
if (!(card->host->caps & (MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA)))
return 0;
if (card->cccr.low_speed && !card->cccr.wide_bus)
@@ -277,10 +345,10 @@ static int sdio_disable_wide(struct mmc_card *card)
if (ret)
return ret;
if (!(ctrl & SDIO_BUS_WIDTH_4BIT))
if (!(ctrl & (SDIO_BUS_WIDTH_4BIT | SDIO_BUS_WIDTH_8BIT)))
return 0;
ctrl &= ~SDIO_BUS_WIDTH_4BIT;
ctrl &= ~(SDIO_BUS_WIDTH_4BIT | SDIO_BUS_WIDTH_8BIT);
ctrl |= SDIO_BUS_ASYNC_INT;
ret = mmc_io_rw_direct(card, 1, 0, SDIO_CCCR_IF, ctrl, NULL);
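sdio_disable_wide() above now clears both width bits before re-enabling async-interrupt signalling; the register update is a plain read-modify-write. A sketch using the 4-bit and async-interrupt bit values from the SDIO bus-interface-control register; the 8-bit width value is an assumption of this sketch:

```c
#include <assert.h>
#include <stdint.h>

#define SDIO_BUS_WIDTH_4BIT	0x02
#define SDIO_BUS_WIDTH_8BIT	0x03	/* assumed value for this sketch */
#define SDIO_BUS_ASYNC_INT	0x20

/* Mirror of the SDIO_CCCR_IF update above: drop any wide-bus setting,
 * keep asynchronous interrupts enabled. */
static uint8_t sdio_ctrl_to_1bit(uint8_t ctrl)
{
	ctrl &= ~(SDIO_BUS_WIDTH_4BIT | SDIO_BUS_WIDTH_8BIT);
	ctrl |= SDIO_BUS_ASYNC_INT;
	return ctrl;
}
```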
@@ -497,6 +565,9 @@ static int sdio_set_bus_speed_mode(struct mmc_card *card)
if (err)
return err;
/* Vendor specific settings based on card quirks */
sdio_enable_vendor_specific_settings(card);
speed &= ~SDIO_SPEED_BSS_MASK;
speed |= bus_speed;
err = mmc_io_rw_direct(card, 1, 0, SDIO_CCCR_SPEED, speed, NULL);
@@ -623,8 +694,11 @@ try_again:
/*
* Call the optional HC's init_card function to handle quirks.
*/
if (host->ops->init_card)
if (host->ops->init_card) {
mmc_host_clk_hold(host);
host->ops->init_card(host, card);
mmc_host_clk_release(host);
}
/*
* If the host and card support UHS-I mode request the card
@@ -791,7 +865,12 @@ try_again:
* Switch to wider bus (if supported).
*/
err = sdio_enable_4bit_bus(card);
if (err)
if (err > 0) {
if (card->host->caps & MMC_CAP_8_BIT_DATA)
mmc_set_bus_width(card->host, MMC_BUS_WIDTH_8);
else if (card->host->caps & MMC_CAP_4_BIT_DATA)
mmc_set_bus_width(card->host, MMC_BUS_WIDTH_4);
} else if (err)
goto remove;
}
finish:
@@ -925,6 +1004,8 @@ static int mmc_sdio_suspend(struct mmc_host *host)
if (!mmc_card_keep_power(host)) {
mmc_power_off(host);
} else if (host->ios.clock) {
mmc_gate_clock(host);
} else if (host->retune_period) {
mmc_retune_timer_stop(host);
mmc_retune_needed(host);
@@ -974,13 +1055,23 @@ static int mmc_sdio_resume(struct mmc_host *host)
} else if (mmc_card_keep_power(host) && mmc_card_wake_sdio_irq(host)) {
/* We may have switched to 1-bit mode during suspend */
err = sdio_enable_4bit_bus(host->card);
if (err > 0) {
if (host->caps & MMC_CAP_8_BIT_DATA)
mmc_set_bus_width(host, MMC_BUS_WIDTH_8);
else if (host->caps & MMC_CAP_4_BIT_DATA)
mmc_set_bus_width(host, MMC_BUS_WIDTH_4);
err = 0;
}
}
if (!err && host->sdio_irqs) {
if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD))
if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD)) {
wake_up_process(host->sdio_irq_thread);
else if (host->caps & MMC_CAP_SDIO_IRQ)
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
}
mmc_release_host(host);
@@ -1220,38 +1311,6 @@ err:
int sdio_reset_comm(struct mmc_card *card)
{
struct mmc_host *host = card->host;
u32 ocr;
u32 rocr;
int err;
printk("%s():\n", __func__);
mmc_claim_host(host);
mmc_go_idle(host);
mmc_set_clock(host, host->f_min);
err = mmc_send_io_op_cond(host, 0, &ocr);
if (err)
goto err;
rocr = mmc_select_voltage(host, ocr);
if (!rocr) {
err = -EINVAL;
goto err;
}
err = mmc_sdio_init_card(host, rocr, card, 0);
if (err)
goto err;
mmc_release_host(host);
return 0;
err:
printk("%s: Error resetting SDIO communications (%d)\n",
mmc_hostname(host), err);
mmc_release_host(host);
return err;
return mmc_power_restore_host(card->host);
}
EXPORT_SYMBOL(sdio_reset_comm);


@@ -55,7 +55,7 @@ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func,
for (i = 0; i < nr_strings; i++) {
buffer[i] = string;
strcpy(string, buf);
strlcpy(string, buf, strlen(buf) + 1);
string += strlen(string) + 1;
buf += strlen(buf) + 1;
}
@@ -270,8 +276,16 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func)
break;
/* null entries have no link field or data */
if (tpl_code == 0x00)
if (tpl_code == 0x00) {
if (card->cis.vendor == 0x70 &&
(card->cis.device == 0x2460 ||
card->cis.device == 0x0460 ||
card->cis.device == 0x23F1 ||
card->cis.device == 0x23F0))
break;
else
continue;
}
ret = mmc_io_rw_direct(card, 0, 0, ptr++, 0, &tpl_link);
if (ret)


@@ -93,7 +93,9 @@ void sdio_run_irqs(struct mmc_host *host)
{
mmc_claim_host(host);
host->sdio_irq_pending = true;
mmc_host_clk_hold(host);
process_sdio_pending_irqs(host);
mmc_host_clk_release(host);
mmc_release_host(host);
}
EXPORT_SYMBOL_GPL(sdio_run_irqs);
@@ -104,6 +106,7 @@ static int sdio_irq_thread(void *_host)
struct sched_param param = { .sched_priority = 1 };
unsigned long period, idle_period;
int ret;
bool ws;
sched_setscheduler(current, SCHED_FIFO, &param);
@@ -137,6 +140,17 @@ static int sdio_irq_thread(void *_host)
ret = __mmc_claim_host(host, &host->sdio_irq_thread_abort);
if (ret)
break;
ws = false;
/*
* prevent suspend if it has started when scheduled;
* 100 msec (approx. value) should be enough for the system to
* resume and attend to the card's request
*/
if ((host->dev_status == DEV_SUSPENDING) ||
(host->dev_status == DEV_SUSPENDED)) {
pm_wakeup_event(&host->card->dev, 100);
ws = true;
}
ret = process_sdio_pending_irqs(host);
host->sdio_irq_pending = false;
mmc_release_host(host);
@@ -168,15 +182,27 @@ static int sdio_irq_thread(void *_host)
}
set_current_state(TASK_INTERRUPTIBLE);
if (host->caps & MMC_CAP_SDIO_IRQ)
if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
/*
* Function drivers will have processed the card's event
* unless we are suspended, so release the wake source.
*/
if (ws && (host->dev_status == DEV_RESUMED))
pm_relax(&host->card->dev);
if (!kthread_should_stop())
schedule_timeout(period);
set_current_state(TASK_RUNNING);
} while (!kthread_should_stop());
if (host->caps & MMC_CAP_SDIO_IRQ)
if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 0);
mmc_host_clk_release(host);
}
pr_debug("%s: IRQ thread exiting with code %d\n",
mmc_hostname(host), ret);
@@ -202,7 +228,9 @@ static int sdio_card_irq_get(struct mmc_card *card)
return err;
}
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
}
@@ -221,7 +249,9 @@ static int sdio_card_irq_put(struct mmc_card *card)
atomic_set(&host->sdio_irq_thread_abort, 1);
kthread_stop(host->sdio_irq_thread);
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 0);
mmc_host_clk_release(host);
}
}


@@ -405,18 +405,39 @@ config MMC_ATMELMCI
If unsure, say N.
config MMC_SDHCI_MSM
tristate "Qualcomm SDHCI Controller Support"
depends on ARCH_QCOM || (ARM && COMPILE_TEST)
tristate "Qualcomm Technologies, Inc. SDHCI Controller Support"
depends on ARCH_QCOM || ARCH_MSM || (ARM && COMPILE_TEST)
depends on MMC_SDHCI_PLTFM
select PM_DEVFREQ
select DEVFREQ_GOV_SIMPLE_ONDEMAND
help
This selects the Secure Digital Host Controller Interface (SDHCI)
support present in Qualcomm SOCs. The controller supports
SD/MMC/SDIO devices.
support present in Qualcomm Technologies, Inc. SOCs. The controller
supports SD/MMC/SDIO devices.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_SDHCI_MSM_ICE
bool "Qualcomm Technologies, Inc Inline Crypto Engine for SDHCI core"
depends on MMC_SDHCI_MSM && CRYPTO_DEV_QCOM_ICE
help
This selects the QTI specific additions to support Inline Crypto
Engine (ICE). ICE accelerates the crypto operations and maintains
the high SDHCI performance.
Select this if you have ICE supported for SDHCI on QTI chipset.
If unsure, say N.
config MMC_MSM
tristate "Qualcomm SDCC Controller Support"
depends on MMC && (ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50)
help
This provides support for the SD/MMC cell found in the
MSM and QSD SOCs from Qualcomm. The controller also has
support for SDIO devices.
config MMC_MXC
tristate "Freescale i.MX21/27/31 or MPC512x Multimedia Card support"
depends on ARCH_MXC || PPC_MPC512x
@@ -772,6 +793,19 @@ config MMC_SUNXI
This selects support for the SD/MMC Host Controller on
Allwinner sunxi SoCs.
config MMC_CQ_HCI
tristate "Command Queue Support"
depends on HAS_DMA
help
This selects the Command Queue Host Controller Interface (CQHCI)
support present in host controllers of Qualcomm Technologies, Inc
amongst others.
This controller supports eMMC devices with command queue support.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_TOSHIBA_PCI
tristate "Toshiba Type A SD/MMC Card Interface Driver"
depends on PCI


@@ -72,9 +72,11 @@ obj-$(CONFIG_MMC_SDHCI_OF_ESDHC) += sdhci-of-esdhc.o
obj-$(CONFIG_MMC_SDHCI_OF_HLWD) += sdhci-of-hlwd.o
obj-$(CONFIG_MMC_SDHCI_BCM_KONA) += sdhci-bcm-kona.o
obj-$(CONFIG_MMC_SDHCI_BCM2835) += sdhci-bcm2835.o
obj-$(CONFIG_MMC_SDHCI_IPROC) += sdhci-iproc.o
obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o
obj-$(CONFIG_MMC_SDHCI_MSM_ICE) += sdhci-msm-ice.o
obj-$(CONFIG_MMC_SDHCI_IPROC) += sdhci-iproc.o
obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o
obj-$(CONFIG_MMC_CQ_HCI) += cmdq_hci.o
ifeq ($(CONFIG_CB710_DEBUG),y)
CFLAGS-cb710-mmc += -DDEBUG

drivers/mmc/host/cmdq_hci.c (new file, 1134 lines; diff suppressed because it is too large)

drivers/mmc/host/cmdq_hci.h (new file, 236 lines)

@@ -0,0 +1,236 @@
/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef LINUX_MMC_CQ_HCI_H
#define LINUX_MMC_CQ_HCI_H
#include <linux/mmc/core.h>
/* registers */
/* version */
#define CQVER 0x00
/* capabilities */
#define CQCAP 0x04
/* configuration */
#define CQCFG 0x08
#define CQ_DCMD 0x00001000
#define CQ_TASK_DESC_SZ 0x00000100
#define CQ_ENABLE 0x00000001
/* control */
#define CQCTL 0x0C
#define CLEAR_ALL_TASKS 0x00000100
#define HALT 0x00000001
/* interrupt status */
#define CQIS 0x10
#define CQIS_HAC (1 << 0)
#define CQIS_TCC (1 << 1)
#define CQIS_RED (1 << 2)
#define CQIS_TCL (1 << 3)
/* interrupt status enable */
#define CQISTE 0x14
/* interrupt signal enable */
#define CQISGE 0x18
/* interrupt coalescing */
#define CQIC 0x1C
#define CQIC_ENABLE (1 << 31)
#define CQIC_RESET (1 << 16)
#define CQIC_ICCTHWEN (1 << 15)
#define CQIC_ICCTH(x) ((x & 0x1F) << 8)
#define CQIC_ICTOVALWEN (1 << 7)
#define CQIC_ICTOVAL(x) (x & 0x7F)
/* task list base address */
#define CQTDLBA 0x20
/* task list base address upper */
#define CQTDLBAU 0x24
/* door-bell */
#define CQTDBR 0x28
/* task completion notification */
#define CQTCN 0x2C
/* device queue status */
#define CQDQS 0x30
/* device pending tasks */
#define CQDPT 0x34
/* task clear */
#define CQTCLR 0x38
/* send status config 1 */
#define CQSSC1 0x40
/*
* Value n means CQE would send CMD13 during the transfer of data block
* BLOCK_CNT-n
*/
#define SEND_QSR_INTERVAL 0x70001
/* send status config 2 */
#define CQSSC2 0x44
/* response for dcmd */
#define CQCRDCT 0x48
/* response mode error mask */
#define CQRMEM 0x50
#define CQ_EXCEPTION (1 << 6)
/* task error info */
#define CQTERRI 0x54
/* CQTERRI bit fields */
#define CQ_RMECI 0x1F
#define CQ_RMETI (0x1F << 8)
#define CQ_RMEFV (1 << 15)
#define CQ_DTECI (0x3F << 16)
#define CQ_DTETI (0x1F << 24)
#define CQ_DTEFV (1 << 31)
#define GET_CMD_ERR_TAG(__r__) ((__r__ & CQ_RMETI) >> 8)
#define GET_DAT_ERR_TAG(__r__) ((__r__ & CQ_DTETI) >> 24)
/* command response index */
#define CQCRI 0x58
/* command response argument */
#define CQCRA 0x5C
#define CQ_INT_ALL 0xF
#define CQIC_DEFAULT_ICCTH 31
#define CQIC_DEFAULT_ICTOVAL 1
/* attribute fields */
#define VALID(x) ((x & 1) << 0)
#define END(x) ((x & 1) << 1)
#define INT(x) ((x & 1) << 2)
#define ACT(x) ((x & 0x7) << 3)
/* data command task descriptor fields */
#define FORCED_PROG(x) ((x & 1) << 6)
#define CONTEXT(x) ((x & 0xF) << 7)
#define DATA_TAG(x) ((x & 1) << 11)
#define DATA_DIR(x) ((x & 1) << 12)
#define PRIORITY(x) ((x & 1) << 13)
#define QBAR(x) ((x & 1) << 14)
#define REL_WRITE(x) ((x & 1) << 15)
#define BLK_COUNT(x) ((x & 0xFFFF) << 16)
#define BLK_ADDR(x) ((x & 0xFFFFFFFF) << 32)
/* direct command task descriptor fields */
#define CMD_INDEX(x) ((x & 0x3F) << 16)
#define CMD_TIMING(x) ((x & 1) << 22)
#define RESP_TYPE(x) ((x & 0x3) << 23)
/* transfer descriptor fields */
#define DAT_LENGTH(x) ((x & 0xFFFF) << 16)
#define DAT_ADDR_LO(x) ((x & 0xFFFFFFFF) << 32)
#define DAT_ADDR_HI(x) ((x & 0xFFFFFFFF) << 0)
#define CQ_VENDOR_CFG 0x100
#define CMDQ_SEND_STATUS_TRIGGER (1 << 31)
struct task_history {
u64 task;
bool is_dcmd;
};
struct cmdq_host {
const struct cmdq_host_ops *ops;
void __iomem *mmio;
struct mmc_host *mmc;
/* 64 bit DMA */
bool dma64;
int num_slots;
u32 dcmd_slot;
u32 caps;
#define CMDQ_TASK_DESC_SZ_128 0x1
u32 quirks;
#define CMDQ_QUIRK_SHORT_TXFR_DESC_SZ 0x1
#define CMDQ_QUIRK_NO_DCMD 0x2
bool enabled;
bool halted;
bool init_done;
u8 *desc_base;
/* total descriptor size */
u8 slot_sz;
/* 64/128 bit depends on CQCFG */
u8 task_desc_len;
/* 64 bit on 32-bit arch, 128 bit on 64-bit */
u8 link_desc_len;
u8 *trans_desc_base;
/* same length as transfer descriptor */
u8 trans_desc_len;
dma_addr_t desc_dma_base;
dma_addr_t trans_desc_dma_base;
struct task_history *thist;
u8 thist_idx;
struct completion halt_comp;
struct mmc_request **mrq_slot;
void *private;
};
struct cmdq_host_ops {
void (*set_tranfer_params)(struct mmc_host *mmc);
void (*set_data_timeout)(struct mmc_host *mmc, u32 val);
void (*clear_set_irqs)(struct mmc_host *mmc, bool clear);
void (*set_block_size)(struct mmc_host *mmc);
void (*dump_vendor_regs)(struct mmc_host *mmc);
void (*write_l)(struct cmdq_host *host, u32 val, int reg);
u32 (*read_l)(struct cmdq_host *host, int reg);
void (*clear_set_dumpregs)(struct mmc_host *mmc, bool set);
void (*enhanced_strobe_mask)(struct mmc_host *mmc, bool set);
int (*reset)(struct mmc_host *mmc);
int (*crypto_cfg)(struct mmc_host *mmc, struct mmc_request *mrq,
u32 slot);
void (*crypto_cfg_reset)(struct mmc_host *mmc, unsigned int slot);
void (*post_cqe_halt)(struct mmc_host *mmc);
};
static inline void cmdq_writel(struct cmdq_host *host, u32 val, int reg)
{
if (unlikely(host->ops && host->ops->write_l))
host->ops->write_l(host, val, reg);
else
writel_relaxed(val, host->mmio + reg);
}
static inline u32 cmdq_readl(struct cmdq_host *host, int reg)
{
if (unlikely(host->ops && host->ops->read_l))
return host->ops->read_l(host, reg);
else
return readl_relaxed(host->mmio + reg);
}
extern irqreturn_t cmdq_irq(struct mmc_host *mmc, int err);
extern int cmdq_init(struct cmdq_host *cq_host, struct mmc_host *mmc,
bool dma64);
extern struct cmdq_host *cmdq_pltfm_init(struct platform_device *pdev);
#endif
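The descriptor field macros in this header shift each attribute into its slot of a 64-bit task descriptor. A standalone sketch of assembling a direct-command (DCMD) descriptor with a few of those macros, with explicit 64-bit casts added so the shifts are well defined on 32-bit hosts, and with the ACT code for a DCMD assumed:

```c
#include <assert.h>
#include <stdint.h>

/* Same layout as the header's attribute/DCMD macros, renamed with an
 * F_ prefix here and given explicit 64-bit casts. */
#define F_VALID(x)	(((uint64_t)(x) & 1) << 0)
#define F_END(x)	(((uint64_t)(x) & 1) << 1)
#define F_INT(x)	(((uint64_t)(x) & 1) << 2)
#define F_ACT(x)	(((uint64_t)(x) & 0x7) << 3)
#define F_CMD_INDEX(x)	(((uint64_t)(x) & 0x3F) << 16)
#define F_RESP_TYPE(x)	(((uint64_t)(x) & 0x3) << 23)

/* Assemble a DCMD descriptor; the ACT value 0x5 for direct commands is
 * an assumption of this sketch, not taken from the header. */
static uint64_t build_dcmd_desc(uint32_t cmd, uint32_t resp_type)
{
	return F_VALID(1) | F_END(1) | F_INT(1) | F_ACT(0x5) |
	       F_CMD_INDEX(cmd) | F_RESP_TYPE(resp_type);
}
```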


@@ -0,0 +1,320 @@
/*
* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "sdhci-msm-ice.h"
static void sdhci_msm_ice_error_cb(void *host_ctrl, u32 error)
{
struct sdhci_msm_host *msm_host = (struct sdhci_msm_host *)host_ctrl;
dev_err(&msm_host->pdev->dev, "%s: Error in ice operation 0x%x",
__func__, error);
if (msm_host->ice.state == SDHCI_MSM_ICE_STATE_ACTIVE)
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
}
static struct platform_device *sdhci_msm_ice_get_pdevice(struct device *dev)
{
struct device_node *node;
struct platform_device *ice_pdev = NULL;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_pdev = qcom_ice_get_pdevice(node);
out:
return ice_pdev;
}
static
struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
{
struct qcom_ice_variant_ops *ice_vops = NULL;
struct device_node *node;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_vops = qcom_ice_get_variant_ops(node);
of_node_put(node);
out:
return ice_vops;
}
int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct device *sdhc_dev;
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (!msm_host || !msm_host->pdev) {
pr_err("%s: invalid msm_host %p or msm_host->pdev\n",
__func__, msm_host);
return -EINVAL;
}
sdhc_dev = &msm_host->pdev->dev;
msm_host->ice.vops = sdhci_msm_ice_get_vops(sdhc_dev);
msm_host->ice.pdev = sdhci_msm_ice_get_pdevice(sdhc_dev);
if (msm_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
dev_err(sdhc_dev, "%s: ICE device not probed yet\n",
__func__);
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
return -EPROBE_DEFER;
}
if (!msm_host->ice.pdev) {
dev_dbg(sdhc_dev, "%s: invalid platform device\n", __func__);
msm_host->ice.vops = NULL;
return -ENODEV;
}
if (!msm_host->ice.vops) {
dev_dbg(sdhc_dev, "%s: invalid ice vops\n", __func__);
msm_host->ice.pdev = NULL;
return -ENODEV;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
return 0;
}
int sdhci_msm_ice_init(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.vops->config) {
err = msm_host->ice.vops->init(msm_host->ice.pdev,
msm_host,
sdhci_msm_ice_error_cb);
if (err) {
pr_err("%s: ice init err %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
goto out;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
}
out:
return err;
}
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
writel_relaxed(SDHCI_MSM_ICE_ENABLE_BYPASS,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
}
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
struct ice_data_setting ice_set;
sector_t lba = 0;
unsigned int ctrl_info_val = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
struct request *req;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
BUG_ON(!mrq);
memset(&ice_set, 0, sizeof(struct ice_data_setting));
req = mrq->req;
if (req) {
lba = req->__sector;
if (msm_host->ice.vops->config) {
err = msm_host->ice.vops->config(msm_host->ice.pdev,
req, &ice_set);
if (err) {
pr_err("%s: ice config failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
/* if writing data command */
if (rq_data_dir(req) == WRITE)
bypass = ice_set.encr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
/* if reading data command */
else if (rq_data_dir(req) == READ)
bypass = ice_set.decr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
pr_debug("%s: %s: slot %d encr_bypass %d bypass %d decr_bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, ice_set.encr_bypass, bypass,
ice_set.decr_bypass,
ice_set.crypto_data.key_index);
}
/* Configure ICE index */
ctrl_info_val =
(ice_set.crypto_data.key_index &
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
/* Configure data unit size of transfer request */
ctrl_info_val |=
(SDHCI_MSM_ICE_TR_DATA_UNIT_512_B &
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
/* Configure ICE bypass mode */
ctrl_info_val |=
(bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
writel_relaxed((lba & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n + 16 * slot);
writel_relaxed(((lba >> 32) & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n + 16 * slot);
writel_relaxed(ctrl_info_val,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
return 0;
}
int sdhci_msm_ice_reset(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->reset) {
err = msm_host->ice.vops->reset(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice reset failed %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
return err;
}
}
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state after reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
return 0;
}
int sdhci_msm_ice_resume(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_SUSPENDED) {
pr_err("%s: ice is in invalid state before resume %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->resume) {
err = msm_host->ice.vops->resume(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice resume failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
return 0;
}
int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_ACTIVE) {
		pr_err("%s: ice is in invalid state before suspend %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->suspend) {
err = msm_host->ice.vops->suspend(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice suspend failed %d\n",
mmc_hostname(host->mmc), err);
			return err;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_SUSPENDED;
return 0;
}
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int stat = -EINVAL;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->status) {
*ice_status = 0;
stat = msm_host->ice.vops->status(msm_host->ice.pdev);
if (stat < 0) {
pr_err("%s: ice get sts failed %d\n",
mmc_hostname(host->mmc), stat);
return -EINVAL;
}
*ice_status = stat;
}
return 0;
}
void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host->ice.vops->debug)
msm_host->ice.vops->debug(msm_host->ice.pdev);
}


@ -0,0 +1,138 @@
/*
* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __SDHCI_MSM_ICE_H__
#define __SDHCI_MSM_ICE_H__
#include <linux/io.h>
#include <linux/of.h>
#include <linux/blkdev.h>
#include <crypto/ice.h>
#include "sdhci-msm.h"
#define SDHC_MSM_CRYPTO_LABEL "sdhc-msm-crypto"
/* Timeout waiting for ICE initialization, which requires TZ access */
#define SDHCI_MSM_ICE_COMPLETION_TIMEOUT_MS 500
/*
* SDHCI host controller ICE registers. There are n [0..31]
* of each of these registers
*/
#define NUM_SDHCI_MSM_ICE_CTRL_INFO_n_REGS 32
#define CORE_VENDOR_SPEC_ICE_CTRL 0x300
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n 0x304
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n 0x308
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n 0x30C
/* SDHCI MSM ICE CTRL Info register offset */
enum {
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 0x1,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU = 0x6,
};
/* SDHCI MSM ICE CTRL Info register masks */
enum {
MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0x1,
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU = 0x7,
};
/* SDHCI MSM ICE encryption/decryption bypass state */
enum {
SDHCI_MSM_ICE_DISABLE_BYPASS = 0,
SDHCI_MSM_ICE_ENABLE_BYPASS = 1,
};
/* SDHCI MSM ICE Crypto Data Unit of target DUN of Transfer Request */
enum {
SDHCI_MSM_ICE_TR_DATA_UNIT_512_B = 0,
SDHCI_MSM_ICE_TR_DATA_UNIT_1_KB = 1,
SDHCI_MSM_ICE_TR_DATA_UNIT_2_KB = 2,
SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB = 3,
SDHCI_MSM_ICE_TR_DATA_UNIT_8_KB = 4,
SDHCI_MSM_ICE_TR_DATA_UNIT_16_KB = 5,
SDHCI_MSM_ICE_TR_DATA_UNIT_32_KB = 6,
SDHCI_MSM_ICE_TR_DATA_UNIT_64_KB = 7,
};
/* SDHCI MSM ICE internal state */
enum {
SDHCI_MSM_ICE_STATE_DISABLED = 0,
SDHCI_MSM_ICE_STATE_ACTIVE = 1,
SDHCI_MSM_ICE_STATE_SUSPENDED = 2,
};
#ifdef CONFIG_MMC_SDHCI_MSM_ICE
int sdhci_msm_ice_get_dev(struct sdhci_host *host);
int sdhci_msm_ice_init(struct sdhci_host *host);
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot);
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot);
int sdhci_msm_ice_reset(struct sdhci_host *host);
int sdhci_msm_ice_resume(struct sdhci_host *host);
int sdhci_msm_ice_suspend(struct sdhci_host *host);
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status);
void sdhci_msm_ice_print_regs(struct sdhci_host *host);
#else
static inline int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host) {
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
}
return -ENODEV;
}
static inline int sdhci_msm_ice_init(struct sdhci_host *host)
{
return 0;
}
static inline void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
}
static inline int sdhci_msm_ice_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot)
{
return 0;
}
static inline int sdhci_msm_ice_reset(struct sdhci_host *host)
{
return 0;
}
static inline int sdhci_msm_ice_resume(struct sdhci_host *host)
{
return 0;
}
static inline int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
return 0;
}
static inline int sdhci_msm_ice_get_status(struct sdhci_host *host,
int *ice_status)
{
return 0;
}
static inline void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
}
#endif /* CONFIG_MMC_SDHCI_MSM_ICE */
#endif /* __SDHCI_MSM_ICE_H__ */

File diff suppressed because it is too large

@ -0,0 +1,232 @@
/*
* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __SDHCI_MSM_H__
#define __SDHCI_MSM_H__
#include <linux/mmc/mmc.h>
#include <linux/pm_qos.h>
#include "sdhci-pltfm.h"
/* This structure keeps information per regulator */
struct sdhci_msm_reg_data {
/* voltage regulator handle */
struct regulator *reg;
/* regulator name */
const char *name;
/* voltage level to be set */
u32 low_vol_level;
u32 high_vol_level;
/* Load values for low power and high power mode */
u32 lpm_uA;
u32 hpm_uA;
/* is this regulator enabled? */
bool is_enabled;
/* does this regulator need to be always on? */
bool is_always_on;
/* is low power mode setting required for this regulator? */
bool lpm_sup;
bool set_voltage_sup;
};
/*
* This structure keeps information for all the
* regulators required for an SDCC slot.
*/
struct sdhci_msm_slot_reg_data {
/* keeps VDD/VCC regulator info */
struct sdhci_msm_reg_data *vdd_data;
/* keeps VDD IO regulator info */
struct sdhci_msm_reg_data *vdd_io_data;
};
struct sdhci_msm_gpio {
u32 no;
const char *name;
bool is_enabled;
};
struct sdhci_msm_gpio_data {
struct sdhci_msm_gpio *gpio;
u8 size;
};
struct sdhci_msm_pin_data {
/*
* = 1 if controller pins are using gpios
* = 0 if controller has dedicated MSM pads
*/
u8 is_gpio;
struct sdhci_msm_gpio_data *gpio_data;
};
struct sdhci_pinctrl_data {
struct pinctrl *pctrl;
struct pinctrl_state *pins_active;
struct pinctrl_state *pins_sleep;
};
struct sdhci_msm_bus_voting_data {
struct msm_bus_scale_pdata *bus_pdata;
unsigned int *bw_vecs;
unsigned int bw_vecs_size;
};
struct sdhci_msm_cpu_group_map {
int nr_groups;
cpumask_t *mask;
};
struct sdhci_msm_pm_qos_latency {
s32 latency[SDHCI_POWER_POLICY_NUM];
};
struct sdhci_msm_pm_qos_data {
struct sdhci_msm_cpu_group_map cpu_group_map;
enum pm_qos_req_type irq_req_type;
int irq_cpu;
struct sdhci_msm_pm_qos_latency irq_latency;
struct sdhci_msm_pm_qos_latency *cmdq_latency;
struct sdhci_msm_pm_qos_latency *latency;
bool irq_valid;
bool cmdq_valid;
bool legacy_valid;
};
/*
* PM QoS for group voting management - each cpu group defined is associated
* with 1 instance of this structure.
*/
struct sdhci_msm_pm_qos_group {
struct pm_qos_request req;
struct delayed_work unvote_work;
atomic_t counter;
s32 latency;
};
/* PM QoS HW IRQ voting */
struct sdhci_msm_pm_qos_irq {
struct pm_qos_request req;
struct delayed_work unvote_work;
struct device_attribute enable_attr;
struct device_attribute status_attr;
atomic_t counter;
s32 latency;
bool enabled;
};
struct sdhci_msm_pltfm_data {
/* Supported UHS-I Modes */
u32 caps;
/* More capabilities */
u32 caps2;
unsigned long mmc_bus_width;
struct sdhci_msm_slot_reg_data *vreg_data;
bool nonremovable;
bool nonhotplug;
bool largeaddressbus;
bool pin_cfg_sts;
struct sdhci_msm_pin_data *pin_data;
struct sdhci_pinctrl_data *pctrl_data;
int status_gpio; /* card detection GPIO that is configured as IRQ */
struct sdhci_msm_bus_voting_data *voting_data;
u32 *sup_clk_table;
unsigned char sup_clk_cnt;
int sdiowakeup_irq;
u32 *sup_ice_clk_table;
unsigned char sup_ice_clk_cnt;
u32 ice_clk_max;
u32 ice_clk_min;
struct sdhci_msm_pm_qos_data pm_qos_data;
bool core_3_0v_support;
};
struct sdhci_msm_bus_vote {
uint32_t client_handle;
uint32_t curr_vote;
int min_bw_vote;
int max_bw_vote;
bool is_max_bw_needed;
struct delayed_work vote_work;
struct device_attribute max_bus_bw;
};
struct sdhci_msm_ice_data {
struct qcom_ice_variant_ops *vops;
struct platform_device *pdev;
int state;
};
struct sdhci_msm_host {
struct platform_device *pdev;
void __iomem *core_mem; /* MSM SDCC mapped address */
int pwr_irq; /* power irq */
struct clk *clk; /* main SD/MMC bus clock */
struct clk *pclk; /* SDHC peripheral bus clock */
struct clk *bus_clk; /* SDHC bus voter clock */
struct clk *ff_clk; /* CDC calibration fixed feedback clock */
struct clk *sleep_clk; /* CDC calibration sleep clock */
struct clk *ice_clk; /* SDHC peripheral ICE clock */
atomic_t clks_on; /* Set if clocks are enabled */
struct sdhci_msm_pltfm_data *pdata;
struct mmc_host *mmc;
struct sdhci_pltfm_data sdhci_msm_pdata;
u32 curr_pwr_state;
u32 curr_io_level;
struct completion pwr_irq_completion;
struct sdhci_msm_bus_vote msm_bus_vote;
struct device_attribute polling;
u32 clk_rate; /* Keeps track of current clock rate that is set */
bool tuning_done;
bool calibration_done;
u8 saved_tuning_phase;
bool en_auto_cmd21;
struct device_attribute auto_cmd21_attr;
bool is_sdiowakeup_enabled;
bool sdio_pending_processing;
atomic_t controller_clock;
bool use_cdclp533;
bool use_updated_dll_reset;
bool use_14lpp_dll;
bool enhanced_strobe;
bool rclk_delay_fix;
u32 caps_0;
struct sdhci_msm_ice_data ice;
u32 ice_clk_rate;
struct sdhci_msm_pm_qos_group *pm_qos;
int pm_qos_prev_cpu;
struct device_attribute pm_qos_group_enable_attr;
struct device_attribute pm_qos_group_status_attr;
bool pm_qos_group_enable;
struct sdhci_msm_pm_qos_irq pm_qos_irq;
bool tuning_in_progress;
};
extern char *saved_command_line;
void sdhci_msm_pm_qos_irq_init(struct sdhci_host *host);
void sdhci_msm_pm_qos_irq_vote(struct sdhci_host *host);
void sdhci_msm_pm_qos_irq_unvote(struct sdhci_host *host, bool async);
void sdhci_msm_pm_qos_cpu_init(struct sdhci_host *host,
struct sdhci_msm_pm_qos_latency *latency);
void sdhci_msm_pm_qos_cpu_vote(struct sdhci_host *host,
struct sdhci_msm_pm_qos_latency *latency, int cpu);
bool sdhci_msm_pm_qos_cpu_unvote(struct sdhci_host *host, int cpu, bool async);
#endif /* __SDHCI_MSM_H__ */

File diff suppressed because it is too large

@ -17,7 +17,7 @@
#include <linux/compiler.h>
#include <linux/types.h>
#include <linux/io.h>
#include <linux/ratelimit.h>
#include <linux/mmc/host.h>
/*
@ -137,22 +137,32 @@
#define SDHCI_INT_DATA_CRC 0x00200000
#define SDHCI_INT_DATA_END_BIT 0x00400000
#define SDHCI_INT_BUS_POWER 0x00800000
#define SDHCI_INT_ACMD12ERR 0x01000000
#define SDHCI_INT_AUTO_CMD_ERR 0x01000000
#define SDHCI_INT_ADMA_ERROR 0x02000000
#define SDHCI_INT_NORMAL_MASK 0x00007FFF
#define SDHCI_INT_ERROR_MASK 0xFFFF8000
#define SDHCI_INT_CMD_MASK (SDHCI_INT_RESPONSE | SDHCI_INT_TIMEOUT | \
SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX)
SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX | \
SDHCI_INT_AUTO_CMD_ERR)
#define SDHCI_INT_DATA_MASK (SDHCI_INT_DATA_END | SDHCI_INT_DMA_END | \
SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL | \
SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_DATA_CRC | \
SDHCI_INT_DATA_END_BIT | SDHCI_INT_ADMA_ERROR | \
SDHCI_INT_BLK_GAP)
#define SDHCI_INT_CMDQ_EN (0x1 << 14)
#define SDHCI_INT_ALL_MASK ((unsigned int)-1)
#define SDHCI_ACMD12_ERR 0x3C
#define SDHCI_AUTO_CMD_ERR 0x3C
#define SDHCI_AUTO_CMD12_NOT_EXEC 0x0001
#define SDHCI_AUTO_CMD_TIMEOUT_ERR 0x0002
#define SDHCI_AUTO_CMD_CRC_ERR 0x0004
#define SDHCI_AUTO_CMD_ENDBIT_ERR 0x0008
#define SDHCI_AUTO_CMD_INDEX_ERR 0x0010
#define SDHCI_AUTO_CMD12_NOT_ISSUED 0x0080
#define SDHCI_HOST_CONTROL2 0x3E
#define SDHCI_CTRL_UHS_MASK 0x0007
@ -170,6 +180,7 @@
#define SDHCI_CTRL_DRV_TYPE_D 0x0030
#define SDHCI_CTRL_EXEC_TUNING 0x0040
#define SDHCI_CTRL_TUNED_CLK 0x0080
#define SDHCI_CTRL_ASYNC_INT_ENABLE 0x4000
#define SDHCI_CTRL_PRESET_VAL_ENABLE 0x8000
#define SDHCI_CAPABILITIES 0x40
@ -190,6 +201,7 @@
#define SDHCI_CAN_VDD_300 0x02000000
#define SDHCI_CAN_VDD_180 0x04000000
#define SDHCI_CAN_64BIT 0x10000000
#define SDHCI_CAN_ASYNC_INT 0x20000000
#define SDHCI_SUPPORT_SDR50 0x00000001
#define SDHCI_SUPPORT_SDR104 0x00000002
@ -315,6 +327,12 @@ enum sdhci_cookie {
COOKIE_GIVEN,
};
enum sdhci_power_policy {
SDHCI_PERFORMANCE_MODE,
SDHCI_POWER_SAVE_MODE,
SDHCI_POWER_POLICY_NUM /* Always keep this one last */
};
struct sdhci_host {
/* Data set by hardware interface driver */
const char *hw_name; /* Hardware bus name */
@ -418,6 +436,72 @@ struct sdhci_host {
*/
#define SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST (1<<16)
/*
* Read Transfer Active / Write Transfer Active may not be
* de-asserted after the end of a transaction. Issue a reset for the DAT line.
*/
#define SDHCI_QUIRK2_RDWR_TX_ACTIVE_EOT (1<<17)
/*
* Slow interrupt clearance at 400KHz may cause the
* host controller driver's interrupt handler to
* be called twice.
*/
#define SDHCI_QUIRK2_SLOW_INT_CLR (1<<18)
/*
* If the base clock is scalable, there should be no further
* clock division, as the input clock itself will be scaled down to
* the required frequency.
*/
#define SDHCI_QUIRK2_ALWAYS_USE_BASE_CLOCK (1<<19)
/*
* Ignore data timeout errors for R1B commands, as there is no
* data associated and the busy timeout value for these commands
* could be larger than the maximum timeout value that the controller
* can handle.
*/
#define SDHCI_QUIRK2_IGNORE_DATATOUT_FOR_R1BCMD (1<<20)
/*
* The preset value registers are not properly initialized by
* some hardware and hence preset value must not be enabled for
* such controllers.
*/
#define SDHCI_QUIRK2_BROKEN_PRESET_VALUE (1<<21)
/*
* Some controllers allow the use of 0xF in the data timeout counter
* register (0x2E), which is actually a reserved value as per the
* specification.
*/
#define SDHCI_QUIRK2_USE_RESERVED_MAX_TIMEOUT (1<<22)
/*
* This is applicable for controllers that advertise a timeout clock
* value in the capabilities register (bits 5-0) of just 50MHz whereas the
* base clock frequency is 200MHz. The controller internally
* multiplies the value in the timeout control register by 4, assuming
* that the driver always uses the fixed timeout clock value from the
* capabilities register to calculate the timeout. But when the driver
* uses SDHCI_QUIRK2_ALWAYS_USE_BASE_CLOCK, the base clock frequency is
* controlled directly by the driver and its rate varies up to a maximum
* of 200MHz. This quirk avoids the controller's multiplication in such
* cases, when the timeout is calculated from the base clock.
*/
#define SDHCI_QUIRK2_DIVIDE_TOUT_BY_4 (1 << 23)
/*
* Some SDHC controllers are unable to handle data-end bit error in
* 1-bit mode of SDIO.
*/
#define SDHCI_QUIRK2_IGN_DATA_END_BIT_ERROR (1<<24)
/* Controller has nonstandard clock management */
#define SDHCI_QUIRK_NONSTANDARD_CLOCK (1<<25)
/* Use reset workaround in case sdhci reset timeouts */
#define SDHCI_QUIRK2_USE_RESET_WORKAROUND (1<<26)
/* Some controllers don't have any LED control */
#define SDHCI_QUIRK2_BROKEN_LED_CONTROL (1<<27)
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
@ -426,6 +510,7 @@ struct sdhci_host {
/* Internal data */
struct mmc_host *mmc; /* MMC structure */
u64 dma_mask; /* custom DMA mask */
u64 coherent_dma_mask;
#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)
struct led_classdev led; /* LED control */
@ -447,6 +532,7 @@ struct sdhci_host {
#define SDHCI_SDR104_NEEDS_TUNING (1<<10) /* SDR104/HS200 needs tuning */
#define SDHCI_USE_64_BIT_DMA (1<<12) /* Use 64-bit DMA */
#define SDHCI_HS400_TUNING (1<<13) /* Tuning for HS400 */
#define SDHCI_HOST_IRQ_STATUS (1<<14) /* host->irq status */
unsigned int version; /* SDHCI spec. version */
@ -510,6 +596,22 @@ struct sdhci_host {
unsigned int tuning_count; /* Timer count for re-tuning */
unsigned int tuning_mode; /* Re-tuning mode supported by host */
#define SDHCI_TUNING_MODE_1 0
ktime_t data_start_time;
enum sdhci_power_policy power_policy;
bool is_crypto_en;
bool crypto_reset_reqd;
bool sdio_irq_async_status;
u32 auto_cmd_err_sts;
struct ratelimit_state dbg_dump_rs;
struct cmdq_host *cq_host;
int reset_wa_applied; /* reset workaround status */
ktime_t reset_wa_t; /* time when the reset workaround is applied */
int reset_wa_cnt; /* total number of times workaround is used */
int slot_no;
unsigned long private[0] ____cacheline_aligned;
};
@ -539,16 +641,41 @@ struct sdhci_ops {
unsigned int (*get_ro)(struct sdhci_host *host);
void (*reset)(struct sdhci_host *host, u8 mask);
int (*platform_execute_tuning)(struct sdhci_host *host, u32 opcode);
int (*crypto_engine_cfg)(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot);
int (*crypto_engine_reset)(struct sdhci_host *host);
void (*crypto_cfg_reset)(struct sdhci_host *host, unsigned int slot);
void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
void (*hw_reset)(struct sdhci_host *host);
void (*adma_workaround)(struct sdhci_host *host, u32 intmask);
unsigned int (*get_max_segments)(void);
void (*platform_init)(struct sdhci_host *host);
#define REQ_BUS_OFF (1 << 0)
#define REQ_BUS_ON (1 << 1)
#define REQ_IO_LOW (1 << 2)
#define REQ_IO_HIGH (1 << 3)
void (*card_event)(struct sdhci_host *host);
int (*enhanced_strobe)(struct sdhci_host *host);
void (*platform_bus_voting)(struct sdhci_host *host, u32 enable);
void (*check_power_status)(struct sdhci_host *host, u32 req_type);
int (*config_auto_tuning_cmd)(struct sdhci_host *host,
bool enable,
u32 type);
int (*enable_controller_clock)(struct sdhci_host *host);
void (*clear_set_dumpregs)(struct sdhci_host *host, bool set);
void (*enhanced_strobe_mask)(struct sdhci_host *host, bool set);
void (*dump_vendor_regs)(struct sdhci_host *host);
void (*toggle_cdr)(struct sdhci_host *host, bool enable);
void (*voltage_switch)(struct sdhci_host *host);
int (*select_drive_strength)(struct sdhci_host *host,
struct mmc_card *card,
unsigned int max_dtr, int host_drv,
int card_drv, int *drv_type);
int (*notify_load)(struct sdhci_host *host, enum mmc_load state);
void (*reset_workaround)(struct sdhci_host *host, u32 enable);
void (*init)(struct sdhci_host *host);
void (*pre_req)(struct sdhci_host *host, struct mmc_request *req);
void (*post_req)(struct sdhci_host *host, struct mmc_request *req);
};
#ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
@ -668,4 +795,5 @@ extern int sdhci_runtime_suspend_host(struct sdhci_host *host);
extern int sdhci_runtime_resume_host(struct sdhci_host *host);
#endif
void sdhci_cfg_irq(struct sdhci_host *host, bool enable, bool sync);
#endif /* __SDHCI_HW_H */


@ -12,7 +12,11 @@
#include <linux/device.h>
#include <linux/mmc/core.h>
#include <linux/mmc/mmc.h>
#include <linux/mod_devicetable.h>
#include <linux/notifier.h>
#define MMC_CARD_CMDQ_BLK_SIZE 512
struct mmc_cid {
unsigned int manfid;
@ -52,6 +56,7 @@ struct mmc_ext_csd {
u8 sec_feature_support;
u8 rel_sectors;
u8 rel_param;
bool enhanced_rpmb_supported;
u8 part_config;
u8 cache_ctrl;
u8 rst_n_function;
@ -83,11 +88,13 @@ struct mmc_ext_csd {
bool hpi; /* HPI support bit */
unsigned int hpi_cmd; /* cmd used as HPI */
bool bkops; /* background support bit */
bool man_bkops_en; /* manual bkops enable bit */
u8 bkops_en; /* bkops enable */
unsigned int data_sector_size; /* 512 bytes or 4KB */
unsigned int data_tag_unit_size; /* DATA TAG UNIT size */
unsigned int boot_ro_lock; /* ro lock support */
bool boot_ro_lockable;
u8 raw_ext_csd_cmdq; /* 15 */
u8 raw_ext_csd_cache_ctrl; /* 33 */
bool ffu_capable; /* Firmware upgrade support */
#define MMC_FIRMWARE_LEN 8
u8 fwrev[MMC_FIRMWARE_LEN]; /* FW version */
@ -95,6 +102,10 @@ struct mmc_ext_csd {
u8 raw_partition_support; /* 160 */
u8 raw_rpmb_size_mult; /* 168 */
u8 raw_erased_mem_count; /* 181 */
u8 raw_ext_csd_bus_width; /* 183 */
u8 strobe_support; /* 184 */
#define MMC_STROBE_SUPPORT (1 << 0)
u8 raw_ext_csd_hs_timing; /* 185 */
u8 raw_ext_csd_structure; /* 194 */
u8 raw_card_type; /* 196 */
u8 raw_driver_strength; /* 197 */
@ -116,9 +127,15 @@ struct mmc_ext_csd {
u8 raw_pwr_cl_ddr_52_195; /* 238 */
u8 raw_pwr_cl_ddr_52_360; /* 239 */
u8 raw_pwr_cl_ddr_200_360; /* 253 */
u8 cache_flush_policy; /* 240 */
u8 raw_bkops_status; /* 246 */
u8 raw_sectors[4]; /* 212 - 4 bytes */
u8 cmdq_depth; /* 307 */
u8 cmdq_support; /* 308 */
u8 barrier_support; /* 486 */
u8 barrier_en;
u8 fw_version; /* 254 */
unsigned int feature_support;
#define MMC_DISCARD_FEATURE BIT(0) /* CMD38 feature */
};
@ -189,7 +206,8 @@ struct sdio_cccr {
wide_bus:1,
high_power:1,
high_speed:1,
disable_cd:1;
disable_cd:1,
async_intr_sup:1;
};
struct sdio_cis {
@ -218,6 +236,28 @@ enum mmc_blk_status {
MMC_BLK_NEW_REQUEST,
};
enum mmc_packed_stop_reasons {
EXCEEDS_SEGMENTS = 0,
EXCEEDS_SECTORS,
WRONG_DATA_DIR,
FLUSH_OR_DISCARD,
EMPTY_QUEUE,
REL_WRITE,
THRESHOLD,
LARGE_SEC_ALIGN,
RANDOM,
FUA,
MAX_REASONS,
};
struct mmc_wr_pack_stats {
u32 *packing_events;
u32 pack_stop_reason[MAX_REASONS];
spinlock_t lock;
bool enabled;
bool print_in_read;
};
/* The number of MMC physical partitions. These consist of:
* boot partitions (2), general purpose partitions (4) and
* RPMB partition (1) in MMC v4.4.
@ -242,6 +282,62 @@ struct mmc_part {
#define MMC_BLK_DATA_AREA_RPMB (1<<3)
};
enum {
MMC_BKOPS_NO_OP,
MMC_BKOPS_NOT_CRITICAL,
MMC_BKOPS_PERF_IMPACT,
MMC_BKOPS_CRITICAL,
MMC_BKOPS_NUM_SEVERITY_LEVELS,
};
/**
* struct mmc_bkops_stats - BKOPS statistics
* @lock: spinlock used for synchronizing the debugfs and the runtime accesses
* to this structure. No need to call with spin_lock_irq api
* @manual_start: number of times START_BKOPS was sent to the device
* @hpi: number of times HPI was sent to the device
* @auto_start: number of times AUTO_EN was set to 1
* @auto_stop: number of times AUTO_EN was set to 0
* @level: number of times the device reported the need for each level of
* bkops handling
* @enabled: control over whether statistics should be gathered
*
* This structure is used to collect statistics regarding the bkops
* configuration and use-patterns. It is collected during runtime and can be
* shown to the user via a debugfs entry.
*/
struct mmc_bkops_stats {
spinlock_t lock;
unsigned int manual_start;
unsigned int hpi;
unsigned int auto_start;
unsigned int auto_stop;
unsigned int level[MMC_BKOPS_NUM_SEVERITY_LEVELS];
bool enabled;
};
/**
* struct mmc_bkops_info - BKOPS data
* @stats: statistic information regarding bkops
* @needs_check: indication whether we need to check with the device
* (via CMD8) whether it requires BKOPS handling
* @needs_bkops: indication whether START_BKOPS has to be sent
* to the device
*/
struct mmc_bkops_info {
struct mmc_bkops_stats stats;
bool needs_check;
bool needs_bkops;
u32 retry_counter;
};
enum mmc_pon_type {
MMC_LONG_PON = 1,
MMC_SHRT_PON,
};
#define MMC_QUIRK_CMDQ_DELAY_BEFORE_DCMD 6 /* microseconds */
/*
* MMC device
*/
@ -249,6 +345,10 @@ struct mmc_card {
struct mmc_host *host; /* the host this device belongs to */
struct device dev; /* the device */
u32 ocr; /* the current OCR setting */
unsigned long clk_scaling_lowest; /* lowest scalable
* frequency */
unsigned long clk_scaling_highest; /* highest scalable
* frequency */
unsigned int rca; /* relative card address of device */
unsigned int type; /* card type */
#define MMC_TYPE_MMC 0 /* MMC card */
@ -261,14 +361,17 @@ struct mmc_card {
#define MMC_STATE_BLOCKADDR (1<<2) /* card uses block-addressing */
#define MMC_CARD_SDXC (1<<3) /* card is SDXC */
#define MMC_CARD_REMOVED (1<<4) /* card has been removed */
#define MMC_STATE_DOING_BKOPS (1<<5) /* card is doing BKOPS */
#define MMC_STATE_DOING_BKOPS (1<<5) /* card is doing manual BKOPS */
#define MMC_STATE_SUSPENDED (1<<6) /* card is suspended */
#define MMC_STATE_CMDQ (1<<12) /* card is in cmd queue mode */
#define MMC_STATE_AUTO_BKOPS (1<<13) /* card is doing auto BKOPS */
unsigned int quirks; /* card quirks */
#define MMC_QUIRK_LENIENT_FN0 (1<<0) /* allow SDIO FN0 writes outside of the VS CCCR range */
#define MMC_QUIRK_BLKSZ_FOR_BYTE_MODE (1<<1) /* use func->cur_blksize */
/* for byte mode */
#define MMC_QUIRK_NONSTD_SDIO (1<<2) /* non-standard SDIO card attached */
/* (missing CIA registers) */
#define MMC_QUIRK_BROKEN_CLK_GATING (1<<3) /* clock gating the sdio bus will make card fail */
#define MMC_QUIRK_NONSTD_FUNC_IF (1<<4) /* SDIO card has nonstd function interfaces */
#define MMC_QUIRK_DISABLE_CD (1<<5) /* disconnect CD/DAT[3] resistor */
#define MMC_QUIRK_INAND_CMD38 (1<<6) /* iNAND devices have broken CMD38 */
@ -279,8 +382,18 @@ struct mmc_card {
#define MMC_QUIRK_SEC_ERASE_TRIM_BROKEN (1<<10) /* Skip secure for erase/trim */
#define MMC_QUIRK_BROKEN_IRQ_POLLING (1<<11) /* Polling SDIO_CCCR_INTx could create a fake interrupt */
#define MMC_QUIRK_TRIM_BROKEN (1<<12) /* Skip trim */
#define MMC_QUIRK_INAND_DATA_TIMEOUT (1<<13) /* For incorrect data timeout */
#define MMC_QUIRK_BROKEN_HPI (1 << 14) /* For devices which gets */
/* broken due to HPI feature */
#define MMC_QUIRK_CACHE_DISABLE (1 << 15) /* prevent cache enable */
#define MMC_QUIRK_QCA6574_SETTINGS (1 << 16) /* QCA6574 card settings*/
#define MMC_QUIRK_QCA9377_SETTINGS (1 << 17) /* QCA9377 card settings*/
/* Make sure CMDQ is empty before queuing DCMD */
#define MMC_QUIRK_CMDQ_EMPTY_BEFORE_DCMD (1 << 18)
unsigned int erase_size; /* erase size in sectors */
unsigned int erase_shift; /* if erase unit is power 2 */
unsigned int pref_erase; /* in sectors */
@ -313,6 +426,14 @@ struct mmc_card {
struct dentry *debugfs_root;
struct mmc_part part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
unsigned int nr_parts;
unsigned int part_curr;
struct mmc_wr_pack_stats wr_pack_stats; /* packed commands stats*/
struct notifier_block reboot_notify;
enum mmc_pon_type pon_type;
u8 *cached_ext_csd;
bool cmdq_init;
struct mmc_bkops_info bkops;
};
/*
@ -353,19 +474,43 @@ struct mmc_fixup {
/* SDIO-specfic fields. You can use SDIO_ANY_ID here of course */
u16 cis_vendor, cis_device;
/* MMC-specific field, You can use EXT_CSD_REV_ANY here of course */
unsigned int ext_csd_rev;
void (*vendor_fixup)(struct mmc_card *card, int data);
int data;
};
#define CID_MANFID_SANDISK 0x2
#define CID_MANFID_TOSHIBA 0x11
#define CID_MANFID_MICRON 0x13
#define CID_MANFID_SAMSUNG 0x15
#define CID_MANFID_KINGSTON 0x70
#define CID_MANFID_HYNIX 0x90
#define CID_MANFID_ANY (-1u)
#define CID_OEMID_ANY ((unsigned short) -1)
#define CID_NAME_ANY (NULL)
#define EXT_CSD_REV_ANY (-1u)
#define END_FIXUP { NULL }
/* extended CSD mapping to mmc version */
enum mmc_version_ext_csd_rev {
MMC_V4_0,
MMC_V4_1,
MMC_V4_2,
MMC_V4_41 = 5,
MMC_V4_5,
MMC_V4_51 = MMC_V4_5,
MMC_V5_0,
MMC_V5_01 = MMC_V5_0,
MMC_V5_1
};
#define _FIXUP_EXT(_name, _manfid, _oemid, _rev_start, _rev_end, \
_cis_vendor, _cis_device, \
_fixup, _data) \
_fixup, _data, _ext_csd_rev) \
{ \
.name = (_name), \
.manfid = (_manfid), \
@ -376,23 +521,30 @@ struct mmc_fixup {
.cis_device = (_cis_device), \
.vendor_fixup = (_fixup), \
.data = (_data), \
.ext_csd_rev = (_ext_csd_rev), \
}
#define MMC_FIXUP_REV(_name, _manfid, _oemid, _rev_start, _rev_end, \
_fixup, _data) \
_fixup, _data, _ext_csd_rev) \
_FIXUP_EXT(_name, _manfid, \
_oemid, _rev_start, _rev_end, \
SDIO_ANY_ID, SDIO_ANY_ID, \
_fixup, _data) \
_fixup, _data, _ext_csd_rev) \
#define MMC_FIXUP(_name, _manfid, _oemid, _fixup, _data) \
MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data)
MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data, \
EXT_CSD_REV_ANY)
#define MMC_FIXUP_EXT_CSD_REV(_name, _manfid, _oemid, _fixup, _data, \
_ext_csd_rev) \
MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data, \
_ext_csd_rev)
#define SDIO_FIXUP(_vendor, _device, _fixup, _data) \
_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY, \
CID_OEMID_ANY, 0, -1ull, \
_vendor, _device, \
_fixup, _data) \
_fixup, _data, EXT_CSD_REV_ANY) \
#define cid_rev(hwrev, fwrev, year, month) \
(((u64) hwrev) << 40 | \
@ -431,6 +583,8 @@ static inline void __maybe_unused remove_quirk(struct mmc_card *card, int data)
#define mmc_card_removed(c) ((c) && ((c)->state & MMC_CARD_REMOVED))
#define mmc_card_doing_bkops(c) ((c)->state & MMC_STATE_DOING_BKOPS)
#define mmc_card_suspended(c) ((c)->state & MMC_STATE_SUSPENDED)
#define mmc_card_cmdq(c) ((c)->state & MMC_STATE_CMDQ)
#define mmc_card_doing_auto_bkops(c) ((c)->state & MMC_STATE_AUTO_BKOPS)
#define mmc_card_set_present(c) ((c)->state |= MMC_STATE_PRESENT)
#define mmc_card_set_readonly(c) ((c)->state |= MMC_STATE_READONLY)
#define mmc_card_clr_doing_bkops(c) ((c)->state &= ~MMC_STATE_DOING_BKOPS)
#define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED)
#define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED)
#define mmc_card_set_cmdq(c) ((c)->state |= MMC_STATE_CMDQ)
#define mmc_card_clr_cmdq(c) ((c)->state &= ~MMC_STATE_CMDQ)
#define mmc_card_set_auto_bkops(c) ((c)->state |= MMC_STATE_AUTO_BKOPS)
#define mmc_card_clr_auto_bkops(c) ((c)->state &= ~MMC_STATE_AUTO_BKOPS)
#define mmc_card_strobe(c) (((c)->ext_csd).strobe_support & MMC_STROBE_SUPPORT)
/*
* Quirk add/remove for MMC products.
return c->quirks & MMC_QUIRK_BROKEN_IRQ_POLLING;
}
static inline bool mmc_card_support_auto_bkops(const struct mmc_card *c)
{
return c->ext_csd.rev >= MMC_V5_1;
}
static inline bool mmc_card_configured_manual_bkops(const struct mmc_card *c)
{
return c->ext_csd.bkops_en & EXT_CSD_BKOPS_MANUAL_EN;
}
static inline bool mmc_card_configured_auto_bkops(const struct mmc_card *c)
{
return c->ext_csd.bkops_en & EXT_CSD_BKOPS_AUTO_EN;
}
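The helpers above only test bits of EXT_CSD state: BKOPS_EN (byte 163) carries MANUAL_EN in bit 0 and AUTO_EN in bit 1, and auto BKOPS support is keyed off the device reporting eMMC 5.1. A standalone sketch of the same checks (the struct and function names are illustrative stand-ins for struct mmc_card and its helpers):

```c
#include <assert.h>
#include <stdbool.h>

#define BIT(n) (1u << (n))
#define EXT_CSD_BKOPS_MANUAL_EN BIT(0)	/* value matches the header */
#define EXT_CSD_BKOPS_AUTO_EN   BIT(1)	/* value matches the header */

struct sketch_card {
	unsigned char bkops_en;		/* EXT_CSD byte 163 */
	unsigned char ext_csd_rev;	/* EXT_CSD byte 192 */
};

bool manual_bkops_configured(const struct sketch_card *c)
{
	return c->bkops_en & EXT_CSD_BKOPS_MANUAL_EN;
}

bool auto_bkops_configured(const struct sketch_card *c)
{
	return c->bkops_en & EXT_CSD_BKOPS_AUTO_EN;
}

bool auto_bkops_supported(const struct sketch_card *c)
{
	return c->ext_csd_rev >= 8;	/* 8 == MMC_V5_1 in the enum above */
}
```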
static inline bool mmc_enable_qca6574_settings(const struct mmc_card *c)
{
return c->quirks & MMC_QUIRK_QCA6574_SETTINGS;
}
static inline bool mmc_enable_qca9377_settings(const struct mmc_card *c)
{
return c->quirks & MMC_QUIRK_QCA9377_SETTINGS;
}
#define mmc_card_name(c) ((c)->cid.prod_name)
#define mmc_card_id(c) (dev_name(&(c)->dev))
#define mmc_dev_to_card(d) container_of(d, struct mmc_card, dev)
#define mmc_get_drvdata(c) dev_get_drvdata(&(c)->dev)
#define mmc_set_drvdata(c,d) dev_set_drvdata(&(c)->dev, d)
/*
* MMC device driver (e.g., Flash card, I/O card...)
extern void mmc_fixup_device(struct mmc_card *card,
const struct mmc_fixup *table);
extern struct mmc_wr_pack_stats *mmc_blk_get_packed_statistics(
struct mmc_card *card);
extern void mmc_blk_init_packed_statistics(struct mmc_card *card);
extern int mmc_send_pon(struct mmc_card *card);
extern void mmc_blk_cmdq_req_done(struct mmc_request *mrq);
#endif /* LINUX_MMC_CARD_H */


unsigned int busy_timeout; /* busy detect timeout in ms */
/* Set this flag only for blocking sanitize request */
bool sanitize_busy;
/* Set this flag only for blocking bkops request */
bool bkops_busy;
struct mmc_data *data; /* data segment associated with cmd */
struct mmc_request *mrq; /* associated request */
int sg_count; /* mapped sg entries */
struct scatterlist *sg; /* I/O scatter list */
s32 host_cookie; /* host private data */
bool fault_injected; /* fault injected */
};
struct mmc_host;
struct completion completion;
void (*done)(struct mmc_request *);/* completion function */
struct mmc_host *host;
struct mmc_cmdq_req *cmdq_req;
struct request *req;
};
struct mmc_bus_ops {
void (*remove)(struct mmc_host *);
void (*detect)(struct mmc_host *);
int (*pre_suspend)(struct mmc_host *);
int (*suspend)(struct mmc_host *);
int (*resume)(struct mmc_host *);
int (*runtime_suspend)(struct mmc_host *);
int (*runtime_resume)(struct mmc_host *);
int (*runtime_idle)(struct mmc_host *);
int (*power_save)(struct mmc_host *);
int (*power_restore)(struct mmc_host *);
int (*alive)(struct mmc_host *);
int (*shutdown)(struct mmc_host *);
int (*reset)(struct mmc_host *);
int (*change_bus_speed)(struct mmc_host *, unsigned long *);
};
struct mmc_card;
struct mmc_async_req;
struct mmc_cmdq_req;
extern int mmc_cmdq_discard_queue(struct mmc_host *host, u32 tasks);
extern int mmc_cmdq_halt(struct mmc_host *host, bool enable);
extern int mmc_cmdq_halt_on_empty_queue(struct mmc_host *host);
extern void mmc_cmdq_post_req(struct mmc_host *host, int tag, int err);
extern int mmc_cmdq_start_req(struct mmc_host *host,
struct mmc_cmdq_req *cmdq_req);
extern int mmc_cmdq_prepare_flush(struct mmc_command *cmd);
extern int mmc_cmdq_wait_for_dcmd(struct mmc_host *host,
struct mmc_cmdq_req *cmdq_req);
extern int mmc_cmdq_erase(struct mmc_cmdq_req *cmdq_req,
struct mmc_card *card, unsigned int from, unsigned int nr,
unsigned int arg);
extern int mmc_stop_bkops(struct mmc_card *);
extern int mmc_read_bkops_status(struct mmc_card *);
extern int mmc_app_cmd(struct mmc_host *, struct mmc_card *);
extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
struct mmc_command *, int);
extern void mmc_start_bkops(struct mmc_card *card, bool from_exception);
extern void mmc_check_bkops(struct mmc_card *card);
extern void mmc_start_manual_bkops(struct mmc_card *card);
extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
extern int __mmc_switch_cmdq_mode(struct mmc_command *cmd, u8 set, u8 index,
u8 value, unsigned int timeout_ms,
bool use_busy_signal, bool ignore_timeout);
extern int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
extern int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
extern int mmc_set_auto_bkops(struct mmc_card *card, bool enable);
#define MMC_ERASE_ARG 0x00000000
#define MMC_SECURE_ERASE_ARG 0x80000000
extern int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount,
bool is_rel_write);
extern int mmc_hw_reset(struct mmc_host *host);
extern int mmc_cmdq_hw_reset(struct mmc_host *host);
extern int mmc_can_reset(struct mmc_card *card);
extern void mmc_set_data_timeout(struct mmc_data *, const struct mmc_card *);
extern void mmc_get_card(struct mmc_card *card);
extern void mmc_put_card(struct mmc_card *card);
extern void __mmc_put_card(struct mmc_card *card);
extern void mmc_set_ios(struct mmc_host *host);
extern int mmc_flush_cache(struct mmc_card *);
extern int mmc_cache_barrier(struct mmc_card *);
extern int mmc_detect_card_removed(struct mmc_host *host);
extern void mmc_blk_init_bkops_statistics(struct mmc_card *card);
extern void mmc_deferred_scaling(struct mmc_host *host);
extern void mmc_cmdq_clk_scaling_start_busy(struct mmc_host *host,
bool lock_needed);
extern void mmc_cmdq_clk_scaling_stop_busy(struct mmc_host *host,
bool lock_needed, bool is_cmdq_dcmd);
/**
* mmc_claim_host - exclusively claim a host
* @host: mmc host to claim


#include <linux/timer.h>
#include <linux/sched.h>
#include <linux/device.h>
#include <linux/devfreq.h>
#include <linux/fault-inject.h>
#include <linux/mmc/core.h>
#include <linux/mmc/card.h>
#include <linux/mmc/pm.h>
#define MMC_AUTOSUSPEND_DELAY_MS 3000
struct mmc_ios {
unsigned int clock; /* clock rate */
unsigned int old_rate; /* saved clock rate */
unsigned long clk_ts; /* time stamp of last updated clock */
unsigned short vdd;
/* vdd stores the bit number of the selected voltage range from below. */
#define MMC_SET_DRIVER_TYPE_D 3
};
/* states to represent load on the host */
enum mmc_load {
MMC_LOAD_HIGH,
MMC_LOAD_LOW,
};
struct mmc_cmdq_host_ops {
int (*init)(struct mmc_host *host);
int (*enable)(struct mmc_host *host);
void (*disable)(struct mmc_host *host, bool soft);
int (*request)(struct mmc_host *host, struct mmc_request *mrq);
void (*post_req)(struct mmc_host *host, int tag, int err);
int (*halt)(struct mmc_host *host, bool halt);
void (*reset)(struct mmc_host *host, bool soft);
void (*dumpstate)(struct mmc_host *host);
};
struct mmc_host_ops {
int (*init)(struct mmc_host *host);
/*
* 'enable' is called when the host is claimed and 'disable' is called
* when the host is released. 'enable' and 'disable' are deprecated.
*/
int (*enable)(struct mmc_host *host);
int (*disable)(struct mmc_host *host);
/*
* It is optional for the host to implement pre_req and post_req in
* order to support double buffering of requests (prepare one
/* Prepare HS400 target operating frequency depending host driver */
int (*prepare_hs400_tuning)(struct mmc_host *host, struct mmc_ios *ios);
int (*enhanced_strobe)(struct mmc_host *host);
int (*select_drive_strength)(struct mmc_card *card,
unsigned int max_dtr, int host_drv,
int card_drv, int *drv_type);
*/
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
unsigned long (*get_max_frequency)(struct mmc_host *host);
unsigned long (*get_min_frequency)(struct mmc_host *host);
int (*notify_load)(struct mmc_host *, enum mmc_load);
void (*notify_halt)(struct mmc_host *mmc, bool halt);
void (*force_err_irq)(struct mmc_host *host, u64 errmask);
};
struct mmc_card;
struct device;
struct mmc_cmdq_req {
unsigned int cmd_flags;
u32 blk_addr;
/* active mmc request */
struct mmc_request mrq;
struct mmc_data data;
struct mmc_command cmd;
#define DCMD (1 << 0)
#define QBR (1 << 1)
#define DIR (1 << 2)
#define PRIO (1 << 3)
#define REL_WR (1 << 4)
#define DAT_TAG (1 << 5)
#define FORCED_PRG (1 << 6)
unsigned int cmdq_req_flags;
unsigned int resp_idx;
unsigned int resp_arg;
unsigned int dev_pend_tasks;
bool resp_err;
int tag; /* used for command queuing */
u8 ctx_id;
};
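cmdq_req_flags is a plain bitmask built from the flag bits above; for example, the downstream flush DCMD is marked as both a direct command and a queue barrier. A standalone sketch of encoding and testing these flags (names prefixed `SK_` to mark them as illustrative copies of the values in the struct):

```c
#include <assert.h>
#include <stdbool.h>

/* Flag bits as in struct mmc_cmdq_req above */
#define SK_DCMD   (1 << 0)	/* direct command, bypasses the data queue */
#define SK_QBR    (1 << 1)	/* queue barrier */
#define SK_DIR    (1 << 2)	/* transfer direction */
#define SK_PRIO   (1 << 3)	/* high-priority task */
#define SK_REL_WR (1 << 4)	/* reliable write */

bool is_dcmd(unsigned int flags)
{
	return flags & SK_DCMD;
}

unsigned int make_flush_flags(void)
{
	/* a flush-style DCMD also acts as a queue barrier */
	return SK_DCMD | SK_QBR;
}
```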
struct mmc_async_req {
/* active mmc request */
struct mmc_request *mrq;
void *handler_priv;
};
/**
 * struct mmc_cmdq_context_info - describes the contexts of cmdq
 * @active_reqs: requests being processed
 * @data_active_reqs: data requests being processed
 * @curr_state: state of the cmdq engine
 * @queue_empty_wq: wait queue used to wait for all
 *	outstanding requests to be completed
 * @wait: waits for all conditions described in
 *	mmc_cmdq_ready_wait to be satisfied before
 *	issuing a new request to the LLD
 */
struct mmc_cmdq_context_info {
unsigned long active_reqs; /* in-flight requests */
unsigned long data_active_reqs; /* in-flight data requests */
unsigned long curr_state;
#define CMDQ_STATE_ERR 0
#define CMDQ_STATE_DCMD_ACTIVE 1
#define CMDQ_STATE_HALT 2
#define CMDQ_STATE_CQ_DISABLE 3
#define CMDQ_STATE_REQ_TIMED_OUT 4
wait_queue_head_t queue_empty_wq;
wait_queue_head_t wait;
int active_small_sector_read_reqs;
};
/**
* struct mmc_context_info - synchronization details for mmc context
* @is_done_rcv: wake up reason was done request
struct regulator *vqmmc; /* Optional Vccq supply */
};
enum dev_state {
DEV_SUSPENDING = 1,
DEV_SUSPENDED,
DEV_RESUMED,
};
/**
* struct mmc_devfeq_clk_scaling - main context for MMC clock scaling logic
*
* @lock: spinlock to protect statistics
* @devfreq: struct that represents the mmc host as a client of devfreq
* @devfreq_profile: MMC device profile, mostly polling interval and callbacks
* @ondemand_gov_data: struct supplied to the ondemand governor (thresholds)
* @state: load state, can be HIGH or LOW; used to notify the mmc_host_ops callback
* @start_busy: timestamp armed once a data request is started
* @measure_interval_start: timestamp armed once a measure interval is started
* @devfreq_abort: flag to sync between different contexts relevant to devfreq
* @skip_clk_scale_freq_update: flag that enables/disables frequency updates
* @freq_table_sz: table size of frequencies supplied to devfreq
* @freq_table: frequencies table supplied to devfreq
* @curr_freq: current frequency
* @polling_delay_ms: polling interval for status collection used by devfreq
* @upthreshold: up-threshold supplied to ondemand governor
* @downthreshold: down-threshold supplied to ondemand governor
* @need_freq_change: flag indicating if a frequency change is required
* @clk_scaling_in_progress: flag indicating if there's ongoing frequency change
* @is_busy_started: flag indicating if a request is handled by the HW
* @enable: flag indicating if the clock scaling logic is enabled for this host
*/
struct mmc_devfeq_clk_scaling {
spinlock_t lock;
struct devfreq *devfreq;
struct devfreq_dev_profile devfreq_profile;
struct devfreq_simple_ondemand_data ondemand_gov_data;
enum mmc_load state;
ktime_t start_busy;
ktime_t measure_interval_start;
atomic_t devfreq_abort;
bool skip_clk_scale_freq_update;
int freq_table_sz;
u32 *freq_table;
unsigned long total_busy_time_us;
unsigned long target_freq;
unsigned long curr_freq;
unsigned long polling_delay_ms;
unsigned int upthreshold;
unsigned int downthreshold;
unsigned int lower_bus_speed_mode;
#define MMC_SCALING_LOWER_DDR52_MODE 1
bool need_freq_change;
bool clk_scaling_in_progress;
bool is_busy_started;
bool enable;
};
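The thresholds above feed devfreq's simple_ondemand governor, which compares the measured busy percentage of a polling window against upthreshold/downthreshold to pick a load level. A simplified standalone sketch of that decision (the keep-current hysteresis band is an illustrative assumption; the real policy lives in the governor):

```c
#include <assert.h>

enum mmc_load { MMC_LOAD_HIGH, MMC_LOAD_LOW };

/*
 * Decide the load level from busy statistics over one polling window.
 * upthreshold/downthreshold are percentages, as supplied to the
 * simple_ondemand governor via ondemand_gov_data.
 */
enum mmc_load clk_scaling_decide(unsigned long busy_us,
				 unsigned long total_us,
				 unsigned int upthreshold,
				 unsigned int downthreshold,
				 enum mmc_load current_load)
{
	unsigned long busy_pct = total_us ? (busy_us * 100) / total_us : 0;

	if (busy_pct > upthreshold)
		return MMC_LOAD_HIGH;
	if (busy_pct < downthreshold)
		return MMC_LOAD_LOW;
	return current_load;	/* inside the hysteresis band: keep current */
}
```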
struct mmc_host {
struct device *parent;
struct device class_dev;
struct mmc_devfeq_clk_scaling clk_scaling;
int index;
const struct mmc_host_ops *ops;
const struct mmc_cmdq_host_ops *cmdq_ops;
struct mmc_pwrseq *pwrseq;
unsigned int f_min;
unsigned int f_max;
#define MMC_CAP2_HSX00_1_2V (MMC_CAP2_HS200_1_2V_SDR | MMC_CAP2_HS400_1_2V)
#define MMC_CAP2_SDIO_IRQ_NOTHREAD (1 << 17)
#define MMC_CAP2_NO_WRITE_PROTECT (1 << 18) /* No physical write protect pin, assume that card is always read-write */
#define MMC_CAP2_PACKED_WR_CONTROL (1 << 19) /* Allow write packing control */
#define MMC_CAP2_CLK_SCALE (1 << 20) /* Allow dynamic clk scaling */
/* Allows Asynchronous SDIO irq while card is in 4-bit mode */
#define MMC_CAP2_ASYNC_SDIO_IRQ_4BIT_MODE (1 << 21)
/* Some hosts need additional tuning */
#define MMC_CAP2_HS400_POST_TUNING (1 << 22)
#define MMC_CAP2_NONHOTPLUG (1 << 25) /* Don't support hotplug */
#define MMC_CAP2_CMD_QUEUE (1 << 26) /* support eMMC command queue */
#define MMC_CAP2_SANITIZE (1 << 27) /* Support Sanitize */
#define MMC_CAP2_SLEEP_AWAKE (1 << 28) /* Use Sleep/Awake (CMD5) */
/* use max discard ignoring max_busy_timeout parameter */
#define MMC_CAP2_MAX_DISCARD_SIZE (1 << 29)
mmc_pm_flag_t pm_caps; /* supported pm features */
#ifdef CONFIG_MMC_CLKGATE
int clk_requests; /* internal reference counter */
unsigned int clk_delay; /* number of MCI clk hold cycles */
bool clk_gated; /* clock gated */
struct delayed_work clk_gate_work; /* delayed clock gate */
unsigned int clk_old; /* old clock value cache */
spinlock_t clk_lock; /* lock for clk fields */
struct mutex clk_gate_mutex; /* mutex for clock gating */
struct device_attribute clkgate_delay_attr;
unsigned long clkgate_delay;
#endif
/* host specific block data */
unsigned int max_seg_size; /* see blk_queue_max_segment_size */
unsigned short max_segs; /* see blk_queue_max_segments */
spinlock_t lock; /* lock for claim and bus ops */
struct mmc_ios ios; /* current io bus settings */
struct mmc_ios cached_ios;
/* group bitfields together to minimize padding */
unsigned int use_spi_crc:1;
wait_queue_head_t wq;
struct task_struct *claimer; /* task that has host claimed */
struct task_struct *suspend_task;
int claim_cnt; /* "claim" nesting count */
struct delayed_work detect;
} embedded_sdio_data;
#endif
/*
* Set to 1 to just stop the SDCLK to the card without
* actually disabling the clock from its source.
*/
bool card_clock_off;
#ifdef CONFIG_MMC_PERF_PROFILING
struct {
unsigned long rbytes_drv; /* Rd bytes MMC Host */
unsigned long wbytes_drv; /* Wr bytes MMC Host */
ktime_t rtime_drv; /* Rd time MMC Host */
ktime_t wtime_drv; /* Wr time MMC Host */
ktime_t start;
} perf;
bool perf_enable;
#endif
enum dev_state dev_status;
bool wakeup_on_idle;
struct mmc_cmdq_context_info cmdq_ctx;
int num_cq_slots;
int dcmd_cq_slot;
bool cmdq_thist_enabled;
/*
* Several cmdq-supporting host controllers are extensions
* of legacy controllers. This variable can be used to store
* a reference to the cmdq extension of the existing host
* controller.
*/
void *cmdq_private;
struct mmc_request *err_mrq;
unsigned long private[0] ____cacheline_aligned;
};
struct mmc_host *mmc_alloc_host(int extra, struct device *);
extern bool mmc_host_may_gate_card(struct mmc_card *);
int mmc_add_host(struct mmc_host *);
void mmc_remove_host(struct mmc_host *);
void mmc_free_host(struct mmc_host *);
return (void *)host->private;
}
static inline void *mmc_cmdq_private(struct mmc_host *host)
{
return host->cmdq_private;
}
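The `private[0] ____cacheline_aligned` member at the end of struct mmc_host is the classic "extra bytes after the struct" idiom that mmc_alloc_host()'s extra argument and mmc_priv()/mmc_cmdq_private() rely on: one allocation holds the host followed by driver-private data. A userspace sketch of the same trick (struct names are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

struct host {
	int index;
	/* driver-private data lives directly after the struct */
	unsigned long private[];	/* C99 flexible array member */
};

struct my_drv {
	int irq;
	char name[16];
};

/* Mirrors mmc_alloc_host(extra, ...): one allocation for host + private. */
struct host *alloc_host(size_t extra)
{
	return calloc(1, sizeof(struct host) + extra);
}

/* Mirrors mmc_priv(): private data starts right after the host struct. */
void *host_priv(struct host *h)
{
	return (void *)h->private;
}
```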
#define mmc_host_is_spi(host) ((host)->caps & MMC_CAP_SPI)
#define mmc_dev(x) ((x)->parent)
return !(host->caps2 & MMC_CAP2_BOOTPART_NOACC);
}
static inline bool mmc_card_and_host_support_async_int(struct mmc_host *host)
{
return ((host->caps2 & MMC_CAP2_ASYNC_SDIO_IRQ_4BIT_MODE) &&
(host->card->cccr.async_intr_sup));
}
static inline int mmc_host_uhs(struct mmc_host *host)
{
return host->caps &
return host->caps2 & MMC_CAP2_PACKED_WR;
}
static inline void mmc_host_set_halt(struct mmc_host *host)
{
set_bit(CMDQ_STATE_HALT, &host->cmdq_ctx.curr_state);
}
static inline void mmc_host_clr_halt(struct mmc_host *host)
{
clear_bit(CMDQ_STATE_HALT, &host->cmdq_ctx.curr_state);
}
static inline int mmc_host_halt(struct mmc_host *host)
{
return test_bit(CMDQ_STATE_HALT, &host->cmdq_ctx.curr_state);
}
static inline void mmc_host_set_cq_disable(struct mmc_host *host)
{
set_bit(CMDQ_STATE_CQ_DISABLE, &host->cmdq_ctx.curr_state);
}
static inline void mmc_host_clr_cq_disable(struct mmc_host *host)
{
clear_bit(CMDQ_STATE_CQ_DISABLE, &host->cmdq_ctx.curr_state);
}
static inline int mmc_host_cq_disable(struct mmc_host *host)
{
return test_bit(CMDQ_STATE_CQ_DISABLE, &host->cmdq_ctx.curr_state);
}
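The halt/CQ-disable helpers above are thin wrappers over set_bit()/clear_bit()/test_bit() on cmdq_ctx.curr_state. A userspace sketch of the same pattern with plain bit operations (the kernel primitives are atomic; these stand-ins are not):

```c
#include <assert.h>

/* State bit numbers, as in mmc_cmdq_context_info */
#define CMDQ_STATE_HALT       2
#define CMDQ_STATE_CQ_DISABLE 3

struct ctx {
	unsigned long curr_state;
};

/* Non-atomic stand-ins for the kernel's set_bit/clear_bit/test_bit. */
void set_state(struct ctx *c, int bit)
{
	c->curr_state |= (1UL << bit);
}

void clr_state(struct ctx *c, int bit)
{
	c->curr_state &= ~(1UL << bit);
}

int test_state(const struct ctx *c, int bit)
{
	return !!(c->curr_state & (1UL << bit));
}
```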
#ifdef CONFIG_MMC_CLKGATE
void mmc_host_clk_hold(struct mmc_host *host);
void mmc_host_clk_release(struct mmc_host *host);
unsigned int mmc_host_clk_rate(struct mmc_host *host);
#else
static inline void mmc_host_clk_hold(struct mmc_host *host)
{
}
static inline void mmc_host_clk_release(struct mmc_host *host)
{
}
static inline unsigned int mmc_host_clk_rate(struct mmc_host *host)
{
return host->ios.clock;
}
#endif
static inline int mmc_card_hs(struct mmc_card *card)
{
	return card->host->ios.timing == MMC_TIMING_SD_HS ||
	       card->host->ios.timing == MMC_TIMING_MMC_HS;
}


#include <uapi/linux/mmc/mmc.h>
/* class 11 */
#define MMC_CMDQ_TASK_MGMT 48 /* ac [31:0] task ID R1b */
#define DISCARD_QUEUE 0x1
#define DISCARD_TASK 0x2
static inline bool mmc_op_multi(u32 opcode)
{
return opcode == MMC_WRITE_MULTIPLE_BLOCK ||
* OCR bits are mostly in host.h
*/
#define MMC_CARD_BUSY 0x80000000 /* Card Power up status bit */
#define MMC_CARD_SECTOR_ADDR 0x40000000 /* Card supports sectors */
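Bit 30 of the OCR marks a sector-addressed (high-capacity) card; drivers use it to decide whether a data command's argument is a block number or a byte offset. A standalone sketch of that address translation (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define MMC_CARD_BUSY        0x80000000u /* power-up status bit */
#define MMC_CARD_SECTOR_ADDR 0x40000000u /* card uses sector addressing */

/*
 * Convert a 512-byte block number into a read/write command argument:
 * sector-addressed cards take the block number directly, byte-addressed
 * cards take a byte offset.
 */
uint32_t blk_to_cmd_arg(uint32_t ocr, uint32_t blk)
{
	return (ocr & MMC_CARD_SECTOR_ADDR) ? blk : blk * 512;
}
```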
/*
* Card Command Classes (CCC)
* EXT_CSD fields
*/
#define EXT_CSD_CMDQ 15 /* R/W */
#define EXT_CSD_BARRIER_CTRL 31 /* R/W */
#define EXT_CSD_FLUSH_CACHE 32 /* W */
#define EXT_CSD_CACHE_CTRL 33 /* R/W */
#define EXT_CSD_POWER_OFF_NOTIFICATION 34 /* R/W */
#define EXT_CSD_PART_CONFIG 179 /* R/W */
#define EXT_CSD_ERASED_MEM_CONT 181 /* RO */
#define EXT_CSD_BUS_WIDTH 183 /* R/W */
#define EXT_CSD_STROBE_SUPPORT 184 /* RO */
#define EXT_CSD_HS_TIMING 185 /* R/W */
#define EXT_CSD_POWER_CLASS 187 /* R/W */
#define EXT_CSD_REV 192 /* RO */
#define EXT_CSD_PWR_CL_200_360 237 /* RO */
#define EXT_CSD_PWR_CL_DDR_52_195 238 /* RO */
#define EXT_CSD_PWR_CL_DDR_52_360 239 /* RO */
#define EXT_CSD_CACHE_FLUSH_POLICY 240 /* RO */
#define EXT_CSD_BKOPS_STATUS 246 /* RO */
#define EXT_CSD_POWER_OFF_LONG_TIME 247 /* RO */
#define EXT_CSD_GENERIC_CMD6_TIME 248 /* RO */
#define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */
#define EXT_CSD_PWR_CL_DDR_200_360 253 /* RO */
#define EXT_CSD_FIRMWARE_VERSION 254 /* RO, 8 bytes */
#define EXT_CSD_CMDQ_DEPTH 307 /* RO */
#define EXT_CSD_CMDQ_SUPPORT 308 /* RO */
#define EXT_CSD_BARRIER_SUPPORT 486 /* RO */
#define EXT_CSD_SUPPORTED_MODE 493 /* RO */
#define EXT_CSD_TAG_UNIT_SIZE 498 /* RO */
#define EXT_CSD_DATA_TAG_SUPPORT 499 /* RO */
*/
#define EXT_CSD_WR_REL_PARAM_EN (1<<2)
#define EXT_CSD_WR_REL_PARAM_EN_RPMB_REL_WR (1<<4)
#define EXT_CSD_BOOT_WP_B_PWR_WP_DIS (0x40)
#define EXT_CSD_BOOT_WP_B_PERM_WP_DIS (0x10)
#define EXT_CSD_BUS_WIDTH_8 2 /* Card is in 8 bit mode */
#define EXT_CSD_DDR_BUS_WIDTH_4 5 /* Card is in 4 bit DDR mode */
#define EXT_CSD_DDR_BUS_WIDTH_8 6 /* Card is in 8 bit DDR mode */
#define EXT_CSD_BUS_WIDTH_STROBE 0x80 /* Card is in 8 bit DDR mode */
#define EXT_CSD_TIMING_BC 0 /* Backwards compatibility */
#define EXT_CSD_TIMING_HS 1 /* High speed */
#define EXT_CSD_PACKED_EVENT_EN BIT(3)
#define EXT_CSD_BKOPS_MANUAL_EN BIT(0)
#define EXT_CSD_BKOPS_AUTO_EN BIT(1)
/*
* EXCEPTION_EVENT_STATUS field
*/


#define SDIO_BUS_WIDTH_1BIT 0x00
#define SDIO_BUS_WIDTH_RESERVED 0x01
#define SDIO_BUS_WIDTH_4BIT 0x02
#define SDIO_BUS_WIDTH_8BIT 0x03
#define SDIO_BUS_ECSI 0x20 /* Enable continuous SPI interrupt */
#define SDIO_BUS_SCSI 0x40 /* Support continuous SPI interrupt */
#define SDIO_DTSx_SET_TYPE_A (1 << SDIO_DRIVE_DTSx_SHIFT)
#define SDIO_DTSx_SET_TYPE_C (2 << SDIO_DRIVE_DTSx_SHIFT)
#define SDIO_DTSx_SET_TYPE_D (3 << SDIO_DRIVE_DTSx_SHIFT)
#define SDIO_CCCR_INTERRUPT_EXTENSION 0x16
#define SDIO_SUPPORT_ASYNC_INTR (1<<0)
#define SDIO_ENABLE_ASYNC_INTR (1<<1)
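CCCR register 0x16 advertises asynchronous-interrupt support in bit 0 and enables the feature via bit 1; mmc_card_and_host_support_async_int() in host.h checks the support bit together with the host capability. A standalone sketch of enabling the feature in a shadow copy of the register value:

```c
#include <assert.h>
#include <stdint.h>

#define SDIO_CCCR_INTERRUPT_EXTENSION 0x16
#define SDIO_SUPPORT_ASYNC_INTR (1 << 0)
#define SDIO_ENABLE_ASYNC_INTR  (1 << 1)

/* Set the enable bit in a CCCR 0x16 shadow value, but only if the
 * card advertises support. */
uint8_t enable_async_intr(uint8_t reg)
{
	if (reg & SDIO_SUPPORT_ASYNC_INTR)
		reg |= SDIO_ENABLE_ASYNC_INTR;
	return reg;
}
```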
/*
* Function Basic Registers (FBR)
*/


/*
* Copyright (C) 2013 Google, Inc.
* Copyright (c) 2013-2015, The Linux Foundation. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
TP_CONDITION(((cmd == MMC_READ_MULTIPLE_BLOCK) ||
(cmd == MMC_WRITE_MULTIPLE_BLOCK)) &&
data));
TRACE_EVENT(mmc_cmd_rw_start,
TP_PROTO(unsigned int cmd, unsigned int arg, unsigned int flags),
TP_ARGS(cmd, arg, flags),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, arg)
__field(unsigned int, flags)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->arg = arg;
__entry->flags = flags;
),
TP_printk("cmd=%u,arg=0x%08x,flags=0x%08x",
__entry->cmd, __entry->arg, __entry->flags)
);
TRACE_EVENT(mmc_cmd_rw_end,
TP_PROTO(unsigned int cmd, unsigned int status, unsigned int resp),
TP_ARGS(cmd, status, resp),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, status)
__field(unsigned int, resp)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->status = status;
__entry->resp = resp;
),
TP_printk("cmd=%u,int_status=0x%08x,response=0x%08x",
__entry->cmd, __entry->status, __entry->resp)
);
TRACE_EVENT(mmc_data_rw_end,
TP_PROTO(unsigned int cmd, unsigned int status),
TP_ARGS(cmd, status),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, status)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->status = status;
),
TP_printk("cmd=%u,int_status=0x%08x",
__entry->cmd, __entry->status)
);
DECLARE_EVENT_CLASS(mmc_adma_class,
TP_PROTO(unsigned int cmd, unsigned int len),
TP_ARGS(cmd, len),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, len)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->len = len;
),
TP_printk("cmd=%u,sg_len=0x%08x", __entry->cmd, __entry->len)
);
DEFINE_EVENT(mmc_adma_class, mmc_adma_table_pre,
TP_PROTO(unsigned int cmd, unsigned int len),
TP_ARGS(cmd, len));
DEFINE_EVENT(mmc_adma_class, mmc_adma_table_post,
TP_PROTO(unsigned int cmd, unsigned int len),
TP_ARGS(cmd, len));
TRACE_EVENT(mmc_clk,
TP_PROTO(char *print_info),
TP_ARGS(print_info),
TP_STRUCT__entry(
__string(print_info, print_info)
),
TP_fast_assign(
__assign_str(print_info, print_info);
),
TP_printk("%s",
__get_str(print_info)
)
);
DECLARE_EVENT_CLASS(mmc_pm_template,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs),
TP_STRUCT__entry(
__field(s64, usecs)
__field(int, err)
__string(dev_name, dev_name)
),
TP_fast_assign(
__entry->usecs = usecs;
__entry->err = err;
__assign_str(dev_name, dev_name);
),
TP_printk(
"took %lld usecs, %s err %d",
__entry->usecs,
__get_str(dev_name),
__entry->err
)
);
DEFINE_EVENT(mmc_pm_template, mmc_runtime_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, mmc_runtime_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, mmc_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, mmc_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_runtime_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_runtime_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
#endif /* if !defined(_TRACE_MMC_H) || defined(TRACE_HEADER_MULTI_READ) */
/* This part must be outside protection */
#include <trace/define_trace.h>


# UAPI Header export list
header-y += core.h
header-y += ioctl.h
header-y += mmc.h


#define MMC_READ_MULTIPLE_BLOCK 18 /* adtc [31:0] data addr R1 */
#define MMC_SEND_TUNING_BLOCK 19 /* adtc R1 */
#define MMC_SEND_TUNING_BLOCK_HS200 21 /* adtc R1 */
#define MMC_SEND_TUNING_BLOCK_HS400 MMC_SEND_TUNING_BLOCK_HS200
#define MMC_TUNING_BLK_PATTERN_4BIT_SIZE 64
#define MMC_TUNING_BLK_PATTERN_8BIT_SIZE 128