The functions irq_move_irq() and irq_move_masked_irq() expect that the
caller passes the top-level irq_data to them when hierarchical
irqdomains are enabled. But that's not true when called from
apic_ack_edge(), which results in a null pointer dereference by
idata->chip->irq_mask(idata).
Instead of fixing the callers to pass the top-level irq_data, change
irq_move_irq()/irq_move_masked_irq() to accept any irq_data.
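A sketch of the idea (not the literal diff): resolve the irq_desc from
whatever irq_data the caller hands in, then operate on the top-level
irq_data embedded in the descriptor:

	struct irq_desc *desc = irq_data_to_desc(idata);
	struct irq_data *data = irq_desc_get_irq_data(desc);
	struct irq_chip *chip = data->chip;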
Fixes: 52f518a3a7 'x86/MSI: Use hierarchical irqdomains to manage MSI interrupts'
Reported-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/1433145945-789-3-git-send-email-jiang.liu@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
For an irq associated with a hierarchical irqdomain, there are multiple
irq_data instances for one irq_desc. So enhance irq_data_to_desc() to
support hierarchical irqdomains. Also export irq_data_to_desc() as an
inline function for later reuse.
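The helper ends up looking roughly like this (a sketch; in the
hierarchical case the lookup goes through the irq number, because several
irq_data instances may reference the same descriptor):

	static inline struct irq_desc *irq_data_to_desc(struct irq_data *data)
	{
	#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
		return irq_to_desc(data->irq);
	#else
		return container_of(data, struct irq_desc, irq_data);
	#endif
	}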
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Link: http://lkml.kernel.org/r/1433145945-789-2-git-send-email-jiang.liu@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The current handling of MMC_BLK_CMD_ERR in mmc_blk_issue_rw_rq() may
cause a newly arrived request to be lost permanently when the ongoing
(previously started) request completes.
The problem scenario is as follows:
(1) Request A is ongoing;
(2) Request B arrived, and finally mmc_blk_issue_rw_rq() is called;
(3) Request A encounters the MMC_BLK_CMD_ERR error;
(4) In the error handling of MMC_BLK_CMD_ERR, suppose mmc_blk_cmd_err()
ends request A (completes it) and returns zero. Error handling continues;
suppose mmc_blk_reset() resets the device successfully;
(5) Execution continues and the while loop exits, because variable ret
is now zero;
(6) Finally, mmc_blk_issue_rw_rq() returns without processing request B.
The process that issued the missing request may wait forever for that IO
request to complete, possibly crashing the application or hanging the
system.
Fix this issue by starting the new request when the reset succeeds.
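A sketch of the intended flow in mmc_blk_issue_rw_rq() (simplified, not
the literal diff): when request A has already been ended (ret == 0) and
the reset succeeded, jump to the code that starts the newly arrived
request instead of falling out of the loop:

	case MMC_BLK_CMD_ERR:
		ret = mmc_blk_cmd_err(md, card, brq, req, ret);
		if (mmc_blk_reset(md, card->host, type))
			goto cmd_abort;
		if (!ret)
			goto start_new_req;
		break;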
Signed-off-by: Ding Wang <justin.wang@spreadtrum.com>
Fixes: 67716327ee ("mmc: block: add eMMC hardware reset support")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
When dma mapping (dma_map_sg) fails in sdhci_pre_dma_transfer, -EINVAL
is returned. There are 3 callers of sdhci_pre_dma_transfer:
* sdhci_pre_req and sdhci_adma_table_pre: handle negative return
* sdhci_prepare_data: handles 0 (error) and "else" (good) only
sdhci_prepare_data is therefore broken. When it receives -EINVAL from
sdhci_pre_dma_transfer, it assumes 1 sg mapping was mapped. Later,
this non-existent mapping with address 0 is kmap'ped and written to:
Corrupted low memory at ffff880000001000 (1000 phys) = 22b7d67df2f6d1cf
Corrupted low memory at ffff880000001008 (1008 phys) = 63848a5216b7dd95
Corrupted low memory at ffff880000001010 (1010 phys) = 330eb7ddef39e427
Corrupted low memory at ffff880000001018 (1018 phys) = 8017ac7295039bda
Corrupted low memory at ffff880000001020 (1020 phys) = 8ce039eac119074f
...
So teach sdhci_prepare_data to understand negative return values from
sdhci_pre_dma_transfer and disable DMA in that case, as well as for
zero.
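A sketch of the check in sdhci_prepare_data() (assuming the existing
SDHCI_REQ_USE_DMA flag; the exact diff may differ): treat any
non-positive return value as "do not use DMA":

	sg_cnt = sdhci_pre_dma_transfer(host, data, NULL);
	if (sg_cnt <= 0) {
		/*
		 * dma_map_sg() failed (or mapped nothing):
		 * fall back to PIO instead of using a bogus mapping.
		 */
		host->flags &= ~SDHCI_REQ_USE_DMA;
	}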
The bug was introduced in 348487cb28 (mmc: sdhci: use pipeline mmc
requests to improve performance). That commit also looks suspicious in
that it assigns host->sg_count in both sdhci_pre_dma_transfer and
sdhci_adma_table_pre.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: stable@vger.kernel.org # 4.0+
Fixes: 348487cb28
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Haibo Chen <haibo.chen@freescale.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Change the AMD CZ SMBus device ID from the hard-coded value 0x790b to a
macro definition.
Signed-off-by: Wan ZongShun <Vincent.Wan@amd.com>
Acked-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Fix a "Trying to vfree() nonexistent vm area" error when unloading the CAAM
controller module by providing the correct pointer value to iounmap().
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The CAAM driver uses two data buffers to store data for a hashing operation,
with one buffer defined as active. This change forces switching of the
active buffer when executing a hashing operation to avoid a later DMA unmap
using the length of the opposite buffer.
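A minimal sketch of the idea, assuming the hash state tracks the active
buffer in an index such as state->current_buf (the identifier is an
assumption here, not necessarily the driver's actual field):

	/* toggle the active buffer so a later DMA unmap always refers to
	 * the buffer (and length) that was actually mapped */
	state->current_buf = !state->current_buf;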
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Added a mpi_read_buf() helper function to export an MPI to a buffer
provided by the user, and a mpi_get_size() helper that tells the user how
big the buffer is. Changed mpi_free() to use kzfree() instead of kfree(),
because it is used to free crypto keys.
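A usage sketch (the exact prototypes are assumptions here; see the patch
for the real signatures):

	unsigned int len = mpi_get_size(mpi);	/* size of buffer needed */
	u8 *buf = kmalloc(len, GFP_KERNEL);
	int sign;

	if (buf)
		mpi_read_buf(mpi, buf, len, NULL, &sign);	/* export MPI */
	kzfree(buf);	/* key material: zero before freeing */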
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The top-level CRYPTO_DEV_VMX option already depends on PPC64 so
there is no need to depend on it again at CRYPTO_DEV_VMX_ENCRYPT.
This patch also removes a redundant "default n".
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The hwrng output buffers (2) are cast inside of a struct (caam_rng_ctx)
allocated in one DMA-tagged region. While the kernel's heap allocator
should place the overall struct on a cacheline-aligned boundary, the 2
buffers contained within may not necessarily be aligned. Consequently, the
ends of unaligned buffers may not be fully flushed, and if so, stale data
will be left behind, resulting in small repeating patterns.
This fix aligns the buffers inside the struct.
Note that not all of the data inside caam_rng_ctx necessarily needs to be
DMA-tagged; only the buffers themselves require this. However, allocating
the buffers separately would incur the expense of error-handling bloat in
the case of allocation failure.
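A sketch of the fix, assuming the buffers live in an array inside
caam_rng_ctx; ____cacheline_aligned is the usual kernel attribute for
forcing such alignment (the actual patch may use a driver-specific
alignment macro instead):

	struct caam_rng_ctx {
		/* ... job ring, descriptors, etc. ... */
		struct buf_data bufs[2] ____cacheline_aligned;
	};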
Cc: stable@vger.kernel.org
Signed-off-by: Steve Cornelius <steve.cornelius@freescale.com>
Signed-off-by: Victoria Milhoan <vicki.milhoan@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Patch 1c6c9b1d9d caused a regression for rsrc_nonstatic: it relies
on pccard_validate_cis() to determine whether an iomem resource can
be used for PCMCIA cards. This override, however, led to invalid iomem
resources being accepted -- and to a fake CIS being used instead
of the original CIS.
To fix this issue, move the override for anonymous cards to the one
place where it is needed -- when adding a PCMCIA device.
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
lapic.timer_mode was not properly initialized after migration, which
broke a few useful things, like login, by making every sleep eternal.
Fix this by calling apic_update_lvtt in kvm_apic_post_state_restore.
There are other slowpaths that update lvtt, so this patch makes sure
something similar doesn't happen again by calling apic_update_lvtt
after every modification.
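The core of the fix in kvm_apic_post_state_restore() (simplified sketch):

	struct kvm_lapic *apic = vcpu->arch.apic;

	/* recompute timer_mode from the restored LVTT register */
	apic_update_lvtt(apic);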
Cc: stable@vger.kernel.org
Fixes: f30ebc312c ("KVM: x86: optimize some accesses to LVTT and SPIV")
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Turns out 1366x768 does not in fact work on this hardware.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
Currently the con_id of the acquired clock is printed for debugging
purposes. But in several cases, the con_id is NULL, which doesn't
provide much debugging information when printed. These cases are:
- When explicitly passing a NULL con_id (which means the first clock
tied to the device, if available),
- When not using pm_clk_add(), but pm_clk_add_clk() (which takes a
"struct clk *" directly).
Hence print the actual clock name in addition to (and not instead of;
thanks Grygorii Strashko!) the con_id.
Note that the clock name is not available with legacy clock frameworks,
and the hex pointer address will be printed instead.
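A sketch of the resulting debug print, using the %pC printk format
(which prints the clock name with the common clock framework and the
pointer value with legacy frameworks); the exact message wording here is
illustrative:

	dev_dbg(dev, "Added clock %pC, con_id %s\n", ce->clk, ce->con_id);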
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The PM Domain code uses ktime_get() to perform various latency
measurements. However, if ktime_get() is called while timekeeping is
suspended, the following warning is printed:
WARNING: CPU: 0 PID: 1340 at kernel/time/timekeeping.c:576 ktime_get+0x3
This happens when resuming the PM Domain that contains the clock events
source, which calls pm_genpd_syscore_poweron(). The chain of operations is:
  timekeeping_resume()
  {
      clockevents_resume()
          sh_cmt_clock_event_resume()
              pm_genpd_syscore_poweron()
                  pm_genpd_sync_poweron()
                      genpd_syscore_switch()
                          genpd_power_on()
                              ktime_get(), but timekeeping_suspended == 1
      ...
      timekeeping_suspended = 0;
  }
Fix this by adding a "timed" parameter to genpd_power_{on,off}() and
pm_genpd_sync_power{off,on}(), to indicate whether latency measurements
are allowed. This parameter is passed as false in
genpd_syscore_switch() (i.e. during syscore suspend/resume), and true in
all other cases.
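A sketch of how the "timed" parameter is used in genpd_power_on()
(simplified; power-off is symmetric):

	static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
	{
		ktime_t time_start;
		s64 elapsed_ns;
		int ret;

		if (!timed)
			return genpd->power_on(genpd);	/* skip latency measurement */

		time_start = ktime_get();
		ret = genpd->power_on(genpd);
		if (ret)
			return ret;

		elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
		if (elapsed_ns > genpd->power_on_latency_ns)
			genpd->power_on_latency_ns = elapsed_ns;

		return 0;
	}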
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The new Dell XPS13 also requires a similar quirk for fixing the
noisy outputs. (But, as the codec was changed, the fixup for the
Latitude is now used instead.)
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=99851
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Each time the CPU switches its frequency, the clock nodes in the
DTS are walked through to find the proper clock source. This is
very time-consuming; for example, it takes up to 500+ us on T4240.
Besides, the switching time varies from clock to clock.
To optimize this, each input clock of a CPU is cached, so that
it can be picked up instantly when needed.
Since for each CPU each input clock is stored in a pointer,
which takes 4 or 8 bytes of memory, and there are normally only a few
input clocks per CPU, this will not take much memory either.
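A hypothetical sketch of the caching (identifiers below are illustrative,
not necessarily the driver's actual ones): resolve the input-clock
candidates once at init and reuse the cached pointers in set_target():

	struct cpu_data {
		struct clk *clk;	/* the CPU clock itself */
		struct clk **pclk;	/* cached input (parent) clocks */
	};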
Signed-off-by: Tang Yuantian <Yuantian.Tang@freescale.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There are several races reported in cpufreq core around governors (only
ondemand and conservative) by different people.
There are at least two race scenarios present in governor code:
(a) Concurrent access/updates of governor internal structures.
It is possible that fields such as 'dbs_data->usage_count', etc. are
accessed simultaneously for different policies using the same governor
structure (i.e. with the CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag unset), and
because of this we can dereference bad pointers.
For example, consider a system with two CPUs that have separate 'struct
cpufreq_policy' instances: CPU0's governor is ondemand and CPU1's is
powersave. Now CPU0 switches to powersave and CPU1 to ondemand:
 CPU0                                 CPU1

 store*                               store*

 cpufreq_governor_exit()              cpufreq_governor_init()
                                      dbs_data = cdata->gdbs_data;
 if (!--dbs_data->usage_count)
         kfree(dbs_data);
                                      dbs_data->usage_count++;
                                      *Bad pointer dereference*
There are other races possible between EXIT and START/STOP/LIMIT as
well. It's really complicated.
(b) Switching governor state in a bad sequence:
For example, trying to switch a governor to the START state when the
governor is in the EXIT state. There are some checks present in
__cpufreq_governor(), but they aren't sufficient as they compare events
against 'policy->governor_enabled', whereas we need to take the governor's
state into account, since the governor can be used by multiple policies.
These two issues need to be solved separately and the responsibility
should be properly divided between cpufreq and governor core.
The first problem is more about the governor core, as it needs to
protect its structures properly. And the second problem should be fixed
in the cpufreq core instead of the governor, as it's all about the
sequence of events.
This patch is trying to solve only the first problem.
There are two types of data we need to protect,
- 'struct common_dbs_data': No matter what, there is going to be a
single copy of this per governor.
- 'struct dbs_data': With CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag set, we
will have per-policy copy of this data, otherwise a single copy.
Because of such complexities, the mutex present in 'struct dbs_data' is
insufficient to solve our problem. For example we need to protect
fetching of 'dbs_data' from different structures at the beginning of
cpufreq_governor_dbs(), to make sure it isn't currently being updated.
This can be fixed if we can guarantee serialization of event parsing
code for an individual governor. This is best solved with a mutex per
governor, and the placeholder for that is 'struct common_dbs_data'.
And so this patch moves the mutex from 'struct dbs_data' to 'struct
common_dbs_data' and takes it at the beginning and drops it at the end
of cpufreq_governor_dbs().
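A sketch of the resulting locking (simplified):

	int cpufreq_governor_dbs(struct cpufreq_policy *policy,
				 struct common_dbs_data *cdata,
				 unsigned int event)
	{
		int ret;

		/* one mutex per governor serializes all event handling */
		mutex_lock(&cdata->mutex);
		ret = 0;
		/* ... handle INIT/EXIT/START/STOP/LIMITS, updating ret ... */
		mutex_unlock(&cdata->mutex);

		return ret;
	}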
Tested with and without the following configuration options:
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_PI_LIST=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCKDEP=y
CONFIG_DEBUG_ATOMIC_SLEEP=y
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
cpufreq_governor_dbs() is hardly readable; it is just too big and
complicated. Let's make it more readable by splitting out event-specific
routines.
The order of statements is changed in a few places, but that shouldn't
bring any functional change.
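The result is roughly a thin dispatcher (sketch; the helper names and
exact signatures here are illustrative):

	switch (event) {
	case CPUFREQ_GOV_POLICY_INIT:
		return cpufreq_governor_init(policy, dbs_data, cdata);
	case CPUFREQ_GOV_POLICY_EXIT:
		return cpufreq_governor_exit(policy, dbs_data);
	case CPUFREQ_GOV_START:
		return cpufreq_governor_start(policy, dbs_data);
	case CPUFREQ_GOV_STOP:
		return cpufreq_governor_stop(policy, dbs_data);
	case CPUFREQ_GOV_LIMITS:
		return cpufreq_governor_limits(policy, dbs_data);
	}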
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Notifiers are required only by the conservative governor, and the common
governor code is unnecessarily polluted with them. Handle them from
cs_init()/cs_exit() instead of cpufreq_governor_dbs().
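In other words (sketch):

	/* in cs_init(): */
	cpufreq_register_notifier(&cs_cpufreq_notifier_block,
				  CPUFREQ_TRANSITION_NOTIFIER);

	/* in cs_exit(): */
	cpufreq_unregister_notifier(&cs_cpufreq_notifier_block,
				    CPUFREQ_TRANSITION_NOTIFIER);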
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Currently, amd-xgbe driver has separate logic to determine device
coherency for DT vs. ACPI. This patch simplifies the code with
a call to device_dma_is_coherent().
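The DT/ACPI-specific checks then collapse into a single call, roughly
(the field names follow the driver's private data and are shown here for
illustration only):

	pdata->coherent = device_dma_is_coherent(pdata->dev);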
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Currently, the driver has separate logic to determine device coherency
for DT vs ACPI. This patch simplifies the code with a call to
device_dma_is_coherent().
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Currently, device drivers which support both OF and ACPI
need to call two separate APIs, of_dma_is_coherent() and
acpi_dma_is_coherent(), to determine the device coherency attribute.
This patch simplifies this process by introducing a new device
property API, device_dma_is_coherent(), which calls the appropriate
interface based on the booting architecture.
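One plausible shape of the new helper (a sketch; the actual
implementation may differ in detail):

	bool device_dma_is_coherent(struct device *dev)
	{
		if (IS_ENABLED(CONFIG_OF) && dev->of_node)
			return of_dma_is_coherent(dev->of_node);

		return acpi_dma_is_coherent(ACPI_COMPANION(dev));
	}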
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Section 6.2.17 (_CCA) of the ACPI specification states that ARM platforms
require the ACPI _CCA object to be specified for DMA-capable devices.
Therefore, this patch specifies ACPI_CCA_REQUIRED in the arm64 Kconfig.
In addition, to handle the case when _CCA is missing, arm64 assigns
dummy_dma_ops to disable the DMA capability of the device.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This patch implements support for the ACPI _CCA object, which was
introduced in ACPI 5.1 and can be used to specify a device's DMA coherency
attribute.
The parsing logic traverses the device namespace to parse coherency
information and stores it in acpi_device_flags. It is then used to call
arch_setup_dma_ops() when creating each device enumerated in the DSDT
during ACPI scan.
This patch also introduces acpi_dma_is_coherent(), which provides
an interface for device drivers to check the coherency information,
similar to of_dma_is_coherent().
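The checking interface is likely just a flag accessor along these lines
(a sketch; the flag name inside acpi_device_flags is an assumption):

	static inline bool acpi_dma_is_coherent(struct acpi_device *adev)
	{
		return adev && adev->flags.coherent_dma;
	}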
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
When the QR_EC transaction fails, the EC_FLAGS_QUERY_PENDING flag prevents
the event handling work queue from being scheduled again.
Though there shouldn't be failed QR_EC transactions, and this gap was
usefully exploited for catching and learning about SCI_EVT clearing timing
compliance issues, we need to fix it, as we are not fully compatible
with all platforms/Windows in handling the SCI_EVT clearing timing
correctly. Fixing this gives the EC driver the chance to recover from a
state machine failure.
So this patch fixes the issue: when nr_pending_queries drops to 0, it
clears EC_FLAGS_QUERY_PENDING at the proper position for the different
modes, in order to ensure that SCI_EVT handling can proceed.
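A sketch of the idea (identifiers simplified):

	if (!--ec->nr_pending_queries)
		clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);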
In order to be clearer for future ec_event_clearing modes, all checks in
this patch are written in the inclusive style, not the exclusive style.
Cc: 3.16+ <stable@vger.kernel.org> # 3.16+
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
It is reported that on several platforms, the EC firmware will not respond
to an unexpected QR_EC (see EC_FLAGS_QUERY_HANDSHAKE: only write QR_EC when
SCI_EVT is set).
Unfortunately, the ACPI specification doesn't define when SCI_EVT should be
cleared by the firmware, so the original implementation queued up a second
QR_EC right after writing the QR_EC command and before reading the returned
event value, as at that point SCI_EVT is ensured not to have been cleared
yet. This behavior is also based on the assumption that the firmware should
be able to return 0x00 to indicate "no outstanding event". This behavior
did fix issues on Samsung platforms, where the spurious query value of 0x00
is supported, and didn't break platforms in my test queue.
But recently, specific Acer, Asus and Lenovo platforms keep blaming this
change.
This patch changes the behavior to re-check the SCI_EVT a bit later and
removes EC_FLAGS_QUERY_HANDSHAKE quirks, hoping this is the Windows
compliant EC driver behavior.
In order to be robust against possible regressions, instead of removing the
quirk directly, this patch keeps the quirk code, removes its users and
keeps the old behavior for Samsung platforms.
Cc: 3.16+ <stable@vger.kernel.org> # 3.16+
Link: https://bugzilla.kernel.org/show_bug.cgi?id=94411
Link: https://bugzilla.kernel.org/show_bug.cgi?id=97381
Link: https://bugzilla.kernel.org/show_bug.cgi?id=98111
Reported-and-tested-by: Gabriele Mazzotta <gabriele.mzt@gmail.com>
Reported-and-tested-by: Tigran Gabrielyan <tigrangab@gmail.com>
Reported-and-tested-by: Adrien D <ghbdtn@openmailbox.org>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
We've been suffering from the uncertainty of the SCI_EVT clearing timing.
This patch implements 3 of 4 possible modes to handle SCI_EVT clearing
variations. The old behavior is kept in this patch.
Status: QR_EC is re-checked as early as possible after checking the
previous SCI_EVT. This always leads to 2 QR_EC transactions per SCI_EVT
indication, and the target may implement an event queue which returns
0x00 indicating "no outstanding event".
This is proven to be in conflict with Windows' behavior, but it is
still kept in this patch to make the EC driver robust against the
possible regressions that may occur on Samsung platforms.
Query: QR_EC is re-checked after the target has handled the QR_EC query
request command pushed by the host.
Event: QR_EC is re-checked after the target has noticed the query event
response data pulled by the host.
This timing is not determined by any IRQ, so we may need to use a
guard period in this mode, which may explain the existence of the
ec_guard() code used by the old EC driver, where the re-check timing
is implemented in a similar way to this mode.
Method: QR_EC is re-checked as late as possible after completing the _Qxx
evaluation. The target may implement SCI_EVT like a level-triggered
interrupt.
It is proven in kernel bugzilla 94411 that Windows parallelizes all
_Qxx evaluations. Thus, unless required by further evidence, we needn't
implement this mode, as it conflicts with the _Qxx parallelism
requirement.
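The three implemented modes map onto an enum along these lines (a sketch;
the unimplemented "Method" mode is intentionally left out):

	enum ec_event_clearing {
		ACPI_EC_EVT_TIMING_STATUS,	/* re-check right after the SCI_EVT check */
		ACPI_EC_EVT_TIMING_QUERY,	/* re-check after QR_EC is handled */
		ACPI_EC_EVT_TIMING_EVENT,	/* re-check after the event data is read */
	};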
Note that, according to the reports, there are platforms that cannot be
handled using the "Status" mode without enabling the
EC_FLAGS_QUERY_HANDSHAKE quirk. But they can be handled with the other
modes according to the tests (kernel bugzilla 97381).
The following log entry can be used to confirm the differences of the 3
modes as it should appear at the different positions for the 3 modes:
    Command(QR_EC) unblocked
Status: appearing after
    EC_SC(W) = 0x84
Query: appearing after
    EC_DATA(R) = 0xXX
    where XX is the event number used to determine _QXX
Event: appearing after first
    EC_SC(R) = 0xX0 SCI_EVT=x BURST=0 CMD=0 IBF=0 OBF=0
    that is next to the following log entry:
    Command(QR_EC) completed by hardware
Link: https://bugzilla.kernel.org/show_bug.cgi?id=94411
Link: https://bugzilla.kernel.org/show_bug.cgi?id=97381
Link: https://bugzilla.kernel.org/show_bug.cgi?id=98111
Reported-and-tested-by: Gabriele Mazzotta <gabriele.mzt@gmail.com>
Reported-and-tested-by: Tigran Gabrielyan <tigrangab@gmail.com>
Reported-and-tested-by: Adrien D <ghbdtn@openmailbox.org>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
During the period in which the event work item is scheduled (queued up to
run) but hasn't been run yet, a second schedule_work() could fail. This may
not lead to the loss of queries, because QR_EC is always ensured to be
submitted after the work item has entered the running state.
The event handling work item can be changed into a loop style to allow
us to control the code in a more flexible way:
1. It makes it possible to add an event=0x00 termination condition in the
loop.
2. It increases the throughput of the QR_EC transactions, as the 2nd+
QR_EC transactions may be handled in the same work item used for the 1st
QR_EC transaction, so the delay caused by scheduling the 2nd+ work items
can be eliminated.
Apart from the logging message changes and the throughput improvement,
this patch is functionally a no-op.
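A sketch of the loop style (the helper names here are illustrative, not
the driver's actual functions):

	static void acpi_ec_event_handler(struct work_struct *work)
	{
		struct acpi_ec *ec = container_of(work, struct acpi_ec, work);

		/* drain all pending queries in one work item instead of
		 * re-scheduling one work item per event */
		while (event_pending(ec)) {	/* illustrative helper */
			if (acpi_ec_query(ec, NULL))
				break;
		}
	}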
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Tested-by: Gabriele Mazzotta <gabriele.mzt@gmail.com>
Tested-by: Tigran Gabrielyan <tigrangab@gmail.com>
Tested-by: Adrien D <ghbdtn@openmailbox.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This patch collects the transaction state transition code into one
function, so that we have a single place to maintain transaction state
transition related behaviors. No functional changes.
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Tested-by: Gabriele Mazzotta <gabriele.mzt@gmail.com>
Tested-by: Tigran Gabrielyan <tigrangab@gmail.com>
Tested-by: Adrien D <ghbdtn@openmailbox.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make the button ACPI device ID array static const. Saves us a little bit
of code.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There is no need to have processor_power_dmi_table[] writeable, constify
it.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Constify the acpi_hed_ids[] ACPI device IDs array -- no need to have it
writeable.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The device descriptors are never written to -- even pointed to as
'const' from struct lpss_private_data. Make them r/o for real.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The bat_dmi_table[] DMI table is referenced from the __init function
acpi_battery_init_async() only. It and its referenced functions can
therefore be marked __initconst to free up ~1kB of runtime memory after
initialization is done.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make the acpi_battery_units() function take a const argument and return
a const char*, too. Also make it static. It probably doesn't matter, as
gcc will be clever enough to optimize and inline the code even without
these hints. However, we also get rid of a #ifdef block by moving the
function closer to its usage location, so it's at least a small gain in
code readability.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The offset tables are only read, not modified. Make them const.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There is no need to have ac_dmi_table[] writeable, constify it.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make the video ACPI device ID array static and constify the DMI system
IDs array. Saves us a little bit of code.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
On some systems the acpi-video backlight is broken in the sense that it
cannot control the brightness of the backlight, but it must still be
called on resume to power up the backlight after resume.
This commit allows these systems to work by going through all the usual
backlight control moves, while not registering a sysfs backlight
interface.
This commit also adds a quirk enabling this parameter on Toshiba Portege
R830 systems which are known to be affected by this.
I wish there was a better way to deal with this, but we've been unable to
find one.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=21012
Link: https://bugs.freedesktop.org/show_bug.cgi?id=82634
Reported-and-tested-by: Sylvain Pasche <sylvain.pasche@gmail.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Aaron Lu <aaron.lu@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
It seems that the latest generation of MacbookPro needs to use the
native backlight driver, just like most modern laptops do, but it does
not automatically get enabled as the Apple BIOS does not advertise
Windows 8 compatibility. So add a quirk for this.
Reported-by: Christopher Beland <beland@alum.mit.edu>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>