With the exception of a forced cast for the phys_obj stuff (a problem
in other patches as well), all of these are fairly simple __iomem
compliance fixes.
As with other patches, yank/paste errors may exist.
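For reference, the annotation pattern sparse is asking for looks roughly
like this (an illustrative sketch, not a hunk from this patch): pointers
into register/GTT mappings carry __iomem and are only touched through
the io accessors.

#include <linux/io.h>

/* Illustrative only: an __iomem-annotated pointer, accessed via the io
 * helpers rather than plain dereferences or a memcpy() through a cast.
 */
static u32 read_mmio(void __iomem *regs, u32 offset)
{
        return readl(regs + offset);
}

static void copy_to_gtt(void __iomem *dst, const void *src, size_t len)
{
        memcpy_toio(dst, src, len);
}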
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
[danvet: Added comment to explain the __iomem cast.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Almost all of the errors relate to __iomem problems.
Most of the changes here are trivial; however, there is plenty of
opportunity for yank/paste errors.
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was originally used as an attempt to diagnose GPU hangs, but it
was never very reliable and has been superseded by the i915_error_state
capture on hangcheck. It now lies languishing, unused and unwanted.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
IvyBridge requires an extra frame between disabling the low power
watermarks and enabling scaling on the sprite plane. If the scaling
is already enabled, then we have already disabled the low power
watermarks and need not incur an extra wait.
Similarly, as we disable the scaling when turning off the sprite plane,
we can update the scaling enabled flag and restore the low power
watermarks.
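In sketch form, assuming the helper and flag names from intel_sprite.c
(treat them as approximate, not the exact hunk):

/* ivb_update_plane(), roughly: only when scaling is being switched on
 * do we kill the low power watermarks and burn the extra frame before
 * enabling the scaler.
 */
if (crtc_w != src_w || crtc_h != src_h) {
        if (!dev_priv->sprite_scaling_enabled) {
                dev_priv->sprite_scaling_enabled = true;
                intel_update_watermarks(dev);
                intel_wait_for_vblank(dev, pipe);
        }
        sprscale = SPRITE_SCALE_ENABLE | (src_w << 16) | src_h;
}

/* ivb_disable_plane(), roughly: scaling goes away with the plane, so
 * the low power watermarks can be restored.
 */
if (dev_priv->sprite_scaling_enabled) {
        dev_priv->sprite_scaling_enabled = false;
        intel_update_watermarks(dev);
}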
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Pretty useful to debug our DP bandwidth woes.
v2: Also print out the required and available link bandwidth,
suggested by Chris Wilson.
v3: Also print out the input parameters so that diagnosing failures to
find a valid dp link configuration is possible.
v4: s/Display port/DP/ to shorten the output.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On SandyBridge, IPS is implemented entirely in hardware and does not
rely on the driver monitoring power consumption and feeding back
desired run states, so the hardware is able to adapt more quickly and
flexibly. This is a huge relief for us, as we no longer have to carry
empirically derived magic algorithms.
Yet despite the advance in technology, the driver was still doing its
IPS polling on all machines. Restrict it to the only supported hardware,
Clarkdale/Arrandale.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-by: Andrey Rahmatullin <wrar@wrar.name>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=49025
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We only execute intel_decrease_pllclock for pre-PCH hardware, typically
gen4 mobiles. However, the variable declaration read from the non-PCH
DPLL register unconditionally, which is quite naughty on PCH platforms
and was detected by SandyBridge.
Reported-and-tested-by: Andrey Rahmatullin <wrar@wrar.name>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=49025
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Clearing bit 5 of CACHE_MODE_0 is necessary to prevent GPU hangs in
OpenGL programs such as Google MapsGL, Google Earth, and gzdoom when
using separate stencil buffers. Without it, the GPU tries to use the
LRA eviction policy, which isn't supported. This was supposed to be off
by default, but seems to be on for many machines.
This cannot be done in gen6_init_clock_gating with most of the other
workaround bits; the render ring needs to exist. Otherwise, the
register write gets dropped on the floor (one printk will show it
changed, but a second printk immediately following shows the value
reverts to the old one).
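Concretely, the write ends up in the render ring init path; roughly
(a sketch, using the masked-write convention of CACHE_MODE_0 where the
high 16 bits select the bit being changed):

/* init_render_ring(): the ring exists by now, so the write sticks. */
if (IS_GEN6(dev)) {
        /* Clear bit 5: the LRA eviction policy is not supported. */
        I915_WRITE(CACHE_MODE_0,
                   CM0_STC_EVICT_DISABLE_LRA_SNB << CM0_MASK_SHIFT);
}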
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=47535
Cc: stable@vger.kernel.org
Cc: Rob Castle <futuredub@gmail.com>
Cc: Eric Appleman <erappleman@gmail.com>
Cc: aaron667@gmx.net
Cc: Keith Packard <keithp@keithp.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We seem to have a decent confusion between the output timings and the
input timings of the sdvo encoder. If I understand the code correctly,
we use the original mode unchanged for the output timings, save for
the lvds case. And we should use the adjusted mode for input timings.
Clarify the situation by adding an explicit output_dtd to the sdvo
mode_set function and streamline the code-flow by moving the input and
output mode setting in the sdvo encoder together.
Furthermore, testing showed that the sdvo input timing needs the
unadjusted dotclock; the sdvo chip will automatically compute the
required pixel multiplier to get a dotclock above 100 MHz.
Fix this up when converting a drm mode to an sdvo dtd.
This regression was introduced in
commit c74696b9c8
Author: Pavel Roskin <proski@gnu.org>
Date: Thu Sep 2 14:46:34 2010 -0400
i915: revert some checks added by commit 32aad86f
particularly the following hunk:
diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
index 093e914..62d22ae 100644
--- a/drivers/gpu/drm/i915/intel_sdvo.c
+++ b/drivers/gpu/drm/i915/intel_sdvo.c
@@ -1122,11 +1123,9 @@ static void intel_sdvo_mode_set(struct drm_encoder *encoder,
/* We have tried to get input timing in mode_fixup, and filled into
adjusted_mode */
- if (intel_sdvo->is_tv || intel_sdvo->is_lvds) {
- intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode);
+ intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode);
+ if (intel_sdvo->is_tv || intel_sdvo->is_lvds)
input_dtd.part2.sdvo_flags = intel_sdvo->sdvo_flags;
- } else
- intel_sdvo_get_dtd_from_mode(&input_dtd, mode);
/* If it's a TV, we already set the output timing in mode_fixup.
* Otherwise, the output timing is equal to the input timing.
Due to questions raised in review, below is a more elaborate analysis
of the bug at hand:
Sdvo seems to have two timings: one is the output timing, which will be
sent over whatever is connected on the other side of the sdvo chip
(panel, hdmi screen, tv); the other is the input timing, which will be
generated by the gmch pipe. It looks like sdvo is expected to scale
between the two.
To make things slightly more complicated, we have a bunch of special
cases:
- For lvds panel we always use a fixed output timing, namely
intel_sdvo->sdvo_lvds_fixed_mode, hence that special case.
- Sdvo has an interface to generate a preferred input timing for a given
output timing. This is the confusing thing that I've tried to clear up
with the follow-on patches.
- A special requirement is that the input pixel clock needs to be between
100MHz and 200MHz (likely to keep it within the electrical design
range of PCIe), 270MHz on later gen4+. Lower pixel clocks are
doubled/quadrupled.
The thing this patch tries to fix is that the pipe needs to be
explicitly instructed to double/quadruple the pixels and needs the
correspondingly higher pixel clock, whereas the sdvo adaptor seems to
do that itself and needs the unadjusted pixel clock. For the sdvo
encoder side we already set the pixel multiplier with a different
command (0x21).
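The doubling/quadrupling rule itself is simple; in sketch form
(matching intel_sdvo_get_pixel_multiplier, clock in kHz):

/* The sdvo link wants a dotclock between 100 and 200 MHz; slower
 * modes have their pixels sent two or four times.
 */
static int intel_sdvo_get_pixel_multiplier(struct drm_display_mode *mode)
{
        if (mode->clock >= 100000)
                return 1;
        else if (mode->clock >= 50000)
                return 2;
        else
                return 4;
}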
This patch tries to fix this mess by:
- Keeping the output mode timing in the unadjusted plain mode, save
for the lvds case.
- Storing the input timing in the adjusted_mode with the adjusted
pixel clock. This way we don't need to frob around with the core
crtc mode set code.
- Fixing up the pixelclock when constructing the sdvo dtd timing
struct. This is why the first hunk of the patch is an integral part
of the series.
- Dropping the is_tv special case because input_dtd is equivalent to
adjusted_mode after these changes. Follow-up patches clear this up
further (by simply ripping out intel_sdvo->input_dtd because it's
not needed).
v2: Extend commit message with an in-depth bug analysis.
Reported-and-Tested-by: Bernard Blackham <b-linuxgit@largestprime.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=48157
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: stable@kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On 32-bit systems, a large args->num_cliprects from userspace via ioctl
may overflow the allocation size, leading to out-of-bounds access.
This vulnerability was introduced in commit 432e58ed ("drm/i915: Avoid
allocation for execbuffer object list").
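The shape of the problem and of the guard, as a sketch (not the exact
hunk):

/* On 32-bit the multiplication can wrap for a huge num_cliprects, so a
 * too-small buffer gets allocated and later accesses run past it.
 * Check before multiplying:
 */
if (args->num_cliprects > UINT_MAX / sizeof(*cliprects))
        return -EINVAL;

cliprects = kmalloc(args->num_cliprects * sizeof(*cliprects),
                    GFP_KERNEL);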
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On 32-bit systems, a large args->buffer_count from userspace via ioctl
may overflow the allocation size, leading to out-of-bounds access.
This vulnerability was introduced in commit 8408c282 ("drm/i915:
First try a normal large kmalloc for the temporary exec buffers").
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Chris' fix for my 32b breakage was incorrect: do_div() returns the
remainder, not the quotient. Go back to a divide macro which is more
32b friendly.
Tested on x86-64.
This has only been compile-tested on 32b systems.
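For reference, the do_div() semantics that bit us here (an illustrative
snippet, not driver code):

#include <linux/math64.h>
#include <asm/div64.h>

static u64 quotient(u64 num, u32 den)
{
        u64 n = num;
        u32 rem = do_div(n, den); /* n becomes the quotient, rem the remainder */

        (void)rem; /* using the return value as the quotient is the bug */

        /* div_u64() is the 32b-friendly way to get the quotient directly */
        return div_u64(num, den);
}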
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=48756
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Sincere-apologies: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
[danvet: fixup 32bit compile-fail.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Somehow we have a fast-path that tries to avoid going through
the load-detect code when the encoder already has a crtc associated.
But this fails horribly when the crtc is off. The load detect pipe
itself manages this case well (and also does not forget to restore the
dpms state), so just rip out this special case.
The issue seems to go back all the way to the commit that originally
introduced load-detection on the vga output:
commit e4a5d54f92
Author: Ma Ling <ling.ma@intel.com>
Date: Tue May 26 11:31:00 2009 +0800
drm/i915: Add support for VGA load detection (pre-945).
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=43020
Reported-by: Jean Delvare <khali@linux-fr.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This continues the theme started with vm_brk() and vm_munmap():
vm_mmap() does the same thing as do_mmap(), but additionally does the
required VM locking.
This uninlines (and rewrites it to be clearer) do_mmap(), which sadly
duplicates it in mm/mmap.c and mm/nommu.c. But that way we don't have
to export our internal do_mmap_pgoff() function.
Some day we hopefully don't have to export do_mmap() either, if all
modular users can become the simpler vm_mmap() instead. We're actually
very close to that already, with the notable exception of the (broken)
use in i810, and a couple of stragglers in binfmt_elf.
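The shape of the new helper, roughly (a sketch; the real one in mm/
also validates the offset):

unsigned long vm_mmap(struct file *file, unsigned long addr,
                      unsigned long len, unsigned long prot,
                      unsigned long flag, unsigned long offset)
{
        struct mm_struct *mm = current->mm;
        unsigned long ret;

        /* Same arguments as do_mmap(), but the locking lives here so
         * modular callers don't have to reach into mm internals.
         */
        down_write(&mm->mmap_sem);
        ret = do_mmap(file, addr, len, prot, flag, offset);
        up_write(&mm->mmap_sem);

        return ret;
}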
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It looks like we also need to flush the render cache when we just
invalidate it. This fixes a regression in i-g-t/gem_tiled_blits on my
i855gm. I guess the render cache there is virtually indexed, so we
need to clean it when changing gtt mappings.
This regression has been introduced in
commit 46f0f8d120
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Apr 18 11:12:11 2012 +0100
drm/i915: Don't set a MBZ bit in gen2/3 MI_FLUSH
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When the change to start adjusting the sync polarity of the LVDS mode
was introduced in
commit aa9b500ddf
Author: Bryan Freed <bfreed@google.com>
Date: Wed Jan 12 13:43:19 2011 -0800
drm/i915: Honour LVDS sync polarity from EDID
we made the change in state verbose so that we could quickly spot any
regressions that may have also been introduced with it. As there do not
appear to have been any, remove the extra logging.
v2: Remove the no longer used variables.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This adds an intel_pm routine for generic power-related infrastructure
initialization.
v2: now that all the platform-specific stuff is initialized in one place, we
can also add back the static qualifiers to the platform-specific functions
which we now abstract.
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This moves the clock gating-related functions into the intel_pm module.
Also, please note that we change the functions from static to non-static
in this patch for the move, to prevent breaking bisection with a
non-working intermediate commit. They are returned to static in the
following patch, which sets up a generic PM initialization function and
was split into a separate patch to simplify review.
v2: rebase on top of latest drm-intel-next-queued to incorporate all the
changes that went there meanwhile.
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This moves the Ironlake energy monitoring functionality into the
intel_pm module.
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This moves DRPS, RPS and RC6-related functionality into the intel_pm
module. It also removes the linux/cpufreq.h include from intel_display,
as its only user was the GPU turbo-related functionality in the Gen6+
code path.
v2: rebase on top of latest drm-intel-next-queued adding the bits that
shifted around since the last patch.
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The previous patch had way too long lines; this fixes them to fit into
a reasonable screen space.
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This commit moves Frame Buffer Compression-related operations and support
functions into the new intel_pm module.
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can also take advantage of the new 'no retire' mode for seqno waiting
to avoid having to take a reference on the old fence object whilst
flushing an existing fence.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we have a routine that is able to clear the fences as well as
set up the register for a tiled object, remove the surplus routines to
clear the fences.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
One clarification that we make is to the existing semantics of
obj->tiling_changed to only mean that we need to update an associated
fence register (including the NO_FENCE when executing an untiled but
fenced GPU command). If we do not have a fence register or pending
fenced GPU access for the object (after put_fence() for example), then
we can clear the tiling_changed flag as any fence will necessarily be
rewritten upon acquisition.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Update the existing architecture specific fence writing routines to
either update the fence to point to a tiled object or to clear them, in
preparation for removing the other fence writing routines.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As i915_wait_request() will first check for an already passed seqno,
doing it also in the caller is a waste of space for a cold path.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As the fences are stored in LRU order, we can simply reuse the oldest if
we do not have an unused register.
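In sketch form, the fallback then becomes a single walk of the LRU list
(the real routine also keeps a fast path for registers that are
entirely free):

/* mm.fence_list is kept in LRU order, so the first unpinned entry is
 * either unused or the oldest in-use fence and can simply be stolen.
 */
list_for_each_entry(reg, &dev_priv->mm.fence_list, lru_list) {
        if (reg->pin_count)
                continue;
        return reg;
}
return NULL;    /* everything is pinned; the caller has to wait */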
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we now never pipeline a fence update, obj->last_fenced_ring is always
the same as the obj->ring whenever obj->last_fenced_seqno is active, so
remove it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we now no longer track a pipelined fence change, we never use
ring->setup_seqno and can kill it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Step 2 is then to replace the pipelined parameter with NULL and perform
constant folding to remove dead code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We never succeeded in getting pipelined fencing to work (unresolved
spurious GPU hangs), so begin the process of dismantling and removing
the broken code.
Step 1 is the removal of the pipeline parameter to get_fence().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
During modeset we have to disable the pipe to reconfigure its timings
and maybe its size. Userspace may have queued up command buffers that
depend upon the pipe running in a certain configuration and so the
commands may become confused across the modeset. At the moment, we use a
less than satisfactory kick-scanline-waits workaround should the GPU hang
during the modeset. It should be more reliable to wait for the pending
operations to complete first, even though we still have a window for
userspace to submit a broken command buffer during the modeset.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On gen2 MI_EXE_FLUSH is actually an AGP flush bit and on gen3 it is
marked as reserved. On both it is documented as being must-be-zero. So
obey the
documentation, and separate the gen2 flush into its own little routine
and share with gen3.
This means that we can rename the existing render_ring_flush() to
reflect the generation from which it first applies and remove the code
for handling earlier generations from it.
v2: Applies to gen3 as well
v3: Make it compile and improve the commit message.
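The resulting gen2/3 routine then looks roughly like this (a sketch of
its eventual form, i.e. including the i855gm render-cache fix noted
earlier in this log; MI_EXE_FLUSH is simply never set):

static int
gen2_render_ring_flush(struct intel_ring_buffer *ring,
                       u32 invalidate_domains, u32 flush_domains)
{
        u32 cmd = MI_FLUSH;
        int ret;

        if (((invalidate_domains | flush_domains) & I915_GEM_DOMAIN_RENDER) == 0)
                cmd |= MI_NO_WRITE_FLUSH;
        if (invalidate_domains & I915_GEM_DOMAIN_SAMPLER)
                cmd |= MI_READ_FLUSH;

        ret = intel_ring_begin(ring, 2);
        if (ret)
                return ret;

        intel_ring_emit(ring, cmd);
        intel_ring_emit(ring, MI_NOOP);
        intel_ring_advance(ring);
        return 0;
}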
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we need to manipulate our device structure and allocate and queue a
task, it is no longer a simple atomic operation and cannot be performed
along the atomic modeset paths. Instead, make sure that we disable FBC
(which must therefore be kept as a set of simple register writes) when
performing the atomic modeset and leave the heavy-weight
intel_update_fbc() for the normal modeset.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This originally started as a patch from Bernard as a way of simply
setting the VS scheduler. After submitting the RFC patch, we decided to
also modify the DS scheduler. To be most explicit, I've made the patch
explicitly set all scheduler modes, and included the defines for other
modes (in case someone feels frisky later).
The rest of the story gets a bit weird. The first version of the patch
showed an almost unbelievable performance improvement. Since rebasing my
branch it appears the performance improvement has gone, unfortunately.
But setting these bits seems to be the right thing to do given that the
docs describe corruption that can occur with the default settings.
In summary, I am seeing no more perf improvements (or regressions) in my
limited testing, but we believe this should be set to prevent rendering
corruption, therefore cc stable.
v1: Clear bit 4 also (Ken + Eugeni)
Do a full clear + set of the bits we want (Me).
Cc: Bernard Kilarski <bernard.r.kilarski@intel.com>
Cc: stable <stable@vger.kernel.org>
Reviewed-by (RFC): Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The (2<<6) virtual memory space selector harks back to gen3 and is
mandatory given our use of GTT space for batchbuffers. On gen4+, use of
the GTT became mandatory and bit 6 was marked reserved. However, the
code must now explicitly set (1<<7), which conveniently is also (2<<6).
To clarify the meaning for future readers, replace the open coded (2<<6)
with MI_BATCH_GTT.
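In other words, roughly (a sketch of the define and of its use in the
gen4+ dispatch path):

/* i915_reg.h: (2<<6) was gen3's GTT address-space selector; on gen4+
 * the very same encoding is the mandatory (1<<7) bit.
 */
#define MI_BATCH_GTT            (2 << 6) /* aliased with (1 << 7) on gen4 */

/* i965_dispatch_execbuffer(), roughly: */
intel_ring_emit(ring,
                MI_BATCH_BUFFER_START |
                MI_BATCH_GTT |
                MI_BATCH_NON_SECURE_I965);
intel_ring_emit(ring, offset);
intel_ring_advance(ring);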
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we defer updating the fence register from set-tiling to the point of
use, we need to declare every access through the GTT as either fenced or
unfenced.
This patch fixes an old bug in the execbuffer relocation processing
which could conceivably be hit by a pathological userspace.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Sparse doesn't like:
"error: bad constant expression"
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
[danvet: apply s/drm_malloc_ab/kcalloc bikeshed. If it's small enough
for the stack, it's small enough for kmalloc.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This should contain all the changes which require no thought to make
sparse happy.
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When the PCH split occurred, hw dropped support for separate hsync and
vsync disable in the VGA DAC. So add a PCH specific DPMS function that
just uses the port enable bit for controlling DPMS states.
Before this fix, when anything other than a full DPMS off occurred,
the VGA port would be left enabled and scanning out while all the other
heads would turn off as expected.
v2: duplicate encoder helper vtable into pch and gmch versions (Daniel)
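The PCH variant then boils down to toggling just the port enable bit,
roughly (a sketch; register and bit names as in i915_reg.h):

static void pch_crt_dpms(struct drm_encoder *encoder, int mode)
{
        struct drm_device *dev = encoder->dev;
        struct drm_i915_private *dev_priv = dev->dev_private;
        u32 temp;

        temp = I915_READ(PCH_ADPA);
        temp &= ~ADPA_DAC_ENABLE;

        /* No separate hsync/vsync disable bits on PCH, so standby,
         * suspend and off all simply disable the port.
         */
        if (mode == DRM_MODE_DPMS_ON)
                temp |= ADPA_DAC_ENABLE;

        I915_WRITE(PCH_ADPA, temp);
}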
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=48491
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[danvet: s/intel_crt_dpms/gmch_crt_dpms as suggested by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Not only does the pageflip work without it at non-native modes (i.e. with
the panel fitter enabled), it also causes normal (non-pageflipped)
modesets to fail.
Reported-by: Adam Jackson <ajax@redhat.com>
Tested-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Wanted-by-for-fixes: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The purpose of this patch is to avoid zeroing the lower 12 reserved bits
of surface base address registers (framebuffer & sprite). There are bits
in that range that may occasionally be set by BIOS or by other components.
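A minimal sketch of the read-modify-write this turns into (the macro
name here is illustrative; the actual patch may spell it differently):

/* Only update the page-aligned surface address; leave whatever sits in
 * the low 12 reserved bits of the register untouched.
 */
#define DISP_BASEADDR_MASK      0xfffff000
#define I915_MODIFY_DISPBASE(reg, gfx_addr) \
        I915_WRITE((reg), (gfx_addr) | (I915_READ(reg) & ~DISP_BASEADDR_MASK))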
Signed-off-by: Armin Reese <armin.c.reese@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This needs proper enablement to avoid machine hangs, so let's just avoid
it for now.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
They work differently, but the count is the same.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>