The machines that fall in this category are the SDVs that have a PCI
ID starting with 0x0C. These are very early pre-production machines
and may not fully work. Other Haswell SDVs have PCI IDs that match the
real Haswell machines and we expect them to work better.
Even though they have problems, they still mostly work so I don't see
a reason to refuse loading our driver. But I do see a reason to reject
bug reports from these machines, so the message should help the bug
triagers.
As far as I know, we don't implement some workarounds that are
specific to these machines, and suspend/resume may not work on most of
them, but besides this, they may work.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=61508
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Haswell GT3e has the unique feature of supporting Write-Through caching
of objects within the eLLC/LLC. The purpose of this is to enable the display
plane to remain coherent whilst objects lie resident in the eLLC/LLC - so
that we, in theory, get the best of both worlds, perfect display and fast
access.
However, we still need to be careful as the CPU does not see the WT when
accessing the cache. In particular, this means that we need to flush the
cache lines after writing to an object through the CPU, and on
transitioning from a cached state to WT.
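
As a rough sketch of that rule (drm_clflush_virt_range() is the generic
DRM helper; the object layout below is a stand-in, not the driver's real
structure):

#include <stddef.h>
#include <string.h>

void drm_clflush_virt_range(void *addr, unsigned long length); /* DRM core helper */

struct wt_obj_sketch {
	void *vaddr;	/* CPU mapping of the object */
	int is_wt;	/* object currently uses write-through caching */
};

static void cpu_write_sketch(struct wt_obj_sketch *obj, size_t off,
			     const void *src, size_t len)
{
	memcpy((char *)obj->vaddr + off, src, len);
	/* The CPU mapping doesn't honour WT, so the write may still sit
	 * dirty in the CPU cache; flush it so the display engine sees it. */
	if (obj->is_wt)
		drm_clflush_virt_range((char *)obj->vaddr + off, len);
}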
v2: Actually do the clflush on transition to WT, nagging by Ville.
v3: Flush the CPU cache after writes into WT objects.
v4: Rebase onto LLC updates and report WT as "uncached" for
get_cache_level_ioctl to remain symmetric with set_cache_level_ioctl.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we skip clflushes more often, return a boolean indicating
whether the clflush was actually performed, and only if it was do the
chipset flush. (Though on most of the architectures where the clflush will
be skipped, the chipset flush is a no-op!)
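
A minimal sketch of that calling pattern (names are illustrative, not
the exact driver functions):

#include <stdbool.h>

struct obj_sketch { bool cache_coherent; };

static void clflush_object(struct obj_sketch *obj) { (void)obj; /* flush lines */ }
static void chipset_flush(void) { /* GTT/chipset write buffer flush */ }

/* Returns true only if cache lines were actually flushed. */
static bool clflush_if_needed(struct obj_sketch *obj)
{
	if (obj->cache_coherent)	/* e.g. coherent via the LLC */
		return false;
	clflush_object(obj);
	return true;
}

static void finish_cpu_write(struct obj_sketch *obj)
{
	/* Only do the chipset flush when a clflush actually happened. */
	if (clflush_if_needed(obj))
		chipset_flush();
}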
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As mentioned in the previous commit, reads and writes from both the CPU
and GPU go through the LLC. This gives us coherency between the CPU and
GPU irrespective of the attribute settings either device sets. We can
use this to avoid having to clflush even uncached memory.
Except for the scanout.
The scanout resides within another functional block that does not use
the LLC but reads directly from main memory. So in order to maintain
coherency with the scanout, writes to uncached memory must be flushed.
In order to optimize writes elsewhere, we start tracking whether a
framebuffer is attached to an object.
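
A simplified sketch of the resulting decision; the pin_display field
mirrors the tracking described here, the rest is illustrative:

#include <stdbool.h>

struct scanout_obj_sketch {
	unsigned int pin_display;	/* >0 while a scanout/cursor/sprite uses it */
	bool llc_coherent;		/* CPU/GPU coherent through the LLC */
};

/* Writes only the GPU will read can skip the clflush on LLC machines;
 * anything the display engine may scan out cannot, because the display
 * block bypasses the LLC and reads straight from main memory. */
static bool cpu_write_needs_clflush_sketch(const struct scanout_obj_sketch *obj)
{
	if (obj->pin_display)
		return true;
	return !obj->llc_coherent;
}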
v2: Use pin_display tracking rather than fb_count (to ensure we flush
cursors as well etc) and only force the clflush along explicit writes to
the scanout paths (i.e. pin_to_display_plane and pwrite into scanout).
v3: Force the flush after hitting the slowpath in pwrite, as after
dropping the lock the object's cache domain may be invalidated. (Ville)
Based on a patch by Ville Syrjälä.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The display engine has unique coherency rules such that it requires
special handling to ensure that all writes to cursors, scanouts and
sprites are clflushed. This patch introduces the infrastructure to
simply track when an object is being accessed by the display engine.
v2: Explain the is_pin_display() magic as the sources for obj->pin_count
and their individual rules are not obvious. (Ville)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This code was dead since:
commit 432e58edc9
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Nov 25 19:32:06 2010 +0000
drm/i915: Avoid allocation for execbuffer object list
so just put it to rest for good.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We're going to want to know the crtc in the watermark code to avoid
doing more work than we have to. We should also pass the plane we're
disabling so that we know where to stick our watermark parameters
without having to go look the plane up.
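
Illustratively, the disable hook then carries enough context to clear
the right per-pipe, per-sprite slot without a lookup (all names below
are placeholders):

struct sprite_wm_sketch { int enabled; int level; };

struct crtc_sketch {
	int pipe;
	struct sprite_wm_sketch sprite_wm[2];	/* per-sprite slots on this pipe */
};

static void disable_sprite_wm_sketch(struct crtc_sketch *crtc, int sprite)
{
	/* Knowing both the crtc and the sprite index, the watermark code
	 * can store or clear its parameters directly. */
	crtc->sprite_wm[sprite].enabled = 0;
	crtc->sprite_wm[sprite].level = 0;
}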
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Let's call hsw_lp_wm_result intel_wm_level from now on and move it to
i915_drv.h for later use.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
formerly: "drm/i915: Create VMAs (part 4) - Error capture"
Since the active/inactive lists are per VM, we need to modify the error
capture code to be aware of this, and also extend it to capture the
buffers from all the VMs. For now all the code assumes only 1 VM, but it
will become more generic over the next few patches.
NOTE: If the number of VMs in a real world system grows significantly
we'll have to focus on only capturing the guilty VM, or else it's likely
there won't be enough space for error capture.
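
Conceptually the capture path becomes a loop over every VM's active
list instead of assuming a single one (stand-in types; the real code
snapshots each buffer):

#include <stddef.h>

struct bo_sketch { struct bo_sketch *next; };

struct vm_sketch {
	struct vm_sketch *next;		/* link in the global list of VMs */
	struct bo_sketch *active;	/* per-VM active objects */
};

static size_t capture_vm(const struct vm_sketch *vm)
{
	size_t n = 0;
	const struct bo_sketch *bo;

	for (bo = vm->active; bo; bo = bo->next)
		n++;			/* real code records the buffer here */
	return n;
}

static size_t capture_all_vms(const struct vm_sketch *vms)
{
	size_t n = 0;
	const struct vm_sketch *vm;

	for (vm = vms; vm; vm = vm->next)
		n += capture_vm(vm);
	return n;
}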
v2: Squashed in the "part 6" which had dependencies on the mm_list
change. Since I've moved the mm_list change to an earlier point in the
series, we were able to accomplish it here and now.
v3: Rebased over new error capture
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
formerly: "drm/i915: Create VMAs (part 5) - move mm_list"
The mm_list is used for the active/inactive LRUs. Since those LRUs are
per address space, the link should be per VMA.
Because we'll only ever have 1 VMA before this point, it's not incorrect
to defer this change until this point in the patch series, and doing it
here makes the change much easier to understand.
Shamelessly manipulated out of Daniel:
"active/inactive stuff is used by eviction when we run out of address
space, so needs to be per-vma and per-address space. Bound/unbound otoh
is used by the shrinker which only cares about the amount of memory used
and not one bit about in which address space this memory is all used in.
Of course to actually kick out an object we need to unbind it from every
address space, but for that we have the per-object list of vmas."
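
Restated in structure form (a sketch; field names are illustrative and
the real structs carry much more):

#include <linux/list.h>		/* sketch assumes the kernel list helpers */

struct i915_vma_sketch {
	struct list_head mm_list;	/* per-VM active/inactive LRU link:
					 * eviction works per address space */
	struct list_head vma_link;	/* link in the object's vma_list */
};

struct i915_gem_object_sketch {
	struct list_head global_list;	/* bound/unbound link: the shrinker
					 * only cares how much memory is used,
					 * not in which address space */
	struct list_head vma_list;	/* every VMA of this object */
};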
v2: only bump GGTT LRU in i915_gem_object_set_to_gtt_domain (Chris)
v3: Moved earlier in the series
v4: Add dropped message from v3
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Frob patch to apply and use vma->node.size directly as
discussed with Ben. Also drop a needless BUG_ON before move_to_inactive,
the function itself has the same check.]
[danvet 2nd: Rebase on top of the lost "drm/i915: Cleanup more of VMA
in destroy", specifically unlink the vma from the mm_list in
vma_unbind (to keep it symmetric with bind_to_vm) instead of
vma_destroy.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We're going to use the 1/2 vs. 5/6 split option already on IVB so the
HSW name is not proper. Just give it an intel_ prefix and move it to
i915_drv.h so that we can use it there later.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Eviction code, like the rest of the converted code needs to be aware of
the address space for which it is evicting (or the everything case, all
addresses). With the updated bind/unbind interfaces of the last patch,
we can now safely move the eviction code over.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As alluded to in several patches, and it will be reiterated later... A
VMA is an abstraction for a GEM BO bound into an address space.
Therefore it stands to reason that the existing bind and unbind are
the ones which will be the most impacted. This patch implements this,
and updates all callers which weren't already updated in the series
(because it was too messy).
This patch represents the bulk of an earlier, larger patch. I've pulled
out a bunch of things by the request of Daniel. The history is preserved
for posterity with the email convention of ">". One big change from the
original patch, aside from a bunch of cropping, is that I've created an
i915_vma_unbind() function. That is because we always have the VMA
anyway, so doing an extra lookup is unnecessary. There is a caveat: we
retain an i915_gem_object_ggtt_unbind, for the global cases which might
not talk in VMAs.
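
Roughly, the new shape of the interface (simplified prototypes; error
handling and the real structures are omitted):

struct obj_sketch;
struct vm_sketch;
struct vma_sketch;

struct vma_sketch *lookup_vma(struct obj_sketch *obj, struct vm_sketch *vm);

/* Unbind operates on the VMA itself, since callers already hold one. */
int vma_unbind_sketch(struct vma_sketch *vma);

/* Convenience wrapper kept for the common global-GTT case, for callers
 * that still think in objects rather than VMAs. */
static int obj_ggtt_unbind_sketch(struct obj_sketch *obj, struct vm_sketch *ggtt)
{
	struct vma_sketch *vma = lookup_vma(obj, ggtt);

	if (!vma)
		return 0;	/* not bound in the global GTT */
	return vma_unbind_sketch(vma);
}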
> drm/i915: plumb VM into object operations
>
> This patch was formerly known as:
> "drm/i915: Create VMAs (part 3) - plumbing"
>
> This patch adds a VM argument, bind/unbind, and the object
> offset/size/color getters/setters. It preserves the old ggtt helper
> functions because things still need, and will continue to need them.
>
> Some code will still need to be ported over after this.
>
> v2: Fix purge to pick an object and unbind all vmas
> This was doable because of the global bound list change.
>
> v3: With the commit to actually pin/unpin pages in place, there is no
> longer a need to check if unbind succeeded before calling put_pages().
> Make put_pages only BUG() after checking pin count.
>
> v4: Rebased on top of the new hangcheck work by Mika
> plumbed eb_destroy also
> Many checkpatch related fixes
>
> v5: Very large rebase
>
> v6:
> Change BUG_ON to WARN_ON (Daniel)
> Rename vm to ggtt in preallocate stolen, since it is always ggtt when
> dealing with stolen memory. (Daniel)
> list_for_each will short-circuit already (Daniel)
> remove superfluous space (Daniel)
> Use per object list of vmas (Daniel)
> Make obj_bound_any() use obj_bound for each vm (Ben)
> s/bind_to_gtt/bind_to_vm/ (Ben)
>
> Fixed up the inactive shrinker. As Daniel noticed the code could
> potentially count the same object multiple times. While it's not
> possible in the current case, since 1 object can only ever be bound into
> 1 address space thus far - we may as well try to get something more
> future proof in place now. With a prep patch before this to switch over
> to using the bound list + inactive check, we're now able to carry that
> forward for every address space an object is bound into.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Rebase on top of the loss of "drm/i915: Cleanup more of VMA
in destroy".]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
MLC_LLC was never validated for Sandybridge and was superseded by a new
level of caching for the GPU in Ivybridge. Update our names to be
consistent with usage, and in the process stop setting the unwanted bit
on Sandybridge.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: s/BUG/WARN_ON(1) bikeshed.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rather than having to read the latency values out every time, just
store them in dev_priv.
On ILK and IVB there is a difference between some of the latency
values for different planes, so store the latency values for each
plane type separately, and apply the necessary fixups during init.
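
In effect dev_priv grows cached latency tables, split by plane type so
the per-platform fixups can be applied once at init (sizes and names
below are illustrative):

#include <stdint.h>

#define WM_LEVELS_SKETCH 5

struct wm_latency_sketch {
	/* memory latency per watermark level, read out once at init
	 * instead of on every watermark update */
	uint16_t pri[WM_LEVELS_SKETCH];	/* primary planes */
	uint16_t spr[WM_LEVELS_SKETCH];	/* sprite planes */
	uint16_t cur[WM_LEVELS_SKETCH];	/* cursors */
};

static void bump_sprite_latency(struct wm_latency_sketch *wm, uint16_t extra)
{
	int i;

	/* e.g. a platform where sprite latencies need a fixed adjustment */
	for (i = 0; i < WM_LEVELS_SKETCH; i++)
		wm->spr[i] += extra;
}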
v2: Fix some checkpatch complaints
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To verbalize it, one can say, "pin an object into the given address
space." The semantics of pinning remain the same otherwise.
Certain objects will always have to be bound into the global GTT.
Therefore, the global GTT is a special case, and we keep a special
interface around for it (i915_gem_obj_ggtt_pin).
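
Roughly, pin gains an address-space argument while the global GTT keeps
a thin, explicitly named wrapper (a sketch, not the exact signatures):

struct obj_sketch;
struct vm_sketch;

int obj_pin_sketch(struct obj_sketch *obj, struct vm_sketch *vm,
		   unsigned int alignment, int map_and_fenceable);

/* Objects that must live in the global GTT keep a dedicated helper, so
 * the special case stays visible at the call sites. */
static int obj_ggtt_pin_sketch(struct obj_sketch *obj, struct vm_sketch *ggtt,
			       unsigned int alignment, int map_and_fenceable)
{
	return obj_pin_sketch(obj, ggtt, alignment, map_and_fenceable);
}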
v2: s/i915_gem_ggtt_pin/i915_gem_obj_ggtt_pin
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Earlier in the conversion sequence we attempted to quickly wedge in the
transitional interface as static inlines.
Now that we're sure these interfaces are sane, for easier debug and to
decrease code size (since many of these functions may be called quite a
bit), make them real functions.
While at it, kill off the set_color interface. We'll always have the
VMA, or easily get to it.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For calculating watermarks we want to know whether sprites are
scaled. Pass that information to update_sprite_watermarks() so that
eventually we may do some watermark pre-computing.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Some of our macros try to convert from a drm_device to a
drm_i915_private and then use the pointer inline. This is not only
cumbersome but prone to error. Replacing it with a typesafe function
should help catch those errors in future.
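
Something along these lines: a typesafe inline conversion rather than an
open-coded cast inside each macro (the struct below is a stand-in for
the drm core type):

struct drm_i915_private;
struct drm_device_sketch { void *dev_private; };

static inline struct drm_i915_private *to_i915_sketch(const struct drm_device_sketch *dev)
{
	/* one place to do the conversion, with the compiler checking types */
	return dev->dev_private;
}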
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Squash in fixup to correctly order static vs. inline
qualifiers, static comes first. Also fix up another offender.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Almost invariably the reason why FBC cannot be turned on is the same
every time (disabled via parameter, too many pipes, pipe too large etc)
since modesetting and framebuffer configuration change less frequently
than attempts to enable FBC.
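
A sketch of the resulting behaviour: remember the last reason and only
log when it changes (names and the log call are placeholders):

#include <stdio.h>

enum no_fbc_reason_sketch {
	FBC_SKETCH_OK,
	FBC_SKETCH_MODULE_PARAM,
	FBC_SKETCH_MULTIPLE_PIPES,
	FBC_SKETCH_MODE_TOO_LARGE,
};

static void note_no_fbc_reason(enum no_fbc_reason_sketch *stored,
			       enum no_fbc_reason_sketch reason,
			       const char *msg)
{
	if (*stored == reason)
		return;		/* same reason as last time: stay quiet */
	*stored = reason;
	printf("FBC unsupported: %s\n", msg);
}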
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just use a spinlock to protect them.
v2: Rebase onto the new object create refcount fix patch.
v3: Don't kill dev_priv->mm.object_memory as requested by Chris and
hence just use a spinlock instead of atomic_t.
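
The accounting then looks roughly like this, assuming the standard
kernel spinlock API (variable names are placeholders):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(object_stat_lock_sketch);
static unsigned long object_count_sketch;
static unsigned long object_memory_sketch;

static void account_object_alloc_sketch(unsigned long size)
{
	spin_lock(&object_stat_lock_sketch);
	object_count_sketch++;
	object_memory_sketch += size;
	spin_unlock(&object_stat_lock_sketch);
}

static void account_object_free_sketch(unsigned long size)
{
	spin_lock(&object_stat_lock_sketch);
	object_count_sketch--;
	object_memory_sketch -= size;
	spin_unlock(&object_stat_lock_sketch);
}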
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67287
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Detangle the confusion that NOTRACE variants of the register read/write
routines were directly using the raw register access. We need those
routines to reuse the common code for serializing register access and
ensuring the correct register power states. This is only possible now
that the only routines that required raw access use their own API.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The GT functions for enabling register access also need to occasionally
write to and read from registers. To avoid the potential recursion as we
modify the public interface to be stricter, introduce a private register
access API for the GT functions.
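
The idea in sketch form: the forcewake code talks to the hardware only
through raw accessors, never through the public read/write macros that
would try to take forcewake again (names below are placeholders):

#include <stdint.h>

uint32_t raw_mmio_read32(uint32_t reg);		/* lowest-level MMIO access */
void raw_mmio_write32(uint32_t reg, uint32_t val);

static void forcewake_get_sketch(uint32_t req_reg, uint32_t ack_reg)
{
	raw_mmio_write32(req_reg, 1);
	while ((raw_mmio_read32(ack_reg) & 1) == 0)
		;	/* the real code bounds this wait with a timeout */
}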
v2: Rebase
v3: Rebase onto uncore
v4: Use raw interfaces consistently so that we only use the low-level
readN functions from a single location.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently, the register access code is split between i915_drv.c and
intel_pm.c. It only bears a superficial resemblance to the rest of the
power management code, so move it all into its own file. This is to ease
further patches to enforce serialised register access.
v2: Scan for random abuse of I915_WRITE_NOTRACE
v3: Take the opportunity to rename the GT functions as uncore. Uncore is
the term used by the hardware design (and bspec) for all functions
outside of the GPU (and CPU) cores in what is also known as the System
Agent.
v4: Rebase onto SNB rc6 fixes
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Wrestle patch into applying and inline
intel_uncore_early_sanitize (plus move the old comment to the new
function). Also keep the _sanitize postfix for intel_uncore_sanitize.]
[danvet: Squash in fixup spotted by Chris on irc: We need to call
intel_pm_init before intel_uncore_sanitize since the latter will call
cancel_work on the delayed rps setup work the former initializes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This backmerges Linus' merge commit of the latest drm-fixes pull:
commit 549f3a1218
Merge: 42577ca058ca4a
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Tue Jul 23 15:47:08 2013 -0700
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
We've accrued a few too many conflicts, but the real reason is that I
want to merge the 100% solution for Haswell concurrent registers
writes into drm-intel-next. But that depends upon the 90% bandaid
merged into -fixes:
commit a7cd1b8fea
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Fri Jul 19 20:36:51 2013 +0100
drm/i915: Serialize almost all register access
Also, we can roll up on accrued conflicts.
Usually I'd backmerge a tagged -rc, but I want to get this done before
heading off to vacations next week ;-)
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_gem.c
v2: For added hilarity we have an init sequence conflict around the
gt_lock, so need to move that one, too. Spotted by Jani Nikula.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The regression fix for gen6+ rps fallout
commit 7dcd2677ea
Author: Konstantin Khlebnikov <khlebnikov@openvz.org>
Date: Wed Jul 17 10:22:58 2013 +0400
drm/i915: fix long-standing SNB regression in power consumption after resume
unintentionally also changed the init sequence ordering between
gt_init and gt_reset - we need to reset BIOS damage like leftover
forcewake references before we run our own code. Otherwise we can get
nasty dmesg noise like
[drm:__gen6_gt_force_wake_mt_get] *ERROR* Timed out waiting for forcewake old ack to clear.
again. Since _reset suggests that we first need to have stuff
initialized (which isn't the case here) call it sanitize instead.
While at it also block out the rps disable introduced by the above
commit on ilk: We don't have any knowledge of ilk rps being broken in
similar ways. And the disable function uses the default hw state
which is only read out when we're enabling rps. So essentially we've
been writing random garbage into that register.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: stable@vger.kernel.org
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=47941
BugLink: https://bugs.launchpad.net/bugs/1163720
BugLink: https://bugs.launchpad.net/bugs/1162026
Some machines suffer from non-functional backlight controls if
BLM_PCH_PWM_ENABLE is set, so provide a quirk to avoid doing so.
Apply this quirk to Dell XPS 13 models.
Tested-by: Eric Griffith <EGriffith92@gmail.com>
Tested-by: Kent Baxley <kent.baxley@canonical.com>
Cc: <stable@vger.kernel.org> # v3.8+
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Prefault is still enabled by default, which prevents most pwrite/pread/reloc
calls from hitting the slow path. In order to exercise these slow paths,
prefault needs to be disabled.
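
A module option of that shape might look as follows, assuming the usual
moduleparam helpers (the exact parameter name and permissions in the
driver may differ):

#include <linux/module.h>

static bool prefault_disable_sketch;
module_param_named(prefault_disable, prefault_disable_sketch, bool, 0600);
MODULE_PARM_DESC(prefault_disable,
		 "Disable page prefaulting for pread/pwrite/reloc "
		 "(default: false). Only useful for testing the slow paths.");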
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
[danvet: Make checkpatch happy and bikeshed the module option help
text a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v3.10' into drm-intel-fixes
Backmerge Linux 3.10 to get at
commit 19b2dbde57
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Jun 12 10:15:12 2013 +0100
drm/i915: Restore fences after resume and GPU resets
That commit is not in my current -fixes pile since that's based on my
-next queue for 3.11. And the above mentioned fix was merged really
late into 3.10 (and blew up, bad me) so was on a diverging branch.
Option B would have been to rebase my current pile of fixes onto
Dave's drm-fixes branch. But since some of the patches here are a bit
tricky I've decided not to void all the testing by moving over the
entire merge window.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: prefer seq_puts to seq_printf, as detected by Paulo Zanoni.
v3: PSR is disabled by default. Without userspace being ready it
would cause regressions for kde and xdm users.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Shobhit Kumar <shobhit.kumar@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Prefer seq_puts to seq_printf by Paulo Zanoni.
v3: small changes like avoiding calling dp_to_dig_port twice as noticed by
Paulo Zanoni.
v4: Avoiding reading non-existent registers - noticed by Paulo
on first psr debugfs patch.
v5: Accepting more suggestions from Paulo:
* check sw interlace flag instead of i915_read
* introduce PSR_S3D_ENABLED to avoid forgetting it whenever it is added.
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Shobhit Kumar <shobhit.kumar@intel.com>
[danvet: Fix up debugfs output (spotted by Paulo) and rip out the
power well check since we really can't do that in a race-free manner,
so it's bogus.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Formerly: "drm/i915: Create VMAs (part 1)"
In a previous patch, the notion of a VM was introduced. A VMA describes
an area of part of the VM address space. A VMA is similar to the concept
in the linux mm. However, instead of representing regular memory, a VMA
is backed by a GEM BO. There may be many VMAs for a given object, one
for each VM the object is to be used in. This may occur through flink,
dma-buf, or a number of other transient states.
Currently the code depends on only 1 VMA per object, for the global GTT
(and aliasing PPGTT). The following patches will address this and make
the rest of the infrastructure more suited to multiple address spaces.
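
In structure form the relationship reads roughly like this (stand-in
names; the real struct carries much more state):

#include <linux/list.h>		/* sketch assumes the kernel list helpers */

struct i915_gem_object_sketch;
struct i915_address_space_sketch;

struct drm_mm_node_sketch { unsigned long start, size; };

/* One VMA per (object, address space) pair. */
struct i915_vma_sketch {
	struct drm_mm_node_sketch node;		/* where it sits in the VM */
	struct i915_gem_object_sketch *obj;	/* the backing GEM BO */
	struct i915_address_space_sketch *vm;	/* the VM it is bound into */
	struct list_head vma_link;		/* link in obj->vma_list */
};

struct i915_gem_object_sketch {
	struct list_head vma_list;	/* all VMAs of this object, one per VM */
};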
v2: s/i915_obj/i915_gem_obj (Chris)
v3: Only move an object to the now global unbound list if there are no
more VMAs for the object which are bound into a VM (ie. the list is
empty).
v4: killed obj->gtt_space
some reworks due to rebase
v5: Free vma on error path (Imre)
v6: Another missed vma free in i915_gem_object_bind_to_gtt error path
(Imre)
Fixed vma freeing in stolen preallocation (Imre)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
[danvet: Squash in fixup from Ben to not deref a non-existing vma in
set_cache_level, reported by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Shamelessly manipulated out of Daniel :-)
"When moving the lists around explain that the active/inactive stuff is
used by eviction when we run out of address space, so needs to be
per-vma and per-address space. Bound/unbound otoh is used by the
shrinker which only cares about the amount of memory used and not one
bit about in which address space this memory is all used in. Of course
to actually kick out an object we need to unbind it from every address
space, but for that we have the per-object list of vmas."
v2: Leave the bound list as a global one. (Chris, indirectly)
v3: Rebased with no i915_gtt_vm. In most places I added a new *vm local,
since it will eventually be replaced by a vm argument.
Put comment back inline, since it no longer makes sense to do otherwise.
v4: Rebased on hangcheck/error state movement
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
After we plumb our code to support multiple address spaces (VMs), there
are a few situations where we want to be able to traverse the list of
all address spaces in the system. Cases like eviction or error state
collection are obvious examples.
v2: Delete the global link instead of the list head. While this in and
of itself shouldn't really be a problem, doing this allows us to WARN
on a non-empty list, which is a problem. (Daniel)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Every address space should support object allocation. It therefore makes
sense to have the allocator be part of the "superclass" from which GGTT
and PPGTT will derive.
Since our maximum address space size is only 2GB we're not yet able to
avoid doing allocation/eviction; but we'd hope one day this becomes
almost irrelevant.
v2: Rebased
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The GTT and PPGTT can be thought of more generally as GPU address
spaces. Many of their actions (insert entries), state (LRU lists), and
many of their characteristics (size) can be shared. Do that.
The change itself doesn't actually impact most of the VMA/VM rework
coming up; it just fits in with the grand scheme of abstracting the GPU
VM operations. GGTT will usually be a special case where we know
an object must be in the GGTT (display engine, workarounds, etc.).
The scratch page is left as part of the VM (even though it's currently
shared with the ppgtt code) because in the future when we have Full
PPGTT, I intend to create a separate scratch page for each.
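
Abstractly the shared "superclass" looks something like this (a sketch:
field and type names are illustrative, not the exact code):

#include <linux/list.h>		/* sketch assumes the kernel list helpers */

struct page;

struct i915_address_space_sketch {
	unsigned long start;		/* base offset of this VM */
	unsigned long total;		/* size of this VM */
	struct list_head active_list;	/* per-VM LRU lists */
	struct list_head inactive_list;
	struct page *scratch_page;	/* currently still shared with the ppgtt */
	void (*cleanup)(struct i915_address_space_sketch *vm);
};

struct i915_gtt_sketch {
	struct i915_address_space_sketch base;	/* the GGTT "derives" from it */
	unsigned long mappable_end;
};

struct i915_hw_ppgtt_sketch {
	struct i915_address_space_sketch base;	/* and so does the PPGTT */
	unsigned int num_pd_entries;
};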
v2: Drop usage of i915_gtt_vm (Daniel)
Make cleanup also part of the parent class (Ben)
Modified commit msg
Rebased
v3: Properly share scratch page (Imre)
Finish commit message (Daniel, Imre)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To run hangcheck in the near future.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The default context is always supported (as it contains the global
hangcheck stats) and the contexts for hangcheck are not limited
to any ring.
References: https://bugs.freedesktop.org/show_bug.cgi?id=65845
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The eLLC cannot be determined by PCIID because as far as we know, even
machines supporting eLLC may not have it enabled, or fused off or
whatever. It's possible this isn't actually true, and at that point we
can switch to a DEV_INFO flag instead.
I've defined everything where the docs are clear, and left the rest as
magic.
But we need it before we set the pte_encode function pointers, which
happens really early, in gtt_init.
The problem with just doing the normal sequence earlier is we don't have
the ability to use forcewake until after the pte functions have been set
up.
Since all solutions are somewhat ugly (barring rewriting all the init
ordering), I've opted to do the detection really early, and the enabling
later - since the register to detect doesn't require forcewake.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Move dev_priv->ellc_size away from the dri1 dungeon to a nice
place right next to the l3 parity stuff. Also squash in the follow-up
commit to read out the eLLC size a bit earlier.]
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Move error state generation and stringification to its
own compilation unit. Sysfs also uses this, so it can't be
under CONFIG_DEBUG_FS.
This fixes a regression introduced in
commit ef86ddced7
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Thu Jun 6 17:38:54 2013 +0300
drm/i915: add error_state sysfs entry
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66814
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that the rps interrupt locking isn't clearly separated (at least
conceptually) from all the other interrupt locking, having a different
lock stopped making sense: It protects much more than just the rps
workqueue it started out with. But with the addition of VECS the
separation started to blur and resulted in some more complex locking
for the ring interrupt refcount.
With this we can (again) unify the ringbuffer irq refcounts without
causing a massive confusion, but that's for the next patch.
v2: Explain better why the rps.lock once made sense and why it no longer
does, as requested by Ben.
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In kernel modeset driver mode we're in full control of the chip,
always. So there's no need at all to set mm.suspended in
i915_gem_idle. Hence move that out into the leavevt ioctl. Since
i915_gem_idle doesn't suspend gem any more we can also drop the
re-enabling for KMS in the thaw function.
Also clean up the handling of mm.suspend at driver load by coalescing
all the assignments.
Stumbled over while reading through our resume code for unrelated
reasons.
v2: Shovel mm.suspended into the (newly created) ums dungeon as
suggested by Chris Wilson. The plan is that once we've completely
stopped relying on the register save/restore code we could shovel even
that in there.
v3: Improve the locking for the entervt/leavevt ioctls a bit by moving
the dev->struct_mutex locking outside of i915_gem_idle. Also don't
clear dev_priv->ums.mm_suspended for the kms case, we allocate it with
kzalloc. Both suggested by Chris Wilson.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Embedding the node in the obj is more natural in the transition to VMAs
which will also have embedded nodes. This change also helps transition
away from put_block to remove_node.
Though it's quite an uncommon occurrence, it's somewhat convenient to not
fail at bind time because we cannot allocate the node. Though in
practice there are other allocations (like the request structure) which
would probably make this point not terribly useful.
Quoting Daniel:
Note that the only difference between put_block and remove_node is
that the former fills up the preallocation cache. Which we don't need
anyway and hence is just wasted space.
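
The structural change in sketch form (field names illustrative): with
the node embedded, binding can no longer fail on that small allocation
and teardown is a plain remove instead of put_block.

struct drm_mm_node_sketch { unsigned long start, size; int allocated; };

/* Before: a separately allocated node, freed via put_block (which also
 * refills a preallocation cache we don't need). */
struct gem_object_before_sketch { struct drm_mm_node_sketch *gtt_space; };

/* After: the node lives inside the object itself. */
struct gem_object_after_sketch { struct drm_mm_node_sketch gtt_node; };

static void unbind_sketch(struct gem_object_after_sketch *obj)
{
	/* real code: drm_mm_remove_node(&obj->gtt_node); nothing to kfree */
	obj->gtt_node.allocated = 0;
}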
v2: Clean up the stolen preallocation code.
Rebased on the reserve_node patches
renames ggtt_ stuff to gtt_ stuff
WARN_ON if the object is already bound (which doesn't mean it's in the
bound list, tricky)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the getters in place from the previous patch this member serves no
purpose other than saving one spare pointer chase, which will be killed
in the next patch anyway.
Moving to VMAs, this member adds unnecessary confusion since an object
may exist at different offsets in different VMs.
v2: Properly preserve the stolen offset. This code is a bit hacky but it
all goes away when we embed the drm_mm_node and removes the need for the
incorrect patch I submitted previously: "Use gtt_space->start for stolen
reservation"
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Soon we want to gut a lot of our existing assumptions about how many
address spaces an object can live in, and in doing so, embed the
drm_mm_node in the object (and later the VMA).
It's possible in the future we'll want to add more getter/setter
methods, but for now this is enough to enable the VMAs.
v2: Reworked commit message (Ben)
Added comments to the main functions (Ben)
sed -i "s/i915_gem_obj_set_color/i915_gem_obj_ggtt_set_color/" drivers/gpu/drm/i915/*.[ch]
sed -i "s/i915_gem_obj_bound/i915_gem_obj_ggtt_bound/" drivers/gpu/drm/i915/*.[ch]
sed -i "s/i915_gem_obj_size/i915_gem_obj_ggtt_size/" drivers/gpu/drm/i915/*.[ch]
sed -i "s/i915_gem_obj_offset/i915_gem_obj_ggtt_offset/" drivers/gpu/drm/i915/*.[ch]
(Daniel)
v3: Rebased on new reserve_node patch
Changed DRM_DEBUG_KMS to actually work (will need fixing later)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A magic -1 is obscure, especially since it's actually passed as an
unsigned, so depends upon the magic sign extension rules in C. This has
been added in
commit 3727d55e4d
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Wed May 8 10:45:14 2013 -0700
drm/i915: allow stolen, pre-allocated objects to avoid GTT allocation v2
Use a proper #define instead. Spotted while reviewing Ben's
drm_mm_create_block changes.
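
In other words, something along these lines, with the cast making the
intended type explicit (the macro name here is illustrative):

#include <stdint.h>

/* Name the sentinel instead of passing a bare -1 through an unsigned
 * parameter and relying on sign extension to do the right thing. */
#define GTT_OFFSET_NONE_SKETCH	((uint32_t)-1)

static int has_preallocated_offset(uint32_t gtt_offset)
{
	return gtt_offset != GTT_OFFSET_NONE_SKETCH;
}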
v2: Cast the constant to u32 since otherwise we again have a type
mismatch. Suggested by Chris Wilson.
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>