|
PLANE_CTL_FORMAT_AYUV is already supported, according to the hardware
specification.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Bob Paauwe <bob.j.paauwe@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200407215546.5445-2-bob.j.paauwe@intel.com
|
|
If we use a non-forcewaked write to PMINTRMSK, it does not take effect
until much later, if at all, causing a loss of RPS interrupts and no GPU
reclocking, leaving the GPU running at the wrong frequency for long
periods of time.
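A minimal sketch of the distinction, using the i915 uncore helpers (the
actual call site in the RPS code is illustrative):

    /* non-forcewaked write: may not be latched by the hardware promptly */
    intel_uncore_write_fw(gt->uncore, GEN6_PMINTRMSK, mask);

    /* forcewaked write: takes the forcewake, so the new mask applies immediately */
    intel_uncore_write(gt->uncore, GEN6_PMINTRMSK, mask);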
Reported-by: Francisco Jerez <currojerez@riseup.net>
Suggested-by: Francisco Jerez <currojerez@riseup.net>
Fixes: 35cc7f32c298 ("drm/i915/gt: Use non-forcewake writes for RPS")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Francisco Jerez <currojerez@riseup.net>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Cc: <stable@vger.kernel.org> # v5.6+
Link: https://patchwork.freedesktop.org/patch/msgid/20200415170318.16771-2-chris@chris-wilson.co.uk
|
|
Since we depend upon RPS generating interrupts after evaluation
intervals to determine when to up/down clock the GPU, it is imperative
that we successfully enable interrupt generation! Verify that we do see
an interrupt if we keep the GPU busy for an entire EI.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200415170318.16771-1-chris@chris-wilson.co.uk
|
|
Even though the bspec is missing gen12 register details for the MCR
selector register (0xFDC), this is confirmed by hardware folks to be a
mistake; the register does exist and we do indeed need to steer
multicast register reads to an appropriate instance the same as we did
on gen11.
Note that despite the lack of documentation we were still using the MCR
selector to read INSTDONE and such in read_subslice_reg() too.
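A hedged sketch of the steering sequence (register and helper names from
i915; the slice/subslice encoding here is illustrative):

    /* steer the multicast read to a specific slice/subslice instance */
    intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, mcr_slice_subslice);
    val = intel_uncore_read_fw(uncore, reg);
    /* restore the previous steering */
    intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, mcr_saved);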
Cc: Matt Atwood <matthew.s.atwood@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200414211118.2787489-4-matthew.d.roper@intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
|
|
Cc: Matt Atwood <matthew.s.atwood@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200414211118.2787489-2-matthew.d.roper@intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
|
|
Media decompression support should not be advertised on any display
planes for steppings A0-C0.
Bspec: 53273
Fixes: 2dfbf9d2873a ("drm/i915/tgl: Gen-12 display can decompress surfaces compressed by the media engine")
Cc: Matt Atwood <matthew.s.atwood@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200414211118.2787489-3-matthew.d.roper@intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
|
|
Reflect recent bspec changes.
Bspec: 33451
Signed-off-by: Matt Atwood <matthew.s.atwood@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200413175322.12162-1-matthew.s.atwood@intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
|
|
We need to start passing memory latency as a parameter when
calculating plane wm levels, as the latency can change under different
circumstances (for example, with or without SAGV), so we need to be
more flexible on that matter.
v2: Changed latency type from u32 to unsigned int (Ville Syrjälä)
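A hypothetical sketch of the direction (function and field names are
illustrative, not the exact i915 code):

    /* latency is looked up (and possibly adjusted, e.g. for SAGV) by the
     * caller and passed in explicitly, instead of being derived inside
     * the wm level computation */
    unsigned int latency = dev_priv->wm.skl_latency[level];
    skl_compute_plane_wm(crtc_state, level, latency, &wm_params, &wm_level);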
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200409154730.18568-2-stanislav.lisovskiy@intel.com
|
|
Replace the TGL/ICL specific platform checks with a more generic check
using INTEL_GEN(). This fixes a bug with broken audio after S3 resume
on JSL platforms.
An initial version of state save and restore of the AUD_FREQ_CNTRL
register was added for a subset of platforms in commit 87c1694533c9
("drm/i915: save AUD_FREQ_CNTRL state at audio domain suspend"). The
state save has proven to work well and is needed on newer platforms,
so it needs to be extended. Although the logic is not in practice
needed on GEN9/10 systems, follow the hardware specification and apply
the state save and restore on all gen9+ platforms.
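A sketch of the shape of the change (diff illustrative; the save and
restore helpers themselves are unchanged):

    -	if (IS_TIGERLAKE(dev_priv) || IS_ICELAKE(dev_priv))
    +	if (INTEL_GEN(dev_priv) >= 9)
    		/* save AUD_FREQ_CNTRL here, restore it on power-up */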
Bspec: 49281
Link: https://github.com/thesofproject/linux/issues/1719
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200330144421.11632-1-kai.vehmanen@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
|
|
After changing the timing between GTT updates and execution on the GPU,
we started seeing sporadic failures on Ironlake. These were narrowed
down to an insufficiently strong barrier/delay between
updating the GTT and scheduling execution on the GPU. By forcing the
uncached read, and adding the missing barrier for the singular
insert_page (relocation paths), the sporadic failures go away.
Fixes: 983d308cb8f6 ("agp/intel: Serialise after GTT updates")
Fixes: 3497971a71d8 ("agp/intel: Flush chipset writes after updating a single PTE")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Andi Shyti <andi.shyti@intel.com>
Cc: stable@vger.kernel.org # v4.0+
Link: https://patchwork.freedesktop.org/patch/msgid/20200410083535.25464-1-chris@chris-wilson.co.uk
|
|
With timeslice yielding on a semaphore, we may complete timeslices much
faster than we were expecting and may already have yielded the stuck
request. Before complaining that timeslicing is not enabled, check that
we haven't already applied the switch.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200410081638.19893-1-chris@chris-wilson.co.uk
|
|
The variable err is being initialized with a value that is never read
and it is being updated later with a new value. The initialization is
redundant and can be removed.
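The pattern being removed, in a hypothetical illustration:

    int err = 0;		/* redundant: this value is never read ... */

    err = do_setup();	/* ... because it is unconditionally overwritten here */

After the change the variable is simply left uninitialised until the
first real assignment.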
Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200409133107.415812-1-colin.king@canonical.com
|
|
In an address space there can be sprinkling of I915_COLOR_UNEVICTABLE
nodes, which lack a parent vma. For platforms with cache coloring we
might be very unlucky and abut with such a node thinking we can simply
unbind the vma.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200408170456.399604-1-matthew.auld@intel.com
|
|
Since we are peeking into the batch object of the request, it is
incumbent on us to hold a reference to it.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1634
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200408091723.28937-1-chris@chris-wilson.co.uk
|
|
We control b->irq_enabled inside the b->irq_lock, but we check before
entering the spinlock whether or not the interrupt is currently
unmasked.
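A minimal sketch of annotating such a lockless peek (field names taken
from the report below; the exact fix may differ):

    /* racy peek outside b->irq_lock, marked so KCSAN knows it is intentional */
    if (!READ_ONCE(b->irq_enabled))
    	return;

    spin_lock_irq(&b->irq_lock);
    /* re-check and disarm under the lock */
    spin_unlock_irq(&b->irq_lock);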
[ 1511.735208] BUG: KCSAN: data-race in __intel_breadcrumbs_disarm_irq [i915] / intel_engine_disarm_breadcrumbs [i915]
[ 1511.735231]
[ 1511.735242] write to 0xffff8881f75fc214 of 1 bytes by interrupt on cpu 2:
[ 1511.735440] __intel_breadcrumbs_disarm_irq+0x4b/0x160 [i915]
[ 1511.735635] signal_irq_work+0x337/0x710 [i915]
[ 1511.735652] irq_work_run_list+0xd7/0x110
[ 1511.735666] irq_work_run+0x1d/0x50
[ 1511.735681] smp_irq_work_interrupt+0x21/0x30
[ 1511.735701] irq_work_interrupt+0xf/0x20
[ 1511.735722] __do_softirq+0x6f/0x206
[ 1511.735736] irq_exit+0xcd/0xe0
[ 1511.735756] do_IRQ+0x44/0xc0
[ 1511.735773] ret_from_intr+0x0/0x1c
[ 1511.735787] schedule+0x0/0xb0
[ 1511.735803] worker_thread+0x194/0x670
[ 1511.735823] kthread+0x19a/0x1e0
[ 1511.735837] ret_from_fork+0x1f/0x30
[ 1511.735848]
[ 1511.735867] read to 0xffff8881f75fc214 of 1 bytes by task 432 on cpu 1:
[ 1511.736068] intel_engine_disarm_breadcrumbs+0x22/0x80 [i915]
[ 1511.736263] __engine_park+0x107/0x5d0 [i915]
[ 1511.736453] ____intel_wakeref_put_last+0x44/0x90 [i915]
[ 1511.736648] __intel_wakeref_put_last+0x5a/0x70 [i915]
[ 1511.736842] intel_context_exit_engine+0xf2/0x100 [i915]
[ 1511.737044] i915_request_retire+0x6b2/0x770 [i915]
[ 1511.737244] retire_requests+0x7a/0xd0 [i915]
[ 1511.737438] intel_gt_retire_requests_timeout+0x3a7/0x6f0 [i915]
[ 1511.737633] i915_drop_caches_set+0x1e7/0x260 [i915]
[ 1511.737650] simple_attr_write+0xfa/0x110
[ 1511.737665] full_proxy_write+0x94/0xc0
[ 1511.737679] __vfs_write+0x4b/0x90
[ 1511.737697] vfs_write+0xfc/0x280
[ 1511.737718] ksys_write+0x78/0x100
[ 1511.737732] __x64_sys_write+0x44/0x60
[ 1511.737751] do_syscall_64+0x6e/0x2c0
[ 1511.737769] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200408092916.5355-1-chris@chris-wilson.co.uk
|
|
The intel_ring.head is updated as the requests are retired, but is
sampled at any time as we submit requests. Furthermore, it tracks
RING_HEAD which is inherently asynchronous.
[ 148.630314] BUG: KCSAN: data-race in execlists_dequeue [i915] / i915_request_retire [i915]
[ 148.630349]
[ 148.630374] write to 0xffff8881f4e28ddc of 4 bytes by task 90 on cpu 2:
[ 148.630752] i915_request_retire+0xed/0x770 [i915]
[ 148.631123] retire_requests+0x7a/0xd0 [i915]
[ 148.631491] engine_retire+0xa6/0xe0 [i915]
[ 148.631523] process_one_work+0x3af/0x640
[ 148.631552] worker_thread+0x80/0x670
[ 148.631581] kthread+0x19a/0x1e0
[ 148.631609] ret_from_fork+0x1f/0x30
[ 148.631629]
[ 148.631652] read to 0xffff8881f4e28ddc of 4 bytes by task 14288 on cpu 3:
[ 148.632019] execlists_dequeue+0x1300/0x1680 [i915]
[ 148.632384] __execlists_submission_tasklet+0x48/0x60 [i915]
[ 148.632770] execlists_submit_request+0x38e/0x3c0 [i915]
[ 148.633146] submit_notify+0x8f/0xc0 [i915]
[ 148.633512] __i915_sw_fence_complete+0x5d/0x3e0 [i915]
[ 148.633875] i915_sw_fence_complete+0x58/0x80 [i915]
[ 148.634238] i915_sw_fence_commit+0x16/0x20 [i915]
[ 148.634613] __i915_request_queue+0x60/0x70 [i915]
[ 148.634985] i915_gem_do_execbuffer+0x2de0/0x42b0 [i915]
[ 148.635366] i915_gem_execbuffer2_ioctl+0x2ab/0x580 [i915]
[ 148.635400] drm_ioctl_kernel+0xe9/0x130
[ 148.635429] drm_ioctl+0x27d/0x45e
[ 148.635456] ksys_ioctl+0x89/0xb0
[ 148.635482] __x64_sys_ioctl+0x42/0x60
[ 148.635510] do_syscall_64+0x6e/0x2c0
[ 148.635542] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 645.071436] BUG: KCSAN: data-race in gen8_emit_fini_breadcrumb [i915] / i915_request_retire [i915]
[ 645.071456]
[ 645.071467] write to 0xffff8881efe403dc of 4 bytes by task 14668 on cpu 3:
[ 645.071647] i915_request_retire+0xed/0x770 [i915]
[ 645.071824] i915_request_create+0x6c/0x160 [i915]
[ 645.072000] i915_gem_do_execbuffer+0x206d/0x42b0 [i915]
[ 645.072177] i915_gem_execbuffer2_ioctl+0x2ab/0x580 [i915]
[ 645.072194] drm_ioctl_kernel+0xe9/0x130
[ 645.072208] drm_ioctl+0x27d/0x45e
[ 645.072222] ksys_ioctl+0x89/0xb0
[ 645.072235] __x64_sys_ioctl+0x42/0x60
[ 645.072248] do_syscall_64+0x6e/0x2c0
[ 645.072263] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 645.072275]
[ 645.072285] read to 0xffff8881efe403dc of 4 bytes by interrupt on cpu 2:
[ 645.072458] gen8_emit_fini_breadcrumb+0x158/0x300 [i915]
[ 645.072636] __i915_request_submit+0x204/0x430 [i915]
[ 645.072809] execlists_dequeue+0x8e1/0x1680 [i915]
[ 645.072982] __execlists_submission_tasklet+0x48/0x60 [i915]
[ 645.073154] execlists_submit_request+0x38e/0x3c0 [i915]
[ 645.073330] submit_notify+0x8f/0xc0 [i915]
[ 645.073499] __i915_sw_fence_complete+0x5d/0x3e0 [i915]
[ 645.073668] i915_sw_fence_wake+0xc2/0x130 [i915]
[ 645.073836] __i915_sw_fence_complete+0x2cf/0x3e0 [i915]
[ 645.074006] i915_sw_fence_complete+0x58/0x80 [i915]
[ 645.074175] dma_i915_sw_fence_wake+0x3e/0x80 [i915]
[ 645.074344] signal_irq_work+0x62f/0x710 [i915]
[ 645.074360] irq_work_run_list+0xd7/0x110
[ 645.074373] irq_work_run+0x1d/0x50
[ 645.074386] smp_irq_work_interrupt+0x21/0x30
[ 645.074400] irq_work_interrupt+0xf/0x20
[ 645.074414] _raw_spin_unlock_irqrestore+0x34/0x40
[ 645.074585] execlists_submission_tasklet+0xde/0x170 [i915]
[ 645.074602] tasklet_action_common.isra.0+0x42/0x90
[ 645.074617] __do_softirq+0xc8/0x206
[ 645.074629] irq_exit+0xcd/0xe0
[ 645.074642] do_IRQ+0x44/0xc0
[ 645.074654] ret_from_intr+0x0/0x1c
[ 645.074667] finish_task_switch+0x73/0x230
[ 645.074679] __schedule+0x1c5/0x4c0
[ 645.074691] schedule+0x45/0xb0
[ 645.074704] worker_thread+0x194/0x670
[ 645.074716] kthread+0x19a/0x1e0
[ 645.074729] ret_from_fork+0x1f/0x30
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200407221832.15465-1-chris@chris-wilson.co.uk
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
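For example (the call site is hypothetical), the conversion looks like:

    -	dev_err(dev_priv->drm.dev, "something failed\n");
    +	drm_err(&dev_priv->drm, "something failed\n");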
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-17-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-16-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-15-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-14-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-13-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-12-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-11-jani.nikula@intel.com
|
|
Prefer struct drm_device based logging over struct device based logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-10-jani.nikula@intel.com
|
|
Convert all the pr_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
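For example (the call site is hypothetical):

    -	pr_notice("subsystem initialised\n");
    +	drm_notice(&i915->drm, "subsystem initialised\n");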
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-9-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-8-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-7-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-6-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-5-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-3-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-2-jani.nikula@intel.com
|
|
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-1-jani.nikula@intel.com
|
|
Since the semaphore interrupt may cause us to yield the timeslice
immediately, we may cancel the timer before we notice the submission is
complete. The assertion is no longer valid due to the race with the
interrupt.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200407222625.15542-1-chris@chris-wilson.co.uk
|
|
If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the
user batch or in our own preamble, the engine raises a
GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so
respond to a semaphore wait by yielding the timeslice, if we have
another context to yield to!
The only real complication is that the interrupt is only generated for
the start of the semaphore wait, and is asynchronous to our
process_csb() -- that is, we may not have registered the timeslice before
we see the interrupt. To ensure we don't miss a potential semaphore
blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark
the interrupt and apply it to the next timeslice regardless of whether it
was active at the time.
v2: We use semaphores in preempt-to-busy, within the timeslicing
implementation itself! Ergo, when we do insert a preemption due to an
expired timeslice, the new context may start with the missed semaphore
flagged by the retired context and be yielded, ad infinitum. To avoid
this, read the context id at the time of the semaphore interrupt and
only yield if that context is still active.
Fixes: 8ee36e048c98 ("drm/i915/execlists: Minimalistic timeslicing")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk
|
|
Tidy the code by casting remain to unsigned long once for the duration
of eb_relocate_vma().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200407085930.19421-1-chris@chris-wilson.co.uk
|
|
If we want to percolate information back from the HW, up through the GEM
context, we need to wait until the intel_context is scheduled out for
the last time. This is handled by the retirement of the intel_context's
barrier, i.e. by listening to the pulse after the notional unpin. So
wait until the intel_context is finally retired before releasing the
engine, so that we can inspect the final context state and pass it on.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200406155840.1728-3-chris@chris-wilson.co.uk
|
|
Allow the caller to also wait upon the barriers stored in i915_active.
v2: Hook up i915_request_await_active(I915_ACTIVE_AWAIT_BARRIER) as well
for completeness, and avoid the lazy GEM_BUG_ON()!
v3: Pull flush_lazy_signals() under the active-ref protection as it too
walks the rbtree and so we must be careful that we do not free it as we
iterate.
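A hedged usage sketch of the new flag (the await call and flag name are
taken from the text above; the surrounding context is illustrative):

    /* also wait upon the barriers tracked by the i915_active */
    err = i915_request_await_active(rq, &ce->active,
    				    I915_ACTIVE_AWAIT_BARRIER);
    if (err)
    	return err;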
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200406155840.1728-2-chris@chris-wilson.co.uk
|
|
Later use will require asynchronous waits on the active timelines, but
will not utilize an async wait on the exclusive channel. Make the await
on the exclusive fence explicit in the selection flags.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200406155840.1728-1-chris@chris-wilson.co.uk
|
|
If we set the debug flag to force ourselves not to relocate via the gpu,
do not relocate via the gpu.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200406123616.7334-1-chris@chris-wilson.co.uk
|
|
__i915_gem_object_flush_map() takes a byte range, so feed it the written
bytes and do not mistake the u32 index for bytes!
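A sketch of the shape of the fix (the counter name is illustrative; the
helper takes an offset and a length in bytes):

    /* i counts the u32 dwords written into the map, so convert to bytes */
    __i915_gem_object_flush_map(obj, 0, i * sizeof(u32));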
Fixes: a679f58d0510 ("drm/i915: Flush pages on acquisition")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: <stable@vger.kernel.org> # v5.2+
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200406114821.10949-1-chris@chris-wilson.co.uk
|
|
On TypeC ports, if a sink deasserts/reasserts its HPD signal, generating
a hotplug interrupt without the sink getting unplugged/replugged from
the connector, there can be a delay of up to 3 seconds until the AUX
channel becomes functional. To avoid the detection failures this delay
causes, retry the detection for 5 seconds.
I noticed this on ICL/TGL RVPs and a DELL XPS 13 7390 ICL laptop.
References: https://gitlab.freedesktop.org/drm/intel/issues/1067
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200330095425.29113-2-imre.deak@intel.com
|
|
On TypeC connectors we need to retry the detection after hotplug events
for a longer time, so add a retry counter to support this. The next
patch will add detection retries on TypeC ports needing this.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200330095425.29113-1-imre.deak@intel.com
|
|
While it is extremely unlikely to be populated, the per-engine request
pool on the virtual engine may hold a request, which we should free
along with the virtual engine.
Fixes: 43acd6516ca9 ("drm/i915: Keep a per-engine request pool")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403203303.10903-1-chris@chris-wilson.co.uk
|
|
If we submit, we do not start timeslicing until we process the CS event
that marks the start of the context running on HW. So in the selftest,
be sure to wait until we have processed the pending events before
asserting that timeslicing has begun.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403190209.21818-1-chris@chris-wilson.co.uk
|
|
Do an early rejection of a i915_vma_unbind() attempt if the i915_vma is
currently pinned, without waiting to see if the inflight operations may
unpin it. We see this problem with the shrinker trying to unbind the
active vma from inside its bind worker:
<6> [472.618968] Workqueue: events_unbound fence_work [i915]
<4> [472.618970] Call Trace:
<4> [472.618974] ? __schedule+0x2e5/0x810
<4> [472.618978] schedule+0x37/0xe0
<4> [472.618982] schedule_preempt_disabled+0xf/0x20
<4> [472.618984] __mutex_lock+0x281/0x9c0
<4> [472.618987] ? mark_held_locks+0x49/0x70
<4> [472.618989] ? _raw_spin_unlock_irqrestore+0x47/0x60
<4> [472.619038] ? i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619084] ? i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619122] i915_vma_unbind+0xae/0x110 [i915]
<4> [472.619165] i915_gem_object_unbind+0x1dc/0x400 [i915]
<4> [472.619208] i915_gem_shrink+0x328/0x660 [i915]
<4> [472.619250] ? i915_gem_shrink_all+0x38/0x60 [i915]
<4> [472.619282] i915_gem_shrink_all+0x38/0x60 [i915]
<4> [472.619325] vm_alloc_page.constprop.25+0x1aa/0x240 [i915]
<4> [472.619330] ? rcu_read_lock_sched_held+0x4d/0x80
<4> [472.619363] ? __alloc_pd+0xb/0x30 [i915]
<4> [472.619366] ? module_assert_mutex_or_preempt+0xf/0x30
<4> [472.619368] ? __module_address+0x23/0xe0
<4> [472.619371] ? is_module_address+0x26/0x40
<4> [472.619374] ? static_obj+0x34/0x50
<4> [472.619376] ? lockdep_init_map+0x4d/0x1e0
<4> [472.619407] setup_page_dma+0xd/0x90 [i915]
<4> [472.619437] alloc_pd+0x29/0x50 [i915]
<4> [472.619470] __gen8_ppgtt_alloc+0x443/0x6b0 [i915]
<4> [472.619503] gen8_ppgtt_alloc+0xd7/0x300 [i915]
<4> [472.619535] ppgtt_bind_vma+0x2a/0xe0 [i915]
<4> [472.619577] __vma_bind+0x26/0x40 [i915]
<4> [472.619611] fence_work+0x1c/0x90 [i915]
<4> [472.619617] process_one_work+0x26a/0x620
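A minimal sketch of the early rejection (the error code and exact
placement are illustrative):

    /* bail out immediately rather than waiting for inflight operations
     * (which may include the bind we are currently running) to unpin */
    if (i915_vma_is_pinned(vma))
    	return -EAGAIN;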
Fixes: 2850748ef876 ("drm/i915: Pull i915_vma_pin under the vm->mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403120150.17091-1-chris@chris-wilson.co.uk
|
|
It is wrong to block the user thread in the next poll when OA data
that could not fit in the user buffer provided in the previous read is
already available. In several cases the exact user buffer size is not
known. Blocking user space in poll can lead to data loss when the
buffer size used is smaller than the available data.
This change fixes this issue and allows user space to read all OA data
even when using a buffer size smaller than the available data using
multiple non-blocking reads rather than staying blocked in poll till
the next timer interrupt.
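From user space the intent is that a small buffer can simply be drained
with repeated non-blocking reads, e.g. (hedged sketch; perf stream setup
omitted, consume() is a hypothetical callback):

    char buf[4096];
    ssize_t n;

    /* fd: an i915 perf stream opened non-blocking */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
    	consume(buf, n);
    /* n < 0 && errno == EAGAIN: no more OA data available right now */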
v2: Fix ret value for blocking reads (Umesh)
v3: Mistake during patch send (Ashutosh)
v4: Remove -EAGAIN from comment (Umesh)
v5: Improve condition for clearing pollin and return (Lionel)
v6: Improve blocking read loop and other cleanups (Lionel)
v7: Added Cc stable
Testcase: igt/perf/polling-small-buf
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403010120.3067-1-ashutosh.dixit@intel.com
|
|
Make sure we revoke the user's mmaps of this vma to force them to take a
pagefault *before* we remove the associated aperture detiling register.
Fixes: 0d86ee35097a ("drm/i915/gt: Make fence revocation unequivocal")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200403160951.8271-1-chris@chris-wilson.co.uk
|
|
Move the final DP_TP_CTL frobbing of port sync to the master
encoder's enable hook. Now neatly out of sight from the high level
modeset code.
And thus we've eliminated all the special casing of port sync
in the high level modeset code.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200313164831.5980-14-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
|
|
We're going to want access to the atomic state for iterating
the slave crtcs when enabling the port sync master crtc. Pass
the atomic state all the way down.
The alternative would be yet another encoder hook which we'll
have to call after all the normal modeset stuff is done. Not
really a fan of yet another hook just for this.
Note that during readout state sanitation we are now going
to pass NULL as the atomic state since we don't have one.
We need to change that and then we can also s/crtc_state/crtc/
and s/conn_state/conn/ for the encoder hooks as well.
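Roughly, the encoder hooks gain the atomic state as an argument
(sketch; the full parameter list follows the existing hooks):

    void (*enable)(struct intel_atomic_state *state,
    	       struct intel_encoder *encoder,
    	       const struct intel_crtc_state *crtc_state,
    	       const struct drm_connector_state *conn_state);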
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200313164831.5980-13-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
|