|
It's either init code or already protected by other means. Note
that psb_gtt_pin/unpin has its own lock, and that's really the
only piece of driver private state - all the modeset state is
protected with modeset locks already.
The only important bit is to switch all gem_obj_unref calls to the
_unlocked variant.
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1448271183-20523-18-git-send-email-daniel.vetter@ffwll.ch
|
|
This is called without dev->struct_mutex held, so we need to use the
_unlocked variant.
Never caught in the wild since you'd need an evil userspace which
races a gem_close ioctl call with the in-progress open.
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1448271183-20523-17-git-send-email-daniel.vetter@ffwll.ch
|
|
Rather than using drm_match_cea_mode() to see if the EDID detailed
timings are supposed to represent one of the CEA/HDMI modes, add a
special version of that function that takes in an explicit clock
tolerance value (in kHz). When looking at the detailed timings specify
the tolerance as 5kHz due to the 10kHz clock resolution limit inherent
in detailed timings.
drm_match_cea_mode() uses the normal KHZ2PICOS() matching of clocks,
which only allows smaller errors for lower clocks (e.g. for 25200 it
won't allow any error) and a bigger error for higher clocks (e.g. for
297000 it actually matches 296913-297000). So it doesn't really match
what we want for the fixup. Using the explicit ±5kHz is much better
for this use case.
Not sure if we should change the normal mode matching to also use
something else besides KHZ2PICOS() since it allows a different
proportion of error depending on the clock. I believe VESA CVT
allows a maximum deviation of .5%, so using that for normal mode
matching might be a good idea?
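For illustration, a minimal sketch of the tolerance-based clock match
described above; the helper name and exact comparison are illustrative,
not the kernel's actual implementation:

static bool clock_matches_with_tolerance(int clock_khz, int target_khz,
					 unsigned int tolerance_khz)
{
	/* accept clocks within +-tolerance_khz of the target */
	return abs(clock_khz - target_khz) <= tolerance_khz;
}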
Cc: Adam Jackson <ajax@redhat.com>
Tested-by: nathan.d.ciobanu@linux.intel.com
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92217
Fixes: fa3a7340eaa1 ("drm/edid: Fix up clock for CEA/HDMI modes specified via detailed timings")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Add drm_atomic_helper_disable_planes_on_crtc() for disabling all
planes associated with the given CRTC. This can be used for instance
in the CRTC helper disable callback to disable all planes before
shutting down the display pipeline.
v2:
- Address Daniel's review comments [1]
- Always call atomic_begin() and atomic_flush() if they are defined and
the atomic knob is set
- update kerneldoc
- Put drm_atomic_helper_disable_planes_on_crtc() after
drm_atomic_helper_commit_planes_on_crtc() in drm_atomic_helper.c
to have functions in the same order as in drm_atomic_helper.h
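A hedged usage sketch of the new helper from a CRTC helper disable
callback; the callback wiring and the bool "atomic" argument follow my
reading of the patch, the function body is illustrative:

static void my_crtc_disable(struct drm_crtc *crtc)
{
	/* disable all planes on this CRTC before shutting the pipe down */
	drm_atomic_helper_disable_planes_on_crtc(crtc, false);
	/* ... driver-specific pipeline shutdown ... */
}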
Signed-off-by: Jyri Sarha <jsarha@ti.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448633641-6486-1-git-send-email-jsarha@ti.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
The previous patch reintroduced a race condition whereby a failure in
one reader may allow a second reader to see out-of-order events.
Introduce a mutex to serialise readers so that an event is completed in
its entirety before another reader may process an event. The two readers
may race against each other, but the events each retrieves are in the
correct order.
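In sketch form, the serialisation described above; the lock name follows
my reading of the patch (a per-file mutex), the surrounding logic is
illustrative:

mutex_lock(&file_priv->event_read_lock);
/* dequeue one event under dev->event_lock, then drop the spinlock,
 * copy it to userspace, and only then let the next reader proceed */
mutex_unlock(&file_priv->event_read_lock);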
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1448462343-2072-2-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
In
commit cdd1cf799bd24ac0a4184549601ae302267301c5
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Dec 4 21:03:25 2014 +0000
drm: Make drm_read() more robust against multithreaded races
I fixed the races by serialising the use of the event by extending the
dev->event_lock. However, as Thomas pointed out, the copy_to_user() may
fault (even the __copy_to_user_inatomic() variant used here) and calling
into the driver backend with the spinlock held is bad news. Therefore we
have to drop the spinlock before the copy, but that exposes us to the
old race whereby a second reader could see an out-of-order event (as the
first reader may claim the first request but fail to copy it back to
userspace, and so on returning it to the event list it will be behind the
current event being copied by the second reader).
Reported-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1448462343-2072-1-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
To make the intention clearer, use list_next_entry instead of list_entry.
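For illustration, the substitution this makes (list_next_entry is the
standard list.h helper):

/* before */
next = list_entry(pos->member.next, typeof(*pos), member);
/* after */
next = list_next_entry(pos, member);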
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
We have serious dangling else bugs waiting to happen in our for_each_
style macros with ifs. Consider, for example,
#define for_each_power_domain(domain, mask) \
for ((domain) = 0; (domain) < POWER_DOMAIN_NUM; (domain)++) \
if ((1 << (domain)) & (mask))
If this is used in context:
if (condition)
for_each_power_domain(domain, mask);
else
foo();
foo() will be called for each domain *not* in mask, if condition holds,
and not at all if condition doesn't hold.
Fix this by reversing the conditions in the macros, and adding an else
branch for the "for each" block, so that other if/else blocks can't
interfere. Provide a "for_each_if" helper macro to make it easier to get
this right.
v2: move for_each_if to drmP.h in a separate patch.
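A sketch of the fix, matching the description above: for_each_if inverts
the condition and supplies the else branch, and the macro is reworked on
top of it.

#define for_each_if(condition) if (!(condition)) {} else

#define for_each_power_domain(domain, mask)				\
	for ((domain) = 0; (domain) < POWER_DOMAIN_NUM; (domain)++)	\
		for_each_if((1 << (domain)) & (mask))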
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448392916-2281-2-git-send-email-jani.nikula@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
We have serious dangling else bugs waiting to happen in our for_each_
style macros with ifs. Consider, for example,
#define drm_for_each_plane_mask(plane, dev, plane_mask) \
list_for_each_entry((plane), &(dev)->mode_config.plane_list, head) \
if ((plane_mask) & (1 << drm_plane_index(plane)))
If this is used in context:
if (condition)
drm_for_each_plane_mask(plane, dev, plane_mask);
else
foo();
foo() will be called for each plane *not* in plane_mask, if condition
holds, and not at all if condition doesn't hold.
Fix this by reversing the conditions in the macros, and adding an else
branch for the "for each" block, so that other if/else blocks can't
interfere. Provide a "for_each_if" helper macro to make it easier to get
this right.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448392916-2281-1-git-send-email-jani.nikula@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
To avoid even more code duplication punt this all to the probe worker,
which needs some slight adjustment to also generate a uevent when the
status has changed due to connector->force.
v2: Instead of running the output_poll_work (which is kinda the wrong
thing and a layering violation since it's an internal of the probe
helpers), or calling ->detect (which is again a layering violation
since it's used only by probe helpers) just call the official
->fill_modes function, like a GET_CONNECTOR ioctl call.
v3: Restore the accidentally removed forced-probe for echo "detect" >
force.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1447951610-12622-22-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Use the correct function name for drm_atomic_clean_old_fb docs.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
When backwards is 0, __drm_mm_for_each_hole is the same as
drm_mm_for_each_hole, so rewrite drm_mm_for_each_hole in terms of
__drm_mm_for_each_hole.
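A hedged sketch of the rewrite: the public macro becomes a thin wrapper
over the double-underscore variant with backwards fixed to 0; the
argument names are my reading of the header:

#define drm_mm_for_each_hole(entry, mm, hole_start, hole_end) \
	__drm_mm_for_each_hole(entry, mm, hole_start, hole_end, 0)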
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
We chase pointers/lists without taking the locks protecting them,
which isn't that good.
Fix it.
v2: Actually unlock properly, spotted by Julia.
v3: Put the label _before_ the mutex_unlock (Emil)
Cc: Emil Velikov <emil.l.velikov@gmail.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Link: http://patchwork.freedesktop.org/patch/msgid/1443783662-23066-1-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
To aid in debugging failures, print the src,dst,clip rectangles
when drm_plane_helper_check_update() fails.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Allow the caller to specify a "prefix" string to drm_rect_debug_print()
to make it easier to see which drm_rect is being printed.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Drivers shouldn't clobber the passed in addfb ioctl parameters.
i915 was doing just that. To prevent it from happening again,
pass the struct around as const, starting all the way from
internal_framebuffer_create().
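The constification in sketch form; the prototype follows my reading of
the patch:

static struct drm_framebuffer *
internal_framebuffer_create(struct drm_device *dev,
			    const struct drm_mode_fb_cmd2 *r,
			    struct drm_file *file_priv);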
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
The simple_strtoul function is marked as obsolete.
This patch replaces it with kstrtouint.
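An illustrative replacement, assuming a base-10 unsigned value
(kstrtouint returns 0 on success or a negative errno):

unsigned int val;
int ret;

ret = kstrtouint(buf, 10, &val);
if (ret)
	return ret;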
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Adds clarification of the rotation property bits, i.e. that rotation is
counter-clockwise and that reflects are applied before any rotation.
v2: Refer to the define names instead of the property values.
Signed-off-by: Robert Fekete <robert.fekete@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
I noticed that intel_fbdev->our_mode is unused. Introduced by
79e539453b34 ("DRM: i915: add mode setting support").
Then I noticed that intel_fbdev->fbdev_list is unused as well.
Introduced by 386516744ba4 ("drm/fb: fix fbdev object model +
cleanup properly.") in i915, nouveau and radeon.
Subsequently cargo-culted to amdgpu, ast, cirrus, qxl, udl,
virtio and mgag200.
Already removed from the latter with cc59487a05b1 ("drm/mgag200:
'fbdev_list' in 'struct mga_fbdev' is not used").
Remove it from the others.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
The drm_property_unreference_blob() function tests whether its argument
is NULL and then returns immediately.
Thus the tests around the calls are not needed.
This issue was detected by using the Coccinelle software.
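For illustration, the kind of simplification this enables:

/* before: redundant NULL check around the call */
if (blob)
	drm_property_unreference_blob(blob);

/* after: the function already handles a NULL argument */
drm_property_unreference_blob(blob);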
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Link: http://patchwork.freedesktop.org/patch/msgid/563C8B3E.405@users.sourceforge.net
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Cc: Yetunde Adebisi <yetundex.adebisi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
DRM_TEGRA_FBDEV config is currently used to enable/disable legacy fbdev
emulation for the tegra kms driver.
Remove this local config option and use the top level DRM_FBDEV_EMULATION
config option instead.
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1445933459-5249-4-git-send-email-architt@codeaurora.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
DRM_IMX_FB_HELPER config is currently used to enable/disable fbdev
emulation for the imx kms driver.
Remove this local config option and use the top level DRM_FBDEV_EMULATION
config option where applicable. Using this config lets us also prevent
wrapping around drm_fb_helper_* calls with #ifdefs in certain places.
Tested-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1445933459-5249-2-git-send-email-architt@codeaurora.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
A bunch of things have been removed in the meantime and the docs were not
fully brought up to speed. And a few gaps closed where I noticed missing
kerneldoc while reading through the overview sections.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1445533889-7661-3-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
I just realized that I've forgotten to update all the gem refcounting
docs. For penance also add pretty docs for the overall drm_gem_object
structure, with a few links thrown in for good measure.
As usual we need to make sure the kerneldoc reference is at most a
sect2 for otherwise it won't be listed.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1445533889-7661-1-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
|
|
Adjust the kmem_cache_alloc_bulk API before we have any real users.
Adjust the API to return type 'int' instead of the previous type 'bool'.
This is done to allow future extension of the bulk alloc API.
A future extension could be to allow SLUB to stop at a page boundary, when
specified by a flag, and then return the number of objects.
The advantage of this approach is that it would make it easier to have
bulk alloc run without local IRQs disabled, with an approach of cmpxchg
"stealing" the entire c->freelist or page->freelist. To avoid overshooting
we would stop processing at a slab-page boundary; else we always end up
returning some objects at the cost of another cmpxchg.
To stay compatible with future users of this API linking against an older
kernel when using the new flag, we need to return the number of allocated
objects with this API change.
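The adjusted prototype per the description above (returning the number of
allocated objects, with 0 indicating failure); exact placement in slab.h
is an assumption:

int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
			  size_t size, void **p);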
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The initial implementation missed kmem cgroup support in the
kmem_cache_free_bulk() call; add it.
If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough
to not add any asm code.
Incoming bulk free objects can belong to different kmem cgroups, and
object free call can happen at a later point outside memcg context. Thus,
we need to keep the orig kmem_cache, to correctly verify if a memcg object
match against its "root_cache" (s->memcg_params.root_cache).
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The call slab_pre_alloc_hook() interacts with kmemcg and is not allowed to
be called several times inside the bulk alloc for loop, due to the call to
memcg_kmem_get_cache().
This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache.
As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be able
to handle an array of objects.
A subtle detail is that the loop iterator "i" in slab_post_alloc_hook()
must have the same type (size_t) as the size argument. This makes it
easier for the compiler to realize that it can remove the loop when all
debug statements inside the loop evaluate to nothing. Note, this is only
an issue because the kernel is compiled with the GCC option:
-fno-strict-overflow
In slab_alloc_node() the compiler inlines and optimizes the invocation of
slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and
accessing the object directly.
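A hedged sketch of the reworked hook; note the size_t iterator matching
the size argument, as explained above. The body is illustrative:

static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
					size_t size, void **p)
{
	size_t i;	/* must match the type of @size, see above */

	for (i = 0; i < size; i++) {
		/* per-object debug/kmemleak/kasan hooks go here */
	}
}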
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reported-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This change focuses on improving the speed of object freeing in the
"slowpath" of kmem_cache_free_bulk.
The calls slab_free (fastpath) and __slab_free (slowpath) have been
extended with support for bulk free, which amortize the overhead of
the (locked) cmpxchg_double.
To use the new bulking feature, we build what I call a detached
freelist. The detached freelist takes advantage of three properties:
1) the free function call owns the object that is about to be freed,
thus writing into this memory is synchronization-free.
2) many freelists can co-exist side-by-side in the same slab-page
each with a separate head pointer.
3) it is the visibility of the head pointer that needs synchronization.
Given these properties, the brilliant part is that the detached
freelist can be constructed without any need for synchronization. The
freelist is constructed directly in the page objects, without any
synchronization needed. The detached freelist is allocated on the
stack of the function call kmem_cache_free_bulk. Thus, the freelist
head pointer is not visible to other CPUs.
All objects in a SLUB freelist must belong to the same slab-page.
Thus, constructing the detached freelist is about matching objects
that belong to the same slab-page. The bulk free array is scanned in
a progressive manner with a limited look-ahead facility.
Kmem debug support is handled in the call to slab_free().
Notice kmem_cache_free_bulk no longer needs to disable IRQs. This
only slowed down single-object bulk free by approx 3 cycles.
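A hedged sketch of the on-stack detached freelist described above; the
field names follow my reading of the patch:

struct detached_freelist {
	struct page *page;	/* slab-page all objects belong to */
	void *tail;		/* last object in the detached list */
	void *freelist;		/* head pointer, visible only on this stack */
	int cnt;		/* number of objects in the list */
};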
Performance data:
Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz
SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns
To get stable and comparable numbers, the kernel has been booted with
"slab_merge" (this also improves performance for larger bulk sizes).
Performance data, compared against fallback bulking:
bulk - fallback bulk - improvement with this patch
1 - 62 cycles(tsc) 15.662 ns - 49 cycles(tsc) 12.407 ns- improved 21.0%
2 - 55 cycles(tsc) 13.935 ns - 30 cycles(tsc) 7.506 ns - improved 45.5%
3 - 53 cycles(tsc) 13.341 ns - 23 cycles(tsc) 5.865 ns - improved 56.6%
4 - 52 cycles(tsc) 13.081 ns - 20 cycles(tsc) 5.048 ns - improved 61.5%
8 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.659 ns - improved 64.0%
16 - 49 cycles(tsc) 12.412 ns - 17 cycles(tsc) 4.495 ns - improved 65.3%
30 - 49 cycles(tsc) 12.484 ns - 18 cycles(tsc) 4.533 ns - improved 63.3%
32 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.707 ns - improved 64.0%
34 - 96 cycles(tsc) 24.243 ns - 23 cycles(tsc) 5.976 ns - improved 76.0%
48 - 83 cycles(tsc) 20.818 ns - 21 cycles(tsc) 5.329 ns - improved 74.7%
64 - 74 cycles(tsc) 18.700 ns - 20 cycles(tsc) 5.127 ns - improved 73.0%
128 - 90 cycles(tsc) 22.734 ns - 27 cycles(tsc) 6.833 ns - improved 70.0%
158 - 99 cycles(tsc) 24.776 ns - 30 cycles(tsc) 7.583 ns - improved 69.7%
250 - 104 cycles(tsc) 26.089 ns - 37 cycles(tsc) 9.280 ns - improved 64.4%
Performance data, compared current in-kernel bulking:
bulk - curr in-kernel - improvement with this patch
1 - 46 cycles(tsc) - 49 cycles(tsc) - improved (cycles:-3) -6.5%
2 - 27 cycles(tsc) - 30 cycles(tsc) - improved (cycles:-3) -11.1%
3 - 21 cycles(tsc) - 23 cycles(tsc) - improved (cycles:-2) -9.5%
4 - 18 cycles(tsc) - 20 cycles(tsc) - improved (cycles:-2) -11.1%
8 - 17 cycles(tsc) - 18 cycles(tsc) - improved (cycles:-1) -5.9%
16 - 18 cycles(tsc) - 17 cycles(tsc) - improved (cycles: 1) 5.6%
30 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
32 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
34 - 78 cycles(tsc) - 23 cycles(tsc) - improved (cycles:55) 70.5%
48 - 60 cycles(tsc) - 21 cycles(tsc) - improved (cycles:39) 65.0%
64 - 49 cycles(tsc) - 20 cycles(tsc) - improved (cycles:29) 59.2%
128 - 69 cycles(tsc) - 27 cycles(tsc) - improved (cycles:42) 60.9%
158 - 79 cycles(tsc) - 30 cycles(tsc) - improved (cycles:49) 62.0%
250 - 86 cycles(tsc) - 37 cycles(tsc) - improved (cycles:49) 57.0%
Performance with normal SLUB merging is significantly slower for
larger bulking. This is believed to be (primarily) an effect of not
having to share the per-CPU data-structures when merging is disabled,
as tuning the per-CPU size can achieve similar performance.
bulk - slab_nomerge - normal SLUB merge
1 - 49 cycles(tsc) - 49 cycles(tsc) - merge slower with cycles:0
2 - 30 cycles(tsc) - 30 cycles(tsc) - merge slower with cycles:0
3 - 23 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:0
4 - 20 cycles(tsc) - 20 cycles(tsc) - merge slower with cycles:0
8 - 18 cycles(tsc) - 18 cycles(tsc) - merge slower with cycles:0
16 - 17 cycles(tsc) - 17 cycles(tsc) - merge slower with cycles:0
30 - 18 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:5
32 - 18 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:4
34 - 23 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:-1
48 - 21 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:1
64 - 20 cycles(tsc) - 48 cycles(tsc) - merge slower with cycles:28
128 - 27 cycles(tsc) - 57 cycles(tsc) - merge slower with cycles:30
158 - 30 cycles(tsc) - 59 cycles(tsc) - merge slower with cycles:29
250 - 37 cycles(tsc) - 56 cycles(tsc) - merge slower with cycles:19
Joint work with Alexander Duyck.
[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c
[akpm@linux-foundation.org: BUG_ON -> WARN_ON;return]
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Make it possible to free a freelist with several objects by adjusting the
API of slab_free() and __slab_free() to take head, tail and an objects
counter (cnt).
A NULL tail indicates a single-object free of the head object. This allows
compiler inline constant propagation in slab_free() and
slab_free_freelist_hook() to avoid adding any overhead in case of single
object free.
This allows a freelist with several objects (all within the same
slab-page) to be freed using a single locked cmpxchg_double in
__slab_free() and with an unlocked cmpxchg_double in slab_free().
Object debugging on the free path is also extended to handle these
freelists. When CONFIG_SLUB_DEBUG is enabled it will also detect if
objects don't belong to the same slab-page.
These changes are needed for the next patch to bulk free the detached
freelists it introduces and constructs.
Micro benchmarking showed no performance reduction due to this change,
when debugging is turned off (compiled with CONFIG_SLUB_DEBUG).
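The extended fastpath in sketch form; with tail == NULL the compiler can
constant-propagate the single-object case, as described above. The
prototype follows my reading of the patch:

static __always_inline void slab_free(struct kmem_cache *s,
				      struct page *page, void *head,
				      void *tail, int cnt,
				      unsigned long addr);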
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Adjust the linker script and map_pages() to map kernel text and data on
physical 1MB huge/large pages.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
This patch adds huge page support to allow userspace to allocate huge
pages and to use the hugetlbfs filesystem on 32- and 64-bit Linux kernels.
A later patch will add kernel support to map kernel text and data on
huge pages.
The only requirement is that the kernel is compiled for a
PA8X00 CPU (PA2.0 architecture). Older PA1.X CPUs do not support
variable page sizes. 64bit kernels are compiled for PA2.0 by default.
Technically on parisc multiple physical huge pages may be needed to
emulate standard 2MB huge pages.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Use the 22bit instead of the 17bit branch instruction on a 64bit kernel
to reach the do_syscall_trace_exit function from the gateway page.
A huge page enabled kernel may need the additional branch distance bits.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
For the 64bit kernel the initial 16 MB of kernel memory might become too
small if you build a kernel with many modules built-in and with kernel
text and data areas mapped on huge pages.
This patch increases the initial mapping to 32MB for 64bit kernels and
keeps 16MB for 32bit kernels.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
A fault vector on parisc needs to be 2K aligned. Furthermore the
checksum of the fault vector needs to sum up to 0, which is
calculated and written at runtime.
Up to now we aligned both PA20 and PA11 fault vectors on the same 4K
page in order to easily write the checksum after having mapped the
kernel read-only (by mapping this page only as read-write).
But when we want to map the kernel text and data on huge pages this
makes things harder.
So, simplify it by aligning both fault vectors on 2K boundaries and write
the checksum before we map the page read-only.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Huge pages on parisc will have the same size as one pmd table, which
is 2MB on a 64bit kernel with 4K kernel page size, and 4MB on a
32bit kernel with 4K kernel pages.
Since parisc does not physically support a 2MB huge page size, emulate
it with two consecutive 1MB page sizes instead. Keeping the same huge
page size as one pmd will allow us to add transparent huge page support
later on.
Bit 21 in the pte flags was unused and will now be used to mark a page
as huge page (_PAGE_HPAGE_BIT).
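For illustration only: the pte bit named in the text; the numeric value
is from the commit message, anything further is an assumption:

#define _PAGE_HPAGE_BIT	21	/* software: mark pte as huge page */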
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Drop the MADV_xxK_PAGES flags, which were never used and were from a proposed
API which was never integrated into the generic Linux kernel code.
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
fsl8250_handle_irq is now used by the of_serial driver, and that fails
if it is a loadable module:
ERROR: "fsl8250_handle_irq" [drivers/tty/serial/of_serial.ko] undefined!
This exports the symbol to avoid randconfig errors.
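The fix in sketch form; whether the export is EXPORT_SYMBOL or
EXPORT_SYMBOL_GPL is an assumption here:

EXPORT_SYMBOL_GPL(fsl8250_handle_irq);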
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: d43b54d269d2 ("serial: Enable Freescale 16550 workaround on arm")
Cc: Scott Wood <scottwood@freescale.com>
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
8250_mid uses rational_best_approximation() function, so the
driver needs to select CONFIG_RATIONAL option.
This fixes build error when CONFIG_RATIONAL is not enabled:
drivers/built-in.o: In function `mid8250_set_termios':
8250_mid.c:(.text+0x10169a): undefined reference to `rational_best_approximation'
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The data to audit/record is in the 'from' buffer (ie., the input
read buffer).
Fixes: 72586c6061ab ("n_tty: Fix auditing support for cannonical mode")
Cc: stable <stable@vger.kernel.org> # 4.1+
Cc: Miloslav Trmač <mitr@redhat.com>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Since commit 7d8c70d8048c ("serial: mctrl-gpio: rename init function"),
crisv32 either does not build or crashes as follows.
Unable to handle kernel NULL pointer dereference
Linux 4.3.0-rc7-next-20151101 #1 Sun Nov 1 11:41:28 PST 2015
...
Call Trace: [<c0004a0e>] show_stack+0x0/0x9e
[<c004c0c0>] printk+0x0/0x2c
[<c00059d4>] show_registers+0x14a/0x1c2
[<c004c0c0>] printk+0x0/0x2c
[<c0004b52>] die_if_kernel+0x7c/0x9e
[<c0005346>] do_page_fault+0x32e/0x3e6
[<c01dc59c>] of_get_property+0x0/0x2c
[<c01e0558>] of_irq_parse_raw+0x12a/0x376
[<c01dc59c>] of_get_property+0x0/0x2c
[<c0053aca>] get_page_from_freelist+0x73e/0x856
[<c01dc59c>] of_get_property+0x0/0x2c
[<c0008912>] d_mmu_refill+0x10a/0x112
[<c01b488c>] devm_kmalloc+0x40/0x56
[<c01b47d0>] add_dr+0xc/0x1c
[<c01b4800>] devm_add_action+0x2/0x4e
[<c01abdbc>] mctrl_gpio_init_noauto+0x1c/0x76
[<c01abf9e>] mctrl_gpio_init+0x22/0x110
The function call in the etraxfs-uart driver was not renamed,
possibly due to interference with commit 7b9c5162c182 ("serial:
etraxfs-uart: use mctrl_gpio helpers for handling modem signals").
Fixes: 7d8c70d8048c ("serial: mctrl-gpio: rename init function")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Niklas Cassel <nks@flawful.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Earlycon support for Freescale lpuart should only be enabled when
console support is enabled.
Fixes: 1d59b382f1c4 ("serial: fsl_lpuart: add earlycon support")
Acked-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Use the device name when registering an interrupt so that multiple
ports don't all have the same interrupt name.
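An illustrative sketch of the change; the handler and field names are
assumptions, not the driver's actual code:

/* use the device name so each port is distinguishable in /proc/interrupts */
ret = request_irq(port->irq, my_uart_interrupt, 0,
		  dev_name(port->dev), port);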
Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Recent abstraction of tty buffer work introduced an API to manage
the tty input kworker; use it.
Fixes: e176058f0de5 ("tty: Abstract tty buffer work")
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The correct lock order is atomic_write_lock => termios_rwsem, as
established by tty_write() => n_tty_write().
Fixes: c274f6ef1c666 ("tty: Hold termios_rwsem for tcflow(TCIxxx)")
Reported-and-Tested-by: Dmitry Vyukov <dvyukov@google.com>
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The #ifdef of CONFIG_SLUB_DEBUG is located very far from the associated
#else. For readability, mark it with a comment.
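The readability fix, sketched:

#ifdef CONFIG_SLUB_DEBUG
/* ... many lines ... */
#else /* CONFIG_SLUB_DEBUG */
/* ... */
#endif /* CONFIG_SLUB_DEBUG */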
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Use the new function that can do allocation while interrupts are disabled.
Avoids irq on/off sequences.
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Bulk alloc needs a function like this because it otherwise has to enable
interrupts before calling __slab_alloc, which promptly disables them again
using the expensive local_irq_save().
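A hedged sketch of the new entry point: a slow path that can be called
with IRQs already disabled, avoiding the enable/disable dance described
above. The prototype follows my reading of the kernel's ___slab_alloc:

static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags,
			   int node, unsigned long addr,
			   struct kmem_cache_cpu *c);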
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|