| Commit message (Collapse) | Author | Age | Files | Lines |
from threads other than the one currently having kcov enabled. A thread
with kcov enabled occasionally delegates work to another thread;
collecting coverage from such threads improves the ability of syzkaller
to correlate side effects in the kernel caused by issuing a syscall.
Remote coverage is divided into subsystems. The only supported subsystem
right now collects coverage from scheduled tasks and timeouts on behalf
of a kcov enabled thread. In order to make this work `struct task' and
`struct timeout' must be extended with a new field keeping track of the
process that scheduled the task/timeout. Both aforementioned structures
have therefore grown by the size of a pointer on all architectures.
The kernel API is documented in a new kcov_remote_register(9) manual.
Remote coverage is also supported by kcov on NetBSD and Linux.
ok mpi@
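A rough sketch of the struct extension described above (the member name is illustrative only, not the actual OpenBSD definition):

```
struct timeout {
	/* ... existing members ... */
	struct process *to_process;	/* process that scheduled the
					   timeout, so coverage recorded
					   while it runs can be credited
					   to that process's kcov buffer */
};
```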
I wrote taskq_barrier with the behaviour described in the manpage:
taskq_barrier() guarantees that any task that was running on the tq taskq
when the barrier was called has finished by the time the barrier returns.
Note that it talks about the currently running task, not pending tasks.
It just so happens that the original implementation used task_add
to put a condvar on the list and waited for it to run. Because task_add
uses TAILQ_INSERT_TAIL, you ended up waiting for all pending work to
run too, not just the currently running task.
The new implementation took advantage of already holding the lock and
used TAILQ_INSERT_HEAD to put the barrier work at the front of the queue
so it would run next, which is closer to the stated behaviour.
Using the tail insert here restores the previous accidental behaviour.
jsg@ points out the following:
> The linux functions like flush_workqueue() we use this for in drm want
> to wait on all scheduled work not just currently running.
>
> ie a comment from one of the linux functions:
>
> /**
> * flush_workqueue - ensure that any scheduled work has run to completion.
> * @wq: workqueue to flush
> *
> * This function sleeps until all work items which were queued on entry
> * have finished execution, but it is not livelocked by new incoming ones.
> */
>
> our implementation of this in drm is
>
> void
> flush_workqueue(struct workqueue_struct *wq)
> {
> 	if (cold)
> 		return;
>
> 	taskq_barrier((struct taskq *)wq);
> }
I don't think it's worth complicating the taskq API, so I'm just
going to make taskq_barrier wait for pending work too.
tested by tb@
ok jsg@
this is required for an upcoming drm update, where the linux workqueue
api that supports this is mapped to our taskq api.
the main way taskqs support that is to have the taskq worker threads
record their curproc on the taskq, so taskq_barrier calls can iterate
over that list looking for their own curproc. if a barrier's curproc
is in the list, it must be running inside the taskq, and should
pretend that it's a barrier task.
this also supports concurrent barrier calls by having the taskq
recognise the situation and have the barriers work together rather
than deadlocking. they end up sharing the work of getting
the barrier tasks onto the workers. once all the workers (or in-taskq
barriers) have rendezvoused, the barrier calls unwind, and the last
one out lets the other barriers and barrier tasks return.
all this barrier logic is implemented in the barrier code; it takes
the existing multiworker handling out of the actual taskq loop.
thanks to jsg@ for testing this and previous versions of the diff.
ok visa@ kettenis@
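The rendezvous can be sketched in pseudocode (illustrative only, not the actual implementation):

```
taskq_barrier(tq):
	lock tq
	if curproc is one of tq's workers:
		count ourselves as an in-taskq barrier
	queue a barrier task for each worker not already claimed
	    by a concurrent barrier call
	while not every worker has reached its barrier task:
		sleep on the barrier condition
	/* last caller or worker to check in: */
	wake the other barrier calls and barrier tasks
	unlock tq
```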
ok visa@
ok visa@
if a taskq takes a lock, and something holding that lock calls
taskq_barrier, there's a potential deadlock. detect this as a lock
order problem when witness is enabled. task_del conditionally followed
by taskq_barrier is a common pattern, so add a taskq_del_barrier
wrapper for it that unconditionally checks for the deadlock, like
timeout_del_barrier.
ok visa@
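The pattern the wrapper captures looks roughly like this (sketch; `sc` is a hypothetical softc):

```
/* open-coded: task_del conditionally followed by a barrier */
if (task_del(tq, &sc->sc_task) == 0)
	taskq_barrier(tq);	/* task may still be running */

/* wrapper: same effect, but the deadlock check happens
 * unconditionally, so witness can flag the lock order problem
 * even on the path where the task was still pending */
taskq_del_barrier(tq, &sc->sc_task);
```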
if we ever want it back, it's in the attic.
ok mpi@ visa@ kettenis@
jsg@ wants this for drm, and i've had a version of it in diffs since
2016, but obviously haven't needed to use it just yet.
task_pending is modelled on timeout_pending, and tells you if the
task is on a list waiting to execute.
ok jsg@
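Typical use mirrors timeout_pending (sketch with hypothetical names):

```
/* skip the setup work if the task is already queued */
if (!task_pending(&sc->sc_task)) {
	/* ... take a reference, set up state ... */
	task_add(sc->sc_tq, &sc->sc_task);
}
```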
taskq_barrier guarantees that any task that was running on the taskq
has finished by the time taskq_barrier returns. it is similar to
intr_barrier.
this is needed for use in ifq_barrier as part of an upcoming change.
reporting an error in a scenario like the following:
1. mtx_enter(&tqa->tq_mtx);
2. IRQ
3. mtx_enter(&tqb->tq_mtx);
Found by Hrvoje Popovski, OK mpi@
The distinction between preempt() and yield() stays as it is useful
to know if a thread decided to yield by itself or if the kernel told
it to go away.
ok tedu@, guenther@
no functional change.
know there's only one thread in the taskq. wakeups are much more
expensive than a simple compare.
from haesbart
interrupt context to a taskq running in a thread. however, there
is a concern that if we do that then we allow accidental use of
sleeping APIs in this work, which will make it harder to move the
work back to interrupts in the future.
guenther and kettenis came up with the idea of marking a proc with
CANTSLEEP which the sleep paths can check and panic on.
this builds on that so you create taskqs that run with CANTSLEEP
set except when they need to sleep for more tasks to run.
the taskq_create api is changed to take a flags argument so users
can specify CANTSLEEP. MPSAFE is also passed via this flags field
now. this means archs that defined IPL_MPSAFE to 0 can now create
mpsafe taskqs too.
lots of discussion at s2k15
ok guenther@ miod@ mpi@ tedu@ pelikan@
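Creation with the new flags argument then looks roughly like this (sketch; see taskq_create(9) for the exact prototype and flag names):

```
/* one mpsafe worker thread that runs tasks with CANTSLEEP set */
struct taskq *tq;

tq = taskq_create("mytq", 1, IPL_NONE,
    TASKQ_MPSAFE | TASKQ_CANTSLEEP);
```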
when workqs were introduced, we provided a second argument so you
could pass a thing and some context to work on it in. there were
very few things that took advantage of the second argument, so when
i introduced pools i suggested removing it. since tasks were meant
to replace workqs, it was requested that we keep the second argument
to make porting from workqs to tasks easier.
now that workqs are gone, i had a look at the use of the second
argument again and found only one good use of it (vdsp(4) on sparc64
if you're interested) and a tiny handful of questionable uses. the
vast majority of tasks only used a single argument. i have since
modified all tasks that used two args to only use one, so now we
can remove the second argument.
so this is a mechanical change. all tasks only passed NULL as their
second argument, so we can just remove it.
ok krw@
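The mechanical change looks like this for a hypothetical driver task:

```
/* before: unused second argument passed as NULL */
task_set(&sc->sc_task, mydrv_work, sc, NULL);

/* after: single argument */
task_set(&sc->sc_task, mydrv_work, sc);
```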
ok deraadt@ dlg@ phessler@
after discussions with beck deraadt kettenis.
currently unused.
ok dlg@
manpage improvement and ok jmc@
who is slacking too much.
ok dlg@
ok matthew guenther mikeb
*const systq defined in task.h
this reduces the cost of using the system taskq and looks less ugly.
requested by and ok kettenis@
is safe to ask malloc to wait for memory.
pointed out by millert@
to do that again.
kern/kern_task.c doesnt use pools so we dont need sys/pool.h either.
might make jsg a little happier.
tasks are modelled on the timeout api, so users familiar with
timeout_set, timeout_add, and timeout_del will already know what
to expect from task_set, task_add, and task_del.
i wrote this because workq_add_task can fail in the place you
actually need it, and there arent any good ways of recovering at
that point. workq_queue_task was added to try and help, but required
external state to be stored for users of that api to know whether
something was already queued or not.
workqs also didnt provide a way to cancel or remove work.
this has been percolating with a bunch of people. putting it in as i
wrote it so i can apply their feedback to the code with the history kept
in cvs.