| Commit message | Author | Age | Files | Lines |
|
ok patrick@
|
This avoids errors that can arise when multiple cores update the
variable at the same time.
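As a rough illustration of the lost-update race this guards against (the counter and function below are made up for the example, not the variable this commit touches), using atomic_add_int(9) from <sys/atomic.h>:

#include <sys/atomic.h>

/* Hypothetical statistic bumped from several CPUs at once. */
unsigned int example_events;

void
example_record_event(void)
{
	/*
	 * A plain example_events++ is a read-modify-write; two CPUs
	 * doing it at the same time can lose one of the updates.
	 * The atomic op makes the whole update indivisible.
	 */
	atomic_add_int(&example_events, 1);
}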
|
This exposes some redundant & racy checks.
ok semarie@
|
ok kettenis@
|
tb@ reports that refaulting when there's contention on the vnode makes
firefox start very slowly on his machine. To revisit once the fault
handler is unlocked.
ok anton@
Original commit message:
Fix a deadlock between uvn_io() and uvn_flush(). While faulting on a
page backed by a vnode, uvn_io() will end up being called in order to
populate newly allocated pages using I/O on the backing vnode. Before
performing the I/O, newly allocated pages are flagged as busy by
uvn_get(), that is before uvn_io() tries to lock the vnode. Such pages
could then end up being flushed by uvn_flush() which already has
acquired the vnode lock. Since such pages are flagged as busy,
uvn_flush() will wait for them to be flagged as not busy. This will
never happen as uvn_io() cannot make progress until the vnode lock is
released.
Instead, grab the vnode lock before allocating and flagging pages as
busy in uvn_get(). This does extend the scope in uvn_get() in which the
vnode is locked but resolves the deadlock.
ok mpi@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
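The ordering described above can be summarized as follows; this is a schematic of the commit message, not the actual uvn_get()/uvn_io()/uvn_flush() code:

/*
 * Deadlocking order the original fix addressed:
 *
 *   faulting thread                      flushing thread
 *   ---------------                      ---------------
 *   uvn_get():                           uvn_flush():
 *     allocate pages, flag them busy       holds the vnode lock
 *     uvn_io():                            sees the busy pages and
 *       waits for the vnode lock  <---->   waits for them to unbusy
 *
 * The original fix took the vnode lock in uvn_get() before allocating
 * and busying the pages, so the thread that owns the busy pages already
 * holds the lock it needs to finish the I/O on them.
 */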
|
ok claudio@ deraadt@
|
ok drahn@
|
Thanks to RJ Johnson for this work!
ok mpi@
|
The visible result of this is that span ports aren't made promisc
like bridge ports are. Previously, when cleaning up a span port,
trying to take promisc off it screwed up the refs, which left the
underlying interface unable to go promisc when it should have been.
Found by Dave Voutila.
|
veb_p_ioctl() is used by both veb bridge and veb span ports, but
it had an assert to check that it was being called by a veb bridge
port. This extends the check so that using it on a span port doesn't
cause a panic.
Found by Dave Voutila.
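A minimal sketch of what extending the check means; the flag names below are placeholders, not the real veb(4) definitions:

#include <sys/param.h>
#include <sys/systm.h>

#define EX_PORT_BRIDGE	0x01	/* placeholder flags, not veb(4)'s */
#define EX_PORT_SPAN	0x02

void
example_port_ioctl_guard(unsigned int port_flags)
{
	/* Before: only bridge ports were accepted, so a span port
	 * reaching this code tripped the assert and panicked. */
	/* KASSERT(port_flags & EX_PORT_BRIDGE); */

	/* After: span ports are legitimate callers too. */
	KASSERT(port_flags & (EX_PORT_BRIDGE | EX_PORT_SPAN));
}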
|
excessive types into scope.
ok claudio
|
Do not allow a faulting thread to sleep on a contended vnode lock to prevent
lock ordering issues with the upcoming per-uobj lock.
ok anton@
Reported-by: syzbot+e63407b35dff08dbee02@syzkaller.appspotmail.com
|
This fix (ab)uses the vnode lock to serialize access to some fields of
the corresponding pages associated with the UVM vnode object, and this
will create new deadlocks with the introduction of a per-uobj lock.
ok anton@
|
delay is awful in a hot path, and the SMMU is actually quite quick on
invalidation, so simply removing the delay is worth a thousand roses.
Found with mental support from dlg@ (and btrace)
|
there until we have a proper way of making the MSI pages available.
|
which is based on the IOMMU's. If you think about it, using the IOMMU's
DMA tag makes more sense because it is the IOMMU that does the actual DMA.
Noticed while debugging, since the SMMU's map function was called twice:
once for the PCI device, and once for its ppb(4). As the transaction has
the PCI device's Stream ID, not the ppb(4)'s, this would be useless work.
Suggested by kettenis@
|
based on Stream IDs. On the Armada 8040 these Stream IDs can
be configured in different registers. The PCIe controller has
a register which maps root port, bus, dev and func number to
the Stream ID. This should be set up by TF-A firmware, but on
the 8040 the current images don't do this. For chips with more
than one PCIe controller this register must be set up correctly
depending on the implementation, but on the 8040 there is only
one controller, so we can configure a fixed value to match what
is defined in the device tree. This allows the SMMU to properly
track the PCIe controller's transactions.
ok kettenis@
|
While there, enable the different voltage regulators and set the
PHY's assigned clocks. This makes PCIe work on the NanoPi R4S.
Tested by kurt@ on Rock Pi N10 and ROCKPro64
ok kurt@ kettenis@
|
simplify the handling of the fragment list. Now the functions
ip_fragment() and ip6_fragment() always consume the mbuf. They
free the mbuf and mbuf list in case of an error and take care of
the counter. Adjust the code a bit to make v4 and v6 look similar.
Fixes a potential mbuf leak when pf_route6() called pf_refragment6()
and it failed. Now the mbuf is always freed by ip6_fragment().
OK dlg@ mvs@
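A sketch of what the always-consume contract means for a caller; the ip_fragment() prototype and the surrounding routing details are assumptions for illustration, not the exact in-tree code:

#include <sys/param.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/route.h>

/* Assumed prototype, for illustration only. */
int	ip_fragment(struct mbuf *, struct mbuf_list *, struct ifnet *, u_long);

void
example_send_fragments(struct mbuf *m, struct ifnet *ifp,
    struct sockaddr *dst, struct rtentry *rt)
{
	struct mbuf_list fml;
	struct mbuf *frag;

	ml_init(&fml);

	/* ip_fragment() consumes m: on error it has already freed m
	 * and any fragments it built and bumped the drop counter, so
	 * there is no m_freem() in this error path. */
	if (ip_fragment(m, &fml, ifp, ifp->if_mtu) != 0)
		return;

	while ((frag = ml_dequeue(&fml)) != NULL)
		(void)ifp->if_output(ifp, frag, dst, rt);
}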
|
This change should have been part of the previous anon-locking diff and is
necessary to run the top part of uvm_fault() unlocked.
ok jmatthew@
|
The name and logic come from NetBSD in order to reduce the difference
between the two code bases.
No functional change intended.
ok tb@
|
ok kettenis@
|
ok kettenis@
|
regular ARM CPU MMU re-used for I/O devices. Implementations can have a
mix of stage-2 only and stage-1/stage-2 context blocks (domains). The
IOMMU allows different ways of grouping devices into a single domain.
This implementation only supports SMMUv2, since there is basically
no relevant SMMUv1 hardware. It also only supports AArch64
pagetables, the same as our pmap. Hence lots of code was taken from
there. There is no support for 32-bit pagetables, which would have
also been needed for SMMUv1 support. I have not yet seen any
machines with SMMUv3, which will probably need a new driver.
There is some work to be done, but the code works and it's about
time it hits the tree.
ok kettenis@
|
contains information about which IOMMUs we have and how the devices are routed.
ok kettenis@
|
ok kettenis@
|
ok kettenis@
|
ok kettenis@
|
ok kettenis@
|
deraadt@ says i broke hppa :(
|
ok patrick@
|
This does not change the current behaviour, but filterops should be
invoked through filter_*() for consistency.
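For illustration, the difference is roughly the following; filter_event() and its signature are inferred from the filter_*() naming above, and the surrounding helper is hypothetical:

#include <sys/param.h>
#include <sys/event.h>

/* Hypothetical helper, not code from kern_event.c. */
void
example_check_knote(struct knote *kn)
{
	/* Direct call into the filterops hook: */
	/* if ((*kn->kn_fop->f_event)(kn, 0)) knote_activate(kn); */

	/* Preferred: go through the wrapper for consistency. */
	if (filter_event(kn, 0))
		knote_activate(kn);
}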
|
In route_input() we drop the solock() after checking the socket state.
We pass the mbuf(9) to this socket in a later loop iteration, while it
is referenced as `last'. The socket's state could be changed by a
concurrent thread while it is not locked.
Since we now perform the socket checks and the output in the same
iteration, the logic which avoided copying the mbuf(9) chain for the
last socket in the list was removed.
ok bluhm@ claudio@
|
Used by at least Skylake-SP (SKX) and Cascade Lake-SP (CLX).
Covers Xeon Scalable, Xeon D, Xeon W, Core Extreme/Core X product
families. The Scalable parts are marketed as Xeon Bronze, Silver, Gold
and Platinum.
As most of these IDs are not described in public documents from Intel,
use Skylake-ESystem.inf and KabyLakePCH-HSystem.inf from Intel's Windows
drivers to get an idea of what the names should be. The name for
0x2088 was found in an Intel-authored Linux driver.
Initial patch and much discussion from Karel Gardas.
|
also do the ethertype comparison before the conversion above.
|
nvram files used for the different Apple devices. The device tree and
the OTP hold the information about which of those we will have to use.
For now this information will simply be printed, but depending on how we
choose to do the firmware distribution we could use it for loadfirmware().
|
to use a different set of PCIE2REG registers. Accessing the "old" ones
even leads to faults. There are two surprises though. One is that it
seems that the interrupt status register always returns 0, and the other
one is that we receive the interrupts way too early, but both can be
worked around for now.
|
advice from sthen@
|