author     2009-04-14 13:07:35 +0100
committer  2009-04-15 10:01:02 +0100
commit     7fccfc00c003c855936970facdbb667bae9dbe9a (patch)
tree       73c486c5db1c042dba8d507655c5176742bbb9ca /lib/flex_array.c
parent     [ARM] 5449/1: S3C: Use disable_irq_nosync() to fix boot lockups (diff)
download   wireguard-linux-7fccfc00c003c855936970facdbb667bae9dbe9a.tar.xz / .zip
[ARM] 5450/1: Flush only the needed range when unmapping a VMA
When unmapping N pages (e.g. shared memory), the number of TLB flushes
done can be (N*PAGE_SIZE/ZAP_BLOCK_SIZE)*N, although it should be at
most N. With a PREEMPT kernel ZAP_BLOCK_SIZE is 8 pages, so there is a
noticeable performance penalty when unmapping a large VMA: the system
spends its time in flush_tlb_range().
The problem is that tlb_end_vma() is always flushing the full VMA
range. The subrange that needs to be flushed can be calculated by
tlb_remove_tlb_entry(). This approach was suggested by Hugh Dickins,
and is also used by other arches.
The speed increase is roughly 3x for 8M mappings, and even greater for
larger mappings.
Signed-off-by: Aaro Koskinen <Aaro.Koskinen@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>