From 207d04baa3591a354711e863dd90087fc75873b3 Mon Sep 17 00:00:00 2001
From: Andi Kleen
Date: Tue, 24 May 2011 17:12:29 -0700
Subject: readahead: reduce unnecessary mmap_miss increases

The original INT_MAX cap is too large; reduce it to

- avoid unnecessarily dirtying/bouncing the cache line
- restore mmap read-around faster on a changed access pattern

Background: in the mosbench exim benchmark, which does multi-threaded
page faults on a shared struct file, the ra->mmap_miss updates are
found to cause excessive cache line bouncing on tmpfs. The ra state
updates are needless for tmpfs because it disables readahead entirely
(shmem_backing_dev_info.ra_pages == 0).

Tested-by: Tim Chen
Signed-off-by: Andi Kleen
Signed-off-by: Wu Fengguang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/filemap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index c974a2863897..e5131392d32e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1566,7 +1566,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 		return;
 	}
 
-	if (ra->mmap_miss < INT_MAX)
+	/* Avoid banging the cache line if not needed */
+	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*
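
For illustration, below is a minimal userspace sketch of the saturating-counter
pattern this patch applies; it is not the kernel code itself. The ra_state
struct and the record_miss()/record_hit() helpers are hypothetical names, and
MMAP_LOTSAMISS = 100 mirrors the mm/filemap.c constant at the time of this
commit. Once the counter saturates at MMAP_LOTSAMISS * 10, the fault path only
reads the shared field instead of storing to it, so concurrent faulting
threads can keep the cache line in shared state rather than bouncing it. And
because the cap is low (1000 rather than INT_MAX), the decrement taken on
page-cache hits can pull the counter back under MMAP_LOTSAMISS quickly when
the access pattern changes, re-enabling read-around.

#include <stdio.h>

#define MMAP_LOTSAMISS 100	/* mirrors the kernel constant */

struct ra_state {
	unsigned int mmap_miss;	/* hot field shared by faulting threads */
};

/* Called on a page-cache miss (cf. do_sync_mmap_readahead()). */
static void record_miss(struct ra_state *ra)
{
	/*
	 * Saturate well below INT_MAX: once capped, this is a read-only
	 * check, skipping the store that would dirty (and bounce) the
	 * cache line on every fault.
	 */
	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
		ra->mmap_miss++;
}

/* Called on a page-cache hit (cf. do_async_mmap_readahead()). */
static void record_hit(struct ra_state *ra)
{
	/*
	 * With the low cap, a run of hits quickly drags the counter back
	 * under MMAP_LOTSAMISS, restoring read-around for the new access
	 * pattern.
	 */
	if (ra->mmap_miss > 0)
		ra->mmap_miss--;
}

int main(void)
{
	struct ra_state ra = { .mmap_miss = 0 };
	int i;

	for (i = 0; i < 5000; i++)
		record_miss(&ra);
	printf("after 5000 misses: mmap_miss = %u (capped)\n", ra.mmap_miss);

	for (i = 0; i < 950; i++)
		record_hit(&ra);
	printf("after 950 hits: mmap_miss = %u (below MMAP_LOTSAMISS)\n",
	       ra.mmap_miss);
	return 0;
}

Running the sketch prints a counter capped at 1000 after 5000 misses, then 50
after 950 hits, showing how the reduced cap lets the hit path recover
read-around far sooner than an INT_MAX cap would.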