author	Josef Bacik <josef@toxicpanda.com>	2018-09-28 13:45:41 -0400
committer	Jens Axboe <axboe@kernel.dk>	2018-09-28 11:47:29 -0600
commit	22ed8a93adc7a9cbb2c0a0fc1d7f10068a1f84c1 (patch)
tree	2e1b6b870178f979740217aad4a0644c69a3811b /block
parent	blk-iolatency: deal with nr_requests == 1 (diff)
blk-iolatency: deal with small samples
There is logic to keep cgroups that haven't done a lot of IO in the most
recent scale window from being punished for over-active higher priority
groups. However, for things like SSDs, where the windows are pretty short,
we end up with small numbers of samples, so 5% of the samples comes out to
0 if there aren't enough of them. Make the floor 1 sample to keep us from
improperly bailing out of scaling down.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
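To make the arithmetic concrete, here is a small user-space sketch (not kernel code; thresh_before() and thresh_after() are illustrative names, and plain 64-bit division stands in for div64_u64()) of the 5% threshold computation before and after this patch. With integer division, any window of fewer than 20 samples yields a threshold of 0; the max(1ULL, ...) added by the patch floors it at 1.

/*
 * User-space sketch of the 5% sample threshold, before and after the
 * patch. Plain 64-bit division stands in for div64_u64(); the helper
 * names are illustrative only.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t thresh_before(uint64_t window_samples)
{
	/* 5% with integer division: truncates to 0 below 20 samples */
	return window_samples * 5 / 100;
}

static uint64_t thresh_after(uint64_t window_samples)
{
	uint64_t t = window_samples * 5 / 100;

	/* floor the threshold at 1, as max(1ULL, div64_u64(...)) does */
	return t < 1 ? 1 : t;
}

int main(void)
{
	/* short windows on fast devices can hold very few total samples */
	uint64_t samples[] = { 1, 5, 12, 19, 20, 100 };

	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("nr_samples=%3" PRIu64 "  before=%" PRIu64 "  after=%" PRIu64 "\n",
		       samples[i], thresh_before(samples[i]),
		       thresh_after(samples[i]));
	return 0;
}

For example, a 12-sample window gives a threshold of 0 before the patch, so the check iolat->nr_samples <= samples_thresh in the hunk below can only match a group with zero samples in that window; with the floor of 1 it also covers a group that issued a single IO.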
Diffstat (limited to 'block')
-rw-r--r--	block/blk-iolatency.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 8daea7a4fe49..e7be77b0ce8b 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -366,7 +366,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 		 * scale down event.
 		 */
 		samples_thresh = lat_info->nr_samples * 5;
-		samples_thresh = div64_u64(samples_thresh, 100);
+		samples_thresh = max(1ULL, div64_u64(samples_thresh, 100));
 		if (iolat->nr_samples <= samples_thresh)
 			return;
 	}