author: Josef Bacik <josef@toxicpanda.com> 2020-01-17 09:07:38 -0500
committer: David Sterba <dsterba@suse.com> 2020-01-31 14:01:55 +0100
commit: a7a63acc6575ded6f48ab293e275e8b903325e54 (patch)
tree: 1509f3939da6c5787fcd31c1e831f505e8cad4bd /fs/btrfs
parent: btrfs: Correctly handle empty trees in find_first_clear_extent_bit (diff)
btrfs: fix force usage in inc_block_group_ro
For some reason we have been translating the do_chunk_alloc flag passed to btrfs_inc_block_group_ro() into the force argument of inc_block_group_ro(), but these are two different things.

force for inc_block_group_ro() is used when we are forcing the block group read-only no matter what, for example when the underlying chunk is marked read-only. In that case we must not do the space check, because the block group has to become read-only regardless.

btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that we need to pre-allocate a chunk before marking the block group read-only. This has nothing to do with forcing, and in fact we _always_ want to do the space check in that case, so unconditionally pass false for force there. Then fix up inc_block_group_ro() to honor force as it is expected and documented to do.

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
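To make the semantics concrete, here is a minimal userspace sketch (not kernel code) of the check that inc_block_group_ro() performs with this patch applied. The struct space_info_sketch type, the may_set_ro() helper, and the sample numbers are hypothetical stand-ins; only the "force || used + num_bytes <= total_bytes" logic mirrors the hunk in block-group.c below.

/* Simplified model of the read-only check restored by this patch. */
#include <stdbool.h>
#include <stdio.h>

struct space_info_sketch {
	unsigned long long total_bytes;	/* capacity of the space_info */
	unsigned long long bytes_used;	/* stand-in for sinfo_used */
};

/* Returns true when the block group may be marked read-only. */
static bool may_set_ro(const struct space_info_sketch *sinfo,
		       unsigned long long num_bytes, bool force)
{
	/*
	 * force bypasses the free-space headroom check entirely, e.g.
	 * when the underlying chunk is already read-only; otherwise we
	 * only proceed if enough free space remains as a buffer.
	 */
	return force || sinfo->bytes_used + num_bytes <= sinfo->total_bytes;
}

int main(void)
{
	struct space_info_sketch sinfo = {
		.total_bytes = 1024,
		.bytes_used = 1000,
	};

	/* Not enough headroom: fails without force, succeeds with it. */
	printf("force=0: %d\n", may_set_ro(&sinfo, 100, false));	/* 0 */
	printf("force=1: %d\n", may_set_ro(&sinfo, 100, true));	/* 1 */
	return 0;
}

With force set, the headroom check is skipped, which is what the first hunk restores; the second hunk stops btrfs_inc_block_group_ro() from reusing do_chunk_alloc as force and makes it always pass false, so its callers always go through the space check.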
Diffstat (limited to 'fs/btrfs')
-rw-r--r--  fs/btrfs/block-group.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 14851584e245..c12e91ba7d7a 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1213,7 +1213,7 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
* Here we make sure if we mark this bg RO, we still have enough
* free space as buffer.
*/
- if (sinfo_used + num_bytes <= sinfo->total_bytes) {
+ if (force || (sinfo_used + num_bytes <= sinfo->total_bytes)) {
sinfo->bytes_readonly += num_bytes;
cache->ro++;
list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
@@ -2225,7 +2225,7 @@ again:
}
}
- ret = inc_block_group_ro(cache, !do_chunk_alloc);
+ ret = inc_block_group_ro(cache, 0);
if (!do_chunk_alloc)
goto unlock_out;
if (!ret)