author    Bob Peterson <rpeterso@redhat.com>  2016-12-16 08:01:28 -0600
committer Bob Peterson <rpeterso@redhat.com>  2017-01-05 14:47:36 -0500
commit    2fcf5cc3be06126f9aa2430ca6d739c8b3c5aaf5 (patch)
tree      a8cdf4b9294c328769e6240e6b1035e312633262 /fs/gfs2/ops_fstype.c
parent    GFS2: Fix reference to ERR_PTR in gfs2_glock_iter_next (diff)
GFS2: Limit number of transaction blocks requested for truncates
This patch limits the number of transaction blocks requested during file truncates. If we have very large multi-terabyte files and want to delete or truncate them, they might span so many resource groups that we overflow the journal blocks and cause an assert failure. By limiting the number of blocks in the transaction, we prevent this overflow and give other running processes time to do transactions.

The limiting factor I chose is sd_log_thresh2, which is currently set to 4/5ths of the journal. This same ratio is used in function gfs2_ail_flush_reqd to determine when a log flush is required. If we made the maximum value less than this, we could get into an infinite hang whereby the log stops moving: the number of used blocks stays below the threshold while the iterative loop needs more, but since we are under the threshold, the log daemon never starts any IO on the log.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
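A minimal sketch of the capping idea described above (the actual diff is not shown on this page). sd_log_thresh2 and gfs2_trans_begin() come from the commit message and the gfs2 code; blks_needed and revokes are illustrative placeholders, not the exact variables used by the patch:

	/* Sketch only: cap the journal blocks requested for one truncate pass
	 * at sd_log_thresh2 (4/5 of the journal), so a huge truncate cannot
	 * overflow the journal in a single transaction. Names blks_needed and
	 * revokes are assumed for illustration.
	 */
	unsigned int jblocks_rqsted = blks_needed;
	unsigned int thresh2 = atomic_read(&sdp->sd_log_thresh2);

	if (jblocks_rqsted > thresh2)
		jblocks_rqsted = thresh2;	/* never ask for more than 4/5 of the journal */

	error = gfs2_trans_begin(sdp, jblocks_rqsted, revokes);
	if (error)
		return error;

Because each pass requests at most thresh2 blocks, the truncate proceeds iteratively over many smaller transactions, and keeping the cap at (not below) the gfs2_ail_flush_reqd threshold ensures the log daemon still flushes between passes.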
Diffstat (limited to 'fs/gfs2/ops_fstype.c')
0 files changed, 0 insertions, 0 deletions