author	2023-09-29 20:31:27 -0700
committer	2023-10-18 14:34:13 -0700
commit	054a9f7ccd0a60607fb9bbe1e06ca671494971bf
tree	f0ec3ff903dfa8bc7dd804ae91b51cca9263b142 /mm/migrate.c
parent	shmem: shmem_acct_blocks() and shmem_inode_acct_blocks()
shmem: move memcg charge out of shmem_add_to_page_cache()
Extract shmem's memcg charging out of shmem_add_to_page_cache(): it is
misleadingly done there, because many calls are dealing with a swapcache
page, whose memcg is nowadays always remembered while swapped out, then
the charge re-levied when it's brought back into swapcache.
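The shape of the extraction, as a hedged sketch (signatures abridged and
the xarray insertion elided; not the literal patch):

	/* Before: charging buried in the helper, skipped for swapcache. */
	static int shmem_add_to_page_cache(struct folio *folio,
			struct address_space *mapping, pgoff_t index,
			void *expected, gfp_t gfp, struct mm_struct *charge_mm)
	{
		int error;

		if (!folio_test_swapcache(folio)) {
			error = mem_cgroup_charge(folio, charge_mm, gfp);
			if (error)
				return error;
		}
		/* ... insert folio into mapping's xarray ... */
		return 0;
	}

	/* After: the helper only inserts; callers charge beforehand. */
	static int shmem_add_to_page_cache(struct folio *folio,
			struct address_space *mapping, pgoff_t index,
			void *expected, gfp_t gfp)
	{
		/* ... insert folio into mapping's xarray ... */
		return 0;
	}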
Temporarily move it back up to the shmem_get_folio_gfp() level, where the
memcg was charged before v5.8; but the next commit goes on to move it back
down to a new home.
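A minimal sketch of the interim call site in shmem_get_folio_gfp(),
assuming a freshly allocated (non-swapcache) folio; the "unacct" unwind
label is hypothetical and error paths are elided:

	folio = shmem_alloc_folio(gfp, info, index);
	if (!folio)
		return -ENOMEM;

	error = mem_cgroup_charge(folio, fault_mm, gfp);  /* charged here now */
	if (error)
		goto unacct;

	error = shmem_add_to_page_cache(folio, mapping, index, NULL, gfp);
	if (error)
		goto unacct;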
In making this change, it becomes clear that shmem_swapin_folio() does not
need to know the vma, just the fault mm (if any): call it fault_mm rather
than charge_mm; let mem_cgroup_charge() decide whom to charge.
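The corresponding signature change, sketched with argument lists as they
stood around this series:

	/* Before: the vma was passed along only to find a memcg to charge. */
	static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
			struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
			struct vm_area_struct *vma, vm_fault_t *fault_type);

	/* After: just the faulting mm, NULL when there is no fault;
	 * mem_cgroup_charge() picks the charge target either way. */
	static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
			struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
			struct mm_struct *fault_mm, vm_fault_t *fault_type);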
Link: https://lkml.kernel.org/r/4b2143c5-bf32-64f0-841-81a81158dac@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Carlos Maiolino <cem@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
0 files changed, 0 insertions, 0 deletions