path: root/drivers/mtd/ubi/fastmap.c
Age  Commit message  (Author; files changed, lines -/+)
2022-09-21  ubi: fastmap: Use the bitmap API to allocate bitmaps  (Christophe JAILLET; 1 file, -6/+4)
Use bitmap_zalloc()/bitmap_free() instead of hand-writing them. It is less verbose and it improves the semantics.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
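For illustration, a minimal sketch of the pattern this change applies (not the actual diff; the helper names below are made up, only bitmap_zalloc()/bitmap_free() are real kernel API):

```c
#include <linux/bitmap.h>
#include <linux/slab.h>

/* Hypothetical helpers showing the before/after shape of the change */
static unsigned long *alloc_peb_bitmap(unsigned int peb_count)
{
	/* Before: kcalloc(BITS_TO_LONGS(peb_count), sizeof(unsigned long), GFP_KERNEL) */
	return bitmap_zalloc(peb_count, GFP_KERNEL);	/* sized in bits, zero-filled */
}

static void free_peb_bitmap(unsigned long *bm)
{
	/* Before: kfree(bm) */
	bitmap_free(bm);				/* pairs with bitmap_zalloc() */
}
```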
2022-05-27  ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty  (Zhihao Cheng; 1 file, -11/+0)
There are at least 6 PEBs reserved on a UBI device:
 1. EBA_RESERVED_PEBS[1]
 2. WL_RESERVED_PEBS[1]
 3. UBI_LAYOUT_VOLUME_EBS[2]
 4. MIN_FASTMAP_RESERVED_PEBS[2]

When all UBI volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS + WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1]) free PEBs. Since commit f9c34bb529975fe ("ubi: Fix producing anchor PEBs") and commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules") were applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1] - FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB to fill the pool and wl_pool; after filling the pool, wl_pool is always empty. So UBI can get stuck in an infinite loop:

 ubi_thread                     system_wq
 wear_leveling_worker <------------------------------------------------------
   get_peb_for_wl                                                            |
   // fm_wl_pool, used = size = 0                                            |
   schedule_work(&ubi->fm_work)                                              |
                                update_fastmap_work_fn                       |
                                  ubi_update_fastmap                         |
                                    ubi_refill_pools                         |
                                    // ubi->free_count - ubi->beb_rsvd_pebs < 5
                                    // wl_pool is not filled with any PEBs   |
                                    schedule_erase(old_fm_anchor)            |
                                    ubi_ensure_anchor_pebs                   |
                                      __schedule_ubi_work(wear_leveling_worker)
                                                                             |
                                __erase_worker                               |
                                  ensure_wear_leveling                       |
                                    __schedule_ubi_work(wear_leveling_worker) -

which causes high CPU usage of ubi_bgt:

 top - 12:10:42 up 5 min, 2 users, load average: 1.76, 0.68, 0.27
 Tasks: 123 total, 3 running, 54 sleeping, 0 stopped, 0 zombie
   PID USER  PR  NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
  1589 root  20   0    0   0   0 R 45.0  0.0 0:38.86 ubi_bgt0d
   319 root  20   0    0   0   0 I 15.2  0.0 0:15.29 kworker/0:3-eve
   371 root  20   0    0   0   0 I 14.9  0.0 0:12.85 kworker/3:3-eve
    20 root  20   0    0   0   0 I 11.3  0.0 0:05.33 kworker/1:0-eve
   202 root  20   0    0   0   0 I 11.3  0.0 0:04.93 kworker/2:3-eve

In commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules") there are three key changes:
 1) Choose the fastmap anchor when the most free PEBs are available.
 2) Enable anchor move within the anchor area again as it is useful for distributing wear.
 3) Import a candidate fm anchor and check this PEB's erase count during wear leveling. If the wear leveling limit is exceeded, use the used anchor area PEB with the lowest erase count to replace it.

The anchor candidate can be removed; we can check the fm_anchor PEB's erase count during wear leveling. Fix it by:
 1) Removing 'fm_next_anchor' and checking 'fm_anchor' during wear leveling.
 2) Preferentially filling one free PEB into fm_wl_pool on condition that ubi->free_count > ubi->beb_rsvd_pebs, then trying to reserve enough free count for fastmap non-anchor PEBs after the above prerequisite is met.

Then there is at least 1 PEB in the pool and 1 PEB in wl_pool after calling ubi_refill_pools() with all erase works done. A reproducer is available in [Link].

Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs ... rules")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
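A toy model of the refill order described in the fix (purely illustrative; the real logic lives in ubi_refill_pools() and tracks more state than shown here):

```c
/*
 * Toy model, not kernel code: decide whether another free PEB may be
 * handed out instead of being kept as fastmap reserve.
 */
static int may_take_free_peb(int free_count, int beb_rsvd_pebs,
			     int wl_pool_used, int fm_rsvd_pebs)
{
	/* 1) make sure fm_wl_pool gets at least one PEB so ubi_bgt never spins */
	if (wl_pool_used == 0)
		return free_count > beb_rsvd_pebs;

	/* 2) only then hold back the PEBs reserved for fastmap non-anchor use */
	return free_count - beb_rsvd_pebs > fm_rsvd_pebs;
}
```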
2022-01-10  ubi: fastmap: Return error code if memory allocation fails in add_aeb()  (Zhihao Cheng; 1 file, -9/+19)
Abort fastmap scanning and return an error code if memory allocation fails in add_aeb(). Otherwise UBI will get wrong PEB statistics after scanning.
Fixes: dbb7d2a88d2a7b ("UBI: Add fastmap core")
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
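A hedged sketch of the resulting error propagation (UBI-internal types from drivers/mtd/ubi/ubi.h are assumed; the real add_aeb() may differ in detail):

```c
/* Sketch: return -ENOMEM instead of silently dropping the entry */
static int add_aeb(struct ubi_attach_info *ai, struct list_head *list,
		   int pnum, int ec, int scrub)
{
	struct ubi_ainf_peb *aeb;

	aeb = ubi_alloc_aeb(ai, pnum, ec);
	if (!aeb)
		return -ENOMEM;		/* callers abort fastmap scanning on this */

	aeb->lnum = -1;
	aeb->scrub = scrub;
	list_add_tail(&aeb->u.list, list);

	return 0;
}
```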
2020-06-02  ubi: Select fastmap anchor PEBs considering wear level rules  (Arne Edholm; 1 file, -0/+11)
There is a risk that the fastmap anchor PEB alternates between just two PEBs: the current anchor and the previous anchor that was just deleted. As the fastmap pools get the first take on free PEBs, the pools may leave no free PEBs to be selected as the new anchor, resulting in the two PEBs alternating. If the anchor PEBs get a high erase count, the PEBs will not be used by the pools but remain in ubi->free, further increasing the likelihood they will be used as anchors. Getting stuck using only a couple of PEBs continuously results in uneven wear, eventually leading to failure.

To fix this:
 - Choose the fastmap anchor when the most free PEBs are available. This is during rebuilding of the fastmap pools, after the unused pool PEBs are added to ubi->free but before the pools are populated again from the free PEBs. Also reserve an additional second-best PEB as a candidate for the next time the fastmap anchor is updated. If a better PEB is found the next time the fastmap anchor is updated, the candidate is made available for building the pools.
 - Enable anchor move within the anchor area again as it is useful for distributing wear.
 - The anchor candidate for the next fastmap update is the most suited free PEB. Check this PEB's erase count during wear leveling. If the wear leveling limit is exceeded, the PEB is considered unsuitable for now. As all other non-used anchor area PEBs should be even worse, free up the used anchor area PEB with the lowest erase count.

Signed-off-by: Arne Edholm <arne.edholm@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2020-01-19  ubi: Fix an error pointer dereference in error handling code  (Dan Carpenter; 1 file, -9/+12)
If "seen_pebs = init_seen(ubi);" fails then "seen_pebs" is an error pointer and we try to kfree() it which results in an Oops. This patch re-arranges the error handling so now it only frees things which have been allocated successfully. Fixes: daef3dd1f0ae ("UBI: Fastmap: Add self check to detect absent PEBs") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2020-01-16  ubi: fastmap: Fix inverted logic in seen selfcheck  (Sascha Hauer; 1 file, -1/+1)
set_seen() sets the bit corresponding to the PEB number in the bitmap, so when self_check_seen() wants to find PEBs that haven't been seen we have to print the PEBs that have their bit cleared, not the ones which have it set. Fixes: 5d71afb00840 ("ubi: Use bitmaps in Fastmap self-check code") Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Richard Weinberger <richard@nod.at>
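In code terms, roughly (a sketch of the two helpers as described; the real versions also honor the fastmap debug switch and skip unused PEBs):

```c
#include <linux/bitmap.h>

static void set_seen(struct ubi_device *ubi, int pnum, unsigned long *seen)
{
	set_bit(pnum, seen);			/* this PEB was referenced by the fastmap */
}

static int self_check_seen(struct ubi_device *ubi, unsigned long *seen)
{
	int pnum, ret = 0;

	for (pnum = 0; pnum < ubi->peb_count; pnum++) {
		if (!test_bit(pnum, seen)) {	/* bit *cleared* means the PEB was missed */
			ubi_err(ubi, "self-check failed for PEB %d, fastmap didn't see it", pnum);
			ret = -EINVAL;
		}
	}
	return ret;
}
```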
2019-11-17  ubi: Fix producing anchor PEBs  (Sascha Hauer; 1 file, -9/+5)
When a new fastmap is about to be written UBI must make sure it has a free block available for a fastmap anchor. For this ubi_update_fastmap() calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()"): since that commit the wear leveling code is blocked and can no longer produce free PEBs. UBI then more often than not falls back to writing the new fastmap anchor to the same block it was already on, which means the same erase block gets erased during each fastmap write and wears out quite fast. As the locking prevents us from producing the anchor PEB when we actually need it, this patch changes the strategy for creating the anchor PEB. We no longer create it on demand right before we want to write a fastmap, but instead we create an anchor PEB right after we have written a fastmap. This gives us enough time to produce a new anchor PEB before it is needed. To make sure we have an anchor PEB for the very first fastmap write we call ubi_ensure_anchor_pebs() during initialisation as well.
Fixes: 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 286  (Thomas Gleixner; 1 file, -10/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation version 2 this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 97 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190529141901.025053186@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-06-07  ubi: fastmap: Check each mapping only once  (Richard Weinberger; 1 file, -0/+20)
Maintain a bitmap to keep track of which LEB->PEB mapping was checked already. That way we have to read back VID headers only once. Signed-off-by: Richard Weinberger <richard@nod.at>
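Roughly the idea, with hypothetical names (the real code sits in the fastmap attach path and does more than shown; the `checked` bitmap name is an assumption):

```c
/* Sketch: verify each LEB->PEB mapping at most once, tracked in a bitmap */
static int check_mapping_once(struct ubi_device *ubi, int pnum,
			      unsigned long *checked, struct ubi_vid_io_buf *vidb)
{
	int err;

	if (test_bit(pnum, checked))
		return 0;			/* VID header already read and verified */

	err = ubi_io_read_vid_hdr(ubi, pnum, vidb, 0);
	if (err && err != UBI_IO_BITFLIPS)
		return err;

	/* ... compare the header's vol_id/lnum against the fastmap's mapping ... */

	set_bit(pnum, checked);			/* never read this VID header again */
	return 0;
}
```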
2018-01-17  ubi: fastmap: Clean up the initialization of pointer p  (Colin Ian King; 1 file, -2/+1)
The pointer p is being initialized with one value and a few lines later being set to a newer replacement value. Clean up the code by using the latter assignment to p as the initial value. Cleans up clang warning: drivers/mtd/ubi/fastmap.c:217:19: warning: Value stored to 'p' during its initialization is never read Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-17  ubi: fastmap: Use kmem_cache_free to deallocate memory  (Pan Bian; 1 file, -1/+1)
Memory allocated by kmem_cache_alloc() should not be deallocated with kfree(). Use kmem_cache_free() instead. Signed-off-by: Pan Bian <bianpan2016@163.com> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
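The rule being enforced, as a short sketch (the aeb_slab_cache field name follows struct ubi_attach_info; treat the helper names as assumptions):

```c
#include <linux/slab.h>

/* Sketch: objects from a dedicated slab cache must be returned to that cache */
static struct ubi_ainf_peb *aeb_get(struct ubi_attach_info *ai)
{
	return kmem_cache_alloc(ai->aeb_slab_cache, GFP_KERNEL);
}

static void aeb_put(struct ubi_attach_info *ai, struct ubi_ainf_peb *aeb)
{
	kmem_cache_free(ai->aeb_slab_cache, aeb);	/* not kfree(aeb) */
}
```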
2017-09-13  ubi: fastmap: fix spelling mistake: "invalidiate" -> "invalidate"  (Colin Ian King; 1 file, -1/+1)
Trivial fix to spelling mistake in ubi_err error message Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08  ubi: fastmap: Fix slab corruption  (Rabin Vincent; 1 file, -4/+29)
Booting with UBI fastmap and SLUB debugging enabled results in the following splats. The problem is that ubi_scan_fastmap() moves the fastmap blocks from the scan_ai (allocated in scan_fast()) to the ai allocated in ubi_attach(). This results in two problems:
 - When the scan_ai is freed, aebs which were allocated from its slab cache are still in use.
 - When the other ai is being destroyed in destroy_ai(), the arguments to the kmem_cache_free() call are incorrect since aebs on its ->fastmap list were allocated with a slab cache from a different ai.

Fix this by making a copy of the aebs in ubi_scan_fastmap() instead of moving them.

 =============================================================================
 BUG ubi_aeb_slab_cache (Not tainted): Objects remaining in ubi_aeb_slab_cache on __kmem_cache_shutdown()
 -----------------------------------------------------------------------------
 INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000080
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<8026c47c>] (slab_err+0x78/0x88)
 [<8026c47c>] (slab_err) from [<802735bc>] (__kmem_cache_shutdown+0x180/0x3e0)
 [<802735bc>] (__kmem_cache_shutdown) from [<8024e13c>] (shutdown_cache+0x1c/0x60)
 [<8024e13c>] (shutdown_cache) from [<8024ed64>] (kmem_cache_destroy+0x19c/0x20c)
 [<8024ed64>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
 [<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
 [<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 INFO: Object 0xb33d7e88 @offset=3720
 INFO: Allocated in scan_peb+0x608/0x81c age=72 cpu=1 pid=118
 	kmem_cache_alloc+0x3b0/0x43c
 	scan_peb+0x608/0x81c
 	ubi_attach+0x124/0x450
 	ubi_attach_mtd_dev+0x60c/0xff8
 	ctrl_cdev_ioctl+0x110/0x2b8
 	do_vfs_ioctl+0xac/0xa00
 	SyS_ioctl+0x3c/0x64
 	ret_fast_syscall+0x0/0x1c
 kmem_cache_destroy ubi_aeb_slab_cache: Slab cache still has objects
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<8024ed80>] (kmem_cache_destroy+0x1b8/0x20c)
 [<8024ed80>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
 [<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
 [<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 cache_from_obj: Wrong slab cache. ubi_aeb_slab_cache but object is from ubi_aeb_slab_cache
 ------------[ cut here ]------------
 WARNING: CPU: 1 PID: 118 at mm/slab.h:354 kmem_cache_free+0x39c/0x450
 Modules linked in:
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<80120e40>] (__warn+0xf4/0x10c)
 [<80120e40>] (__warn) from [<80120f20>] (warn_slowpath_null+0x28/0x30)
 [<80120f20>] (warn_slowpath_null) from [<80271fe0>] (kmem_cache_free+0x39c/0x450)
 [<80271fe0>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
 [<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
 [<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 ---[ end trace 2bd8396277fd0a0b ]---
 =============================================================================
 BUG ubi_aeb_slab_cache (Tainted: G B W ): page slab pointer corrupt.
 -----------------------------------------------------------------------------
 INFO: Allocated in scan_peb+0x608/0x81c age=104 cpu=1 pid=118
 	kmem_cache_alloc+0x3b0/0x43c
 	scan_peb+0x608/0x81c
 	ubi_attach+0x124/0x450
 	ubi_attach_mtd_dev+0x60c/0xff8
 	ctrl_cdev_ioctl+0x110/0x2b8
 	do_vfs_ioctl+0xac/0xa00
 	SyS_ioctl+0x3c/0x64
 	ret_fast_syscall+0x0/0x1c
 INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000081
 INFO: Object 0xb33d7e88 @offset=3720 fp=0xb33d7da0
 Redzone b33d7e80: cc cc cc cc cc cc cc cc                          ........
 Object  b33d7e88: 02 00 00 00 01 00 00 00 00 f0 ff 7f ff ff ff ff  ................
 Object  b33d7e98: 00 00 00 00 00 00 00 00 bd 16 00 00 00 00 00 00  ................
 Object  b33d7ea8: 00 01 00 00 00 02 00 00 00 00 00 00 00 00 00 00  ................
 Redzone b33d7eb8: cc cc cc cc                                      ....
 Padding b33d7f60: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G B W 4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<80271770>] (free_debug_processing+0x320/0x3c4)
 [<80271770>] (free_debug_processing) from [<80271ad0>] (__slab_free+0x2bc/0x430)
 [<80271ad0>] (__slab_free) from [<80272024>] (kmem_cache_free+0x3e0/0x450)
 [<80272024>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
 [<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
 [<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 FIX ubi_aeb_slab_cache: Object at 0xb33d7e88 not freed

Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-28  ubi: fastmap: Fix add_vol() return value test in ubi_attach_fastmap()  (Boris Brezillon; 1 file, -5/+5)
Commit e96a8a3bb671 ("UBI: Fastmap: Do not add vol if it already exists") introduced a bug by changing the possible error codes returned by add_vol(): - this function no longer returns NULL in case of allocation failure but return ERR_PTR(-ENOMEM) - when a duplicate entry in the volume RB tree is found it returns ERR_PTR(-EEXIST) instead of ERR_PTR(-EINVAL) Fix the tests done on add_vol() return val to match this new behavior. Fixes: e96a8a3bb671 ("UBI: Fastmap: Do not add vol if it already exists") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Acked-by: Sheng Yong <shengyong1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
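The corrected check, sketched as a fragment (variable, argument, and label names here follow the description or are assumed; only the error-handling shape matters):

```c
#include <linux/err.h>

	/* add_vol() now returns ERR_PTR(-ENOMEM) / ERR_PTR(-EEXIST), never NULL */
	av = add_vol(ai, vol_id, used_ebs, data_pad, vol_type, last_eb_bytes);
	if (IS_ERR(av)) {				/* was: if (!av) */
		if (PTR_ERR(av) == -EEXIST)		/* duplicate volume in the fastmap */
			ubi_err(ubi, "volume (ID %i) already exists", vol_id);
		goto fail_bad;				/* treat the fastmap as bad */
	}
```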
2016-10-20  ubi: fix swapped arguments to call to ubi_alloc_aeb  (Colin Ian King; 1 file, -1/+1)
Static analysis by CoverityScan detected the ec and pnum arguments are in the wrong order on a call to ubi_alloc_aeb. Swap the order to fix this. Fixes: 91f4285fe389a27 ("UBI: provide helpers to allocate and free aeb elements") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  ubi: Fix Fastmap's update_vol()  (Richard Weinberger; 1 file, -0/+1)
Usually Fastmap is free to consider every PEB in one of the pools as newer than the existing PEB, since PEBs in a pool are by definition newer than everything else. But update_vol() missed the case that a pool can contain more than one candidate.
Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  ubi: Fix races around ubi_refill_pools()  (Richard Weinberger; 1 file, -4/+10)
When writing a new Fastmap the first thing that happens is refilling the pools in memory. At this stage it is possible that new PEBs from the new pools already get claimed and written with data. If this happens before the new Fastmap data structure hits the flash and we face a power cut, the freshly written PEB will not be scanned and will go unnoticed. Solve the issue by locking the pools until Fastmap is written.
Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: introduce the VID buffer concept  (Boris Brezillon; 1 file, -28/+43)
Currently, all VID headers are allocated and freed using the ubi_zalloc_vid_hdr() and ubi_free_vid_hdr() functions. These functions make sure to align the allocation on ubi->vid_hdr_alsize and adjust the vid_hdr pointer to match the ubi->vid_hdr_shift requirements. This works fine, but is a bit convoluted. Moreover, the future introduction of LEB consolidation (needed to support MLC/TLC NANDs) will allow a VID buffer to contain more than one VID header. Hence the creation of a ubi_vid_io_buf struct to attach extra information to the VID header. We currently only store the actual pointer of the underlying buffer, but will soon add the number of VID headers contained in the buffer.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
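Conceptually the wrapper looks like this (a sketch based on the description above; the exact definition lives in drivers/mtd/ubi/ubi.h):

```c
/* A VID buffer ties the raw I/O buffer to the VID header stored inside it */
struct ubi_vid_io_buf {
	struct ubi_vid_hdr *hdr;	/* points into @buffer, adjusted for ubi->vid_hdr_shift */
	void *buffer;			/* underlying allocation, aligned to ubi->vid_hdr_alsize */
};
```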
2016-10-02  UBI: provide an helper to query LEB information  (Boris Brezillon; 1 file, -2/+6)
This is part of our attempt to hide EBA internals from other parts of the implementation in order to easily adapt it to the MLC needs. Here we are creating a ubi_eba_leb_desc struct to hide the way we keep track of the LEB to PEB mapping.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: provide helpers to allocate and free aeb elements  (Boris Brezillon; 1 file, -18/+10)
This not only hides the aeb allocation internals (which is always good in case we ever want to change the allocation system), but also helps us factorize the initialization of some common fields (ec and pnum). Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: fastmap: use ubi_io_{read, write}_data() instead of ubi_io_{read, write}()  (Boris Brezillon; 1 file, -5/+5)
ubi_io_{read,write}_data() are wrappers around ubi_io_{read/write}() that are used to read/write eraseblock payload data, which is exactly what fastmap does when calling ubi_io_{read,write}(). Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: fastmap: use ubi_rb_for_each_entry() in unmap_peb()  (Boris Brezillon; 1 file, -6/+2)
Use the ubi_rb_for_each_entry() macro instead of open-coding it. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: factorize code used to manipulate volumes at attach time  (Boris Brezillon; 1 file, -24/+3)
Volume creation/search code is duplicated in a few places (fastmap and non fastmap code). Create some helpers to factorize the code. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: fastmap: scrub PEB when bitflips are detected in a free PEB EC header  (Boris Brezillon; 1 file, -3/+4)
scan_pool() does not mark the PEB for scrubbing when bitflips are detected in the EC header of a free PEB (VID header region left at 0xff). Make sure we scrub the PEB in this case.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: fastmap: avoid multiple be32_to_cpu() when unneccesary  (Boris Brezillon; 1 file, -4/+4)
process_pool_aeb() does several times the be32_to_cpu(new_vh->vol_id) operation. Create a temporary variable and do it once. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  UBI: fastmap: use ubi_find_volume() instead of open coding it  (Boris Brezillon; 1 file, -20/+3)
process_pool_aeb() re-implements the logic found in ubi_find_volume(). Call ubi_find_volume() to avoid this duplication. Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29  ubi: Use bitmaps in Fastmap self-check code  (Richard Weinberger; 1 file, -9/+11)
...don't waste memory by allocating one sizeof(int) per PEB. Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29  ubi: Check whether the Fastmap anchor matches the super block  (Richard Weinberger; 1 file, -0/+7)
This helps to detect cases where a user copies a UBI image to another target with different bad blocks.
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29  ubi: Rework Fastmap attach base code  (Richard Weinberger; 1 file, -3/+33)
Introduce a new list to the UBI attach information object to be able to deal better with old and corrupted Fastmap eraseblocks. Also move more Fastmap specific code into fastmap.c. Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29  ubi: Fix whitespace issue in count_fastmap_pebs()  (Richard Weinberger; 1 file, -1/+1)
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-05-24  UBI: Fix static volume checks when Fastmap is used  (Richard Weinberger; 1 file, -0/+1)
Ezequiel reported that he's facing UBI going into read-only mode after a power cut. It turned out that this behavior happens only when updating a static volume is interrupted and Fastmap is used. A possible trace can look like:

 ubi0 warning: ubi_io_read_vid_hdr [ubi]: no VID header found at PEB 2323, only 0xFF bytes
 ubi0 warning: ubi_eba_read_leb [ubi]: switch to read-only mode
 CPU: 0 PID: 833 Comm: ubiupdatevol Not tainted 4.6.0-rc2-ARCH #4
 Hardware name: SAMSUNG ELECTRONICS CO., LTD. 300E4C/300E5C/300E7C/NP300E5C-AD8AR, BIOS P04RAP 10/15/2012
  0000000000000286 00000000eba949bd ffff8800c45a7b38 ffffffff8140d841
  ffff8801964be000 ffff88018eaa4800 ffff8800c45a7bb8 ffffffffa003abf6
  ffffffff850e2ac0 8000000000000163 ffff8801850e2ac0 ffff8801850e2ac0
 Call Trace:
  [<ffffffff8140d841>] dump_stack+0x63/0x82
  [<ffffffffa003abf6>] ubi_eba_read_leb+0x486/0x4a0 [ubi]
  [<ffffffffa00453b3>] ubi_check_volume+0x83/0xf0 [ubi]
  [<ffffffffa0039d97>] ubi_open_volume+0x177/0x350 [ubi]
  [<ffffffffa00375d8>] vol_cdev_open+0x58/0xb0 [ubi]
  [<ffffffff8124b08e>] chrdev_open+0xae/0x1d0
  [<ffffffff81243bcf>] do_dentry_open+0x1ff/0x300
  [<ffffffff8124afe0>] ? cdev_put+0x30/0x30
  [<ffffffff81244d36>] vfs_open+0x56/0x60
  [<ffffffff812545f4>] path_openat+0x4f4/0x1190
  [<ffffffff81256621>] do_filp_open+0x91/0x100
  [<ffffffff81263547>] ? __alloc_fd+0xc7/0x190
  [<ffffffff812450df>] do_sys_open+0x13f/0x210
  [<ffffffff812451ce>] SyS_open+0x1e/0x20
  [<ffffffff81a99e32>] entry_SYSCALL_64_fastpath+0x1a/0xa4

UBI checks static volumes for data consistency and reads the whole volume upon first open. If the volume is found erroneous, users of UBI cannot read from it, but another volume update is possible to fix it. The check is performed by running ubi_eba_read_leb() on every allocated LEB of the volume. For static volumes ubi_eba_read_leb() computes the checksum of all data stored in a LEB. To verify the computed checksum it has to read the LEB's volume header which stores the original checksum. If the volume header is not found, UBI treats this as a fatal internal error and switches to RO mode. If the UBI device was attached via a full scan the assumption is correct: the volume header has to be present as it had to be there while scanning to get known as mapped.

If the attach operation happened via Fastmap the assumption is no longer correct. When attaching via Fastmap UBI learns the mapping table from Fastmap's snapshot of the system state and not via a full scan. It can happen that a LEB got unmapped after a Fastmap was written to the flash. Then UBI can learn the LEB still as mapped, and accessing it returns only 0xFF bytes. As UBI is not a FTL it is allowed to have mappings to empty PEBs; it assumes that the layer above takes care of LEB accounting and referencing. UBIFS does so using the LEB property tree (LPT). For static volumes UBI blindly assumes that all LEBs are present and therefore special actions have to be taken.

The described situation can happen when updating a static volume is interrupted, either by a user or a power cut. The volume update code first unmaps all LEBs of a volume and then writes LEB by LEB. If the sequence of operations is interrupted UBI detects this either by the absence of LEBs (no volume header present at scan time) or by corrupted payload, detected via checksum. In the Fastmap case the former method won't trigger as no scan happened and UBI automatically thinks all LEBs are present. Only by reading data from a LEB does it detect that the volume header is missing, and it incorrectly treats this as a fatal error.

To deal with the situation ubi_eba_read_leb() from now on checks whether we attached via Fastmap and handles the absence of a volume header like a data corruption error. This way interrupted static volume updates will correctly get detected also when Fastmap is used.
Cc: <stable@vger.kernel.org>
Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-11-06  UBI: Fastmap: Fix PEB array type  (Ezequiel García; 1 file, -1/+1)
The PEB array is an array of __be32, so let's fix the scan_pool() prototype accordingly. Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> Signed-off-by: Richard Weinberger <richard@nod.at>
2015-10-03  UBI: Fastmap: Simplify expression  (Richard Weinberger; 1 file, -1/+1)
There is no need to compute pnum again. Signed-off-by: Richard Weinberger <richard@nod.at> Acked-by: Brian Norris <computersforpeace@gmail.com>
2015-06-03  UBI: Remove unnecessary `\'  (shengyong; 1 file, -1/+1)
Signed-off-by: Sheng Yong <shengyong1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-02  UBI: Fastmap: Do not add vol if it already exists  (shengyong; 1 file, -1/+8)
During fastmap attaching, check if a volume already exists when adding the volume to the volume tree. NOTE that the issue cannot happen unless the on-flash fastmap data has been modified.
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-02  UBI: Fastmap: Rename variables to make them meaningful  (shengyong; 1 file, -29/+29)
s/fmpl1/fmpl
s/fmpl2/fmpl_wl
Add "WL" to the error message when wrong WL pool magic number is detected.
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-02  UBI: Fastmap: Remove unnecessary `\'  (shengyong; 1 file, -7/+7)
Signed-off-by: Sheng Yong <shengyong1@huawei.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Wire up WL accessor functions  (Richard Weinberger; 1 file, -16/+12)
Use the new WL accessor functions in fastmap. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Add self check to detect absent PEBs  (Richard Weinberger; 1 file, -2/+84)
This self check allows Fastmap to detect absent PEBs while writing a new fastmap to the MTD device. It will help to find implementation issues in Fastmap. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Rework fastmap error paths  (Richard Weinberger; 1 file, -48/+114)
If UBI is unable to write the fastmap to the device we have to make sure that upon the next attach UBI will fall back to scanning mode. In case we cannot ensure that, the only thing we can do is fall back to read-only mode. The current error handling code is not powercut proof. It could happen that a powercut while invalidating leads to a state where a too old fastmap could be used upon attach. This patch addresses the issue by writing a fake fastmap super block to a fresh PEB instead of re-erasing the existing one. The fake fastmap super block will cause UBI to do a full scan.
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Prepare for variable sized fastmaps  (Richard Weinberger; 1 file, -2/+10)
The current code assumes that each fastmap has the same amount of PEBs. So far this is true but will change soon. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Locking updates  (Richard Weinberger; 1 file, -9/+9)
a) Rename ubi->fm_sem to ubi->fm_eba_sem as this semaphore protects EBA changes. b) Turn ubi->fm_mutex into a rw semaphore. It will still serialize fastmap writes but also ensures that ubi_wl_put_peb() is not interrupted by a fastmap write. We use a rw semaphore to allow ubi_wl_put_peb() still to be executed in parallel if no fastmap write is happening. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Set used_ebs only for static volumes  (Richard Weinberger; 1 file, -2/+3)
If we set it for dynamic ones we might confuse various self checks. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Fix leb_count unbalance  (Richard Weinberger; 1 file, -0/+1)
If a LEB is unmapped we have to decrement leb_count as well. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Switch to ro mode if invalidate_fastmap() fails  (Richard Weinberger; 1 file, -1/+3)
We have to switch to ro mode to guarantee that upon next UBI attach all data is consistent. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Remove eba_orphans logic  (Richard Weinberger; 1 file, -83/+8)
This logic is in vain as we also treat protected PEBs as used, so this case must not happen. If a PEB is found which is in the EBA table but not known as used, this has to be reported as a fatal error.
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Remove bogus ubi_assert()  (Richard Weinberger; 1 file, -1/+3)
It is legal to have PEBs left in the used list. This can happen if UBI copies a PEB and a powercut happens between writing a new fastmap and adding this PEB into the EBA table. In this case the old PEB will be used. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Fix memory leak while attaching  (Richard Weinberger; 1 file, -13/+0)
Currently we leak a few ubi_ainf_pebs while attaching. Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26  UBI: Fastmap: Don't allocate new ubi_wl_entry objects  (Richard Weinberger; 1 file, -26/+5)
There is no need to allocate new ones every time, we can reuse the existing ones. This makes the code cleaner and more easy to follow. Signed-off-by: Richard Weinberger <richard@nod.at> Reviewed-by: Tanya Brokhman <tlinder@codeaurora.org> Reviewed-by: Guido Martínez <guido@vanguardiasur.com.ar>
2015-01-28  UBI: Fastmap: Care about the protection queue  (Richard Weinberger; 1 file, -0/+13)
Fastmap can miss a PEB if it is in the protection queue and not yet in the used tree. Treat every protected PEB as used.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>