path: root/fs/f2fs/node.c
Age  Commit message  (Author, files changed, lines -/+)
2013-05-08  f2fs: cover free_nid management with spin_lock  (Jaegeuk Kim, 1 file, -16/+18)
After build_free_nids() searches free nid candidates from nat pages and the current journal blocks, it checks whether each candidate is already allocated, i.e., whether the nat cache maps its nid to an allocated block address. Previously this check walked the list with list_for_each_entry_safe(fnid, next_fnid, &nm_i->free_nid_list, list), but the walk was not covered by free_nid_list_lock, resulting in a null pointer bug. This patch moves the checking routine inside add_free_nid() so that the separate list traversal, and the spin_lock it would require, are no longer needed. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-05-08  f2fs: optimize scan_nat_page()  (Haicheng Li, 1 file, -5/+9)
When nm_i->fcnt > 2 * MAX_FREE_NIDS, stop scanning other NAT entries. Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> [Jaegeuk Kim: fix handling the return value of add_free_nid()] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
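A minimal sketch of that early exit, assuming the names quoted in these messages (nm_i->fcnt, MAX_FREE_NIDS, NAT_ENTRY_PER_BLOCK) and a simple add_free_nid() convention; it illustrates the idea rather than reproducing the patch:

  static void scan_nat_page(struct f2fs_nm_info *nm_i,
                            struct page *nat_page, nid_t start_nid)
  {
          struct f2fs_nat_block *nat_blk = page_address(nat_page);
          block_t blk_addr;
          int i;

          for (i = 0; i < NAT_ENTRY_PER_BLOCK; i++, start_nid++) {
                  /* enough candidates are cached already: stop scanning */
                  if (nm_i->fcnt > 2 * MAX_FREE_NIDS)
                          return;
                  blk_addr = le32_to_cpu(nat_blk->entries[i].block_addr);
                  if (blk_addr == NULL_ADDR)
                          add_free_nid(nm_i, start_nid);
          }
  }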
2013-05-08  f2fs: code cleanup for scan_nat_page() and build_free_nids()  (Haicheng Li, 1 file, -6/+4)
This patch does two cleanups: 1. remove the unused variable "fcnt" in build_free_nids(); 2. make scan_nat_page() a void function and remove its useless "fcnt" variable. Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-05-08  f2fs: bugfix for alloc_nid_failed()  (Haicheng Li, 1 file, -2/+6)
Directly drop the free_nid cache entry when nm_i->fcnt > 2 * MAX_FREE_NIDS. Since there is no nm_i->free_nid_list_lock spinlock protection between a sequential calling of alloc_nid() and alloc_nid_failed(), other threads may already have added new free nids to the free_nid_list during this period. We need to make sure nm_i->fcnt never exceeds 2 * MAX_FREE_NIDS. Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> [Jaegeuk Kim: fit the coding style] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-30  f2fs: modify the number of issued pages to merge IOs  (Jaegeuk Kim, 1 file, -4/+2)
When testing f2fs on an SSD, I found some 128 page IOs followed by 1 page IOs being issued by f2fs_write_node_pages. This means there was a mishandled flow that degrades performance. The previous f2fs_write_node_pages determined the number of pages to be written, nr_to_write, as follows.
1. bio_get_nr_vecs returns 129 pages.
2. bio_alloc makes room for 128 pages.
3. The initial 128 pages go into one bio.
4. The existing bio is submitted, and a new bio is prepared for the last 1 page.
5. Finally, sync_node_pages submits the last 1 page bio.
The problem comes from the use of bio_get_nr_vecs, so this patch replaces it with max_hw_blocks using queue_max_sectors.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
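A hedged sketch of the replacement computation; the helper name max_hw_blocks and its placement here are assumptions, while queue_max_sectors() is the standard block-layer accessor reporting the limit in 512-byte sectors:

  static unsigned int max_hw_blocks(struct super_block *sb)
  {
          struct request_queue *q = bdev_get_queue(sb->s_bdev);
          unsigned int blkbits = sb->s_blocksize_bits;

          /* convert the hardware limit from 512-byte sectors to fs blocks */
          return queue_max_sectors(q) >> (blkbits - 9);
  }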
2013-04-30  f2fs: fix inconsistent use of NM_WOUT_THRESHOLD  (Haicheng Li, 1 file, -1/+1)
try_to_free_nats() is usually called by flush_nat_entries() during the checkpointing process, with the parameter nr_shrink set to "nm_i->nat_cnt - NM_WOUT_THRESHOLD". However, this is inconsistent with the actual threshold check, "if (nm_i->nat_cnt < 2 * NM_WOUT_THRESHOLD)", which ignores the free_nats requests when NM_WOUT_THRESHOLD < nm_i->nat_cnt < 2 * NM_WOUT_THRESHOLD. So fix the threshold check condition. Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-29  f2fs: check truncation of mapping after lock_page  (Jaegeuk Kim, 1 file, -4/+16)
We call lock_page when we need to update a page after readpage. Between grabbing and locking the page, it can be truncated by another thread. So, after lock_page, we should check whether the page has been truncated. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
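The usual shape of this check, sketched under the assumption of a grab/read/lock flow like the one described; the retry label is illustrative, and f2fs_put_page() is the helper discussed further down this log:

  repeat:
          page = grab_cache_page(mapping, index);
          if (!page)
                  return ERR_PTR(-ENOMEM);
          /* ... issue the read; the page lock is dropped while waiting ... */
          lock_page(page);
          /* the mapping changes if the page was truncated while unlocked */
          if (page->mapping != mapping) {
                  f2fs_put_page(page, 1);
                  goto repeat;
          }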
2013-04-29  f2fs: enhance alloc_nid and build_free_nids flows  (Jaegeuk Kim, 1 file, -46/+36)
In order to avoid build_free_nids lock contention, let's change the order of function calls as follows. First, check whether there are enough free nids.
- If available, just get a free nid with spin_lock and no further overhead.
- Otherwise, conduct build_free_nids: scan nat pages, journal nat entries, and nat cache entries.
We should be careful not to serve free nids that are only intermediately produced by build_free_nids; stable free nids are available only after build_free_nids is done.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-26  f2fs: check nid == 0 in add_free_nid  (Jaegeuk Kim, 1 file, -4/+4)
It is more obvious to have add_free_nid check whether the free nid is zero or not. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-26  f2fs: give a chance to merge IOs by IO scheduler  (Jaegeuk Kim, 1 file, -0/+9)
Previously, background GC submits many 4KB read requests to load victim blocks and/or their (i)node blocks.

  ...
  f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb61, blkaddr = 0x3b964ed
  f2fs_gc : block_rq_complete: 8,16 R () 499854968 + 8 [0]
  f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb6f, blkaddr = 0x3b964ee
  f2fs_gc : block_rq_complete: 8,16 R () 499854976 + 8 [0]
  f2fs_gc : f2fs_readpage: ino = 1, page_index = 0xb79, blkaddr = 0x3b964ef
  f2fs_gc : block_rq_complete: 8,16 R () 499854984 + 8 [0]
  ...

However, since many of these IOs are sequential, we can give the IO scheduler a chance to merge them. In order to do that, let's use blk_plug.

  ...
  f2fs_gc : f2fs_iget: ino = 143
  f2fs_gc : f2fs_readpage: ino = 143, page_index = 0x1c6, blkaddr = 0x2e6ee
  f2fs_gc : f2fs_iget: ino = 143
  f2fs_gc : f2fs_readpage: ino = 143, page_index = 0x1c7, blkaddr = 0x2e6ef
  <idle> : block_rq_complete: 8,16 R () 1519616 + 8 [0]
  <idle> : block_rq_complete: 8,16 R () 1519848 + 8 [0]
  <idle> : block_rq_complete: 8,16 R () 1520432 + 96 [0]
  <idle> : block_rq_complete: 8,16 R () 1520536 + 104 [0]
  <idle> : block_rq_complete: 8,16 R () 1521008 + 112 [0]
  <idle> : block_rq_complete: 8,16 R () 1521440 + 152 [0]
  <idle> : block_rq_complete: 8,16 R () 1521688 + 144 [0]
  <idle> : block_rq_complete: 8,16 R () 1522128 + 192 [0]
  <idle> : block_rq_complete: 8,16 R () 1523256 + 328 [0]
  ...

Note that this issue should be addressed in checkpoint, and in some readahead flows too.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
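A minimal, hedged illustration of the plugging pattern; ra_node_page() appears later in this log, while sbi and the loop bounds are placeholders:

  struct blk_plug plug;
  nid_t nid;

  blk_start_plug(&plug);
  /* batch the small readahead requests: the block layer holds them in
   * the per-task plug list so the IO scheduler can merge them */
  for (nid = start_nid; nid < end_nid; nid++)
          ra_node_page(sbi, nid);
  blk_finish_plug(&plug);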
2013-04-23  f2fs: add tracepoints for truncate operation  (Namjae Jeon, 1 file, -2/+18)
Add tracepoints for tracing the truncate operations, such as truncating node/data blocks, f2fs_truncate, etc. Tracepoints are added at the entry and exit of each operation to trace its success and failure. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Pankaj Kumar <pankaj.km@samsung.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> [Jaegeuk: combine and modify the tracepoint structures] Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-09  f2fs: introduce a new global lock scheme  (Jaegeuk Kim, 1 file, -27/+17)
In the previous version, f2fs uses global locks according to the usage types, such as directory operations, block allocation, block write, and so on. Reference the following lock types in f2fs.h.

  enum lock_type {
          RENAME,         /* for renaming operations */
          DENTRY_OPS,     /* for directory operations */
          DATA_WRITE,     /* for data write */
          DATA_NEW,       /* for data allocation */
          DATA_TRUNC,     /* for data truncate */
          NODE_NEW,       /* for node allocation */
          NODE_TRUNC,     /* for node truncate */
          NODE_WRITE,     /* for node write */
          NR_LOCK_TYPE,
  };

In that case, we lose performance in multi-threaded environments, since every type of operation must be conducted one at a time. In order to address the problem, let's share the locks globally with a mutex array regardless of type, so that users can grab a mutex and perform their jobs in parallel as much as possible. For this, I propose a new global lock scheme as follows.

0. Data structure
 - f2fs_sb_info -> mutex_lock[NR_GLOBAL_LOCKS]
 - f2fs_sb_info -> node_write
1. mutex_lock_op(sbi)
 - try to get an available lock from the array.
 - returns the index of the gotten lock variable.
2. mutex_unlock_op(sbi, index of the lock)
 - unlock the given index of the lock.
3. mutex_lock_all(sbi)
 - grab all the locks in the array before the checkpoint.
4. mutex_unlock_all(sbi)
 - release all the locks in the array after the checkpoint.
5. block_operations()
 - call mutex_lock_all()
 - sync_dirty_dir_inodes()
 - grab node_write
 - sync_node_pages()

Note that the pairs mutex_lock_op()/mutex_unlock_op() and mutex_lock_all()/mutex_unlock_all() should be used together.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
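A rough sketch of how the array could be used, assuming the fields listed above; the fs_lock[] field name and the fall-back-to-slot-0 behaviour are simplifications, not taken from this log:

  static int mutex_lock_op(struct f2fs_sb_info *sbi)
  {
          int i;

          /* fast path: take any currently free slot */
          for (i = 0; i < NR_GLOBAL_LOCKS; i++)
                  if (mutex_trylock(&sbi->fs_lock[i]))
                          return i;

          /* all slots busy: block on one of them */
          mutex_lock(&sbi->fs_lock[0]);
          return 0;
  }

  static void mutex_unlock_op(struct f2fs_sb_info *sbi, int ilock)
  {
          mutex_unlock(&sbi->fs_lock[ilock]);
  }

  static void mutex_lock_all(struct f2fs_sb_info *sbi)
  {
          int i;

          /* checkpoint excludes every in-flight operation */
          for (i = 0; i < NR_GLOBAL_LOCKS; i++)
                  mutex_lock(&sbi->fs_lock[i]);
  }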
2013-04-03  f2fs: reduce redundant spin_lock operations  (Jaegeuk Kim, 1 file, -6/+11)
This patch reduces redundant spin_lock operations in alloc_nid_failed(). alloc_nid_failed() does not need to delete an entry and add it again, which triggers spin_lock and spin_unlock redundantly. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-03  f2fs: avoid race for summary information  (Jaegeuk Kim, 1 file, -1/+1)
In order to do GC more reliably, I'd like to lock the victim summary page until its GC is completed, and also prevent any checkpoint process. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-04-03  f2fs: remove redundant lock_page calls  (Jaegeuk Kim, 1 file, -15/+25)
In get_node_page, we do not need to call lock_page all the time. If the node page is already cached as uptodate, the current flow is: 1. grab_cache_page locks the page, 2. read_node_page unlocks it, and 3. lock_page is called again for further processing. Let's avoid this. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
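A hedged sketch of the resulting fast path; the read path and error handling are elided, and the flow is inferred from the description above rather than copied from the patch:

  page = grab_cache_page(mapping, nid);   /* returns a locked page */
  if (!page)
          return ERR_PTR(-ENOMEM);
  /* already valid: keep the lock grab_cache_page() gave us and skip the
   * unlock-for-read / lock_page() round trip described above */
  if (PageUptodate(page))
          return page;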
2013-03-31  f2fs: use kmemdup  (Alexandru Gheorghiu, 1 file, -7/+5)
Use kmemdup instead of kzalloc and memcpy. Signed-off-by: Alexandru Gheorghiu <gheorghiuandru@gmail.com> Acked-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
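For illustration, the general shape of such a conversion (the variable names are placeholders, not taken from this patch):

  /* before: allocate, then copy */
  dst = kzalloc(len, GFP_KERNEL);
  if (!dst)
          return -ENOMEM;
  memcpy(dst, src, len);

  /* after: one call allocates and copies */
  dst = kmemdup(src, len, GFP_KERNEL);
  if (!dst)
          return -ENOMEM;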
2013-03-27  f2fs: remain nat cache entries for further free nid allocation  (Jaegeuk Kim, 1 file, -2/+2)
In the checkpoint flow, f2fs investigates the total nat cache entries. Previously, if an entry had NULL_ADDR, f2fs dropped the entry and added the obsolete nid to the free nid list. However, such a free nid will be reused soon, and its nat entry will then be missed in the cache. In order to avoid this, we don't need to drop the nat cache entry at this moment. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-20  f2fs: fix not to allocate max_nid  (Jaegeuk Kim, 1 file, -0/+2)
build_free_nids should not add free nids beyond nm_i->max_nid. But there was a hole where an invalid free nid could be added by the following scenario. Suppose nm_i->max_nid = 150 and the last NAT page covers nids 100 ~ 200.
 build_free_nids
  - get_current_nat_page loads the last NAT page
  - scan_nat_page can add nids 100 ~ 200 -> Bug here!
So, when scanning a NAT page, we should check whether each candidate is over max_nid or not.
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
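In terms of the scan loop sketched earlier in this log, the guard amounts to something like the following (illustrative placement):

  /* inside the per-entry loop of scan_nat_page() */
  if (start_nid >= nm_i->max_nid)
          break;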
2013-03-20  f2fs: fix return value of releasepage for node and data  (Jaegeuk Kim, 1 file, -10/+1)
If the return value of releasepage is equal to zero, the page cannot be reclaimed. Instead, we should return 1 in order to reclaim clean pages. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
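A hedged sketch of such a releasepage hook; the function name is illustrative, while the signature is the standard address_space_operations one of this era:

  static int f2fs_release_page(struct page *page, gfp_t wait)
  {
          /* clean node/data pages hold no private state, so tell the VM
           * they may be freed; returning 0 would keep them pinned */
          return 1;
  }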
2013-03-20  f2fs: scan next nat page to reuse free nids in there  (Jaegeuk Kim, 1 file, -1/+2)
When we build new free nids, let's scan just the next NAT page instead of skipping a couple of previously scanned pages, in order to reuse the free nids there. Otherwise, we can end up using too wide a range of nids even though several nids were deallocated, and their node pages can stay cached in the node_inode's address space. This means we retain lots of clean pages in main memory, which induces mm reclaiming overhead. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-20  f2fs: should check the node page was truncated first  (Jaegeuk Kim, 1 file, -7/+8)
Currently, f2fs doesn't reclaim any node pages. However, if we find during f2fs_write_node_page that a node page was truncated, by checking that its block address is zero, we should not skip that node page; instead we should return zero so that the clean page can be reclaimed. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-20  f2fs: reduce unnecessary locking of pages during read  (Jaegeuk Kim, 1 file, -23/+35)
This patch reduces redundant locking and unlocking of pages during read operations. In f2fs_readpage, let's use wait_on_page_locked() instead of lock_page, and only when we need to modify data should we lock the page, so that we can avoid lock contention.
[readpage rule]
- f2fs_readpage returns an unlocked page, or a released page in error cases.
- Its caller should handle a read error, -EIO, after locking the page, which indicates read completion.
- Its caller should check PageUptodate after grab_cache_page.
Signed-off-by: Changman Lee <cm224.lee@samsung.com> Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
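A minimal caller-side sketch of the rule above; the read submission itself is elided, and the PageUptodate check and f2fs_put_page() usage follow the conventions quoted in this log:

  /* wait for the in-flight read to finish without taking the page lock */
  wait_on_page_locked(page);
  if (!PageUptodate(page)) {
          f2fs_put_page(page, 0);
          return ERR_PTR(-EIO);
  }
  /* take the lock only when the page actually needs to be modified */
  lock_page(page);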
2013-03-18  f2fs: avoid extra ++ while returning from get_node_path  (Namjae Jeon, 1 file, -7/+6)
In all the breaking conditions in get_node_path, 'n' is used to track the index in the offset[] array, but every breaking path also does n++. So, remove the ++ from the breaking paths. Also, avoid resetting 'level = 0' in the first case. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-18  f2fs: optimize and change return path in lookup_free_nid_list  (Namjae Jeon, 1 file, -4/+3)
Optimize and change return path in lookup_free_nid_list Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-18  f2fs: optimize get node page readahead part  (Namjae Jeon, 1 file, -7/+6)
We can remove the call to find_get_page that fetches a page from the cache and checks whether it is up-to-date; grab_cache_page itself can fetch the page from the cache. So, remove that call and move the PageUptodate check to the proper place; the lock_page condition is also moved into the page_hit path. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-18  f2fs: check the level before calling get_nid function  (Changman Lee, 1 file, -1/+2)
The caller of get_nid should be careful not to pass a value lower than NODE_DIR1_BLOCK when level is zero. Signed-off-by: Changman Lee <cm224.lee@samsung.com> Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-03-18  f2fs: introduce readahead mode of node pages  (Jaegeuk Kim, 1 file, -3/+3)
Previously, f2fs reads several node pages ahead when get_dnode_of_data is called with the RDONLY_NODE flag, and this flag is set by the following functions:
- get_data_block_ro
- get_lock_data_page
- do_write_data_page
- truncate_blocks
- truncate_hole
However, this readahead mechanism was initially introduced for get_data_block_ro to enhance sequential read performance. So, let's clarify all the cases with additional modes as follows.

  enum {
          ALLOC_NODE,     /* allocate a new node page if needed */
          LOOKUP_NODE,    /* look up a node without readahead */
          LOOKUP_NODE_RA, /*
                           * look up a node with readahead called
                           * by get_data_block_ro.
                           */
  };

Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
2013-03-18  f2fs: read with READ_SYNC when getting dnode page  (Jaegeuk Kim, 1 file, -1/+1)
get_node_page_ra tries to: 1. grab or read the target node page for the given nid, and 2. call ra_node_page to read other adjacent node pages in advance. So, when we read the target node page in #1, we should submit the bio with READ_SYNC instead of READA; in #2, READA should be used. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
2013-03-18  f2fs: fix to unlock node page when it was truncated  (Jaegeuk Kim, 1 file, -2/+2)
If the node page was truncated, its block address became zero. This means that we don't need to write the node page, but we do have to unlock NODE_WRITE, decrease the number of dirty node pages, and then call unlock_page before returning from f2fs_write_node_page with zero. Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
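Sketching the shape of that early-return path: get_node_info() and NULL_ADDR are quoted elsewhere in this log, while the dirty-page accounting helper and the omitted NODE_WRITE unlock are assumptions:

  struct node_info ni;

  get_node_info(sbi, nid, &ni);
  /* the node was truncated: its address became NULL_ADDR, so skip the
   * write, drop the dirty-page accounting, and report success */
  if (ni.blk_addr == NULL_ADDR) {
          dec_page_count(sbi, F2FS_DIRTY_NODES);
          unlock_page(page);
          return 0;
  }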
2013-02-12  f2fs: avoid build warning  (Jaegeuk Kim, 1 file, -1/+1)
This patch removes the following build warning: fs/f2fs/node.c: warning: 'nofs' may be used uninitialized in this function [-Wuninitialized]: => 738:8 Note that this is a false alarm. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-02-12  Merge branch 'f2fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs into dev  (Jaegeuk Kim, 1 file, -2/+2)
Pull f2fs cleanup patches from Al Viro:
  f2fs: get rid of fake on-stack dentries
  f2fs: switch init_inode_metadata() to passing parent and name separately
  f2fs: switch new_inode_page() from dentry to qstr
  f2fs: init_dent_inode() should take qstr
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Conflicts:
  fs/f2fs/recovery.c
2013-02-12  f2fs: clarify and enhance the f2fs_gc flow  (Jaegeuk Kim, 1 file, -1/+1)
This patch clarifies the ambiguous f2fs_gc flow as follows.
1. Remove the intermediate checkpoint condition during f2fs_gc (i.e., should_do_checkpoint() and GC_BLOCKED).
2. Remove unnecessary return values of f2fs_gc because of #1 (i.e., GC_NODE, GC_OK, etc.).
3. Simplify write_checkpoint() because of #2.
4. Clarify the main f2fs_gc flow:
 o monitor how many sections are freed during one iteration of do_garbage_collect(),
 o do more GC without checkpoints if we can't get enough free sections,
 o do a checkpoint once we've got enough free sections through foreground GCs.
5. Adopt the thread-logging (Slack-Space-Recycle) scheme more aggressively for data log types. See get_ssr_segment().
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-02-12  f2fs: remove the use of page_cache_release  (Jaegeuk Kim, 1 file, -2/+2)
Let's remove the use of page_cache_release() in f2fs and, instead, use f2fs_put_page(page, 0), which does exactly the same thing but improves code readability. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
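The helper referred to here is roughly the following wrapper (a sketch; the real definition lives in f2fs.h and may carry extra debugging checks):

  static inline void f2fs_put_page(struct page *page, int unlock)
  {
          if (!page)
                  return;
          if (unlock)
                  unlock_page(page);
          page_cache_release(page);       /* drop the page-cache reference */
  }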
2013-02-12  f2fs: reorganize code for ra_node_page  (Namjae Jeon, 1 file, -6/+2)
We can remove the unneeded label unlock_out, avoid an unnecessary jump, and reorganize the return conditions in this function. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2013-02-08  f2fs: switch new_inode_page() from dentry to qstr  (Al Viro, 1 file, -2/+2)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2013-02-08  f2fs: init_dent_inode() should take qstr  (Al Viro, 1 file, -1/+1)
for one thing, it doesn't (and shouldn't) use anything else from dentry; for another, on some call chains the dentry is fake and should be eliminated completely. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2013-01-22  f2fs: avoid issuing small bios due to several dirty node pages  (Jaegeuk Kim, 1 file, -6/+11)
If some small bios of dirty node pages are issued in the middle of sequential data writes, the well-formed consecutive data bios produced there can be split by the small node bios, resulting in performance degradation. So, let's collect a number of dirty node pages until a threshold is reached. By default, I set the threshold to 2MB, one segment size. This improves sequential write performance on an i5 with a 512GB SSD (830 w/ SATA2) as follows. Before: 231 MB/s -> After: 255 MB/s Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com> Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
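A hedged sketch of such a gate in the node writepages path; the COLLECT_DIRTY_NODES constant is an assumption chosen to match the 2MB/segment figure, and get_pages()/F2FS_DIRTY_NODES/sync_node_pages() are f2fs names used here on the assumption that they behave as their names suggest:

  /* 2MB worth of 4KB node pages, i.e. one segment */
  #define COLLECT_DIRTY_NODES     512

  static int f2fs_write_node_pages(struct address_space *mapping,
                                   struct writeback_control *wbc)
  {
          struct f2fs_sb_info *sbi = F2FS_SB(mapping->host->i_sb);

          /* defer writeback until enough dirty node pages have piled up,
           * so small node bios do not split consecutive data bios */
          if (get_pages(sbi, F2FS_DIRTY_NODES) < COLLECT_DIRTY_NODES)
                  return 0;

          sync_node_pages(sbi, 0, wbc);
          return 0;
  }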
2013-01-22  f2fs: add __init to functions in init_f2fs_fs  (Namjae Jeon, 1 file, -1/+1)
Add __init to functions in init_f2fs_fs for code consistency. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Amit Sahrawat <a.sahrawat@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2012-12-28  f2fs: clean up unused variables and return values  (Jaegeuk Kim, 1 file, -5/+1)
This patch cleans up a couple of unnecessary pieces of code related to unused variables and return values. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2012-12-28  f2fs: invalidate the node page if allocation is failed  (Jaegeuk Kim, 1 file, -17/+12)
The new_node_page() is processed with the following procedure.
1. A new node page is allocated.
2. PageUptodate is set with proper footer information.
3. Check whether there is free space for the allocation.
4.a. If there is no space, f2fs returns with -ENOSPC.
4.b. Otherwise, go to the next step.
In the case of step #4.a, f2fs leaves a stale node page in the page cache with the uptodate flag set. Also, even if a new node page is allocated successfully, an error can occur afterwards due to an allocation failure in other data structures. In such a case, remove_inode_page() is triggered, so we have to clear the uptodate flag in truncate_node() too. So, we should remove the uptodate flag if the allocation failed.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
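The error-path shape this implies, sketched with an assumed label and surrounding code; ClearPageUptodate() is the standard page-flag helper:

  fail:
          /* do not leave a stale, uptodate node page in the page cache */
          ClearPageUptodate(page);
          f2fs_put_page(page, 1);
          return ERR_PTR(err);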
2012-12-26  f2fs: fix handling errors got by f2fs_write_inode  (Jaegeuk Kim, 1 file, -1/+1)
Ruslan reported that f2fs hangs with an infinite loop in f2fs_sync_file():

  while (sync_node_pages(sbi, inode->i_ino, &wbc) == 0)
          f2fs_write_inode(inode, NULL);

It turned out that the cold flag was not set even though this inode is a normal file. Therefore, sync_node_pages() skips writing its node blocks, since it only writes cold node blocks. The cold flag is stored in the node_footer of the node block, and whenever a new node page is allocated it is set according to the file type, file or directory. But, after a sudden power-off, when recovering the inode page, f2fs doesn't recover its cold flag. So, let's assign the cold flag in more appropriate places. One more thing: if f2fs_write_inode() returns an error for whatever reason, there will be no dirty node pages, so sync_node_pages() returns zero. (i.e., zero means nothing was written.)
Reported-by: Ruslan N. Marchenko <me@ruff.mobi> Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2012-12-11  f2fs: fix the compiler warning for uninitialized use of variable  (Namjae Jeon, 1 file, -0/+1)
When CONFIG_CC_OPTIMIZE_FOR_SIZE is enabled in the kernel, the -Os optimisation flag is passed to gcc, and while trying to optimize the code the compiler might not be able to see the initialisation of the ne struct variable inside get_node_info(), which results in the following warnings:

  fs/f2fs/node.c: In function 'get_node_info':
  fs/f2fs/node.c:175:3: warning: 'ne.block_addr' may be used uninitialized in this function [-Wuninitialized]
  fs/f2fs/node.c:265:24: note: 'ne.block_addr' was declared here
  fs/f2fs/node.c:176:3: warning: 'ne.ino' may be used uninitialized in this function [-Wuninitialized]
  fs/f2fs/node.c:265:24: note: 'ne.ino' was declared here
  fs/f2fs/node.c:177:3: warning: 'ne.version' may be used uninitialized in this function [-Wuninitialized]
  fs/f2fs/node.c:265:24: note: 'ne.version' was declared here

Hence, let's initialise the ne struct variable to zero, which removes the warning and does not seem to have any impact on code behaviour.
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: Pankaj Kumar <pankaj.km@samsung.com>
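The described fix amounts to something like the following (a sketch; whether it was done with memset or a designated initializer is not stated here):

  struct f2fs_nat_entry ne;

  /* zero-fill so -Os builds cannot see a path that reads ne uninitialized */
  memset(&ne, 0, sizeof(struct f2fs_nat_entry));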
2012-12-11  f2fs: adjust kernel coding style  (Jaegeuk Kim, 1 file, -11/+11)
As pointed out by Randy Dunlap, this patch removes all usage of "/**" for comment blocks. Instead, just use "/*". Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2012-12-11  f2fs: fix endian conversion bugs reported by sparse  (Jaegeuk Kim, 1 file, -2/+2)
This patch should resolve the bugs reported by the sparse tool. The initial reports were written by the "kbuild test robot" managed by fengguang.wu. On my local machines, I've also tested by running: > make C=2 CF="-D__CHECK_ENDIAN__" Accordingly, I found lots of warnings and bugs related to endian conversion, and I've fixed all of them at this moment. Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
2012-12-11  f2fs: add node operations  (Jaegeuk Kim, 1 file, -0/+1763)
This adds specific functions to manage NAT pages, a cache for NAT entries, free nids, direct/indirect node blocks for indexing data, and an address space for node pages.
- The key information of an NAT entry consists of a node id and a block address.
- An NAT page is composed of block addresses covered by a certain range of NAT entries, which is maintained by the address space of meta_inode.
- A radix tree structure is used to cache NAT entries. The index for the tree is a node id.
- When there is no free nid, F2FS should scan NAT entries to find a new one. In order to avoid scanning frequently, F2FS manages a list containing a number of free nids in memory. Only when the free nids in the list are exhausted is the scanning process, build_free_nids(), triggered.
- F2FS has direct and indirect node blocks for indexing data. This patch adds functions related to node block management, such as getting, allocating, and truncating node blocks to index data.
- In order to cache node blocks in memory, F2FS has a node_inode with an address space for node pages. This patch also adds the address space operations for node_inode.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>