author     Kent Overstreet <kmo@daterainc.com>  2013-09-10 18:41:15 -0700
committer  Kent Overstreet <kmo@daterainc.com>  2013-11-10 21:55:57 -0800
commit     26c949f8062cb9221a28b2228104f1cc5b265097 (patch)
tree       da302aae6cd3154241450e4934410ac43310bd9a  /drivers/md/bcache/bset.c
parent     bcache: Explicitly track btree node's parent (diff)
bcache: Add btree_insert_node()
The flow of control in the old btree insertion code was rather
backwards: we'd recurse down the btree (in btree_insert_recurse()), and
then if we needed to split, the keys to be inserted into the parent node
would effectively be returned up to btree_insert_recurse(), which would
notice there was more work to do and finish the insertion.

The main problem with this was that the full logic for btree insertion
could only be used by calling btree_insert_recurse(); if you'd gotten to
a btree leaf some other way and had a key to insert, and it turned out
that node needed to be split, you were SOL.

This inverts the flow of control so that btree_insert_node() does _full_
btree insertion, including splitting, and takes a (leaf) btree node to
insert into as a parameter.

This means we can now _correctly_ handle cache misses: on a cache miss
we need to insert a fake "check" key into the btree while we still have
the btree locked, and previously, if the btree node was full, inserting
one would just fail.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
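The inversion is easier to see in miniature. The sketch below is not bcache code: every name in it (toy_node, toy_insert_node, the halving split) is invented for illustration, and it only contrasts an insert that reports failure on a full node with one that performs the split itself, which is the shape btree_insert_node() gives the real code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TOY_NODE_KEYS 4

struct toy_node {
	int nr;
	int keys[TOY_NODE_KEYS];
	struct toy_node *sibling;	/* stands in for the parent update a real split does */
};

/* Old shape: a full node is the caller's problem. */
static int toy_insert_or_fail(struct toy_node *n, int key)
{
	if (n->nr == TOY_NODE_KEYS)
		return -1;	/* caller must restart the traversal to split */
	n->keys[n->nr++] = key;
	return 0;
}

/* New shape: the node-level insert owns the split itself, so any
 * caller already holding a leaf can insert without retrying. */
static void toy_insert_node(struct toy_node *n, int key)
{
	if (n->nr == TOY_NODE_KEYS) {
		struct toy_node *s = calloc(1, sizeof(*s));

		/* Move the upper half of the keys into a new sibling. */
		s->nr = n->nr / 2;
		n->nr -= s->nr;
		memcpy(s->keys, n->keys + n->nr, s->nr * sizeof(int));
		n->sibling = s;
	}
	n->keys[n->nr++] = key;
}

int main(void)
{
	struct toy_node full = { .nr = TOY_NODE_KEYS };
	struct toy_node n = { 0 };

	printf("old style on a full node: %d\n", toy_insert_or_fail(&full, 99));

	for (int i = 0; i < 6; i++)
		toy_insert_node(&n, i);
	printf("new style: %d keys kept, sibling %screated\n",
	       n.nr, n.sibling ? "" : "not ");

	free(n.sibling);
	return 0;
}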
Diffstat (limited to 'drivers/md/bcache/bset.c')
-rw-r--r--  drivers/md/bcache/bset.c  12
1 file changed, 12 insertions, 0 deletions
diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
index 22d1ae72c282..830eede86d56 100644
--- a/drivers/md/bcache/bset.c
+++ b/drivers/md/bcache/bset.c
@@ -73,6 +73,18 @@ struct bkey *bch_keylist_pop(struct keylist *l)
 	return l->top = k;
 }
 
+void bch_keylist_pop_front(struct keylist *l)
+{
+	struct bkey *next = bkey_next(l->bottom);
+	size_t bytes = ((void *) l->top) - ((void *) next);
+
+	memmove(l->bottom,
+		next,
+		bytes);
+
+	l->top = ((void *) l->bottom) + bytes;
+}
+
 /* Pointer validation */
 
 bool __bch_ptr_invalid(struct cache_set *c, int level, const struct bkey *k)
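As for the new helper itself: bch_keylist_pop_front() drops the first key from a keylist by sliding everything after it down to the bottom of the buffer and recomputing top. Below is a minimal standalone sketch of the same memmove pattern; the one-byte-length key format and all toy_* names are invented here in place of struct bkey and bkey_next().

#include <stdio.h>
#include <string.h>

/* Toy key format: one size byte followed by that many payload bytes. */
static unsigned char *toy_key_next(unsigned char *k)
{
	return k + 1 + k[0];
}

struct toy_keylist {
	unsigned char buf[64];	/* keys packed back to back, as in struct keylist */
	unsigned char *top;	/* one past the last key */
};

static void toy_keylist_pop_front(struct toy_keylist *l)
{
	unsigned char *next = toy_key_next(l->buf);
	size_t bytes = l->top - next;

	/* Same shape as the committed code: slide the tail down to the
	 * bottom, then top is bottom plus whatever remains. */
	memmove(l->buf, next, bytes);
	l->top = l->buf + bytes;
}

int main(void)
{
	unsigned char keys[] = { 2, 'a', 'b', 3, 'x', 'y', 'z' };
	struct toy_keylist l;

	memcpy(l.buf, keys, sizeof(keys));
	l.top = l.buf + sizeof(keys);

	toy_keylist_pop_front(&l);	/* drops "ab" */
	printf("front key now: %.*s (%zu bytes left)\n",
	       (int)l.buf[0], (char *)l.buf + 1, (size_t)(l.top - l.buf));
	return 0;
}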