path: root/include/linux/livepatch.h
author     Jason Baron <jbaron@akamai.com>    2019-01-09 13:43:25 +0100
committer  Jiri Kosina <jkosina@suse.cz>      2019-01-11 20:51:24 +0100
commit     e1452b607c48c642caf57299f4da83aa002f8533 (patch)
tree       487267602c0c6cfaeb247950df4cbb24f435ae6a /include/linux/livepatch.h
parent     livepatch: Use lists to manage patches, objects and functions (diff)
livepatch: Add atomic replace
Sometimes we would like to revert a particular fix. Currently, this is not
easy because we want to keep all other fixes active and we could revert only
the last applied patch.

One solution would be to apply a new patch that implemented all the reverted
functions like in the original code. It would work as expected but there
would be unnecessary redirections. In addition, it would also require knowing
which functions need to be reverted at build time.

Another problem is when there are many patches that touch the same functions.
There might be dependencies between patches that are not enforced on the
kernel side. Also it might be pretty hard to actually prepare the patch and
ensure compatibility with the other patches.

Atomic replace && cumulative patches:

A better solution would be to create a cumulative patch and say that it
replaces all older ones.

This patch adds a new "replace" flag to struct klp_patch. When it is enabled,
a set of 'nop' klp_func entries will be dynamically created for all functions
that are already being patched but that will no longer be modified by the new
patch. They are used as a new target during the patch transition.

The idea is to handle the nop structures like the static ones. When the
dynamic structures are allocated, we initialize all values that are normally
statically defined.

The only exception is "new_func" in struct klp_func. It has to point to the
original function and the address is known only when the object (module) is
loaded. Note that we really need to set it. The address is used, for example,
in klp_check_stack_func().

Nevertheless we still need to distinguish the dynamically allocated
structures in some operations. For this, we add a "nop" flag into
struct klp_func and a "dynamic" flag into struct klp_object. They need
special handling in the following situations:

  + The structures are added into the lists of objects and functions
    immediately. In fact, the lists were created for this purpose.

  + The address of the original function is known only when the patched
    object (module) is loaded. Therefore it is copied later in
    klp_init_object_loaded().

  + The ftrace handler must not set PC to func->new_func. It would cause
    an infinite loop because the address points back to the beginning of
    the original function.

  + The various free() functions must free the structure itself.

Note that other ways to detect the dynamic structures are not considered
safe. For example, even a statically defined struct klp_object might include
an empty funcs array. It might be there just to run some callbacks.

Also note that the safe iterators must be used in the free() functions.
Otherwise already freed structures might get accessed.

Special callbacks handling:

The callbacks from the replaced patches are intentionally _not_ called. It
would be pretty hard to define a reasonable semantic and implement it.

It might even be counter-productive. The new patch is cumulative. It is
supposed to include most of the changes from older patches. In most cases,
it will not want to call the pre_unpatch() and post_unpatch() callbacks from
the replaced patches. It would disable/break things for no good reason. Also
it should be easier to handle various scenarios in a single script in the
new patch than to think about interactions caused by running many scripts
from older patches. Not to mention that the old scripts might not even
expect to be called in this situation.

Removing replaced patches:

One nice effect of the cumulative patches is that the code from the older
patches is no longer used. Therefore the replaced patches can be removed.
It has several advantages:

  + The nop structs will no longer be necessary and might be removed. This
    would save memory, restore performance (no ftrace handler), and allow a
    clear view of what is really patched.

  + Disabling the patch will cause the original code to be used everywhere.
    Therefore the livepatch callbacks could handle only one scenario. Note
    that the situation is already complex enough when the patch gets enabled.
    It is currently solved by calling callbacks only from the new cumulative
    patch.

  + The state is clean in both the sysfs interface and lsmod. The modules
    with the replaced livepatches might even get removed from the system.

Some people actually expected this behavior from the beginning. After all, a
cumulative patch is supposed to "completely" replace an existing one. It is
like when a new version of an application replaces an older one.

This patch does the first step. It removes the replaced patches from the list
of patches. It is safe. The consistency model ensures that they are no longer
used. In other words, each process works only with the structures from
klp_transition_patch.

The removal is done by a special function. It combines actions done by
__disable_patch() and klp_complete_transition(). But it is a fast track
without all the transaction-related stuff.

Signed-off-by: Jason Baron <jbaron@akamai.com>
[pmladek@suse.com: Split, reuse existing code, simplified]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
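For illustration, a minimal sketch of how a livepatch module might opt in to
atomic replace, modeled loosely on samples/livepatch/livepatch-sample.c. The
patched function (cmdline_proc_show) and the replacement body are placeholders
chosen for the example, not part of this commit:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Placeholder replacement for the vmlinux function cmdline_proc_show(). */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
        seq_puts(m, "patched by a cumulative livepatch\n");
        return 0;
}

static struct klp_func funcs[] = {
        {
                .old_name = "cmdline_proc_show",
                .new_func = livepatch_cmdline_proc_show,
        }, { }
};

static struct klp_object objs[] = {
        {
                /* NULL name means the object is vmlinux */
                .funcs = funcs,
        }, { }
};

static struct klp_patch patch = {
        .mod = THIS_MODULE,
        .objs = objs,
        /* Atomically replace all previously applied livepatches. */
        .replace = true,
};

static int livepatch_replace_init(void)
{
        return klp_enable_patch(&patch);
}

static void livepatch_replace_exit(void)
{
}

module_init(livepatch_replace_init);
module_exit(livepatch_replace_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");

When such a module is applied, functions touched only by older livepatches are
routed through dynamically allocated nop entries back to their original code,
and the replaced patches are dropped from the list of patches as described
above.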
Diffstat (limited to 'include/linux/livepatch.h')
-rw-r--r--   include/linux/livepatch.h   12
1 file changed, 12 insertions, 0 deletions
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index e117e20ff771..53551f470722 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -48,6 +48,7 @@
* @old_size: size of the old function
* @new_size: size of the new function
* @kobj_added: @kobj has been added and needs freeing
+ * @nop: temporary patch to use the original code again; dyn. allocated
* @patched: the func has been added to the klp_ops list
* @transition: the func is currently being applied or reverted
*
@@ -86,6 +87,7 @@ struct klp_func {
struct list_head stack_node;
unsigned long old_size, new_size;
bool kobj_added;
+ bool nop;
bool patched;
bool transition;
};
@@ -125,6 +127,7 @@ struct klp_callbacks {
* @mod: kernel module associated with the patched object
* (NULL for vmlinux)
* @kobj_added: @kobj has been added and needs freeing
+ * @dynamic: temporary object for nop functions; dynamically allocated
* @patched: the object's funcs have been added to the klp_ops list
*/
struct klp_object {
@@ -139,6 +142,7 @@ struct klp_object {
struct list_head node;
struct module *mod;
bool kobj_added;
+ bool dynamic;
bool patched;
};
@@ -146,6 +150,7 @@ struct klp_object {
* struct klp_patch - patch structure for live patching
* @mod: reference to the live patch module
* @objs: object entries for kernel objects to be patched
+ * @replace: replace all actively used patches
* @list: list node for global list of actively used patches
* @kobj: kobject for sysfs resources
* @obj_list: dynamic list of the object entries
@@ -159,6 +164,7 @@ struct klp_patch {
/* external */
struct module *mod;
struct klp_object *objs;
+ bool replace;
/* internal */
struct list_head list;
@@ -174,6 +180,9 @@ struct klp_patch {
#define klp_for_each_object_static(patch, obj) \
for (obj = patch->objs; obj->funcs || obj->name; obj++)
+#define klp_for_each_object_safe(patch, obj, tmp_obj) \
+ list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
+
#define klp_for_each_object(patch, obj) \
list_for_each_entry(obj, &patch->obj_list, node)
@@ -182,6 +191,9 @@ struct klp_patch {
func->old_name || func->new_func || func->old_sympos; \
func++)
+#define klp_for_each_func_safe(obj, func, tmp_func) \
+ list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
+
#define klp_for_each_func(obj, func) \
list_for_each_entry(func, &obj->func_list, node)
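
The two new _safe iterators above make it possible to free list entries while
walking the lists. A hypothetical sketch of such a walk follows (the real
cleanup lives in kernel/livepatch/core.c and is not part of this header diff):

#include <linux/livepatch.h>
#include <linux/list.h>
#include <linux/slab.h>

/*
 * Sketch only: drop dynamically allocated nop functions and objects.
 * The _safe variants are required because each iteration may free the
 * node that a plain list_for_each_entry() would use to advance.
 */
static void example_free_dynamic_entries(struct klp_patch *patch)
{
        struct klp_object *obj, *tmp_obj;
        struct klp_func *func, *tmp_func;

        klp_for_each_object_safe(patch, obj, tmp_obj) {
                klp_for_each_func_safe(obj, func, tmp_func) {
                        if (func->nop) {
                                list_del(&func->node);
                                kfree(func);
                        }
                }

                if (obj->dynamic) {
                        list_del(&obj->node);
                        kfree(obj);
                }
        }
}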