author     Alexei Starovoitov <ast@kernel.org>  2022-04-25 17:31:36 -0700
committer  Alexei Starovoitov <ast@kernel.org>  2022-04-25 20:26:45 -0700
commit     367590b7fccc2c317026abe7d29923322d959781 (patch)
tree       5331a23b54ef32a3d8e1a8fab6838c38c167fc49 /include
parent     bpf: Use bpf_prog_run_array_cg_flags everywhere (diff)
parent     selftests/bpf: Add test for strict BTF type check (diff)
Merge branch 'Introduce typed pointer support in BPF maps'
Kumar Kartikeya Dwivedi says:

====================

This set enables storing pointers of a certain type in a BPF map, and extends
the verifier to enforce type safety and lifetime correctness properties.

The infrastructure being added is generic enough to allow storing any kind of
pointer whose type is available using BTF (user or kernel) in the future
(e.g. strongly typed memory allocation in BPF programs). Such pointers are
internally tracked in the verifier as PTR_TO_BTF_ID, but for now the series
limits them to two kinds of pointers obtained from the kernel. Obviously, use
of this feature depends on map BTF.

1. Unreferenced kernel pointer

In this case, there are very few restrictions. The pointer type being stored
must match the type declared in the map value. However, such a pointer, when
loaded from the map, can only be dereferenced, not passed to any in-kernel
helpers or kernel functions available to the program. This is because while
the verifier's exception handling mechanism converts BPF_LDX to PROBE_MEM
loads, which are then handled specially by the JIT implementation, the same
liberty is not available to accesses inside the kernel. By the time the
pointer is passed into a helper, it carries no lifetime-related guarantees
about the object it is pointing to, and may well be referencing invalid
memory.

2. Referenced kernel pointer

This case imposes a lot of restrictions on the programmer, to ensure safety.
To transfer the ownership of a reference in the BPF program to the map, the
user must use the bpf_kptr_xchg helper, which returns the old pointer
contained in the map as an acquired reference, and releases verifier state
for the referenced pointer being exchanged, as it moves into the map. This is
a normal PTR_TO_BTF_ID that can be used with in-kernel helpers and kernel
functions callable by the program.

However, if BPF_LDX is used to load a referenced pointer from the map, it is
still not permitted to pass it to in-kernel helpers or kernel functions. To
obtain a reference usable with helpers, the user must invoke a kfunc helper
which returns a usable reference (which also must eventually be released
before BPF_EXIT, or moved into a map).

Since the load of the pointer (preserving data dependency ordering) must
happen inside the RCU read section, the kfunc helper will take a pointer to
the map value, which must point to the actual pointer of the object whose
reference is to be raised. The type will be verified from the BTF information
of the kfunc, as the prototype must be:

	T *func(T **, ... /* other arguments */);

Then, the verifier checks whether the pointer at the offset of the map value
points to the type T, and permits the call. This convention is followed so
that such helpers may also be called from sleepable BPF programs, where the
RCU read lock is not necessarily held in the BPF program context, hence
necessitating the need to pass in a pointer to the actual pointer in order to
perform the load inside the RCU read section.

Notes
-----

* C selftests require https://reviews.llvm.org/D119799 to pass.
* Unlike BPF timers, kptr is not reset or freed on map_release_uref.
* Referenced kptr storage is always treated as unsigned long * on the kernel
  side, as the BPF side cannot mutate it. The storage (8 bytes) is sufficient
  for both 32-bit and 64-bit platforms.
* Use of WRITE_ONCE to reset unreferenced kptr on 32-bit systems is fine, as
  the actual pointer is always word sized, so the store tearing into two
  32-bit stores won't be a problem as the other half is always zeroed out.
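To make the two pointer kinds concrete, here is a minimal BPF-side sketch
modeled on this series' selftests. It assumes the __kptr/__kptr_ref type-tag
macros this set adds to libbpf's bpf_helpers.h, and borrows the
prog_test_ref_kfunc type and its test kfuncs from net/bpf/test_run.c; treat
it as an illustration, not part of the patch:

	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>

	/* Test kfuncs registered for SCHED_CLS programs by the selftest
	 * infrastructure in net/bpf/test_run.c.
	 */
	extern struct prog_test_ref_kfunc *
	bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) __ksym;
	extern void
	bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;

	struct map_value {
		struct prog_test_ref_kfunc __kptr *unref_ptr;   /* case 1 */
		struct prog_test_ref_kfunc __kptr_ref *ref_ptr; /* case 2 */
	};

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 1);
		__type(key, int);
		__type(value, struct map_value);
	} array_map SEC(".maps");

	SEC("tc")
	int kptr_demo(struct __sk_buff *ctx)
	{
		struct prog_test_ref_kfunc *p;
		unsigned long sp = 0;
		struct map_value *v;
		int key = 0;

		v = bpf_map_lookup_elem(&array_map, &key);
		if (!v)
			return 0;
		/* Unreferenced kptr: may only be dereferenced after a NULL
		 * check; the pointer itself cannot escape into the kernel.
		 */
		p = v->unref_ptr;
		if (p)
			bpf_printk("a = %d", p->a);
		/* Referenced kptr: move an acquired reference into the map;
		 * the old value comes back as an acquired reference (or
		 * NULL) and must itself be released.
		 */
		p = bpf_kfunc_call_test_acquire(&sp);
		if (!p)
			return 0;
		p = bpf_kptr_xchg(&v->ref_ptr, p);
		if (p)
			bpf_kfunc_call_test_release(p);
		return 0;
	}

	char _license[] SEC("license") = "GPL";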
Changelog:
----------

v5 -> v6
v5: https://lore.kernel.org/bpf/20220415160354.1050687-1-memxor@gmail.com

* Address comments from Alexei
  * Drop 'Revisit stack usage' comment
  * Rename off_btf to kernel_btf
  * Add comment about searching using type from map BTF
  * Do kmemdup + btf_get instead of get + kmemdup + put
  * Add comment for btf_struct_ids_match
  * Add comment for assigning non-zero id for mark_ptr_or_null_reg
  * Rename PTR_RELEASE to OBJ_RELEASE
  * Rename BPF_MAP_OFF_DESC_TYPE_XXX_KPTR to BPF_KPTR_XXX
  * Remove unneeded likely/unlikely in cold functions
  * Fix other misc nits
* Keep release_regno instead of replacing with bool + regno
* Add a patch to prevent type match for first member when off == 0 for
  release functions (kfunc + BPF helpers)
* Guard kptr/kptr_ref definition in libbpf header with __has_attribute to
  prevent a selftests compilation error with old clang that does not support
  type tags

v4 -> v5
v4: https://lore.kernel.org/bpf/20220409093303.499196-1-memxor@gmail.com

* Address comments from Joanne
  * Move __btf_member_bit_offset before strcmp
  * Move strcmp conditional on name to unref kptr patch
  * Directly return from btf_find_struct in patch 1
  * Use enum btf_field_type vs int field_type
  * Put btf and btf_id in off_desc in named struct 'kptr'
  * Switch order for BTF_FIELD_IGNORE check
  * Drop dead tab->nr_off = 0 store
  * Use i instead of tab->nr_off to btf_put on failure
* Replace kzalloc + memcpy with kmemdup (kernel test robot)
* Reject both BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG
* Add logging statement for rejecting BPF_MODE(insn->code) != BPF_MEM
* Rename off_desc -> kptr_off_desc in check_mem_access
* Drop check for err, fall through to end of function
* Remove is_release_function, use meta.release_regno to detect release
  function, release reference state, and remove check_release_regno
* Drop off_desc->flags, use off_desc->type
* Update comment for ARG_PTR_TO_KPTR
* Distinguish between direct/indirect access to kptr
* Drop check_helper_mem_access from process_kptr_func, check_mem_reg in
  kptr_get
* Add verifier test for helper accessing kptr indirectly
* Fix other misc nits, add Acked-by for patch 2

v3 -> v4
v3: https://lore.kernel.org/bpf/20220320155510.671497-1-memxor@gmail.com

* Use btf_parse_kptrs, plural kptrs naming (Joanne, Andrii)
* Remove unused parameters in check_map_kptr_access (Joanne)
* Handle idx < info_cnt kludge using tmp variable (Andrii)
* Validate tags always precede modifiers in BTF (Andrii)
  * Split out into https://lore.kernel.org/bpf/20220406004121.282699-1-memxor@gmail.com
* Store u32 type_id in btf_field_info (Andrii)
* Use base_type in map_kptr_match_type (Andrii)
* Free kptr_off_tab when not bpf_capable (Martin)
* Use PTR_RELEASE flag instead of bools in bpf_func_proto (Joanne)
* Drop extra reg->off and reg->ref_obj_id checks in map_kptr_match_type
  (Martin)
* Use separate u32 and u8 arrays for offs and sizes in off_arr (Andrii)
* Simplify and remove map->value_size sentinel in copy_map_value (Andrii)
* Use sort_r to keep both arrays in sync while sorting (Andrii)
* Rename check_and_free_timers_and_kptr to check_and_free_fields (Andrii)
* Move dtor prototype checks to registration phase (Alexei)
* Use ret variable for checking ASSERT_XXX, use shorter strings (Andrii)
* Fix missing checks for other maps (Jiri)
* Fix various other nits, and bugs noticed during self review

v2 -> v3
v2: https://lore.kernel.org/bpf/20220317115957.3193097-1-memxor@gmail.com

* Address comments from Alexei
  * Set name, sz, align in btf_find_field
  * Do idx >= info_cnt check in caller of btf_find_field_*
    * Use extra element in the info_arr to make this safe
  * Remove while loop, reject extra tags
  * Remove cases of defensive programming
  * Move bpf_capable() check to map_check_btf
  * Put check_ptr_off_reg reordering hunk into separate patch
  * Warn for ref_ptr once
  * Make the meta.ref_obj_id == 0 case simpler to read
  * Remove kptr_percpu and kptr_user support, remove their tests
  * Store size of field at offset in off_arr
* Fix BPF_F_NO_PREALLOC set wrongly for hash map in C selftest
* Add missing check_mem_reg call for kptr_get kfunc arg#0 check

v1 -> v2
v1: https://lore.kernel.org/bpf/20220220134813.3411982-1-memxor@gmail.com

* Address comments from Alexei
  * Rename bpf_btf_find_by_name_kind_all to bpf_find_btf_id
  * Reduce indentation level in that function
  * Always take reference regardless of module or vmlinux BTF
  * Also made it the same for btf_get_module_btf
  * Use kptr, kptr_ref, kptr_percpu, kptr_user type tags
  * Don't reserve tag namespace
  * Refactor btf_find_field to be side effect free, allocate and populate
    kptr_off_tab in caller
  * Move module reference to dtor patch
  * Remove support for BPF_XCHG, BPF_CMPXCHG insn
  * Introduce bpf_kptr_xchg helper
  * Embed offset array in struct bpf_map, populate and sort it once
  * Adjust copy_map_value to memcpy directly using this offset array
  * Removed size member from offset array to save space
* Fix some problems pointed out by kernel test robot
* Tidy selftests
* Lots of other minor fixes
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
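The acquire-kfunc convention described in the cover letter, T *func(T **, ...),
is easiest to see as kernel code. The sketch below is illustrative only:
struct foo and bpf_foo_kptr_get are hypothetical names, not code from this
series, but the RCU-plus-refcount shape matches what the convention exists to
enable:

	#include <linux/rcupdate.h>
	#include <linux/refcount.h>

	/* Hypothetical refcounted object stored as a referenced kptr. */
	struct foo {
		refcount_t refcnt;
		/* ... payload ... */
	};

	/* kptr_get-style acquire kfunc: takes a pointer to the map value
	 * slot (T **), loads the kptr under the RCU read lock, and returns
	 * an acquired reference or NULL. Doing the load inside the kfunc is
	 * what lets sleepable BPF programs, which do not hold the RCU read
	 * lock across the call, use it safely.
	 */
	struct foo *bpf_foo_kptr_get(struct foo **pp)
	{
		struct foo *p;

		rcu_read_lock();
		p = READ_ONCE(*pp);	/* data-dependency-ordered load */
		if (p && !refcount_inc_not_zero(&p->refcnt))
			p = NULL;	/* object is already being freed */
		rcu_read_unlock();
		return p;		/* acquired reference, or NULL */
	}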
Diffstat (limited to 'include')
-rw-r--r--  include/linux/bpf.h           | 113
-rw-r--r--  include/linux/bpf_verifier.h  |   3
-rw-r--r--  include/linux/btf.h           |  23
-rw-r--r--  include/uapi/linux/bpf.h      |  12

4 files changed, 121 insertions, 30 deletions
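Before the diff itself: the bpf.h hunk below replaces copy_map_value()'s
hardcoded spin_lock/timer handling with a walk over a sorted offset array. A
small standalone C sketch of that copy scheme (illustrative only; the names
and the userspace form are not from the patch):

	#include <stdint.h>
	#include <string.h>

	/* Sorted offsets/sizes of the special fields (spin_lock, timer,
	 * kptrs) in a map value, mirroring struct bpf_map_off_arr below.
	 */
	struct off_arr {
		uint32_t cnt;
		uint32_t field_off[10];
		uint8_t  field_sz[10];
	};

	/* Copy a map value gap by gap, skipping each special field. */
	static void copy_skipping_fields(const struct off_arr *oa, char *dst,
					 const char *src, uint32_t value_size)
	{
		uint32_t curr_off = 0;

		for (uint32_t i = 0; i < oa->cnt; i++) {
			uint32_t next_off = oa->field_off[i];

			memcpy(dst + curr_off, src + curr_off,
			       next_off - curr_off);
			curr_off = next_off + oa->field_sz[i]; /* skip field */
		}
		memcpy(dst + curr_off, src + curr_off, value_size - curr_off);
	}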
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 7bf441563ffc..0af5793ba417 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -23,6 +23,7 @@
#include <linux/slab.h>
#include <linux/percpu-refcount.h>
#include <linux/bpfptr.h>
+#include <linux/btf.h>
struct bpf_verifier_env;
struct bpf_verifier_log;
@@ -155,6 +156,41 @@ struct bpf_map_ops {
const struct bpf_iter_seq_info *iter_seq_info;
};
+enum {
+ /* Support at most 8 pointers in a BPF map value */
+ BPF_MAP_VALUE_OFF_MAX = 8,
+ BPF_MAP_OFF_ARR_MAX = BPF_MAP_VALUE_OFF_MAX +
+ 1 + /* for bpf_spin_lock */
+ 1, /* for bpf_timer */
+};
+
+enum bpf_kptr_type {
+ BPF_KPTR_UNREF,
+ BPF_KPTR_REF,
+};
+
+struct bpf_map_value_off_desc {
+ u32 offset;
+ enum bpf_kptr_type type;
+ struct {
+ struct btf *btf;
+ struct module *module;
+ btf_dtor_kfunc_t dtor;
+ u32 btf_id;
+ } kptr;
+};
+
+struct bpf_map_value_off {
+ u32 nr_off;
+ struct bpf_map_value_off_desc off[];
+};
+
+struct bpf_map_off_arr {
+ u32 cnt;
+ u32 field_off[BPF_MAP_OFF_ARR_MAX];
+ u8 field_sz[BPF_MAP_OFF_ARR_MAX];
+};
+
struct bpf_map {
/* The first two cachelines with read-mostly members of which some
* are also accessed in fast-path (e.g. ops, max_entries).
@@ -171,6 +207,7 @@ struct bpf_map {
u64 map_extra; /* any per-map-type extra fields */
u32 map_flags;
int spin_lock_off; /* >=0 valid offset, <0 error */
+ struct bpf_map_value_off *kptr_off_tab;
int timer_off; /* >=0 valid offset, <0 error */
u32 id;
int numa_node;
@@ -182,10 +219,7 @@ struct bpf_map {
struct mem_cgroup *memcg;
#endif
char name[BPF_OBJ_NAME_LEN];
- bool bypass_spec_v1;
- bool frozen; /* write-once; write-protected by freeze_mutex */
- /* 14 bytes hole */
-
+ struct bpf_map_off_arr *off_arr;
/* The 3rd and 4th cacheline with misc members to avoid false sharing
* particularly with refcounting.
*/
@@ -205,6 +239,8 @@ struct bpf_map {
bool jited;
bool xdp_has_frags;
} owner;
+ bool bypass_spec_v1;
+ bool frozen; /* write-once; write-protected by freeze_mutex */
};
static inline bool map_value_has_spin_lock(const struct bpf_map *map)
@@ -217,43 +253,44 @@ static inline bool map_value_has_timer(const struct bpf_map *map)
return map->timer_off >= 0;
}
+static inline bool map_value_has_kptrs(const struct bpf_map *map)
+{
+ return !IS_ERR_OR_NULL(map->kptr_off_tab);
+}
+
static inline void check_and_init_map_value(struct bpf_map *map, void *dst)
{
if (unlikely(map_value_has_spin_lock(map)))
memset(dst + map->spin_lock_off, 0, sizeof(struct bpf_spin_lock));
if (unlikely(map_value_has_timer(map)))
memset(dst + map->timer_off, 0, sizeof(struct bpf_timer));
+ if (unlikely(map_value_has_kptrs(map))) {
+ struct bpf_map_value_off *tab = map->kptr_off_tab;
+ int i;
+
+ for (i = 0; i < tab->nr_off; i++)
+ *(u64 *)(dst + tab->off[i].offset) = 0;
+ }
}
/* copy everything but bpf_spin_lock and bpf_timer. There could be one of each. */
static inline void copy_map_value(struct bpf_map *map, void *dst, void *src)
{
- u32 s_off = 0, s_sz = 0, t_off = 0, t_sz = 0;
+ u32 curr_off = 0;
+ int i;
- if (unlikely(map_value_has_spin_lock(map))) {
- s_off = map->spin_lock_off;
- s_sz = sizeof(struct bpf_spin_lock);
- }
- if (unlikely(map_value_has_timer(map))) {
- t_off = map->timer_off;
- t_sz = sizeof(struct bpf_timer);
+ if (likely(!map->off_arr)) {
+ memcpy(dst, src, map->value_size);
+ return;
}
- if (unlikely(s_sz || t_sz)) {
- if (s_off < t_off || !s_sz) {
- swap(s_off, t_off);
- swap(s_sz, t_sz);
- }
- memcpy(dst, src, t_off);
- memcpy(dst + t_off + t_sz,
- src + t_off + t_sz,
- s_off - t_off - t_sz);
- memcpy(dst + s_off + s_sz,
- src + s_off + s_sz,
- map->value_size - s_off - s_sz);
- } else {
- memcpy(dst, src, map->value_size);
+ for (i = 0; i < map->off_arr->cnt; i++) {
+ u32 next_off = map->off_arr->field_off[i];
+
+ memcpy(dst + curr_off, src + curr_off, next_off - curr_off);
+ curr_off += map->off_arr->field_sz[i];
}
+ memcpy(dst + curr_off, src + curr_off, map->value_size - curr_off);
}
void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
bool lock_src);
@@ -342,7 +379,18 @@ enum bpf_type_flag {
*/
MEM_PERCPU = BIT(4 + BPF_BASE_TYPE_BITS),
- __BPF_TYPE_LAST_FLAG = MEM_PERCPU,
+ /* Indicates that the argument will be released. */
+ OBJ_RELEASE = BIT(5 + BPF_BASE_TYPE_BITS),
+
+ /* PTR is not trusted. This is only used with PTR_TO_BTF_ID, to mark
+ * unreferenced and referenced kptr loaded from map value using a load
+ * instruction, so that they can only be dereferenced but not escape the
+ * BPF program into the kernel (i.e. cannot be passed as arguments to
+ * kfunc or bpf helpers).
+ */
+ PTR_UNTRUSTED = BIT(6 + BPF_BASE_TYPE_BITS),
+
+ __BPF_TYPE_LAST_FLAG = PTR_UNTRUSTED,
};
/* Max number of base types. */
@@ -391,6 +439,7 @@ enum bpf_arg_type {
ARG_PTR_TO_STACK, /* pointer to stack */
ARG_PTR_TO_CONST_STR, /* pointer to a null terminated read-only string */
ARG_PTR_TO_TIMER, /* pointer to bpf_timer */
+ ARG_PTR_TO_KPTR, /* pointer to referenced kptr */
__BPF_ARG_TYPE_MAX,
/* Extended arg_types. */
@@ -400,6 +449,7 @@ enum bpf_arg_type {
ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET,
ARG_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_ALLOC_MEM,
ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK,
+ ARG_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID,
/* This must be the last entry. Its purpose is to ensure the enum is
* wide enough to hold the higher bits reserved for bpf_type_flag.
@@ -1396,6 +1446,12 @@ void bpf_prog_put(struct bpf_prog *prog);
void bpf_prog_free_id(struct bpf_prog *prog, bool do_idr_lock);
void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock);
+struct bpf_map_value_off_desc *bpf_map_kptr_off_contains(struct bpf_map *map, u32 offset);
+void bpf_map_free_kptr_off_tab(struct bpf_map *map);
+struct bpf_map_value_off *bpf_map_copy_kptr_off_tab(const struct bpf_map *map);
+bool bpf_map_equal_kptr_off_tab(const struct bpf_map *map_a, const struct bpf_map *map_b);
+void bpf_map_free_kptrs(struct bpf_map *map, void *map_value);
+
struct bpf_map *bpf_map_get(u32 ufd);
struct bpf_map *bpf_map_get_with_uref(u32 ufd);
struct bpf_map *__bpf_map_get(struct fd f);
@@ -1692,7 +1748,8 @@ int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
u32 *next_btf_id, enum bpf_type_flag *flag);
bool btf_struct_ids_match(struct bpf_verifier_log *log,
const struct btf *btf, u32 id, int off,
- const struct btf *need_btf, u32 need_type_id);
+ const struct btf *need_btf, u32 need_type_id,
+ bool strict);
int btf_distill_func_proto(struct bpf_verifier_log *log,
struct btf *btf,
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 3a9d2d7cc6b7..1f1e7f2ea967 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -523,8 +523,7 @@ int check_ptr_off_reg(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg, int regno);
int check_func_arg_reg_off(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg, int regno,
- enum bpf_arg_type arg_type,
- bool is_release_func);
+ enum bpf_arg_type arg_type);
int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
u32 regno);
int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 36bc09b8e890..2611cea2c2b6 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -17,6 +17,7 @@ enum btf_kfunc_type {
BTF_KFUNC_TYPE_ACQUIRE,
BTF_KFUNC_TYPE_RELEASE,
BTF_KFUNC_TYPE_RET_NULL,
+ BTF_KFUNC_TYPE_KPTR_ACQUIRE,
BTF_KFUNC_TYPE_MAX,
};
@@ -35,11 +36,19 @@ struct btf_kfunc_id_set {
struct btf_id_set *acquire_set;
struct btf_id_set *release_set;
struct btf_id_set *ret_null_set;
+ struct btf_id_set *kptr_acquire_set;
};
struct btf_id_set *sets[BTF_KFUNC_TYPE_MAX];
};
};
+struct btf_id_dtor_kfunc {
+ u32 btf_id;
+ u32 kfunc_btf_id;
+};
+
+typedef void (*btf_dtor_kfunc_t)(void *);
+
extern const struct file_operations btf_fops;
void btf_get(struct btf *btf);
@@ -123,6 +132,8 @@ bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
u32 expected_offset, u32 expected_size);
int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t);
int btf_find_timer(const struct btf *btf, const struct btf_type *t);
+struct bpf_map_value_off *btf_parse_kptrs(const struct btf *btf,
+ const struct btf_type *t);
bool btf_type_is_void(const struct btf_type *t);
s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind);
const struct btf_type *btf_type_skip_modifiers(const struct btf *btf,
@@ -344,6 +355,9 @@ bool btf_kfunc_id_set_contains(const struct btf *btf,
enum btf_kfunc_type type, u32 kfunc_btf_id);
int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
const struct btf_kfunc_id_set *s);
+s32 btf_find_dtor_kfunc(struct btf *btf, u32 btf_id);
+int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors, u32 add_cnt,
+ struct module *owner);
#else
static inline const struct btf_type *btf_type_by_id(const struct btf *btf,
u32 type_id)
@@ -367,6 +381,15 @@ static inline int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
{
return 0;
}
+static inline s32 btf_find_dtor_kfunc(struct btf *btf, u32 btf_id)
+{
+ return -ENOENT;
+}
+static inline int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors,
+ u32 add_cnt, struct module *owner)
+{
+ return 0;
+}
#endif
#endif
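The btf.h hunk above exposes register_btf_id_dtor_kfuncs() so that a kptr's
destructor kfunc can be looked up when a map holding it is destroyed. Below
is a module-side sketch of registering one, modeled on how this series' test
kfuncs register theirs; struct foo and foo_release are hypothetical names:

	#include <linux/btf.h>
	#include <linux/btf_ids.h>
	#include <linux/module.h>

	/* Resolve the BTF IDs of the kptr-eligible type and its dtor. */
	BTF_ID_LIST(foo_dtor_ids)
	BTF_ID(struct, foo)		/* type stored as a referenced kptr */
	BTF_ID(func, foo_release)	/* its release/destructor kfunc */

	static const struct btf_id_dtor_kfunc foo_dtors[] = {
		{
			.btf_id	      = foo_dtor_ids[0],
			.kfunc_btf_id = foo_dtor_ids[1],
		},
	};

	static int __init foo_mod_init(void)
	{
		/* Lets map teardown release any foo kptrs still stored
		 * in map values when the map is freed.
		 */
		return register_btf_id_dtor_kfuncs(foo_dtors,
						   ARRAY_SIZE(foo_dtors),
						   THIS_MODULE);
	}
	module_init(foo_mod_init);
	MODULE_LICENSE("GPL");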
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d14b10b85e51..444fe6f1cf35 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5143,6 +5143,17 @@ union bpf_attr {
* The **hash_algo** is returned on success,
* **-EOPNOTSUP** if the hash calculation failed or **-EINVAL** if
* invalid arguments are passed.
+ *
+ * void *bpf_kptr_xchg(void *map_value, void *ptr)
+ * Description
+ * Exchange kptr at pointer *map_value* with *ptr*, and return the
+ * old value. *ptr* can be NULL, otherwise it must be a referenced
+ * pointer which will be released when this helper is called.
+ * Return
The old value of kptr (which can be NULL). The returned pointer,
if not NULL, is a reference which must be released using its
+ * corresponding release function, or moved into a BPF map before
+ * program exit.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -5339,6 +5350,7 @@ union bpf_attr {
FN(copy_from_user_task), \
FN(skb_set_tstamp), \
FN(ima_file_hash), \
+ FN(kptr_xchg), \
/* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper