Commit 2e7199bd authored by David S. Miller


Daniel Borkmann says:

====================
pull-request: bpf-next 2020-08-04

The following pull-request contains BPF updates for your *net-next* tree.

We've added 73 non-merge commits during the last 9 day(s) which contain
a total of 135 files changed, 4603 insertions(+), 1013 deletions(-).

The main changes are:

1) Implement bpf_link support for XDP. Also add LINK_DETACH operation for the BPF
   syscall allowing processes with BPF link FD to force-detach, from Andrii Nakryiko.

2) Add BPF iterator for map elements and to iterate all BPF programs for efficient
   in-kernel inspection, from Yonghong Song and Alexei Starovoitov.

3) Separate bpf_get_{stack,stackid}() helpers for perf events in BPF to avoid
   unwinder errors, from Song Liu.

4) Allow cgroup local storage map to be shared between programs on the same
   cgroup. Also extend BPF selftests with coverage, from YiFei Zhu.

5) Add BPF exception tables to ARM64 JIT in order to be able to JIT BPF_PROBE_MEM
   load instructions, from Jean-Philippe Brucker.

6) Follow-up fixes on BPF socket lookup in combination with reuseport group
   handling. Also add related BPF selftests, from Jakub Sitnicki.

7) Allow to use socket storage in BPF_PROG_TYPE_CGROUP_SOCK-typed programs for
   socket create/release as well as bind functions, from Stanislav Fomichev.

8) Fix an info leak in xsk_getsockopt() when retrieving XDP stats via old struct
   xdp_statistics, from Peilin Ye.

9) Fix PT_REGS_RC{,_CORE}() macros in libbpf for MIPS arch, from Jerry Crunchtime.

10) Extend BPF kernel test infra with skb->family and skb->{local,remote}_ip{4,6}
    fields and allow user space to specify skb->dev via ifindex, from Dmitry Yakunin.

11) Fix a bpftool segfault due to a missing program type name and make it more
    robust against similar gaps in future, from Quentin Monnet.

12) Consolidate cgroup helper functions across selftests and fix a v6 localhost
    resolver issue, from John Fastabend.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
parents 76769c38 21594c44
Documentation/bpf/index.rst  +15 −6
@@ -7,8 +7,8 @@ Filter) facility, with a focus on the extended BPF version (eBPF).
 
 This kernel side documentation is still work in progress. The main
 textual documentation is (for historical reasons) described in
-`Documentation/networking/filter.rst`_, which describe both classical
-and extended BPF instruction-set.
+:ref:`networking-filter`, which describe both classical and extended
+BPF instruction-set.
 The Cilium project also maintains a `BPF and XDP Reference Guide`_
 that goes into great technical depth about the BPF Architecture.
 
@@ -48,6 +48,15 @@ Program types
   bpf_lsm


Map types
=========

.. toctree::
   :maxdepth: 1

   map_cgroup_storage


Testing and debugging BPF
=========================

@@ -59,7 +68,7 @@ Testing and debugging BPF


 .. Links:
-.. _Documentation/networking/filter.rst: ../networking/filter.txt
+.. _networking-filter: ../networking/filter.rst
 .. _man-pages: https://www.kernel.org/doc/man-pages/
-.. _bpf(2): http://man7.org/linux/man-pages/man2/bpf.2.html
-.. _BPF and XDP Reference Guide: http://cilium.readthedocs.io/en/latest/bpf/
+.. _bpf(2): https://man7.org/linux/man-pages/man2/bpf.2.html
+.. _BPF and XDP Reference Guide: https://docs.cilium.io/en/latest/bpf/
Documentation/bpf/map_cgroup_storage.rst  +169 −0
.. SPDX-License-Identifier: GPL-2.0-only
.. Copyright (C) 2020 Google LLC.

===========================
BPF_MAP_TYPE_CGROUP_STORAGE
===========================

The ``BPF_MAP_TYPE_CGROUP_STORAGE`` map type represents a local fixed-size
storage. It is only available with ``CONFIG_CGROUP_BPF``, and to programs that
attach to cgroups, which are made available by the same Kconfig. The
storage is identified by the cgroup the program is attached to.

The map provides local storage at the cgroup that the BPF program is attached
to. It offers faster and simpler access than a general-purpose hash table,
which would perform a hash table lookup and require users to track live
cgroups on their own.

This document describes the usage and semantics of the
``BPF_MAP_TYPE_CGROUP_STORAGE`` map type. Some of its behaviors were changed in
Linux 5.9, and this document describes the differences.

Usage
=====

The map uses a key of type ``__u64 cgroup_inode_id`` or
``struct bpf_cgroup_storage_key``, declared in ``linux/bpf.h``::

    struct bpf_cgroup_storage_key {
            __u64 cgroup_inode_id;
            __u32 attach_type;
    };

``cgroup_inode_id`` is the inode id of the cgroup directory.
``attach_type`` is the program's attach type.

Linux 5.9 added support for ``__u64 cgroup_inode_id`` as the key type. When
this key type is used, all attach types of the particular cgroup and map will
share the same storage. Otherwise, if the type is
``struct bpf_cgroup_storage_key``, programs of different attach types
are isolated and see different storages.

To access the storage in a program, use ``bpf_get_local_storage``::

    void *bpf_get_local_storage(void *map, u64 flags)

``flags`` is reserved for future use and must be 0.

There is no implicit synchronization. Storages of ``BPF_MAP_TYPE_CGROUP_STORAGE``
can be accessed by multiple programs across different CPUs, and users should
take care of synchronization themselves. The BPF infrastructure provides
``struct bpf_spin_lock`` to synchronize the storage. See
``tools/testing/selftests/bpf/progs/test_spin_lock.c``.

Examples
========

Usage with key type as ``struct bpf_cgroup_storage_key``::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
            __type(key, struct bpf_cgroup_storage_key);
            __type(value, __u32);
    } cgroup_storage SEC(".maps");

    int program(struct __sk_buff *skb)
    {
            __u32 *ptr = bpf_get_local_storage(&cgroup_storage, 0);
            __sync_fetch_and_add(ptr, 1);

            return 0;
    }

Userspace accessing map declared above::

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    __u32 map_lookup(struct bpf_map *map, __u64 cgrp, enum bpf_attach_type type)
    {
            struct bpf_cgroup_storage_key key = {
                    .cgroup_inode_id = cgrp,
                    .attach_type = type,
            };
            __u32 value;
            bpf_map_lookup_elem(bpf_map__fd(map), &key, &value);
            // error checking omitted
            return value;
    }

Alternatively, using just ``__u64 cgroup_inode_id`` as key type::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
            __type(key, __u64);
            __type(value, __u32);
    } cgroup_storage SEC(".maps");

    int program(struct __sk_buff *skb)
    {
            __u32 *ptr = bpf_get_local_storage(&cgroup_storage, 0);
            __sync_fetch_and_add(ptr, 1);

            return 0;
    }

And userspace::

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    __u32 map_lookup(struct bpf_map *map, __u64 cgrp, enum bpf_attach_type type)
    {
            __u32 value;
            bpf_map_lookup_elem(bpf_map__fd(map), &cgrp, &value);
            // error checking omitted
            return value;
    }

Semantics
=========

``BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE`` is a variant of this map type. The
per-CPU variant has a separate memory region for each CPU in each storage,
while the non-per-CPU variant has a single memory region shared by all CPUs
for each storage.

Prior to Linux 5.9, the lifetime of a storage is precisely per-attachment, and
for a single ``CGROUP_STORAGE`` map, there can be at most one program loaded
that uses the map. A program may be attached to multiple cgroups or have
multiple attach types, and each attach creates a fresh zeroed storage. The
storage is freed upon detach.

There is a one-to-one association between the map of each type (per-CPU and
non-per-CPU) and the BPF program during load verification time. As a result,
each map can only be used by one BPF program and each BPF program can only use
one storage map of each type. Because a map can only be used by one BPF
program, sharing a cgroup's storage between BPF programs was
impossible.

Since Linux 5.9, storage can be shared by multiple programs. When a program is
attached to a cgroup, the kernel creates a new storage only if the map does
not already contain an entry for the cgroup and attach type pair; otherwise
the old storage is reused for the new attachment. If the map is attach type
shared, then attach type is simply ignored during comparison. Storage is freed
only when either the map or the attached cgroup is freed. Detaching does not
directly free the storage, but it may drop the map's reference count to zero,
indirectly freeing all storages in the map.

The map is not associated with any BPF program, thus making sharing possible.
However, the BPF program can still only associate with one map of each type
(per-CPU and non-per-CPU). A BPF program cannot use more than one
``BPF_MAP_TYPE_CGROUP_STORAGE`` or more than one
``BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE``.

In all versions, userspace may use the attach parameters of cgroup and
attach type pair in ``struct bpf_cgroup_storage_key`` as the key to the BPF map
APIs to read or update the storage for a given attachment. For Linux 5.9
attach type shared storages, only the first value in the struct, cgroup inode
id, is used during comparison, so userspace may just specify a ``__u64``
directly.

The storage is bound at attach time. Even if the program is attached to parent
and triggers in child, the storage still belongs to the parent.

Userspace cannot create a new entry in the map or delete an existing entry.
Program test runs always use a temporary storage.
Documentation/networking/filter.rst  +2 −0
.. SPDX-License-Identifier: GPL-2.0

.. _networking-filter:

=======================================================
Linux Socket Filtering aka Berkeley Packet Filter (BPF)
=======================================================
arch/arm64/include/asm/extable.h  +12 −0
@@ -22,5 +22,17 @@ struct exception_table_entry

#define ARCH_HAS_RELATIVE_EXTABLE

#ifdef CONFIG_BPF_JIT
int arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
			      struct pt_regs *regs);
#else /* !CONFIG_BPF_JIT */
static inline
int arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
			      struct pt_regs *regs)
{
	return 0;
}
#endif /* !CONFIG_BPF_JIT */

extern int fixup_exception(struct pt_regs *regs);
#endif
arch/arm64/mm/extable.c  +9 −3
@@ -11,8 +11,14 @@ int fixup_exception(struct pt_regs *regs)
 	const struct exception_table_entry *fixup;
 
 	fixup = search_exception_tables(instruction_pointer(regs));
-	if (fixup)
-		regs->pc = (unsigned long)&fixup->fixup + fixup->fixup;
+	if (!fixup)
+		return 0;
+
+	if (IS_ENABLED(CONFIG_BPF_JIT) &&
+	    regs->pc >= BPF_JIT_REGION_START &&
+	    regs->pc < BPF_JIT_REGION_END)
+		return arm64_bpf_fixup_exception(fixup, regs);
 
-	return fixup != NULL;
+	regs->pc = (unsigned long)&fixup->fixup + fixup->fixup;
+	return 1;
 }