|
Any process is able to send netlink messages with invalid types.
Make the warning rate-limited to prevent too much log spam.
The warning is supposed to help to find misbehaving programs, so
print the triggering command name and pid.
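A minimal sketch of the idea (the message format here is illustrative, not the patch's actual wording; pr_warn_ratelimited, task_pid_nr() and current->comm are standard kernel facilities):
  pr_warn_ratelimited("SELinux: unrecognized netlink message:"
                      " protocol=%hu nlmsg_type=%hu pid=%d comm=%s\n",
                      sk->sk_protocol, nlh->nlmsg_type,
                      task_pid_nr(current), current->comm);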
Reported-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
[PM: subject line tweak to make checkpatch.pl happy]
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Make validatetrans decisions available through selinuxfs.
"/validatetrans" is added to selinuxfs for this purpose.
This functionality is needed by file system servers
implemented in userspace or kernelspace without the VFS
layer.
Writing "$oldcontext $newcontext $tclass $taskcontext"
to /validatetrans is expected to return 0 if the transition
is allowed and -EPERM otherwise.
Signed-off-by: Andrew Perepechko <anserper@ya.ru>
CC: andrew.perepechko@seagate.com
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
When gfs2 releases the glock of an inode, it must invalidate all
information cached for that inode, including the page cache and acls.
Use the new security_inode_invalidate_secctx hook to also invalidate
security labels in that case. These items will be reread from disk
when needed after reacquiring the glock.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Bob Peterson <rpeterso@redhat.com>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: cluster-devel@redhat.com
[PM: fixed spelling errors and description line lengths]
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
When fetching an inode's security label, check if it is still valid, and
try reloading it if it is not. Reloading will fail when we are in RCU
context which doesn't allow sleeping, or when we can't find a dentry for
the inode. (Reloading happens via iop->getxattr which takes a dentry
parameter.) When reloading fails, continue using the old, invalid
label.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Add a hook to invalidate an inode's security label when the cached
information becomes invalid.
Add the new hook in selinux: set a flag when a security label becomes
invalid.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: James Morris <james.l.morris@oracle.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Add functions dentry_security and inode_security for accessing
inode->i_security. These functions initially don't do much, but they
will later be used to revalidate the security labels when necessary.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Make the inode argument of the inode_getsecid hook non-const so that we
can use it to revalidate invalid security labels.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Make the inode argument of the inode_getsecurity hook non-const so that
we can use it to revalidate invalid security labels.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
|
|
TPM2 supports authorization policies, which are essentially
combinational logic statements representing the conditions under which
the data can be unsealed based on the TPM state. This patch enables the
use of authorization policies to seal trusted keys.
The following two new options have been added for trusted keys:
* 'policydigest=': provide an auth policy digest for sealing.
* 'policyhandle=': provide a policy session handle for unsealing.
If the 'hash=' option is supplied after the 'policydigest=' option,
this will result in an error because the option state would become
mixed.
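A usage sketch (handle values, blob file and digest are placeholders; note that 'hash=' must precede 'policydigest='):
  # seal under an auth policy digest
  keyctl add trusted kmk "new 32 keyhandle=0x81000001 \
    hash=sha256 policydigest=<hex digest>" @u
  # unseal with the handle of a policy session satisfying the policy
  keyctl add trusted kmk "load `cat kmk.blob` policyhandle=0x03000000" @u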
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
Added a 'hash=' option for selecting the hash algorithm used with the
add_key() syscall, along with documentation for it.
Added entry for sm3-256 to the following tables in order to support
TPM_ALG_SM3_256:
* hash_algo_name
* hash_digest_size
Includes support for the following hash algorithms:
* sha1
* sha256
* sha384
* sha512
* sm3-256
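For example (the keyhandle is a placeholder), selecting sha256 when sealing a new trusted key:
  keyctl add trusted kmk "new 32 keyhandle=0x81000001 hash=sha256" @u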
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: James Morris <james.l.morris@oracle.com>
Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
The trusted keys option parsing allows specifying the same option
multiple times, with the last value specified being used.
This is problematic because:
* There is no gain.
* It makes it complicated to specify options that depend on other
options.
This patch changes the behavior so that each option can be specified
only once.
Reported-by: James Morris <jmorris@namei.org>
Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
When the TPM response reception is interrupted in the wait_event_interruptible
call, the TPM is still busy processing the command and will only deliver the
response later. So we have to wait for an outstanding response before sending
a new request, to avoid trying to put a 2nd request into the CRQ. Also reset
res_len before sending a command, so we end up in that
wait_event_interruptible() waiting for the response rather than reading the
command packet as a response.
The easiest way to trigger the problem is to run the following:
cd /sys/devices/vio/71000004
while :; do cat pcrs >/dev/null; done
And press Ctrl-C. This will then display an error
tpm_ibmvtpm 71000004: tpm_transmit: tpm_recv: error -4
followed by several other errors once interaction with the TPM resumes.
tpm_ibmvtpm 71000004: A TPM error (101) occurred attempting to determine the number of PCRS.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Tested-by: Hon Ching(Vicky) Lo <honclo@linux.vnet.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Reviewed-by: Ashley Lai <ashley@ashleylai.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
Auto-probing doesn't work with shared interrupts, and the auto-detection
interrupt range is x86-only.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Tested-by: Scot Doyle <lkml14@scotdoyle.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
Now that the probe and run cases are merged together we can use a
much simpler setup flow where probe and normal setup are done with
exactly the same code.
Since the new flow always calls tpm_gen_interrupt to confirm the IRQ
there is also no longer any need to call tpm_get_timeouts twice.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Tested-by: Scot Doyle <lkml14@scotdoyle.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
The new code that works directly in tpm_tis_send is able to handle
IRQ probing duties as well, so just use it for everything.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Tested-by: Scot Doyle <lkml14@scotdoyle.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
IRQ probing needs to know that the TPM is working before trying to
probe, so move tpm_get_timeouts() to the top of the tpm_tis_init().
This has the advantage of also getting the correct timeouts loaded
before doing IRQ probing.
All the timeout handling code is moved to tpm_get_timeouts() in order to
remove duplicate code in tpm_tis and tpm_crb.
[jarkko.sakkinen@linux.intel.com: squashed two patches together and
improved the commit message.]
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Tested-by: Scot Doyle <lkml14@scotdoyle.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
This should be done very early, before anything could possibly
cause the TPM to generate an interrupt. If the IRQ line is shared
with another driver, an interrupt arriving before our handler is
set up would be very bad.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Tested-by: Scot Doyle <lkml14@scotdoyle.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
The interrupt is always allocated with devm_request_irq so it
must always be freed with devm_free_irq.
Fixes: 448e9c55c12d ("tpm_tis: verify interrupt during init")
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
The Smack security handler for the sendmsg() syscall
is vulnerable to a type confusion issue that
can allow privilege escalation to root
or cause a denial of service.
A malicious attacker can create a socket of one
type, for example AF_UNIX, and pass it into
sendmsg() while claiming that it is an
AF_INET socket.
Remedy: do not trust user-supplied data.
Proposed fix below.
Signed-off-by: Roman Kubiak <r.kubiak@samsung.com>
Signed-off-by: Mateusz Fruba <m.fruba@samsung.com>
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
|
|
The Kconfig currently controlling compilation of this code is:
ima/Kconfig:config IMA_MOK_KEYRING
ima/Kconfig: bool "Create IMA machine owner keys (MOK) and blacklist keyrings"
...meaning that it currently is not being built as a module by anyone.
Let's remove the couple of traces of modularity so that when reading the
driver there is no doubt it really is builtin-only.
Since module_init translates to device_initcall in the non-modular
case, the init ordering remains unchanged with this commit.
Cc: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: linux-ima-devel@lists.sourceforge.net
Cc: linux-ima-user@lists.sourceforge.net
Cc: linux-security-module@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
While creating a temporary list of new rules, the ima_appraise flag is
updated, but not reverted on failure to append the new rules to the
existing policy. This patch defines temp_ima_appraise flag. Only when
the new rules are appended to the policy is the flag updated.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Acked-by: Petko Manolov <petkan@mip-labs.com>
|
|
Set KEY_FLAG_KEEP on the .ima_blacklist keyring to prevent userspace
from removing keys from the keyring.
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
Userspace should not be allowed to remove keys from certain keyrings
(eg. the blacklist), though the keys themselves can expire.
This patch defines a new key flag named KEY_FLAG_KEEP to prevent
userspace from being able to unlink, revoke, invalidate or time
out a key on a keyring. When this flag is set on the keyring, all
keys subsequently added are flagged.
In addition, when this flag is set, the keyring itself can not be
cleared.
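The expected userspace-visible effect, sketched with keyutils (the key id is a placeholder):
  # both fail with EPERM once the keyring carries KEY_FLAG_KEEP
  keyctl unlink <key_id> %keyring:.ima_blacklist
  keyctl clear %keyring:.ima_blacklist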
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: David Howells <dhowells@redhat.com>
|
|
It is often useful to be able to read back the IMA policy. It is
even more important after introducing CONFIG_IMA_WRITE_POLICY.
This option allows the root user to see the current policy rules.
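With securityfs mounted in the usual place, reading the policy back is simply:
  cat /sys/kernel/security/ima/policy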
Signed-off-by: Zbigniew Jasinski <z.jasinski@samsung.com>
Signed-off-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
This option creates the IMA MOK and blacklist keyrings. IMA MOK is an
intermediate keyring that sits between the .system and .ima keyrings,
effectively forming a simple CA hierarchy. To successfully import a key
into .ima_mok it must be signed by a CA key in the .system keyring.
In turn, any key that goes into the .ima keyring must be signed by a CA
key in either the .system or .ima_mok keyrings. IMA MOK is empty at
kernel boot.
The IMA blacklist keyring contains all revoked IMA keys. It is consulted
before any other keyring. If the search is successful, the requested
operation is rejected and an error is returned to the caller.
Signed-off-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
The new rules get appended to the original policy, forming a queue.
The new rules are first added to a temporary list, which on error
is released without disturbing normal IMA operations. On
success both lists (the current policy and the new rules) are spliced.
Since IMA policy reads are many orders of magnitude more numerous than
writes, the match code is RCU protected. The updater side also performs
the list splice in an RCU-safe manner.
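A sketch of the resulting pattern (helper and list names are illustrative, not the exact ones in ima_policy.c):
  /* read side: lockless rule matching under RCU */
  rcu_read_lock();
  list_for_each_entry_rcu(entry, &ima_rules_list, list)
          if (rule_matches(entry, inode, mask))  /* hypothetical helper */
                  break;
  rcu_read_unlock();

  /* update side: build tmp_rules completely, then publish it */
  list_splice_tail_init_rcu(&tmp_rules, &ima_rules_list, synchronize_rcu);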
Signed-off-by: Petko Manolov <petkan@mip-labs.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
The newly added EVM_LOAD_X509 code can be configured even if
CONFIG_EVM is disabled, but that causes a link error:
security/built-in.o: In function `integrity_load_keys':
digsig_asymmetric.c:(.init.text+0x400): undefined reference to `evm_load_x509'
This adds a Kconfig dependency to ensure it is only enabled when
CONFIG_EVM is set as well.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 2ce523eb8976 ("evm: load x509 certificate from the kernel")
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
The EVM verification status is cached in iint->evm_status and if it
was successful, never re-verified again when IMA passes the 'iint' to
evm_verifyxattr().
When file attributes or extended attributes change, we may wish to
re-verify EVM integrity as well. For example, after setting a digital
signature we may need to re-verify the signature and update the
iint->flags that there is an EVM signature.
This patch enables that by resetting evm_status to the
INTEGRITY_UNKNOWN state.
Changes in v2:
* Flag setting moved to EVM layer
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
A crypto HW kernel module can possibly initialize the EVM key from the
kernel __init code to enable EVM before calling the 'init' process.
This patch provides a function evm_set_key() to set the EVM key
directly without using the KEY subsystem.
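A sketch of how such a driver might use it (the hardware side is hypothetical; evm_set_key() and EVM_MAX_KEY_SIZE come from <linux/evm.h>):
  #include <linux/evm.h>
  #include <linux/init.h>
  #include <linux/string.h>

  static int __init hw_crypto_init(void)
  {
          u8 key[EVM_MAX_KEY_SIZE];
          int err;

          hw_read_hmac_key(key, sizeof(key));  /* hypothetical HW call */
          err = evm_set_key(key, sizeof(key)); /* bypasses the KEY subsystem */
          memzero_explicit(key, sizeof(key));
          return err;
  }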
Changes in v4:
* kernel-doc style for evm_set_key
Changes in v3:
* error reporting moved to evm_set_key
* EVM_INIT_HMAC moved to evm_set_key
* added bitop to prevent key setting race
Changes in v2:
* use size_t for key size instead of signed int
* provide EVM_MAX_KEY_SIZE macro in <linux/evm.h>
* provide EVM_MIN_KEY_SIZE macro in <linux/evm.h>
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
In order to enable EVM before starting the 'init' process,
evm_initialized needs to be non-zero. Previously non-zero indicated
that the HMAC key was loaded. When EVM loads the X509 before calling
'init', with this patch it is now possible to enable EVM to start
signature based verification.
This patch defines bits to enable EVM if a key of any type is loaded.
Changes in v3:
* print error message if key is not set
Changes in v2:
* EVM_STATE_KEY_SET replaced by EVM_INIT_HMAC
* EVM_STATE_X509_SET replaced by EVM_INIT_X509
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
This patch defines a configuration option and the evm_load_x509() hook
to load an X509 certificate onto the EVM trusted kernel keyring.
Changes in v4:
* Patch description updated
Changes in v3:
* Removed EVM_X509_PATH definition. CONFIG_EVM_X509_PATH is used
directly.
Changes in v2:
* default key path changed to /etc/keys
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
The existing file receive hook checks for access on
the file inode even for UDS. This is not right, as
the inode is not used by Smack to make access checks
for sockets. This change checks for an appropriate
access relationship between the receiving (current)
process and the socket. If the process can't write
to the socket's send label, or the socket's receive
label can't write to the process, the receive fails.
This will allow the legitimate cases, where the
socket sender and socket receiver can freely communicate.
Only strangely set socket labels should cause a problem.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
|
|
commit 07b133e6060b ("char/tpm: simplify duration calculation and
eliminate smatch warning.") includes a misleading test that is always
false. The tpm_ordinal_duration table is only valid for TPM_PROTECTED
ordinals where the higher 16 bits are all 0, anyway.
Signed-off-by: Martin Wilck <Martin.Wilck@ts.fujitsu.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
|
|
Require all keys added to the EVM keyring be signed by an
existing trusted key on the system trusted keyring.
This patch also switches IMA to use integrity_init_keyring().
Changes in v3:
* Added 'init_keyring' config based variable to skip initializing
keyring instead of using __integrity_init_keyring() wrapper.
* Added dependency back to CONFIG_IMA_TRUSTED_KEYRING
Changes in v2:
* Replace CONFIG_EVM_TRUSTED_KEYRING with IMA and EVM common
CONFIG_INTEGRITY_TRUSTED_KEYRING configuration option
* Deprecate CONFIG_IMA_TRUSTED_KEYRING but keep it for config
file compatibility. (Mimi Zohar)
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
Adjust the kmem_cache_alloc_bulk API before we have any real users.
Adjust the API to return type 'int' instead of the previous type 'bool'.
This is done to allow future extension of the bulk alloc API.
A future extension could be to allow SLUB to stop at a page boundary, when
specified by a flag, and then return the number of objects.
The advantage of this approach is that it would make it easier to have
bulk alloc run without local IRQs disabled, using a cmpxchg that "steals"
the entire c->freelist or page->freelist. To avoid overshooting we would
stop processing at a slab-page boundary; else we always end up returning
some objects at the cost of another cmpxchg.
To stay compatible with future users of this API linking against an older
kernel when using the new flag, we need to return the number of allocated
objects with this API change.
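A usage sketch of the adjusted API (cachep is an existing kmem_cache; at this point in the series the semantics are all-or-nothing, returning the requested count on success and 0 on failure):
  void *objs[16];
  int n;

  n = kmem_cache_alloc_bulk(cachep, GFP_KERNEL, ARRAY_SIZE(objs), objs);
  if (!n)
          return -ENOMEM;
  /* ... use the objects ... */
  kmem_cache_free_bulk(cachep, n, objs);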
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The initial implementation missed kmem cgroup support in the
kmem_cache_free_bulk() call; add this.
If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough
not to add any asm code.
Incoming bulk free objects can belong to different kmem cgroups, and the
object free call can happen at a later point outside the memcg context.
Thus, we need to keep the original kmem_cache to correctly verify whether
a memcg object matches against its "root_cache" (s->memcg_params.root_cache).
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The call slab_pre_alloc_hook() interacts with kmemcg and is not allowed to
be called several times inside the bulk alloc for loop, due to the call to
memcg_kmem_get_cache().
This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache.
As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be able
to handle an array of objects.
A subtle detail: the loop iterator "i" in slab_post_alloc_hook() must have
the same type (size_t) as the size argument. This makes it easier for the
compiler to realize that it can remove the loop, when all debug statements
inside the loop evaluate to nothing. Note, this is only an issue because
the kernel is compiled with the GCC option -fno-strict-overflow.
In slab_alloc_node() the compiler inlines and optimizes the invocation of
slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and
accessing the object directly.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reported-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This change focuses on improving the speed of object freeing in the
"slowpath" of kmem_cache_free_bulk.
The calls slab_free (fastpath) and __slab_free (slowpath) have been
extended with support for bulk free, which amortizes the overhead of
the (locked) cmpxchg_double.
To use the new bulking feature, we build what I call a detached
freelist. The detached freelist takes advantage of three properties:
1) the free function call owns the object that is about to be freed,
thus writing into this memory is synchronization-free.
2) many freelists can co-exist side-by-side in the same slab-page,
each with a separate head pointer.
3) it is the visibility of the head pointer that needs synchronization.
Given these properties, the brilliant part is that the detached
freelist can be constructed without any need for synchronization. The
freelist is constructed directly in the page objects, without any
synchronization needed. The detached freelist is allocated on the
stack of the function call kmem_cache_free_bulk. Thus, the freelist
head pointer is not visible to other CPUs.
All objects in a SLUB freelist must belong to the same slab-page.
Thus, constructing the detached freelist is about matching objects
that belong to the same slab-page. The bulk free array is scanned in
a progressive manner with a limited look-ahead facility.
Kmem debug support is handled in the call to slab_free().
Notice kmem_cache_free_bulk no longer needs to disable IRQs; this
slowed down bulk free of a single object by only approx 3 cycles.
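Conceptually, the detached freelist is just (field names approximate those in mm/slub.c):
  /* lives on the caller's stack, so the head is invisible to other CPUs */
  struct detached_freelist {
          struct page *page; /* slab-page that all objects belong to */
          void *tail;        /* last object on the detached freelist */
          void *freelist;    /* head pointer, built without synchronization */
          int cnt;           /* number of objects on the freelist */
  };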
Performance data:
Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz
SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns
To get stable and comparable numbers, the kernel has been booted with
"slab_nomerge" (this also improves performance for larger bulk sizes).
Performance data, compared against fallback bulking:
bulk - fallback bulk - improvement with this patch
1 - 62 cycles(tsc) 15.662 ns - 49 cycles(tsc) 12.407 ns- improved 21.0%
2 - 55 cycles(tsc) 13.935 ns - 30 cycles(tsc) 7.506 ns - improved 45.5%
3 - 53 cycles(tsc) 13.341 ns - 23 cycles(tsc) 5.865 ns - improved 56.6%
4 - 52 cycles(tsc) 13.081 ns - 20 cycles(tsc) 5.048 ns - improved 61.5%
8 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.659 ns - improved 64.0%
16 - 49 cycles(tsc) 12.412 ns - 17 cycles(tsc) 4.495 ns - improved 65.3%
30 - 49 cycles(tsc) 12.484 ns - 18 cycles(tsc) 4.533 ns - improved 63.3%
32 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.707 ns - improved 64.0%
34 - 96 cycles(tsc) 24.243 ns - 23 cycles(tsc) 5.976 ns - improved 76.0%
48 - 83 cycles(tsc) 20.818 ns - 21 cycles(tsc) 5.329 ns - improved 74.7%
64 - 74 cycles(tsc) 18.700 ns - 20 cycles(tsc) 5.127 ns - improved 73.0%
128 - 90 cycles(tsc) 22.734 ns - 27 cycles(tsc) 6.833 ns - improved 70.0%
158 - 99 cycles(tsc) 24.776 ns - 30 cycles(tsc) 7.583 ns - improved 69.7%
250 - 104 cycles(tsc) 26.089 ns - 37 cycles(tsc) 9.280 ns - improved 64.4%
Performance data, compared current in-kernel bulking:
bulk - curr in-kernel - improvement with this patch
1 - 46 cycles(tsc) - 49 cycles(tsc) - improved (cycles:-3) -6.5%
2 - 27 cycles(tsc) - 30 cycles(tsc) - improved (cycles:-3) -11.1%
3 - 21 cycles(tsc) - 23 cycles(tsc) - improved (cycles:-2) -9.5%
4 - 18 cycles(tsc) - 20 cycles(tsc) - improved (cycles:-2) -11.1%
8 - 17 cycles(tsc) - 18 cycles(tsc) - improved (cycles:-1) -5.9%
16 - 18 cycles(tsc) - 17 cycles(tsc) - improved (cycles: 1) 5.6%
30 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
32 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
34 - 78 cycles(tsc) - 23 cycles(tsc) - improved (cycles:55) 70.5%
48 - 60 cycles(tsc) - 21 cycles(tsc) - improved (cycles:39) 65.0%
64 - 49 cycles(tsc) - 20 cycles(tsc) - improved (cycles:29) 59.2%
128 - 69 cycles(tsc) - 27 cycles(tsc) - improved (cycles:42) 60.9%
158 - 79 cycles(tsc) - 30 cycles(tsc) - improved (cycles:49) 62.0%
250 - 86 cycles(tsc) - 37 cycles(tsc) - improved (cycles:49) 57.0%
Performance with normal SLUB merging is significantly slower for
larger bulking. This is believed to (primarily) be an effect of not
having to share the per-CPU data-structures, as tuning per-CPU size
can achieve similar performance.
bulk - slab_nomerge - normal SLUB merge
1 - 49 cycles(tsc) - 49 cycles(tsc) - merge slower with cycles:0
2 - 30 cycles(tsc) - 30 cycles(tsc) - merge slower with cycles:0
3 - 23 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:0
4 - 20 cycles(tsc) - 20 cycles(tsc) - merge slower with cycles:0
8 - 18 cycles(tsc) - 18 cycles(tsc) - merge slower with cycles:0
16 - 17 cycles(tsc) - 17 cycles(tsc) - merge slower with cycles:0
30 - 18 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:5
32 - 18 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:4
34 - 23 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:-1
48 - 21 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:1
64 - 20 cycles(tsc) - 48 cycles(tsc) - merge slower with cycles:28
128 - 27 cycles(tsc) - 57 cycles(tsc) - merge slower with cycles:30
158 - 30 cycles(tsc) - 59 cycles(tsc) - merge slower with cycles:29
250 - 37 cycles(tsc) - 56 cycles(tsc) - merge slower with cycles:19
Joint work with Alexander Duyck.
[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c
[akpm@linux-foundation.org: BUG_ON -> WARN_ON;return]
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Make it possible to free a freelist with several objects by adjusting
the API of slab_free() and __slab_free() to have head, tail and an
objects counter (cnt).
Tail being NULL indicates a single-object free of the head object. This
allows compiler inline constant propagation in slab_free() and
slab_free_freelist_hook() to avoid adding any overhead in the single
object free case.
This allows a freelist with several objects (all within the same
slab-page) to be freed using a single locked cmpxchg_double in
__slab_free() and with an unlocked cmpxchg_double in slab_free().
Object debugging on the free path is also extended to handle these
freelists. When CONFIG_SLUB_DEBUG is enabled it will also detect if
objects don't belong to the same slab-page.
These changes are needed for the next patch to bulk free the detached
freelists it introduces and constructs.
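Per this description, the adjusted internal call reads roughly as:
  static __always_inline void slab_free(struct kmem_cache *s,
                  struct page *page, void *head, void *tail,
                  int cnt, unsigned long addr);

  /* single-object free: tail == NULL lets the compiler drop the loop */
  slab_free(s, page, object, NULL, 1, _RET_IP_);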
Micro benchmarking showed no performance reduction due to this change,
when debugging is turned off (compiled with CONFIG_SLUB_DEBUG).
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Adjust the linker script and map_pages() to map kernel text and data on
physical 1MB huge/large pages.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
This patch adds huge page support to allow userspace to allocate huge
pages and to use the hugetlbfs filesystem on 32- and 64-bit Linux
kernels. A later patch will add kernel support to map kernel text and
data on huge pages.
The only requirement is that the kernel needs to be compiled for a
PA8X00 CPU (PA2.0 architecture). Older PA1.X CPUs do not support
variable page sizes. 64-bit kernels are compiled for PA2.0 by default.
Technically, on parisc, multiple physical huge pages may be needed to
emulate standard 2MB huge pages.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Use the 22bit instead of the 17bit branch instruction on a 64bit kernel
to reach the do_syscall_trace_exit function from the gateway page.
A huge page enabled kernel may need the additional branch distance bits.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
For the 64-bit kernel the initial 16 MB of kernel memory might become
too small if you build a kernel with many modules built-in and with
kernel text and data areas mapped on huge pages.
This patch increases the initial mapping to 32MB for 64-bit kernels and
keeps 16MB for 32-bit kernels.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
A fault vector on parisc needs to be 2K aligned. Furthermore the
checksum of the fault vector needs to sum up to 0 which is being
calculated and written at runtime.
Up to now we aligned both PA20 and PA11 fault vectors on the same 4K
page in order to easily write the checksum after having mapped the
kernel read-only (by mapping this page only as read-write).
But when we want to map the kernel text and data on huge pages this
makes things harder.
So, simplify it by aligning both fault vectors on 2K boundaries and
write the checksum before we map the page read-only.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Huge pages on parisc will have the same size as one pmd table: 2MB on
a 64-bit kernel with 4K kernel pages, and 4MB on a 32-bit kernel with
4K kernel pages.
Since parisc does not physically support a 2MB huge page size, emulate
it with two consecutive 1MB pages instead. Keeping the same huge
page size as one pmd will allow us to add transparent huge page support
later on.
Bit 21 in the pte flags was unused and will now be used to mark a page
as huge page (_PAGE_HPAGE_BIT).
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Drop the MADV_xxK_PAGES flags, which were never used and were from a proposed
API which was never integrated into the generic Linux kernel code.
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
fsl8250_handle_irq is now used by the of_serial driver, and that fails
if it is a loadable module:
ERROR: "fsl8250_handle_irq" [drivers/tty/serial/of_serial.ko] undefined!
This exports the symbol to avoid randconfig errors.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: d43b54d269d2 ("serial: Enable Freescale 16550 workaround on arm")
Cc: Scott Wood <scottwood@freescale.com>
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
8250_mid uses rational_best_approximation() function, so the
driver needs to select CONFIG_RATIONAL option.
This fixes build error when CONFIG_RATIONAL is not enabled:
drivers/built-in.o: In function `mid8250_set_termios':
8250_mid.c:(.text+0x10169a): undefined reference to `rational_best_approximation'
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|