Age | Commit message | Author | Files | Lines (-/+)
2017-11-16selinux: remove unnecessary assignment to subdir-Masahiro Yamada1-1/+0
Makefile.clean descends into $(subdir-y). Dummy assignment to subdir- is meaningless. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Paul Moore <paul@paul-moore.com>
2017-11-16kbuild: specify FORCE in Makefile.headersinst as .PHONY targetMasahiro Yamada1-2/+3
Swap the order of ".PHONY: $(PHONY)" and "PHONY += FORCE" so that FORCE is correctly specified as a .PHONY target. Use a preferred way for specifying $(subdirs) as .PHONY targets. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
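For illustration, a minimal sketch of why the order matters (not the verbatim Makefile.headersinst contents; the goal name is an assumption): prerequisites of the special .PHONY target are expanded as soon as the rule is read, so anything appended to $(PHONY) afterwards is silently ignored.

    PHONY := __headers        # assumed: whatever phony goals the file collects
    # ... rules that append more goals to $(PHONY) ...
    PHONY += FORCE
    FORCE:

    # declare them all only after $(PHONY) is complete
    .PHONY: $(PHONY)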
2017-11-16kbuild: remove redundant mkdir from ./KbuildMasahiro Yamada1-2/+0
These two targets are added to "targets". Their directories are automatically created. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-11-16kbuild: optimize object directory creation for incremental buildMasahiro Yamada1-0/+5
The previous commit largely optimized the object directory creation. We can optimize it more for incremental build. There are already *.cmd files in the output directory. The existing *.cmd files have been picked up by $(wildcard ...). Obviously, directories containing them exist too, so we can skip "mkdir -p". With this, Kbuild runs almost zero "mkdir -p" in incremental building. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
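A condensed sketch of the idea with illustrative variable names (the actual Makefile.build code differs in detail): directories that already hold .cmd files picked up by $(wildcard ...) are known to exist, so only the remaining ones need "mkdir -p".

    existing-dirs := $(sort $(dir $(cmd_files)))                  # already on disk
    missing-dirs  := $(filter-out $(existing-dirs), $(sort $(dir $(targets))))
    ifneq ($(missing-dirs),)
    $(shell mkdir -p $(missing-dirs))
    endif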
2017-11-16kbuild: create object directories simpler and fasterMasahiro Yamada4-30/+6
For the out-of-tree build, scripts/Makefile.build creates output directories, but this operation is not efficient. scripts/Makefile.lib calculates obj-dirs as follows:

    obj-dirs := $(dir $(multi-objs) $(obj-y))

Notice that $(sort ...) is not used here, so the result usually contains as many "./" entries as there are objects. For all of these duplicated paths, the following command is invoked:

    _dummy := $(foreach d,$(obj-dirs), $(shell [ -d $(d) ] || mkdir -p $(d)))

so the costly shell command is run over and over again. I see several points for optimization: [1] Use $(sort ...) to cut down duplicated paths before passing them to the system call. [2] Use a single $(shell ...) instead of repeating it with $(foreach ...); this reduces forking. [3] Calculate obj-dirs more simply: most objects are already accumulated in $(targets), so $(dir $(targets)) is fine and more comprehensive. I also removed ugly code in arch/x86/entry/vdso/Makefile; it is now really unnecessary. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Ingo Molnar <mingo@kernel.org> Tested-by: Douglas Anderson <dianders@chromium.org>
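Roughly what the reworked rule boils down to (a sketch, not the exact resulting hunk):

    # [1]+[3]: one deduplicated directory list, derived from $(targets)
    obj-dirs := $(sort $(dir $(targets)))
    # [2]: a single shell invocation instead of one fork per object
    $(shell mkdir -p $(obj-dirs))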
2017-11-16kbuild: filter-out PHONY targets from "targets"Masahiro Yamada1-1/+1
The variable "targets" contains object paths for which existing .*.cmd files should be included. scripts/Makefile.build automatically adds $(MAKECMDGOALS) to "targets" as follows: targets += $(extra-y) $(MAKECMDGOALS) $(always) The $(MAKECMDGOALS) is a PHONY target in several places. PHONY targets never create .*.cmd files. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-11-16kbuild: remove redundant $(wildcard ...) for cmd_files calculationMasahiro Yamada4-8/+4
I do not see any reason why $(wildcard ...) needs to be called twice for computing cmd_files. Remove the first one. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-11-16kbuild: create directory for make cache only when necessaryMasahiro Yamada1-4/+9
Currently, the existence of $(dir $(make-cache)) is always checked, and the directory is created if it is missing. We can avoid unnecessary system calls with a few tricks:
[1] If KBUILD_SRC is unset, we are building in the source tree, so the output directory checks can be skipped entirely.
[2] If at least one cached value is found, the cache file was included, so its directory obviously exists; skip "mkdir -p".
[3] If the Makefile does not contain any call of __run-and-store, it will not create a cache file, so there is no need to create its directory.
[4] "mkdir -p" should only be invoked by the first call of __run-and-store.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Douglas Anderson <dianders@chromium.org>
2017-11-13sh: select KBUILD_DEFCONFIG depending on ARCHMasahiro Yamada1-2/+6
You cannot select KBUILD_DEFCONFIG depending on any CONFIG option because include/config/auto.conf is not included when building config targets. So, CONFIG_SUPERH32 is never set during configuration, and cayman_defconfig is always chosen. This commit provides a sensible way to choose shx3/cayman_defconfig. arch/sh/Kconfig sets either SUPERH32 or SUPERH64 depending on the ARCH environment variable, as follows:

    config SUPERH32
            def_bool ARCH = "sh"
    ...
    config SUPERH64
            def_bool ARCH = "sh64"

It makes sense to choose the default defconfig by ARCH, as arch/sparc/Makefile does. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
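One way to express that in arch/sh/Makefile, mirroring the sparc approach (a sketch, not necessarily the exact hunk):

    ifeq ($(ARCH),sh64)
    KBUILD_DEFCONFIG := cayman_defconfig
    else
    KBUILD_DEFCONFIG := shx3_defconfig
    endif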
2017-11-13kbuild: fix linker feature test macros when cross compiling with ClangNick Desaulniers1-2/+3
I was not seeing my linker flags getting added when using ld-option while cross compiling with Clang. Upon investigation, this seems to be due to a difference in how GCC and Clang handle cross compilation: GCC is configured at build time to support one backend, which is implicit when compiling, whereas Clang is explicit via the use of `-target <triple>` and ships with all supported backends by default. GNU Make feature test macros that compile and then link will always fail when cross compiling with Clang unless Clang's triple is passed along to the compiler. For example:

    $ clang -x c /dev/null -c -o temp.o
    $ aarch64-linux-android/bin/ld -E temp.o
    aarch64-linux-android/bin/ld: unknown architecture of input file `temp.o' is incompatible with aarch64 output
    aarch64-linux-android/bin/ld: warning: cannot find entry symbol _start; defaulting to 0000000000400078
    $ echo $?
    1

    $ clang -target aarch64-linux-android- -x c /dev/null -c -o temp.o
    $ aarch64-linux-android/bin/ld -E temp.o
    aarch64-linux-android/bin/ld: warning: cannot find entry symbol _start; defaulting to 00000000004002e4
    $ echo $?
    0

This causes conditional checks that invoke $(CC) without the target triple, then $(LD) on the result, to always fail. Suggested-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
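A sketch of the resulting helper: pass the same triple to $(CC) that the kernel build itself uses, so the temporary object matches the architecture $(LD) expects. CLANG_TARGET here is an assumed variable normally derived from CROSS_COMPILE, and the real Kbuild.include change differs in detail.

    CLANG_TARGET := --target=aarch64-linux-android    # assumption, usually from CROSS_COMPILE

    ld-option = $(call try-run,\
            $(CC) $(CLANG_TARGET) -x c /dev/null -c -o "$$TMP"; \
            $(LD) $(1) "$$TMP" -o "$$TMPO",$(1),$(2))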
2017-11-13kbuild: shrink .cache.mk when it exceeds 1000 linesMasahiro Yamada1-0/+6
The cache files are only cleaned away by "make clean". If you keep doing incremental builds, the cache files will grow little by little. This is not a big deal for general use cases because compiler flags do not change very often. However, if you build-test for various architectures, compilers, and kernel configurations, you will soon end up with huge cache files. When the cache file exceeds 1000 lines, shrink it down to 500 with "tail". The Least Recently Added lines are cut (not the Least Recently Used ones). I hope it will work well enough. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Douglas Anderson <dianders@chromium.org>
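A sketch of the trimming step, assuming make-cache holds the path of the cache file (as in the "Add a cache for generated variables" patch below); the real implementation may differ:

    cache-limit := 1000
    cache-keep  := 500
    $(shell if [ -f $(make-cache) ] && \
               [ $$(wc -l < $(make-cache)) -gt $(cache-limit) ]; then \
                    tail -n $(cache-keep) $(make-cache) > $(make-cache).tmp; \
                    mv $(make-cache).tmp $(make-cache); \
            fi)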
2017-11-13kbuild: do not call cc-option before KBUILD_CFLAGS initializationMasahiro Yamada1-10/+11
Some $(call cc-option,...) are invoked very early, even before KBUILD_CFLAGS, etc. are initialized. The string returned by $(call cc-option,...) depends on KBUILD_CPPFLAGS, KBUILD_CFLAGS, and GCC_PLUGINS_CFLAGS. Since they are exported, they are not empty when the top Makefile is recursively invoked. The recursion occurs in several places: for example, the top Makefile invokes itself for silentoldconfig, and "make tinyconfig" and "make rpm-pkg" are such cases, too. In those cases, the second call of cc-option from the same line runs a different shell command due to the non-pristine KBUILD_CFLAGS. To get the same result every time, KBUILD_* and GCC_PLUGINS_CFLAGS must be initialized before any call of cc-option. This avoids garbage data in the .cache.mk file. Move all calls of cc-option below the config targets because target compiler flags are unnecessary for Kconfig. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Douglas Anderson <dianders@chromium.org>
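For context, a simplified form of cc-option (the real helper in scripts/Kbuild.include also filters out the GCC plugin flags): the probe is run with the current KBUILD_CPPFLAGS/KBUILD_CFLAGS, which is why non-pristine values change its result.

    cc-option = $(call try-run,\
            $(CC) -Werror $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))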
2017-11-13kbuild: Cache a few more calls to the compilerDouglas Anderson1-2/+2
These are a few stragglers that I left out of the original patch to cache calls to the C compiler ("kbuild: Add a cache for generated variables") because they bleed out into the main Makefile and thus uglify things a little bit. The idea is the same here, though. Signed-off-by: Douglas Anderson <dianders@chromium.org> Tested-by: Ingo Molnar <mingo@kernel.org> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-11-13kbuild: Add a cache for generated variablesDouglas Anderson2-14/+77
While timing a "no-op" build of the kernel (incrementally building the kernel even though nothing changed) in the Chrome OS build system I found that it was much slower than I expected. Digging into things a bit, I found that quite a bit of the time was spent invoking the C compiler even though we weren't actually building anything. Currently in the Chrome OS build system the C compiler is called through a number of wrappers (one of which is written in python!) and can take upwards of 100 ms to invoke even if we're not doing anything difficult, so these invocations of the compiler were taking a lot of time. Worse the invocations couldn't seem to take advantage of the multiple cores on my system. Certainly it seems like we could make the compiler invocations in the Chrome OS build system faster, but only to a point. Inherently invoking a program as big as a C compiler is a fairly heavy operation. Thus even if we can speed the compiler calls it made sense to track down what was happening. It turned out that all the compiler invocations were coming from usages like this in the kernel's Makefile: KBUILD_CFLAGS += $(call cc-option,-fno-delete-null-pointer-checks,) Due to the way cc-option and similar statements work the above contains an implicit call to the C compiler. ...and due to the fact that we're storing the result in KBUILD_CFLAGS, a simply expanded variable, the call will happen every time the Makefile is parsed, even if there are no users of KBUILD_CFLAGS. Rather than redoing this computation every time, it makes a lot of sense to cache the result of all of the Makefile's compiler calls just like we do when we compile a ".c" file to a ".o" file. Conceptually this is quite a simple idea. ...and since the calls to invoke the compiler and similar tools are centrally located in the Kbuild.include file this doesn't even need to be super invasive. Implementing the cache in a simple-to-use and efficient way is not quite as simple as it first sounds, though. To get maximum speed we really want the cache in a format that make can natively understand and make doesn't really have an ability to load/parse files. ...but make _can_ import other Makefiles, so the solution is to store the cache in Makefile format. This requires coming up with a valid/unique Makefile variable name for each value to be cached, but that's solvable with some cleverness. After this change, we'll automatically create a ".cache.mk" file that will contain our cached variables. We'll load this on each invocation of make and will avoid recomputing anything that's already in our cache. The cache is stored in a format that it shouldn't need any invalidation since anything that might change should affect the "key" and any old cached value won't be used. Signed-off-by: Douglas Anderson <dianders@chromium.org> Tested-by: Ingo Molnar <mingo@kernel.org> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-11-13kbuild: add forward declaration of default target to Makefile.asm-genericMasahiro Yamada1-0/+3
$(kbuild-file) and Kbuild.include are included before the default target "all". We will add a target into Kbuild.include. In advance, add a forward declaration of the default target. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Douglas Anderson <dianders@chromium.org>
2017-10-31kbuild: remove KBUILD_SUBDIR_ASFLAGS and KBUILD_SUBDIR_CCFLAGSMasahiro Yamada1-4/+4
Accumulate subdir-{cc,as}flags-y directly to KBUILD_{A,C}FLAGS. Remove KBUILD_SUBDIR_{AS,CC}FLAGS. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Cao jin <caoj.fnst@cn.fujitsu.com>
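The accumulation now happens directly in scripts/Makefile.lib, roughly:

    KBUILD_AFLAGS += $(subdir-asflags-y)
    KBUILD_CFLAGS += $(subdir-ccflags-y)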
2017-10-31hexagon/kbuild: replace CFLAGS_MODULE with KBUILD_CFLAGS_MODULECao jin1-3/+3
As the kbuild documentation and commit 6588169d51 say, KBUILD_CFLAGS_MODULE is used to add arch-specific options for $(CC); from the command line, CFLAGS_MODULE shall be used. This has no functional change; it just follows the kbuild rules. Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> CC: linux-hexagon@vger.kernel.org Acked-by: Richard Kuo <rkuo@codeaurora.org> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-31c6x/kbuild: replace CFLAGS_MODULE with KBUILD_CFLAGS_MODULECao jin1-1/+1
As the kbuild documentation and commit 6588169d51 say, KBUILD_CFLAGS_MODULE is used to add arch-specific options for $(CC); from the command line, CFLAGS_MODULE shall be used. This has no functional change; it just follows the kbuild rules. Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> CC: Mark Salter <msalter@redhat.com> CC: Aurelien Jacquiot <jacquiot.aurelien@gmail.com> CC: linux-c6x-dev@linux-c6x.org Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-31arm/kbuild: replace {C, LD}FLAGS_MODULE with KBUILD_{C, LD}FLAGS_MODULECao jin1-3/+3
As the kbuild documentation and commit 6588169d51 say, KBUILD_{C,LD}FLAGS_MODULE are used to add arch-specific options for $(CC) and $(LD); from the command line, {C,LD}FLAGS_MODULE shall be used. This has no functional change; it just follows the kbuild rules. Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> CC: Russell King <linux@armlinux.org.uk> CC: linux-arm-kernel@lists.infradead.org Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
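An illustration of the convention these three patches follow (the flag values here are hypothetical): arch Makefiles add their module-specific options to the KBUILD_ variants, leaving the un-prefixed variables free for the user.

    # arch/$(ARCH)/Makefile
    KBUILD_CFLAGS_MODULE  += -DARCH_MODULE_FLAG                   # hypothetical arch option
    KBUILD_LDFLAGS_MODULE += -T $(srctree)/arch/foo/module.lds    # hypothetical

    # the plain variables stay reserved for the command line, e.g.:
    #   make CFLAGS_MODULE=-DEXTRA_MODULE_DEBUG modules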
2017-10-26kbuild: comments cleanup in Makefile.libCao jin1-14/+7
This patch: 1. Moves comments closer to the code they describe. 2. Cleans up and improves the comments. Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-26kbuild: clang: remove crufty HOSTCFLAGSNick Desaulniers1-5/+0
When compiling with `make CC=clang HOSTCC=clang`, I was seeing warnings that clang did not recognize -fno-delete-null-pointer-checks for HOSTCC targets. These were added in commit 61163efae020 ("kbuild: LLVMLinux: Add Kbuild support for building kernel with Clang"). Clang does not support -fno-delete-null-pointer-checks, so adding it to HOSTCFLAGS if HOSTCC is clang does not make sense. It's not clear why the other warnings were disabled, and only for HOSTCFLAGS, but I can remove them, add -Werror to HOSTCFLAGS, and compile with clang just fine. Suggested-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Nick Desaulniers <nick.desaulniers@gmail.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-10kbuild: re-order the code to not parse unnecessary variablesMasahiro Yamada1-115/+118
The top Makefile is divided into some sections such as mixed targets, config targets, build targets, etc. When we build mixed targets, Kbuild just invokes submake to process them one by one. In this case, compiler-related variables like CC, KBUILD_CFLAGS, etc. are unneeded. Check what kind of targets we are building first, and parse variables for building only when necessary. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-10kbuild: move "_all" target out of $(KBUILD_SRC) conditionalMasahiro Yamada1-4/+4
The first "_all" occurrence around line 120 is only visible when KBUILD_SRC is unset. If O=... is specified, the working directory is relocated, then the only second occurrence around line 193 is visible, that is not set to PHONY. Move the first one to an always visible place. This clarifies "_all" is our default target and it is always set to PHONY. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Douglas Anderson <dianders@chromium.org>
2017-10-10kbuild: replace $(hdr-arch) with $(SRCARCH)Masahiro Yamada2-13/+10
Since commit 5e53879008b9 ("sparc,sparc64: unify Makefile"), hdr-arch and SRCARCH always match. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Douglas Anderson <dianders@chromium.org>
2017-10-09kbuild: mkcompile_h: do not create .versionMasahiro Yamada1-6/+1
This script does not need to create .version; it will be created by scripts/link-vmlinux.sh later. Clean-up the code slightly. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-09kbuild: link-vmlinux.sh: simplify .version incrementMasahiro Yamada2-11/+6
Since commit 1f2bfbd00e46 ("kbuild: link of vmlinux moved to a script"), it is easy to increment .version without using the temporary file .old_version. I do not see anything that creates .tmp_version; it is probably a left-over from commit 4e25d8bb9550fb ("[PATCH] kbuild: adjust .version updating"). Just remove it. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2017-10-09hexagon: get rid of #include <generated/compile.h>Masahiro Yamada1-3/+1
<generated/compile.h> is created (or updated) when Kbuild descends into the init/ directory. In parallel building from a pristine source tree, there is no guarantee <generated/compile.h> exists when arch/hexagon/kernel/ptrace.c is compiled. For hexagon architecture, we know UTS_MACHINE is a fixed string "hexagon", so let's hard-code it, like many architectures do. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Richard Kuo <rkuo@codeaurora.org>
2017-10-01Linux 4.14-rc3Linus Torvalds1-1/+1
2017-09-29fix infoleak in waitid(2)Al Viro1-13/+10
kernel_waitid() can return a PID, an error or 0. rusage is filled in the first case and waitid(2) rusage should've been copied out exactly in that case, *not* whenever kernel_waitid() has not returned an error. Compat variant shares that braino; none of kernel_wait4() callers do, so the below ought to fix it. Reported-and-tested-by: Alexander Potapenko <glider@google.com> Fixes: ce72a16fa705 ("wait4(2)/waitid(2): separate copying rusage to userland") Cc: stable@vger.kernel.org # v4.13 Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-09-29x86/asm: Use register variable to get stack pointer valueAndrey Ryabinin5-18/+7
Currently we use the current_stack_pointer() function to get the value of the stack pointer register. Since commit f5caf621ee35 ("x86/asm: Fix inline asm call constraints for Clang") we have a stack register variable declared. It can be used instead of the current_stack_pointer() function, which allows optimizing away some excessive "mov %rsp, %<dst>" instructions:

    -mov %rsp,%rdx
    -sub %rdx,%rax
    -cmp $0x3fff,%rax
    -ja ffffffff810722fd <ist_begin_non_atomic+0x2d>
    +sub %rsp,%rax
    +cmp $0x3fff,%rax
    +ja ffffffff810722fa <ist_begin_non_atomic+0x2a>

Remove current_stack_pointer(), rename __asm_call_sp to current_stack_pointer and use it instead of the removed function. Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20170929141537.29167-1-aryabinin@virtuozzo.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29x86/mm: Disable branch profiling in mem_encrypt.cTom Lendacky1-0/+2
Some routines in mem_encrypt.c are called very early in the boot process, e.g. sme_encrypt_kernel(). When CONFIG_TRACE_BRANCH_PROFILING=y is defined the resulting branch profiling associated with the check to see if SME is active results in a kernel crash. Disable branch profiling for mem_encrypt.c by defining DISABLE_BRANCH_PROFILING before including any header files. Reported-by: kernel test robot <lkp@01.org> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Acked-by: Borislav Petkov <bp@suse.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20170929162419.6016.53390.stgit@tlendack-t1.amdoffice.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29arm64: fault: Route pte translation faults via do_translation_faultWill Deacon1-1/+1
We currently route pte translation faults via do_page_fault, which elides the address check against TASK_SIZE before invoking the mm fault handling code. However, this can cause issues with the path walking code in conjunction with our word-at-a-time implementation because load_unaligned_zeropad can end up faulting in kernel space if it reads across a page boundary and runs into a page fault (e.g. by attempting to read from a guard region). In the case of such a fault, load_unaligned_zeropad has registered a fixup to shift the valid data and pad with zeroes, however the abort is reported as a level 3 translation fault and we dispatch it straight to do_page_fault, despite it being a kernel address. This results in calling a sleeping function from atomic context:

    BUG: sleeping function called from invalid context at arch/arm64/mm/fault.c:313
    in_atomic(): 0, irqs_disabled(): 0, pid: 10290
    Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
    [...]
    [<ffffff8e016cd0cc>] ___might_sleep+0x134/0x144
    [<ffffff8e016cd158>] __might_sleep+0x7c/0x8c
    [<ffffff8e016977f0>] do_page_fault+0x140/0x330
    [<ffffff8e01681328>] do_mem_abort+0x54/0xb0
    Exception stack(0xfffffffb20247a70 to 0xfffffffb20247ba0)
    [...]
    [<ffffff8e016844fc>] el1_da+0x18/0x78
    [<ffffff8e017f399c>] path_parentat+0x44/0x88
    [<ffffff8e017f4c9c>] filename_parentat+0x5c/0xd8
    [<ffffff8e017f5044>] filename_create+0x4c/0x128
    [<ffffff8e017f59e4>] SyS_mkdirat+0x50/0xc8
    [<ffffff8e01684e30>] el0_svc_naked+0x24/0x28
    Code: 36380080 d5384100 f9400800 9402566d (d4210000)
    ---[ end trace 2d01889f2bca9b9f ]---

Fix this by dispatching all translation faults to do_translation_fault, which avoids invoking the page fault logic for faults on kernel addresses. Cc: <stable@vger.kernel.org> Reported-by: Ankit Jain <ankijain@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-09-29arm64: mm: Use READ_ONCE when dereferencing pointer to pte tableWill Deacon1-1/+1
On kernels built with support for transparent huge pages, different CPUs can access the PMD concurrently due to e.g. fast GUP or page_vma_mapped_walk and they must take care to use READ_ONCE to avoid value tearing or caching of stale values by the compiler. Unfortunately, these functions call into our pgtable macros, which don't use READ_ONCE, and compiler caching has been observed to cause the following crash during ext4 writeback:

    PC is at check_pte+0x20/0x170
    LR is at page_vma_mapped_walk+0x2e0/0x540
    [...]
    Process doio (pid: 2463, stack limit = 0xffff00000f2e8000)
    Call trace:
    [<ffff000008233328>] check_pte+0x20/0x170
    [<ffff000008233758>] page_vma_mapped_walk+0x2e0/0x540
    [<ffff000008234adc>] page_mkclean_one+0xac/0x278
    [<ffff000008234d98>] rmap_walk_file+0xf0/0x238
    [<ffff000008236e74>] rmap_walk+0x64/0xa0
    [<ffff0000082370c8>] page_mkclean+0x90/0xa8
    [<ffff0000081f3c64>] clear_page_dirty_for_io+0x84/0x2a8
    [<ffff00000832f984>] mpage_submit_page+0x34/0x98
    [<ffff00000832fb4c>] mpage_process_page_bufs+0x164/0x170
    [<ffff00000832fc8c>] mpage_prepare_extent_to_map+0x134/0x2b8
    [<ffff00000833530c>] ext4_writepages+0x484/0xe30
    [<ffff0000081f6ab4>] do_writepages+0x44/0xe8
    [<ffff0000081e5bd4>] __filemap_fdatawrite_range+0xbc/0x110
    [<ffff0000081e5e68>] file_write_and_wait_range+0x48/0xd8
    [<ffff000008324310>] ext4_sync_file+0x80/0x4b8
    [<ffff0000082bd434>] vfs_fsync_range+0x64/0xc0
    [<ffff0000082332b4>] SyS_msync+0x194/0x1e8

This is because page_vma_mapped_walk loads the PMD twice before calling pte_offset_map: the first time without READ_ONCE (where it gets all zeroes due to a concurrent pmdp_invalidate) and the second time with READ_ONCE (where it sees a valid table pointer due to a concurrent pmd_populate). However, the compiler inlines everything and caches the first value in a register, which is subsequently used in pte_offset_phys which returns a junk pointer that is later dereferenced when attempting to access the relevant pte. This patch fixes the issue by using READ_ONCE in pte_offset_phys to ensure that a stale value is not used. Whilst this is a point fix for a known failure (and simple to backport), a full fix moving all of our page table accessors over to {READ,WRITE}_ONCE and consistently using READ_ONCE in page_vma_mapped_walk is in the works for a future kernel release. Cc: Jon Masters <jcm@redhat.com> Cc: Timur Tabi <timur@codeaurora.org> Cc: <stable@vger.kernel.org> Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()") Tested-by: Richard Ruigrok <rruigrok@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-09-29kvm/x86: Handle async PF in RCU read-side critical sectionsBoqun Feng1-1/+2
Sasha Levin reported a WARNING:

    | WARNING: CPU: 0 PID: 6974 at kernel/rcu/tree_plugin.h:329
    | rcu_preempt_note_context_switch kernel/rcu/tree_plugin.h:329 [inline]
    | WARNING: CPU: 0 PID: 6974 at kernel/rcu/tree_plugin.h:329
    | rcu_note_context_switch+0x16c/0x2210 kernel/rcu/tree.c:458
    ...
    | CPU: 0 PID: 6974 Comm: syz-fuzzer Not tainted 4.13.0-next-20170908+ #246
    | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
    | 1.10.1-1ubuntu1 04/01/2014
    | Call Trace:
    ...
    | RIP: 0010:rcu_preempt_note_context_switch kernel/rcu/tree_plugin.h:329 [inline]
    | RIP: 0010:rcu_note_context_switch+0x16c/0x2210 kernel/rcu/tree.c:458
    | RSP: 0018:ffff88003b2debc8 EFLAGS: 00010002
    | RAX: 0000000000000001 RBX: 1ffff1000765bd85 RCX: 0000000000000000
    | RDX: 1ffff100075d7882 RSI: ffffffffb5c7da20 RDI: ffff88003aebc410
    | RBP: ffff88003b2def30 R08: dffffc0000000000 R09: 0000000000000001
    | R10: 0000000000000000 R11: 0000000000000000 R12: ffff88003b2def08
    | R13: 0000000000000000 R14: ffff88003aebc040 R15: ffff88003aebc040
    | __schedule+0x201/0x2240 kernel/sched/core.c:3292
    | schedule+0x113/0x460 kernel/sched/core.c:3421
    | kvm_async_pf_task_wait+0x43f/0x940 arch/x86/kernel/kvm.c:158
    | do_async_page_fault+0x72/0x90 arch/x86/kernel/kvm.c:271
    | async_page_fault+0x22/0x30 arch/x86/entry/entry_64.S:1069
    | RIP: 0010:format_decode+0x240/0x830 lib/vsprintf.c:1996
    | RSP: 0018:ffff88003b2df520 EFLAGS: 00010283
    | RAX: 000000000000003f RBX: ffffffffb5d1e141 RCX: ffff88003b2df670
    | RDX: 0000000000000001 RSI: dffffc0000000000 RDI: ffffffffb5d1e140
    | RBP: ffff88003b2df560 R08: dffffc0000000000 R09: 0000000000000000
    | R10: ffff88003b2df718 R11: 0000000000000000 R12: ffff88003b2df5d8
    | R13: 0000000000000064 R14: ffffffffb5d1e140 R15: 0000000000000000
    | vsnprintf+0x173/0x1700 lib/vsprintf.c:2136
    | sprintf+0xbe/0xf0 lib/vsprintf.c:2386
    | proc_self_get_link+0xfb/0x1c0 fs/proc/self.c:23
    | get_link fs/namei.c:1047 [inline]
    | link_path_walk+0x1041/0x1490 fs/namei.c:2127
    ...

This happened when the host hit a page fault and delivered it as an async page fault while the guest was in an RCU read-side critical section. The guest then tries to reschedule in kvm_async_pf_task_wait(), but rcu_preempt_note_context_switch() would treat the reschedule as a sleep in an RCU read-side critical section, which is not allowed (even in preemptible RCU). Thus the WARN. To cure this, make kvm_async_pf_task_wait() go to the halt path if the PF happens in an RCU read-side critical section. Reported-by: Sasha Levin <levinsasha928@gmail.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-29KVM: nVMX: Fix nested #PF intends to break L1's vmlauch/vmresumeWanpeng Li1-1/+2
    ------------[ cut here ]------------
    WARNING: CPU: 4 PID: 5280 at /home/kernel/linux/arch/x86/kvm//vmx.c:11394 nested_vmx_vmexit+0xc2b/0xd70 [kvm_intel]
    CPU: 4 PID: 5280 Comm: qemu-system-x86 Tainted: G W OE 4.13.0+ #17
    RIP: 0010:nested_vmx_vmexit+0xc2b/0xd70 [kvm_intel]
    Call Trace:
     ? emulator_read_emulated+0x15/0x20 [kvm]
     ? segmented_read+0xae/0xf0 [kvm]
     vmx_inject_page_fault_nested+0x60/0x70 [kvm_intel]
     ? vmx_inject_page_fault_nested+0x60/0x70 [kvm_intel]
     x86_emulate_instruction+0x733/0x810 [kvm]
     vmx_handle_exit+0x2f4/0xda0 [kvm_intel]
     ? kvm_arch_vcpu_ioctl_run+0xd2f/0x1c60 [kvm]
     kvm_arch_vcpu_ioctl_run+0xdab/0x1c60 [kvm]
     ? kvm_arch_vcpu_load+0x62/0x230 [kvm]
     kvm_vcpu_ioctl+0x340/0x700 [kvm]
     ? kvm_vcpu_ioctl+0x340/0x700 [kvm]
     ? __fget+0xfc/0x210
     do_vfs_ioctl+0xa4/0x6a0
     ? __fget+0x11d/0x210
     SyS_ioctl+0x79/0x90
     entry_SYSCALL_64_fastpath+0x23/0xc2

A nested #PF is triggered while L0 is emulating an instruction for L2. However, the code does not consider that we should not break L1's vmlaunch/vmresume. This patch fixes it by queuing the #PF exception instead, requesting an immediate VM exit from L2 and keeping the exception for L1 pending for a subsequent nested VM exit. This should actually work all the time, making vmx_inject_page_fault_nested totally unnecessary. However, that's not working yet, so this patch can work around the issue in the meanwhile. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-09-29sched/sysctl: Check user input value of sysctl_sched_time_avgEthan Zhao1-1/+2
The system will hang if a user sets sysctl_sched_time_avg to 0:

    [root@XXX ~]# sysctl kernel.sched_time_avg_ms=0

    Stack traceback for pid 0
    0xffff883f6406c600 0 0 1 3 R 0xffff883f6406cf50 *swapper/3
    ffff883f7ccc3ae8 0000000000000018 ffffffff810c4dd0 0000000000000000
    0000000000017800 ffff883f7ccc3d78 0000000000000003 ffff883f7ccc3bf8
    ffffffff810c4fc9 ffff883f7ccc3c08 00000000810c5043 ffff883f7ccc3c08
    Call Trace:
    <IRQ> [<ffffffff810c4dd0>] ? update_group_capacity+0x110/0x200
    [<ffffffff810c4fc9>] ? update_sd_lb_stats+0x109/0x600
    [<ffffffff810c5507>] ? find_busiest_group+0x47/0x530
    [<ffffffff810c5b84>] ? load_balance+0x194/0x900
    [<ffffffff810ad5ca>] ? update_rq_clock.part.83+0x1a/0xe0
    [<ffffffff810c6d42>] ? rebalance_domains+0x152/0x290
    [<ffffffff810c6f5c>] ? run_rebalance_domains+0xdc/0x1d0
    [<ffffffff8108a75b>] ? __do_softirq+0xfb/0x320
    [<ffffffff8108ac85>] ? irq_exit+0x125/0x130
    [<ffffffff810b3a17>] ? scheduler_ipi+0x97/0x160
    [<ffffffff81052709>] ? smp_reschedule_interrupt+0x29/0x30
    [<ffffffff8173a1be>] ? reschedule_interrupt+0x6e/0x80
    <EOI> [<ffffffff815bc83c>] ? cpuidle_enter_state+0xcc/0x230
    [<ffffffff815bc80c>] ? cpuidle_enter_state+0x9c/0x230
    [<ffffffff815bc9d7>] ? cpuidle_enter+0x17/0x20
    [<ffffffff810cd6dc>] ? cpu_startup_entry+0x38c/0x420
    [<ffffffff81053373>] ? start_secondary+0x173/0x1e0

This is because a divide-by-zero error happens in:

    update_group_capacity()
      update_cpu_capacity()
        scale_rt_capacity()
        {
             ...
             total = sched_avg_period() + delta;
             used = div_u64(avg, total);
             ...
        }

To fix this issue, check the user input value of sysctl_sched_time_avg, keep it unchanged when hitting invalid input, and set the minimum limit of sysctl_sched_time_avg to 1 ms. Reported-by: James Puthukattukaran <james.puthukattukaran@oracle.com> Signed-off-by: Ethan Zhao <ethan.zhao@oracle.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: efault@gmx.de Cc: ethan.kernel@gmail.com Cc: keescook@chromium.org Cc: mcgrof@kernel.org Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/1504504774-18253-1-git-send-email-ethan.zhao@oracle.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29x86/asm: Fix inline asm call constraints for GCC 4.4Josh Poimboeuf1-2/+4
The kernel test bot (run by Xiaolong Ye) reported that the following commit: f5caf621ee35 ("x86/asm: Fix inline asm call constraints for Clang") is causing double faults in a kernel compiled with GCC 4.4. Linus subsequently diagnosed the crash pattern and the buggy commit and found that the issue is with this code:

    register unsigned int __asm_call_sp asm("esp");
    #define ASM_CALL_CONSTRAINT "+r" (__asm_call_sp)

Even on a 64-bit kernel, it's using ESP instead of RSP. That causes GCC to produce the following bogus code:

    ffffffff8147461d: 89 e0             mov %esp,%eax
    ffffffff8147461f: 4c 89 f7          mov %r14,%rdi
    ffffffff81474622: 4c 89 fe          mov %r15,%rsi
    ffffffff81474625: ba 20 00 00 00    mov $0x20,%edx
    ffffffff8147462a: 89 c4             mov %eax,%esp
    ffffffff8147462c: e8 bf 52 05 00    callq ffffffff814c98f0 <copy_user_generic_unrolled>

Despite the absurdity of it backing up and restoring the stack pointer for no reason, the bug is actually the fact that it's only backing up and restoring the lower 32 bits of the stack pointer. The upper 32 bits are getting cleared out, corrupting the stack pointer. So change the '__asm_call_sp' register variable to be associated with the actual full-size stack pointer. This also requires changing the __ASM_SEL() macro to be based on the actual compiled arch size, rather than the CONFIG value, because CONFIG_X86_64 compiles some files with '-m32' (e.g., realmode and vdso). Otherwise Clang fails to build the kernel because it complains about the use of a 64-bit register (RSP) in a 32-bit file. Reported-and-Bisected-and-Tested-by: kernel test robot <xiaolong.ye@intel.com> Diagnosed-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dmitriy Vyukov <dvyukov@google.com> Cc: LKP <lkp@01.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matthias Kaehlcke <mka@chromium.org> Cc: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: f5caf621ee35 ("x86/asm: Fix inline asm call constraints for Clang") Link: http://lkml.kernel.org/r/20170928215826.6sdpmwtkiydiytim@treble Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/debug: Add explicit TASK_PARKED printingPeter Zijlstra3-11/+10
Currently TASK_PARKED is masqueraded as TASK_INTERRUPTIBLE, give it its own print state because it will not in fact get woken by regular wakeups and is a long-term state. This requires moving TASK_PARKED into the TASK_REPORT mask, and since that latter needs to be a contiguous bitmask, we need to shuffle the bits around a bit. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/debug: Ignore TASK_IDLE for SysRq-WPeter Zijlstra1-1/+23
Markus reported that tasks in TASK_IDLE state are reported by SysRq-W, which results in undesirable clutter. Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/debug: Add explicit TASK_IDLE printingPeter Zijlstra3-13/+27
Markus reported that kthreads that idle using TASK_IDLE instead of TASK_INTERRUPTIBLE are reported as TASK_UNINTERRUPTIBLE, and things like htop mark those red. This is undesirable, so add an explicit state for TASK_IDLE. Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/tracing: Use common task-state helpersPeter Zijlstra3-21/+10
Remove yet another task-state char instance. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29locking/rwsem-xadd: Fix missed wakeup due to reordering of loadPrateek Sood1-0/+27
If a spinner is present, there is a chance that the load of rwsem_has_spinner() in rwsem_wake() can be reordered with respect to decrement of rwsem count in __up_write() leading to wakeup being missed: spinning writer up_write caller --------------- ----------------------- [S] osq_unlock() [L] osq spin_lock(wait_lock) sem->count=0xFFFFFFFF00000001 +0xFFFFFFFF00000000 count=sem->count MB sem->count=0xFFFFFFFE00000001 -0xFFFFFFFF00000001 spin_trylock(wait_lock) return rwsem_try_write_lock(count) spin_unlock(wait_lock) schedule() Reordering of atomic_long_sub_return_release() in __up_write() and rwsem_has_spinner() in rwsem_wake() can cause missing of wakeup in up_write() context. In spinning writer, sem->count and local variable count is 0XFFFFFFFE00000001. It would result in rwsem_try_write_lock() failing to acquire rwsem and spinning writer going to sleep in rwsem_down_write_failed(). The smp_rmb() will make sure that the spinner state is consulted after sem->count is updated in up_write context. Signed-off-by: Prateek Sood <prsood@codeaurora.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dave@stgolabs.net Cc: longman@redhat.com Cc: parri.andrea@gmail.com Cc: sramana@codeaurora.org Link: http://lkml.kernel.org/r/1504794658-15397-1-git-send-email-prsood@codeaurora.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/tracing: Fix trace_sched_switch task-state printingPeter Zijlstra2-8/+12
Convert trace_sched_switch to use the common task-state helpers and fix the "X" and "Z" order, possibly they ended up in the wrong order because TASK_REPORT has them in the wrong order too. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/debug: Remove unused variablePeter Zijlstra1-2/+0
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/debug: Convert TASK_state to hexPeter Zijlstra1-14/+14
Bit patterns are easier in hex. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29sched/debug: Implement consistent task-state printingPeter Zijlstra2-20/+21
Currently get_task_state() and task_state_to_char() report different states, create a number of common helpers and unify the reported state space. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-29um/time: Fixup namespace collisionThomas Gleixner1-2/+2
The new timer_setup() function for struct timer_list collides with a private um function. Rename it. Fixes: 686fef928bba ("timer: Prepare to change timer callback argument type") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Richard Weinberger <richard@nod.at> Cc: Jeff Dike <jdike@addtoit.com> Cc: user-mode-linux-devel@lists.sourceforge.net Cc: Kees Cook <keescook@chromium.org>
2017-09-29perf/aux: Only update ->aux_wakeup in non-overwrite modeAlexander Shishkin1-5/+15
The following commit: d9a50b0256 ("perf/aux: Ensure aux_wakeup represents most recent wakeup index") changed the AUX wakeup position calculation to rounddown(), which causes a division-by-zero in AUX overwrite mode (aka "snapshot mode"). The zero denominator results from the fact that perf record doesn't set aux_watermark to anything, in which case the kernel will set it to half the AUX buffer size, but only for non-overwrite mode; in overwrite mode, aux_watermark stays zero. The good news is that, in AUX overwrite mode, wakeups don't happen and the related bookkeeping is not relevant, so we can simply forgo the wakeup updates entirely. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: will.deacon@arm.com Link: http://lkml.kernel.org/r/20170906160811.16510-1-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-28Revert "Bluetooth: Add option for disabling legacy ioctl interfaces"Linus Torvalds2-16/+0
This reverts commit dbbccdc4ced015cdd4051299bd87fbe0254ad351. It turns out that the "legacy" users aren't so legacy at all, and that turning off the legacy ioctl will break the current Qt bluetooth stack for bluetooth LE devices that were released just a couple of months ago. So it's simply not true that this was a legacy interface that hasn't been needed and is only limited to old legacy BT devices. Because I actually read Kconfig help messages, and actively try to turn off features that I don't need, I turned the option off. Then I spent _way_ too much time debugging BLE issues until I realized that it wasn't the Qt and subsurface development that had broken one of my dive computer BLE downloads, but simply my broken kernel config. Maybe in a decade it will be true that this is a legacy interface. And maybe with a better help text and correct dependencies, this kind of legacy removal might be acceptable. But as things are right now, both the commit message and the Kconfig help text were misleading, and the Kconfig option had the wrong dependencies. There's no reason to keep that broken Kconfig option in the tree. Cc: Marcel Holtmann <marcel@holtmann.org> Cc: Johan Hedberg <johan.hedberg@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-09-28perf test: Fix vmlinux failure on s390x part 2Thomas Richter2-22/+0
On s390x perf test 1 failed. It turned out that commit cf6383f73cf2 ("perf report: Fix kernel symbol adjustment for s390x") was incorrect. The previous implementation in dso__load_sym() is also suitable for s390x. Therefore this patch undoes commit cf6383f73cf2 Signed-off-by: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com> Cc: Zvonko Kosic <zvonko.kosic@de.ibm.com> Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Fixes: cf6383f73cf2 ("perf report: Fix kernel symbol adjustment for s390x") LPU-Reference: 20170915071404.58398-2-tmricht@linux.vnet.ibm.com Link: http://lkml.kernel.org/n/tip-v101o8k25vuja2ogosgf15yy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>