path: root/tools/testing/kunit/kunit_parser.py
2022-09-30  kunit: tool: make --raw_output=kunit (aka --raw_output) preserve leading spaces  [Daniel Latypov, 1 file, -4/+6]

With

  $ kunit.py run --raw_output=all ...

you get the raw output from the kernel, e.g. something like

  > TAP version 14
  > 1..26
  >     # Subtest: time_test_cases
  >     1..1
  >     ok 1 - time64_to_tm_test_date_range
  > ok 1 - time_test_cases

But with --raw_output=kunit, or equivalently --raw_output, you get

  > TAP version 14
  > 1..26
  > # Subtest: time_test_cases
  > 1..1
  > ok 1 - time64_to_tm_test_date_range
  > ok 1 - time_test_cases

It looks less readable in my opinion, and it also isn't "raw output."

This is due to sharing code with kunit_parser.py, which wants to strip leading whitespace since it uses anchored regexes. We could update the kunit_parser.py code to tolerate leading spaces, but this patch takes the easier way out and adds a bool flag.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
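For illustration only (this is not the patch's code, and the helper name and signature here are assumptions), a bool flag like that could thread through the line extraction roughly as follows:

  from typing import Iterable, Iterator

  def extract_tap_lines(kernel_output: Iterable[str], lstrip: bool = True) -> Iterator[str]:
      """Hypothetical sketch: yield KTAP lines, optionally keeping leading spaces."""
      for line in kernel_output:
          line = line.rstrip('\n')
          # The parser wants anchored lines; raw output wants them untouched.
          yield line.lstrip() if lstrip else line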
2022-07-07  kunit: tool: refactoring printing logic into kunit_printer.py  [Daniel Latypov, 1 file, -44/+19]

Context:
* kunit_kernel.py is importing kunit_parser.py just to use the print_with_timestamp() function
* the parser is directly printing to stdout, which will become an issue if we ever try to run multiple kernels in parallel

This patch introduces a kunit_printer.py file and migrates callers of kunit_parser.print_with_timestamp() to call kunit_printer.stdout.print_with_timestamp() instead.

Future changes: if we want to support showing results for parallel runs, we could then create new Printers that don't directly write to stdout and refactor the code to pass around these Printer objects.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
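As a rough illustration of the direction described above (not the actual contents of kunit_printer.py), a Printer object that wraps an output stream might look like this:

  import datetime
  import sys
  import typing

  class Printer:
      """Illustrative sketch: wraps a stream so output can be redirected per run."""
      def __init__(self, output: typing.IO[str] = sys.stdout) -> None:
          self._output = output

      def print_with_timestamp(self, message: str) -> None:
          timestamp = datetime.datetime.now().strftime('%H:%M:%S')
          print(f'[{timestamp}] {message}', file=self._output)

  # A module-level default, mirroring the kunit_printer.stdout usage above.
  stdout = Printer()

Parallel runs could then construct separate Printer instances writing to per-kernel buffers instead of sharing stdout.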
2022-05-16  kunit: tool: misc cleanups  [Daniel Latypov, 1 file, -20/+17]

This primarily comes from running pylint over kunit tool code and ignoring some warnings we don't care about. If we ever got a fully clean setup, we could add this to run_checks.py, but we're not there yet.

Fix things like:
* Drop unused imports
* check `is None`, not `== None` (see PEP 8)
* remove redundant parens around returns
* remove redundant `else` / convert `elif` to `if` where appropriate
* rename make_arch_qemuconfig() param to base_kunitconfig (this is the name used in the subclass, and it's a better one)
* kunit_tool_test: check the exit code for SystemExit (could be 0)

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: minor cosmetic cleanups in kunit_parser.py  [Daniel Latypov, 1 file, -54/+17]

There should be no behavioral changes from this patch.

This patch removes redundant comment text, inlines a function used in only one place, and makes other such minor tweaks.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: make parser stop overwriting status of suites w/ no_tests  [Daniel Latypov, 1 file, -2/+5]

Consider this invocation:

$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..2
ok 1 - suite
# Subtest: no_tests_suite
# catastrophic error!
not ok 1 - no_tests_suite
EOF

It will have a 0 exit code even though there's a "not ok".

Consider this one:

$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..2
ok 1 - suite
not ok 1 - no_tests_suite
EOF

It will have a non-zero exit code. Why? We have this line in kunit_parser.py:

  > parent_test = parse_test_header(lines, test)

where we have special handling when we see "# Subtest" and we ignore the explicitly reported "not ok 1" status!

Also, NO_TESTS at a suite-level only results in a non-zero status code when there's only one suite atm. This change is the minimal one to make sure we don't overwrite it.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-16  kunit: tool: remove dead parse_crash_in_log() logic  [Daniel Latypov, 1 file, -21/+0]

This logic depends on the kernel logging a message containing 'kunit test case crashed', but there is no corresponding logic to do so. This is likely a relic of the revision process KUnit initially went through when being upstreamed.

Delete it given:
1) it's been missing for years and likely won't get implemented
2) the parser has been moving to be a more general KTAP parser; kunit-only magic like this isn't how we'd want to implement it.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-12  kunit: tool: print clearer error message when there's no TAP output  [Daniel Latypov, 1 file, -1/+2]

Before:
  $ ./tools/testing/kunit/kunit.py parse /dev/null
  ...
  [ERROR] Test : invalid KTAP input!

After:
  $ ./tools/testing/kunit/kunit.py parse /dev/null
  ...
  [ERROR] Test <missing>: could not find any KTAP output!

This error message gets printed out when extract_tap_output() yielded no lines. So while it could be because of malformed KTAP output from KUnit, it could also be due to not having any KTAP output at all. Try and make the error message here more clear.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-05-12  kunit: tool: update test counts summary line format  [Daniel Latypov, 1 file, -5/+5]

Before:
  > Testing complete. Passed: 137, Failed: 0, Crashed: 0, Skipped: 36, Errors: 0

After:
  > Testing complete. Ran 173 tests: passed: 137, skipped: 36

Even with our current set of statuses, the output is a bit verbose. It could get worse in the future if we add more (e.g. timeout, kasan). Let's only print the relevant ones.

I had previously been sympathetic to the argument that always printing out all the statuses would make it easier to parse results. But now that we have commit acd8e8407b8f ("kunit: Print test statistics on failure"), there are test counts printed out in the raw output.

We don't currently print out an overall total across all suites, but it would be easy to add, if we see a need for that.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Co-developed-by: David Gow <davidgow@google.com>
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
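A hypothetical sketch of the "only print the relevant ones" idea (field names are illustrative, not the tool's actual data structures):

  from typing import Dict

  def summarize_counts(counts: Dict[str, int]) -> str:
      """Build a line like 'Ran 173 tests: passed: 137, skipped: 36'.

      Only statuses with non-zero counts are shown."""
      total = sum(counts.values())
      shown = ', '.join(f'{status}: {count}' for status, count in counts.items() if count)
      return f'Ran {total} tests: {shown}'

  # summarize_counts({'passed': 137, 'failed': 0, 'crashed': 0, 'skipped': 36, 'errors': 0})
  # -> 'Ran 173 tests: passed: 137, skipped: 36'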
2022-04-04  kunit: tool: Do not colorize output when redirected  [Kees Cook, 1 file, -0/+7]

Filling log files with color codes makes diffs and other comparisons difficult. Only emit vt100 codes when the stdout is a TTY.

Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: linux-kselftest@vger.kernel.org
Cc: kunit-dev@googlegroups.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
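A minimal sketch of that pattern (not the patch itself), using Python's standard isatty() check:

  import sys

  RED = '\033[31m'
  RESET = '\033[0m'

  def red(text: str) -> str:
      # Only emit vt100 color codes when stdout is an interactive terminal,
      # so redirected logs stay free of escape sequences.
      if sys.stdout.isatty():
          return RED + text + RESET
      return text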
2021-12-15  kunit: tool: fix newly introduced typechecker errors  [Daniel Latypov, 1 file, -2/+2]

After upgrading mypy and pytype from pip, we see 2 new errors when running ./tools/testing/kunit/run_checks.py.

Error #1: mypy and pytype
They now deduce that importlib.util.spec_from_file_location() can return None and note that we're not checking for this. We validate that the arch is valid (i.e. the file exists) beforehand. Add in an `assert spec is not None` to appease the checkers.

Error #2: pytype bug https://github.com/google/pytype/issues/1057
It doesn't like `from datetime import datetime`, specifically that a type shares a name with a module. We can work around this by either
* renaming the import or just using `import datetime`
* passing the new `--fix-module-collisions` flag to pytype.

We pick the first option for now because
* the flag is quite new, only in the 2021.11.29 release.
* I'd prefer if people can just run `pytype <file>`

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
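For context, the first fix looks something like this (illustrative only; the module name and path below are made up):

  import importlib.util

  # spec_from_file_location() is annotated as returning Optional[ModuleSpec],
  # so assert it is not None before using it.
  spec = importlib.util.spec_from_file_location('qemu_config', 'qemu_configs/x86_64.py')
  assert spec is not None
  config_module = importlib.util.module_from_spec(spec)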
2021-12-15  kunit: tool: delete kunit_parser.TestResult type  [Daniel Latypov, 1 file, -7/+3]

The `log` field is unused, and the `status` field is accessible via `test.status`. So it's simpler to just return the main `Test` object directly.

And since we're no longer returning a namedtuple, which has no type annotations, this hopefully means typecheckers are better equipped to find any errors.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: print parsed test results fully incrementally  [Daniel Latypov, 1 file, -6/+16]

With the parser rework [1] and run_kernel() rework [2], this allows the parser to print out test results incrementally.

Currently, that's held up by the fact that the LineStream eagerly pre-fetches the next line when you call pop(). This blocks parse_test_result() from returning until the line *after* the "ok 1 - test name" line is also printed.

One can see this with the following example:

  $ (echo -e 'TAP version 14\n1..3\nok 1 - fake test'; sleep 2; echo -e 'ok 2 - fake test 2'; sleep 3; echo -e 'ok 3 - fake test 3') | ./tools/testing/kunit/kunit.py parse

Before this patch [1]: there's a pause before 'fake test' is printed.
After this patch: 'fake test' is printed out immediately.

This patch also adds
* a unit test to verify LineStream's behavior directly
* a test case to ensure that it's lazily calling the generator
* an explicit exception for when users go beyond EOF

[1] https://lore.kernel.org/linux-kselftest/20211006170049.106852-1-dlatypov@google.com/
[2] https://lore.kernel.org/linux-kselftest/20211005011340.2826268-1-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
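To illustrate the laziness being described, here is a hypothetical stand-in (not the tool's actual LineStream) where pop() deliberately does not pre-fetch:

  from typing import Iterator, Optional

  class LazyLineStream:
      """Sketch: nothing is pulled from the generator until peek()/pop() need it."""
      def __init__(self, lines: Iterator[str]) -> None:
          self._lines = lines
          self._cached: Optional[str] = None
          self._done = False

      def _fetch(self) -> None:
          if self._cached is None and not self._done:
              try:
                  self._cached = next(self._lines)
              except StopIteration:
                  self._done = True

      def peek(self) -> str:
          self._fetch()
          if self._cached is None:
              raise ValueError('LazyLineStream: ran past end of input')
          return self._cached

      def pop(self) -> str:
          line = self.peek()
          self._cached = None  # deliberately do NOT pre-fetch the next line here
          return line

With this shape, a result line can be reported as soon as it is popped, even if the producer has not emitted the following line yet.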
2021-12-13  kunit: tool: Report an error if any test has no subtests  [David Gow, 1 file, -5/+11]

It's possible for a test to have a subtest header, but zero valid subtests. We used to error on this if the test plan had no subtests listed, but it's possible to have subtests without a test plan (indeed, this is how parameterised tests work).

Tests with 0 subtests now have the result NO_TESTS, and will report an error (which does not halt test execution, but is printed in a scary red colour and is noted in the results summary).

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-12-13  kunit: tool: Do not error on tests without test plans  [David Gow, 1 file, -3/+2]

The (K)TAP spec encourages test output to begin with a 'test plan': a count of the number of tests being run, of the form:

  1..n

However, some test suites might not know the number of subtests in advance (for example, KUnit's parameterised tests use a generator function). In this case, it's not possible to print the test plan in advance.

kunit_tool already parses test output which doesn't contain a plan, but reports an error. Since we want to use nested subtests with KUnit parameterised tests, remove this error.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-10-19  kunit: tool: improve compatibility of kunit_parser with KTAP specification  [Rae Moar, 1 file, -313/+702]

Update to kunit_parser to improve compatibility with KTAP specification including arbitrarily nested tests. Patch accomplishes three major changes:

- Use a general Test object to represent all tests rather than TestCase and TestSuite objects. This allows for easier implementation of arbitrary levels of nested tests and promotes the idea that both test suites and test cases are tests.

- Print errors incrementally rather than all at once after the parsing finishes to maximize information given to the user in the case of the parser given invalid input and to increase the helpfulness of the timestamps given during printing. Note that kunit.py parse does not print incrementally yet. However, this fix brings us closer to this feature.

- Increase compatibility for different formats of input. Arbitrary levels of nested tests supported. Also, test cases and test suites are now supported to be present on the same level of testing.

This patch now implements the draft KTAP specification here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqaJk+r-K1YJzPggFDQ@mail.gmail.com/
We'll update the parser as the spec evolves.

This patch adjusts the kunit_tool_test.py file to check for the correct outputs from the new parser and adds a new test to check the parsing for a KTAP result log with correct format for multiple nested subtests (test_is_test_passed-all_passed_nested.log).

This patch also alters the kunit_json.py file to allow for arbitrarily nested tests.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
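A condensed sketch of the "one Test object for everything" idea (names and fields here are illustrative assumptions, not the parser's exact classes):

  from __future__ import annotations
  import enum
  from typing import List

  class TestStatus(enum.Enum):
      # Illustrative subset of possible statuses.
      SUCCESS = enum.auto()
      FAILURE = enum.auto()
      SKIPPED = enum.auto()

  class Test:
      """One node type for suites and cases alike, allowing arbitrary nesting."""
      def __init__(self, name: str = '') -> None:
          self.name = name
          self.status = TestStatus.SUCCESS
          self.subtests: List[Test] = []

      def aggregate_status(self) -> TestStatus:
          # A failure anywhere in the subtree fails the parent test.
          for sub in self.subtests:
              if sub.aggregate_status() == TestStatus.FAILURE:
                  return TestStatus.FAILURE
          return self.status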
2021-08-13  kunit: Print test statistics on failure  [David Gow, 1 file, -1/+1]

When a number of tests fail, it can be useful to get higher-level statistics of how many tests are failing (or how many parameters are failing in parameterised tests), and in what cases or suites. This is already done by some non-KUnit tests, so add support for automatically generating these for KUnit tests.

This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one subtest (new default)
- 2: Always print test statistics

For parameterised tests, the summary line looks as follows:
" # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"

For test suites, there are two lines looking like this:
"# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
"# Totals: pass:16 fail:0 skip:0 total:16"

The first line gives the number of direct subtests, the second "Totals" line is the accumulated sum of all tests and test parameters. This format is based on the one used by kselftest[1].

[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kselftest.h#L109

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-08-13  kunit: tool: make --raw_output support only showing kunit output  [Daniel Latypov, 1 file, -4/+0]

--raw_output is nice, but it would be nicer if it could show only output after KUnit tests have started.

So change the flag to allow specifying a string ('kunit'). Make it so `--raw_output` alone will default to `--raw_output=all` and have the same original behavior.

Drop the small kunit_parser.raw_output() function since it feels wrong to put it in "kunit_parser.py" when the point of it is to not parse anything.

E.g.

  $ ./tools/testing/kunit/kunit.py run --raw_output=kunit
  ...
  [15:24:07] Starting KUnit Kernel ...
  TAP version 14
  1..1
  # Subtest: example
  1..3
  # example_simple_test: initializing
  ok 1 - example_simple_test
  # example_skip_test: initializing
  # example_skip_test: You should not see a line below.
  ok 2 - example_skip_test # SKIP this test should be skipped
  # example_mark_skipped_test: initializing
  # example_mark_skipped_test: You should see a line below.
  # example_mark_skipped_test: You should see this line.
  ok 3 - example_mark_skipped_test # SKIP this test should be skipped
  ok 1 - example
  [15:24:10] Elapsed time: 6.487s total, 0.001s configuring, 3.510s building, 0.000s running

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
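A hypothetical sketch of "only show output after KUnit tests have started" (not the patch's actual implementation; the function name is made up):

  from typing import Iterable, Iterator

  def kunit_output_only(lines: Iterable[str]) -> Iterator[str]:
      """Drop kernel boot noise until the KUnit TAP header appears."""
      started = False
      for line in lines:
          if not started and 'TAP version' in line:
              started = True
          if started:
              yield line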
2021-07-12  kunit: tool: Fix error messages for cases of no tests and wrong TAP header  [Rae Moar, 1 file, -2/+4]

This patch addresses misleading error messages reported by kunit_tool in two cases.

First, in the case of TAP output having an incorrect header format or missing a header, the parser used to output an error message of 'no tests run!'. Now the parser outputs an error message of 'could not parse test results!'.

As an example:

Before:
  $ ./tools/testing/kunit/kunit.py parse /dev/null
  [ERROR] no tests run!
  ...

After:
  $ ./tools/testing/kunit/kunit.py parse /dev/null
  [ERROR] could not parse test results!
  ...

Second, in the case of TAP output with the correct header but no tests, the parser used to output an error message of 'could not parse test results!'. Now the parser outputs an error message of 'no tests run!'.

As an example:

Before:
  $ echo -e 'TAP version 14\n1..0' | ./tools/testing/kunit/kunit.py parse
  [ERROR] could not parse test results!

After:
  $ echo -e 'TAP version 14\n1..0' | ./tools/testing/kunit/kunit.py parse
  [ERROR] no tests run!

Additionally, this patch also corrects the tests in kunit_tool_test.py and adds a test to check the error in the case of TAP output with the correct header but no tests.

Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-25  kunit: tool: Support skipped tests in kunit_tool  [David Gow, 1 file, -24/+53]

Add support for the SKIP directive to kunit_tool's TAP parser. Skipped tests now show up as such in the printed summary.

The number of skipped tests is counted, and if all tests in a suite are skipped, the suite is also marked as skipped. Otherwise, skipped tests do not affect the suite result.

Example output:

  [00:22:34] ======== [SKIPPED] example_skip ========
  [00:22:34] [SKIPPED] example_skip_test # SKIP this test should be skipped
  [00:22:34] [SKIPPED] example_mark_skipped_test # SKIP this test should be skipped
  [00:22:34] ============================================================
  [00:22:34] Testing complete. 2 tests run. 0 failed. 0 crashed. 2 skipped.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
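For illustration (not the parser's actual regex), a (K)TAP result line with an optional SKIP directive can be matched roughly like this:

  import re

  # e.g. "ok 2 - example_skip_test # SKIP this test should be skipped"
  RESULT_LINE = re.compile(r'^(ok|not ok) (\d+) - (.*?)( # SKIP ?(.*))?$')

  match = RESULT_LINE.match('ok 2 - example_skip_test # SKIP this test should be skipped')
  if match:
      skipped = match.group(4) is not None
      print(match.group(3), 'skipped:', skipped)  # example_skip_test skipped: True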
2021-06-25  kunit: tool: internal refactor of parser input handling  [Daniel Latypov, 1 file, -47/+89]

Note: this does not change the parser behavior at all (except for making one error message more useful). This is just an internal refactor.

The TAP output parser currently operates over a List[str]. This works, but we only ever need to be able to "peek" at the current line and to "pop" it off. Also, using a List means we need to wait for all the output before we can start parsing. While this is not an issue for most tests which are really lightweight, we do have some longer (~5 minutes) tests.

This patch introduces a LineStream wrapper class that
* Exposes a peek()/pop() interface instead of manipulating an array
  * this allows us to more easily add debugging code [1]
* Can consume an input from a generator
  * we can now parse results as tests are running (the parser code currently doesn't print until the end, so no impact yet).
* Tracks the current line number to print better error messages
* Would allow us to add additional features more easily, e.g. storing N previous lines so we can print out invalid lines in context, etc.

[1] The parsing logic is currently quite fragile. E.g. it'll often say the kernel "CRASHED" if there's something slightly wrong with the output format. When debugging a test that had some memory corruption issues, it resulted in very misleading errors from the parser.

Now we could easily add this to trace all the lines consumed and why:

  +import inspect
  ...
   def pop(self) -> str:
     n = self._next
  +  print(f'popping {n[0]}: {n[1].ljust(40, " ")}| caller={inspect.stack()[1].function}')

Example output:

  popping 77: TAP version 14 | caller=parse_tap_header
  popping 78: 1..1 | caller=parse_test_plan
  popping 79: # Subtest: kunit_executor_test | caller=parse_subtest_header
  popping 80: 1..2 | caller=parse_subtest_plan
  popping 81: ok 1 - parse_filter_test | caller=parse_ok_not_ok_test_case
  popping 82: ok 2 - filter_subsuite_test | caller=parse_ok_not_ok_test_case
  popping 83: ok 1 - kunit_executor_test | caller=parse_ok_not_ok_test_suite

If we introduce an invalid line, we can see the parser go down the wrong path:

  popping 77: TAP version 14 | caller=parse_tap_header
  popping 78: 1..1 | caller=parse_test_plan
  popping 79: # Subtest: kunit_executor_test | caller=parse_subtest_header
  popping 80: 1..2 | caller=parse_subtest_plan
  popping 81: 1..2 # this is invalid! | caller=parse_ok_not_ok_test_case
  popping 82: ok 1 - parse_filter_test | caller=parse_ok_not_ok_test_case
  popping 83: ok 2 - filter_subsuite_test | caller=parse_ok_not_ok_test_case
  popping 84: ok 1 - kunit_executor_test | caller=parse_ok_not_ok_test_case
  [ERROR] ran out of lines before end token

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Acked-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-06-11  kunit: Add 'kunit_shutdown' option  [David Gow, 1 file, -1/+1]

Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.

Signed-off-by: David Gow <davidgow@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Tested-By: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-01-15  kunit: tool: fix minor typing issue with None status  [Daniel Latypov, 1 file, -9/+8]

The code to handle aggregating statuses didn't check that the status actually got set to some non-None value.

Default the value to SUCCESS instead of adding a bunch of `is None` checks. This sorta follows the precedent in commit 3fc48259d525 ("kunit: Don't fail test suites if one of them is empty").

Also slightly simplify the code and add type annotations.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
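A rough sketch of the "default to SUCCESS" approach (illustrative names, not the tool's actual code):

  import enum
  from typing import List

  class TestStatus(enum.Enum):
      # Illustrative subset of statuses.
      SUCCESS = enum.auto()
      FAILURE = enum.auto()

  def bubble_up_status(statuses: List[TestStatus],
                       start: TestStatus = TestStatus.SUCCESS) -> TestStatus:
      """Aggregate child statuses; starting from SUCCESS avoids None checks."""
      result = start
      for status in statuses:
          if status == TestStatus.FAILURE:
              result = TestStatus.FAILURE
      return result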
2021-01-15  kunit: tool: surface and address more typing issues  [Daniel Latypov, 1 file, -23/+23]

The authors of this tool were more familiar with a different type-checker, https://github.com/google/pytype. That's open source, but mypy seems more prevalent (and runs faster). And unlike pytype, mypy doesn't try to infer types so it doesn't check unannotated functions.

So annotate ~all functions in kunit tool to increase type-checking coverage.

Note: per https://www.python.org/dev/peps/pep-0484/, `__init__()` should be annotated as `-> None`. Doing so makes mypy discover a number of new violations. Exclude main() since we reuse `request` for the different types of requests, which mypy isn't happy about.

This commit fixes all but one error, where `TestSuite.status` might be None.

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Tested-by: Brendan Higgins <brendanhiggins@google.com>
Acked-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2021-01-15  kunit: tool: Fix spelling of "diagnostic" in kunit_parser  [David Gow, 1 file, -12/+12]

Various helper functions were misspelling "diagnostic" in their names. It finally got annoying, so fix it.

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-12-01  kunit: kunit_tool: Correctly parse diagnostic messages  [David Gow, 1 file, -3/+4]

Currently, kunit_tool expects all diagnostic lines in test results to contain ": " somewhere, as both the subtest header and the crash report do. Fix this to accept any line starting with (minus indent) "# " as being a valid diagnostic line. This matches what the TAP spec[1] and the draft KTAP spec[2] are expecting.

[1]: http://testanything.org/tap-specification.html
[2]: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/T/

Signed-off-by: David Gow <davidgow@google.com>
Acked-by: Marco Elver <elver@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
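A minimal sketch of that rule (illustrative, not the tool's exact pattern): a diagnostic line is any line whose first non-indent characters are "# ", regardless of whether it contains ": ".

  import re

  DIAGNOSTIC = re.compile(r'^\s*# ')

  for line in ['# Subtest: example',
               '    # example_simple_test: initializing',
               'ok 1 - example']:
      print(bool(DIAGNOSTIC.match(line)), line)  # True, True, False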
2020-11-10  kunit: tool: fix extra trailing \n in raw + parsed test output  [Daniel Latypov, 1 file, -1/+2]

For simplicity, strip all trailing whitespace from parsed output. I imagine no one is printing out meaningful trailing whitespace via KUNIT_FAIL() or similar, and that if they are, they really shouldn't.

`isolate_kunit_output()` yielded lines with trailing \n, which results in artifacty output like this:

  $ ./tools/testing/kunit/kunit.py run
  [16:16:46] [FAILED] example_simple_test
  [16:16:46] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
  [16:16:46] Expected 1 + 1 == 3, but
  [16:16:46] 1 + 1 == 2
  [16:16:46] 3 == 3
  [16:16:46] not ok 1 - example_simple_test
  [16:16:46]

After this change:

  [16:16:46] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
  [16:16:46] Expected 1 + 1 == 3, but
  [16:16:46] 1 + 1 == 2
  [16:16:46] 3 == 3
  [16:16:46] not ok 1 - example_simple_test
  [16:16:46]

We should *not* be expecting lines to end with \n in kunit_tool_test.py for this reason. Do the same for `raw_output()` as well which suffers from the same issue.

This is a followup to [1], but rebased onto kunit-fixes to pick up the other raw_output() fix and fixes for kunit_tool_test.py.

[1] https://lore.kernel.org/linux-kselftest/20201020233219.4146059-1-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Tested-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-11-10  kunit: tool: fix pre-existing python type annotation errors  [Daniel Latypov, 1 file, -7/+7]

The code uses annotations, but they aren't accurate. Note that type checking in python is a separate process; running `kunit.py run` will not check and complain about invalid types at runtime.

Fix pre-existing issues found by running a type checker:

  $ mypy *.py

All but one of these were returning `None` without denoting this properly (via `Optional[Type]`).

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
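For illustration (a made-up helper, not the tool's code), the kind of fix described above looks like this:

  from typing import List, Optional

  # A function that may return None should advertise it with Optional[...]
  # instead of a bare return type; mypy flags the latter.
  def get_config_option(lines: List[str], name: str) -> Optional[str]:
      for line in lines:
          if line.startswith(name + '='):
              return line.split('=', 1)[1]
      return None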
2020-10-26  kunit: Don't fail test suites if one of them is empty  [Andy Shevchenko, 1 file, -1/+1]

Empty test suite is okay test suite. Don't fail the rest of the test suites if one of them is empty.

Fixes: 6ebf5866f2e8 ("kunit: tool: add Python wrappers for running KUnit tests")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-10-26  kunit: Fix kunit.py --raw_output option  [David Gow, 1 file, -1/+0]

Due to the raw_output() function in kunit_parser.py actually being a generator, it only runs if something reads the lines it returns. Since we no longer do that (parsing doesn't actually happen if raw_output is enabled), it was not printing anything.

Fixes: 45ba7a893ad8 ("kunit: kunit_tool: Separate out config/build/exec/parse")
Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Tested-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-10-09  kunit: test: add test plan to KUnit TAP format  [Brendan Higgins, 1 file, -14/+62]

TAP 14 allows an optional test plan to be emitted before the start of testing[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.

Link[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md#the-plan

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
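On the parsing side, a test plan line such as "1..26" can be recognized roughly like this (illustrative sketch, not the tool's actual code):

  import re
  from typing import Optional

  TEST_PLAN = re.compile(r'^1\.\.([0-9]+)$')

  def parse_test_plan(line: str) -> Optional[int]:
      """Return the expected number of tests, or None if this isn't a plan line."""
      match = TEST_PLAN.match(line.strip())
      return int(match.group(1)) if match else None

  print(parse_test_plan('1..26'))           # 26
  print(parse_test_plan('ok 1 - example'))  # None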
2020-06-26  kunit: show error if kunit results are not present  [Uriel Guajardo, 1 file, -4/+4]

Currently, if the kernel is configured incorrectly or if it crashes before any kunit tests are run, kunit finishes without error, reporting that 0 test cases were run.

To fix this, an error is shown when the tap header is not found, which indicates that kunit was not able to run at all.

Signed-off-by: Uriel Guajardo <urielguajardo@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-03-26  kunit: subtests should be indented 4 spaces according to TAP  [Alan Maguire, 1 file, -5/+5]

Introduce KUNIT_SUBTEST_INDENT macro which corresponds to 4-space indentation and KUNIT_SUBSUBTEST_INDENT macro which corresponds to 8-space indentation in line with TAP spec (e.g. see "Subtests" section of https://node-tap.org/tap-protocol/).

Use these macros in place of one or two tabs in strings to clarify why we are indenting.

Suggested-by: Frank Rowand <frowand.list@gmail.com>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Frank Rowand <frank.rowand@sony.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-03-20  kunit: Run all KUnit tests through allyesconfig  [Heidi Fahim, 1 file, -0/+1]

Implemented the functionality to run all KUnit tests through kunit_tool by specifying an --alltests flag, which builds UML with allyesconfig enabled, and consequently runs every KUnit test. A new function was added to kunit_kernel: make_allyesconfig.

Firstly, if --alltests is specified, kunit.py triggers build_um_kernel, which calls make_allyesconfig. This function calls the make command, disables the broken configs that would otherwise prevent UML from building, then starts the kernel with all possible configurations enabled. All stdout and stderr is sent to test.log, read from there, and then fed through kunit_parser to present the test results to the user. Also added a signal_handler in case kunit is interrupted while running.

Tested: Run under different conditions such as testing with --raw_output, testing program interrupt then immediately running kunit again without --alltests, and making sure to clean the console.

Signed-off-by: Heidi Fahim <heidifahim@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-03-20  kunit: kunit_parser: make parser more robust  [Heidi Fahim, 1 file, -20/+20]

Previously, kunit_parser did not properly handle kunit TAP output that
- had any prefixes (generated from different configs e.g. CONFIG_PRINTK_TIME)
- had unrelated kernel output mixed in the middle of it, which has shown up when testing with allyesconfig

To remove prefixes, the parser looks for the first line that includes TAP output, "TAP version 14". It then determines the length of the string before this sequence, and strips that number of characters off the beginning of the following lines until the last KUnit output line is reached.

These fixes have been tested with additional tests in the KUnitParseTest and their associated logs have also been added.

Signed-off-by: Heidi Fahim <heidifahim@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
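A sketch of the prefix-stripping approach described above (illustrative only; the function name and the assumption that the prefix width is constant are mine, not the patch's):

  from typing import Iterable, Iterator

  TAP_START = 'TAP version 14'

  def strip_prefixes(lines: Iterable[str]) -> Iterator[str]:
      """Once the TAP header is found, strip the same number of leading
      characters (e.g. printk timestamps) from the following lines."""
      prefix_len = None
      for line in lines:
          if prefix_len is None:
              idx = line.find(TAP_START)
              if idx < 0:
                  continue  # still in pre-test kernel output
              prefix_len = idx
          yield line[prefix_len:]

  for out in strip_prefixes(['[    0.01] boot noise',
                             '[    0.02] TAP version 14',
                             '[    0.03] 1..1',
                             '[    0.04] ok 1 - example']):
      print(out)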
2019-09-30  kunit: tool: add Python wrappers for running KUnit tests  [Felix Guo, 1 file, -0/+310]

The ultimate goal is to create minimal isolated test binaries; in the meantime we are using UML to provide the infrastructure to run tests, so define an abstract way to configure and run tests that allow us to change the context in which tests are built without affecting the user. This also makes pretty and dynamic error reporting, and a lot of other nice features easier.

kunit_config.py:
- parse .config and Kconfig files.

kunit_kernel.py: provides helper functions to:
- configure the kernel using kunitconfig.
- build the kernel with the appropriate configuration.
- provide function to invoke the kernel and stream the output back.

kunit_parser.py: parses raw logs returned out by kunit_kernel and displays them in a user friendly way.

test_data/*: samples of test data for testing kunit.py, kunit_config.py, etc.

Signed-off-by: Felix Guo <felixguoxiuping@gmail.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>