This allows control interface issues to be caught in a more readable
way in the debug logs. In addition, dump pending monitor socket
information more frequently and within each test case in the log files
to make the output clearer and less likely to go over the socket buffer
limit.
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
run-tests.py now takes an optional --long parameter that can be used to
enable running of test cases that take a long time (multiple minutes).
By default, such test cases are skipped to avoid making the normal test
run take excessive amounts of time.
As an initial long test case, verify WPS PBC walk time expiration (two
minutes).
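As a sketch, the option could be wired up with argparse roughly like
this (the long_duration marker and the surrounding loop are
hypothetical, shown only to illustrate the skip logic):

    import argparse

    def test_quick():
        pass

    def test_wps_pbc_timeout():
        pass
    test_wps_pbc_timeout.long_duration = True  # hypothetical marker

    parser = argparse.ArgumentParser()
    parser.add_argument('--long', action='store_true',
                        help='include test cases that take a long time')
    args = parser.parse_args()

    for t in [test_quick, test_wps_pbc_timeout]:
        if getattr(t, 'long_duration', False) and not args.long:
            print("SKIP %s (long test case)" % t.__name__)
            continue
        t()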
Signed-off-by: Jouni Malinen <j@w1.fi>
"parallel-vm.sh <number of VMs> [arguments..]" can now be used to run
multiple VMs in parallel to speed up full test cycle significantly. In
addition, the "--split srv/total" argument used in this design would
also make it possible to split this to multiple servers to speed up
testing.
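The split itself can be plain list slicing; a minimal sketch, assuming
1-based server numbering as in "--split 2/4":

    def split_tests(tests, srv, total):
        # server srv of total takes every total-th test, starting at
        # index srv - 1
        return tests[srv - 1::total]

    tests = ['ap_open', 'ap_wpa2_psk', 'wps_pbc', 'p2p_go_neg']
    print(split_tests(tests, 2, 4))  # -> ['ap_wpa2_psk']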
Signed-off-by: Jouni Malinen <j@w1.fi>
The optional third argument to the test case functions can now be used
to receive additional parameters from run-tests.py. As the initial
parameter, the logdir value is provided so that test cases can use it to
review the debug logs from the test run.
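In practice, a test case can declare the third parameter and pick the
log directory up from it; a sketch (the 'log0' file name is just an
example of a log that could be inspected):

    import os

    def test_example(dev, apdev, params):
        """Example test case using the optional third argument"""
        logdir = params['logdir']
        with open(os.path.join(logdir, 'log0'), 'r') as f:
            data = f.read()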
Signed-off-by: Jouni Malinen <j@w1.fi>
If trace-cmd command does not exist, run-tests.py could end up hanging
in a loop waiting for input. Fix this simply by checking whether the
trace-cmd command can be executed successfully and exiting the script if
not.
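The check can be a simple attempt to run the binary; a sketch (using
"trace-cmd list" as an arbitrary cheap subcommand):

    import subprocess
    import sys

    try:
        with open('/dev/null', 'w') as null:
            res = subprocess.call(['trace-cmd', 'list'],
                                  stdout=null, stderr=null)
    except OSError:
        # binary not found or not executable
        res = -1
    if res != 0:
        sys.exit("trace-cmd not available - cannot set up tracing")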
Signed-off-by: Eduardo Abinader <eduardo.abinader@openbossa.org>
Both the output file path and the current working directory included
the log directory, and this failed if the log directory was not
absolute (e.g., when using the default logs/current when a VM is not
used).
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
This makes the script check the environment for the python interpreter
in use instead of assuming that the python executable points
to a python 2 interpreter.
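In other words, the shebang changes along these lines:

    #!/usr/bin/env python2
    # ...instead of a hard-coded path such as #!/usr/bin/python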
Signed-off-by: Roger Zanoni <roger.zanoni@openbossa.org>
This makes it easier to build a web page for analyzing failures without
having to fetch the log files themselves from the test server.
Signed-hostap: Jouni Malinen <j@w1.fi>
It was previously not obvious from the <test case>.log file that a test
case was marked failed based on kernel issues. Make this very clear to
avoid wasting time on figuring out what caused the failure.
Signed-hostap: Jouni Malinen <j@w1.fi>
There's no reason to format the failed tests as a Python list; just
print a (space-separated) list of test names.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Catch exceptions from operations that try to remove the hostapd
interface and rename the log file. If these operations fail due to
socket connection issues, hostapd has likely died or gotten stuck
somewhere. Report the test case as a failure and stop the test run
cleanly.
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
This can be used to request the previously used default behavior where
the devices are not stopped at the end of a test case if a single test
case is run.
Signed-hostap: Jouni Malinen <j@w1.fi>
Remove the -l command line option from run-tests.py and always enable
writing of debug level logs to files. The stdout debug verbosity is
controlled independently of the debug log files.
Signed-hostap: Jouni Malinen <j@w1.fi>
This optional argument can be used to randomize the order in which the
test cases are run. This can provide more coverage for testing
interactions of common use cases in various sequences. Such
issues have already been found even with the fixed order of test cases,
but being able to reorder the tests makes this more efficient.
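The randomization itself can be a single shuffle of the selected test
list before the run loop; a minimal sketch:

    import random

    tests = ['ap_open', 'wps_pbc', 'p2p_go_neg', 'scan_random']
    random.shuffle(tests)  # same set of tests, random execution order
    for name in tests:
        print("START " + name)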
Signed-hostap: Jouni Malinen <j@w1.fi>
In some cases, e.g., with the VM tests if the VM crashes, it
can be useful to know which tests should have run but didn't
(or didn't finish). In order to catch these more easily, add
an option to prefill the database with all tests at the very
beginning of the testing (in a new NOTRUN state) and use the
option in the VM tests.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Create a results.db in the output directory when running
the tests in a VM. To make that easier, create the tables
in the python script if they don't exist.
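A sketch combining the on-demand table creation with the NOTRUN
prefill described above (the schema and column names here are
assumptions for illustration, not necessarily what results.db uses):

    import sqlite3

    conn = sqlite3.connect('results.db')
    conn.execute("CREATE TABLE IF NOT EXISTS results "
                 "(test TEXT, result TEXT, run INTEGER)")
    # prefill each selected test in the NOTRUN state; anything still
    # NOTRUN afterwards never started or never finished
    run = 1
    for name in ['ap_open', 'wps_pbc']:
        conn.execute("INSERT INTO results VALUES(?, 'NOTRUN', ?)",
                     (name, run))
    conn.commit()
    # later, once a real result is known:
    conn.execute("UPDATE results SET result=? WHERE test=? AND run=?",
                 ('PASS', 'ap_open', run))
    conn.commit()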
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Refactor the test reporting to treat the different results
(success/skip/failure) identically. This makes the timing
seem a bit longer, but cleans up the code, which will allow
for adding more checks (e.g., on the captured data files)
later.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
test_ap_bss_add_remove verifies hostapd behavior when BSSes are
added/removed in a multi-BSS configuration.
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
This allows run-tests.py to use the same logs/<date> default logdir as
start.sh which is quite convenient for manual test runs.
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
The run-tests.py -l argument does not take an argument value anymore.
Instead, debug output is directed to a separate file <test>.log for each
test case.
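One way to produce a separate <test>.log per test case is to swap a
logging FileHandler in and out around each test; a sketch (handler
details are illustrative):

    import logging
    import os

    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)

    def run_with_log(test, logdir):
        # direct debug output for this test case to its own file
        handler = logging.FileHandler(
            os.path.join(logdir, test.__name__ + '.log'))
        logger.addHandler(handler)
        try:
            test()
        finally:
            logger.removeHandler(handler)
            handler.close()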
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
This is unnecessary extra complexity for the user, so use the 'test_' prefix
only internally within the python scripts and file names.
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
This is unnecessary extra complexity for the user and for reports, so use the
'test_' prefix only internally within the python scripts.
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
Commit 781b65cfbb ended up accidentally
changing this from an integer to a string. Fix this by not converting
the variable into a string.
Signed-hostap: Jouni Malinen <jouni@qca.qualcomm.com>
Resetting at the beginning causes the logging/tracing data to leak
from the previous test into the next, and the data from the last test
to be missed entirely - reset at the end of each run instead. Also
reset before all tests, just in case running a test actually crashed
the python script.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
In addition to tracing, allow collecting dmesg. There's no
provision for actually looking at it and finding problems
in it yet, though.
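Collecting the kernel log per test could look roughly like this
("dmesg -c" reads and then clears the ring buffer, so each file only
covers one test; it needs sufficient privileges, e.g., root in the VM):

    import subprocess

    def collect_dmesg(outfile):
        with open(outfile, 'w') as f:
            subprocess.call(['dmesg', '-c'], stdout=f)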
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Instead of passing the log directory for each option
(-l, -r, -e, and -T), pass it once and make the other
options take just the filename (which may even be optional).
This will also make it easier to extend later.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
In order to get tracing per test, allow run-tests.py to start
and stop tracing per test case. This is implemented using a
python 'with' context so it starts/stops automatically at the
right spots.
Instead of starting global tracing, also use it from run-all.sh
and put the trace files into the log dir.
Note that this only works right if you use a separate log dir
for all test runs as the trace files aren't timestamped.
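The context manager could be as simple as starting "trace-cmd record"
in __enter__ and stopping it in __exit__; a sketch (the event selection
and file naming are illustrative):

    import os
    import signal
    import subprocess

    class TraceCollector(object):
        def __init__(self, logdir, testname):
            self.outfile = os.path.join(logdir, testname + '.dat')

        def __enter__(self):
            self.proc = subprocess.Popen(
                ['trace-cmd', 'record', '-o', self.outfile,
                 '-e', 'mac80211'],
                stderr=open('/dev/null', 'w'))
            return self

        def __exit__(self, exc_type, exc_value, tb):
            # trace-cmd record stops and writes its data on SIGINT
            self.proc.send_signal(signal.SIGINT)
            self.proc.wait()

Used as "with TraceCollector(logdir, name): run_one_test()", tracing
starts and stops at exactly the right spots.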
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Reuse the code rather than duplicating the implementation
of starting the tests. To make that easier, allow passing
multiple modules with -f to run-tests.py.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Instead of re-implementing a command-line parser, use the
argparse module.
The only real change (I hope) is that the test module must
now be given to the -f option without the .py suffix.
Also, --help now works, and if a test module/test name is
given that doesn't exist, the valid list is printed.
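The relevant argparse pieces could look like this (the option names
follow the ones mentioned in this log; everything else is
illustrative):

    import argparse

    parser = argparse.ArgumentParser(description='hwsim test runner')
    parser.add_argument('-f', dest='testmodules', nargs='+',
                        metavar='<test module>',
                        help='test modules to run (without .py suffix)')
    parser.add_argument('tests', metavar='<test>', nargs='*',
                        help='tests to run (by default, all)')
    args = parser.parse_args()

With this, --help comes for free, and unknown options are reported
consistently by argparse itself.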
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Since the scripts must be run from the source directory to
find the tests to run, they can use a relative path to the
wpaspy module instead of requiring it to be installed.
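That is, instead of importing an installed copy, the scripts can
extend sys.path relative to their own location; a sketch, assuming a
layout where wpaspy lives two directories up from the test scripts:

    import os
    import sys

    sys.path.append(os.path.join(os.path.dirname(__file__),
                                 '..', '..', 'wpaspy'))
    import wpaspy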
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
"run-tests.py -S <db file> -L" can now be used to update a database
table with the current set of test cases and their descriptions.
Signed-hostap: Jouni Malinen <j@w1.fi>
There is no point trying to go through a test case if the NOTE command
to write the TEST-START entry does not succeed. This avoids some excessive
waits on buildbot trying to forcefully kill the programs on its timeout
if wpa_supplicant gets stuck waiting for something (like the current
issue with libnl events and commands having a chance of hitting a
blocking wait on netlink messages).
Signed-hostap: Jouni Malinen <j@w1.fi>
This can be used by test cases that depend on some external component
that may not always be available to indicate clearly that a test case
was skipped rather than passed or failed.
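One possible convention is for a test case to return a distinct 'skip'
value that the runner maps to a SKIP result instead of PASS/FAIL; a
sketch (the marker value and the dependency check are assumptions):

    import os.path

    def test_eap_sim(dev, apdev):
        # depends on an external HLR/AuC gateway; skip cleanly if the
        # component is not available
        if not os.path.exists('/tmp/hlr_auc_gw.sock'):
            return 'skip'
        # ... actual test steps would go here ...

    res = test_eap_sim(None, None)
    print('SKIP' if res == 'skip' else 'PASS')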
Signed-hostap: Jouni Malinen <j@w1.fi>
This makes it easier to figure out what failed and allows buildbot to
integrate multiple logs and state information about the test cases.
Signed-hostap: Jouni Malinen <j@w1.fi>
print and logger.info() were directing output to different locations
(stdout and stderr, respectively) which resulted in buildbot showing
reordered entries. Use logger consistently to avoid that.
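A consistent setup routes all run-time status through the logging
module; a minimal sketch:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger()
    logger.info("START ap_open")  # instead of: print "START ap_open"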
Signed-hostap: Jouni Malinen <j@w1.fi>
It looks like the NOTE commands can time out in some cases. Avoid
stopping the test run in such a case to get more coverage if this is a
temporary issue.
Signed-hostap: Jouni Malinen <j@w1.fi>
This allows a more consistent interface to be used regardless of which
P2P driver design is used (especially for P2P management operations
over netdev vs. dedicated P2P_DEVICE).
Signed-hostap: Jouni Malinen <j@w1.fi>
Stopping the AP first was not ideal for the test cases, since it could
result in wpa_supplicant trying to connect back and starting a scan at
the end of a test case. That could cause problems for the following
test case if it tried to scan at the beginning while the previously
started scan was still in progress.
Signed-hostap: Jouni Malinen <j@w1.fi>
Do not print the potentially long list of passed test cases. In case of
failure(s), make sure the failed test list is the last item in the
report.
Signed-hostap: Jouni Malinen <j@w1.fi>
This removes the unnecessary separation of P2P (no hostapd) and AP
tests. The same scripts can be used to prepare for these tests and to
execute the tests.
Signed-hostap: Jouni Malinen <j@w1.fi>