Check for MESH-PEER-CONNECTED from dev[1] before reporting MGMT-RX
timeout errors from dev[0]. This avoids false failures in case the short
0.01 s timeout at the end of the loop was not long enough to catch the
message.
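A minimal sketch of the check, using the standard hwsim dev[] fixtures
(the 5 s timeout here is illustrative, not the value from the patch):

def check_peering_before_mgmt_rx_errors(dev):
    # Wait for the peering event on dev[1] before treating a missing
    # MGMT-RX event on dev[0] as a real failure.
    ev = dev[1].wait_event(["MESH-PEER-CONNECTED"], timeout=5)
    if ev is None:
        raise Exception("dev[1] did not establish a mesh peering")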
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
This verifies that the Accepting Additional Mesh Peerings field is being
cleared properly when the maximum peer links count is reached.
Signed-off-by: Jouni Malinen <j@w1.fi>
Start using the wpa_supplicant remote UDP interface for the control and
monitor sockets for P2P group interfaces so that P2P tests would work on
real hardware. Also make the group requests and events show up in the
test log with the hostname and the interface name of the group interface.
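For illustration, a remote wpa_supplicant UDP control port can be
exercised directly as below; the host address is an assumption, and the
real tests use the wpaspy helper rather than raw sockets:

import socket

addr = ("192.168.254.58", 9877)  # assumed host; 9877 is the default UDP ctrl port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
# The UDP control interface hands out a cookie that must prefix commands
sock.sendto(b"GET_COOKIE", addr)
cookie, _ = sock.recvfrom(4096)
sock.sendto(cookie + b" PING", addr)
print(sock.recvfrom(4096)[0])  # expect b'PONG\n'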
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
In some places the code passed an additional argument to setup_hw after
-R, and in others it passed the setup_hw command line as a string
instead of an argument list. It also treated ifname as a single
interface in some places, disregarding the possibility of multiple
interfaces. This commit fixes these issues and executes setup_hw from a
single function for all cases.
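A hedged sketch of the consolidated helper; the function name, the
host.execute API, and the exact setup_hw arguments are illustrative:

def run_setup_hw(host, setup_params, restart=False):
    cmd = setup_params['setup_hw'].split()  # argument list, not one string
    if restart:
        cmd = cmd + ["-R"]
    for ifname in host.ifname.split():  # support multiple interfaces
        host.execute(cmd + ["-I", ifname])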
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
Use a monitor interface given on the command line, one that is not also
a station or an AP, to run wlantest on the channel used by the test.
This makes all the tests that use wlantest available for execution on
real hardware on remote hosts.
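Roughly, putting a spare interface into monitor mode on the test channel
could look like this (host.execute is the assumed remote-command helper):

def start_monitor(host, mon_ifname, freq):
    host.execute(["ip", "link", "set", mon_ifname, "down"])
    host.execute(["iw", "dev", mon_ifname, "set", "type", "monitor"])
    host.execute(["ip", "link", "set", mon_ifname, "up"])
    host.execute(["iw", "dev", mon_ifname, "set", "freq", str(freq)])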
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
In monitor.py in the remote tests code there is a function create()
that creates standalone monitor interfaces. This function iterates over
the interfaces of the host using the ifaces variable, but this variable
does not exist. This patch defines the variable before its use.
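An illustrative form of the fix (the exact source of the interface list
may differ in the actual patch):

def create(host):
    ifaces = host.ifname.split()  # previously undefined at this point
    for ifname in ifaces:
        host.execute(["iw", "dev", ifname, "set", "type", "monitor"])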
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
This function is used for remote tests when a monitor interface is
needed on the channel on which the AP operates. This change enables us
to also query P2P interfaces for the channel information to use for
monitor interfaces.
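A sketch of the idea, assuming both AP and P2P group interfaces report
their operating frequency in their STATUS output:

def get_monitor_freq(dev):
    status = dev.get_status()
    return int(status['freq'])  # frequency to use for the monitor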
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
Instead of accessing the logs list member of the remote host directly,
use a function to add logs to the remote host to be collected after the
test. This enables us to later have different implementations of remote
hosts or log collection without requiring this list to be part of the
implementation.
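The accessor could be as simple as the following sketch (class and
attribute names assumed):

class Host:
    def __init__(self):
        self.logs = []

    def add_log(self, log_file):
        # Callers use this instead of touching self.logs directly, so
        # the storage or collection mechanism can change later.
        self.logs.append(log_file)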
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
The regular hwsim tests use both unicast and broadcast frames to test
the connectivity between two interfaces. On real hardware (remote hwsim
tests) the broadcast frames will sometimes not be seen by all connected
stations, since stations can be in low-power mode during DTIM or
because broadcast frames are not ACKed. Use 10 retries for broadcast
connectivity tests on real hardware so that the test passes if at least
one of the frames is received successfully.
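A sketch of the retry idea (helper names are illustrative):

def check_bcast_connectivity(run_bcast_test, retries=10):
    for i in range(retries):
        try:
            run_bcast_test()  # raises if the broadcast frame was not seen
            return
        except Exception:
            if i + 1 == retries:
                raise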
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
This is needed to work with the tls_openssl.c changes that renamed the
function that is used for deriving the EAP-FAST keys.
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Commit afb2e8b891 ('tests: Store P2P
Device ifname in class WpaSupplicant') did not take into account the
possibility of capa.flags not existing in get_driver_status() and broke
WEXT test cases. Fix this by checking that capa.flags is present before
looking at its value.
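The shape of the fix, sketched with an illustrative helper name:

def get_driver_flags(dev):
    drv = dev.get_driver_status()
    if 'capa.flags' not in drv:  # e.g., WEXT does not report this field
        return 0
    return int(drv['capa.flags'], 0)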
Signed-off-by: Jouni Malinen <j@w1.fi>
This allows running hwsim test cases.
DUTs go to apdev while refs go to dev.
So far I have tested:
./run-tests.py -d hwsim0 -r hwsim1 -h ap_open -h dfs
./run-tests.py -r hwsim0 -r hwsim1 -h ibss_open -v
./run-tests.py -r hwsim0 -r hwsim1 -r hwsim2 -d hwsim3 -d hwsim4 -h ap_vht80 -v
./run-tests.py -r hwsim0 -r hwsim1 -r hwsim2 -d hwsim3 -d hwsim4 -h all -k ap -k vht
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
This is a simple example showing how to write a test case.
modprobe mac80211_hwsim radios=4
run example:
./run-tests.py -d hwsim0 -r hwsim1 -t example
run example with monitors:
./run-tests.py -d hwsim0 -r hwsim1 -t example -m all -m hwsim2
run example with trace record:
./run-tests.py -d hwsim0 -r hwsim1 -t example -T
run example with trace and perf:
./run-tests.py -d hwsim0 -r hwsim1 -t example -T -P
restart hw before test case run:
./run-tests.py -d hwsim0 -r hwsim1 -t example -R
run example verbose:
./run-tests.py -d hwsim0 -r hwsim1 -t example -v
For perf/trace you need to write your own hw-specific scripts:
trace_start.sh, trace_stop.sh
perf_start.sh, perf_stop.sh
In any case you will find logs in the logs/current/ directory.
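A minimal sketch of such a test case, using the parameter names
described in the framework (exact details may differ):

def test_example(devices, setup_params, refs, duts, monitors):
    """Example test case"""
    for name in duts + refs:
        print("device in use: " + name)
    return "example finished\n"  # optional append_text for the report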
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Add monitor support. This supports monitors added to the current
interfaces as well as a standalone monitor with multiple interfaces.
This makes it possible to capture logs from different channels at the
same time into one pcap file.
An example t3-monitor entry is added to the config.py file, as sketched
below.
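An illustrative standalone monitor entry (hostname and interface names
are placeholders):

devices = [
    {"hostname": "192.168.254.1", "ifname": "wlan0 wlan1",
     "port": "", "name": "t3-monitor"},
]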
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Add tests/remote directory and files:
config.py - handle devices/setup_params table
run-tests.py - run test cases
test_devices.py - run basic configuration tests
You can add your own configuration file (by default cfg.py) and define
devices and setup_params there in the format found in the config.py
file. You can use the -c option or just create a cfg.py file, as in the
sketch below.
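An illustrative cfg.py following that format; hostnames, paths, and
flags are placeholders:

setup_params = {"setup_hw": "./tests/setup_hw.sh"}

devices = [
    {"hostname": "192.168.254.2", "ifname": "wlan0", "port": "9877",
     "name": "t1-dut", "flags": "AP_HT40 STA_HT40"},
    {"hostname": "192.168.254.3", "ifname": "wlan0", "port": "9877",
     "name": "t2-ref", "flags": "STA_HT40"},
]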
Print available devices/test_cases:
./run-tests.py
Check devices (ssh connection, authorized_keys, interfaces):
./run-tests.py -t devices
Run sanity tests (test_sanity_*):
./run-tests.py -d <dut_name> -t sanity
Run all tests:
./run-tests.py -d <dut_name> -t all
Run test_A and test_B:
./run-tests.py -d <dut_name> -t "test_A, test_B"
Set reference device, and run sanity tests:
./run-tests.py -d <dut_name> -r <ref_name> -t sanity
Multiple DUTs/refs/monitors can be set up, e.g.:
./run-tests.py -d <dut_name> -r <ref1_name> -r <ref2_name> -t sanity
A monitor can be set like this:
./run-tests.py -d <dut_name> -t sanity -m all -m <standalone_monitor>
You can also add filters for the tests you would like to run:
./run-tests.py -d <dut_name> -t all -k wep -k g_only
./run-tests.py -d <dut_name> -t all -k VHT80
./run-tests.py doesn't start/terminate wpa_supplicant or hostapd; the
test cases are responsible for that, since we don't know the test case
requirements in advance.
Restart (-R), trace (-T), and perf (-P) options are available. These
request trace/perf logs from the hosts (if possible).
Each test case gets the following parameters:
- devices - table of available devices
- setup_params
- duts - names of the DUTs that should be tested
- refs - names of the reference devices that should be used
- monitors - list of monitor names
Each test case can return append_text.
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>