Instead of accessing the logs list member of the remote host directly,
use a function to add logs to the remote host to be collected after the
test. This enables us to later have different implementations of remote
hosts or of log collection without requiring this list to be part of
the implementation.
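A minimal sketch of the idea (the class and attribute names here are
illustrative assumptions, not the exact hostap code):

    # Callers use add_log() instead of touching the log list directly,
    # so the storage behind it can change later.
    class Host:
        def __init__(self, name):
            self.name = name
            self.logs = []  # internal detail; do not access directly

        def add_log(self, log_file):
            # Register a log file to be collected after the test
            self.logs.append(log_file)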
Signed-off-by: Jonathan Afek <jonathanx.afek@intel.com>
This allows running hwsim test cases. DUTs go to apdev, while refs go
to dev.
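The mapping can be pictured along these lines (a sketch only; the
names and structures are assumptions, not the run-tests.py internals):

    # Hosts passed with -d become hwsim AP devices (apdev) and hosts
    # passed with -r become station devices (dev).
    def map_hosts(duts, refs):
        apdev = [{"hostname": name, "ifname": "wlan0"} for name in duts]
        dev = [{"hostname": name, "ifname": "wlan0"} for name in refs]
        return dev, apdev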
For now I have tested:
./run-tests.py -d hwsim0 -r hwsim1 -h ap_open -h dfs
./run-tests.py -r hwsim0 -r hwsim1 -h ibss_open -v
./run-tests.py -r hwsim0 -r hwsim1 -r hwsim2 -d hwsim3 -d hwsim4 -h ap_vht80 -v
./run-tests.py -r hwsim0 -r hwsim1 -r hwsim2 -d hwsim3 -d hwsim4 -h all -k ap -k vht
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
This is a simple example of how to write a test case; a minimal sketch
follows the setup command below.
modprobe mac80211_hwsim radios=4
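A minimal sketch of such a test case (illustrative only; the real
example in the tree may differ):

    # Hypothetical test_example.py: run-tests.py discovers test_*
    # functions and calls them with the standard parameters.
    def test_example(devices, setup_params, duts, refs, monitors):
        dut = duts[0]  # name of a device from the devices table
        ref = refs[0]
        # ... start hostapd on dut, connect ref, verify traffic ...
        # The returned string is the optional append_text for the report
        return "dut=" + dut + " ref=" + ref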
run example:
./run-tests.py -d hwsim0 -r hwsim1 -t example
run example with monitors:
./run-tests.py -d hwsim0 -r hwsim1 -t example -m all -m hwsim2
run example with trace record:
./run-tests.py -d hwsim0 -r hwsim1 -t example -T
run example with trace and perf:
./run-tests.py -d hwsim0 -r hwsim1 -t example -T -P
restart hw before test case run:
./run-tests.py -d hwsim0 -r hwsim1 -t example -R
run example verbose:
./run-tests.py -d hwsim0 -r hwsim1 -t example -v
For perf/trace you need to write your own hw-specific scripts:
trace_start.sh, trace_stop.sh
perf_start.sh, perf_stop.sh
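As an illustration only, such a pair of trace scripts could look like
this (the trace-cmd usage here is an assumption, not a requirement):

    #!/bin/sh
    # trace_start.sh: start capturing mac80211 trace events in the
    # background and remember the recorder PID.
    trace-cmd record -e mac80211 -o /tmp/trace.dat &
    echo $! > /tmp/trace.pid

    #!/bin/sh
    # trace_stop.sh: stop the recorder started by trace_start.sh.
    kill -INT $(cat /tmp/trace.pid)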
In any case you will find logs in the logs/current/ directory.
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Add monitor support. This supports monitors added to the current
interfaces, as well as a standalone monitor with multiple interfaces.
This makes it possible to capture logs from different channels at the
same time into one pcap file.
An example t3-monitor entry is added to the config.py file.
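As an illustration, such an entry could look like this (the field
names and values are assumptions; see config.py in the tree for the
real format):

    # Hypothetical devices entry for a standalone monitor host with two
    # interfaces, so two channels can be captured into one pcap file.
    devices = [
        {"hostname": "192.168.254.60", "ifname": "wlan0,wlan1",
         "name": "t3-monitor", "flags": "monitor"},
    ]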
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>
Add tests/remote directory and files:
config.py - handles the devices/setup_params tables
run-tests.py - runs test cases
test_devices.py - runs basic configuration tests
You can add your own configuration file (by default cfg.py) and put
the devices and setup_params definitions there, in the format you can
find in the config.py file. Use the -c option or just create a cfg.py
file.
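For illustration, a cfg.py could look roughly like this (field names
are assumptions; config.py in the tree is the authoritative format):

    # Hypothetical cfg.py: devices table plus setup_params definition.
    devices = [
        {"hostname": "192.168.1.10", "ifname": "wlan0",
         "name": "my-dut", "flags": ""},
        {"hostname": "192.168.1.11", "ifname": "wlan0",
         "name": "my-ref", "flags": ""},
    ]

    setup_params = {"log_dir": "/tmp/logs/"}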
Print available devices/test_cases:
./run-tests.py
Check devices (ssh connection, authorized_keys, interfaces):
./run-tests.py -t devices
Run sanity tests (test_sanity_*):
./run-tests.py -d <dut_name> -t sanity
Run all tests:
./run-tests.py -d <dut_name> -t all
Run test_A and test_B:
./run-tests.py -d <dut_name> -t "test_A, test_B"
Set reference device, and run sanity tests:
./run-tests.py -d <dut_name> -r <ref_name> -t sanity
Multiple DUTs/refs/monitors can be set up, e.g.:
./run-tests.py -d <dut_name> -r <ref1_name> -r <ref2_name> -t sanity
A monitor can be set like this:
./run-tests.py -d <dut_name> -t sanity -m all -m <standalone_monitor>
You can also add filters to select the tests you would like to run:
./run-tests.py -d <dut_name> -t all -k wep -k g_only
./run-tests.py -d <dut_name> -t all -k VHT80
./run-tests.py doesn't start/terminate wpa_supplicant or hostapd;
test cases are responsible for that, since we don't know the test
case requirements in advance.
Restart (-R), trace (-T), and perf (-P) options are available. These
request trace/perf logs from the hosts (if possible).
Each test case gets the following parameters:
- devices - table of available devices
- setup_params - test setup parameters
- duts - names of the DUTs to be tested
- refs - names of the reference devices to be used
- monitors - list of monitor names
Each test case may return append_text.
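For illustration, the calling convention amounts to something like
this (a sketch only, not the actual run-tests.py dispatch code):

    # Call a test case with the parameters above; it may return
    # append_text, which extends the reported result line.
    def run_one(test_case, devices, setup_params, duts, refs, monitors):
        append_text = test_case(devices, setup_params, duts, refs,
                                monitors)
        return append_text or ""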
Signed-off-by: Janusz Dziedzic <janusz.dziedzic@tieto.com>