Recently, qemu/seabios grew an annoying console/terminal reset, which
also leaves my terminal in a state where long lines don't work well
and less gets confused. Avoid this by suppressing all output from qemu
until a new magic string printed from inside.sh is seen.
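A minimal sketch of the idea (the marker string and the qemu
invocation are illustrative, not the actual ones used by the scripts):

    # swallow everything qemu/seabios prints until inside.sh emits the marker
    kvm -kernel "$KERNEL" -append "$APPEND" -nographic 2>&1 | \
        sed -n '/VM-START-MARKER/,$p'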
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This is useful when running a test multiple times, looking at
log output, etc., so that you don't have to pick out the right
directory each and every time.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
parallel-vm.py obsoleted this a long time ago and there is no need
to maintain two scripts for doing more or less the same thing.
Signed-off-by: Jouni Malinen <j@w1.fi>
The hwsim start.sh script spawns the hostapd process using "sudo".
Since sudo forks a child process, $! holds the pid of sudo itself.
Fix that by storing the PID of the child process instead.
Since in the VM "sudo" is replaced with a dummy script, pass an additional
argument to run-all.sh and start.sh scripts to indicate that they are
running inside a VM.
This is needed to fix ap_config_reload and ap_config_reload_file test
cases on some platforms where sudo is apparently not relaying the
signals properly.
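One possible way to do that (illustrative, not necessarily the exact
change) is to look up the child forked by sudo:

    # start hostapd via sudo in the background; $! is the PID of sudo itself
    sudo hostapd -ddKt "$CONFIG" > "$LOGFILE" 2>&1 &
    SUDO_PID=$!
    # the real hostapd process is the child that sudo forked
    HOSTAPD_PID=$(pgrep -P "$SUDO_PID")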
Signed-off-by: Andrei Otcheretianski <andrei.otcheretianski@intel.com>
mac80211_hwsim module typically dumps a lot of details into the kernel
message buffer. While it's probably okay in a dedicated VM, it's way too
chatty in other setups.
The kernel allows fine-tuning logging via the dynamic debugging
facility. Let's enable all logging locations in the mac80211_hwsim
module so that we don't lose debugging output when the kernel adopts
the dynamic debug mechanism for the driver.
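For reference, enabling all dynamic debug call sites in the module can
be done along these lines (requires debugfs to be mounted and dynamic
debug support in the kernel):

    echo 'module mac80211_hwsim +p' > /sys/kernel/debug/dynamic_debug/control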
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
I find myself writing a version of this script every now and
then, but there's little point in that - just add one to the
tree so we can use it again.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
We capture the dmesg that contains everything, but if a test
causes a kernel crash we will miss all logging at higher levels
like debug. Change the printk level to catch all of that too.
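For example, raising the console log level so that KERN_DEBUG messages
also reach the console:

    dmesg -n 8    # or: echo 8 > /proc/sys/kernel/printk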
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This kernel debugging option adds multiple seconds of extra latency to
interface removal operations. While this can be worked around by
increasing timeouts in a number of test cases, there does not seem to be
any clean way of working around this for the PMKSA caching test with
per-STA VIFs (e.g., pmksa_cache_preauth_vlan_used_per_sta_vif).
To avoid unnecessary test failures, remove CONFIG_DEBUG_KOBJECT_RELEASE
from the default config. If someone wants to test with this kernel debug
option, it can be enabled for custom kernel builds while understanding
that it can result in false failure reports and significantly extended
time needed to complete a full test run.
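For such custom builds, the option can simply be turned back on in the
kernel configuration:

    CONFIG_DEBUG_KOBJECT_RELEASE=y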
Signed-off-by: Jouni Malinen <j@w1.fi>
This is useful for now since the IPv6 support for proxyarp is not yet
included in the upstream kernel. This allows the IPv4 test cases to pass
with the current upstream kernel while allowing the IPv6 test cases to
report SKIP instead of FAIL.
Signed-off-by: Jouni Malinen <j@w1.fi>
This describes example steps for setting up the VM testing environment
with parallel VMs on Ubuntu Server 16.04.1.
Signed-off-by: Jouni Malinen <j@w1.fi>
This file is used only by hostapd_cli and wpa_cli, and neither of those
is currently included in code coverage reporting. Avoid lowering the
coverage numbers with code that cannot be reached because it is not
included in the programs that are covered.
Signed-off-by: Jouni Malinen <j@w1.fi>
This reverts commit be9fe3d8af. While I
did manage to complete multiple test runs without failures, it looks
like this change increases full test run duration by about 30 seconds
when using seven VMs. The most visible reason for that seems to be in
"breaking" active scanning quite frequently with the Probe Response
frame coming out about 40 ms (or more) after the Probe Request frame,
which is long enough for the station to already have left the channel.
Since this logging change is not critical, it is simplest to revert it
for now rather than make changes to a huge number of test cases to allow
more scan attempts to be performed before timing out.
Signed-off-by: Jouni Malinen <j@w1.fi>
When running tests, make printk put all messages, including debug
messages, onto the serial console so that they end up in the console file.
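A sketch of the kind of kernel command line parameters that achieve
this (the exact parameters used by the scripts may differ):

    console=ttyS0 loglevel=8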
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This is useful for testing CRDA since it means you can use EPATH to
redirect the test scripts to a different crda binary.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Now that vm-run.sh supports a long list of test cases without crashing
the VM kernel, there is no need to use the "parallel-vm.py -1 1 <tests>"
workaround. Print the re-run example commands with vm-run.sh instead. In
addition, add the --long argument if it was specified for the test run
to avoid skipping test cases in the re-run case.
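In other words, a failed run can now be repeated with something
roughly like this (test case names illustrative):

    ./vm-run.sh --long ap_open ap_wps_init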
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
The script is currently limited by the maximum kernel command line
length, and if that's exceeded, the kernel panics at boot. Fix this by
writing the arguments to a file and reading it in the VM.
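A rough sketch of the approach (file name and in-VM path are
illustrative):

    # on the host: write the long argument list into the shared log directory
    echo "$TESTARGS" > "$LOGDIR/vm-run-args"
    # in the VM: inside.sh reads it back instead of the kernel command line
    TESTARGS=$(cat /tmp/logs/vm-run-args)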
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
If /tmp has a relatively small size limit, or multiple people run the
tests on the same machine, using the same output directory can easily
cause problems.
Make the test framework honor the new HWSIM_TEST_LOG_DIR environment
variable to make it easier to avoid those problems.
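For example, to keep the logs under a per-user directory (path
illustrative):

    export HWSIM_TEST_LOG_DIR=/var/tmp/$USER/hwsim-logs
    ./vm-run.sh ap_open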
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This adds hwsim test ap_vlan_iface_cleanup_multibss. It connects two
stations in different BSSs but the same hostapd process. First both
stations are in VLAN 1, then they get reauthenticated into VLAN 2. Due
to the ordering of the stations moving around, this test checks that
bridge and tagged interface reference counting is done globally, such
that the tagged interface is not removed too early and no bridge is
left over.
Signed-off-by: Michael Braun <michael-dev@fami-braun.de>
It looks like it was possible to receive an incomplete FAIL line and
break out from test execution due to a parsing error. Handle this more
robustly and log the error.
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
The 'params' argument was not used at all. Use it as an alternative
means for setting the list of test cases to execute.
Signed-off-by: Jouni Malinen <j@w1.fi>
This can be used to filter out test cases that take a significantly
longer time to execute (15 seconds or more). While this reduces testing
coverage, it can be useful for getting reasonably broad coverage in
significantly less time.
Signed-off-by: Jouni Malinen <j@w1.fi>
It's somewhat annoying that you can only run parallel-vm.py as
./parallel-vm.py, not from elsewhere by giving the full path,
so fix that by resolving the paths correctly in the scripts where
needed.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Instead of hand-writing a (positional) parser, use the argparse module.
This also gets us nice help output.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
It looks like the 128M default memory size for the hwsim test setup was
not large enough to cover all the needs anymore. Some of the test cases
using tshark could hit OOM with that size. Increase the default
allocation to 192M to avoid this type of issue.
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
This allows automated testing of the wpa_supplicant D-Bus interface. The
instance controlling wlan0 registers with D-Bus if dbus-daemon was
started successfully. This is only used in VM testing, i.e., not when
run-tests.sh is used on the host system with D-Bus running for normal
system purposes.
Signed-off-by: Jouni Malinen <j@w1.fi>
wpa_cli and hostapd_cli are not currently tested for code coverage, so
filter the files specific to those components away from the code
coverage reports. *_module_tests.c are not included in normal builds, so
drop them as well. In addition, drop the system header file (byteswap.h)
that somehow gets unnecessarily included in the reports for a couple of
lines.
Signed-off-by: Jouni Malinen <j@w1.fi>
It was possible for the separate builds to not include
wpa_cli/hostapd_cli in the default location. Make sure hostapd_cli gets
built for --codecov cases and update both WPACLI and HAPDCLI paths to
match the alternative location.
Signed-off-by: Jouni Malinen <j@w1.fi>
It was possible for the scr.addstr() operations to fail and terminate
parallel-vm.py if the number of failed test cases increased beyond what
fits on the screen.
Signed-off-by: Jouni Malinen <j@w1.fi>
Merge partial lines together before processing them in parallel-vm.py.
This avoids issues in cases where the stdout read gets split into pieces
that do not include the full READY/PASS/FAIL/SKIP information. In
addition, strip unnecessary whitespace (mainly, '\r') from the log
lines.
Signed-off-by: Jouni Malinen <j@w1.fi>
parallel-vm.log is now written with details of test execution steps and
results. This makes it easier to debug if something goes wrong in VM
monitoring. The --debug option can be used to enable verbose debugging.
Signed-off-by: Jouni Malinen <j@w1.fi>
parallel-vm.py is now retrying failed cases once at the end of the run.
If all the failed test cases passed on the second attempt, that is noted
in the summary output. The result is also indicated in the exit value
from the run: 0 = all test cases passed on the first attempt, 1 = some
cases failed once but everything passed after one retry, 2 = some cases
did not succeed at all.
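For example, a wrapper script could act on the exit value like this
(the number of VMs is illustrative):

    ./parallel-vm.py 8
    case $? in
        0) echo "all test cases passed on the first attempt" ;;
        1) echo "some test cases failed once but passed on the retry" ;;
        2) echo "some test cases did not succeed at all" ;;
    esac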
Signed-off-by: Jouni Malinen <j@w1.fi>
This adds the remaining test cases that took more than 15 seconds to run
into the list of test cases to run at the beginning of the execution to
avoid these being left at the end when only some of the VMs may be
running.
Signed-off-by: Jouni Malinen <j@w1.fi>
These test cases had a long 120-second wait for the GO Negotiation
initiator to time out. This can be done using two devices in parallel to
save two minutes from the total test execution time.
Signed-off-by: Jouni Malinen <j@w1.fi>
Wait one second between each kvm start to avoid a large number of
processes trying to start in parallel. This allows the VMs to be started
more efficiently for parallel-vm.py runs with a large number of VMs.
Signed-off-by: Jouni Malinen <j@w1.fi>
Move test cases with long duration to the beginning as an optimization
to avoid the last part of the test execution running a long-duration test
case on a single VM while all other VMs have already completed their
work.
Signed-off-by: Jouni Malinen <j@w1.fi>
Now there is only one set of commands to maintain. The separate reports
for individual components have not been of much use in the past, so they
are dropped for now.
Signed-off-by: Jouni Malinen <j@w1.fi>
This allows the code coverage report to be generated much faster with the
help of parallel VMs executing test cases.
Signed-off-by: Jouni Malinen <j@w1.fi>
There is no need to pass the test case names to the VMs when using
parallel-vm.py. Removing those from the command line helps avoid a
kernel panic if the limit on the number of kernel parameters is hit.
Signed-off-by: Jouni Malinen <j@w1.fi>
Both the wpa_supplicant and kernel configuration need wext to
run the wext test case; enable those in the default/example
configurations.
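A sketch of the relevant options, assuming the WEXT driver wrapper is
what the test case exercises (the exact option set may differ):

    # wpa_supplicant build configuration (.config)
    CONFIG_DRIVER_WEXT=y
    # kernel configuration for the test kernel
    CONFIG_CFG80211_WEXT=y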
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
These wpa_supplicant tests include basic tests for:
- Mesh scan
- Mesh group add/remove
- Mesh peer connected/disconnected
- Add/Set/Remove to test mesh mode network
- Open mesh connectivity test
- Secure mesh connectivity test
- no_auto_peer
Signed-off-by: Jason Mobarak <x@jason.mobarak.name>
[no_auto_peer test by: Javier Cardona <javier@cozybit.com>]
Signed-off-by: Javier Lopez <jlopex@gmail.com>
This allows all VMs to be used at the end of a test sequence by
assigning test cases to VMs based on which VM is available for a new
test case rather than splitting the full task at the beginning and
potentially getting stuck with the last VM running long test cases for
significantly longer than another VM that gets shorter duration tests
assigned to it.
Signed-off-by: Jouni Malinen <j@w1.fi>
Previously, it was possible for a kernel panic to be missed since the
only sign of it in stdout was a reduced number of passed test cases.
Signed-off-by: Jouni Malinen <j@w1.fi>
This avoids possible mismatches in directory and log file timestamps if
the UNIX timestamp (seconds) changes during the startup sequence.
Signed-off-by: Jouni Malinen <j@w1.fi>
This was breaking parallel-run.*, as it was passing
--split num/num parameters (intended for run-tests.py)
to vm-run.sh, which broke the --codecov and --timewarp options.
Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Update the code coverage documentation to also specify the
source base directory for the code coverage generation.
Signed-off-by: Ilan Peer <ilan.peer@intel.com>
When loading the hwsim module, disable support_p2p_device by default.
This will also become the default in the kernel, but until then it
makes sure it's not turned on by default.
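For example (the number of radios is illustrative):

    modprobe mac80211_hwsim radios=3 support_p2p_device=0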
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This is a more advanced version of the simple parallel-vm.sh script.
Status of each VM is printed out during the test run and results are
provided in a more convenient format at the end.
Signed-off-by: Jouni Malinen <j@w1.fi>
"parallel-vm.sh <number of VMs> [arguments..]" can now be used to run
multiple VMs in parallel to speed up the full test cycle significantly. In
addition, the "--split srv/total" argument used in this design would
also make it possible to split this across multiple servers to speed up
testing.
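For example, the first of three servers, using eight local VMs, could
be started with something like this (exact invocation illustrative):

    ./parallel-vm.sh 8 --split 1/3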
Signed-off-by: Jouni Malinen <j@w1.fi>
This improves the accuracy of the code coverage reports, with hostapd-as-AS
and hlr_auc_gw getting analyzed separately.
Signed-off-by: Jouni Malinen <j@w1.fi>
To test the code under the influence of time jumps, add the option
(--timewarp) to the VM tests to reset the clock all the time, which
makes the wall clock time jump ahead at about 20x speed, causing
gettimeofday() to be unreliable for timeout calculations.
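Usage is simply (test case name illustrative):

    ./vm-run.sh --timewarp ap_open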
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
The vm-config in the subdirectory is less useful as it
will get removed by "git clean" and similar, so read a
config file from ~/.wpas-vm-config in addition.
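For example (contents illustrative):

    cat > ~/.wpas-vm-config << EOF
    KERNELDIR=/home/user/linux
    EOF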
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Use a more robust design for collecting the gcov logs from the case
where test cases are run within a virtual machine. This generates a
writable-from-vm build tree for each component separately so that
lcov and gcov can easily find the matching source code and data files.
In addition, prepare the reports automatically at the end of the
vm-run.sh --codecov execution.
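The report generation itself follows roughly the standard lcov flow
(directory and file names illustrative):

    lcov --capture --directory wpa_supplicant --output-file coverage.info
    genhtml coverage.info --output-directory lcov-report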
Signed-hostap: Jouni Malinen <j@w1.fi>
In order to handle regulatory domain requests, crda needs to be
installed on the host, but we also need to install a uevent helper in
the VM so that it gets executed (since we don't run udev).
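Registering the helper is a one-liner (the helper script path is
illustrative):

    echo /usr/local/bin/hwsim-uevent.sh > /proc/sys/kernel/hotplug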
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
If there's code coverage analysis data, copy it out of the VM
to be able to analyse it later. Also add a description to the
README file about how to use it.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Add a CHANNELS configuration to the script running the VM
that can be added to the vm-config file to allow running
the tests with hwsim devices supporting more than a single
channel.
Eventually, with the (hopefully) upcoming dynamic work in
mac80211_hwsim, this might go away entirely, but for now
this allows testing more code paths.
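For example, in vm-config or ~/.wpas-vm-config (the value is
illustrative):

    CHANNELS=2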
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
In some cases, e.g., with the VM tests if the VM crashes, it
can be useful to know which tests should have run but didn't
(or didn't finish). In order to catch these more easily, add
an option to prefill the database with all tests at the very
beginning of the testing (in a new NOTRUN state) and use the
option in the VM tests.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Create a results.db in the output directory when running
the tests in a VM. To make that easier, create the tables
in the python script if they don't exist.
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Rather than just having KERNELDIR, allow setting KERNEL directly.
Also remove the -s option that prevents running multiple machines
at the same time, but add a KVMARGS= variable that can be used to
restore that if needed.
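For example, in the vm-config file (paths illustrative):

    KERNEL=/home/user/linux/arch/x86/boot/bzImage
    KVMARGS="-s"    # restore the previously hard-coded -s option if needed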
Signed-hostap: Johannes Berg <johannes.berg@intel.com>
Instead of running on the host, it can be useful to run in a
VM, particularly to test kernel rather than userspace changes,
so add a few scripts that allow doing so easily.
The basic idea is that the VM kernel is the same architecture
as the host kernel, so the host's root filesystem can be used
(in read-only mode) to run everything. Only a log filesystem
is mounted read-write and will get all the test output.
The kernel console output is collected to a special 'console'
file in the logs directory and kernel crashes are detected.
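Typical usage from the vm directory would then be something like this
(test case name illustrative):

    ./vm-run.sh ap_open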
Signed-hostap: Johannes Berg <johannes.berg@intel.com>