Tests

The tests directory is mainly intended for people who want to contribute to IVRE and need to make sure their changes do not break it.

Dependencies

To run IVRE tests, you will need coverage.py.
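
coverage.py is distributed on PyPI; assuming pip is available for the interpreter you will use to run the tests, it can be installed with:

    $ pip install coverage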

Test case

The first thing to do is to find samples. You need both (recent) Nmap XML scan result files (or Nmap JSON files, as generated by ivre scancli --json) and PCAP files (dump traffic while you browse, and browse a lot, or sniff a busy open Wi-Fi network, if that is legal in your country).
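
For instance, assuming tcpdump and Nmap are available (the interface name and the scan target below are only placeholders), samples of both kinds can be produced with commands such as:

    $ tcpdump -i eth0 -w browse.pcap          # capture while you browse
    $ nmap -A -oX scan1.xml scanme.nmap.org   # XML scan result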

A good test case should include a wide variety of data (sniff from different places, scan different hosts with different Nmap options).

For the Nmap test, the samples must include at least (see the example scan command after this list):

  • Two scanned (and up) hosts with different IP addresses.
  • One host scanned with the script http-robots.txt reporting /cgi-bin in its output.
  • One scanned host running an anonymous FTP server.
  • One scan result with traceroute and at least one hop with a hostname.
  • One scanned host with a hostname ending in “.com”.
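
As a sketch, a single scan along the following lines can cover several of these requirements at once; the target network is a placeholder, and you still need targets that actually run an anonymous FTP server, publish a /cgi-bin entry in robots.txt, and resolve to a hostname ending in “.com”:

    $ nmap -sV --traceroute --script http-robots.txt,ftp-anon \
          -oX nmap-requirements.xml 198.51.100.0/24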

For the passive test, they must include at least (an example capture command follows this list):

  • Two records with different IP addresses.
  • One SSL certificate.
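
A simple way to make sure the capture contains an SSL certificate is to record the traffic of an HTTPS request. This is only a sketch (the interface and URL are placeholders); TLS 1.2 is forced so that the certificate is sent in cleartext and can be seen by a passive sniffer:

    $ tcpdump -i eth0 -w passive1.pcap tcp port 443 &
    $ curl --tls-max 1.2 -s -o /dev/null https://www.example.com/
    $ kill %1    # stop the capture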

First run

From the tests directory, create the samples subdirectory and place your samples there (the PCAP files must have the extension .pcap, the Nmap XML result files must have the extension .xml, and the Nmap JSON results must have the extension .json).
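
For example (the source paths are placeholders):

    $ cd tests
    $ mkdir -p samples
    $ cp /path/to/captures/*.pcap /path/to/scans/*.xml samples/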

Then, run python ./tests.py (optionally, replace python with the alternative interpreter you want to use, e.g., python3.11; note that coverage.py must be installed for this interpreter). The first run will create a samples/results file containing the expected values for some results. The next runs will use those values to check whether something has been broken.

For this reason, it is important to:

  • Run the tests for the first time with a “known-working” version.
  • Remove the file samples/results whenever a sample file is added, modified, or removed, as shown below.
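
A typical sequence after adding or changing samples would therefore be:

    $ rm samples/results
    $ python ./tests.py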

Improving the test case

If you want to make sure you have enough samples, you can:

  • Check the samples/results file for *_count entries with low values (particularly 0, of course) and find or create new samples that will improve those values.
  • Review the report generated by coverage.py under the htmlcov directory and make sure your current test case covers at least the code you want to change (see the example commands after this list).
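
For instance (xdg-open is only one way to open the report; any browser will do):

    $ grep count samples/results      # spot low or zero *_count values
    $ xdg-open htmlcov/index.html     # browse the coverage report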

Failures

Test failures are not always an issue. Apart from an actual bug, of course, here are some reasons that can explain test failures:

  • You have added new samples and have not removed the samples/results file.
  • Your samples do not match the minimum requirements detailed above.
  • A new feature has been added to IVRE and the new results are actually better than the stored ones.

GitHub actions

For each pull request, tests are run against several MongoDB and PostgreSQL versions, as well as TinyDB, SQLite, and Elasticsearch. The tests run with Python 3.7 to 3.11.

The configurations are in the .github/workflows/*.yml YAML files.