The tests directory is mainly intended for people who want to contribute to IVRE and want to make sure their changes do not break existing functionality.
The first thing is to find samples. You need both (recent) Nmap XML scan result files (or Nmap JSON files, as generated by ivre scancli --json) and PCAP files (dump while you browse, and browse a lot, or sniff a busy open Wi-Fi network, if that is legal in your country).
A good test case should contain a lot of varied data: sniff from different places, and scan different hosts with different Nmap options.
For the Nmap test, it is mandatory to have at least:
- Two scanned (and up) hosts with different IP addresses.
- One host scanned with a script reporting /cgi-bin in its output.
- One host scanned with an anonymous FTP server.
- One scan result with traceroute and at least one hop with a hostname.
- One host scanned with a hostname ending with “.com”.
For the passive test:
- Two records with different IP addresses.
- One SSL certificate.
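Assuming your samples are standard Nmap XML output, the mandatory features above can be sanity-checked before running the suite. The sketch below is illustrative, not part of IVRE: the element and attribute names follow the Nmap XML output format, and the ftp-anon script id is an assumption about how an anonymous FTP server shows up in a scan.

```python
# Sketch: check one Nmap XML sample for the mandatory features listed
# above. Element/attribute names follow standard Nmap XML output; the
# "ftp-anon" script id is an assumption for the anonymous-FTP check.
import xml.etree.ElementTree as ET

def check_nmap_sample(path):
    """Return a dict mapping each mandatory feature to a boolean."""
    root = ET.parse(path).getroot()
    # Addresses of hosts reported as "up".
    up_addrs = {
        addr.get("addr")
        for host in root.iter("host")
        if host.find("status") is not None
        and host.find("status").get("state") == "up"
        for addr in host.iter("address")
    }
    return {
        "two_up_hosts": len(up_addrs) >= 2,
        "cgi_bin_in_script_output": any(
            "/cgi-bin" in (script.get("output") or "")
            for script in root.iter("script")
        ),
        "anonymous_ftp": any(
            script.get("id") == "ftp-anon" for script in root.iter("script")
        ),
        "traceroute_hop_with_hostname": any(
            hop.get("host") for hop in root.iter("hop")
        ),
        "dot_com_hostname": any(
            (hostname.get("name") or "").endswith(".com")
            for hostname in root.iter("hostname")
        ),
    }
```

A sample set passes this sketch when every feature appears in at least one file; note that it covers only the Nmap requirements, not the passive (PCAP) ones.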
In the tests directory, create the samples subdirectory and place your samples there (the PCAP files must have the extension .pcap, the Nmap XML result files must have the extension .xml, and the Nmap JSON results must have the extension .json).
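Before running the tests, it can help to verify that nothing in the samples directory will be silently ignored. A minimal sketch, assuming the .pcap/.xml/.json extensions and the results file described in this section (adapt the names if your layout differs):

```python
# Sketch: flag files in the samples directory that do not use one of
# the expected extensions. The extension set and the "results" file
# name are assumptions based on the description above.
import os

EXPECTED_EXTENSIONS = {".pcap", ".xml", ".json"}

def unexpected_samples(directory):
    """Return sample file names with an unexpected extension, sorted."""
    return sorted(
        name for name in os.listdir(directory)
        if os.path.splitext(name)[1] not in EXPECTED_EXTENSIONS
        and name != "results"  # expected-values file created by tests.py
    )
```

An empty result means every file will be picked up by the test suite.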
Then run python ./tests.py (optionally replace python by the alternative interpreter you want to use; coverage.py must be installed for this interpreter). The first run will create a samples/results file with the expected values for some results. The next runs will use those values to check whether anything has been broken.
For this reason, it is important to:
- Run the tests for the first time with a “known-working” version.
- Remove the file samples/results whenever a sample file is added, modified or removed.
Improving the test case
If you want to make sure to have enough samples, you can:
- Check the *_count entries with low values (particularly 0, of course) and find or create new samples that will improve those values.
- Check the report generated by coverage.py under the htmlcov directory, and check whether your current test case covers at least the code you want to change.
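For the first point, scanning samples/results by hand gets tedious. The sketch below ASSUMES the file stores one "name = value" pair per line, which may not match your IVRE version; check the actual format before relying on it.

```python
# Sketch: report "*_count" entries at or below a threshold in the
# expected-results file. ASSUMPTION: each line is "name = value";
# adapt the parsing if the file format differs in your IVRE version.
def low_counts(path, threshold=0):
    """Map each *_count entry at or below threshold to its value."""
    low = {}
    with open(path) as fdesc:
        for line in fdesc:
            name, sep, value = line.partition(" = ")
            if not sep or not name.endswith("_count"):
                continue
            try:
                count = int(value)
            except ValueError:
                continue
            if count <= threshold:
                low[name] = count
    return low
```

Entries reported at 0 are the ones most worth targeting with new samples.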
Test failures are not always an issue. Apart from a new bug, of course, here are some reasons that can explain test failures:
- You have added new samples and have not removed the samples/results file.
- Your samples do not match the minimum requirements detailed above.
- A new feature has been added to IVRE and the new results are actually better than the stored ones.
Tests are run with several MongoDB and PostgreSQL versions, as well as TinyDB, SQLite and Elasticsearch, for each pull request. The tests run with Python 3.7 to 3.11.
The configurations are in the .github/workflows/*.yml YAML files.