Linux Development Tools
Generic structure indications
Based on experience from the VLT software, a series of recommendations is given below. For a full comparison and rationale refer to RD1:
Definition: A software module is a piece of software (code and documentation) able to perform functions and having an interface available to an external user to access those functions. Technically, a module is a way to organize functions in homogeneous groups. The interface hides the implementation and system dependencies from the user. Managerially, the module is the basic unit for planning, project control, and configuration control.
A software package is a logical grouping of software modules. The only reason for the grouping is to simplify the management of the modules in a hierarchical way. Multiple levels of package hierarchy are permitted (<pkg1>/<pkg1-1>/<pkg1-1-1>/<module>).
Naming: Each module is identified by a name that is unique in the project. The module_name can be made of a minimum of two up to a maximum of sixteen (suggested six) characters (a-z, 0-9) and shall be unique in the project. Names equal or too similar to UNIX names shall be avoided. Case cannot be used to build different names, i.e. the following refer to the same module: xyz, XYZ, xYz. The module_name is used in the naming of all elements that belong to the software module. The module_name should start with an alphabetic character (a-z). For the ELT it has been decided to use namespaces (as is the case for the majority of Linux SW applications) instead of unique module names. It is still required to have unique binary names. The following convention is suggested: the package name is prefixed to the module name using lower-camel-case, for example packageModule or packageModuleFeature.
Each software module should produce only one artefact (one <module> per produced artefact), where an "artefact" is for example a program, a library, etc. The module should include the build script (wscript for waf), which includes the specification of the processor, (cross-)compiler, compiler options, etc., so the build system knows for which HW to build the artefact.
The binaries and libraries are produced by the build system in a local temporary directory (build) so that it does not pollute the source code. The local temporary directory repeats the structure of the source tree, so there are no name collisions.
In case of C/C++, public include files are located in a dedicated directory within src (to facilitate the identification of the include files to install; alternatively this information would have to be added to the build or installation script): src/include/<a>. <a> can either replicate the directory structure of the package and map to the namespace, or use a free-form structure. It is evident that when using a free-form structure a name collision could happen, and this has to be managed by the developer. The recommended directory structure is to use a one-level directory named after the module, whose name is guaranteed to be unique by the module definition. For example:
The most descriptive <a>=package/directory/structure/module/
The suggested <a>=module/
A free form structure <a>=freeform/structure/
Generated code to be archived could also go in a dedicated gen/ directory within src/ while
the test/ directory contains unit tests.
Integration tests should have their own artefact or even a separate module/artefact
depending on the type of interaction required: integrating several artifacts in a module or
integrating several modules.
The resource/config/ directory contains runtime configuration data which are "read only", not subject to modification during execution, like the CDT in the VLT. Configuration information may be provided by the configuration service of the SW platform; however, it should be possible to have a local representation.
The resource/data/ directory contains runtime configuration data which may need to be
modified during/after execution (like calibration data).
The suggested structure, able to support different programming languages and different
target types, is the following:
<package>/<module>/src # Any source files (incl, headers, mocs etc.)
/include/ # public include files in case of C/C++
/gen/ # generated code to be archived
/resource # Resources (e.g. GUI glyphs, sounds, configs etc.)
/interface # Interface specifications (IDL, mockups?)
/doc # non-generated documentation
/test # unit tests
wscript # build script
Inside the resource/ directory a tree of different subdirectories will be present to differentiate between types of resources (<module>/resource/<directory path to module>/). The types currently foreseen are:
config/ - contains default configuration files
audio/ - contains sounds, music and other audible files
image/ - contains images and other visual artefacts
model/ - contains models
dictionary/ - contains dictionaries
data/ - contains runtime configuration
Inside each resource subdirectory, as now, the structure is free form but by convention the rule is to create a subdirectory structure that clearly identifies the path to the module. In case of shared resources the subdirectory with the module path may not be needed. The build system is supposed to just recursively copy the structure to the destination directory.
A set of directories, pointed to by environment variables of the same name, is defined where the various results of an installation or execution can be stored:
System Root (SYSROOT): directory delivered by the ELT project that contains basic software and resources widely shared between everybody in the project. This includes for example source templates, basic libraries, basic utilities, build system support and so on. This is installed on the system and is read-only to the user and instrument manager.
Integration Root (INTROOT): directory where the user, an instrument for example, builds and installs specific software and support files. Default configuration files and resources are also part of the Integration Root. This is installed read-only to the user and is populated by the instrument manager.
The described Root areas are physical areas on the filesystem. Nevertheless, especially for the Integration Root, which may be assembled from the output of multiple projects, the usage of filesystem overlaying may be introduced in the future.
Additionally, the CFGPATH environment variable is defined with the purpose of collecting the various resource directory paths useful for executing applications installed on the respective machine. By default the first path added is $INTROOT/resource/ but others can be appended as needed.
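For example, an additional resource directory could be appended in the shell (the path used here is purely illustrative):
export CFGPATH=$CFGPATH:$HOME/myproject/resource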
Environmental Modules System (Lmod)
Environment Modules provide a convenient way to dynamically change the users’
environment through modulefiles. This includes easily adding or removing directories to the
PATH environment variable.
The software Lmod, a Lua based environment module system, is used in the ELT Development environment and replaces the former VLT pecs.
A modulefile contains the necessary information to allow a user to run a particular
application or provide access to a particular library. All of this can be done dynamically
without logging out and back in. Modulefiles for applications modify the user’s path to make
access easy. Modulefiles for Library packages provide environment variables that specify
where the library and header files can be found.
Packages can be loaded and unloaded cleanly through the module system.
It is also very easy to switch between different versions of a package or remove it.
The latest online user guide of Lmod
can be found under:
The modulefile contains commands to add to the PATH or set environment variables. When loading the modulefile the commands are executed, and when unloading the modulefile the actions are reversed. Examples of commands which can be used in a modulefile:
prepend_path("PATH", value)
setenv("NAME", value)
set_alias("name","value")
family("name")
load("pkgA", "pkgB", "pkgC")
The standard ELT development environment provides some lua files to set default variables (PATH, LD_LIBRARY_PATH …). They are located under:
/elt/common/modulefiles
> tree -pugfi /elt/common/modulefiles
.
[drwxr-xr-x eltmgr elt ] ./core
[-rwxr-xr-x eltmgr elt ] ./core/eltdev.lua
[-rwxr-xr-x eltmgr elt ] ./core/introot.lua
[drwxr-xr-x eltmgr elt ] ./default
[-rw-r--r-- root root ] ./default/ciisrv.lua
[-rw-r--r-- root root ] ./default/cut.lua
[-rw-r--r-- root root ] ./default/ecsif.lua
[-rw-r--r-- root root ] ./default/elt-trs.lua
[-rw-r--r-- root root ] ./default/etr.lua
[-rw-r--r-- root root ] ./default/mal.lua
[-rw-r--r-- root root ] ./default/msgsend.lua
[-rw-r--r-- root root ] ./default/mudpi.lua
[-rw-r--r-- root root ] ./default/nomad-robot-library.lua
[-rw-r--r-- root root ] ./default/oldbloader.lua
[-rw-r--r-- root root ] ./default/rtms.lua
[-rw-r--r-- root root ] ./default/wdep.lua
[-rw-r--r-- root root ] ./default/wtools.lua
[drwxr-xr-x eltmgr elt ] ./extra
3 directories, 15 files
The eltdev.lua file is loaded by default at login and contains all the default settings (including the loading of the python and jdk packages). The introot.lua can be loaded by the user to set up PATH and LD_LIBRARY_PATH when an INTROOT is defined. In addition, the user can define private lua files under the directory:
$HOME/modulefiles
~/modulefiles 1062 > ll
total 12
-rw-r--r-- 1 eltmgr elt 129 Jul 5 07:37 private-eltint20.lua
-rw-r--r-- 1 eltmgr elt 61 Jul 4 09:47 private-eltint21.lua
-rw-r--r-- 1 eltmgr elt 130 Jul 6 10:03 private.lua
From this directory, Lmod will make available all the lua files and will load by default the files private.lua and private-<hostname>.lua, if they exist.
Example: supposing that INTROOT is created in the home directory of
the user, create and edit the file $HOME/modulefiles/private.lua with the following lines:
local home = os.getenv("HOME")
local introot = pathJoin(home, "INTROOT")
setenv ("INTROOT", introot)
setenv ("PREFIX", introot)
load ("introot")
local pythonpath = pathJoin(introot, "lib/python3.7/site-packages/")
append_path("PYTHONPATH", pythonpath)
Note: Log-out and log-in again to allow new environments from the newly created $HOME/modulefiles/private.lua to be loaded.
Lmod basic commands
$ module help # display Lmod help message
$ module list # list of modules loaded
$ module show package # Display what is executed by the module
$ module avail # list of modules available to be loaded
Lmod uses the directories listed in $MODULEPATH to find the modulefiles to load; /elt/System/modulefiles and $HOME/modulefiles are added by default.
With the sub-command avail Lmod
reports only the modules that are in the current
MODULEPATH. Those are the only modules that the user can load.
Users can add or remove a directory to/from the MODULEPATH with:
$ module use /path/to/modulefiles # Add the directory to $MODULEPATH search path
$ module unuse /path/to/modulefiles # Remove directory from $MODULEPATH
A user logs in with the standard modules loaded. Then the user modifies the default setup through the standard module commands:
$ module load package1 package2 ... # load modules
$ module unload package1 package2 ... # unload modules
Once users have the desired modules loaded, they can issue:
$ module save
This creates a file called ~/.lmod.d/default which has the list of desired modules (collection). This default collection will be the user’s initial set of modules (loaded at login). Users can have as many collections as they like. They can save to a named collection with:
$ module save <collection_name>
And, at any time, it is possible to restore the set of modules saved in that named collection with:
$ module restore <collection_name>
A user can print the contents of a collection with:
$ module describe <collection_name>
Example
eltint20 eltmgr:> module list
Currently Loaded Modules:
1) jdk/java-openjdk 2) python 3) eltdev 4) private-eltint20 5) private
eltint20 eltmgr:> module avail
------------------ /home/eltmgr/modulefiles ------------------------------------
private (L) private-eltint20 (L) private-eltint21
------------------ /elt/System/modulefiles -------------------------------------
eltdev (L) introot jdk/java-openjdk (L) python (L)
------------------ /usr/share/lmod/lmod/modulefiles/Core ------------------------
lmod/6.5.1 settarg/6.5.1
Where:
L: Module is loaded
eltint20 eltmgr:~ 1002 > module load introot
eltint20 eltmgr:~ 1003 > module list
Currently Loaded Modules:
1) jdk/java-openjdk 2) python 3) eltdev 4) private 5) private-eltint20
6) introot
eltint20 eltmgr:~ 1004 > module save
Saved current collection of modules to: default
ELT Common Basic Software
The System Root (SYSROOT) delivered by the ELT project is by default installed under the directory /elt/. It is distributed with the RPM elt-common-X.Y.Z-n.
The default location for the SYSROOT is defined in the default lua file by the variable SYSROOT:
SYSROOT=/elt/X.Y, where X.Y is the version of the RPM elt-common-X.Y.Z-n
The SYSROOT includes in particular the following:
Build system support wtools
getTemplate utility, used to generate module, wscript… from templates
ESO Sphinx theme
elt-devenv utility, to highlight modifications introduced on the default ELT installation
msgsend utility, used to send one-shot commands via CII MAL
It is installed by user eltmgr and is read-only to the user and instrument manager. Example of usage of getTemplate:
$ cd <the location for introot>
$ getTemplate -d introot INTROOT
Inline documentation
As indicated in AD4 the inline documentation in source code files should be managed using the Doxygen format. This makes it possible to generate in the end a single documentation set even if the project consists of source code files in different languages.
Nevertheless, language specific extensions to Doxygen can be managed using additional
custom filters. The specific language filters present in the ELT Linux Development
environment are discussed in the language section.
Configuration options to Doxygen should be passed via the configuration file and not by
command line or otherwise. A template Doxygen configuration file can be generated with:
fede@esopc ~/waf/example $ doxygen -g mytemplate.config
Configuration file mytemplate.config created.
Now edit the configuration file and enter
doxygen mytemplate.config
to generate the documentation for your project
The configuration should be instructed to generate at least HTML documentation, which is the most used in the ELT software development. To improve readability and organization of the documentation it is highly suggested to make good use of Doxygen groups, entities that permit grouping similar topics into a common documentation section. A simple grouping can be done by reflecting the directory structure: defining a group for each package, a group for each module that is part of the package group, and then adding every artifact in the module to the module group. This creates a documentation group structure that fully reflects the filesystem structure, making it easy to find and maintain the needed information. To define a group the Doxygen directive defgroup can be used, for example:
\defgroup groupName Long Description of the group
And once defined the group can be referenced as:
\ingroup groupName
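For illustration only, a hypothetical C++ header could combine the two directives to place a class in a module group that is itself part of a package group:
/**
 * \defgroup pkgModule pkgModule: documentation group of the module
 * \ingroup pkg
 */

/**
 * \ingroup pkgModule
 * \brief Example class documented as part of the module group.
 */
class ExampleClass;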
Additional details can be found in the Grouping section of the Doxygen manual. When creating documentation in the scope of a wtools project, the Doxygen documentation can be automatically generated using the waf build system. Please refer to the documentation of the build system for further details on how to generate documentation in such case.
C++ Tools
Compiler suite
The ELT Programming Languages Coding Standards (AD4) specify the usage of the C++17 standard, which requires the GNU Compiler Suite version 9.x or higher for appropriate support.
At present the ELT Linux Development environment ships with version 13.2.1, provided by system packages.
Linter
The tool for checking C++ code for code style and common errors is [clang-tidy](https://clang.llvm.org/extra/clang-tidy)
Natively, you can use it this way:
clang-tidy test.cpp -checks=-*,clang-analyzer-*,-clang-analyzer-cplusplus*
By default, the linter tools use the configuration created for the ESO ELT project. To use a different configuration file, the options --clang-tidy-config, --checkstyle-config, and --pylint-config can be used during the project configuration phase. An absolute or relative path to the configuration file should be given as the parameter. Please note that in the case of clang-tidy the configuration file should be in YAML format. For example:
waf configure --clang-tidy-config=./alt_clang-tidy.yml
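A minimal sketch of such a YAML configuration file (the checks selected here are purely illustrative) could be:
# alt_clang-tidy.yml: illustrative clang-tidy configuration
Checks: '-*,clang-analyzer-*,modernize-*,readability-*'
WarningsAsErrors: ''
HeaderFilterRegex: '.*'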
It is also possible to set configuration files and other options for linter tools, by passing appropriate arguments to wtools.project.declare_project method. For example:
wtools.project.declare_project('waf-test', '0.1-dev',
recurse='cpp java python',
requires='cxx java python qt5 pyqt5 boost',
boost=dict(
libs='program_options',
force_static=False,
),
cxx=dict(
clang_tidy_config='./alt_clang-tidy.yml',
),
java=dict(
checkstyle_config='./alt_checks.xml'
),
python=dict(
pylint_config='./alt_pylintrc'
)
)
By default the linter tools will not run for up-to-date targets until their inputs change. To force an execution of all checks, run with the option --lintall:
$ waf lint --lintall
Note that the lint command will also update the necessary binaries, like the build command, so there is no need to run build first; i.e. run waf lint instead of waf build lint.
Unit testing framework
There are multiple frameworks for writing C++ unit tests supported by the ELT Development Environment and integrated in the waf build framework. Those are:
Google Test framework; additionally the Google Benchmark framework is supplied and can be used together with Google Test to provide a benchmarking environment (a minimal example is sketched after this list)
Catch2 header-only testing framework
Qt5 unit tests, which can also be used for C++ programs using the Qt framework
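As an illustration, a minimal Google Test unit test could look like the following sketch (the function under test and all names are hypothetical):
// test/unit_test.cpp: minimal Google Test example
#include <gtest/gtest.h>

// trivial function under test (hypothetical)
static int add(int a, int b) { return a + b; }

TEST(AddTest, HandlesPositiveNumbers) {
    EXPECT_EQ(add(2, 3), 5);
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
When built with the test feature as shown in the C++ Examples section below, the test is executed automatically by the build system.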
Dynamic checking tools
Several dynamic checking tools are included in the ELT Linux development environment and their usage is highly encouraged:
gdb, the GNU debugger
valgrind, a suite of tools for debugging and profiling
strace, the system calls and signal tracer
gcov, the GNU coverage library and tools to analyse the code coverage amount
The programs and libraries have to be compiled with coverage options enabled to use the feature provided, namely:
Compiler flags to be added: -O0 -fprofile-arcs -ftest-coverage
Linker flags to be added: -lgcov
To produce easily readable HTML or XML output from the gcov binary data the tool gcovr is supplied as part of the Python distribution. For GCC sanitizers, the wtools build system supports address, thread, leak and undefined sanitizers provided by GCC.
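A minimal sketch of a manual coverage run, assuming a hypothetical source file example.cpp, could look like this:
# compile and link with coverage instrumentation enabled
g++ -O0 -fprofile-arcs -ftest-coverage -o example example.cpp -lgcov
# run the instrumented binary to produce the .gcda data files
./example
# generate an HTML coverage report with gcovr
gcovr -r . --html --html-details -o coverage.html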
GCC Sanitizers examples
Currently supported sanitizers are: address, thread and undefined.
Create a simple program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, const char *argv[]) {
char *s = malloc(100);
strcpy(s, "Hello world!");
printf("string is: %s\n", s);
return 0;
}
Then use the following command to build it:
gcc leak.c -o leak -fsanitize=address -static-libasan
The output would be similar to this:
string is: Hello world!
=================================================================
==235624==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 100 byte(s) in 1 object(s) allocated from:
#0 0x4eaaa8 in __interceptor_malloc ../../.././libsanitizer/asan/asan_malloc_linux.cc:144
#1 0x5283dd in main /users/PZS0710/edanish/test/asan/leak.c:6
#2 0x2b0c29909544 in __libc_start_main (/lib64/libc.so.6+0x22544)
SUMMARY: AddressSanitizer: 100 byte(s) leaked in 1 allocation(s).
Python Tools
Python Interpreter
The ELT Linux development environment uses the OS RPM provided Python 3 (currently 3.11). To create a separate, user-scoped Python virtual environment, use virtualenv:
pip install virtualenv
virtualenv /home/vagrant/python
. /home/vagrant/python/bin/activate
To check which modules are installed, use
rpm -qa | grep python3
Python test framework
The Python code unit tests can be performed in two ways:
By writing in-lined doctest tests inside the source code
By writing totally separate tests under the test/ subdirectory that use the standard Python unittest library, nose2 tests syntax or pytest syntax. While both the nose2 and pytest runners are fully integrated into the build system, it is highly suggested to stick to pytest as that is the newer and therefore longer supported choice.
In both cases the tests can be run by calling the Python interpreter directly and executing the test starter. To make the test execution uniform, the usage of the test runner through the build framework test execution commands is highly suggested. Please refer to the build framework manual for more details.
Python doctest example
def helloworld(name):
    """ Returns a greeting to the person passed as a parameter
    >>> print(helloworld("Teri"))
    helloworld Teri
    >>> print(helloworld("Ale"))
    helloworld Ale
    """
    return "helloworld " + name

if __name__ == "__main__":
    import doctest
    doctest.testmod()
It is important to notice that in some cases, for example when used together with Qt5 libraries, the usage of doctest can be problematic due to the necessity to scan and initialize multiple libraries. It is therefore suggested not to use doctest for testing complex situations or modules with high-level library dependencies.
Python unittest example
The following unittest example is based on the helloworld module used in the previous section:
#! /usr/bin/env python
# encoding: utf-8
import unittest
from hello import hello

class test_helloworld(unittest.TestCase):
    def test_helloworld_print1(self):
        self.assertEqual('helloworld Teri', hello.helloworld("Teri"))

    def test_helloworld_print2(self):
        self.assertEqual('helloworld Ale', hello.helloworld("Ale"))
Python pytest example
Create a simple Python script file with the following content:
def inc(x):
    return x + 1

def test_answer():
    assert inc(3) == 5
Then, execute pytest:
(python) dev vagrant:~ 43 > pytest
====================== test session starts ====================
platform linux -- Python 3.11.6, pytest-7.2.2, pluggy-1.0.0
PySide2 5.15.7 -- Qt runtime 5.15.11 -- Qt compiled 5.15.11
rootdir: /home/vagrant
plugins: anyio-3.5.0, custom-exit-code-0.3.0, qt-4.2.0, cov-4.0.0, asyncio-0.20.3
asyncio: mode=Mode.STRICT
collected 1 item
test_sample.py F [100%]
========================== FAILURES ===========================
_________________________ test_answer _________________________
def test_answer():
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = inc(3)
test_sample.py:6: AssertionError
=================== short test summary info ===================
FAILED test_sample.py::test_answer - assert 4 == 5
===================== 1 failed in 0.12s =======================
Python tools
The tool for checking Python code for code style and common errors is pylint. The graphical frontend pylint-gui can be also useful. Example command line execution for pylint:
PYTHONPATH=.;for i in `find . -name "src" -type d -exec readlink -f {} \\; | sort | uniq`;
do export PYTHONPATH=$PYTHONPATH:${i}; done; pylint -f parseable `find . -name *.py | grep -
v "./build/\\|./INTROOT/\\|./wtools/"` > pylint.log
The command line is pretty complicated as it tries to set the PYTHONPATH for the various modules in the project, so dependent modules can see each other when they execute an import statement. It is therefore highly recommended to use the build framework command line instead to execute the linting operations, which will take care of setting all the needed paths as explained.
Python documentation
Python documentation using Doxygen is enhanced in the ELT Linux development environment using the doxypypy Python module (<https://github.com/Feneric/doxypypy>), which extends the Doxygen notation with specific Python language constructs. Doxypypy is generally invoked through a wrapper script named py_filter which just passes some default parameters to it:
#!/bin/bash
doxypypy -a -c $1
Java Tools
Java Software Development Kit
The Java SDK used in the ELT Linux development environment is OpenJDK (currently version 17, as shown below). It is shipped as a system package and its compiler and bytecode interpreter are on the user's default path.
(python) dev vagrant:~ 14 > java -version
openjdk version "17.0.9" 2023-10-17
OpenJDK Runtime Environment (Red_Hat-17.0.9.0.9-2) (build 17.0.9+9)
OpenJDK 64-Bit Server VM (Red_Hat-17.0.9.0.9-2) (build 17.0.9+9, mixed mode, sharing)
Unit test framework
The unit test framework for Java in the ELT Linux development environment is TestNG (<http://testng.org/doc/index.html>); it is located under the /usr/share/java directory. Execution of tests with TestNG requires the preparation of an XML file containing the test description and the execution of the code using the TestNG runner class org.testng.TestNG. For code coverage calculation and reporting the JaCoCo project (<http://www.jacoco.org/>) is used; it is located under the /usr/share/java/jacoco directory. There is also a wrapper /usr/bin/jacococli:
Usage: java -jar jacococli.jar --help | <command>
<command> : dump|instrument|merge|report|classinfo|execinfo|version
--help : show help (default: false)
--quiet : suppress all output on stdout (default: false)
To generate the binary coverage data the tests have to be run using the JaCoCo jar as a Java agent (using the -javaagent command line option). The jacococli.jar package, JaCoCo command line interface, can be used to generate easily readable HTML coverage reports.
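As an illustrative sketch (class, suite and file names are hypothetical), a TestNG test class and the corresponding suite XML could look like this:
// ExampleTest.java: illustrative TestNG test class
import org.testng.Assert;
import org.testng.annotations.Test;

public class ExampleTest {
    @Test
    public void additionWorks() {
        Assert.assertEquals(2 + 2, 4);
    }
}

<!-- testng.xml: illustrative suite description passed to the runner -->
<suite name="ExampleSuite">
  <test name="ExampleTests">
    <classes>
      <class name="ExampleTest"/>
    </classes>
  </test>
</suite>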
Java code checking tools
The ELT Linux development environment offers mainly two tools for Java code checking:
checkstyle, for implementing code style checking.
Usage: /usr/bin/checkstyle [-dEghjJtTV] [-b=<xpath>] [-c=<configurationFile>] [-f=<format>] [-o=<outputPath>] [-p=<propertiesFile>] [-s=<suppressionLineColumnNumber>] [-w=<tabWidth>] [-e=<exclude>]… [-x=<excludeRegex>]… <files>…
Checkstyle verifies that the specified source code files adhere to the specified rules. By default, violations are reported to standard out in plain format. Checkstyle requires a configuration XML file that configures the checks to apply.
Example execution:
/usr/bin/checkstyle -f=xml -c=/checks.xml *
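A minimal sketch of such a checks.xml configuration (the modules selected are purely illustrative) could be:
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
    <module name="TreeWalker">
        <module name="UnusedImports"/>
        <module name="MethodName"/>
    </module>
</module>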
GUI Toolkit
The GUI Toolkit selected for the ELT project is QT5 (<https://www.qt.io/>). The ELT Linux
development environment provides QT5 libraries for C++ and Python language bindings.
Python language bindings for QT5 are currently provided via the Python module PyQt5. Further information about GUI development can be found in AD3.
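As a minimal sketch, not tied to any ELT specific library, a PyQt5 application showing a single label could look like this:
#!/usr/bin/env python
# minimal, purely illustrative PyQt5 example
import sys
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel("Hello ELT")
label.show()
sys.exit(app.exec_())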
Build system
The build system used for ELT Linux software is waf (<https://waf.io>). waf is a rather recent build system written in Python which natively supports C, C++, Java and obviously Python, along with many other languages, such as D, C#, OCaml and Fortran, and many toolkits, such as Qt or glib. Build scripts in waf are written in Python, and therefore particular customizations can be made using this full programming language, making the tool very powerful without the need to learn a macro language specific to the build system as in other cases. The build system can be further customized using "tools" which expand the build system to other languages or interfaces. Nevertheless, for the standard cases of the supported languages the syntax is very easy and intuitive, and the learning curve is very gentle.
To further ease the development process on the ELT, ESO prepared an additional layer on
top of waf
that simplifies the configuration scripts of the most common ELT related software
(as for example in the past the vltMakefile for VLT software and acsMakefile for ALMA
software). Usage of this layer, named wtools
, also makes future enhancements and
upgrades much easier as they are concentrated into a single library instead of being spread
throughout multiple configuration files. Nevertheless, when advanced features not
supported by wtools are needed, native waf/Python code can be used to override and
augment the functionalities for a single module.
So, usage of wtools is highly desired, although a basic knowledge of waf, presented hereafter, is very useful to understand the basic functions of the build system and to provide the means for very specific customization for advanced needs of modules. An introduction to wtools is given in section 3.10.
The version of waf
must be 2.0.21 or greater, as from this version on the full support for the
QT graphical toolkit for both C++ and Python has been natively added and tested.
The reference documentation for waf
is the Waf Book at <https://waf.io/book/>
Introduction to waf scripts
The waf build system uses as its configuration file a so-called wscript. Therefore, the first step to work with waf is to prepare such a script. For example, given a single C++ file named exProgC.cpp in the directory src, from which we want to compile an executable named exProgC, this could look like:
# encoding: utf-8
def configure(conf):
# We are using C++
conf.load('compiler_cxx')
def options(opt):
# We are using C++
opt.load('compiler_cxx')
def build(bld):
# Define the main program.
bld.program(source='src/exProgC.cpp', target='exProgC')
As the waf build system also contains the configuration step of the build procedure, something usually separate in other packages such as GNU Make or CMake, a wscript will usually contain a configure and an options section, containing respectively the configuration, done when explicitly requested by a waf configure execution, and the options to be used for the build. A configuration step for the example may look as follows:
fede@esopc ~/waf/example/pkg1/exProgC $ waf configure
Setting top to : /home/fede/waf/example/pkg1/exProgC
Setting out to : /home/fede/waf/example/pkg1/exProgC/build
Checking for 'g++' (C++ compiler) : /usr/bin/g++
'configure' finished successfully (0.033s)
The execution of the configure stage will create the build directory, where all temporary build files and the results are stored. Depending on the configuration requested, waf will also check for the necessary tools, for example compilers or libraries, and report them at this stage. All this data is stored so that further steps can rely on it for a faster and coherent execution. A more articulated configuration example, using Python and PyQt extensions, may look like this:
fede@esopc ~/waf/example/pkg2/exPyqt5 $ waf configure
Setting top to : /home/fede/waf/example/pkg2/exPyqt5
Setting out to : /home/fede/waf/example/pkg2/exPyqt5/build
Checking for program 'python' : /usr/bin/python
Checking for program 'pyuic5, pyside2-uic, pyuic4' : /usr/bin/pyuic5
Checking for program 'pyrcc5, pyside2-rcc, pyrcc4' : /usr/bin/pyrcc5
Checking for program 'pylupdate5, pyside2-lupdate, pylupdate4' : /usr/bin/pylupdate5
Checking for program 'lrelease-qt5, lrelease' : /usr/bin/lrelease
Checking for python version >= 2.7.4 : 2.7.6
'configure' finished successfully (0.099s)
Once the build tree is configured the build, defined in the build section of the wscript
, can
be executed with waf build:
fede@esopc ~/waf/example/pkg1/exProgC $ waf build
Waf: Entering directory `/home/fede/waf/example/pkg1/exProgC/build'
[1/2] Compiling src/exProgC.cpp
[2/2] Linking build/exProgC
Waf: Leaving directory `/home/fede/waf/example/pkg1/exProgC/build'
'build' finished successfully (0.088s)
The output shows how the cpp file is first compiled and then linked, to create the desired executable. Everything is done under the build directory. By default waf also runs unit tests, if any are defined in the configuration file. The --notests option can be added on the command line to skip the execution of the tests.
Of course, as waf keeps track of file changes, it will not rebuild parts of the software that do not need to be rebuilt. Executing the same command as before once more will therefore lead to:
fede@esopc ~/waf/example/pkg1/exProgC $ waf build
Waf: Entering directory `/home/fede/waf/example/pkg1/exProgC/build'
Waf: Leaving directory `/home/fede/waf/example/pkg1/exProgC/build'
'build' finished successfully (0.015s)
Commonly used command line invocations for waf also include clean, to clean up the build while keeping the configuration data, distclean, to totally clean up every file waf generated, and install, to install the generated results on the final filesystem (as specified by the configuration script or, by default, in the /usr/local directory tree).
Of course the configuration part can contain specific customization, a few examples:
# Check for gtest library on the system
conf.check(compiler='cxx',lib='gtest',mandatory=True, use='GTEST')
# Check that Python is at least 3.4.0
conf.check_python_version((3,4,0))
# Set some flags to the compiler flags variable
conf.env.append_unique('CXXFLAGS', ['-g', '-O2'])
The text encoding comment at the beginning of the wscript files is not mandatory when Python version 3.x is used, since UTF-8 is the default encoding (as also recommended by the PEP 8 Style Guide). As one of waf's goals is to be compatible with both 2.7.x and 3.x, the text encoding comment may often be found in build script examples.
waf and the ELT directory structure
As mentioned in section 3.1 it is highly suggested that each directory generates one single
result, be it an executable, a library or a Python module.
This approach of course poses two immediate questions for the waf
scripts: recursive
execution and specification of dependencies.
For the recursive execution waf natively supports the recurse function, which can be used in the configuration, options and build sections to include other waf scripts. Using recurse, waf will optimize tools that are loaded multiple times, and the dependencies between the different trees will also be matched. As an example, given the structure:
fede@esopc ~/waf/example $ find . -name wscript
./wscript
./pkg2/wscript
./pkg2/exCqt5/wscript
./pkg2/exPyqt5/wscript
./pkg2/exProgC2/wscript
./pkg1/wscript
./pkg1/exProgLinkedC/wscript
./pkg1/exJava/wscript
./pkg1/exPython/wscript
./pkg1/exLibC/wscript
./pkg1/exProgC/wscript
The wscript
at the top level would look like:
module_list = 'pkg1 pkg2'
def options(opt):
# We recurse options in our submodules
opt.recurse(module_list)
def configure(conf):
# We recurse configurations in our submodules
conf.recurse(module_list)
def build(bld):
bld.recurse(module_list)
Following the same pattern, the wscript in the pkg2 directory will for example contain:
artifact_list = 'exProgC2 exPyqt5 exCqt5'
def options(opt):
# We recurse options in our artifacts
opt.recurse(artifact_list)
def configure(conf):
# We recurse configurations in our artifacts
conf.recurse(artifact_list)
def build(bld):
bld.recurse(artifact_list)
The second interesting topic is how to specify a dependency explicitly. This is as easy as adding a use= indication where the usage is required, pointing the argument to the name of the file that is generated elsewhere. For example:
def options(opt):
# We are using C++
opt.load('compiler_cxx')
def configure(conf):
# We are using C++
conf.load('compiler_cxx')
def build(bld):
# Define the main program. Note: it is using (use=) a library that is
# generated as another artifact someplace else in the build tree
bld.program(source='src/exProgLinkedC.cpp', target='exProgLinkedC',
use='exLibC')
For the build of the program we require the usage of exLibC, a target generated in another directory with the following script (configuration and options have been omitted):
def build(bld):
# Define the main program
bld.shlib(source='src/exLibC.cpp', includes='src/', target='exLibC',
export_includes='src/')
This target will create a shared library with the name exLibC (the operating system specific prefix and suffix will be managed by waf) and will also automatically export the include files from the src/ directory. If another build rule in the same waf scope adds it using the use=exLibC option, then the library will automatically be linked and the needed includes imported.
In general, considering the base idea that each module creates just one product as it will
be in the future ESO supplied additional layer, a generic pattern could be used:
bld.program(target='xyz', source=bld.path.ant_glob('src/*.cpp'),
includes=bld.path.ant_glob('src/includes/*.hpp'), ... )
Where the target name would indeed be the module directory name itself with the eventual package name prepended.
Installing files
The waf build system natively supports installation of the built artefacts using the install directive of the waf command line executable. In a similar fashion a very clean uninstall directive is also present to exactly revert the installation process. Both of these directives rely on another optional command line option specifying the root prefix of the whole installation, namely --destdir.
If not specified otherwise, waf will try to install the artefacts according to their types into a standard Unix directory tree starting from the root destination directory, therefore binaries in bin/, libraries in lib/ and so on. Each build rule can have the installation position specified by adding an install_path parameter to the rule. Additionally, to strip part of the path (for example the src/ from the proposed directory structure) an additional parameter install_from is present.
bld(name='hello', features='py', source=bld.path.ant_glob('src/**/*.py'),
install_path='$PREFIX/lib/python_modules/', install_from='src')
Additionally, files which are not built as artifacts by waf can be installed using dedicated directives, exemplified below:
bld.install_files('${PREFIX}/include', ['a1.h', 'a2.h'])
bld.install_as('${PREFIX}/dir/bar.png', 'foo.png')
bld.symlink_as('${PREFIX}/lib/libfoo.so.1', 'libfoo.so.1.2.3')
It is important to notice that all the waf installation directives are only executed if waf is called with install or uninstall, and not otherwise.
C++ Examples
The following is an example showing how to build a shared library and create a program that is a unit test based on the Google Test library. The Google Test library is defined in the configuration and then used as a normal dependency. The unit tests for C++ are handled by the standard waf_unit_test extra and are marked with the test feature. A shared library is created using the shlib directive, while a static library can be created with the stlib directive and a program with the program directive. The build script looks as follows:
def options(opt):
# We are using C++ and Unit testing library
opt.load('compiler_cxx waf_unit_test')
def configure(conf):
# We are using C++ and Unit testing library
conf.load('compiler_cxx waf_unit_test')
# Define that the configuration stage requires Google test library
conf.check(compiler='cxx',lib='gtest',mandatory=True, use='GTEST')
def build(bld):
# Define the main program
bld.shlib(source='src/exLibC.cpp', includes='src/include',
target='exLibC', export_includes='src/include')
# Define the unit test program (features='test')
bld.program(features='test', source='test/unit_test.cpp src/exLibC.cpp',
includes=['src/include'] , use=['GTEST'], target='unit_test')
C++ and QT Example
The following example compiles a simple C++ application using QT5 GUI libraries. The
example takes care of generating user interface as needed from QT5 UI files. Waf
also
supports QT5 resources and language files generation.
def options(opt):
# We are using C++ and QT5. Important: order matters!
opt.load('compiler_cxx qt5')
def configure(conf):
conf.load('compiler_cxx qt5')
def build(bld):
# Define the main C++ program
bld(features = 'qt5 cxx',
uselib = 'QT5WIDGETS QT5CORE',
source = 'src/m1uiCmdCpp.cpp src/m1uiCmdCppUi.cpp src/include/m1uiCmdCppUi.ui',
target = 'm1uiCmdCpp',
includes = 'src/ src/include',
defines = 'WAF=1')
It is important that, for the automatic generation of MOC files, WAF=1 is passed as in the example, and that in the C++ file the MOC is then referenced so waf will generate it:
#if WAF
#include "include/m1uiCmdCppUi.moc"
#endif
Python Examples
Python modules and programs can be included in waf and as such they will be compiled to bytecode (to .pyc and .pyo by default for Python 2.x and just .pyc for Python 3.x) and will therefore be checked for formal correctness. Modules can be easily managed using ant_glob to handle entire directory trees.
Unit tests are supported by the pytest extra, which is based on the generic waf_unit_test module, and therefore the output of Python and C++ unit tests can be united in a common report. The parameter pytest_source points to the sources to be examined and ut_str to the command line used to execute the tests. The example presents both the use of tests embedded inside the main sources themselves, using the doctest feature, and truly separate test source files in the test/ subdirectory.
def configure(conf):
# We are using Python and use Python unit tests
conf.load('python pytest waf_unit_test')
conf.check_python_version(minver=(2, 7, 4))
def options(opt):
opt.load('python waf_unit_test')
def build(bld):
# Example module
bld(name='hello', features='py', source=bld.path.ant_glob('src/**/*.py'),
install_from='src')
# Module tests using doctest tests embedded in the source code itself
bld(features='pytest', use='hello', pytest_source=bld.path.ant_glob
('src/**/*.py'), ut_str='nosetests --with-doctest ${SRC}')
# Module tests using standard separate unit test
bld(features='pytest', use='hello', pytest_source=bld.path.ant_glob
('test/*.py'), ut_str='${PYTHON} -B -m unittest discover')
Python and QT5 Example
Using the pyqt5 extra waf
supports automatic generation of Python files from QT5
definitions (for UI, languages, resources). An example:
def options(opt):
# Load also python to demonstrate mixed calls
opt.load('python pyqt5')
def configure(conf):
# Load also python to demonstrate mixed calls
conf.load('python pyqt5')
conf.check_python_version((3,4,0))
def build(bld):
# Demonstrates mixed usage of py and pyqt5 module, and tests also
# install_path and install_from (since generated files go into build
# it has to be reset inside the pyqt5 tool)
bld(features="py pyqt5", source="src/sample.py src/firstgui.ui",
install_path="${PREFIX}/play/", install_from="src/")
The pyqt5 waf module supports the PyQt5, PyQt4 and PySide2 bindings, with PyQt5 being the default. To change the default, the options --pyqt5-pyqt4 or --pyqt5-pyside2 can be passed at waf configuration time.
Mixing Python and C++ and QT5
It is important to notice that natively in waf the pyqt5 and qt5 extras cannot be loaded at the same time, as they will try to handle the exact same files based on their name extension (for example .ui files). In a single-artefact directory structure this also applies when recursively creating the build structure with the recurse command as described. To overcome this restriction an additional extra, named qtchainer and shipped in the playground section of waf, must be loaded as the last extra:
def options(opt):
# Load what needed for qt5 and pyqt5 and chainer as *last* so it
# will chain to the proper one depending on feature
opt.load('compiler_cxx qt5 python pyqt5')
opt.load('qtchainer',
tooldir='/usr/share/doc/waf-1.9.5/playground/qt5-and-pyqt5/qtchainer')
def configure(conf):
conf.load('compiler_cxx qt5 python pyqt5 qtchainer')
conf.check_python_version((3,4,0))
The specific feature to use should be then defined, for example:
bld(features="pyqt5", source="sampleRes.qrc")
Or
bld(features = 'qt5 cxx cxxprogram', source="sampleRes2.qrc")
Java Example
Using the java tool waf
supports building java programs, preparing JAR archives and
running Java unit tests. So the basic setup in a wscript
file is:
def options(opt):
opt.load('java')
def configure(conf):
conf.load('java')
conf.check_java_class('java.io.FileOutputStream')
The check_java_class gives the possibility to check if a given class is available in the classpath. Additional classpaths can be defined in the configuration step as:
conf.env.CLASSPATH_ADDNAME = ['aaaa.jar', 'bbbb.jar']
And then the additional name can be used as a standard use dependency in a java build step:
use = 'ADDNAME',
Then, for example, to just compile Java sources:
bld(features = 'javac',
srcdir = 'src/',# folder containing the sources to compile
outdir = 'src', # where to output the classes (build directory)
compat = '1.8', # java compatibility version number
sourcepath = ['src'],
classpath = ['.', '..'],
name = 'exJava-src')
And to create a JAR archive:
bld(features = 'jar',
basedir = 'src', # folder with the classes and files to package
# (must match outdir)
destfile = 'exJava.jar', # generated artifact name
manifest = 'src/exJava.Manifest',
name = 'exJava',
use = 'exJava-src')
The operations can be also combined into a single step:
bld(features = 'javac jar',
srcdir = 'src/', # folder containing the sources to compile
outdir = 'src', # where to output the classes (build directory)
compat = '1.8', # java compatibility version number
sourcepath = ['src'],
classpath = ['.', '..'],
basedir = 'src', # folder with the classes and files to package (must match outdir)
destfile = 'exJava.jar', # generated artifact name
manifest = 'src/exJava.Manifest')
Unit testing can be done using the javatest waf
extra:
def options(opt):
opt.load('java waf_unit_test javatest')
def configure(conf):
conf.load('java javatest')
bld(features = 'javac javatest',
srcdir = 'test/',
outdir = 'test',
sourcepath = ['test'],
classpath = [ 'src' ],
basedir = 'test',
use = ['JAVATEST', 'mainprog'], # mainprog is the program being tested in src/
ut_str = '${JAVA} -cp ${CLASSPATH} ${JTRUNNER} ${SRC}',
jtest_source = bld.path.ant_glob('test/*.xml'))
Executing the test with code coverage enabled would require the unit test execution command to include JaCoCo as an agent:
ut_str = '${JAVA} -cp ${CLASSPATH} -javaagent:/usr/share/java/jacoco/org.jacoco.agent.jar=destfile=/some/path/jacoco.exec ${JTRUNNER} ${SRC}',
The HTML report can be generated by hand or by generating a new waf task invoking the JaCoCo command line interface with something like:
${JAVA} -jar ${JACOCOCLI} report ${OUTDIR}/jacoco.exec --classfiles ${CLASSFILE1} --classfiles ${CLASSFILE2} ... --classfiles ${CLASSFILEN} --html ${OUTDIR}/jacoco --sourcefiles ${SRCPATH}/src --sourcefiles ${SRCPATH}/test
Doxygen documentation generation
The waf
build system supports Doxygen documentation generation using the doxygen
extra. A Doxygen configuration file has to be supplied. For example:
def options(opt):
# Doxygen extra
opt.load('doxygen')
def configure(conf):
# Doxygen extra
conf.load('doxygen')
if not conf.env.DOXYGEN:
conf.fatal('doxygen is required, install it')
def build(bld):
# Doxygen generation
bld(features='doxygen', doxyfile='doxy.config',
install_path='${PREFIX}/doc')
In the Doxygen configuration file a few options are suggested to generate the recursive documentation without including in the indexes the waf generated build tree:
# The RECURSIVE tag can be used to turn specify whether or not subdirectories
# should be searched for input files as well. Possible values are YES and NO.
# If left blank NO is used.
RECURSIVE = YES
# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
# Note that relative paths are relative to the directory from which doxygen is
# run.
EXCLUDE = build/
# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories. Note that the wildcards are matched
# against the file with absolute path, so to exclude all test directories
# for example use the pattern */test/*
EXCLUDE_PATTERNS = */.*/*
EXCLUDE_PATTERNS += */build/*
Also setting the extensions of the files the user wants to generate the documentation for is a good idea if the defaults are not as desired:
# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
# and *.h) to filter out the source-files in the directories. If left
# blank the following patterns are tested:
# *.c *.cc *.cxx *.cpp *.c++ *.d *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh
# *.hxx *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm *.dox *.py
# *.f90 *.f *.for *.vhd *.vhdl
FILE_PATTERNS = *.c *.h *.cpp *.hpp *.py *.java *wscript
The last pattern in the example, wscript, instructs Doxygen to also pick up the build scripts in the documentation. This is very useful to create the documentation grouping in the package build scripts and have it available in the modules; for example, for our example structure, in the wscript of pkg1 we can define:
"""
@file
@brief Top level pkg1 build script
@defgroup pkg1 pkg1 module
"""
That will create the pkg1 Doxygen group that can then be referenced in the modules to group documentation. To fully support the build scripts in the Doxygen documentation there are two more details to set up in the configuration file, namely telling Doxygen to treat such files as Python scripts and passing them through the Python documentation filter (which is doxypypy, as described in 3.6.4), for example as follows:
# Doxygen selects the parser to use depending on the extension of the files it
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension,
# and language is one of the parsers supported by doxygen: IDL, Java,
# Javascript, CSharp, C, C++, D, PHP, Objective-C, Python, Fortran, VHDL, C,
# C++. For instance to make doxygen treat .inc files as Fortran files (default
# is PHP), and .f files as C (default is Fortran), use: inc=Fortran f=C. Note
# that for custom extensions you also need to set FILE_PATTERNS otherwise the
# files are not read by doxygen.
EXTENSION_MAPPING = no_extension=Python
# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis.
# Doxygen will compare the file name with each pattern and apply the
# filter if there is a match.
# The filters are a list of the form:
# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further
# info on how filters are used. If FILTER_PATTERNS is empty or if
# non of the patterns match the file name, INPUT_FILTER is applied.
FILTER_PATTERNS = *.py=py_filter *wscript=py_filter
Sphinx documentation generation
Sphinx uses the reStructuredText markup language by default, and can read MyST markdown via third-party extensions. Both of these are powerful and straightforward to use, and have functionality for complex documentation and publishing workflows. They both build upon Docutils to parse and write documents.
Sphinx comes with a script called sphinx-quickstart that sets up a source directory and creates a default conf.py with the most useful configuration values from a few questions it asks you. To use this, run:
sphinx-quickstart
This creates a source directory with conf.py and a root document, index.rst. The main function of the root document is to serve as a welcome page, and to contain the root of the "table of contents tree" (or toctree). This is one of the main things that Sphinx adds to reStructuredText, a way to connect multiple files to a single hierarchy of documents.
You add documents by listing them in the content of the directive:
.. toctree::
:maxdepth: 2
usage/installation
usage/quickstart
...
This is exactly how the toctree for this documentation looks. The documents to include are given as document names, which in short means that you leave off the file name extension and use forward slashes (/) as directory separators.
In Sphinx source files, you can use most features of standard reStructuredText. There are also several features added by Sphinx. You can add cross-file references in a portable way (which works for all output types) using the ref role.
For an example, if you are viewing the HTML version, you can look at the source for this document – use the “Show Source” link in the sidebar. Now that you have added some files and content, let’s make a first build of the docs. A build is started with the sphinx-build program:
sphinx-build -M html sourcedir outputdir
where sourcedir is the source directory, and outputdir is the directory in which you want to place the built documentation. The -M option selects a builder; in this example Sphinx will build HTML files.
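As a short illustration of the ref role mentioned above (the label and file names reuse the toctree example and are purely illustrative):
.. in usage/installation.rst, define a label just before a section title:

.. _installation:

Installation
============

.. in another document, e.g. usage/quickstart.rst, reference it:

See the :ref:`installation` section for setup instructions.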
ESO waf extension: wtools
wtools
is a library that extends waf
with helpers and implementation of a lot of default
features. Specifically, it allows a user to declare a waf
project and corresponding modules
in a simplified way.
It also provides a homogeneous access to various operations such as running tests,
executing code coverage inspections, doing code style checks, installing artefacts into a
directory structure and so on.
The official documentation of wtools is automatically generated for each ELT Linux
Development Environment version. The latest version of the document can be found at this
URL:
<https://ftp.eso.org/pub/elt/repos/docs/devenv5/wtools/html/index.html>
Previous versions and documents can be found at:
Software versioning and revision control
The current versioning and revision control system for ELT is Git using Gitlab
(<https://gitlab.eso.org/>) as web-based Git repository manager. Git and the command line
tools gitk, git-gui, git-lfs are also provided in the ELT Linux development environment.
Additional information and guidelines on Git usage can be found in AD2
.
Integration tests
The integration tests framework in the ELT Linux development environment is Robot Framework (<http://robotframework.org/>) version 3.1.2. Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has easy-to-use tabular test data syntax and it utilizes the keyword-driven testing approach. Its testing capabilities can be extended by test libraries implemented either with Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases. Robot Framework is shipped in the ELT Linux development environment in its Python flavour. It can therefore be accessed launching the robot executable, for example:
fede@esopc ~/rtest $ robot test.rst
==============================================================================
Test
==============================================================================
Sample Code 1: launches ls -la /tmp/mustexistdir. Checks that rc 0... | FAIL |
2 != 0
------------------------------------------------------------------------------
Sample Code 2: launches ps ax and checs that ntpd is inside | PASS |
------------------------------------------------------------------------------
Sample Code 3: start two sleeps processes, sleeps a bit, and check... | PASS |
------------------------------------------------------------------------------
Test | FAIL |
3 critical tests, 2 passed, 1 failed
3 tests total, 2 passed, 1 failed
==============================================================================
Output: /home/fede/rtest/output.xml
Log: /home/fede/rtest/log.html
Report: /home/fede/rtest/report.html
The execution will generate output and report files as stated at the end of the execution. The file passed on the command line is usually a file written in structured text. Writing such tests doesn’t require the tester to have a knowledge of Python or Java. Documentation for the syntax can be found starting at <http://robotframework.org/robotframework/#user-guide>. A very simple shell execution-based test that produces the aforementioned output is presented below:
*** Settings ***
Library OperatingSystem
Library Process
*** Test Cases ***
Sample Code 1: Launches ls -la /tmp/mustexistdir. Checks that rc is 0 and mustexistfile inside it. Logs all to robot log
${rc} ${stdout} Run and Return RC and Output ls -la /tmp/mustexistdir
Should Be Equal As Integers ${rc} 0
Should Contain ${stdout} mustexistfile
Log ${stdout}
Sample Code 2: Launches ps ax and checks that ntpd is inside
${result} Run Process ps ax
Should Contain ${result.stdout} ntpd
Sample Code 3: Starts two sleep processes, sleeps a bit, and checks that one is still there and one not. Kills them. To fail, put the first sleep, for example, to 10
Start Process sleep 3 alias=proc1
Start Process sleep 9 alias=proc2
Sleep 6
Process Should Be Stopped proc1
Process Should Be Running proc2
Terminate All Processes kill=true
ETR examples
Extensible Test Runner
The ELT development environment natively supports writing integration tests using Robot Framework, pytest or Nose.
The test tool etr was created to act as a unified interface for executing integration or system tests implemented using different test frameworks/runners, and to provide a mechanism for introducing ELT specific behaviour if necessary, such as setting up the test environment. Although etr is a product of the ELT ICS Framework it does not depend on any other ICS Framework products and can be installed and used standalone.
Creating the ETR Test Module
To create integration tests the following structure should be used, where everything relevant for a test is contained in one root directory, referred to as the integration test module:
helloworld # integration test module root
|-- etr.yaml # main configuration file
`-- src # source directory for test files
At this point the etr.yaml needs to be created to be a valid module. As the filename hints at this is a normal YAML file with a specific schema. The following example shows an empty, but valid configuration file:
version: "1.0"
At this point it’s also possible to run etr:
helloworld$ etr
----------------------------------------------------------------------
----------------------------------------------------------------------
Ran 0 tests in 0.0s
OK
Here we will use Robot Framework as an example since it is natively supported by an etr provided plugin. We start by creating a Robot Framework test suite file helloworld/src/tests.robot:
*** Test Cases ***
Hello World
Log Hello World
Now we have to update the etr.yaml configuration to enable support for Robot Framework and configure the plugin to run our new test file:
version: "1.0"
plugins:
# Load the built-in plugin that enables support for Robot Framework
- etr.plugins.robot
# Configure the robot plugin to run tests in src/tests.robot
# Note: The path is relative to this configuration file.
robot:
tests:
- "src/tests.robot"
The tests in src/tests.robot will now be executed with etr:
helloworld$ etr
----------------------------------------------------------------------
running test suite "src/tests.robot"
Tests.Hello World ... OK
----------------------------------------------------------------------
Ran 1 tests in 0.0s
OK
System Deployment
NOMAD
Nomad is a flexible workload orchestrator that enables an organization to easily deploy and manage any containerized or legacy application using a single, unified workflow. It can run a diverse workload of Docker, non-containerized, microservice, and batch applications.
Some of Nomad’s main features include:
Efficient resource usage - Nomad optimizes the available cluster resources by efficiently placing the workloads onto the client nodes of the cluster through a process known as bin packing.
Self-healing - Nomad constantly monitors and detects if tasks stop responding and takes appropriate actions to reschedule them for high uptime.
Zero downtime deployments - Nomad supports several update strategies including rolling, blue/green, and canary deployments to make sure your applications are updated with zero downtime to your users.
Different workload types - Nomad’s flexibility comes from its use of task drivers and allows orchestration of Docker and other containers, as well as Java Jar files, QEMU virtual machines, raw commands with the exec driver, and more. Additionally, users can create their own task driver plugins for customized workloads.
Cross platform support - Nomad runs as a single binary and allows you to orchestrate your application across macOS, Windows, and Linux clients running on-premises, in the cloud, or on the edge.
Single unified and declarative workflow - Regardless of the workload type, the workflow for deploying and maintaining applications on Nomad is unified within a declarative job specification that outlines important attributes like workload type and configuration, service definitions for communication between components, and location values such as region and datacenter.
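As an illustration of such a declarative job specification, a minimal sketch of a Nomad job file using the Docker task driver (all names and values are hypothetical) could look like this:
# example.nomad: illustrative job specification
job "example" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 1

    task "server" {
      driver = "docker"

      config {
        image = "nginx:stable"
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB
      }
    }
  }
}
Such a job file would typically be submitted with nomad job run example.nomad.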