Runtime Verification Tool Classification
This questionnaire is designed to collect information about runtime verification (RV) tools directly from the authors. The tool classification follows the taxonomy proposed here:

http://people.inf.ethz.ch/trayteld/papers/rv18-tax/tax.pdf
(Official Publication at Springer: https://doi.org/10.1007/978-3-030-03769-7_14)

which additionally contains a classification of the tools that participated in the RV competitions [1-3] and the RV-CuBES 2017 workshop [4]. The classification in the paper is based on the knowledge of the taxonomy authors.
 
Both the taxonomy and the classification are a work in progress and we would like to use your RV knowledge and experience to improve it!

We have designed this questionnaire to:
1. Validate the current classification of the tools
2. Learn about and classify new RV tools
3. Improve/extend the taxonomy

Before you start filling out the questionnaire visit the following link:

https://goo.gl/Mmuhdd#gid=667676632

If the tool you are about to classify is in the list, one of your co-authors has already classified it. In that case you do not need to fill out the questionnaire for that tool.

The questionnaire consists of 7 parts that reflect the current structure of the taxonomy. Each part corresponds to a part of the taxonomy and contains the relevant questions. Questions are multiple-choice with some pre-defined choices (that come from the taxonomy), but they also allow you to add your own answer. You can select multiple answers if they all apply to your tool. If you think that a specific pre-defined choice does not really reflect what your tool does/supports, feel free to add a custom answer. We will use the custom answers to further refine the taxonomy.

Additionally, at the end of the questionnaire, we provide a place for your general comments on the taxonomy (that otherwise do not fit in any of the custom answers). Your feedback would be greatly appreciated!

We cannot estimate the time needed to fill out the questionnaire, as it depends on how well your tool fits the taxonomy. The more time you spend, the richer and more comprehensive the taxonomy will be.

We are collecting email addresses solely to be able to contact you to further clarify your answers.

If you have any questions regarding the questionnaire, feel free to contact us:
Yliès Falcone <ylies.falcone@univ-grenoble-alpes.fr>
Srđan Krstić <srdan.krstic@inf.ethz.ch>
Giles Reger <giles.reger@manchester.ac.uk>
Dmitriy Traytel <traytel@inf.ethz.ch>

[1] https://doi.org/10.1007/s10009-017-0454-5
[2] https://doi.org/10.1007/978-3-319-23820-3_27
[3] https://doi.org/10.1007/978-3-319-46982-9_3
[4] http://www.easychair.org/publications/volume/RV-CuBES_2017


Email *
Tool name *
Relevant papers
Please add links to relevant technical and/or tool papers (ideally a list of DOI links)
Source code
Link to the repository with the source code (alternatively, a link to the tool website or a collection of binaries)
Tool classification
1. Specification
A specification indicates the intended system behavior (property), that is, what one wants to check on the system behavior.
Specification Taxonomy
Specification type: *
A specification itself can be either implicit or explicit: 1) implicit specifications are not explicitly formulated or entered in the tool, but rather the tool checks for a predefined (typically critical) set of runtime errors (e.g., memory safety, deadlock freedom, absence of data races); 2) explicit specifications are formulated and supplied to the tool by the user as a formalization of the desired behavior of the monitored system.
Required
Implicit specification *
Implicit specifications generally aim at avoiding critical runtime errors (that would typically not be caught by a compiler). An example is memory safety, whose purpose is to ensure proper accesses to the system memory. Implicit specifications often also describe correct concurrent behavior such as the absence of deadlocks, the atomicity of operations, and the absence of data races.
Required
Explicit specification language name *
Use n.a. if the tool supports only implicit specifications. Please write the full name of the specification language and its acronym (if any). If your tool supports multiple specification languages and the choice of language does not impact the answers to the other questions in the questionnaire, you can list all of the languages here, comma-separated. Otherwise, please fill out the questionnaire multiple times (once for each supported specification language).
Supported language fragment *
If the tool supports only a fragment of the above specification language (i.e., not its full semantics), concisely describe the fragment. For example: a tool has metric temporal logic (MTL) as its input language, but only supports the "weak" version of the until operator (rather than the standard one).
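As an aside (an illustration, not part of the questionnaire), the weak until mentioned in the example can be stated formally: in LTL notation, the standard ("strong") until requires its second argument to eventually hold, whereas the weak until is also satisfied if its first argument holds forever:

```latex
% Strong until: \psi must eventually hold
\varphi \,\mathcal{U}\, \psi \;\equiv\; \psi \lor \bigl(\varphi \land \mathbf{X}(\varphi \,\mathcal{U}\, \psi)\bigr)
% Weak until: additionally satisfied when \varphi holds forever
\varphi \,\mathcal{W}\, \psi \;\equiv\; (\varphi \,\mathcal{U}\, \psi) \lor \mathbf{G}\,\varphi
```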
Data support *
This classifies the tool according to the type of data it supports as parameters of the observations: 1) propositional - observations are atomic and unstructured (i.e., do not have parameters); 2) simple parametric - observations are associated with a list of simple values (e.g., integers, reals, or strings); 3) complex parametric - values may have additional complexity (e.g., an XML or JSON object).
Required
Output *
Here we are interested in the output an explicit specification formalism (not a monitor) assigns to the input executions. In the standard case, the specification associates a verdict with an execution. The verdict indicates specification fulfillment or violation and may range over a domain extending the Boolean domain (e.g., ⊤, ⊥, and ?). A more refined output might include a witness for the verdict (e.g., a set of bindings of values to free variables in the specification) that leads to fulfillment or violation. Robustness information extends classical verdicts by providing a quantitative assessment of the specification fulfillment or violation. Finally, in the most general case, specifications can describe output streams, which is any form of continuously produced information. This may be a stream of verdicts or witnesses, e.g., by evaluating the specification at each observation point, or more generally may be any data computed from the observations.
Required
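For illustration (a hypothetical sketch, not part of the classification), a monitor for the LTL safety property "G p" can emit a verdict from the three-valued domain {⊤, ⊥, ?} at each observation point, producing a stream of verdicts:

```python
# Minimal sketch: an impartial monitor for the safety property "G p"
# over a stream of Boolean observations. The verdict ⊥ is irrevocable;
# otherwise the verdict stays inconclusive (?), because some (infinite)
# continuation could still violate G p, so ⊤ is never emitted online.

TRUE, FALSE, INCONCLUSIVE = "⊤", "⊥", "?"

def monitor_always(observations):
    """Yield one verdict per observation for the property G p."""
    violated = False
    for p in observations:
        if not p:
            violated = True
        yield FALSE if violated else INCONCLUSIVE

print(list(monitor_always([True, True, False, True])))
# → ['?', '?', '⊥', '⊥']
```

Note that this monitor is impartial in the sense discussed under "Properties" below: it never commits to a verdict that a continuation of the trace could overturn.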
Logical time *
Logical time describes the relative ordering between events. Such an order can be total (e.g., when monitoring a monolithic single-threaded program) or partial (e.g., when monitoring a multi-threaded program or a distributed system).
Required
Physical time *
Physical time describes the actual time that elapses when running the system. The domain of this timing information can be discrete or dense. There is a special case where time is treated as data.
Required
Modalities *
Some formalisms may restrict assertions to the current observation whereas others may support constraints over past or future observations. In some cases, different modalities represent distinct expressiveness classes; in other cases it is merely a matter of usability.
Required
Paradigm *
Specifications may describe the intended function from traces to outputs operationally (e.g., by a finite-state automaton) or declaratively (e.g., by a temporal logic formula).
Required
2. Monitor
By monitor, we refer to a component executed together with the system for the purposes of the runtime verification process. A monitor implements a decision procedure which produces the expected output (either the related information for an implicit specification or the specification language output for an explicit specification).
Monitor Taxonomy
Decision procedure *
The decision procedure of the monitor can be either analytical or operational. Analytical decision procedures query and scan records (e.g., from a database) to determine whether some relations hold between the records and the current execution. Operational decision procedures are those based on automata or formula rewriting. In an automata-based monitor, the code relies on some form of automata-like formalism (either classical finite-state automata or richer forms). In a rewrite-based monitor, the decision procedure is based on a set of (possibly pre-defined) rewriting rules triggered while processing the input.
Required
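As a hypothetical sketch of an automata-based operational decision procedure, the monitor below steps a finite-state automaton over observed events to check the example property "no 'read' event may occur after a 'close' event" (the event names, states, and transition table are illustrative, not taken from any tool):

```python
# Transition table of a small finite-state automaton; missing entries
# leave the state unchanged. The "error" state marks a violation.
TRANSITIONS = {
    ("open",   "read"):  "open",
    ("open",   "close"): "closed",
    ("closed", "read"):  "error",   # read after close: violation
    ("closed", "close"): "closed",
}

def run_monitor(events, state="open"):
    """Step the automaton over the event stream and report a verdict."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)
        if state == "error":
            return "violation"
    return "no violation so far"

print(run_monitor(["read", "close", "read"]))   # → violation
print(run_monitor(["read", "close", "close"]))  # → no violation so far
```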
Properties *
When designing monitors, it is desirable that their decision procedures guarantee several properties. Intuitively, a sound monitor never provides an incorrect output, while a complete monitor always provides an output. These properties reflect, respectively, how much confidence one can have in the output of a monitor and how much confidence one can have that a monitor will produce an output. Impartiality requires that a verdict for a finite prefix of a trace is not produced if there still exists an (infinite) continuation leading to another verdict. Anticipation requires that, as soon as every (infinite) continuation of a finite prefix of a trace leads to the same verdict, this verdict is output.
Required
Generation *
The decision procedure may act on an object (e.g., an automaton) which is itself often referred to as the monitor. This may be generated explicitly from the specification (e.g., an automaton synthesized from an LTL formula) or may exist implicitly (e.g., a rewrite system defined in an internal domain-specific language).
Required
Execution *
The execution of a monitor can be interpreted or direct. The key difference between the two approaches is whether monitors for different properties are implemented by different pieces of code (direct) or there is a generic monitoring code that is parametrized by some property information (interpreted). Directly executable monitors can be implemented as some extension of a programming language (i.e., an internal domain-specific language), or produced by a synthesis step from an explicitly generated monitor object. Otherwise, the monitor is said to be interpreted.
Required
3. Deployment
By deployment, we refer to how the monitor is effectively implemented and organized, how it retrieves the information from the system, and when it does so.
Deployment Taxonomy
Stage *
The notion of stage describes when the monitor operates, with respect to the execution of the system. Runtime verification is said to apply offline when the monitor runs after the system has finished executing and thus has access to the complete system execution (e.g., a log file). It is said to apply online when the monitor runs while the system executes and thus observes the current execution and a part of its history.
Required
Synchronization *
In the online case, the communication between the monitor and the system can be synchronous or asynchronous, depending on whether the initial program code stops executing while the monitor analyzes the retrieved information. It is possible for a monitor to be partially synchronous if it synchronizes on some but not all observations.
Required
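The distinction can be sketched as follows (a hypothetical single-threaded illustration; the function and variable names are ours): in the synchronous mode the system blocks on every observation until the monitor returns a verdict, while in the asynchronous mode it only enqueues the observation and continues.

```python
from collections import deque

def monitor_check(event):
    """Toy decision procedure: anything but 'error' is acceptable."""
    return event != "error"

def system_step_sync(event, log):
    # Synchronous: the system waits for the monitor's verdict.
    ok = monitor_check(event)
    log.append((event, ok))

observation_queue = deque()

def system_step_async(event):
    # Asynchronous: the system only hands off the observation.
    observation_queue.append(event)

def monitor_drain(log):
    # The monitor later processes the queued observations.
    while observation_queue:
        e = observation_queue.popleft()
        log.append((e, monitor_check(e)))
```

In a real asynchronous deployment the queue would be consumed by a separate thread, process, or machine; the deque here only illustrates the decoupling between the system's execution and the monitor's analysis.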
Architecture *
The architecture of the monitor may be centralized (e.g., in one monolithic procedure) or decentralized (e.g., by utilizing communicating monitors, which may be synchronous or asynchronous).
Required
Placement *
The notion of placement describes where the monitor operates, with respect to the running system. Therefore, this concept only applies when the stage is online. Traditionally, the monitor is said to be inline when it executes in the same address space as the running system, and outline when it executes in a different one. Pragmatically, the difference between inline and outline is a matter of instrumentation. An inline tool implicitly includes some form of instrumentation, used to inline the monitor in the monitored system. Conversely, outline tools typically provide an interface for receiving observations. This interface may exist within the same language and be called directly, or it may be completely separate with communication happening via other means (e.g., pipes). There is a (not uncommon) grey area between the two in the instance of tools that provide an outline interface but may also automatically generate instrumentation code.
Required
Instrumentation *
Instrumentation itself may be at the hardware or software level.
Required
4. Reaction
By reaction, we refer to how the monitor affects the execution of the system; this can be passive or active.
Active *
Reaction is said to be active when the monitor affects the execution of the monitored system. An active monitor would typically affect the execution of the system when a violation is detected or deemed unavoidable. Various interventions are possible. A so-called enforcement monitor can try to prevent property violations from occurring by forcing the system to adhere to the specification. When a violation occurs, a monitor can execute recovery code to mitigate the effect of the fault and let the program either terminate or pursue the execution from a safer state. A monitor can also raise exceptions that were already present in the initial system. Finally, a monitor can also roll the system back to a previous correct state.
Required
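A minimal sketch of an enforcement monitor (hypothetical, with illustrative event names): it suppresses events that would violate the example property "no 'read' after 'close'", thereby forcing the output execution to adhere to the specification.

```python
def enforce(events):
    """Filter the event stream, suppressing property-violating events."""
    closed, out = False, []
    for e in events:
        if e == "close":
            closed = True
        if closed and e == "read":
            continue  # suppress the event that would violate the property
        out.append(e)
    return out

print(enforce(["read", "close", "read", "close"]))
# → ['read', 'close', 'close']
```

Suppression is only one possible enforcement primitive; other enforcement monitors delay, insert, or replace events instead.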
Passive *
Reaction is said to be passive when the monitor does not influence or minimally influences the initial execution of the program. A passive monitor is typically an observer, only collecting information. In that case, the purpose of monitoring typically pertains to producing the specification outputs (e.g., verdicts or robustness information), providing a form of explanation of a specification output (e.g., a witness trace containing the important events leading to a specific verdict), or gathering statistics (for instance, violated/satisfied specifications, or the number of times intermediate verdicts were output before a final verdict is reached).
Required
5. Trace
The notion of trace appears in two places in a runtime verification framework: by observed trace we refer to the object extracted from the monitored system and examined by the monitor; conversely, the trace model is the mathematical object forming part of the semantics of the specification formalism.
Trace Taxonomy
Information *
Either form of trace object is an abstraction of the system execution and only contains some of the runtime information. The information retrieved from the program can take the form of isolated information. For instance, the trace can contain information on the internal state of the program or notifications that some events occurred in the program (or both).
Required
Sampling *
Sampling is said to be event triggered when the monitor gets information from the target system when an event of interest happens in the program. This can be the case when an event occurs in the system (in case the trace consists of a collection of events), when a relevant part of the program state changes, or when a new input is received or an output produced by the system. Sampling is said to be time triggered when there exists a more or less regular period at which information is collected on the program. The term sampling here reflects the fact that any trace will only collect a relevant subset of actual behaviours.
Required
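The two sampling modes can be contrasted on a simulated execution (a hypothetical sketch; the execution is represented as (timestamp, state) pairs and all names are ours):

```python
execution = [(0.0, "init"), (0.4, "run"), (1.1, "run"), (2.3, "done")]

def sample_on_event(execution):
    """Event-triggered: record an observation whenever the state changes."""
    trace, last = [], None
    for t, state in execution:
        if state != last:
            trace.append((t, state))
            last = state
    return trace

def sample_periodic(execution, period=1.0, horizon=3.0):
    """Time-triggered: record the latest known state at a fixed period."""
    trace, t = [], 0.0
    while t <= horizon:
        state = max((s for s in execution if s[0] <= t),
                    key=lambda s: s[0])[1]
        trace.append((t, state))
        t += period
    return trace

print(sample_on_event(execution))  # observations at state changes only
print(sample_periodic(execution))  # one observation per period
```

Note how the time-triggered trace misses nothing here only by luck: with a longer period, short-lived states could fall between two samples, which is one source of the imprecision discussed under "Precision" below.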
Evaluation *
The runtime information retrieved by the monitor represents an evaluation of the system state. This information can be related to an identified point (in time or at a program location) or an interval.
Required
Precision *
If the trace contains all relevant events then it is precise, otherwise it is imprecise. Reasons for imprecision might be imperfect trace-collection methods, or purposeful omission to reduce overhead.
Required
Model *
Trace models may be infinite (as in standard LTL) whilst observed traces are necessarily finite – in such a case the monitoring approach must evaluate a finite trace with respect to a property over infinite traces. A trace model must reflect the notions of time and data present in the specification.
Required
6. Application area
We have included application areas as a top-level concept of the taxonomy as they can have a large impact on other aspects of the runtime verification tools. Currently, we have identified four main areas:
1) collecting/querying information/metrics - includes visualizing the execution (e.g., with traces, graphs, or diagrams) and evaluating the runtime performance over some metrics (execution time, memory consumption, communication, etc.) to collect statistics;
2) property verification - includes checking if a system conforms to security, privacy, safety, and progress/liveness properties;
3) testing/debugging - used to complement techniques for finding defects and locating faults in systems;
4) failure prevention and reaction - used to detect faults, contain them, recover from them, and (potentially) repair the system.

List of use cases/application areas of the tool *
Required
7. Interference
The interference characterizes monitoring frameworks as invasive or non-invasive.
There are two sources of interference for a monitoring framework with a system, namely through: 1) the induced (time and memory) overhead; and 2) active steering of the system.
The overhead can occur due to the instrumentation applied to the system (while the monitor uses separate resources), due to the monitor competing for resources with the monitored system, or even due to altered scheduling.
Invasiveness *
Required
General comments
Taxonomy
Use the text field below to provide your general comments about the taxonomy.