I am happy to announce that the following two papers have been
accepted at the 22nd International SPIN Symposium on Model Checking of
Software (SPIN 2015) to be held in Stellenbosch, South Africa on 24–26
On Refinement of Büchi Automata for Explicit Model Checking
František Blahoudek¹, Alexandre Duret-Lutz²,
Vojtěch Rujbr¹, and Jan Strejček¹
¹Faculty of Informatics, Masaryk University, Brno, Czech Republic
²LRDE, EPITA, Le Kremlin-Bicêtre, France
In explicit model checking, systems are typically described in an
implicit and compact way. Some valid information about the system
can be easily derived directly from this description, for example
that some atomic propositions cannot be valid at the same time. The
paper shows several ways to apply this information to improve the
Büchi automaton built from an LTL specification. As a result, we get
smaller automata with shorter edge labels that are easier to
understand and, more importantly, for which the explicit model
checking process performs better.
Practical Stutter-Invariance Checks for ω-Regular Languages
Thibaud Michaud and Alexandre Duret-Lutz
LRDE, EPITA, Le Kremlin-Bicêtre, France
We propose several automata-based constructions that check whether a
specification is stutter-invariant. These constructions assume that
a specification and its negation can be translated into Büchi
automata, but aside from that, they are independent of the
specification formalism. These transformations were inspired by a
construction due to Holzmann and Kupferman, but that we broke down
into two operations that can have different realizations, and that
can be combined in different ways. As it turns out, implementing
only one of these operations is needed to obtain a functional
stutter-invariant check. Finally we have implemented these
techniques in a tool so that users can easily check whether an LTL
or PSL formula is stutter-invariant.
I'm happy to announce the release of Declt version 1.1. Declt is a
reference manual generator for Common Lisp ASDF systems. Get it here:
New in this release:
* Declt now properly documents complex system and component dependencies
(:feature, :version and :require notably),
* Declt advertises a system's :if-feature conditional if any,
* Finally, and more importantly, Declt is no longer limited to the
documentation of a single system. Given a main system to document, any
subsystem (that is, other systems we depend on and which are part of
the same distribution) will be documented in the same reference manual.
Declt (pronounce “dec’let”) is a reference manual generator for Common
Lisp libraries. It works by loading an ASDF system and introspecting its
contents. The generated documentation contains the description of the
system itself and its local dependencies (other systems in the same
distribution): components (modules and files), packages and definitions
found in those packages.
Exported and internal definitions are listed separately. This allows the
reader to have a quick view on the library’s public API. Within each
section, definitions are sorted lexicographically.
In addition to ASDF system components and packages, Declt documents the
following definitions: constants, special variables, symbol macros,
macros, setf expanders, compiler macros, functions (including setf
ones), generic functions and methods (including setf ones), method
combinations, conditions, structures, classes and types.
The generated documentation includes every bit of information that
introspection can provide: documentation strings, lambda lists
(including qualifiers and specializers where appropriate), slots
(including type, allocation and initialization arguments), definition
source file etc.
Every documented item provides a full set of cross-references to related
items: ASDF component dependencies, parents and children, classes' direct
methods, super- and subclasses, slot readers and writers, setf expanders'
access and update functions, etc.
Finally, Declt produces exhaustive and multiple-entry indexes for every
documented item.
Reference manuals are generated in Texinfo format (compatible with, but
not requiring, Texinfo 5). From there it is possible to produce readable /
printable output in info, HTML, PDF, DVI and PostScript with tools such
as makeinfo, texi2dvi or texi2pdf.
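For instance, converting the generated Texinfo file could look like this (the file name below is hypothetical; any name Declt produces works the same way):

```shell
# Convert the Texinfo manual generated by Declt into final formats.
makeinfo declt-output.texi          # Info
makeinfo --html declt-output.texi   # HTML
texi2pdf declt-output.texi          # PDF
```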
The primary example of documentation generated by Declt is the Declt
reference manual itself.
My new Jazz CD entitled "Roots and Leaves" is out!
Check it out: http://didierverna.com/records/roots-and-leaves.php
Lisp, Jazz, Aïkido: http://www.didierverna.info
Spot is a C++11 library for ω-automata manipulation and model checking.
This release contains 713 patches contributed over the last 18 months
by Thibaud Michaud, Étienne Renault, Alexandre Lewkowicz, and myself.
As the name suggests, this release represents huge progress towards
Spot 2.0, but we are not quite there yet: the one thing you should
not consider stable in this release is the C++ API, which will still
change a lot as we march towards version 2.0.
This release also comes with a new web site at
and Debian packages. See
for installation instructions, or grab the source tarball directly at
New in spot 1.99.1 (2015-06-23)
* Major changes motivating the jump in version number
- Spot now works with automata that can represent more than
generalized Büchi acceptance. Older versions were built around
the concept of TGBA (Transition-based Generalized Büchi
Automata) while this version now deals with what we call TωA
(Transition-based ω-Automata). TωAs support arbitrary acceptance
conditions specified as a Boolean formula of transition sets
that must be visited infinitely often or finitely often. This
genericity makes it possible, for instance, to represent Rabin or
Streett automata, as well as some generalized variants of those.
- Spot has near-complete support for the Hanoi Omega Automata
format (http://adl.github.io/hoaf/). This format supports
automata with the generic acceptance condition described above,
and has been implemented in a number of third-party tools (see
http://adl.github.io/hoaf/support.html) to ease their
interactions. The only part of the format not yet implemented in
Spot is the support for alternating automata.
- Spot now compiles in C++11 mode. The set of C++11 features
we use requires GCC >= 4.8 or Clang >= 3.5. Although GCC 4.8 is
more than two years old, people with older installations won't be
able to install this version of Spot.
- As a consequence of the switches to C++11 and to TωA, a lot of
the existing C++ interfaces have been renamed, and sometimes
reworked. This makes this version of Spot not backward
compatible with Spot 1.2.x. See below for the most important
API changes. Furthermore, the reason this release is not
called Spot 2.0 is that we have more of those changes planned.
- Support for Python 2 was dropped. We now support only Python
3.2 or later. The Python bindings have been improved a lot, and
include some convenience functions for better integration with
IPython's rich display system. Users familiar with IPython's
notebook should have a look at the notebook files in
* Major news for the command-line tools
- The set of tools installed by Spot now consists of the following
11 commands. Those marked with a '+' are new in this release.
- randltl Generate random LTL/PSL formulas.
- ltlfilt Filter and convert LTL/PSL formulas.
- genltl Generate LTL formulas from scalable patterns.
- ltl2tgba Translate LTL/PSL formulas into Büchi automata.
- ltl2tgta Translate LTL/PSL formulas into Testing automata.
- ltlcross Cross-compare LTL/PSL-to-Büchi translators.
+ ltlgrind Mutate LTL/PSL formulas.
- dstar2tgba Convert deterministic Rabin or Streett automata into Büchi.
+ randaut Generate random automata.
+ autfilt Filter and convert automata.
+ ltldo Run LTL/PSL formulas through other tools.
randaut needs no introduction: it does what you expect.
ltlgrind is a new tool that mutates LTL or PSL formulas. If you
have a tool that is bogus on some formula that is too large to
debug, you can use ltlgrind to generate smaller derived formulas
and see if you can reproduce the bug on those.
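As a sketch (assuming Spot's tools are on the PATH, and using an arbitrary example formula), such a shrinking session could start with:

```shell
# Print mutated (generally smaller) variants of a formula that
# triggers a bug; each variant can then be fed back to the buggy tool.
ltlgrind -f 'G(a -> F(b & c))'
```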
autfilt is a new tool that processes a stream of automata. It
allows format conversion, filtering automata based on some
properties, and general transformations (e.g., change of
acceptance conditions, removal of useless states, product
between automata, etc.).
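A minimal sketch of such a pipeline, assuming Spot's tools are installed:

```shell
# Translate a formula into a TGBA in the HOA format, then let
# autfilt randomly reorder its states and transitions.
ltl2tgba -H 'GFa & GFb' | autfilt --randomize
```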
ltldo is a new tool that runs LTL/PSL formulas through other
tools, but uses Spot's command-line interfaces for specifying
input and output. This makes it easier to use third-party tools
in a pipe, and it also takes care of some necessary format
conversions.
- ltl2tgba has a new option, -U, to produce unambiguous automata.
In unambiguous automata any word is recognized by at most one
accepting run, but there might be several ways to reject a word.
This works for LTL and PSL formulas.
- ltl2tgba has a new option, -S, to produce generalized-Büchi
automata with state-based acceptance. Those are obtained by
converting some transition-based GBA into a state-based GBA, so
they are usually not as small as one would wish. The same
option -S is also supported by autfilt.
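For example (a sketch assuming Spot's tools are installed):

```shell
# Translate with state-based generalized-Büchi acceptance instead
# of the default transition-based acceptance; the result may be
# larger than the transition-based automaton.
ltl2tgba -S 'GFa & GFb'
```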
- ltlcross will work with translators producing automata with any
acceptance condition, provided the output is in the HOA format.
So it can effectively be used to validate tools producing Rabin
or Streett automata.
- ltlcross has several new options:
--grind attempts to reduce the size of any bogus formula it
discovers, while still exhibiting the bug.
--ignore-execution-failures ignores cases where a translator
exits with a non-zero status.
--automata saves the produced automata into the CSV or JSON
file. Those automata are saved using the HOA format.
ltlcross will also output two extra columns in its CSV/JSON
output: "ambiguous_aut" and "complete_aut" are Booleans
that respectively tell whether the automaton is
ambiguous and complete.
- "ltlfilt --stutter-invariant" will now work with PSL formulas.
The implementation is actually much more efficient
than our previous implementation that was only for LTL.
- ltlfilt's old -q/--quiet option has been renamed to
--ignore-errors. The new -q/--quiet semantics are the
same as in grep (and also autfilt): disable all normal
output, for situations where only the exit status matters.
- ltlfilt's old -n/--negate option can only be used as --negate
now. The short '-n N' option is now the same as the new
--max-count=N option, for consistency with other tools.
- ltlfilt has a new --count option to count the number of matching
formulas.
- ltlfilt has a new --exclusive-ap option to constrain formulas
based on a list of mutually exclusive atomic propositions.
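A sketch of this option, assuming Spot's tools are installed (the formula is an arbitrary example):

```shell
# Constrain the formula under the assumption that atomic
# propositions a and b can never be true at the same time.
ltlfilt --exclusive-ap=a,b -f 'F(a & b) | GFc'
```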
- ltlfilt has a new option --define to be used in conjunction with
--relabel or --relabel-bool to print the mapping between old and
new labels.
- all tools that produce formulas or automata now have an --output
(a.k.a. -o) option to redirect that output to a file instead of
standard output. The name of this file can be constructed using
the same %-escape sequences that are available for --stats.
- all tools that output formulas have a -0 option to separate
formulas with \0. This helps in conjunction with xargs -0.
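As a sketch (assuming Spot's tools are installed):

```shell
# Emit three random formulas separated by \0, then hand each one
# safely to another command with xargs -0, regardless of any
# special characters in the formulas.
randltl -0 -n 3 a b | xargs -0 -n1 echo
```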
- all tools that output automata have a --check option that
requests extra checks to be performed on the output to fill
in property values for the HOA format. This option
implies -H for HOA output. For instance
ltl2tgba -H 'formula'
will declare the output automaton as 'stutter-invariant'
only if the formula is syntactically stutter-invariant
(e.g., in LTL\X). With
ltl2tgba --check 'formula'
additional checks will be performed, and the automaton
will be accurately marked as either 'stutter-invariant'
or 'stutter-sensitive'. Another check performed by
--check is testing whether the automaton is unambiguous.
- ltlcross (and ltldo) have a list of hard-coded shorthands
for some existing tools. So for instance running
'ltlcross spin ...' is the same as running
'ltlcross "spin -f %s>%N" ...'. This feature is much
more useful for ltldo.
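For instance (a sketch assuming both Spot's ltldo and Spin are installed):

```shell
# Use the built-in shorthand for Spin: this should behave like
# running ltldo 'spin -f %s>%N' -f 'GFa'.
ltldo spin -f 'GFa'
```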
- For options that take an output filename (i.e., ltlcross's
--save-bogus, --grind, --csv, --json) you can force the file
to be opened in append mode (instead of being truncated) by
prefixing the filename with ">>", so that new results are
appended to the end of the file.
* Other noteworthy news
- The web site moved to http://spot.lrde.epita.fr/.
- We now have Debian packages.
- The documentation now includes some simple code examples
for both Python and C++. (This is still a work in progress.)
- The customized version of BuDDy (libbdd) used by Spot has been
renamed to libbddx to avoid issues with copies of BuDDy
already installed on the system.
- There is a parser for the HOA format
(http://adl.github.io/hoaf/) available as a
spot::automaton_stream_parser object or spot::parse_aut()
function. The former can parse a stream of
automata in order to do batch processing. This format can be
output by all tools (since Spot 1.2.5) using the --hoa option,
and it can be read by autfilt (by default) and ltlcross (using
the %H specifier). The current implementation does not support
alternation. Multiple initial states are converted into an
extra initial state; complemented acceptance sets Inf(!x) are
converted to Inf(x); explicit or implicit labels can be used;
aliases are supported; "--ABORT--" can be used in a stream.
- The above HOA parser can also parse never claims, and LBTT
automata, so the never claim parser and the LBTT parser have
been removed. This implies that autfilt can input a mix of HOA,
never claims, and LBTT automata. ltlcross also uses the same
parser for all these outputs, and the old %T and %N specifiers
have been deprecated and replaced by %O (for output).
- While not all algorithms in the library are able to work with
any acceptance condition supported by the HOA format, the
following two new functions mitigate that:
- remove_fin() takes a TωA whose acceptance condition uses the
Fin(x) primitive, and produces an equivalent TωA without Fin(x):
i.e., the output acceptance is a disjunction of generalized
Büchi acceptance. This type of acceptance is supported by
SCC-based emptiness-check, for instance.
- similarly, to_tgba() converts any TωA into an automaton with
transition-based generalized Büchi acceptance.
- randomize() is a new algorithm that randomly reorders the states
and transitions of an automaton. It can be used from the
command-line using "autfilt --randomize".
- the interface in iface/dve2 has been renamed to iface/ltsmin
because it can now interface the dynamic libraries created
either by Divine (as patched by the LTSmin group) or by
Spins (the LTSmin compiler for Promela).
- LTL/PSL formulas can include /* comments */.
- PSL SEREs support a new operator [:*i..j], the iterated fusion.
[:*i..j] is to the fusion operator ':' what [*i..j] is to the
concatenation operator ';'. For instance (a*;b)[:*3] is the
same as (a*;b):(a*;b):(a*;b). The operator [:+] is syntactic
sugar for [:*1..], and corresponds to the operator ⊕ introduced
by Dax et al. (ATVA'09).
- GraphViz output now uses a horizontal layout by default, and
also uses circular states (unless the automaton has more than 100
states, or uses named-states). The --dot option of the various
command-line tools takes an optional parameter to fine-tune the
GraphViz output (including vertical layout, forced circular or
elliptic states, named automata, SCC information, ordered
transitions, and different ways to colorize the acceptance
sets). The environment variables SPOT_DOTDEFAULT and
SPOT_DOTEXTRA can also be used to respectively provide a default
argument to --dot, and add extra attributes to the output graph.
- Never claims can now be output in the style used by Spin since
version 6.2.4 (i.e., using do..od instead of if..fi, and with
atomic statements for terminal acceptance). The default output
is still the old one for compatibility with existing tools. The
new style can be requested from command-line tools using option
--spin=6 (or -s6 for short).
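A sketch, assuming Spot's ltl2tgba is installed:

```shell
# Output a never claim in the new Spin 6 style (do..od instead of
# if..fi) rather than the default, old-style output.
ltl2tgba -s6 'a U b'
```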
- Support for building unambiguous automata. ltl_to_tgba() has a
new option to produce unambiguous TGBA (used by ltl2tgba -U as
discussed above). The function is_unambiguous() will check
whether an automaton is unambiguous, and this is used by the
--check option of the command-line tools.
- The SAT-based minimization algorithm for deterministic automata
has been updated to work with ω-automata with any acceptance condition.
The input and the output acceptance can be different, so for
instance it is possible to create a minimal deterministic
Streett automaton starting from a deterministic Rabin automaton.
This functionality is available via autfilt's --sat-minimize
option. See doc/userdoc/satmin.html for details.
- The on-line interface at http://spot.lrde.epita.fr/trans.html
can be used to check stutter-invariance of any LTL/PSL formula.
- The on-line interface will work around atomic propositions not
supported by ltl3ba. (E.g. you can now translate F(A) or
G("foo < bar").)
* Noteworthy code changes
- Boost is not used anymore.
- Automata are now manipulated exclusively via shared pointers.
- Most of what was called tgba_something is now called
twa_something, unless it is really meant to work only for TGBA.
This includes functions, classes, file, and directory names.
For instance the class tgba originally defined in tgba/tgba.hh,
has been replaced by the class twa defined in twa/twa.hh.
- the tgba_explicit class has been completely replaced by a more
efficient twa_graph class. Many of the algorithms that were
written against the abstract tgba (now twa) interface have been
rewritten using twa_graph instances as input and output, making
the code a lot simpler.
- The tgba_succ_iterator (now twa_succ_iterator) interface has
changed. Methods next() and first() now return a bool
indicating whether the current iteration is valid.
- The twa base class has a new method, release_iter(), that can
be called to give a used iterator back to its automaton. This
iterator is then stored in a protected member, iter_cache_, and
all implementations of succ_iter() can be updated to recycle
iter_cache_ (if available) instead of allocating a new iterator.
- The tgba (now called twa) base class has a new method, succ(),
to support C++11's range-based for loop, and hide all the above
details.
Instead of the following syntax:
    tgba_succ_iterator* i = aut->succ_iter(s);
    for (i->first(); !i->done(); i->next())
      // use i->current_state()
    delete i;
We now prefer:
    for (auto i: aut->succ(s))
      // use i->current_state()
And the above syntax is really just syntactic sugar for:
    twa_succ_iterator* i = aut->succ_iter(s);
    if (i->first())
      do
        // use i->current_state()
      while (i->next());
    aut->release_iter(i); // allow the automaton to recycle the iterator
where the virtual calls to done() and delete have been avoided.
- twa::succ_iter() now takes only one argument. The optional
global_state and global_automaton arguments have been removed.
- The following methods have been removed from the TGBA interface and
all their subclasses:
- tgba::all_acceptance_conditions() // use acc().accepting(...)
- tgba::number_of_acceptance_conditions() // use acc().num_sets()
- Membership to acceptance sets is now stored using bit sets,
currently limited to 32 bits. Each TωA has an acc() method that
returns a reference to an acceptance object (of type
spot::acc_cond), able to operate on acceptance marks.
Instead of writing code like
    i->current_acceptance_conditions() == aut->all_acceptance_conditions()
we now write
    aut->acc().accepting(i->acc())
(Note that for accepting(x) to return something meaningful, x
should be a set of acceptance sets visited infinitely often. So let's
imagine that in the above example i is looking at a self-loop.)
- All functions used for printing LTL/PSL formulas or automata
have been renamed to print_something(). Likewise the various
parsers should be called parse_something() (they haven't yet
all been renamed).
- All test suites under src/ have been merged into a single one in
src/tests/. The testing tool that was called
src/tgbatest/ltl2tgba has been renamed as src/tests/ikwiad
(short for "I Know What I Am Doing") so that users should be
less tempted to use it instead of src/bin/ltl2tgba.
* Removed features
- The long unused interface to GreatSPN (or rather, interface to
a non-public, customized version of GreatSPN) has been removed.
As a consequence, we could get rid of a lot of cruft in the
implementation of Couvreur's FM'99 emptiness check.
- Support for symbolic, BDD-encoded TGBAs has been removed. This
includes the tgba_bdd_concrete class and associated supporting
classes, as well as the ltl_to_tgba_lacim() LTL translation
algorithm. Historically, this TGBA implementation and LTL
translation were the first to be implemented in Spot (by
mistake!) and this resulted in many bad design decisions. In
practice they were of no use as we only work with explicit
automata (i.e. not symbolic) in Spot, and those produced by
these techniques are simply too big.
- All support for ELTL, i.e., LTL logic extended with operators
represented by automata has been removed. It was never used in
practice because it had no practical user interface, and the
translation was a purely BDD-based encoding producing huge
automata (when viewed explicitly), using the above, no-longer
supported tgba_bdd_concrete class.
- Our implementation of the Kupferman-Vardi complementation has
been removed: it was unused in practice because of the size of
the automata built, and it did not survive the conversion of
acceptance sets from BDDs to bitsets.
- The unused implementation of state-based alternating Büchi
automata has been removed.
- Input and output in the old, Spot-specific, text format for
TGBA, has been fully removed. We now use HOA everywhere. (In
case you have a file in this format, install Spot 1.2.6 and use
"src/tgbatest/ltl2tgba -H -X file" to convert the file to HOA.)
I am happy to announce that the following paper has been accepted to the
13th International Conference on Document Analysis and Recognition (ICDAR)
that will take place in Gammarth, Tunisia, on August 23 - 26, 2015:
Using histogram representation and Earth Mover's Distance
as an evaluation tool for text detection
Stefania Calarasanu (1), Jonathan Fabrizio (1) and Séverine Dubuisson (2)
(1) LRDE-EPITA, 14-16, rue Voltaire, F-94276, Le Kremlin-Bicêtre, France
(2) CNRS, UMR 7222, ISIR, F-75005, Paris, France
In the context of text detection evaluation, it is essential to use
protocols that are capable of describing both the quality and the
quantity aspects of detection results. In this paper we propose
a novel visual representation and evaluation tool that captures the
whole nature of a detector by using histograms. First, two histograms
(coverage and accuracy) are generated to visualize the different
characteristics of a detector. Secondly, we compare these two
histograms to a so-called optimal one to compute representative
and comparable scores. To do so, we introduce the use of the
Earth Mover's Distance as a reliable evaluation tool to estimate recall
and precision scores. Results obtained on the ICDAR 2013 dataset
show that this method intuitively characterizes the accuracy of a
text detector and gives at a glance various useful characteristics
of the analyzed algorithm.
Ana Stefania Calarasanu
EPITA Research and Development Laboratory (LRDE)
14-16 rue Voltaire, 94276 Le Kremlin-Bicêtre CEDEX, France
Annonce mailing list