
We are happy to announce the release of Spot 2.12. This is a
long-awaited major release containing new features developed in the
last 17 months (since the release of Spot 2.11), as well as fixes for
bugs discovered in the last 9 months (since Spot 2.11.6). Some of
these bug fixes are necessary for Spot to work correctly with newer
versions of Python, GCC, or Jupyter.

This release contains contributions by Philipp Schlehuber, Florian
Renkin, Jonah Romero, Pierre Ganty, and myself.

A list of user-visible changes is given at the end of this email.

You can find the new release here:
  http://www.lrde.epita.fr/dload/spot/spot-2.12.tar.gz

See https://spot.lre.epita.fr/ for documentation and installation
instructions. Please direct any feedback to <spot@lrde.epita.fr>.

⚠ Note that LRDE (EPITA's Research & Development Laboratory) has been
merged with another entity into something larger that is now called
LRE (EPITA's Research Laboratory). However, it will be a while before
we are able to replace all URLs ending in "lrde.epita.fr" by URLs
ending in "lre.epita.fr", so do not be surprised about the mix of
those.

New in spot 2.12 (2024-05-16)

Build:

- When Python bindings are enabled, Spot now requires Python 3.6 or
  later. Python 3.6 reached end-of-life in 2021, but is still used on
  CentOS 7 (which will reach end-of-support later in 2024).

Documentation:

- https://spot.lre.epita.fr/tut25.html is a new example showing how
  to print an automaton in the "BA format" (used by Rabit and other
  tools).

Command-line tools:

- In places that accept format strings with '%' sequences, like the
  options --stats, --name, or --output, the new '%l' can now be used
  to produce the 0-based serial number of the produced object. This
  differs from the existing '%L', which is usually related to the
  line number of the input (when that makes sense).
  For instance, to split a file that contains many automata into one
  file per automaton, do

    autfilt input.hoa -o output-%l.hoa

- For tools that produce automata, using -Hb or --hoa=b will produce
  an HOA file in which aliases are used to form a basis for the whole
  set of labels. Those aliases are only used when more than one
  atomic proposition is used (otherwise, an atomic proposition and
  its negation already form a basis). This can help reduce the size
  of large HOA files.

- autfilt learned --separate-edges, to split the labels of the
  automaton using a basis of disjoint labels. See
  https://spot.lre.epita.fr/tut25.html for some motivation.

- autfilt learned --reduce-acceptance-set/--enlarge-acceptance-set to
  heuristically remove/add unnecessary acceptance marks in Büchi
  automata. (Issue #570.)

- ltlfilt has a new option --relabel-overlapping-bool=abc|pnn that
  will replace Boolean subformulas by fresh atomic propositions even
  if those subformulas share atomic propositions.

- ltlsynt's --ins and --outs options will interpret any atomic
  proposition surrounded by '/' as a regular expression. For
  instance, with

    ltlsynt --ins='/^in/,/env/' --outs='/^out/,/control/' ...

  any atomic proposition that starts with 'in' or contains 'env' will
  be considered an input, and one that starts with 'out' or contains
  'control' will be considered an output. By default, if neither
  --ins nor --outs is given, ltlsynt behaves as if --ins='/^[iI]/'
  and --outs='/^[oO]/' were used.

- ltlsynt will now check for atomic propositions that always have the
  same polarity in the specification. When this happens, these output
  APs are replaced by true or false before running the synthesis
  pipeline, and the resulting game, Mealy machine, or Aiger circuit
  is eventually patched to include that constant output. This can be
  disabled with --polarity=no.

- ltlsynt will now check for atomic propositions that are specified
  as equivalent.
  When this is detected, equivalent atomic propositions are replaced
  by one representative of their class, to limit the number of
  different APs processed by the synthesis pipeline. The resulting
  game, Mealy machine, or Aiger circuit is eventually patched to
  include the removed APs. This optimization can be disabled with
  --global-equivalence=no. As an exception, an equivalence between
  input and output signals (such as G(in<->out)) will be ignored if
  ltlsynt is configured to output a game (because patching the game a
  posteriori is cumbersome if the equivalence concerns different
  players).

- genaut learned two new families of automata, --cycle-log-nba and
  --cycle-onehot-nba. They both create a cycle of n^2 states that can
  be reduced to a cycle of n states using reduction based on direct
  simulation.

Library:

- The following new trivial simplifications have been implemented for
  SEREs:
    f|[+] = [+]  if f rejects [*0]
    f|[*] = [*]  if f accepts [*0]
    f&&[+] = f   if f rejects [*0]
    b:b[*i..j] = b[*max(i,1)..j]
    b[*i..j]:b[*k..l] = b[*max(i,1)+max(k,1)-1..j+l-1]

- The following new trivial simplifications have been implemented for
  PSL operators:
    {[*]}[]->0 = 0
    {[*]}<>->1 = 1

- The HOA parser is a bit smarter when merging multiple initial
  states into a single initial state (Spot's automaton class supports
  only one): it now reuses the edges leaving initial states without
  incoming transitions.

- The automaton parser has a new option "drop_false_edges" to specify
  whether edges labeled by "false" should be ignored during parsing.
  It is enabled by default for backward compatibility.

- spot::bdd_to_cnf_formula() is a new variant of
  spot::bdd_to_formula() that converts a BDD into a CNF instead of a
  DNF.

- spot::relabel_bse() has been improved to better deal with more
  cases. For instance '(a & b & c) U (!c & d & e)' is now correctly
  reduced to '(p0 & p1) U (!p1 & p2)', and '((a & !b) | (a->b)) U c'
  now becomes '1 U c'. (Issue #540.)
- spot::relabel_overlapping_bse() is a new function that will replace
  Boolean subformulas by fresh atomic propositions even if those
  subformulas share atomic propositions.

- spot::translate() has a new -x option "relabel-overlap=M" that
  augments the existing "relabel-bool=N". By default, N=4 and M=8.
  When the formula to translate has more than N atomic propositions,
  relabel_bse() is first called to attempt to rename non-overlapping
  Boolean subexpressions (i.e., with no shared atomic proposition) in
  order to reduce the number of atomic propositions, a source of
  exponential explosion in several places of the translation
  pipeline. This relabel-bool optimization has existed since Spot
  2.4. The new feature is that if, after relabel-bool, the formula
  still has more than M atomic propositions, then spot::translate()
  now attempts to relabel Boolean subexpressions even if they have
  overlapping atomic propositions, in an attempt to reduce the number
  of atomic propositions even more. Doing so has the slightly
  unfortunate side effect of hindering some simplifications (because
  the new atomic propositions hide their interactions), but it
  usually incurs a large speedup. (See issues #500 and #536.) For
  instance, on Alexandre's laptop, running 'ltlsynt --tlsf
  SPIReadManag.tlsf --aiger' with Spot 2.11.6 used to produce an AIG
  circuit with 48 nodes in 36 seconds; it now produces an AIG circuit
  with 53 nodes in only 0.1 second.

- spot::contains_forq() is an implementation of the paper "FORQ-Based
  Language Inclusion Formal Testing" (Doveri, Ganty, Mazzocchi;
  CAV'22) contributed by Jonah Romero.

- spot::contains() still defaults to the complementation-based
  algorithm; however, by calling
  spot::containment_select_version("forq") or by setting
  SPOT_CONTAINMENT_CHECK=forq in the environment, the
  spot::contains_forq() implementation will be used instead when
  applicable (inclusion between Büchi automata). The above also
  affects autfilt's --included-in option.
- spot::reduce_buchi_acceptance_set_here() and
  spot::enlarge_buchi_acceptance_set_here() will heuristically
  remove/add unnecessary acceptance marks in Büchi automata.
  (Issue #570.)

- Given a twa_word_ptr W and a twa_ptr A both sharing the same
  alphabet, one can now write W->intersects(A) or A->intersects(W)
  instead of the longer W->as_automaton()->intersects(A) or
  A->intersects(W->as_automaton()).

- spot::scc_info has a new option PROCESS_UNREACHABLE_STATES that
  causes it to enumerate even unreachable SCCs.

- spot::realizability_simplifier is a new class that performs the
  removal of superfluous APs now done by ltlsynt (search for
  --polarity and --global-equivalence above).

- scc_filter used to reduce automata tagged with the inherently-weak
  property to weak Büchi automata (unless the acceptance was already
  t or co-Büchi). In cases where the input automaton had no rejecting
  cycle, the Büchi acceptance was overkill: scc_filter will now use
  "t" acceptance. This change may have unexpected consequences in
  code paths that assume that running scc_filter on a Büchi automaton
  will always return a Büchi automaton. For those, a "keep_one_color"
  option has been added to scc_filter.

- spot::separate_edges() and spot::edge_separator offer more ways to
  split labels. See https://spot.lrde.epita.fr/ipynb/split.html

- ltsmin's interface will now point to README.ltsmin in case an error
  is found while running divine or spins.

- spot::is_safety_automaton() was generalized to detect any automaton
  for which the acceptance could be changed to "t" without changing
  the language. In previous versions this function assumed weak
  automata as input, but the documentation did not reflect this.

- spot::remove_alternation() has a new argument to decide whether it
  should raise an exception or return nullptr if it requires more
  acceptance sets than supported.
- spot::dualize() learned a trick to be faster on states that have
  fewer outgoing edges than atomic propositions declared on the
  automaton. spot::remove_alternation(), spot::tgba_powerset(),
  simulation-based reductions, and spot::tgba_determinize() learned a
  similar trick, except it is not applied at the state level but when
  the entire automaton uses few distinct labels. These changes may
  speed up the processing of automata with many atomic propositions
  but few distinct labels. (Issues #566 and #568.)

Backward incompatibilities:

- spot::dualize() does not call cleanup_acceptance() anymore. This
  change ensures that the dual of a Büchi automaton will always be a
  co-Büchi automaton. Previously cleanup_acceptance(), which removes
  unused colors from the acceptance condition, was sometimes able to
  simplify co-Büchi to "t", causing surprises.

- Function minimize_obligation_garanteed_to_work() was renamed to
  minimize_obligation_guaranteed_to_work(). We believe this function
  is only used by Spot currently.

- Function split_independant_formulas() was renamed to
  split_independent_formulas(). We believe this function is only used
  by Spot currently.

Python:

- The spot.automata() and spot.automaton() functions now accept a
  drop_false_edges=False argument to disable the historical behavior
  of ignoring edges labeled by False.

- Calling aut.highlight_state(s, None) or aut.highlight_edge(e, None)
  may now be used to remove the highlighting color of some given
  state or edge. Use aut.remove_highlight_states() or
  aut.remove_highlight_edges() to remove all colors. (Issue #554.)

- Calling aut.get_highlight_state(s) or aut.get_highlight_edge(e)
  will return the highlight color of that state/edge, or None.

- Recent versions of Jupyter Notebook and Jupyter Lab started to
  render SVG elements using an <img...> tag to make it easier to
  copy/paste those images. This breaks several usages, including the
  possibility to have informative tooltips on states and edges (used
  in Spot). See the following issues for more details:
    https://github.com/jupyter/notebook/issues/7114
    https://github.com/jupyterlab/jupyterlab/issues/10464

  This version of Spot now declares its SVG outputs as HTML to
  prevent Jupyter from wrapping them in images.

Bugs fixed:

- tgba_determinize()'s use_simulation option would cause it to
  segfault on automata with more than 2^16 SCCs, due to overflows in
  the computation of indices in the reachability matrix for SCCs.
  (Issue #541.) This has been fixed by disabling the use_simulation
  optimization in this case.

- product_or_susp() and product_susp() would behave incorrectly in
  the presence of unsatisfiable or universal acceptance conditions
  that were not f or t. (Part of issue #546.)

- Several algorithms were incorrectly dealing with the "!weak"
  property. Because they (rightly) assumed that a weak input would
  produce a weak output, they also wrongly assumed that a non-weak
  input would produce a non-weak output. Unfortunately, removing
  states or edges in a non-weak automaton can lead to a weak
  automaton. The incorrect property led to issue #546. In addition to
  fixing several algorithms, product() and print_hoa() will now raise
  an exception if an automaton with t or f acceptance is declared
  !weak. (This cheap check will not catch all automata incorrectly
  labeled by !weak, but helps detect some issues nonetheless.)

- The automaton parser forgot to update the list of highlighted edges
  while dropping edges labeled by bddfalse. (Issue #548.)

- The comparison operators for acceptance conditions (==, !=) could
  fail to equate two "t" conditions, because we have two ways to
  represent "t": the empty condition, or the empty "Inf({})".

- Functions complement() and change_parity() could incorrectly read
  or write the unused edge #0. In the case of complement(), writing
  that edge was usually harmless.
  However, in some scenarios, complement() could need to stick a ⓪
  acceptance mark on edge #0, then the acceptance condition could be
  simplified to "t", and finally change_parity() could be confused to
  find such an accepting mark in an automaton that declares no
  colors, and perform some computation on that color that caused it
  to crash with a "Too many acceptance sets used" message.
  (Issue #552.)

- The detection of terminal automata did not exactly match the
  definition used in the HOA format. The definition of a terminal
  automaton is supposed to be:
    1. the automaton is weak,
    2. its accepting SCCs are complete,
    3. no accepting cycle can reach a rejecting cycle.
  However, the implementation actually replaced the last point by the
  following variant:
    3'. no accepting edge can reach a rejecting cycle.
  This caused issues in automata with t acceptance. Fixing the code
  by replacing 3' with 3 also made the third argument of
  is_terminal_automaton(), an optional Boolean indicating whether
  transitions between SCCs should be ignored when computing 3',
  completely obsolete. This third argument has been marked as
  deprecated. (Issue #553.)

- twa::defrag_states(), which is called for instance by
  purge_dead_states(), did not update the highlight-edges property.
  (Issue #555.)

- spot::minimize_obligation() will skip attempts to complement very
  weak automata when those would require too many acceptance sets.

- acd_transform() was not used by spot::postprocessor unless an
  option_map was passed. This was due to a bad default for the "acd"
  option: it defaulted to true when an option_map was given, and to
  false otherwise. This had no consequences on ltl2tgba/autfilt,
  where some option_map is always passed, but for instance the parity
  automata generated by spot.postprocessor in Python were not using
  ACD by default.

- Using spot::postprocessor to produce colored parity automata could
  fail to color some transient edges when the "acd" option was
  activated.
- The formula printer incorrectly replaced a SERE like "(!a)[*3];a"
  by "a[->]". The latter should only replace "(!a)[*];a".
  (Issue #559.)

- The configure script failed to detect the include path for Python
  3.12. (Issue #577.)

- Work around many failures caused by incorrect rounding of floating
  point values in the counting of transitions. (Issue #582.)

- Some incorrectly escaped strings in Python code were causing
  warnings with Python 3.12.