Dear spot maintainers,
we have been using SPOT through its web site to build monitors from LTL
formulas (http://spot.lip6.fr/ltl2tgba.html).
Now, with the new version, translating to monitors always gives an error
like:
Some maps are not empty
Variable Map:
 0 Free
 1 Var[a] x1 { 0xa001d10 }
Anonymous lists: [0]
Free list: (0, 1)
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
Could you please fix it?
Thanks
Angelo Gargantini
University of Bergamo
Italy
Dear Sir:
I want to ask whether SPOT integrates an algorithm that can check the
emptiness of the generated automata. I would like to use SPOT for LTL
satisfiability checking; if it does not, I will have to combine SPOT
with SPIN to achieve that.
Thanks very much!
--
Best Regards
Jianwen
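(For reference: LTL satisfiability can be checked with Spot alone, without
SPIN, by translating the formula itself into a TGBA and running one of
Spot's emptiness checks on it; the formula is satisfiable iff the
automaton's language is non-empty. The following is only a rough sketch
against the 0.x C++ API; the header paths and exact signatures are
assumptions to be verified against the actual sources.)

  // Rough sketch: LTL satisfiability via translation + emptiness check.
  // Header paths and signatures are assumptions (Spot 0.x era).
  #include <iostream>
  #include "ltlparse/public.hh"
  #include "tgba/bdddict.hh"
  #include "tgbaalgos/ltl2tgba_fm.hh"
  #include "tgbaalgos/gtec/gtec.hh"

  int main()
  {
    spot::ltl::parse_error_list pel;
    const spot::ltl::formula* f = spot::ltl::parse("G(F a && F b)", pel);
    spot::bdd_dict* dict = new spot::bdd_dict();
    // Translate the formula itself (not its negation): it is satisfiable
    // iff the language of the resulting TGBA is non-empty.
    const spot::tgba* aut = spot::ltl_to_tgba_fm(f, dict);
    spot::emptiness_check* ec = spot::couvreur99(aut);
    spot::emptiness_check_result* res = ec->check();
    std::cout << (res ? "satisfiable" : "unsatisfiable") << std::endl;
    delete res;
    delete ec;
    delete aut;
    f->destroy();
    delete dict;
  }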
On Tue, Jun 19, 2012 at 11:48 AM, Agrawal, Akshay
<akshay.agrawal(a)vanderbilt.edu> wrote:
> Dear Alexandre,
> Thank you for the prompt reply. So I think my perception of TGBA was
> wrong. I did not learn about TGBA from SPOT's wiki; I learnt it from
> some papers and Wikipedia, but I think I formed a wrong impression.
> So, after your explanation, I believe that a TGBA for G(p->Xr) is a
> special case of a TGBA in which all transitions are accepting?
Yes. Any safety formula can be translated into a TGBA without
acceptance set.
> Also, I need to convert these TGBAs generated by SPOT into C code,
> so I wanted to know how Acc[r] is logically read. In complex TGBAs
> there are sometimes several such accepting sets, where we may have
> Acc[r] and Acc[r|q] or something similar. So how is the Boolean
> condition in the brackets of Acc[] read, and what exactly does it
> mean? I think this will help me to get C code out of these TGBAs.
You should regard these just as unique strings used to name the
acceptance conditions. They could be numbers; it would not change the
language. When evaluating an ω-word on such an automaton, you want to
see each of these strings infinitely often. The fact that they denote
subformulae is an implementation detail of Spot's translation.
During the translation of LTL into TGBA, the strings actually denote
subformulae that have been promised to hold eventually. (The
transitions that don't have Acc[φ] are the transitions that promise to
see φ later.) See
http://www.lrde.epita.fr/~adl/dl/adl/duret.11.vecos.pdf if you want
more details.
However, you should not rely on this second meaning, because some
post-processing (like degeneralization or WDBA-minimization) might
introduce Acc[1]. This Acc[1] is also used during PSL translation.
Obviously it makes no sense to promise "true", since it is always true :-)
If you are confused, just forget about the previous two paragraphs.
Unless you want to change the translation algorithm, there is no need
to know what the argument of Acc[] means.
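To make the "seen infinitely often" reading concrete, here is a rough
sketch of how the acceptance sets of transitions can be inspected through
the abstract tgba interface (succ_iter(), current_acceptance_conditions());
the names and header path are from memory of the 0.x API and may differ.

  // Sketch (0.x interface, names from memory): walk the outgoing
  // transitions of a state and look at their acceptance sets.
  #include <bdd.h>
  #include "tgba/tgba.hh"

  // Count how many outgoing transitions of 's' belong to at least one
  // acceptance set.
  int count_accepting_transitions(const spot::tgba* aut,
                                  const spot::state* s)
  {
    int n = 0;
    spot::tgba_succ_iterator* it = aut->succ_iter(s);
    for (it->first(); !it->done(); it->next())
      {
        // it->current_condition() would give the guard of the transition
        // (a BDD over the atomic propositions).
        // The acceptance sets (e.g. Acc[r]) are also encoded as a BDD;
        // bddfalse encodes the empty set.  An accepting run must visit
        // each acceptance condition of the automaton infinitely often.
        if (it->current_acceptance_conditions() != bddfalse)
          ++n;
      }
    delete it;
    return n;
  }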
> Does SPOT produce C code, or some other kind of executable monitor
> code, for the generated TGBAs?
Spot does not generate C.
What we call monitor (in Spot at least) is a finite automaton that
accepts all finite prefixes of the executions that satisfy some
formula. Spot can generate a deterministic monitor for any LTL
formula, and will present it as a TGBA without acceptance condition.
See the "monitor" output on the online translator.
--
Alexandre Duret-Lutz
Dear sir:
I am trying to extend Spot by adding a new LTL-to-BA translation
algorithm. The basic idea is that I extend the formula structure in Spot
so that I can reuse the parser for LTL formulas.
I have now finished the code and want to compile it together with the
code in Spot, but I am confused about how to achieve this. Does Spot
provide any interfaces that can help someone extend it?
Thanks very much for your attention; I look forward to your reply.
Best
Jianwen
Hi,
I am a grad student at Vanderbilt University, TN, USA, working on formal
verification of software systems. I have been using the Spot library and
your site and wanted to know: if a TGBA (transition-based generalized
Büchi automaton) has all transitions as accepting transitions by
definition, then what is this {Acc[r]} condition that explicitly appears
on the automaton when I write the LTL formula F r? And why don't I get
any such thing when I use the LTL formula X r?
I will appreciate your reply.
Thanks in advance,
Akshay Agrawal
Dear sir:
I have recently been trying to implement an algorithm that translates
LTL formulas into Büchi automata within the SPOT framework. While
reading the source code of SPOT I was confused about the structure of
the automaton.
How do you store the transitions and states of an automaton such as
tgba? I am sorry, but I cannot follow you, since I do not see any
structure storing them where the tgba class is defined (tgba.hh,
tgba.cc).
Thank you for your attention.
Best
Jianwen
We are pleased to announce the release of Spot 0.9.
Spot is a model-checking library developed collaboratively by LRDE and
LIP6. It provides algorithms and data structures to implement the
automata-theoretic approach to LTL model checking.
You can find the new release here:
http://spot.lip6.fr/dl/spot-0.9.tar.gz (11MB)
The major news items in this release are all related to the
logic-to-Büchi-automaton translation engine:
1. The linear-time fragment of PSL is supported.
This first implementation is enough to experiment with the
logic and will be improved in future versions.
(Feedback most welcome.)
2. The LTL simplification code has been overhauled and
augmented with new rules.
3. A new simulation-based post-processing can be used
to simplify the TGBA constructed.
For more details about the first two points (supported operators, and
rewritings performed), please read the file doc/tl/tl.pdf included in
spot-0.9.tar.gz.
A consequence of the last two points is that Spot 0.9 is faster than
Spot 0.8.3 and can produce even smaller automata. For some
comparative benchmarks, please download
http://spot.lip6.fr/dl/bench-0.9.pdf
Please direct any feedback and questions to <spot(a)lrde.epita.fr>.
(Subscribe at https://www.lrde.epita.fr/mailman/listinfo/spot if you wish.)
A more detailed list of new features follows.
New in spot 0.9 (2012-05-09):
* New features:
- Operators from the linear fragment of PSL are supported. This
basically extends LTL with Sequential Extended Regular
Expressions (SERE), and a couple of operators to bridge SERE and
LTL. See doc/tl/tl.pdf for the list of operators and their
semantics.
- Formula rewritings have been completely revamped, and augmented
with rules for PSL operators (and some new LTL rules as well).
See doc/tl/tl.pdf for the list of the rewritings implemented.
- Some of these rewritings may produce larger formulas
(for instance rewriting "{a;b;c}" into "a & X(b & Xc)");
they may be explicitly disabled with a new option.
- The src/ltltest/randltl tool can now generate random SEREs
and random PSL formulae.
- Only one translator (ltl2tgba_fm) has been augmented to
translate the new SERE and PSL operators. The internal
translation from SERE to DFA is likely to be rewritten in a
future version.
- A new function, length_boolone(), computes the size of an
LTL/PSL formula while considering that any Boolean term has
length 1.
- The LTL/PSL parser recognizes some UTF-8 characters (like ◇ or
∧) as operators, and some output routines now have a UTF-8
output mode. Tools like randltl and ltl2tgba have gained an -8
option to enable such output. See doc/tl/tl.pdf for the list
of recognized codepoints.
- A new direct simulation reduction has been implemented. It
works directly on TGBAs. It is in src/tgbaalgos/simulation.hh,
and it can be tested via ltl2tgba's -RDS option.
- unabbreviate_wm() is a function that rewrites the W and M operators
of LTL formulae using R and U. This is called whenever we output
a formula in Spin syntax. By combining this with the aforementioned
PSL rewriting rules, many PSL formulae that use simple SEREs can be
converted into LTL formulae that can be fed to tools that only
understand U and R. The web interface will let you do this.
- changes to the on-line translator:
+ SVG output is available
+ can display some properties of a formula
+ new options for direct simulation, larger rewritings, and
utf-8 output
- configure --without-included-lbtt will prevent LBTT from being
configured and built. This helps on systems (such as MinGW)
where LBTT cannot be built. The test-suite will skip any
LBTT-based test if LBTT is missing.
* Interface changes:
- Operators ->, <->, U, W, R, and M are now parsed as
right-associative to better match the PSL standard.
- The constructors for temporal formulae will perform some trivial
simplifications based on associativity, commutativity,
idempotence, and neutral elements. See doc/tl/tl.pdf for the
list of such simplifications.
- Formula instances now have many methods to inspect their
properties (membership to syntactic classes, absence of X
operator, etc...) in constant time.
- LTL/PSL formulae are now handled everywhere as 'const formula*'
and not just 'formula*'. This reflects the true nature of these
(immutable) formula objects, and cleans up a lot of code.
Unfortunately, it is a backward-incompatible change: you may have
to add 'const' to a couple of lines in your code, and change
'ltl::const_visitor' into 'ltl::visitor' if you have written a
custom visitor.
- The new entry point for LTL/PSL simplifications is the function
ltl_simplifier::simplify() declared in src/ltlvisit/simplify.hh.
The ltl_simplifier class implements a cache.
Functions such as reduce() or reduce_tau03() are deprecated.
(See the usage sketch after this list of changes.)
- The old game-theory-based implementations for direct and delayed
simulation reductions have been removed. The old direct
simulation would only work on degeneralized automata, and yet
produced results inferior to the new direct simulation introduced
in this release. The implementation of delayed simulation was
unreliable. The function reduc_tgba_sim() has been kept
for compatibility (it calls the new direct simulation whatever
the type of simulation requested) and marked as deprecated.
ltl2tgba's options -Rd, -RD are gone. Options -R1t, -R1s,
-R2s, and -R2t are deprecated and all made equivalent to -RDS.
- The tgba_explicit hierarchy has been reorganized in order to
make room for sba_explicit classes that share most of the code.
The main consequence is that the tgba_explicit type no longer
exists. However the tgba_explicit_number,
tgba_explicit_formula, and tgba_explicit_string still do.
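As an illustration of the new simplification entry point, here is a
minimal sketch of parsing a formula and simplifying it with
ltl_simplifier::simplify(); the in-tree header paths and the input
formula are assumptions, and error handling is omitted.

  // Minimal sketch of ltl_simplifier::simplify() (header paths assumed).
  #include <iostream>
  #include "ltlparse/public.hh"
  #include "ltlvisit/simplify.hh"
  #include "ltlvisit/tostring.hh"

  int main()
  {
    spot::ltl::parse_error_list pel;
    const spot::ltl::formula* f = spot::ltl::parse("a U (b U a)", pel);
    spot::ltl::ltl_simplifier simp;   // caches intermediate results
    const spot::ltl::formula* g = simp.simplify(f);
    std::cout << spot::ltl::to_string(g) << std::endl;
    g->destroy();
    f->destroy();
  }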
Dear Sir (Madam):
I have recently been following your Spot tool and am trying to use it to
put some of my ideas on LTL-to-Büchi translation into practice. After
reading the manuals, I would appreciate it if you could answer these
questions:
1. Can I use your LTL parser source code directly to implement my ideas?
2. Does SPOT include any LTL generators? Or are there any test suites
available for me?
3. Does SPOT provide any interface that can generate Büchi automata from
a list of formulas and output them in some format (e.g. Promela or XML)
to a file? I would appreciate this feature very much, since I want to
compare my results with those from SPOT. ^^
4. If I have a new idea for a translation, is it possible for me to
integrate it into SPOT? (I ask this since I thought SPOT was open. :) )
Thank you very much; I hope for your reply!
Best
Jianwen Li
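(Regarding question 3 above, a rough sketch with the C++ API of this era
would be to parse each formula, translate it with ltl_to_tgba_fm(), and
write each automaton in dot format; Spot's textual outputs include dot
and Spin never claims rather than XML. The header paths, signatures, and
file names below are assumptions to check against the sources.)

  // Rough sketch (0.x API, names from memory): translate a list of
  // formulas and dump each resulting automaton to a dot file.
  #include <fstream>
  #include <sstream>
  #include <string>
  #include <vector>
  #include "ltlparse/public.hh"
  #include "tgba/bdddict.hh"
  #include "tgbaalgos/ltl2tgba_fm.hh"
  #include "tgbaalgos/dotty.hh"

  int main()
  {
    std::vector<std::string> formulas;
    formulas.push_back("G(F a && F b)");
    formulas.push_back("a U (b U a)");
    spot::bdd_dict* dict = new spot::bdd_dict();
    for (size_t i = 0; i < formulas.size(); ++i)
      {
        spot::ltl::parse_error_list pel;
        const spot::ltl::formula* f = spot::ltl::parse(formulas[i], pel);
        const spot::tgba* a = spot::ltl_to_tgba_fm(f, dict);
        std::ostringstream name;
        name << "automaton-" << i << ".dot";
        std::ofstream out(name.str().c_str());
        spot::dotty_reachable(out, a);   // write the automaton in dot format
        delete a;
        f->destroy();
      }
    delete dict;
  }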
Dear Sir(Madam):
When I recently used the SPOT tool online, I found that the resulting
Büchi automaton for a formula such as G(F a && F b) is not a
traditional one. Can you explain why? Thank you very much!
Best
Jianwen Li
Hi Alexandre,
I have some code that uses the standard pattern for checking language
inclusion between two automata, using tgba_safra_complementation. The
automata have a small number of states (<100) and one acceptance condition.
Running my code under valgrind for time profiling reveals a curious time
distribution: 30% of the execution time is spent in malloc and free
(20%/10% resp). The call hierarchy is as follows:
malloc/free
<- operator new/delete
<- tgba_tba_proxy::succ_iter
<- tgba_sba_proxy ctor
<- safra_determinisation::create_safra_automaton
<- tgba_safra_complement
I have eliminated the contribution of my code to the run time of malloc
and free (basically by avoiding creating and destroying the same states
repeatedly through memoizing).
Is there any workaround for this, i.e., could better/different memory
management eliminate this 30% overhead? Are other complementation
algorithms better suited to this use-case?
Best regards
Nikos
PS What is the preferred way of referencing Spot in a publication?
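(For context, the "standard pattern" mentioned above is usually the
following: L(A) is included in L(B) iff the product of A with the
complement of B has an empty language. Below is only a rough sketch; the
class names, header locations, and signatures are assumptions based on
the 0.x sources, e.g. tgba_safra_complement, tgba_product, couvreur99.)

  // Sketch of language inclusion L(a) <= L(b) via Safra complementation.
  // Class/header names are assumptions to check against the sources.
  #include "tgba/tgbaproduct.hh"
  #include "tgba/tgbasafracomplement.hh"   // assumed header location
  #include "tgbaalgos/gtec/gtec.hh"

  // Both automata are assumed to share the same bdd_dict.
  bool language_included(const spot::tgba* a, const spot::tgba* b)
  {
    // L(a) is included in L(b) iff L(a) inter complement(L(b)) is empty.
    spot::tgba_safra_complement comp_b(b);
    spot::tgba_product prod(a, &comp_b);
    spot::emptiness_check* ec = spot::couvreur99(&prod);
    spot::emptiness_check_result* res = ec->check();
    bool included = (res == 0);   // no accepting run => inclusion holds
    delete res;
    delete ec;
    return included;
  }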