Dear Sir or Madam,
I have recently been following your Spot tool and trying to use it to experiment with some of my ideas on LTL-to-Büchi translation. After reading the manuals, I would appreciate it if you
could answer the following questions:
1. Can I use your LTL parser source code directly to implement my ideas?
2. Does Spot include any LTL formula generators? Or are there any test suites available for me to use?
3. Does Spot provide an interface that can generate Büchi automata from a list of formulas
and write them to a file in some format (e.g., Promela or XML)? (See the sketch after this list for the kind of thing I mean.)
I would appreciate such a feature very much, since I want to compare my results with those from Spot. ^^
4. If I come up with a new translation, would it be possible for me to integrate it into Spot? (I ask this because I understand Spot is open source. :) )
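To make question 3 concrete, here is a rough sketch of the kind of interface I have in mind, written against what I assume the Spot C++ API looks like (the header paths and the exact signatures of spot::ltl::parse, spot::ltl_to_tgba_fm, and spot::dotty_reachable are guesses on my part, so please treat it as an illustration rather than tested code):

  #include <iostream>
  #include <string>
  #include "ltlparse/public.hh"       // header paths are guesses
  #include "tgba/bdddict.hh"
  #include "tgbaalgos/ltl2tgba_fm.hh"
  #include "tgbaalgos/dotty.hh"

  int main()
  {
    spot::bdd_dict dict;
    std::string line;
    // Read one LTL formula per line and print each automaton in dot format.
    while (std::getline(std::cin, line))
      {
        spot::ltl::parse_error_list pel;
        const spot::ltl::formula* f = spot::ltl::parse(line, pel);
        if (!f || !pel.empty())
          continue;                   // skip formulas that fail to parse
        spot::tgba* aut = spot::ltl_to_tgba_fm(f, &dict);
        spot::dotty_reachable(std::cout, aut);
        delete aut;
        f->destroy();                 // formula memory management may differ by release
      }
    return 0;
  }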
Thank you very much, and I hope to hear from you soon!
Best
Jianwen Li
Dear Sir or Madam,
While using the Spot tool online recently, I noticed that the
resulting Büchi automaton produced by your tool for a formula such as G(F a && F b) is not
a traditional one. Can you explain why? Thank you very much!
Best
Jianwen Li
Hi Alexandre,
I have some code that uses the standard pattern for checking language
inclusion between two automata, using tgba_safra_complement. The
automata have a small number of states (< 100) and one acceptance condition.
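(For clarity, by the standard pattern I mean checking that the product of the first automaton with the complement of the second accepts no word. A minimal sketch of what my code does, assuming the entry points tgba_safra_complement, tgba_product, and couvreur99, with guessed header paths; my real code differs in the details:)

  #include "tgba/tgbaproduct.hh"              // header paths are guesses
  #include "tgbaalgos/tgbasafracomplement.hh"
  #include "tgbaalgos/emptiness.hh"
  #include "tgbaalgos/gtec/gtec.hh"

  // L(a) is included in L(b) iff the product of a with the complement
  // of b is empty.  Both automata are assumed to share one bdd_dict.
  bool language_included(const spot::tgba* a, const spot::tgba* b)
  {
    spot::tgba* neg_b = new spot::tgba_safra_complement(b);
    spot::tgba* prod = new spot::tgba_product(a, neg_b);
    spot::emptiness_check* ec = spot::couvreur99(prod);
    spot::emptiness_check_result* res = ec->check();
    bool included = (res == 0);               // empty product => inclusion holds
    delete res;
    delete ec;
    delete prod;
    delete neg_b;
    return included;
  }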
Running my code under valgrind for time profiling reveals a curious time
distribution: 30% of the execution time is spent in malloc and free
(20% and 10%, respectively). The call hierarchy is as follows:
malloc/free
<- operator new/delete
<- tgba_tba_proxy::succ_iter
<- tgba_sba_proxy ctor
<- safra_determinisation::create_safra_automaton
<- tgba_safra_complement
I have eliminated the contribution of my code to the run time of malloc
and free (basically by avoiding creating and destroying the same states
repeatedly through memoizing).
Is there any workaround to this, i.e., could better or different memory
management eliminate this 30% overhead? Are other complementation
algorithms better suited to this use case?
Best regards
Nikos
PS: What is the preferred way of citing Spot in a publication?
Hi,
I'm trying to learn Spot, and I want to run an emptiness check on a
product of two automata that I will be creating by subclassing tgba.
The two automata have overlapping, but different, signatures, i.e., sets of atomic
propositions. If I understand correctly, I cannot achieve this by using
two different bdd_dicts, as the two automata passed to the product
object must share the same bdd_dict.
How does one deal with this problem in Spot?
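(To show what I have in mind, here is a rough sketch of the setup; my_tgba stands for my own tgba subclass, the constructor taking a bdd_dict* is my own invention, and the header paths are guesses. The point is that both operands would have to be built against a single shared dictionary:)

  #include <iostream>
  #include "tgba/bdddict.hh"          // header paths are guesses
  #include "tgba/tgbaproduct.hh"
  #include "tgbaalgos/gtec/gtec.hh"
  #include "my_tgba.hh"               // my own tgba subclass (hypothetical)

  int main()
  {
    spot::bdd_dict dict;
    // Both operands register their atomic propositions in the same
    // dictionary; as I understand it, tgba_product requires this.
    my_tgba left(&dict);              // hypothetical constructors
    my_tgba right(&dict);
    spot::tgba_product prod(&left, &right);
    spot::emptiness_check* ec = spot::couvreur99(&prod);
    spot::emptiness_check_result* res = ec->check();
    std::cout << (res ? "non-empty" : "empty") << std::endl;
    delete res;
    delete ec;
    return 0;
  }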
Thanks
Nikos
Hi,
I've been looking for a library that would allow me to provide a (lazy)
interface for enumerating the states of an automaton and doing
on-the-fly model checking. Spot looks like exactly what I need, which
is great; I couldn't find any other tool supporting custom model
checking like this.
What I need on top of this is the following: when the emptiness check fails, I'd
like to modify the automaton using the information provided by the check
and re-run the check. Changes to the automaton will be localized, so
destroying and rebuilding the objects related to the emptiness check
seems unnecessarily expensive. Is there any way around this?
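(For concreteness, this is roughly the loop I have now. It is only a sketch: couvreur99 and emptiness_check_result::accepting_run are my guesses at the right entry points, the header paths are guesses too, and refine() stands for my own localized modification step. The check objects are rebuilt from scratch on every iteration, which is what I would like to avoid:)

  #include "tgbaalgos/emptiness.hh"   // header paths are guesses
  #include "tgbaalgos/gtec/gtec.hh"
  #include "my_tgba.hh"               // my own on-the-fly tgba subclass (hypothetical)

  void refine_until_empty(my_tgba* aut)
  {
    for (;;)
      {
        // Rebuilt from scratch on every iteration.
        spot::emptiness_check* ec = spot::couvreur99(aut);
        spot::emptiness_check_result* res = ec->check();
        if (!res)
          {
            delete ec;
            return;                   // language is empty: done
          }
        spot::tgba_run* run = res->accepting_run();  // counterexample, if the algorithm provides one
        aut->refine(run);             // my own, localized modification (hypothetical)
        delete run;
        delete res;
        delete ec;
      }
  }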
Best regards
Nikos Gorogiannis
Dear SPOT developers,
I am doing research on the translation of LTL to BA, and I am using the tool
"ltl2tgba" included in the SPOT 0.7.1 distribution as a reference.
While comparing "ltl2tgba" to some other translations using the "lbtt" tool, I
have noticed that it sometimes produces different results (regarding the size of
the produced automata) without any change of parameters. It appears that for some
formulae it nondeterministically produces different automata. I attach an example
in which "ltl2tgba" was executed four times in a row, each time producing a different
automaton. The first three automata differ only in the names of states or the
ordering of transitions, while the last automaton has one state fewer than the
others. However, all automata seem to be correct and mutually equivalent.
After a closer examination, it seems that this behavior is caused by the -r7
option enabling the tau03 formula reductions. Similar behavior can be observed with the
-r5 or -r6 options as well. The question is whether such nondeterministic
behavior is expected when using language-containment formula reductions, or whether it
is caused by an implementation bug.
Thank you for your help.
Best regards,
Tomas Babiak
Hello,
I am HONG Silien; I am working with Yann Thierry-Mieg to integrate
SPOT into our continuous integration server.
Yann told me that there was a "stable" branch that would let me
avoid the compilation errors caused by warnings.
Unfortunately it still does not compile; here is the error:
/bin/sh ../../libtool --tag=CXX --mode=compile g++-4.2
-DPACKAGE_NAME=\"spot\" -DPACKAGE_TARNAME=\"spot\"
-DPACKAGE_VERSION=\"0.6\" -DPACKAGE_STRING=\"spot\ 0.6\"
-DPACKAGE_BUGREPORT=\"spot(a)lrde.epita.fr\" -DPACKAGE_URL=\"\"
-DPACKAGE=\"spot\" -DVERSION=\"0.6\" -DYYTEXT_POINTER=1 -DSTDC_HEADERS=1
-DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1
-DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1
-DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_SRAND48=1
-DHAVE_DRAND48=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -I./..
-I../../buddy/src -I/Users/hong/usr/include -DNDEBUG -W -Wall
-Wcast-align -Wpointer-arith -Wwrite-strings -Wcast-qual
-DXTSTRINGDEFINES -Werror -g -O3 -ffast-math -fstrict-aliasing
-fomit-frame-pointer -MT tgbasafracomplement.lo -MD -MP -MF
.deps/tgbasafracomplement.Tpo -c -o tgbasafracomplement.lo
tgbasafracomplement.cc
libtool: compile: g++-4.2 -DPACKAGE_NAME=\"spot\"
-DPACKAGE_TARNAME=\"spot\" -DPACKAGE_VERSION=\"0.6\"
"-DPACKAGE_STRING=\"spot 0.6\""
-DPACKAGE_BUGREPORT=\"spot(a)lrde.epita.fr\" -DPACKAGE_URL=\"\"
-DPACKAGE=\"spot\" -DVERSION=\"0.6\" -DYYTEXT_POINTER=1 -DSTDC_HEADERS=1
-DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1
-DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1
-DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_SRAND48=1
-DHAVE_DRAND48=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -I./..
-I../../buddy/src -I/Users/hong/usr/include -DNDEBUG -W -Wall
-Wcast-align -Wpointer-arith -Wwrite-strings -Wcast-qual
-DXTSTRINGDEFINES -Werror -g -O3 -ffast-math -fstrict-aliasing
-fomit-frame-pointer -MT tgbasafracomplement.lo -MD -MP -MF
.deps/tgbasafracomplement.Tpo -c tgbasafracomplement.cc -fno-common
-DPIC -o .libs/tgbasafracomplement.o
cc1plus: warnings being treated as errors
tgbasafracomplement.cc:57: warning: ‘spot::safra_tree_automaton’ has a
field ‘spot::safra_tree_automaton::automaton’ whose type uses the
anonymous namespace
tgbasafracomplement.cc:57: warning: ‘spot::safra_tree_automaton’ has a
field ‘spot::safra_tree_automaton::initial_state’ whose type uses the
anonymous namespace
I do not know Git very well, and I hope I compiled the right
branch; here is the command I typed to fetch the source:
git clone --branch stable git://git.lrde.epita.fr/spot
A little help would be welcome...
Thanks in advance.
Regards,
Hi Ruediger,
>>> "e" == ehlers <ehlers(a)alan.cs.uni-sb.de> writes:
e> Dear all,
e> included in the SPOT distribution is the tool "ltl2tgba" that converts LTL
e> formulas to equivalent Büchi automata. For the research I am currently
e> working on, I need to convert LTL formulas to as-small-as-possible Büchi
e> automata (that are not transition-based, so I am using the tool with the
e> "-N" parameter to obtain SPIN never-claims). So far, for all my
e> benchmarks, I am calling the tool with the following parameter
e> combinations:
e> ltl2tgba -N -R3 -r4
e> ltl2tgba -N -R3 -r7
e> ltl2tgba -N -R3 -r4
e> ltl2tgba -N -R3 -r7
e> ltl2tgba -N -R3 -r4
e> ltl2tgba -N -R3 -r7
I presume you forgot to edit some of these lines? From the
rest of your mail, you might be using -R2q and -R2t also.
Have you seen cases where -r4 produces a smaller automaton than
-r7? I would think that, given that -r7 includes -r4, it should
always be better. Of course these options only try to reduce
the LTL formulae, not the resulting automata. However, if you
see cases where -r7 reduces the formula in a way that the
translated automaton is bigger than the one obtained after -r4, I
would be interested to study them.
e> After that, I take the smallest automata obtained. So far, I am only
e> interested in the number of states.
e> Now my question is: Given that I have enough time, which other parameter
e> combinations might lead to automata that are smaller than any
e> automaton obtained when using the parameter combinations given above? As
e> an example, can it be the case that "-R2q" is sometimes better than "-R3"?
These switches are not exclusive. I would always use -R3,
because removing dead states and useless acceptance conditions
(-R3 does that) will improve the efficiency of other algorithms
such as delayed simulations.
You could try the following combinations
ltl2tgba -f -N -R3 -R1q -R1t -r7
ltl2tgba -f -N -R3 -R2q -R2t -r7
ltl2tgba -f -N -R3 -R1q -R1t -R2q -R2t -r7
But I think the latter should always return the smallest
automaton (when it is correct...).
Unfortunately these -R1* and -R2* options are not always correct
(this is the reason they are not available on the on-line
translator script). They were implemented by a student during
his master's thesis, but he struggled to adapt Etessami's paper
to transition-based automata. I know of an easy case where the
-R2* simplification is clearly wrong:
./ltl2tgba -f -R3 -R2t 'FG a'
Apparently the structure of the automata generated by the "-f"
translation is such that -R2* can be wrong only when it is
mixed with -R3. But the bug really is in -R2*.
I'm a bit more confident about -R1* for which I'm not aware of
any problem (except that it does not work for generalized
automata, but that should not be an issue for you). Still,
personally I would only use -R3.
Rewriting -R1* and -R2* has been on the TODO list for a while. I
have seen some work, by Juvekar and Piterman, that generalizes
the technique to generalized Büchi automata: it would probably
be a better fit for Spot.
So to summarize, I think you should be safe with
ltl2tgba -f -N -R3 -R1q -R1t -r7
and I suggest you use -R2* only if you can check that the
resulting automaton is correct.
Although I think "-f" will create fewer states than the other
translations, you can also try to replace it with one of
-f
-f -x
-f -x -p
-f -L
-f -L -x
-f -L -x -p
-l
-taa -c
(If you encounter a case where -l or -taa is better than -f,
please let me know!)
--
Alexandre Duret-Lutz