Dear SPOT developers,
I am doing research on the translation of LTL to BA, and I am using the tool "ltl2tgba" included in the SPOT 0.7.1 distribution as a reference. While comparing "ltl2tgba" to some other translators using the "lbtt" tool, I noticed that it sometimes produces different results (regarding the size of the produced automata) without any change of parameters. It appears that for some formulae it nondeterministically produces different automata. I attach an example where "ltl2tgba" was executed four times in a row, each time producing a different automaton. The first three automata differ only in the names of the states or the ordering of the transitions, while the last automaton has one state less than the others. However, all the automata seem to be correct and mutually equivalent.
After a closer examination, it seems that this behavior is caused by the -r7 option enabling tau03 formula reductions. Similar behavior can be observed with the -r5 or -r6 options as well. The question is whether such nondeterministic behavior is expected when using language-containment formula reductions, or whether it is caused by an implementation bug.
Thank you for your help.
Best regards, Tomas Babiak
Hi Tomáš!
Thank you for your email.
I noticed such nondeterministic behavior a couple of months ago while preparing a benchmark with lbtt for a talk [1]: the figures output by lbtt were not consistent between runs. I was in a rush at the time, so I didn't investigate, and then I simply forgot about it.
I've now looked into it. It turns out that the degeneralization algorithm used a std::map to merge the outgoing transitions sharing the same destination, and this std::map was then used to output these transitions. Unfortunately, the order in which the map stores these transitions is based on the memory address of the destination state. The -r7/-r6/-r5 options seem unrelated (I was puzzled at first, because they do not reduce the formula you gave), but because of all the operations they involve (translations, products, emptiness checks) they probably shake the memory allocator enough that the states end up at different addresses in different runs. I have fixed the degeneralization as shown here: http://git.lrde.epita.fr/?p=spot.git;a=commit;h=03aabf9a3afd39eb511cb812b099...
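To make the cause concrete, here is a minimal, self-contained C++ sketch (not Spot's actual code; the state and transition types are invented for illustration) of how keying a std::map on state pointers ties the output order to memory addresses:

  // A minimal sketch (not Spot's actual code) of the issue: merging the
  // outgoing transitions of a state into a std::map keyed by the destination
  // pointer orders them by memory address, so the order in which they are
  // later printed depends on where the allocator happened to place the states.
  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>

  struct state { std::string name; };                    // hypothetical state type
  struct transition { std::string label; state* dst; };  // hypothetical transition type

  int main()
  {
    // Heap-allocated states: their addresses depend on the allocator and on
    // whatever allocations happened before, so they can differ between runs.
    state* s2 = new state{"s2"};
    state* s3 = new state{"s3"};
    std::vector<transition> outgoing{{"a", s2}, {"b", s3}, {"c", s2}};

    // Merge the transitions that share the same destination.  The std::map
    // is ordered by its key, here the destination pointer, i.e., an address.
    std::map<state*, std::string> merged;
    for (const transition& t : outgoing)
      merged[t.dst] += t.label;

    // Printing the merged transitions follows that address order, which is
    // why successive runs can list states and transitions in different
    // orders while still describing equivalent automata.
    for (const auto& entry : merged)
      std::cout << entry.first->name << ": " << entry.second << '\n';

    delete s2;
    delete s3;
  }

The general cure for this kind of nondeterminism is to order the states by something reproducible across runs, such as their creation order, rather than by their address.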
Note that this change should only avoid the nondeterminism in the order of the states and transitions.
The difference in the number of states that you observed might be a bug in the direct simulation algorithm behind "-R1t -R1q", but I'm not sure. Direct (-R1?) and delayed (-R2?) simulations were written several years ago by a master's student who tried to adapt the game-theory-based algorithms to TGBA. Over the years these algorithms have proved to be flaky, and I had to demote them to handling only degeneralized Büchi automata. Still, I have evidence that -R2? mishandles some simple degeneralized automata (e.g., ./ltl2tgba -DS -R2t -R2q 'FGq' produces an "always true" automaton), and although I don't have similar evidence against -R1?, I don't trust that algorithm either. That is why there are no options for these simulations in the online translator.
Another student should be working on a new implementation of these game-theory-based simulations at the beginning of next year. There is also a "bdd-simul" branch in the git repository with a preliminary implementation of a BDD-based direct simulation algorithm that works on TGBA (unfortunately, that approach won't work for delayed simulation).
[1] slide 12/29 of http://www.lrde.epita.fr/~adl/dl/adl/duret.11.sumo.slides.pdf
Best regards,