Hi,
I'm trying to learn Spot, and I want to run an emptiness check on a product of two automata which I will create by subclassing tgba.
The two automata have overlapping but different signatures, i.e., sets of atomic propositions. If I understand correctly, I cannot achieve this by using two different bdd_dicts, as the two automata passed to the product object must share the same bdd_dict.
How does one deal with this problem in Spot?
Thanks Nikos
Hi Nikos,
On Wed, Oct 12, 2011 at 12:21 AM, Nikos Gorogiannis nikos.gorogiannis@eecs.qmul.ac.uk wrote:
The two automata have overlapping but different signatures, i.e., sets of atomic propositions. If I understand correctly, I cannot achieve this by using two different bdd_dicts, as the two automata passed to the product object must share the same bdd_dict.
You should use the same bdd_dict for all your automata. Each automaton will simply register a different (overlapping) set of atomic propositions.
The purpose of the bdd_dict class is to ensure that the BDD variable assigned to each atomic proposition is the same in all automata sharing a bdd_dict. It does not force the automata to use the same set of atomic propositions.
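For illustration, here is a rough sketch of that setup. my_automaton_a and my_automaton_b are placeholders for your own tgba subclasses (they do not exist in Spot), and the header paths are the ones I recall from the current source layout, so adjust as needed:

#include "tgba/bdddict.hh"
#include "tgba/tgbaproduct.hh"
#include "tgbaalgos/gtec/gtec.hh"      // couvreur99_check

// One bdd_dict shared by every automaton involved in the product.
spot::bdd_dict* dict = new spot::bdd_dict();

// Both (hypothetical) subclasses register their own, overlapping sets of
// atomic propositions in the same dict, so identical propositions map to
// identical BDD variables.
spot::tgba* a = new my_automaton_a(dict);
spot::tgba* b = new my_automaton_b(dict);

spot::tgba_product prod(a, b);                    // requires a shared bdd_dict
spot::emptiness_check* ec = new spot::couvreur99_check(&prod);
spot::emptiness_check_result* res = ec->check();  // 0 means the product is empty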
Hi Alexandre,
On 12/10/11 00:05, Alexandre Duret-Lutz wrote:
Hi Nikos,
On Wed, Oct 12, 2011 at 12:21 AM, Nikos Gorogiannis nikos.gorogiannis@eecs.qmul.ac.uk wrote:
The two automata have overlapping but different signatures, i.e., sets of atomic propositions. If I understand correctly, I cannot achieve this by using two different bdd_dicts, as the two automata passed to the product object must share the same bdd_dict.
You should use the same bdd_dict for all your automata. Each automaton will simply register a different (overlapping) set of atomic propositions.
The purpose of the bdd_dict class is to ensure that the BDD variable assigned to each atomic proposition is the same in all automata sharing a bdd_dict. It does not force the automata to use the same set of atomic propositions.
Oh, I see now. Thanks for your response.
While we are on the subject, can I ask another newbie question: the support conditions for transitions are given as bdds, but does this mean that they can be arbitrary propositions, or must they be (equivalent to) conjunctions of literals?
Thanks! Nikos
Sorry for the delay. (I'm currently attending ATVA'11 at Taipei.)
On Wed, Oct 12, 2011 at 4:27 PM, Nikos Gorogiannis nikos.gorogiannis@eecs.qmul.ac.uk wrote:
While we are on the subject, can I ask another newbie question: the support conditions for transitions are given as bdds, but does this mean that they can be arbitrary propositions, or must they be (equivalent to) conjunctions of literals?
The labels of the transitions can be any Boolean formula. For instance in an automaton with AP={a,b} you could have a state with two outgoing transitions labelled respectively with (a&b)|(!a&!b) and (!a)|b. These labels are returned by the current_condition() method of the tgba_succ_iterator class.
There is a different thing that we call "support conditions" in Spot, so I'm not really sure what you are talking about. The support conditions of a *state* form a Boolean formula that should be implied by the label of each of its outgoing transitions. This is returned by the support_conditions() method of the tgba class. The common way to implement this is to return the disjunction of the labels of all of the outgoing transitions. With the above example we have ((a&b)|(!a&!b))|((!a)|b) = (!a)|b, but another, simpler implementation is to always return bddtrue... The purpose of this method was to enable a small optimization when a product of tgba involves a symbolic (i.e., bdd-encoded) tgba. You probably should not care about it.
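For instance, a small sketch of how one would read those labels off a state; bdd_format_formula() is the printing helper I have in mind from tgba/bddprint.hh, so double-check the header names against your copy:

#include <iostream>
#include "tgba/tgba.hh"
#include "tgba/bddprint.hh"

// Print the label of every transition leaving the initial state of `aut`.
void print_initial_labels(const spot::tgba* aut)
{
  spot::state* s = aut->get_init_state();
  spot::tgba_succ_iterator* it = aut->succ_iter(s);
  for (it->first(); !it->done(); it->next())
    {
      bdd label = it->current_condition();   // any Boolean function over the APs
      std::cout << spot::bdd_format_formula(aut->get_dict(), label) << std::endl;
      spot::state* dst = it->current_state();
      dst->destroy();                        // states are released with destroy()
    }
  delete it;
  s->destroy();
}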
The labels of the transitions can be any Boolean formula. For instance in an automaton with AP={a,b} you could have a state with two outgoing transitions labelled respectively with (a&b)|(!a&!b) and (!a)|b. These labels are returned by the current_condition() method of the tgba_succ_iterator class.
There is a different thing that we call "support conditions" in Spot, so I'm not really sure what you are talking about. The support conditions of a *state* form a Boolean formula that should be implied by the label of each of its outgoing transitions. This is returned by the support_conditions() method of the tgba class. The common way to implement this is to return the disjunction of the labels of all of the outgoing transitions. With the above example we have ((a&b)|(!a&!b))|((!a)|b) = (!a)|b, but another, simpler implementation is to always return bddtrue... The purpose of this method was to enable a small optimization when a product of tgba involves a symbolic (i.e., bdd-encoded) tgba. You probably should not care about it.
Sorry for the confusion of terms, I meant the current_condition() one.
Thanks for the answer! Nikos
Hi,
There seem to be at least three complementation algorithms in Spot. I have constructed an automaton as a TGBA; can all algorithms be used with it? From my reading, the kv and saba cannot, as they treat the TGBA as a state-based Büchi automaton. Is this correct?
Thanks Nikos
Hello and sorry for the too many questions!
I think I have answered myself w.r.t. the complementation algorithms. It seems that safra does what I want.
Yet another thing came up: even when I implement the method transition_annotation, the dotty output seems to ignore that. Am I doing something wrong?
On the other hand replay_run seems to honour my method.
Nikos
On 18/10/11 15:38, Nikos Gorogiannis wrote:
Hi,
There seem to be at least three complementation algorithms in Spot. I have constructed an automaton as a TGBA; can all algorithms be used with it? From my reading, the kv and saba cannot, as they treat the TGBA as a state-based Büchi automaton. Is this correct?
Thanks Nikos
Hi Nikos and Martin,
You are both talking about complementation, so let me answer your two mails at once.
On 18/10/11 15:38, Nikos Gorogiannis wrote:
There seem to be at least three complementation algorithms in Spot. I have constructed an automaton as a TGBA; can all algorithms be used with it? From my reading, the kv and saba cannot, as they treat the TGBA as a state-based Büchi automaton. Is this correct?
On Thu, Oct 20, 2011 at 2:47 PM, Nikos Gorogiannis nikos.gorogiannis@eecs.qmul.ac.uk wrote:
Hello and sorry for the too many questions!
No problem. I probably won't be very responsive in the coming days: I just arrived in Paris today after two weeks in Taiwan, and I have a lot of catching up to do.
I think I have answered myself w.r.t the complementation algorithms. It seems that safra does what I want.
This would be my best bet, but I'm not very knowledgeable about these algorithms. Guillaume Sadegh, the student who implemented them, is no longer here; however, he wrote two detailed technical reports that should be helpful to anyone poking around these algorithms:
http://lrde.epita.fr/~sadegh/buchi-complementation-techrep.pdf
http://www.lrde.epita.fr/dload//20100106-Seminar/sadegh-spot.pdf
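In case it helps, invoking the Safra-based complementation programmatically should look roughly like the sketch below; I'm quoting the class and header names (tgba_safra_complement, tgba/tgbasafracomplement.hh) from memory, so check them against the source tree before relying on this:

#include "tgba/tgbasafracomplement.hh"

// `b` is the tgba to complement.  The complement shares b's bdd_dict,
// so it can be combined with other automata via tgba_product as usual.
spot::tgba* neg_b = new spot::tgba_safra_complement(b);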
Martin Dieguez Lodeiro (CCed) sent me some concerns about the Safra implementation in a separate (private) mail a few days ago, but I haven't had time to investigate. In his tests, computing the Safra complement of
% cat x.tgba
acc = "1";
"1", "1", "1",;
"1", "2", "p",;
"2", "3", "p", "1";
"2", "2", "1",;
"3", "3", "p", "1";
"3", "2", "1",;
would return an automaton without any acceptance states, which sounds bogus indeed.
When I run "./complement -S -a x.tgba" in the tgbatest directory, it displays an automaton with some acceptance states (using two different acceptance conditions: Acc["1$1"] and Acc["1$2"]), and although I have not checked every accepting path, I think it looks OK. However if I edit complementation.cc to output the automaton with tgba_save_reachable() instead of dotty_reachable(), the automaton still declares two acceptance conditions, but only one is used by the transitions (hence the language is empty, which is wrong). So there is definitively something fishy here. I believe maybe the encoding used for the acceptance conditions in Safra is wrong. I'll try to look at this more closely before next week.
Yet another thing came up: even when I implement the method transition_annotation, the dotty output seems to ignore that. Am I doing something wrong? On the other hand replay_run seems to honour my method.
I think transition_annotation() was originally added to help interpret the counterexamples found while model checking (hence the use in replay-run), but I see no reason not to use it for dotty too. So I would call this a bug.
On Thu, Oct 20, 2011 at 6:20 PM, Alexandre Duret-Lutz adl@lrde.epita.fr wrote:
When I run "./complement -S -a x.tgba" in the tgbatest directory, it displays an automaton with some acceptance states (using two different acceptance conditions: Acc["1$1"] and Acc["1$2"]), and although I have not checked every accepting path, I think it looks OK. However if I edit complementation.cc to output the automaton with tgba_save_reachable() instead of dotty_reachable(), the automaton still declares two acceptance conditions, but only one is used by the transitions (hence the language is empty, which is wrong). So there is definitively something fishy here. I believe maybe the encoding used for the acceptance conditions in Safra is wrong. I'll try to look at this more closely before next week.
Actually, the encoding of the acceptance conditions used when converting the Streett automaton into a TGBA is really bogus when more than one acceptance condition is needed. The attached patch should fix it. Could you try it? I plan to do further tests too.
Hi,
I am checking emptiness of a product of an automaton A with the complement of another automaton B, as usual. I would like to project a run of the product to a run on A. I am trying to do this as follows.
spot::emptiness_check* ec = new spot::couvreur99_check(aut);
spot::emptiness_check_result* res = ec->check();
spot::tgba_run* run = res->accepting_run();
spot::tgba_run* projrun = spot::project_tgba_run(aut, proj, run);
Where aut is the product and proj is automaton A. I have also written (what I think are reasonable) project_state methods for A and B. This method is never called on B (perhaps because the complementation algorithm intercepts that?).
However, if I try to use replay_run on the projected run, I get the message below when setting debug=true. With debug=false, I get a segfault.
state 1 in prefix:
transition with label=p_0 & !p_1 & !p_2 and acc=
state 2 in cycle: S1 t_1,t_2
ERROR: no transition with label=p_0 & p_1 & !p_2 and acc={Acc["1$1"]} leaving state 2 for state S3 t_1,t_2
The following transitions leave state 2:
  * S2 label=p_1 & !p_0 & !p_2 and acc= going to S2 t_2
  * S3 label=p_0 & p_1 & !p_2 and acc= going to S3 t_1,t_2
Am I doing something obviously wrong here?
Thanks! Nikos
PS this is with the patch to the safra complementation algorithm.
Hi Nikos,
On Mon, Oct 24, 2011 at 5:52 PM, Nikos Gorogiannis nikos.gorogiannis@eecs.qmul.ac.uk wrote:
Hi,
I am checking emptiness of a product of an automaton A with the complement of another automaton B, as usual. I would like to project a run of the product to a run on A. I am trying to do this as follows.
spot::emptiness_check* ec = new spot::couvreur99_check(aut);
spot::emptiness_check_result* res = ec->check();
spot::tgba_run* run = res->accepting_run();
spot::tgba_run* projrun = spot::project_tgba_run(aut, proj, run);
Where aut is the product and proj is automaton A. I have also written (what I think are reasonable) project_state methods for A and B.
I believe you should not have to write anything. There is a default implementation in the TGBA class:
state* tgba::project_state(const state* s, const tgba* t) const
{
  if (t == this)
    return s->clone();
  return 0;
}
This method is never called on B (perhaps because the complementation algorithm intercepts that?).
I don't think it would make sense to have project_state() traverse the complementation: if a run is accepted by !B we don't expect it to map to a run of B...
However, if I try to use replay_run on the projected run, I get the message below when setting debug=true. With debug=false, I get a segfault.
state 1 in prefix:
transition with label=p_0 & !p_1 & !p_2 and acc=
state 2 in cycle: S1 t_1,t_2
ERROR: no transition with label=p_0 & p_1 & !p_2 and acc={Acc["1$1"]} leaving state 2 for state S3 t_1,t_2
The following transitions leave state 2:
  * S2 label=p_1 & !p_0 & !p_2 and acc= going to S2 t_2
  * S3 label=p_0 & p_1 & !p_2 and acc= going to S3 t_1,t_2
Am I doing something obviously wrong here?
I think the projection only projects the states, but it does nothing for the labels and acceptance sets of the transitions, so those might not match actual transitions of the original automaton. Hence replay_tgba_run() will fail.
In practice, replay_tgba_run() was mainly a tool for us to debug the emptiness checks, so I have never tried to run it on a projection. I can see how it would make sense, though.
It's not clear to me how to fix project_tgba_run() to use only transitions from the considered automaton. Actually I believe there are cases where you cannot decide which transition to use in the considered automaton if you do not know the other automata involved in the product... Maybe it is easier to project such a run under some additional hypotheses on the automaton you project onto (e.g., it has no acceptance sets and no more than one transition between two given states), but then it becomes a projection specialized to a subclass of automata.
PS this is with the patch to the safra complementation algorithm.
FWIW, that patch was incomplete. I made further tests yesterday with Martin's test case, and discovered that the all_acceptance_conditions() method of Safra automata was still incorrect. Also, there were some collisions in the names of states output by format_state() (this only matters when trying to output Safra automata as text files for debugging, not when using them in a product), and a few other minor nits. The set of three patches I finally applied is different from the small patch I sent; you can find them here:
http://git.lrde.epita.fr/?p=spot.git;a=patch;h=a4d1e18b;hp=101b18b2
(I'll probably have a snapshot ready in a couple of days.)
Hi Alexandre,
Where aut is the product and proj is automaton A. I have also written (what I think are reasonable) project_state methods for A and B.
I believe you should not have to write anything. There is a default implementation in the TGBA class:
OK, I should have checked the source. The text on what to implement when interfacing with Spot suggests that this may be necessary, but does not provide any details, so I assumed I had to implement the method.
This method is never called on B (perhaps because the complementation algorithm intercepts that?).
I don't think it would make sense to have project_state() traverse the complementation: if a run is accepted by !B we don't expect it to map to a run of B...
You are right of course, I hadn't thought it through properly.
I think the projection only projects the states, but it does nothing for the labels and acceptance sets of the transitions, so those might not match actual transitions of the original automaton. Hence replay_tgba_run() will fail.
In practice, replay_tgba_run() was mainly a tool for us to debug the emptiness checks, so I have never tried to run it on a projection. I can see how it would make sense, though.
It's not clear to me how to fix project_tgba_run() to use only transitions from the considered automaton. Actually I believe there are cases where you cannot decide which transition to use in the considered automaton if you do not know the other automata involved in the product... Maybe it is easier to project such a run under some additional hypotheses on the automaton you project onto (e.g., it has no acceptance sets and no more than one transition between two given states), but then it becomes a projection specialized to a subclass of automata.
It makes sense that the transitions of the product may not be projectable back onto the original automaton. In any case I can do without them, as my automaton is very deterministic and by looking at the states I can deduce the transitions as you say.
The reason I used replay_run instead of print_tgba_run is the hint in the documentation that print_tgba_run may print something which is not an actual accepting run. I assumed that this only applies to bit-hashing emptiness checks, but couldn't know for sure.
PS this is with the patch to the safra complementation algorithm.
FWIW, that patch was incomplete. I made further tests yesterday with Martin's test case, and discovered that the all_acceptance_conditions() method of Safra automata was still incorrect. Also, there were some collisions in the names of states output by format_state() (this only matters when trying to output Safra automata as text files for debugging, not when using them in a product), and a few other minor nits. The set of three patches I finally applied is different from the small patch I sent; you can find them here:
http://git.lrde.epita.fr/?p=spot.git;a=patch;h=a4d1e18b;hp=101b18b2
(I'll probably have a snapshot ready in a couple of days.)
OK, thanks. Many thanks for the answers too, they are very helpful in understanding Spot!
Nikos
On Tue, Oct 25, 2011 at 12:35 PM, Nikos Gorogiannis nikos.gorogiannis@eecs.qmul.ac.uk wrote:
OK, I should have checked the source. The text on what to implement when interfacing with Spot suggests that this may be necessary, but does not provide any details, so I assumed I had to implement the method.
Thanks for pointing this out. I have updated the page at http://spot.lip6.fr/wiki/InterfacingSpot with a note about this, and a couple of other points that were introduced in 0.7 but were not explicit on this page:
- states should be released using their destroy() method, and not using delete;
- another way to interface with Spot is to subclass the kripke class, whose interface is simpler than that of tgba.
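To make the first point concrete, a two-line fragment (assuming `aut` is any spot::tgba* you have built):

spot::state* s0 = aut->get_init_state();
// ... use s0 ...
s0->destroy();   // not "delete s0;", which will stop compiling in the next release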
It makes sense that the transitions of the product may not be projectable back onto the original automaton. In any case I can do without them, as my automaton is very deterministic and by looking at the states I can deduce the transitions as you say.
Yes. Maybe instead of trying to change project_tgba_run(), we could change replay_tgba_run() and teach it to follow transitions that are "compatible" with the run, instead of looking for exact matches.
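To sketch what I mean by "compatible" (this is only an illustration, not existing Spot code; taking "compatible" to mean that the two labels intersect is my assumption):

#include <bdd.h>
#include "tgba/tgba.hh"

// Find a successor of `s` in automaton `a` whose label shares at least one
// assignment with the label recorded in the run step.  Returns 0 if none.
const spot::state*
find_compatible_succ(const spot::tgba* a, const spot::state* s, bdd step_label)
{
  spot::tgba_succ_iterator* it = a->succ_iter(s);
  const spot::state* dst = 0;
  for (it->first(); !it->done(); it->next())
    if ((it->current_condition() & step_label) != bddfalse)
      {
        dst = it->current_state();
        break;
      }
  delete it;
  return dst;
}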
The reason I used replay_run instead of print_tgba_run is the hint in the documentation that print_tgba_run may print something which is not an actual accepting run. I assumed that this only applies to bit-hashing emptiness checks, but couldn't know for sure.
Thanks for mentioning this; I will reword the documentation of print_tgba_run().
The idea is that a `tgba_run` object is just a sequence of states, with some labels and acceptance conditions to display between them. You could actually build one by picking random states, labels, and acceptance conditions, and it might not correspond to an actual run. Still, print_tgba_run() would print it; that's what I meant by "even if it does not correspond to an actual run of the automaton".
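To illustrate, walking such an object is just iterating over two lists; the prefix/cycle/step layout used below is how I recall tgba_run being declared in tgbaalgos/emptiness.hh, so treat it as a sketch rather than a reference:

#include <iostream>
#include "tgba/tgba.hh"
#include "tgba/bddprint.hh"
#include "tgbaalgos/emptiness.hh"

// Print each step of a run: the state plus the label to display after it.
// (Each step also carries an `acc` bdd with its acceptance conditions.)
void dump_run(const spot::tgba* aut, const spot::tgba_run* run)
{
  for (spot::tgba_run::steps::const_iterator i = run->prefix.begin();
       i != run->prefix.end(); ++i)
    std::cout << aut->format_state(i->s) << "  label="
              << spot::bdd_format_formula(aut->get_dict(), i->label) << '\n';
  // run->cycle has the same shape and is meant to be repeated forever.
}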
(I'll probably have a snapshot ready in a couple of days.)
Here: http://www.lrde.epita.fr/dload/spot/spot-snapshot.tar.gz
Beware that this development version will compile with optimization disabled and assertions turned on. If you want speed, you should configure it with --disable-devel
The top of the NEWS file says this:
* Major new features:
  - Spot can read DiVinE models. See iface/dve2/README for details.
  - The genltl tool can now output 20 different LTL formula families. It also replaces the LTLcounter Perl scripts.
* Major interface changes:
  - The destructor of all states is now private. Any code that looks like "delete some_state;" will cause a compile error and should be updated to "some_state->destroy();". This new syntax is supported since version 0.7.
  - The experimental Nips interface has been removed.
* Minor changes:
  - The dotty_reachable() function has a new option "assume_sba" that can be used for rendering automata with state-based acceptance. In that case, acceptance states are displayed with a double circle. ltl2tgba (both command-line and on-line) uses it to display degeneralized automata.
  - Identifiers used to name atomic propositions can contain dots. E.g.: X.Y is now an atomic proposition, while it was understood as X&Y in previous versions.
* Internal improvements:
  - The on-line ltl2tgba CGI script uses a cache to produce faster answers.
  - Better memory management for the states of explicit automata. Thanks to the aforementioned ->destroy() change, we can avoid cloning explicit states.
  - tgba_product has learned how to be faster when one of the operands is a Kripke structure (15% speedup).
  - The reduction rule for "a M b" has been improved: it can be reduced to "a & b" if "a" is a pure eventuality.
  - More useless acceptance conditions are removed by SCC simplifications.
* Bug fixes:
  - Safra complementation has been fixed in cases where more than one acceptance condition was needed to convert the deterministic Streett automaton into a TGBA.
  - The degeneralization is now idempotent. Previously, degeneralizing an already degeneralized automaton could add some states.
  - The degeneralization now has a deterministic behavior. Previously it was possible to obtain different output depending on the memory layout.
Hi Nikos,
On Thu, Oct 20, 2011 at 6:20 PM, Alexandre Duret-Lutz adl@lrde.epita.fr wrote:
On 18/10/11 15:38, Nikos Gorogiannis wrote:
Yet another thing came up: even when I implement the method transition_annotation, the dotty output seems to ignore that.
[...]
I would call this a bug.
I've committed this: http://git.lrde.epita.fr/?p=spot.git;a=commitdiff;h=3010d7051baaaa96f278e954...
Because of a recent fix in the neverclaim output, I'd like to try to make a release before the end of the month, so if you have seen anything else that needs fixing, please let me know.