Hi Alexander,
Thank you for your prompt reply.
Currently, any automaton will suffice for my test, so I applied your
suggestion of skipping the simplifications, and that works like a charm.
This makes my program more robust for future use.
Actually, the second formula timed out when trying to translate the negated
formula. Sorry, I did not mention that explicitly in the text, only
implicitly in the attachment.
The root cause remains the same as with the first case.
I will include a user instruction, for future users of my program, on
formula size and the number of atomic propositions.
Thanks again.
Kind regards,
Carlo Sengers
On Sun, Dec 15, 2019 at 10:04 AM Alexandre Duret-Lutz <adl(a)lrde.epita.fr> wrote:
Hi Carlo,
On Sun, Dec 15, 2019 at 8:30 AM Carlo Sengers <cejm.sengers(a)studie.ou.nl> wrote:
However, during the setup of the experiment, I observed that at least
two randomly generated formulas resulted in an error:
A C++ program I made eventually terminated with the error: "terminate called
after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc"
The online translator at https://spot.lrde.epita.fr/app/ presented a
timeout with the same formula.
The spot-sandbox automata notebook at
https://spot-sandbox.lrde.epita.fr/notebooks/examples%20(read%20only)/autom…
presented the error: Kernel Restarting. The kernel appears to have
died. It will restart automatically.
All these are symptoms of the same issue: out of memory (or timeout)
during the simplification of the automaton.
I managed to translate the second of your formulas on my laptop (and
get a 63-state automaton), but the first one failed for the same
reason.
The annoying part: it was not immediately obvious that my program
actually froze due to a 'questionable' formula instead of doing some
heavy lifting on huge automata, resulting in some useless waiting time.
It's not clear to me what distinction you are making here. The
formulas are problematic because their translation involves huge
automata.
Translating an LTL formula into a TGBA or BA can involve an
exponential blowup in the size of the formula (it's exponential in the
number of subformulas, and exponential in the number of atomic
propositions).
Simplifying the resulting automata also involves some algorithms that
are exponential in the size of the automata, so that's doubly
exponential by default. Usually none of this matters, because
formulas used in practice are small and yield small automata.
If your goal is to obtain any automaton from the LTL formula, and not
necessarily a small automaton, you can translate the formula with
"ltl2tgba --low" (or call trans.set_level(spot::postprocessor::Low) in
C++). This will skip the simplifications, getting you a
single-exponential worst case. With this setting, I get 390 states
for the first formula and 67 for the second.
Are such formulas a natural aspect of LTL, or is this maybe a bug?
It's natural to find doubly-exponential worst cases with the
simplifications used in Spot.
Is it an option to avoid or reduce the likelihood of occurrence of such
formulas in RandLTL? (making it less random :-) )
Reduce the size of your formulas or the number of atomic propositions
used. Or disable automaton simplifications.
Anyway, thanks for reporting your issues. I'll look at the behavior of
the culprit algorithm on your first formula, and see if anything can
be improved. Its memory consumption was already improved last year,
but maybe that can be improved again.
https://gitlab.lrde.epita.fr/spot/spot/issues/395
Best regards,
--
Alexandre Duret-Lutz