ILC 2014 - International Lisp Conference
"Lisp on the Move"
August 14-17 2014, Université de Montréal, Montréal, Canada
Sponsored by the Association of Lisp Users
http://www.international-lisp-conference.org
Scope:
Lisp is one of the greatest ideas from computer science and a major
influence on almost all programming languages and on all
sufficiently complex software applications.
The International Lisp Conference is a forum for the discussion of
Lisp and, in particular, the design, implementation and application of
any of the Lisp dialects. We encourage everyone interested in Lisp to
participate.
We invite high quality submissions in all areas involving Lisp
dialects and any other languages in the Lisp family, including, but
not limited to, ACL2, AutoLisp, Clojure, Common Lisp, Dylan,
ECMAScript, Emacs Lisp, Hop, ISLISP, Racket, Scheme, SKILL, etc.
This year's focus will be directed towards integrated solutions,
including mobile computing. We especially invite submissions in the
following areas:
* Pervasive computing
* Interoperability
* Portability
* Implementation challenges/tradeoffs for embedded/mobile platforms
* Language support for mobile toolkits and frameworks
* Language support for distribution
* Language support for reliability, availability, and serviceability
* Mobile IDEs
* Mobile applications
Contributions are also welcome in other areas, including but not
limited to:
* Language design and implementation
* Language integration, inter-operation and deployment
* Applications (especially commercial)
* Reflection, meta-object protocols, meta-programming
* Domain-specific languages
* Programming paradigms and environments
* Efficient parallel and concurrent computation
* Language support for managing both manual and automatic GC
* Theorem proving
* Scientific computing
* Data mining
* Semantic web
Technical Programme:
Original submissions in all areas related to the conference themes are
invited for the following categories:
Papers: Technical papers of up to 10 pages that describe original
results.
Demonstrations: Abstracts of up to 2 pages for demonstrations of
tools, libraries and applications.
Workshops: Abstracts of up to 2 pages for groups of people who intend
to work on a focused topic for half a day.
Tutorials: Abstracts of up to 2 pages for in-depth presentations about
topics of special interest for 1 to 2 hours.
Panel discussions: Abstracts of up to 2 pages for discussions about
current themes. Panel discussion proposals must name the panel
members who are willing to take part in the discussion.
The conference will also provide slots for lightning talks, to be
registered on-site every day.
For inquiries about any other kind of participation (commercial
exhibits, advertising, prizes, book signing etc.), please see the
contacts below.
Important Dates:
- May 18, 2014: Submission deadline
- June 09, 2014: Notification of acceptance
- June 29, 2014: Final Papers due
- August 14, 2014: Conference
All submissions should be formatted following the ACM SIGS guidelines
and include ACM classification categories and terms. For more
information on the submission guidelines and the ACM keywords, see:
http://www.acm.org/sigs/publications/proceedings-templates and
http://www.acm.org/about/class/1998.
Submissions should be uploaded to EasyChair at the following
address: https://www.easychair.org/conferences/?conf=ilc14
Organizing Committee:
General Chair: Marc Feeley (Université de Montréal, Montréal, Canada)
Programme Chair: Didier Verna (EPITA Research lab, Paris, France)
Local chair: Marc Feeley (Université de Montréal, Montréal, Canada)
Programme Committee:
to be announced
Contacts:
* General Questions: ilc14-organizing-committee at alu.org
* Programme Committee: ilc14 at easychair.org
For more information, see http://www.international-lisp-conference.org
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info
We are happy to announce that the following paper has been
accepted for publication at the 11th IAPR International Workshop
on Document Analysis Systems (DAS) that will take place in Tours,
France, on April 7 – 10, 2014:
Guillaume Lazzara, Thierry Géraud and Roland Levillain
EPITA Research and Development Laboratory (LRDE)
Planting, Growing and Pruning Trees:
Connected Filters Applied to Document Image Analysis
Mathematical morphology, when used in the field of document image
analysis and processing, is often limited to some classical yet
basic tools. The domain however features a lesser-known class of
powerful operators, called connected filters. These operators
present an important property: they neither shift nor create
contours. Most connected filters are linked to a tree-based
representation of an image's contents, where nodes represent
connected components while edges express an inclusion relation.
By computing attributes for each node of the tree from the
corresponding connected component, then selecting nodes according
to an attribute-based criterion, one can either filter or
recognize objects in an image. This strategy is very intuitive,
efficient, easy to implement, and actually well-suited to
processing images of magazines. Examples of applications include
image simplification, smart binarization, and object
identification.
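The node-attribute strategy described above (compute an attribute per tree node, then select nodes by a criterion) can be sketched in a few lines. This is a toy illustration with invented names, not the authors' implementation:

```python
# Toy attribute filter on a parent-encoded component tree
# (illustrative only; not the authors' code).

def attribute_filter(parent, attribute, threshold):
    """Remove nodes whose attribute is below the threshold by merging
    them into their nearest surviving ancestor (the root, whose parent
    is None, always survives)."""
    n = len(parent)
    keep = [attribute[i] >= threshold or parent[i] is None
            for i in range(n)]
    result = []
    for i in range(n):
        # Walk up from the node until a kept ancestor is found.
        p = i
        while not keep[p]:
            p = parent[p]
        result.append(p)
    return result

# A small tree: node 0 is the root; node 2 is a tiny component.
parent = [None, 0, 1, 1]
area = [100, 40, 3, 25]
print(attribute_filter(parent, area, 10))  # → [0, 1, 1, 3]
```

With a threshold of 10, the tiny node 2 is merged into its parent, while the larger components survive unchanged.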
ELS'14 - 7th European Lisp Symposium
IRCAM, Paris, France
May 5-6, 2014
http://www.european-lisp-symposium.org/
The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.
The 7th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications, and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.
Topics include but are not limited to:
- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies
Please note that IRCAM, the conference venue, is a French institute
for research on music and acoustics. Submissions relating Lisp to
music or other acoustical matters will hence be particularly welcome,
although given no heightened favor during the review process.
We invite submissions in the following forms:
Papers: Technical papers of up to 8 pages that describe original
results or explain known ideas in new and elegant ways.
Demonstrations: Abstracts of up to 2 pages for demonstrations of
tools, libraries, and applications.
Tutorials: Abstracts of up to 4 pages for in-depth presentations about
topics of special interest for at least 90 minutes and up to 180
minutes.
The symposium will also provide slots for lightning talks, to be
registered on-site every day.
All submissions should be formatted following the ACM SIGS guidelines
and include ACM classification categories and terms. For more
information on the submission guidelines and the ACM keywords, see:
http://www.acm.org/sigs/publications/proceedings-templates and
http://www.acm.org/about/class/1998.
Important dates:
- TODAY: Mark your calendar. Start planning now!
- 09 Mar 2014: Submission deadline
- 31 Mar 2014: Notification of acceptance
- 21 Apr 2014: Final Papers due
- 05 May 2014: Symposium. Join us there!
Program Committee:
Chair:
Kent Pitman, Hypermeta Inc., U.S.A.
Local Organizers:
Didier Verna, EPITA Research Lab, France
Gérard Assayag, IRCAM, France
Members:
To be announced later
Search Keywords:
#els2014, ELS 2014, ELS '14, European Lisp Symposium 2014,
European Lisp Symposium '14, 7th ELS, 7th European Lisp Symposium,
European Lisp Conference 2014, European Lisp Conference '14
Hello everyone,
I am pleased to invite you to my PhD defense, entitled
``Tree-Based Shape Spaces: Definition and Applications in Image
Processing and Computer Vision''.
The defense will take place on *December 12, 2013* at 14:00 in
lecture hall 260 of ESIEE Paris, located at 2 boulevard Blaise
Pascal, Cité Descartes, Noisy-le-Grand (93). Directions to the
school are available at:
http://www.esiee.fr/Infos-pratiques/acces.php
The defense will be followed by a reception.
Thesis committee
----------------
Reviewers:
Philippe Salembier (Universitat Politècnica de Catalunya)
Lionel Moisan (Université Paris Descartes)
Examiners:
Jean Serra (ESIEE Paris)
Coloma Ballester (Universitat Pompeu Fabra)
Michael H.F. Wilkinson (Rijksuniversiteit Groningen)
Beatriz Marcotegui (Mines ParisTech)
Pascal Monasse (Ecole des Ponts ParisTech)
Thesis supervisors:
Laurent Najman (ESIEE Paris - Université Paris-Est Marne-la-Vallée)
Thierry Géraud (EPITA)
Thesis abstract
---------------
This PhD work presents a versatile framework dealing with tree-based
image representations. These representations have proved very useful
in a large number of applications. Notably, in mathematical
morphology, they form the basis for efficient implementations of
connected operators.
Connected operators are filtering tools that act by merging some
elementary regions of an image. A popular filtering strategy relies
on (1) building a tree representation of the image; (2) computing a
shape-based attribute on each node of the tree; and (3) removing the
nodes for which the attribute is too low. This operation can be seen
as a thresholding of the tree, seen as a graph whose nodes are
weighted by the attribute function.
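Steps (2) and (3) of this strategy can be illustrated with a minimal sketch: an area-like attribute computed bottom-up on a parent-encoded tree, then thresholded. The encoding and names are invented for illustration:

```python
# Minimal sketch of steps (2) and (3): compute an area attribute
# bottom-up, then threshold the nodes (illustrative only).

def compute_area(parent, pixel_count):
    """Area of a node = its own pixels plus those of its descendants.
    Assumes children have larger indices than their parent (root = 0,
    its own parent)."""
    area = list(pixel_count)
    for i in range(len(parent) - 1, 0, -1):  # process leaves first
        area[parent[i]] += area[i]
    return area

def threshold_nodes(area, t):
    """Step (3): keep the nodes whose attribute reaches the threshold."""
    return [i for i, a in enumerate(area) if a >= t]

parent = [0, 0, 1, 1]            # node 0 is the root
pixels = [5, 2, 1, 1]            # pixels held directly by each node
area = compute_area(parent, pixels)
print(area)                      # → [9, 4, 1, 1]
print(threshold_nodes(area, 2))  # → [0, 1]
```

Thresholding the attribute-weighted tree in this way removes the two single-pixel leaf nodes and keeps the larger components.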
Rather than settling for a mere thresholding, we propose to expand on
this idea. We define the notion of tree-based shape spaces: a tree is
seen as a graph whose neighbourhood relation is given by parenthood.
We then apply connected filters on such
shape spaces. This simple processing, which we call "shape-based
morphology", has several deep consequences. Firstly, it is a
generalization of the existing tree-based connected
operators. Besides, it allows us to propose two new classes of
connected operators: shape-based lower/upper levelings and
shapings. Secondly, this framework can be used for object
detection/segmentation by selecting relevant nodes in the shape space.
Last, we can also apply this framework for transforming hierarchies
using extinction values, so that we obtain some hierarchical image
simplification and segmentation methods.
We have developed several applications using this framework.
- A first one is a truly contrast-invariant local feature detection
method called Tree-Based Morse Regions (TBMR), which can be seen as a
variant of MSER with better performance.
- Some applications to retinal image analysis (including blood vessel
segmentation and optic nerve head segmentation) have been developed
using shape-based lower/upper levelings.
- We have also proposed an efficient image simplification
method by minimizing the Mumford-Shah functional, which belongs to the
shaping family.
- An object detection scheme has been developed by defining an
efficient context-based energy estimator.
- Last, we have proposed an extension of constrained connectivity
based on our novel hierarchy transforms.
Best regards,
Yongchao Xu
--
Yongchao XU, PhD student
EPITA Research and Development Laboratory (LRDE)
Laboratoire d'informatique de l'Institut Gaspard Monge (LIGM)
Équipe Algorithmes, Architectures, Analyse et synthèse d'images (A3SI)
Université Paris Est, Marne-la-Vallée
France
Dear colleagues,
The second December session of the Performance and Genericity seminar
of LRDE (EPITA's Research and Development Laboratory) will take place on
Wednesday, December 11, 2013 (14:00--15:30), room L0 at LRDE.
On the program:
* 14:00-15:30: A ``Diplomatic'' Parallel Algorithm for the Component Trees of High Dynamic Range Images
-- Michael Wilkinson --- Johann Bernoulli Institute, University of Groningen, The Netherlands
http://www.cs.rug.nl/~michael/
Component trees are essential tools in several morphological processing
methods, such as attribute filtering, or visualization, or the
computation of topological watersheds. Algorithms for computation of
these trees fall into two main categories: (i) Flood-filling algorithms,
exemplified by the algorithms of Salembier et al (1998), Hesselink
(2003), and Wilkinson (2011), and (ii) Union-find methods, such as
Najman and Couprie (2006), and Berger et al (2007). As images become
larger, and parallel computers become commonplace, there is an increased
need for concurrent algorithms to compute these trees. The first such
algorithm was published by Wilkinson et al in 2008, and was based on the
divide-and-conquer principle. It splits up the data spatially, computes
local component trees using an arbitrary algorithm that yields a
union-find type representation of each node, and glues these together
hierarchically. The main drawback of this method is that it does not
work well on high-dynamic-range images, because the complexity of the
gluing phase scales linearly with the number of grey levels.
In the current work, carried out by Moschini, Meijster, and Wilkinson
within the HyperGAMMA project, we develop a new algorithm for floating
point or high-dynamic-range integer images. It works as a two-tier
process: we first compute a pilot component tree at low dynamic
range in parallel, using one grey level per processor via dynamic
quantization, Salembier's flood-filling method to build the local
trees, and the previous parallelization scheme. After this, a
refinement stage is performed. This refinement stage is based on the
algorithm of Berger et al. As such, the new algorithm combines the two
main types of algorithm in a single framework.
Timings on images of up to 3.9 GPixel indicate a speed-up of up to 22 on
64 cores. The algorithm is more than 14x faster than the fastest
sequential algorithm on the same machine. We will apply the new
algorithm to astronomical data sets, including improvements to the
SExtractor tool for object detection. The importance of the new
algorithm extends beyond computation of component trees because it
allows development of a fast alpha-tree algorithm suitable for any
pixel-difference metric in the case of vector images (i.e. not just
$L_\infty$-based metrics on low dynamic range colour images).
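The union-find representation underlying both the local-tree gluing and the Berger-style refinement stage can be sketched generically; this is only the textbook data structure with path compression, not the HyperGAMMA implementation:

```python
# Generic union-find with path compression, the building block of
# union-find-based component tree algorithms (sketch only).

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # First pass: locate the root of x's set.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Second pass (path compression): point visited nodes at it.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
        return ra

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(2) == uf.find(0))  # → True
print(uf.find(3) == uf.find(0))  # → False
```

Path compression keeps the amortized cost of each operation nearly constant, which is what makes union-find methods competitive for large images.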
-- Michael Wilkinson obtained an MSc in astronomy from the Kapteyn
Laboratory, University of Groningen in 1993, after which he worked on
image analysis of intestinal bacteria at the Department of Medical
Microbiology, University of Groningen, obtaining a PhD at the Institute
of Mathematics and Computing Science, also in Groningen, in 1995. After
this he worked as a researcher at the Johann Bernoulli Institute for
Mathematics and Computer Science (JBI) both on computer simulations and
on image analysis of diatoms. He is currently senior lecturer at the
JBI, working on morphological image analysis and especially connected
morphology. Apart from publishing in many journals and conferences, he
edited the book ``Digital Image Analysis of Microbes'' (John Wiley, UK,
1998) with Frits Schut, and is a member of the Steering Committee of ISMM.
For more information, see http://seminaire.lrde.epita.fr/.
Attendance is free and open to all. Please circulate this
announcement as widely as possible.
--
Akim Demaille
Akim.Demaille(a)lrde.epita.fr
_______________________________________________
Seminaire mailing list
Seminaire(a)lrde.epita.fr
https://lists.lrde.epita.fr/listinfo/seminaire
Dear colleagues,
The next session of the Performance and Genericity seminar of LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, December 4, 2013 (11:00--12:00), room L0 at LRDE.
On the program:
* 11:00-12:00: CPC: an efficient implementation of concurrency by continuation passing
-- Juliusz Chroboczek, PPS Laboratory, Université Paris-Diderot (Paris 7)
http://www.pps.univ-paris-diderot.fr/~jch/software/cpc/
CPC is a concurrent extension of the C language. CPC code, written in
threaded style, is translated by the CPC compiler into event-driven
code; this code can then be executed, at the programmer's choice,
either by "heavyweight" native threads or by an event scheduler
manipulating extremely lightweight data structures. This
implementation technique induces an original programming style in
which threads are essentially free. The programmer may nevertheless
choose to use heavyweight native threads where necessary, for
example to exploit hardware parallelism or to call blocking
libraries.
CPC's compilation scheme is based on formalized techniques that are
well known in the functional programming community, such as
conversion to continuation-passing style (CPS), lambda-lifting, and
the introduction of tail functions. The correctness of these
techniques has been formally proved.
In this talk, I will give a few examples of typical CPC programming
style drawn from Hekate, a BitTorrent seeder written in CPC. I will
then describe the conversion to continuation-passing style and the
translation technique used by the CPC compiler.
-- Juliusz Chroboczek is an associate professor (Maître de
Conférences) at Université Paris-Diderot (Paris 7). He works on
efficient implementations of concurrency as well as on routing in
packet-switched networks.
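The continuation-passing style at the heart of the CPC compiler can be shown in miniature: instead of returning, every function passes its result to an explicit continuation. This is a language-agnostic sketch, unrelated to the C code CPC actually emits:

```python
# Continuation-passing style in miniature: the "return" becomes a
# call to an explicit continuation k (sketch only; CPC emits C).

def fact_cps(n, k):
    """Factorial in CPS: instead of returning n!, call k(n!)."""
    if n == 0:
        k(1)
    else:
        # The continuation captures the pending multiplication by n.
        fact_cps(n - 1, lambda r: k(n * r))

result = []
fact_cps(5, result.append)
print(result[0])  # → 120
```

Because control flow is reified as continuation closures, a compiler can chop a threaded-style function into small event handlers at each blocking point, which is the essence of the threads-to-events translation described in the talk.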
!! Note: the second December session will be held at a different time (14:00--15:30) !!
On Wednesday, December 11, 2013, we will welcome Michael Wilkinson, Johann Bernoulli
Institute, University of Groningen, The Netherlands, for a talk entitled
"A ``Diplomatic'' Parallel Algorithm for the Component
Trees of High Dynamic Range Images".
For more information, see http://seminaire.lrde.epita.fr/.
Attendance is free and open to all. Please circulate this
announcement as widely as possible.
Dear colleagues,
A quick reminder: tomorrow, Wednesday, November 13, 2013, Dodji Seketeli, Red Hat, will give the talk
"Address & Thread Sanitizer in GCC: Current State and Future Directions" (11:00--12:30), room Lα at LRDE.
In December, we will hold two seminars:
* on Wednesday, December 4, 2013, 11:00--12:00, room L0 at LRDE, we welcome Juliusz Chroboczek, PPS Laboratory,
Université Paris-Diderot (Paris 7), who will speak on "CPC: an efficient implementation of concurrency by
continuation passing"
* on Wednesday, December 11, 2013 (time to be announced), we invite Michael Wilkinson, Johann Bernoulli Institute, University
of Groningen, The Netherlands, for a talk entitled "A ``Diplomatic'' Parallel Algorithm for the Component
Trees of High Dynamic Range Images".
For more information, see http://seminaire.lrde.epita.fr/.
Attendance is free and open to all. Please circulate this
announcement as widely as possible.
Dear colleagues,
The next session of the Performance and Genericity seminar of LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, November 13, 2013 (11:00--12:30), room Lα at LRDE.
On the program:
* 11:00-12:30: Address & Thread Sanitizer in GCC: Current State and Future Directions
-- Dodji Seketeli, Red Hat
http://gcc.gnu.org
Address Sanitizer and Thread Sanitizer are tools designed to detect
memory access errors and, respectively, concurrent access errors in
multi-threaded environments.
These tools consist of two distinct logical parts: a static
instrumentation pass applied to the generated code, and a runtime
environment.
This talk presents the implementation of Address & Thread Sanitizer
in GCC, the operating principles of the runtime environments of
these two tools, and the future directions of the project.
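The instrumentation/runtime split can be made concrete with AddressSanitizer's core trick: every 8 bytes of application memory map to one shadow byte that records how many of those bytes are addressable, and the instrumented code checks the shadow byte before each access. The arithmetic below uses a shadow offset value typical of x86-64 Linux; the actual constant is platform-dependent:

```python
# AddressSanitizer's shadow-memory mapping in two lines of arithmetic
# (illustrative; the real offset depends on the platform/ABI).

SHADOW_OFFSET = 0x7FFF8000  # typical value on x86-64 Linux

def shadow_address(addr):
    """Each 8-byte granule of application memory has one shadow byte."""
    return (addr >> 3) + SHADOW_OFFSET

print(hex(shadow_address(0x10000)))  # → 0x7fffa000
```

The compiler-inserted check loads the shadow byte at this address; a nonzero value triggers a finer-grained check and, on failure, a runtime error report.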
-- Dodji Seketeli is an engineer in the Tools team at Red Hat. He
works on the GNU compiler suite, mainly on the C and C++ compilers.
In his spare time, when not playing with his wife and children, he
maintains Nemiver, the graphical debugger of the GNOME free desktop
project. Outside free software, he is interested in martial arts and
history.
For more information, see http://seminaire.lrde.epita.fr/.
Attendance is free and open to all. Please circulate this
announcement as widely as possible.
We are pleased to inform you that the following paper has been accepted at the 19th International
Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR). Please find the
title and abstract below.
Title and corresponding authors:
"Three SCC-based Emptiness Checks for Generalized Büchi Automata"
E. Renault(1), A. Duret-Lutz(1), F. Kordon(2), and D. Poitrenaud(2)
(1) LRDE-EPITA, www.lrde.epita.fr
(2) UPMC-LIP6, www.lip6.fr
Abstract:
The automata-theoretic approach for the verification of linear time
properties involves checking the emptiness of a Büchi automaton.
However generalized Büchi automata, with multiple acceptance sets,
are preferred when verifying under weak fairness hypotheses.
Existing emptiness checks for which the complexity is independent of
the number of acceptance sets are all based on the enumeration of
Strongly Connected Components (SCCs).
In this paper, we review the state-of-the-art SCC enumeration
algorithms to study how they can be turned into emptiness
checks. This leads us to define two new emptiness check algorithms
(one of them based on the union-find data structure),
introduce new optimizations, and show that one of these can be of
benefit to a classic SCC enumeration algorithm. We have
implemented all these variants to compare their relative
performances and the overhead induced by the emptiness check
compared to the corresponding SCC enumeration algorithm.
Our experiments show that these three algorithms are comparable.
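The connection between SCC enumeration and emptiness checking can be sketched for plain Büchi automata with a single acceptance set: the language is non-empty iff some reachable cycle contains an accepting state, i.e. some non-trivial SCC (or self-loop) meets the acceptance set. This toy Tarjan-based check is far simpler than the generalized-acceptance algorithms the paper studies:

```python
# Toy SCC-based emptiness check for a single-acceptance-set Büchi
# automaton (sketch only; the paper handles generalized acceptance).

import itertools

def buchi_nonempty(edges, init, accepting):
    """Non-empty iff a non-trivial SCC reachable from init contains
    an accepting state (Tarjan's SCC algorithm underneath)."""
    index, low = {}, {}
    stack, on_stack = [], set()
    counter = itertools.count()
    found = [False]

    def dfs(v):
        index[v] = low[v] = next(counter)
        stack.append(v)
        on_stack.add(v)
        for w in edges.get(v, []):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            trivial = len(scc) == 1 and v not in edges.get(v, [])
            if not trivial and any(s in accepting for s in scc):
                found[0] = True

    dfs(init)
    return found[0]

# a -> b -> c -> b: a cycle through the accepting state c.
edges = {'a': ['b'], 'b': ['c'], 'c': ['b']}
print(buchi_nonempty(edges, 'a', {'c'}))  # → True
print(buchi_nonempty(edges, 'a', {'a'}))  # → False (a is on no cycle)
```

An on-the-fly emptiness check stops as soon as such an SCC is found rather than enumerating them all; handling multiple acceptance sets without a blow-up in complexity is exactly what the paper's algorithms add.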
I'm very pleased to announce the release of Spot 1.2.
Spot is a model-checking library developed collaboratively by LRDE
and LIP6. It provides algorithms and data structures to implement
the automata-theoretic approach to LTL model checking.
This release contains several new features (summarized below), but the
most notable are the interface with ltl2dstar (i.e., Spot can read
deterministic Rabin and Streett automata and convert them into Büchi
automata) and algorithms for minimizing deterministic TGBA or BA using
a SAT-solver.
You can find the new release here:
http://spot.lip6.fr/dl/spot-1.2.tar.gz
Please report any issue to <spot(a)lrde.epita.fr>.
New in spot 1.2 (2013-10-01)
* Changes to command-line tools:
- ltlcross has a new option --color to color its output. It is
enabled by default when the output is a terminal.
- ltlcross will give an example of infinite word accepted by the
two automata when the product between a positive automaton and a
negative automaton is non-empty.
- ltlcross can now read the Rabin and Streett automata output by
ltl2dstar. This type of output should be specified using '%D':
ltlcross 'ltl2dstar --ltl2nba=spin:path/to/ltl2tgba@-s %L %D'
However because Spot only supports Büchi acceptance, these Rabin
and Streett automata are immediately converted to TGBAs before
further processing by ltlcross. This is still interesting to
search for bugs in translators to Rabin or Streett automata, but
the statistics (of the resulting TGBAs) might not be very relevant.
- When ltlcross obtains a deterministic automaton from a
translator it will now complement this automaton to perform
additional intersection checks. This complementation is done
only for deterministic automata (because it is cheap in that
case) and can be disabled with --no-complement.
- To help with debugging problems detected by ltlcross, the
environment variables SPOT_TMPDIR and SPOT_TMPKEEP control where
temporary files are created and if they should be erased. Read
the man page of ltlcross for details.
- There is a new command, named dstar2tgba, that converts a
deterministic Rabin or Streett automaton (expressed in the
output format of ltl2dstar) into a TGBA, BA or Monitor.
In the case of Rabin acceptance, the conversion will output a
deterministic Büchi automaton if such an automaton exists. Even
if no such automaton exists, the conversion will
preserve the determinism of any SCC that can be kept
deterministic.
In the case of Streett acceptance, the conversion produces
non-deterministic Büchi automata with Generalized acceptance.
These are then degeneralized if requested.
See http://spot.lip6.fr/userdoc/dstar2tgba.html for some
examples, and the man page for more reference.
- The %S escape sequence used by ltl2tgba --stats to display the
number of SCCs in the output automaton has been renamed to %c.
This makes it more homogeneous with the --stats option of the
new dstar2tgba command.
Additionally, the %p escape can now be used to show whether the
output automaton is complete, and the %r escape will give the
number of seconds spent building the output automaton (excluding
the time spent parsing the input).
- ltl2tgba, ltl2tgta, and dstar2tgba have a --complete option
to output complete automata.
- ltl2tgba, ltl2tgta, and dstar2tgba can use a SAT-solver to
minimize deterministic automata. Doing so is only needed on
properties that are stronger than obligations (for obligations
our WDBA-minimization procedure will return a minimal
deterministic automaton more efficiently) and is disabled by
default. See the spot-x(7) man page for documentation about the
related options: sat-minimize, sat-states, sat-acc, state-based.
See also http://spot.lip6.fr/userdoc/satmin.html for some
examples.
- ltlfilt, genltl, and randltl now have a --latex option to output
formulas in a way that is easier to embed in a LaTeX document.
Each operator is output as a command such as \U, \F, etc.
doc/tl/spotltl.sty gives one possible definition for each macro.
- ltlfilt, genltl, and randltl have a new --format option to
indicate how to present the output formula, possibly with
information about the input.
- ltlfilt has a new option, --relabel-bool, to abstract independent
Boolean subformulae as if they were atomic propositions.
For instance "a & GF(c | d) & b & X(c | d)" would be rewritten
as "p0 & GF(p1) & Xp1".
* New functions and classes in the library:
- dtba_sat_synthetize(): Use a SAT-solver to build an equivalent
deterministic TBA with a fixed number of states.
- dtba_sat_minimize(), dtba_sat_minimize_dichotomy(): Iterate
dtba_sat_synthetize() to reduce the number of states of a TBA.
- dtgba_sat_synthetize(), dtgba_sat_minimize(),
dtgba_sat_minimize_dichotomy(): Likewise, for deterministic TGBA.
- is_complete(): Check whether a TGBA is complete.
- tgba_complete(): Complete an automaton by adding a sink state
if needed.
- dtgba_complement(): Complement a deterministic TGBA.
- satsolver(): Run an (external) SAT solver, honoring the
SPOT_SATSOLVER environment variable if set.
- tba_determinize(): Run a power-set construction, and attempt
to fix the acceptance simulation to build a deterministic TBA.
- dstar_parse(): Read a Streett or Rabin automaton in
ltl2dstar's format. Note that this format allows only
deterministic automata.
- nra_to_nba(): Convert a (possibly non-deterministic) Rabin
automaton to a non-deterministic Büchi automaton.
- dra_to_ba(): Convert a deterministic Rabin automaton to a Büchi
automaton, preserving acceptance in all SCCs where this is possible.
- nsa_to_tgba(): Convert a (possibly non-deterministic) Streett
automaton to a non-deterministic TGBA.
- dstar_to_tgba(): Convert any automaton returned by dstar_parse()
into a TGBA.
- build_tgba_mask_keep(): Build a masked TGBA that shows only
a subset of states of another TGBA.
- build_tgba_mask_ignore(): Build a masked TGBA that ignores
a subset of states of another TGBA.
- class tgba_proxy: Helps writing on-the-fly algorithms that
delegate most of their methods to the original automaton.
- class bitvect: A dynamic bit vector implementation.
- class word: An infinite word, stored as prefix + cycle, with a
simplify() method to simplify cycle and prefix in obvious ways.
- class temporary_file: A temporary file. Can be instantiated with
create_tmp_file() or create_open_tmpfile().
- count_state(): Return the number of states of a TGBA. Implements
a couple of specializations for classes where it can be known
without exploration.
- to_latex_string(): Output a formula using LaTeX syntax.
- relabel_bse(): Relabeling of Boolean Sub-Expressions.
Implements ltlfilt's --relabel-bool option described above.
* Noteworthy internal changes:
- When minimize_obligation() is not given the formula associated
with the input automaton, but that automaton is deterministic, it
can still attempt to call minimize_wdba() and check the correctness
using dtgba_complement(). This allows dstar2tgba to apply
WDBA-minimization to deterministic Rabin automata.
- tgba_reachable_iterator_depth_first has been redesigned to
effectively perform a DFS. As a consequence, it does not
inherit from tgba_reachable_iterator anymore.
- postproc::set_pref() used to accept an argument among Any,
Small, or Deterministic. These can now be combined with Complete
as Any|Complete, Small|Complete, or Deterministic|Complete.
- operands of n-ary operators (like & and |) are now ordered so
that Boolean terms come first. This speeds up syntactic
implication checks slightly. Also, literals are now sorted
using strverscmp(), so that p5 comes before p12.
- Syntactic implication checks have been generalized slightly
(for instance 'a & b & F(a & b)' is now reduced to 'a & b'
while it was not changed in previous versions).
- All the parsers implemented in Spot now use the same type to
store locations.
- Cleanup of exported symbols
All symbols in the library now have hidden visibility on ELF systems.
Public classes and functions have been marked explicitly for export
with the SPOT_API macro.
During this massive update, some functions that should not have
been made public in the first place have been moved away so that
they can only be used from within the library. Some old or unused
functions have been removed.
removed:
- class loopless_modular_mixed_radix_gray_code
hidden:
- class acc_compl
- class acceptance_convertor
- class bdd_allocator
- class free_list
* Bug fixes:
- Degeneralization was not idempotent on automata with an
accepting initial state lying on a cycle but without a
self-loop.
- Configuring with --enable-optimization would reset the value of
CXXFLAGS.
--
Alexandre Duret-Lutz