ELS'17 - 10th European Lisp Symposium
VUB - Vrije Universiteit Brussel
Belgium
April 3-4, 2017
Co-located with <Programming> 2017
http://www.european-lisp-symposium.org/
The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.
The 10th European Lisp Symposium invites high-quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.
Topics include but are not limited to:
- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies
We invite submissions in the following forms:
Papers: Technical papers of up to 8 pages that describe original
results or explain known ideas in new and elegant ways.
Demonstrations: Abstracts of up to 2 pages for demonstrations of
tools, libraries, and applications.
Tutorials: Abstracts of up to 4 pages for in-depth presentations,
lasting between 90 and 180 minutes, on topics of special interest.
The symposium will also provide slots for lightning talks, to be
registered on-site every day.
All submissions should be formatted following the ACM SIGS guidelines
and include ACM classification categories and terms. For more
information on the submission guidelines and the ACM keywords, see:
http://www.acm.org/sigs/publications/proceedings-templates and
http://www.acm.org/about/class/1998. The conference proceedings will be
published in the ACM Digital Library.
Important dates:
- 30 Jan 2017 Submission deadline
- 27 Feb 2017 Notification of acceptance
- 20 Mar 2017 Final papers due
- 03-04 Apr 2017 Symposium
Programme chair:
Alberto Riva, University of Florida, USA
Programme committee:
tba
Search Keywords:
#els2017, ELS 2017, ELS '17, European Lisp Symposium 2017,
European Lisp Symposium '17, 10th ELS, 10th European Lisp Symposium,
European Lisp Conference 2017, European Lisp Conference '17
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info
Dear colleagues,
The next session of the LRDE seminar
(Laboratoire de Recherche et Développement de l'EPITA) will take place on
Wednesday, September 28, 2016 (11:00--12:00), in room L0 at the LRDE.
On the seminar website [1] you will find the upcoming sessions,
the abstracts, video recordings, and slides of previous talks [2],
the details of this session [3], and directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2016-09-28
[4] http://www.lrde.epita.fr/Contact
Program for Wednesday, September 28, 2016:
* 11:00--12:00: Transforming B2B sales prospecting with data science
-- Samuel Charron
http://samuelcharron.fr
Traditionally, B2B sales prospecting relies on manually classified
lists of companies, based on administrative data recorded when the
companies were created. These lists are therefore often outdated,
very expensive, and poorly qualified.
C-Radar, a product developed by the company Data Publica, aims to
transform sales prospecting by enriching this administrative data
with data gathered from the web. The result is fresher, better-targeted
data. And thanks to machine-learning and clustering techniques,
relevant lists of companies can be obtained automatically and at low
cost.
In this talk, we will go over the various data-science techniques
used in this product, in relation to the product's features.
-- A 2008 CSI graduate, Samuel Charron has worked for several
software vendors, always on the theme of exploiting and adding value
to data gathered from the web.
Attendance is free. Please circulate this information as widely as
possible. Do not hesitate to send us your suggestions for speakers.
--
Akim Demaille
Akim.Demaille(a)lrde.epita.fr
We are happy to announce the release of Spot 2.1.
Spot 2.1 is mainly a collection of small improvements that have
accumulated since the release of Spot 2.0 four months ago. A
detailed list of those is given below.
You can find the new release here:
http://www.lrde.epita.fr/dload/spot/spot-2.1.tar.gz
See https://spot.lrde.epita.fr/ for documentation and installation
instructions.
As always, please direct any feedback to <spot(a)lrde.epita.fr>.
New in spot 2.1 (2016-08-08)
Command-line tools:
* All tools that input formulas or automata (i.e., autfilt,
dstar2tgba, ltl2tgba, ltl2tgta, ltlcross, ltldo, ltlfilt,
ltlgrind) now have a more homogeneous handling of the default
input.
- If no formula/automaton has been specified, and the standard
input is not a tty, then the default is to read that. This is a
change for ltl2tgba and ltl2tgta. In particular, it simplifies
genltl --dac | ltl2tgba -F- | autfilt ...
into
genltl --dac | ltl2tgba | autfilt ...
- If standard input is a tty and no other input has been
specified, then an error message is printed. This is a change for
autfilt, dstar2tgba, ltlcross, ltldo, ltlfilt, and ltlgrind, which
used to expect the user to type formulas or automata at the
terminal, confusing people.
- All tools now accept - as a shorthand for -F-, to force reading
input from the standard input (regardless of whether it is a tty
or not). This is a change for ltl2tgba, ltl2tgta, ltlcross, and
ltldo.
* ltldo has a new option --errors=... to specify how to deal
with errors from executed tools.
* ltlcross and ltldo learned to bypass the shell when executing
simple commands (with support for single or double-quoted
arguments, and redirection of stdin and stdout, but nothing more).
* ltlcross and ltldo learned a new syntax to specify that an input
formula should be written in some given syntax after rewriting
some operators away. For instance the default arguments passed
to ltl2dstar have been changed from
--output-format=hoa %L %O
into
--output-format=hoa %[WM]L %O
where [WM] specifies that operators W and M should be rewritten
away. As a consequence running
ltldo ltl2dstar -f 'a M b'
will now work and call ltl2dstar on the equivalent formula
'b U (a & b)' instead. The operators that can be listed between
brackets are the same as those of ltlfilt --unabbreviate option.
* ltlcross learned to show some counterexamples when diagnosing
failures of cross-comparison checks against random state spaces.
* autfilt learned to filter automata by count of SCCs (--sccs=RANGE)
or by type of SCCs (--accepting-sccs=RANGE,
--rejecting-sccs=RANGE, --trivial-sccs=RANGE, --terminal-sccs=RANGE,
--weak-sccs=RANGE, --inherently-weak-sccs=RANGE).
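For Python users, the SCC count these options filter on can also be
computed directly; a minimal sketch using Spot's Python bindings
(the input formula is an arbitrary example):
    import spot
    aut = spot.translate('GFa & GFb')  # build some input automaton
    si = spot.scc_info(aut)            # SCC decomposition of aut
    print(si.scc_count())              # the count --sccs=RANGE filters on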
* autfilt learned --remove-unused-ap to remove atomic propositions
that are declared in the input automaton, but not actually used.
This of course makes sense only for input/output formats that
declare atomic propositions (HOA & DSTAR).
* autfilt learned two options to filter automata by count of used or
unused atomic propositions: --used-ap=RANGE --unused-ap=RANGE.
These differ from --ap=RANGE, which considers only *declared* atomic
propositions, regardless of whether they are actually used.
* autfilt learned to filter automata by count of nondeterministic
states with --nondet-states=RANGE.
* autfilt learned to filter automata representing stutter-invariant
properties with --is-stutter-invariant.
* autfilt learned two new options to highlight non-determinism:
--highlight-nondet-states=NUM and --highlight-nondet-edges=NUM,
where NUM is a color number. Additionally, --highlight-nondet=NUM is
a shorthand for using both.
* autfilt learned to highlight a run matching a given word using the
--highlight-word=[NUM,]WORD option. However, this currently only
works on automata with Fin-less acceptance.
* autfilt learned two options --generalized-rabin and
--generalized-streett to convert the acceptance conditions.
* genltl learned three new families: --dac-patterns=1..45,
--eh-patterns=1..12, and --sb-patterns=1..27. Unlike other options
these do not output scalable patterns, but simply a list of formulas
appearing in these three papers: Dwyer et al (FMSP'98), Etessami &
Holzmann (Concur'00), Somenzi & Bloem (CAV'00).
* genltl learned two options, --positive and --negative, to control
whether formulas should be output negated or not (or both).
* The formatter used by --format (for ltlfilt, ltlgrind, genltl,
randltl) or --stats (for autfilt, dstar2tgba, ltl2tgba, ltldo,
randaut) learned to recognize double-quoted fields and to double
any double quote output in between, as expected of RFC 4180-compliant
CSV files. For instance
ltl2tgba -f 'a U "b+c"' --stats='"%f",%s'
will output
"a U ""b+c""",2
* The --csv-escape option of genltl, ltlfilt, ltlgrind, and randltl
is now deprecated. The option is still here, but hidden and
undocumented.
* The --stats option of autfilt, dstar2tgba, ltl2tgba, ltldo, and
randaut learned to display the output (or input, if that makes
more sense) automaton as a HOA one-liner using %h (or %H), helping
to create CSV files containing automata.
* autfilt and dstar2tgba learned to read automata from columns in
CSV files, specified using the same filename/COLUMN syntax used by
tools reading formulas.
* Arguments passed to -x (in ltl2tgba, ltl2tgta, autfilt, dstar2tgba)
that are not used are now reported, as they might be typos.
This occurred a couple of times in our test suite. A similar
check is done for the arguments of autfilt --sat-minimize=...
Library:
* The print_hoa() function will now output version 1.1 of the HOA
format when passed the "1.1" option (i.e., use -H1.1 from any
command-line tool). As far as Spot is concerned, this allows
negated properties to be expressed. Version 1 of the HOA format
is still the default, but we plan to default to version 1.1 in the
future.
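From Python, the same option can be passed when printing an
automaton; a minimal sketch (assuming to_str() forwards printer
options the same way -H does on the command line):
    import spot
    aut = spot.translate('a U b')
    print(aut.to_str('hoa', '1.1'))  # HOA v1.1; plain 'hoa' still gives v1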
* The "highlight-states" and "highlight-edges" named properties,
which were introduced in 1.99.8, will now be output using
"spot.highlight.edges:" and "spot.highlight.states:" headers if
version 1.1 of the HOA format is selected. The automaton parser
has secretly been able to read those since 1.99.8, but this is now
documented at https://spot.lrde.epita.fr/hoa.html#extensions
* highlight_nondet_states() and highlight_nondet_edges() are
new functions that define the above two named properties.
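A minimal Python sketch of these two functions (assuming they are
exposed under the same names in the Python bindings; the formula and
color numbers are arbitrary):
    import spot
    aut = spot.translate('FGa')           # a nondeterministic automaton
    spot.highlight_nondet_states(aut, 1)  # fills "highlight-states"
    spot.highlight_nondet_edges(aut, 2)   # fills "highlight-edges"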
* is_deterministic(), is_terminal(), is_weak(),
is_inherently_weak(), count_nondet_states(),
highlight_nondet_edges(), and highlight_nondet_states() will update
the corresponding properties of the automaton as a side effect of
their check.
* the sbacc() function, used by "ltl2tgba -S" and "autfilt -S" to
convert automata to state-based acceptance, learned some tricks
(using SCCs, pulling acceptance marks common to all outgoing edges,
and pushing acceptance marks common to all incoming edges) to
reduce the number of additional states needed.
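From Python, the conversion looks as follows (a sketch, assuming
sbacc() is exposed under the same name in the bindings):
    import spot
    aut = spot.translate('GFa & GFb')  # transition-based acceptance
    sb = spot.sbacc(aut)               # same language, state-based acceptance
    print(aut.num_states(), sb.num_states())  # sb may need a few more states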
* to_generalized_rabin() and to_generalized_streett() are two new
functions that convert the acceptance condition as requested
without changing the transition structure.
* language_containment_checker now has default values for all
parameters of its constructor.
* spot::twa_run has a new method, project(), that can be used to
project a run found in a product onto one of the original operands.
This is for instance used by autfilt --highlight-word.
The old spot::project_twa_run() function has been removed: it was
not used anywhere in Spot, and had an obvious bug in its
implementation, so nobody should miss it.
* spot::twa has two new methods that supplement is_empty():
twa::accepting_run() and twa::accepting_word(). They compute
what their names suggest. Note that twa::accepting_run(), unlike
the two others, is currently restricted to automata with Fin-less
acceptance.
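A minimal Python sketch of these methods (the exact run and word
returned may differ between versions):
    import spot
    aut = spot.translate('a U b')
    print(aut.is_empty())        # False: the language is non-empty
    print(aut.accepting_word())  # a lasso-shaped word accepted by aut
    print(aut.accepting_run())   # a matching accepting run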
* spot::check_stutter_invariance() can now work on non-deterministic
automata for which no corresponding formula is known. This
implies that "autfilt --check=stutter" will now label all
automata, not just deterministic automata.
* New LTL and PSL simplification rules:
- if e is a pure eventuality and g => e, then e U g = Fg
- if u is purely universal and u => g, then u R g = Gg
- {s[*0..j]}[]->b = {s[*1..j]}[]->b
- {s[*0..j]}<>->b = {s[*1..j]}<>->b
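As a concrete instance of the first rule: take e = F(a|b), a pure
eventuality, and g = a, which implies e; then F(a|b) U a is
equivalent to Fa. This can be cross-checked with the
language_containment_checker bindings mentioned below (a sketch,
assuming the checker exposes equal() as in C++):
    import spot
    lcc = spot.language_containment_checker()
    f = spot.formula('F(a | b) U a')
    g = spot.formula('Fa')
    print(lcc.equal(f, g))  # True: both denote the same language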
* spot::twa::succ_iterable renamed to
spot::internal::twa_succ_iterable to make it clear this is not for
public consumption.
* spot::fair_kripke::state_acceptance_conditions() renamed to
spot::fair_kripke::state_acceptance_mark() for consistency. This
is backward incompatible, but we are not aware of any actual use
of this method.
Python:
* The __format__() method for formula supports the same
operator-rewriting feature introduced in ltldo and ltlcross.
So "{:[i]s}".format(f) is the same as
"{:s}".format(f.unabbreviate("i")).
* Bindings for language_containment_checker were added.
* Bindings for randomize() were added.
* Iterating over edges via "aut.out(s)" or "aut.edges()"
now allows modifying the edge fields.
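For instance, this sketch relabels every edge of an automaton
(replacing each guard by true is just an arbitrary modification):
    import spot
    import buddy
    aut = spot.translate('a U b')
    for e in aut.edges():        # iterate over all edges
        e.cond = buddy.bddtrue   # cond, src, dst, acc are now writable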
* Under IPython the spot.ltsmin module now offers a
%%pml magic to define Promela models, compile them
with spins, and dynamically load them. This is
akin to the %%dve magic that was already supported.
* The %%dve and %%pml magics honor the SPOT_TMPDIR and
TMPDIR environment variables. This especially helps
when the current directory is read-only.
Documentation:
* A new example page shows how to test the equivalence of
two LTL/PSL formulas. https://spot.lrde.epita.fr/tut04.html
* A new page discusses explicit vs. on-the-fly interfaces for
exploring automata in C++. https://spot.lrde.epita.fr/tut50.html
* Another new page shows how to implement an on-the-fly Kripke
structure for a custom state space.
https://spot.lrde.epita.fr/tut51.html
* The concepts.html page now lists all named properties
used by automata.
Bug fixes:
* When ltlcross found a bug using a product of complemented
automata, the error message would report "Comp(Ni)*Comp(Pj)" as
non-empty while the actual culprit was "Comp(Nj)*Comp(Pi)".
* Fixed some non-deterministic execution of minimize_wdba() that
caused test-suite failures with the upcoming G++ 7 and Clang 3.9.
* print_lbtt() had a memory leak when printing states without
successors.
--
Alexandre Duret-Lutz
I am happy to announce that the following paper has been accepted at the
2nd International Workshop on Robust Reading (IWRR), held in conjunction
with ECCV in Amsterdam on September 9, 2016:
From text detection to text segmentation:
a unified evaluation scheme
Stefania Calarasanu (1), Jonathan Fabrizio (1) and Séverine Dubuisson (2)
(1) LRDE-EPITA, 14-16 rue Voltaire, F-94276 Le Kremlin-Bicêtre, France
(2) CNRS, UMR 7222, ISIR, F-75005, Paris, France
Abstract:
Current text segmentation evaluation protocols are often incapable of
properly handling different scenarios (broken/merged/partial characters).
This leads to scores that incorrectly reflect the segmentation accuracy.
In this article we propose a new evaluation scheme that overcomes most
of the existing drawbacks by extending the EvaLTex protocol (initially
designed to evaluate text detection at the region level). This new unified platform
has numerous advantages: it is able to evaluate a text understanding system
at every detection stage and granularity level (paragraph/line/word and now
character) by using the same metrics and matching rules; it is robust to all
segmentation scenarios; it provides a qualitative and quantitative evaluation
and a visual score representation that captures the whole behavior of a
segmentation algorithm. Experimental results on nine segmentation algorithms
using different evaluation frameworks are also provided to demonstrate the
interest of our method.
Ana Stefania Calarasanu
PhD Engineer - EPITA Research and Development Laboratory (LRDE)
14-16 rue Voltaire, 94276 Le Kremlin-Bicêtre CEDEX, France
https://www.lrde.epita.fr/wiki/User:Calarasanu
About four hundred commits and five months after Vcsn 2.2, we are proud to
announce the release of Vcsn 2.3, code-named "the tuple release"!
http://vcsn.lrde.epita.fr/Vcsn2.3
As usual, many bugs were fixed (some quite old yet unnoticed so far!).
Noteworthy changes include:
- a particular effort went into the documentation: there are thirty-five
new documentation notebooks, and about forty others were improved.
- full support for a "tuple" operator on all entities: expressions,
polynomials, automata, etc.
In [13]: aut = lambda e: vcsn.context('lan, q').expression(e).automaton()
In [14]: a = aut('[ab]*') | aut('x')
In [15]: a.shortest(6)
Out[15]: \e|x + a|x + b|x + aa|x + ab|x + ba|x
It is also available in the rational expressions themselves:
In [16]: c = vcsn.context('lat<lan, lan>, q'); c
Out[16]: {...}? x {...}? -> Q
In [17]: e = c.expression('[ab]*|x'); e
Out[17]: (a+b)*|x
In [18]: e.shortest(6)
Out[18]: \e|x + a|x + b|x + aa|x + ab|x + ba|x
The derived-term algorithm supports this operator, and generates
equivalent multitape automata.
- many error messages were improved, to help users understand their
mistakes. For instance, instead of
In [2]: vcsn.Q.expression('a**').derivation('a')
RuntimeError: q: star: invalid value: 1
we now display:
In [2]: vcsn.Q.expression('a**').derivation('a')
RuntimeError: Q: value is not starrable: 1
while computing derivative of: a**
with respect to: a
Besides, longish stack traces under Python are now stripped of
internal details.
- in addition to `%automaton a`, which allows interactive editing of
automata, the notebooks now feature two new interactive editors:
`%context c` to edit/create context `c`, and `%expression e` for
expressions (with an interactive display of the generated automata).
- one may now generate random rational expressions and control the
operators and their probabilities.
- a lot of code improvement and consistency enforcement, both in C++ and in
Python.
For more details, see the news page (linked below).
People who worked on this release:
- Akim Demaille
- Clément Gillard
- Lucien Boillod
- Raoul Billion
- Sébastien Piat
- Thibaud Michaud
People who influenced this release:
- Alexandre Duret-Lutz
- Jacques Sakarovitch
- Luca Saiu
- Sylvain Lombardy
Vcsn 2.3: http://vcsn.lrde.epita.fr/Vcsn2.3
News page: http://vcsn.lrde.epita.fr/News_File
Vcsn: http://vcsn.lrde.epita.fr
Playground: http://vcsn-sandbox.lrde.epita.fr
Doc: https://vcsn.lrde.epita.fr/dload/2.3/notebooks/!Read-me-first.html
Hello,
We are pleased to announce that the LRDE's five-year activity
report is now available on our website.
It covers the LRDE's activities between 2011 and 2015.
You can download it here:
http://www.lrde.epita.fr/dload/rapport_activite/ra2015.pdf
--
Daniela Becker
Administrative manager of the LRDE
We are pleased to announce that the following paper was accepted to
the 13th International Colloquium on Theoretical Aspects of Computing
(ICTAC 2016).
Derived-term Automata for Extended Weighted Rational Expressions
Akim Demaille
EPITA/LRDE
http://publis.lrde.epita.fr/demaille.16.ictac
We present an algorithm to build an automaton from a rational
expression. This approach introduces support for extended weighted
expressions. Inspired by derived-term based algorithms, its core
relies on a different construct, rational expansions. We introduce an
inductive algorithm to compute the expansion of an expression from
which the automaton follows. This algorithm is independent of the
size of the alphabet, and actually even supports infinite alphabets.
It can easily be adapted to generate deterministic (weighted)
automata. These constructs are implemented in Vcsn, a free-software
platform dedicated to weighted automata and rational expressions.
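In Vcsn's Python interface, the constructs described above can be
experimented with roughly as follows (a sketch; the context and
expression are arbitrary, and the textual rendering of expansions
may differ):
    import vcsn
    c = vcsn.context('lal, q')         # free labels, rational weights
    e = c.expression('<2>a+<3>(ab)*')  # an extended weighted expression
    print(e.expansion())               # the rational expansion of e
    a = e.derived_term()               # automaton built from iterated expansions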
Hello,
We are pleased to announce the release of issue 36 of the LRDE newsletter.
It is a special issue dedicated to the seminar of the LRDE student
researchers, which takes place this Friday, July 1, 2016.
In it you will find the program of the day along with the abstracts of the talks.
We also present two new projects in which members of the LRDE Image
team are taking part.
You can download the newsletter in color at the following page:
http://www.lrde.epita.fr/wiki/LrdeBulletin/l-air-de-rien-36
--
Daniela Becker
Hello,
We are pleased to invite you to the LRDE Student Seminar.
It will take place on Friday, July 1, 2016, starting at 10:00, in the Masters amphitheater (KB).
--------------------------------------------
Program:
SPEAKER VERIFICATION
10:00: Domain-Mismatch Compensation for Speaker Recognition
-- VALENTIN IOVENE
Domain mismatch occurs when the data used to train a system and the
data used to evaluate it come from different sources. This study
presents the state-of-the-art techniques used to compensate for
domain mismatch, such as a library of whiteners and the use of
dataset-invariant covariance normalization.
10:30: Bottleneck Neural Networks for Speaker Recognition
-- GUILLAUME DAVAL-FREROT
A bottleneck neural network is a particular form of deep neural
network. This work presents its principle and its use to reprocess
MFCC coefficients in a speaker-recognition system, and then studies
the performance of the network on its own and within the system.
11:00: Distance Computation with a Siamese Deep Neural Network
-- ANATOLE MOREAU
We use a Siamese architecture to compute a similarity score for a
pair of samples. The DNN projects these samples into a
lower-dimensional subspace. We compare the results with classical
measures.
SPOT -- A MODEL-CHECKING LIBRARY
13:00: Support for Alternating Automata
-- AMAURY FAUCHILLE
Alternating automata add universal branching to the existential
branching of nondeterministic automata. Alternating Büchi automata
are exponentially more concise than nondeterministic Büchi automata.
Moreover, they are well suited to translating linear temporal logic,
since their size is proportional to the size of the translated
formula. This work aims to add support for alternating automata to
Spot. This will make Spot compatible with tools that produce and
consume alternating automata, and will allow future algorithms
working on alternating automata to be implemented.
13:30: Products of Parity Automata
-- LAURENT XU
Spot can build deterministic parity automata from nondeterministic
Büchi automata. We present a parity-preserving product of
transition-based parity automata. This method may improve the
existing determinization of automata.
OLENA -- GENERIC AND HIGH-PERFORMANCE IMAGE PROCESSING
14:00: Automatic Segmentation of Cassini Maps
-- ANNE-CLAIRE BERTHET
We aim to segment old maps of France that were cut into pieces and
then glued back onto fabric. Using morphological filters, we obtain
parts of the border with pixel accuracy. To finalize the
segmentation, we define a mask separating the image into three zones
which, combined with a classification method, yields results close
to the actual border.
14:30: Supervised Discrimination of Characters in Images
-- THIBAULT DEUTSCH
Separating letters from non-letters is an important topic in the
field of character recognition. To build a model from the training
data, we run a principal component analysis (PCA), then apply a
PLDA.
VCSN -- AN AUTOMATA-MANIPULATION LIBRARY
15:30: Random Generation of Rational Expressions
-- LUCIEN BOILLOD
We present the implementation of an efficient and generic algorithm
for randomly generating weighted rational expressions. We then
present a way to generate random paths in weighted automata.
16:00: Quotients of Weighted Automata and Rational Series
-- THIBAUD MICHAUD
We show how the quotient was implemented in Vcsn. After defining the
left and right quotients of languages, we explain the algorithm
implemented in Vcsn to compute the quotient of two automata. We then
explore the consequences of introducing the quotient on the
expression side.
16:30: K Shortest Paths in Vcsn
-- SÉBASTIEN PIAT
Several algorithms exist to compute multiple shortest paths in a
graph. One of them, Yen's algorithm, hides transitions in the graph
between each shortest-path computation. This work presents the
implementation of this algorithm in Vcsn, along with the various
optimization techniques considered (notably the implementation of a
sparse set).
17:00: Contributing to dyn::
-- RAOUL BILLION
To make up for C++'s lack of flexibility and rather heavy
compilation times, Vcsn introduces a layer named dyn:: that
provides, among other things, on-the-fly compilation. This work
presents our thoughts on how to manage and optimize our compilation
times, as well as improvements made to the dyn:: layer.
--
Daniela Becker
Administrative manager of the LRDE
I am happy to announce that the following two papers have been
accepted at the 22nd International SPIN Symposium on Model Checking of
Software (SPIN 2015) to be held in Stellenbosch, South Africa on 24–26
August 2015.
---
On Refinement of Büchi Automata for Explicit Model Checking
František Blahoudek¹, Alexandre Duret-Lutz²,
Vojtěch Rujbr¹, and Jan Strejček¹
¹Faculty of Informatics, Masaryk University, Brno, Czech Republic
²LRDE, EPITA, Le Kremlin-Bicêtre, France
https://www.lrde.epita.fr/wiki/Publications/blahoudek.15.spin
Abstract:
In explicit model checking, systems are typically described in an
implicit and compact way. Some valid information about the system
can be easily derived directly from this description, for example
that some atomic propositions cannot be valid at the same time. The
paper shows several ways to apply this information to improve the
Büchi automaton built from an LTL specification. As a result, we get
smaller automata with shorter edge labels that are easier to
understand and, more importantly, for which the explicit model
checking process performs better.
---
Practical Stutter-Invariance Checks for ω-Regular Languages
Thibaud Michaud and Alexandre Duret-Lutz
LRDE, EPITA, Le Kremlin-Bicêtre, France
https://www.lrde.epita.fr/wiki/Publications/michaud.15.spin
Abstract:
We propose several automata-based constructions that check whether a
specification is stutter-invariant. These constructions assume that
a specification and its negation can be translated into Büchi
automata, but aside from that, they are independent of the
specification formalism. These transformations were inspired by a
construction due to Holzmann and Kupferman, which we broke down
into two operations that can have different realizations and that
can be combined in different ways. As it turns out, only one of
these operations needs to be implemented to obtain a functional
stutter-invariance check. Finally, we have implemented these
techniques in a tool so that users can easily check whether an LTL
or PSL formula is stutter-invariant.
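These checks are available from Spot; a minimal sketch (the exact
Python entry point, is_stutter_invariant(), is an assumption based
on recent versions of the bindings):
    import spot
    f = spot.formula('F a')   # stutter-invariant
    g = spot.formula('X a')   # not stutter-invariant
    print(spot.is_stutter_invariant(f))  # True
    print(spot.is_stutter_invariant(g))  # False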
--
Alexandre Duret-Lutz