Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, December 13th, 2017 (11am--12pm), in Amphi 4 at EPITA.
On the seminar website [1] you will find the upcoming sessions, the
abstracts, video recordings, and slides of previous talks [2], the
details of this session [3], and directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2017-12-13
[4] http://www.lrde.epita.fr/wiki/Contact
Program for Wednesday, December 13th, 2017:
* 11am--12pm: Towards learning a visual common sense
-- Camille Couprie, Facebook AI Research
https://perso.esiee.fr/~coupriec/
Convolutional neural networks have enjoyed great success over the past
few years in many visual recognition applications. We will first review
the pioneering work in semantic segmentation (the labeling of each pixel
of an image with a semantic category). We will then explore a direction
for improvement, based on training adversarial networks, that aims to
reduce the amount of labeled data required.
In a second part, we will turn to the problem of predicting future
frames in videos: while it seems easy for us to anticipate what will
happen in the very near future, this is a difficult problem to model
mathematically, given the many sources of uncertainty. We will present
our prediction work in the domain of natural images, and then in the
higher-level space of semantic segmentations, which allows us to
predict further into the future.
-- Camille Couprie is a researcher at Facebook Artificial Intelligence
Research. She received her PhD in computer science from Université
Paris-Est in 2011, under the supervision of Hugues Talbot, Laurent
Najman and Leo Grady, with research focused on the formulation and
optimization of computer vision problems on graphs. In 2012, she worked
as a postdoc at the Courant Institute of New York University with Yann
LeCun. After a position at IFP Énergies nouvelles, a French research
organization active in the fields of energy, transport and the
environment, she joined Facebook in 2015.
Admission to the seminar is free. Please circulate this announcement as
widely as possible, and do not hesitate to send us your suggestions for
speakers.
--
Guillaume TOCHON
Maître de conférences // Assistant Professor
LRDE, EPITA
18, rue Pasteur
94270 Le Kremlin-Bicêtre
Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, November 29th, 2017 (10am--11am), in Amphi 4 at EPITA.
On the seminar website [1] you will find the upcoming sessions, the
abstracts, video recordings, and slides of previous talks [2], the
details of this session [3], and directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2017-11-29
[4] http://www.lrde.epita.fr/wiki/Contact
Program for Wednesday, November 29th, 2017:
* 10am--11am: Industrial Formal Verification – Cadence’s JasperGold Formal
Verification Platform
-- Barbara Jobstmann, Cadence Design Systems
Formal verification (aka Symbolic Model Checking) is becoming a
mainstream technology in system-on-a-chip (SoC)/intellectual-property
design and verification methodologies. In the past, the usage of formal
verification was limited to a small range of applications; it was
mainly used by formal verification experts to verify complex protocols
or intrinsic logic functionality. In recent years, we have seen a rapid
adoption of formal verification technology and many new application
areas, such as checking of configuration and status register accesses,
SoC connectivity verification, low-power design verification, security
applications, and many more. In this talk, we give an overview of the
JasperGold Formal Verification Platform. The platform provides a wide
range of formal apps, which ease the adoption of formal verification by
offering property generation and other targeted capabilities for
specific design and verification tasks. In addition, JasperGold offers
a unique interactive debug environment (called Visualize) that allows
the user to easily analyze the verification results. We present
JasperGold from a user’s point of view, showcase selected apps, and
discuss features that were essential for their wide adoption.
-- Barbara Jobstmann is a field application engineer for Cadence Design
Systems and a lecturer at the École Polytechnique Fédérale de Lausanne
(EPFL). She joined Cadence in 2014 through the acquisition of Jasper
Design Automation, where she had worked as an application engineer
since 2012. Before that, she was a CNRS researcher (chargée de
recherche) at Verimag, an academic research laboratory belonging to the
CNRS and the Communauté Université Grenoble Alpes in France. Her
research focused on constructing correct and reliable computer systems
using formal verification and synthesis techniques. She received a PhD
in computer science from Graz University of Technology, Austria, in
2007.
Admission to the seminar is free. Please circulate this announcement as
widely as possible, and do not hesitate to send us your suggestions for
speakers.
--
Guillaume TOCHON
Maître de conférences // Assistant Professor
LRDE, EPITA
18, rue Pasteur
94270 Le Kremlin-Bicêtre
The Vcsn team is happy to announce the long overdue release of Vcsn 2.6.
Most of our work was devoted to providing a better, smoother user
experience. This includes improvements in the build system, better
performance, extended consistency, and more legible error messages.
Multitape expressions also received a lot of attention.
For more information, see the detailed news below.
We warmly thank the users who provided valuable feedback (read: "bug
reports"), including Victor Miller, Dominique Soudière and Harald
Schilly. Dominique and Harald helped integrate Vcsn into CoCalc
(formerly SageMathCloud).
People who contributed to this release:
- Akim Demaille
- Clément Démoulins
- Clément Gillard
- Sarasvati Moutoucomarapoulé
Our next stop should be Vcsn 3.0, which will include extensive changes from
Sarasvati.
Release page: https://vcsn.lrde.epita.fr/Vcsn2.6
Tarball: https://www.lrde.epita.fr/dload/vcsn/2.6/vcsn-2.6.tar.bz2
Home page: https://vcsn.lrde.epita.fr
The various packages (Docker, Debian, MacPorts, etc.) will appear
soon.
## New features
### Promotion from single to multitape
It is now possible to mix single-tape expressions with multitape
expressions. This is especially handy when using label classes
(`[abc]`). For instance:
In [2]: zmin2 = vcsn.context('lat<lan, lan>, zmin')
In [3]: zmin2.expression(r'([ab] + ⟨1⟩(\e|[ab] + [ab]|\e))*')
Out[3]: (a|a+b|b+<1>(\e|(a+b)+(a+b)|\e))*
is an easy means to specify an expression generating an automaton that
computes the edit distance (between words or languages). Or, with a
wide alphabet:
In [4]: zmin = vcsn.context('lan(a-z), zmin')
In [5]: zmin2 = zmin | zmin
In [6]: zmin2
Out[6]: {abcdefghijklmnopqrstuvwxyz}? x {abcdefghijklmnopqrstuvwxyz}? -> Zmin
In [7]: zmin2.expression(r'([^] + ⟨1⟩(\e|[^] + [^]|\e))*')
In the future the display of expressions will also exploit this approach.
### vcsn compile now links with precompiled contexts
If you write a program using dyn::, then it will benefit from all the
precompiled algorithms.
### vcsn compile now supports --debug
Use this option to keep the intermediate object files. On some
platforms, such as macOS, the object file contains the debug symbols,
so removing it makes debugging harder.
### vcsn compile now uses -rpath
Programs/libraries generated by `vcsn compile` needed to be told where
the Vcsn libraries were installed. In practice, this meant that `vcsn
run` was needed to execute the program, and in some situations even
that was not sufficient.
Now, `vcsn compile my-prog.cc` generates `my-prog` which can be run directly.
### random_expression supports the tuple operator
It is now possible to generate multitape expressions such as `(a|x*)*`.
Before, random_expression was limited to expressions on multitape labels
such as `(a|x)*`.
In [1]: import vcsn
In [2]: c = vcsn.context('lan(abc), q')
In [3]: c2 = c | c
In [4]: for i in range(10):
...: e = c2.random_expression('|, +, ., *=.2, w.=.2, w="min=-2, max=2"', length=10)
...: print('{:u}'.format(e))
...:
a|a+(ε|b)*+⟨1/2⟩ε|ε
(ε+c)a|(⟨2⟩ε+c)
(ε|b+(c|a)*)*
ε|a+a|b
c|b+⟨2⟩ε|(ε+b)
ε|a+((a|ε)(ε|b)(ε|c))*(b|ε)
(a|b+c|ε)(a|b)
ε*|ε
ε+a|a+⟨2⟩(b|ε)
(ε+ε*)|ε
Generating expressions with compose operators is also supported.
In [5]: for i in range(10):
...: e = c2.random_expression('|=.5, +, ., @=2', length=10, identities='none')
...: print('{:u}'.format(e))
...:
((b|ε)ε)(ε(ε|a))
ε(ε|c)+((ε|a)(a|b)+ε|c)
((ε|a)(ε|b))((b|a)(c|c)@b|b)
(b|b@ε+a|b)@(c|ε@b|ε)
(ε+ε|a)@(ε|c@ε|a+b|ε)
(ε|a+ε|c)((ε|b)(c|ε))
(a|ε)((ε+c)|(a+ε))
c|ε@(a|ε)(ε|b@b|ε)
(ε+ε|b)@ε|b+ε|a
b(ε²)|((ε+ε)+a)
### New algorithms
A few algorithms were added:
- context.compose
Composing `{abc} x {xyz}` with `{xyz} x {ABC}` gives, of course,
`{abc} x {ABC}`. (See the sketch after this list.)
- expansion.normalize, expansion.determinize
These are rather low-level features. The latter is used internally when
derived-term is asked for a deterministic automaton.
- expansion.expression
The "projection" of an expansion as an expression.
- label.compose
For instance `a|b|x @ x|c` -> `a|b|c`.
- polynomial.compose
Additive extension of monomial composition. For instance composing
`a|e|x + a|e|y + a|e|\e` with `x|E|A + y|E|A + \e|E|A` gives
`<3>a|e|E|A`.
- polynomial.shuffle and polynomial.infiltrate
The missing siblings of polynomial.conjunction.
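As a hedged sketch of the first of these items, here is how context
composition might look from the Python bindings; Vcsn usually exposes
dyn:: algorithms as Python methods, but the exact spelling below is an
assumption, not taken from these notes.

    import vcsn

    # Assumption: context.compose is exposed as a .compose() method.
    c1 = vcsn.context('lat<lan(abc), lan(xyz)>, b')  # {abc}? x {xyz}? -> B
    c2 = vcsn.context('lat<lan(xyz), lan(ABC)>, b')  # {xyz}? x {ABC}? -> B

    # Composing {abc} x {xyz} with {xyz} x {ABC} gives {abc} x {ABC}.
    print(c1.compose(c2))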
### Doxygen is no longer needed
By default, `make install` generated and installed the C++ API
documentation using Doxygen. This is now disabled by default; pass
`--enable-doxygen` to `configure` to restore it.
## Bug Fixes
### Severe performance regression when reading daut
Version 2.5 introduced a large performance penalty when reading Daut
files, especially large ones.
### Tupling of automata could be wrong on weights
As a consequence, `expression.inductive` was also producing incorrect
automata from multitape expressions.
### Polynomials featuring the zero-label
Under some circumstances it was possible to have polynomials featuring
monomials whose label is the zero of the labelset (e.g., `\z` with
expressions as labels). This is fixed.
### Portability issues
Newest compilers are now properly supported.
## Changes
### Divisions
The division of expressions and automata now computes the product of
the weights, not their quotient.
### Handling of the compose operator
The support of the compose operator in expansions was completely
rewritten. As a result, derived-term automata are often much smaller,
since many useless (non-coaccessible) states are no longer generated.
For instance, the derived-term automaton of `(\e|a)* @ (aa|\e)*` has
exactly two states; it used to have three additional useless states.
### Better diagnostics
Many error messages have been improved.
The Daut automaton format now treats `->` as a keyword, so `0->1 a` is
now properly read instead of producing a confusing error message
because Vcsn thought the state was named `0->1`.
Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, November 8th, 2017 (10am--12pm), in Amphi 4 at EPITA.
On the seminar website [1] you will find the upcoming sessions, the
abstracts, video recordings, and slides of previous talks [2], the
details of this session [3], and directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2017-11-08
[4] http://www.lrde.epita.fr/wiki/Contact
Program for Wednesday, November 8th, 2017:
* 10am--11am: Reading the lines of the human brain
-- Jean-François Mangin, NeuroSpin, CEA, Paris-Saclay
www.cati-neuroimaging.com
Palm reading is an ancestral practice with no scientific basis, even
though certain patterns are associated with congenital malformations
such as trisomy 21. This talk will describe the emergence of a true
science of reading the "lines of the human brain", whether the folds of
its cortex or the trajectories of the fiber bundles that make up its
long-distance wiring. Unusual shapes of these folds or bundles are
sometimes the trace of developmental anomalies that may increase the
risk of developing certain pathologies.
-- Jean-François Mangin is a research director at the CEA, where he
leads an algorithmic neuroimaging research group within NeuroSpin, the
CEA's high-field MRI platform. He is also the director of CATI, the
French platform created by the Alzheimer plan to handle large
multicenter neuroimaging studies, and co-director of the "Human
Strategic Data" subproject of the Human Brain Project, the largest
research project of the European Commission. He is an engineering
graduate of École Centrale Paris and holds a doctorate from Télécom
ParisTech. His research program aims at developing computer vision
tools dedicated to the interpretation of brain images. His team is
particularly interested in the anomalies of cortical folding or
connectivity associated with pathologies, and distributes the software
tools resulting from this research to the community.
* 11am--12pm: Machine learning in neuroimaging: application to brain
diseases
-- Edouard Duchesnay, NeuroSpin, CEA, Paris-Saclay
Home page: https://duchesnay.github.io/
Machine learning, or multivariate "pattern recognition", can identify
complex patterns associated with a variable of interest in
high-dimensional data. Once the algorithm has learned, it is applied to
a new individual in order to predict their future evolution. Magnetic
resonance imaging (MRI) provides an efficient and non-invasive approach
to study the structural and functional changes of the brain associated
with the clinical condition of patients. By combining machine learning
and brain imaging, we can envision the emergence of personalized
medicine, where algorithms that have learned from the past predict the
probable future response of a given patient to a specific treatment.
This new information will guide clinicians in their therapeutic
choices. We will present the complexity of the MRI data involved, the
learning algorithms, and their applications to brain diseases.
-- Edouard Duchesnay received an engineering degree in software
engineering from EPITA in 1997 (SCIA specialization), then a master's
degree and a PhD in signal and image processing from the Université de
Rennes 1, in 1998 and 2001 respectively. Since 2008, he has been a
research scientist at NeuroSpin, the CEA's MRI neuroimaging center. He
develops machine learning algorithms that provide diagnostic and
prognostic tools, as well as biomarker discovery methods, for brain
diseases. E. Duchesnay is a major contributor to the ParsimonY Python
machine learning library, dedicated to high-dimensional structured data
such as brain imaging or genetic data, and has also contributed to the
scikit-learn Python machine learning library.
Admission to the seminar is free. Please circulate this announcement as
widely as possible, and do not hesitate to send us your suggestions for
speakers.
--
Guillaume TOCHON
Maître de conférences // Assistant Professor
LRDE, EPITA
18, rue Pasteur
94270 Le Kremlin-Bicêtre
I am happy to announce that the following paper has been accepted
for publication in the Journal of Mathematical Imaging and Vision (JMIV):
A Tutorial on Well-Composedness
Nicolas Boutry (1), Thierry Géraud (1) and Laurent Najman (2)
(1) LRDE, EPITA,
Le Kremlin-Bicêtre, France
(2) Université Paris-Est, LIGM, Équipe A3SI, ESIEE,
Marne-la-Vallée, France
Abstract:
Due to digitization, usual discrete signals generally present
topological paradoxes, such as Rosenfeld's connectivity paradoxes. To
get rid of these paradoxes, and to restore some topological properties
to the objects contained in the image, such as manifoldness, Latecki
proposed a new class of images, called well-composed images, with no
topological issues. Furthermore, well-composed images have some other
interesting properties: for example, the Euler number is locally
computable, boundaries of objects separate background from foreground,
the tree of shapes is well defined, and so on. Last but not least, some
recent works in mathematical morphology have shown that very nice
practical results can be obtained thanks to well-composed images.
Believing in its prime importance in digital topology, we therefore
propose this state of the art of well-composedness, summarizing its
different flavours, the existing methods to produce well-composed
signals, and the various topics related to well-composedness.
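To make the central notion concrete, here is a minimal sketch (ours,
not taken from the paper) of the classical 2D criterion: a binary image
is well-composed if no 2x2 block forms a diagonal "critical"
configuration.

    import numpy as np

    def is_well_composed_2d(img):
        # Compare the four corners of every 2x2 block: a b / c d.
        a, b = img[:-1, :-1], img[:-1, 1:]
        c, d = img[1:, :-1], img[1:, 1:]
        # Critical configuration: equal diagonals, opposite values.
        critical = (a == d) & (b == c) & (a != b)
        return not critical.any()

    print(is_well_composed_2d(np.array([[1, 0], [0, 1]])))  # False
    print(is_well_composed_2d(np.array([[1, 1], [0, 1]])))  # True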
URL:
http://publications.lrde.epita.fr/boutry.17.jmiv
Nicolas Boutry
Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, September 27th, 2017 (11am--12pm), in Room L0 at the LRDE.
On the seminar website [1] you will find the upcoming sessions, the
abstracts, video recordings, and slides of previous talks [2], the
details of this session [3], and directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2017-09-27
[4] http://www.lrde.epita.fr/wiki/Contact
Program for Wednesday, September 27th, 2017:
* 11am--12pm: Frama-C, a collaborative and extensible platform for the
analysis of C code
-- Julien Signoles, CEA LIST, Laboratoire de Sûreté des Logiciels (LSL)
speaker: http://julien.signoles.free.fr
Frama-C is a C code analysis platform aimed at verifying
industrial-size C programs. It provides its users with a collection of
plug-ins that perform, in particular, static analyses based on abstract
interpretation and deductive methods, or enable runtime verification.
The platform also allows the analyses to cooperate, thanks to a shared
kernel and a common specification language.
This talk presents an overview of the platform, its main analyzers, and
a few industrial applications. It focuses on the ACSL specification
language and on different ways to verify ACSL specifications with
static or dynamic analyses.
-- Julien Signoles received a PhD in computer science from Université
Paris 11 in 2006, and became a research engineer at CEA LIST in 2007.
Within the Laboratoire de Sûreté des Logiciels (LSL), he is one of the
main developers of Frama-C. His research currently focuses on runtime
verification and its various applications for improving the safety and
security of critical software.
Admission to the seminar is free. Please circulate this announcement as
widely as possible, and do not hesitate to send us your suggestions for
speakers.
--
Guillaume TOCHON
Maître de conférences // Assistant Professor
LRDE, EPITA
18, rue Pasteur
94270 Le Kremlin-Bicêtre
We are happy to announce the release of Spot 2.4.
Spot is a library of algorithms for manipulating LTL formulas and
omega-automata (objects as commonly used in model checking).
A detailed list of new features in this new release is given at the end
of this email. You can find the new release here:
http://www.lrde.epita.fr/dload/spot/spot-2.4.tar.gz
See https://spot.lrde.epita.fr/ for documentation and installation
instructions.
This release contains code contributed by Clément Gillard, Etienne
Renault, Henrich Lauko, Maximilien Colange, Thomas Medioni and myself.
As always, please direct any feedback to <spot(a)lrde.epita.fr>.
New in spot 2.4 (2017-09-06)
Build:
- Spot is now built in C++14 mode, so you need at least GCC 5 or
clang 3.4. All major Linux distributions currently ship with at
least GCC 6, which defaults to C++14, so this should not be a
problem. In *this* release of Spot, most of the header files are
still C++11 compatible, so you should be able to include Spot in
a C++11 project in case you do not yet want to upgrade. There is
also an --enable-c++17 option to configure in case you want to
force a build of Spot in C++17 mode.
Tools:
- genaut is a new binary that produces families of automata defined
in the literature (in the same way as we have genltl for LTL
formulas).
- autcross is a new binary that compares the output of tools
transforming automata (in the same way as we have ltlcross for
LTL translators).
- genltl learned to generate two new families of formulas:
--fxg-or=RANGE F(p0 | XG(p1 | XG(p2 | ... XG(pn))))
--gxf-and=RANGE G(p0 & XF(p1 & XF(p2 & ... XF(pn))))
The latter is a generalization of --eh-pattern=9, for which a
single-state TGBA always exists (but previous versions of Spot
would build larger automata).
- autfilt learned to build the union (--sum) or the intersection
(--sum-and) of two languages by putting two automata side-by-side
and fiddling with the initial states. This complements the already
implemented intersection (--product) and union (--product-or),
both based on a product.
- autfilt learned to complement any alternating automaton with
option --dualize. (See spot::dualize() below.)
- autfilt learned --split-edges to convert labels that are Boolean
formulas into labels that are min-terms. (See spot::split_edges()
below.)
- autfilt learned --simplify-acceptance to simplify some acceptance
conditions. (See spot::simplify_acceptance() below.)
- autfilt --decompose-strength has been renamed to --decompose-scc
because it can now extract the subautomaton leading to an SCC
specified by number. (The old name is still kept as an alias.)
- The --stats=%c option of tools producing automata can now be
restricted to count complete SCCs, using %[c]c.
- Tools producing automata have a --stats=... option, and tools
producing formulas have a --format=... option. These two options
work similarly, the use of different names is just historical.
Starting with this release, all tools that recognize one of these
two options also accept the other one as an alias.
Library:
- A new library, libspotgen, gathers all functions used to generate
families of automata or LTL formulas, used by genltl and genaut.
- spot::sum() and spot::sum_and() implement the union and the
intersection of two automata by putting them side by side and
using non-determinism or universal branching on the initial
state.
- twa objects have a new property: prop_complete(). This obviously
acts as a cache for the is_complete() function.
- spot::dualize() complements any alternating automaton. Since
the dual of a deterministic automaton is still deterministic, the
function spot::dtwa_complement() has been deprecated and simply
calls spot::dualize(). (See the sketch after this list.)
- spot::decompose_strength() was extended and renamed to
spot::decompose_scc() as it can now also extract a subautomaton
leading to a particular SCC. A demonstration of this feature via
the Python bindings can be found at
https://spot.lrde.epita.fr/ipynb/decompose.html
- The print_dot() function will now display names for well-known
acceptance conditions under the formula when option 'a' is used.
We plan to enable 'a' by default in a future release, so a new
option 'A' has been added to hide the acceptance condition.
- The print_dot() function has a new experimental option 'x' to
output labels as LaTeX formulas. This is meant to be used in
conjunction with the dot2tex tool. See
https://spot.lrde.epita.fr/oaut.html#dot2tex
- A new named property for automata called "original-states" can be
used to record the origin of a state before transformation. It is
currently defined by the degeneralization algorithms, and by
transform_accessible() and algorithms based on it (like
remove_ap::strip(), decompose_scc()). This is really meant as an
aid for writing algorithms that need this mapping, but it can also
be used to debug these algorithms: the "original-states"
information is displayed by the dot printer when the 'd' option is
passed. For instance in
% ltl2tgba 'GF(a <-> Fb)' --dot=s
% ltl2tgba 'GF(a <-> Fb)' | autfilt -B --dot=ds
the second command outputs an automaton with states that show
references to the first one.
- A new named property for automata called "degen-levels" keeps track
of the level of a state in a degeneralization. This information
complements the one carried in "original-states".
- A new named property for automata called "simulated-states" can be
used to record the origin of a state through simulation. The
behavior is similar to "original-states" above. Determinization
takes advantage of this in its pretty print.
- The new function spot::acc_cond::is_streett_like() checks whether
an acceptance condition is a conjunction of disjunctive clauses
containing at most one Inf and at most one Fin. It builds a
vector of pairs to use if we want to assume the automaton has
Streett acceptance. The dual function
spot::acc_cond::is_rabin_like() works similarly.
- The degeneralize() function has learned to consider acceptance
marks common to all edges coming into a state to select its
initial level. A similar trick was already used in sbacc(), and
it saves a few states in some cases.
- There is a new spot::split_edges() function that transforms edges
(labeled by Boolean formulas over atomic propositions) into
transitions (labeled by conjunctions where each atomic proposition
appears either positively or negatively). This can be used to
preprocess automata before feeding them to algorithms or tools
that expect transitions labeled by letters. (See the sketch after
this list.)
- spot::scc_info has two new methods to easily iterate over the
edges of an SCC: edges_of() and inner_edges_of().
- spot::scc_info can now be passed a filter function to ignore
or cut some edges.
- spot::scc_info now keeps track of acceptance sets that are common
to all edges in an SCC. These can be retrieved using
scc_info::common_sets_of(scc), and they are used by scc_info to
classify some SCCs as rejecting more easily.
- The new function acc_code::remove() removes all the given
acceptance sets from the acceptance condition.
- It is now possible to change an automaton's acceptance condition
directly by calling twa::set_acceptance().
- spot::cleanup_acceptance_here now takes an additional Boolean
parameter specifying whether to strip useless marks from the
automaton. It defaults to true, which keeps this modification
backward-compatible.
- The new function spot::simplify_acceptance() is able to perform
some simplifications on an acceptance condition, and might lead
to the removal of some acceptance sets.
- The function spot::streett_to_generalized_buchi() is now able to
work on automata with Streett-like acceptance.
- The function for converting deterministic Rabin automata to
deterministic Büchi automata (when possible), internal to the
remove_fin() procedure, has been updated to work with
transition-based acceptance and with Rabin-like acceptance.
- spot::relabel_here(), previously used on automata to rename
atomic propositions, can now replace atomic propositions by
Boolean subformulas. This makes it possible to use relabeling
maps produced by relabel_bse() on formulas.
- twa_graph::copy_state_names_from() can be used to copy the state
names from another automaton, honoring "original-states" if
present.
- Building automata for LTL formulas with a large number N of
atomic propositions can be costly, because several loops and
data structures are exponential in N. However, a formula like
((a & b & c) | (d & e & f)) U ((d & e & f) | (g & h & i))
can be translated more efficiently by first building an automaton
for (p0 | p1) U (p1 | p2), and then substituting p0, p1, p2 with
the appropriate Boolean formulas. Such a trick is now attempted
for the translation of formulas with 4 atomic propositions or
more (this threshold can be changed; see -x relabel-bool=N in
the spot-x(7) man page).
- The LTL simplification routines learned that an LTL formula like
G(a & XF(b & XFc & Fd)) can be simplified to G(a & Fb & Fc & Fd),
and dually F(a | XG(b | XGc | Gd)) = F(a | Gb | Gc | Gd).
When working with SEREs, the simplification of "expr[*0..1]" was
improved. E.g., {{a[*]|b}[*0..1]} becomes {a[*]|b} instead of
{{a[+]|b}[*0..1]}.
- The new function spot::to_weak_alternating() is able to take an
input automaton with generalized Büchi/co-Büchi acceptance and
convert it to a weak alternating automaton.
- spot::sbacc() can now also convert alternating automata
to state-based acceptance.
- spot::sbacc() and spot::degeneralize() learned to merge
accepting sinks.
- If the SPOT_BDD_TRACE environment variable is set, statistics
about BDD garbage collection and table resizing are shown.
- The & and | operators for acceptance conditions have been changed
slightly to be more symmetrical. In older versions, operator &
would move Fin() terms to the front, but that is not the case
anymore. Also, operator & was already grouping all Inf() terms
(for efficiency reasons); in this version, operator | is
symmetrically grouping all Fin() terms.
- The automaton parser is now reentrant, making it possible to
process automata from different streams at the same time (i.e.,
using multiple spot::automaton_stream_parser instances at once).
- The print_hoa() and parse_automaton() functions have been updated
to recognize the "exist-branch" property of the non-released HOA
v1.1, as well as the new meaning of property "deterministic". (In
HOA v1 "properties: deterministic" means that the automaton has no
existential branching; in HOA v1.1 it disallows universal
branching as well.) The meaning of "deterministic" in Spot has
been adjusted to these new semantics, see "Backward-incompatible
changes" below.
- The parser for HOA now recognizes and verifies correct use of the
"univ-branch" property. This is known to be a problem with option
-H1 of ltl3ba 1.1.2 and ltl3dra 0.2.4, so the environment variable
SPOT_HOA_TOLERANT can be set to disable the diagnostic.
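As flagged above for spot::dualize() and spot::split_edges(), here is a
hedged sketch of how these two entries are expected to look from the
Python bindings; that the Python-level names mirror the C++ ones is an
assumption here, not something these notes document.

    import spot

    # Build a small automaton to play with.
    aut = spot.translate('GFa')

    # Assumed Python counterparts of the C++ functions above.
    dual = spot.dualize(aut)        # complementation by dualization
    split = spot.split_edges(aut)   # one letter per transition

    print(dual.to_str('hoa'))
    print(split.to_str('hoa'))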
Python:
- The 'spot.gen' package exports the functions from libspotgen.
See https://spot.lrde.epita.fr/ipynb/gen.html for examples.
Bugs fixed:
- When the remove_fin() function was called on some automaton with
Inf-less acceptance involving at least one disjunction (e.g.,
generalized co-Büchi), it would sometimes output an automaton with
transition-based acceptance but marked as state-based.
- The complete() function could complete an empty co-Büchi automaton
into an automaton accepting everything.
Backward-incompatible changes:
- spot::acc_cond::mark_t::operator bool() has been marked as
explicit. The implicit conversion to bool (and, via bool, to int)
was a source of bugs.
- spot::twa_graph::set_init_state(const state*) has been removed.
It was never used. You always want to use
spot::twa_graph::set_init_state(unsigned) in practice.
- The previous implementation of spot::is_deterministic() has been
renamed to spot::is_universal(). The new version of
spot::is_deterministic() requires the automaton to be both
universal and existential. This should not make any difference in
existing code unless you work with the recently added support for
alternating automata.
- spot::acc_cond::mark_t::sets() now returns an internal iterable
object instead of an std::vector<unsigned>.
- The color palette optionally used by print_dot() has been extended
from 9 to 16 colors. While the first 8 colors are similar, they
are a bit more saturated now.
Deprecation notices:
(These functions still work but compilers emit warnings.)
- spot::decompose_strength() is deprecated, it has been renamed
to spot::decompose_scc().
- spot::dtwa_complement() is deprecated. Prefer the more generic
spot::dualize() instead.
- The spot::twa::prop_deterministic() methods have been renamed to
spot::twa::prop_universal() for consistency with the change to
is_deterministic() listed above. We have kept
spot::twa::prop_deterministic() as a deprecated synonym for
spot::twa::prop_universal() to help backward compatibility.
- The spot::copy() function is deprecated. Use
spot::make_twa_graph() instead.
--
Alexandre Duret-Lutz
The Vcsn team is happy to announce that the following paper was
accepted at the 14th International Colloquium on Theoretical Aspects
of Computing (ICTAC 2017), to be held on 23-27 October 2017 in Hanoi,
Vietnam.
Quotient Operators for Weighted Rational Expressions
Akim Demaille, Thibaud Michaud
EPITA/LRDE
Quotients of words play a key role in formal language theory, yet they
have rarely been studied in the context of weighted rational
expressions. We define the left and right quotients of rational
series, and introduce corresponding operators for weighted
expressions. We generalize an alternative construction of the
derived-term (or Antimirov, or equation) automaton, based on
expansions, to handle these operators.
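These operators can be experimented with in Vcsn. A minimal sketch,
assuming `{\}` is the concrete syntax for the left quotient (the
notebook linked below gives the authoritative syntax):

    import vcsn

    # Assumption: '{\}' is the left-quotient operator in expressions.
    q = vcsn.context('lan(ab), q')
    e = q.expression(r'a {\} (ab)')

    # The expansion-based derived-term construction generalized in
    # the paper handles this operator.
    a = e.derived_term()
    print(a.shortest(3))  # expected: the series reduced to 'b'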
For more information about this publication see
http://publis.lrde.epita.fr/demaille.17.ictac
To visit Vcsn's home page, go to (considered harmless)
http://vcsn.lrde.epita.fr
To play with Vcsn on the topic of this paper
http://vcsn-sandbox.lrde.epita.fr/notebooks/Doc/ICTAC-2017.ipynb
Hello,
We are pleased to announce the release of issue 38 of the LRDE
newsletter.
It is a special issue dedicated to the seminars of the LRDE student
researchers, which will take place on Tuesday, July 4th, 2017.
In it you will find the program of the day along with the abstracts of
the talks. We also give the floor to three of our colleagues who have
decided to leave for new horizons.
You can download the newsletter in color from the following page:
http://www.lrde.epita.fr/wiki/LrdeBulletin/l-air-de-rien-38
--
Daniela Becker
Administrative manager of the LRDE
I am happy to announce that the following paper has been accepted at
the 20th International Conference on Theory and Applications of
Satisfiability Testing (SAT 2017), to be held in Melbourne, Australia,
from August 28th to September 1st.
"PaInleSS: a Framework for Parallel SAT Solving"
Ludovic Le Frioux (1,2), Souheib Baarir (1,2,3), Julien Sopena (2), and Fabrice Kordon (2)
(1) LRDE, EPITA, Kremlin-Bicêtre, France
(2) Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7606, LIP6, Paris, France
(3) Université Paris Nanterre, France.
Abstract:
Over the last decade, parallel SAT solving has been widely studied
from both theoretical and practical aspects. There are now numerous
solvers that differ by parallelization strategies, programming
languages, concurrent programming, involved libraries, etc. Hence,
comparing the efficiency of the theoretical approaches is a
challenging task. Moreover, the introduction of a new approach
requires either a deep understanding of the existing solvers, or
implementing a new tool from scratch.
We present PaInleSS: a framework to build parallel SAT solvers for
many-core environments. Thanks to its genericity and modularity, it
provides the implementation of basics for parallel SAT solving, such
as clause exchange, Portfolio and Divide-and-Conquer strategies. It
also enables users to easily create their own parallel solvers based
on new strategies.
Our experiments show that our framework compares well with some of
the best state-of-the-art solvers.
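To illustrate the Portfolio strategy mentioned in the abstract, here is
a toy sketch of the idea in Python; it is not PaInleSS's API (the
framework itself is written in C++), just the race-to-the-answer
pattern it generalizes.

    import itertools
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def brute_force(cnf, n_vars, order):
        # Toy sequential "solver": enumerate assignments following a
        # given variable order; here a configuration is an ordering.
        for bits in itertools.product([False, True], repeat=n_vars):
            assign = dict(zip(order, bits))
            if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in cnf):
                return assign
        return None

    def portfolio(cnf, n_vars, orders):
        # Race one solver per configuration; the first answer wins.
        with ProcessPoolExecutor() as pool:
            futures = [pool.submit(brute_force, cnf, n_vars, o)
                       for o in orders]
            result = next(as_completed(futures)).result()
            for f in futures:
                f.cancel()  # best effort: running workers still finish
            return result

    if __name__ == '__main__':
        cnf = [[1, 2], [-1, 2]]  # (x1 or x2) and (not x1 or x2)
        print(portfolio(cnf, 2, [[1, 2], [2, 1]]))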
Souheib Baarir.