Hello,
It is my great pleasure to announce that my PhD defense will be held on Tuesday 20 November
at EPITA, at 10h00, in room KB 604. Everyone is welcome as long as space is
available in the room. Please also join the reception (pot) afterwards (12h30)
in room IP 12a.
https://www.lrde.epita.fr/wiki/Affiche-these-JN
If you’d like to take a look at the thesis itself, it is available here temporarily until it reaches its final resting place.
https://drive.google.com/drive/folders/14L7kFNNyIOv3e-apqDTynCRPPWwoY2bw
Representing and Computing with Types in Dynamically Typed Languages
Extending Dynamic Language Expressivity to Accommodate Rationally Typed Sequences
Abstract:
We present code generation techniques related to run-time type checking of heterogeneous sequences. Traditional regular expressions can be used to recognize well-defined sets of character strings called rational languages, or sometimes regular languages. Newton et al. present an extension whereby a dynamic language may recognize a well-defined set of heterogeneous sequences, such as lists and vectors.
As with the analogous string-matching regular expression theory, matching these regular type expressions can also be achieved using a finite state machine (a deterministic finite automaton, DFA). Constructing such a DFA can be time-consuming. The approach we chose uses meta-programming to intervene at compile time, generating efficient functions specific to each DFA and allowing the compiler to further optimize them if possible. The functions are made available for use at run time. Without this use of meta-programming, the program might otherwise be forced to construct the DFA at run time; the excessively high cost of such a construction would likely far outweigh the time needed to match a sequence against the expression.
Our technique involves hooking into the Common Lisp type system via the deftype macro. The first time the compiler encounters a relevant type specifier, the appropriate DFA is created (an operation that may have exponential complexity), from which low-level code specific to that expression is generated. Thereafter, when the same type specifier is encountered again, the pre-generated function can be reused. The generated code runs in linear time.
A complication of this approach, which we explain in this report, is that to build the DFA we must calculate a disjoint type decomposition, which is time-consuming and also leads to sub-optimal use of typecase in machine-generated code. To handle this complication, we use our own macro, optimized-typecase, in our machine-generated code. Uses of this macro are also implicitly expanded at compile time. Our macro expansion uses BDDs (Binary Decision Diagrams) to compile optimized-typecase into low-level code, maintaining the typecase semantics while eliminating redundant type checks. In the report we also describe an extension of BDDs to accommodate subtyping in the Common Lisp type system, as well as an in-depth analysis of worst-case BDD sizes.
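As a minimal illustration of the idea (in Python rather than Common Lisp, and hand-written rather than generated as in the thesis), here is the kind of DFA-based matcher the compile-time machinery would produce for the hypothetical rational type expression (str int)*, i.e. sequences alternating strings and integers:

```python
# Sketch only: a hand-built DFA matching the rational type
# expression (str int)*.  The thesis generates such matchers
# automatically from Common Lisp type specifiers; states and
# type predicates here are illustrative assumptions.

def match_str_int_star(seq):
    """Return True iff seq looks like str, int, str, int, ..."""
    state = 0  # state 0: expect a str; state 1: expect an int
    for item in seq:
        if state == 0 and isinstance(item, str):
            state = 1
        elif state == 1 and isinstance(item, int):
            state = 0
        else:
            return False          # no DFA transition: reject
    return state == 0             # accept only on a complete pair
```

As the abstract notes, once the DFA exists the match itself is a single linear pass over the sequence.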
Kind regards
(soon to be Dr) Jim NEWTON
ELS'19 - 12th European Lisp Symposium
Hotel Bristol Palace
Genova, Italy
April 1-2 2019
In co-location with <Programming> 2019
Sponsored by EPITA
http://www.european-lisp-symposium.org/2019/
The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.
The 12th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.
Topics include but are not limited to:
- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies
We invite submissions in the following forms:
Papers: Technical papers of up to 8 pages that describe original
results or explain known ideas in new and elegant ways.
Demonstrations: Abstracts of up to 2 pages for demonstrations of
tools, libraries, and applications.
Tutorials: Abstracts of up to 4 pages for in-depth presentations
about topics of special interest for at least 90 minutes and up to
180 minutes.
The symposium will also provide slots for lightning talks, to be
registered on-site every day.
All submissions should be formatted following the ACM SIGS guidelines
and include ACM Computing Classification System 2012 concepts and
terms. Submissions should be uploaded to Easy Chair, at the following
address: https://www.easychair.org/conferences/?conf=els2019
Note: to help us with the review process please indicate the type of
submission by entering either "paper", "demo", or "tutorial" in the
Keywords field.
Important dates:
- 01 Feb 2019 Submission deadline
- 01 Mar 2019 Notification of acceptance
- 18 Mar 2019 Final papers due
- 01-02 Apr 2019 Symposium
Programme chair:
Nicolas Neuss, FAU Erlangen-Nürnberg, Germany
Programme committee: tba
Search Keywords:
#els2019, ELS 2019, ELS '19, European Lisp Symposium 2019,
European Lisp Symposium '19, 12th ELS, 12th European Lisp Symposium,
European Lisp Conference 2019, European Lisp Conference '19
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info
We are pleased to announce that we have had an article
entitled Intuitive Analysis of Certain Binary Decision Diagrams
accepted in ACM Transactions on Computational Logic.
A link to the paper and abstract are given below.
We’ll be happy to discuss the article if anyone is interested.
Kind Regards
Jim
https://www.lrde.epita.fr/dload/papers/newton.18.tocl.pdf
Abstract:
Binary Decision Diagrams (BDDs) and in particular ROBDDs (Reduced
Ordered BDDs) are a common data structure used in manipulating
Boolean expressions, integrated circuit design, type inferencers,
model checkers, and many other applications. Although the ROBDD is a
lightweight data structure to implement, the behavior, in terms of
memory allocation, may not be obvious to the program architect. We
explore experimentally, numerically, and theoretically the typical
and worst-case ROBDD sizes in terms of number of nodes and residual
compression ratios, as compared to unreduced BDDs. While our
theoretical results are not surprising, as they are in keeping with
previously known results, we believe our method contributes to the
current body of research by our experimental and statistical
treatment of ROBDD sizes. In addition, we provide an algorithm to
calculate the worst-case size. Finally, we present an algorithm for
constructing a worst-case ROBDD of a given number of variables. Our
approach may be useful to projects deciding whether the ROBDD is the
appropriate data structure to use, and in building worst-case
examples to test their code.
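To make the compression being measured concrete, here is a small sketch (not the paper's code) that counts the decision nodes of an ROBDD by applying the two reduction rules (eliminate nodes whose branches agree, share structurally identical nodes), for comparison against the 2^n - 1 decision nodes of the unreduced decision tree:

```python
# Toy illustration of ROBDD reduction by hash-consing.
# f is a Boolean function taking a tuple of n truth values.

def robdd_size(f, n):
    """Number of decision (non-terminal) nodes in the ROBDD of f."""
    cache = {}

    def node(level, assignment):
        if level == n:
            return bool(f(assignment))            # terminal node
        lo = node(level + 1, assignment + (False,))
        hi = node(level + 1, assignment + (True,))
        if lo == hi:                              # elimination rule
            return lo
        key = (level, lo, hi)
        return cache.setdefault(key, key)         # sharing rule

    node(0, ())
    return len(cache)
```

For instance, the 3-variable parity function reduces from 7 tree nodes to 5 ROBDD nodes, while a constant function collapses entirely.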
Dear all,
I'm happy to announce the first public release of Quickref, a global
documentation project for Common Lisp.
Quickref automatically builds a website[1] containing reference manuals
for Quicklisp libraries (either those already installed or all of
them). The reference manuals are generated by Declt[2] (probably the
most complete documentation system for Lisp today) in Texinfo format,
and then processed by makeinfo. Because Declt works by introspection,
this whole process is non-intrusive: library authors do not need to do
anything for their libraries to be documented.
The official Quickref website is kept up-to-date with Quicklisp, and
currently documents 1588 libraries successfully. This work is the result
of a collaboration with Antoine Martin, as part of an internship at
LRDE. The project is still evolving: a new student will arrive in
September to work on parallelization.
The source code is available here:
https://gitlab.common-lisp.net/quickref/quickref
The resulting website may also be regenerated locally with this Docker
2-liner:
docker run --name quickref quickref/quickref
docker cp quickref:/home/quickref/quickref .
Footnotes:
[1] http://quickref.common-lisp.net
[2] https://www.lrde.epita.fr/~didier/software/lisp/misc.php#declt
--
Lisp, Jazz, Aïkido: http://www.didierverna.info
We are happy to announce the release of Spot 2.6.
Spot is a library of algorithms for manipulating LTL formulas and
omega-automata (objects as commonly used in model checking).
This release contains code contributed by Maximilien Colange, Antoine
Martin, and myself. A detailed list of new features in this new
release is given at the end of this email. You can find the new
release here:
http://www.lrde.epita.fr/dload/spot/spot-2.6.tar.gz
See https://spot.lrde.epita.fr/ for documentation and installation
instructions. Note that the unstable Debian packages and
stable RPM packages should be available within 24h.
As always, please direct any feedback to <spot(a)lrde.epita.fr>.
New in spot 2.6 (2018-07-04)
Command-line tools:
- autfilt learned --is-colored to filter automata that use
exactly one acceptance set per edge or state.
- autfilt learned --has-univ-branching and --has-exist-branching
to keep automata that have universal branching, or that make
non-deterministic choices.
- autcross' tool specifications now have %M replaced by the name of
the input automaton.
- autcross now aborts if there is any parser diagnostic for the
input automata (previous versions would use the input automaton
whenever the parser would manage to read something).
- ltlcross, ltldo, and autcross learned shorthands to call
delag, ltl2dra, ltl2dgra, and nba2dpa.
- When autfilt is asked to build a Büchi automaton from
a chain of many products, as in
"autfilt -B --product 1.hoa ... --product n.hoa in.hoa"
then it will automatically degeneralize the intermediate
products to avoid exceeding the number of supported
acceptance sets.
- genltl learned to generate six new families of LTL formulas:
--gf-equiv-xn=RANGE GF(a <-> X^n(a))
--gf-implies-xn=RANGE GF(a -> X^n(a))
--sejk-f=RANGE[,RANGE] f(0,j)=(GFa0 U X^j(b)), f(i,j)=(GFai U
G(f(i-1,j)))
--sejk-j=RANGE (GFa1&...&GFan) -> (GFb1&...&GFbn)
--sejk-k=RANGE (GFa1|FGb1)&...&(GFan|FGbn)
--sejk-patterns[=RANGE] φ₁,φ₂,φ₃ from Sickert et al.'s [CAV'16]
paper (range should be included in 1..3)
- genltl --ms-example can now take a two-range argument,
(as --sejk-f above).
Build:
- ./configure --enable-max-accsets=N lets you specify a maximum
number of acceptance sets that Spot should support. The default
is still 32, but this limit is no longer hardcoded. Larger values
will cost additional memory and time.
- We now distribute RPM packages for Fedora 28. See the Fedora
section of the installation instructions at https://spot.lrde.epita.fr/install.html
Library:
- The PSL/LTL simplification routine learned the following:
q R Xf = X(q R f) if q is suspendable
q U Xf = X(q U f) if q is suspendable
{SERE;1} = {1} if {SERE} accepts [*0]
{SERE;1} = {SERE} if {SERE} does not accept [*0]
- gf_guarantee_to_ba() is a specialized construction for translating
formulas of the form GF(guarantee) to BA or DBA, and
fg_safety_to_dca() is a specialized construction for translating
formulas of the form FG(safety) to DCA. These are generalizations
of some constructions proposed by J. Esparza, J. Křetínský, and
S. Sickert (LICS'18).
These are now used by the main translation routine, and can be
disabled by passing -x '!gf-guarantee' to ltl2tgba. For example,
here are the size of deterministic transition-based generalized
Büchi automata constructed from four GF(guarantee) formulas with
two versions of Spot, and converted to other types of
deterministic automata by other tools distributed with Owl 18.06.
"x(y)" means x states and y acceptance sets.
ltl2tgba -D delag ltl2dra
2.5 2.6 18.06 18.06
------------------------- ----------- -------------
GF(a <-> XXa) 9(1) 4(1) 4(2) 9(4)
GF(a <-> XXXa) 27(1) 8(1) 8(2) 25(4)
GF(((a & Xb) | XXc) & Xd) 6(1) 4(1) 16(1) 5(2)
GF((b | Fa) & (b R Xb)) 6(2) 2(1) 3(4) 3(4)
Note that in the above the automata produced by 'ltl2tgba -D' in
version 2.5 were not deterministic (because -D is only a
preference). They are deterministic in 2.6 and other tools.
- spot::product() and spot::product_or() learned to produce an
automaton with a simpler acceptance condition if one of the
arguments is a weak automaton. In this case the resulting
acceptance condition is (usually) that of the other argument.
- spot::product_susp() and spot::product_or_susp() are new
functions for building products between an automaton A and
a "suspendable" automaton B. They are also optimized for
the case that A is weak.
- When 'generic' acceptance is enabled, the translation routine will
split the input formula on Boolean operators into components that
are syntactically 'obligation', 'suspendable', or 'something
else'. Those will be translated separately and combined with
product()/product_susp(). This is inspired by the way things are
done in ltl2dstar or delag, and can be disabled by passing option
-xltl-split=0, as in ltl2tgba -G -D -xltl-split=0. Here are the
sizes of deterministic automata (except for ltl3tela which produces
non-deterministic automata) produced with generic acceptance
using two versions of ltl2tgba and other tools for reference.
ltl2tgba -DG delag ltl3tela
2.5 2.6 18.06 1.1.2
------------ ------- --------
FGa0&GFb0 2(2) 1(2) 1(2) 1(2)
(FGa1&GFb1)|FGa0|GFb0 16(6) 1(4) 1(4) 1(9)
(FGa2&GFb2)|((FGa1|GFb1)&FGa0&GFb0) 29(8) 1(6) 1(6) 3(11)
FGa0|GFb0 5(4) 1(2) 1(2) 1(5)
(FGa1|GFb1)&FGa0&GFb0 8(4) 1(4) 1(4) 1(7)
(FGa2|GFb2)&((FGa1&GFb1)|FGa0|GFb0) 497(14) 1(6) 1(6) 1(14)
GFa1 <-> GFz 4(6) 1(3) 1(4) 1(7)
(GFa1 & GFa2) <-> GFz 8(4) 1(3) 1(6) 3(10)
(GFa1 & GFa2 & GFa3) <-> GFz 21(4) 1(4) 1(8) 3(13)
GFa1 -> GFz 5(4) 1(2) 1(2) 1(5)
(GFa1 & GFa2) -> GFz 12(4) 1(3) 1(3) 1(6)
(GFa1 & GFa2 & GFa3) -> GFz 41(4) 1(4) 1(4) 1(7)
FG(a|b)|FG(!a|Xb) 4(2) 2(2) 2(2) 2(3)
FG(a|b)|FG(!a|Xb)|FG(a|XXb) 21(2) 4(3) 4(3) 4(4)
FG(a|b)|FG(!a|Xb)|FG(a|XXb)|FG(!a|XXXb) 170(2) 8(4) 8(4) 8(5)
- For 'parity' output, the 'ltl-split' optimization just separates
obligation subformulas from the rest, where a determinization is
still performed.
ltl2tgba -DP ltl3dra ltl2dpa
2.5 2.6 0.2.3 18.06
-------------- ------- -------
FGp0 & (Gp1 | XFp2) 6(2) 4(1) 4(1) 4(2)
G!p0 | F(p0 & (!p1 W p2)) 5(2) 4(2) n/a 5(2)
(p0 W XXGp0) & GFp1 & FGp2 6(2) 5(2) n/a 6(3)
(The above just show a few cases that were improved. There are
many cases where ltl2dpa still produces smaller automata.)
- The automaton postprocessor will now simplify acceptance
conditions more aggressively, calling spot::simplify_acceptance()
or spot::cleanup_acceptance() depending on the optimization level.
- print_dot(), used to print automata in GraphViz's format,
underwent several changes:
* option "a", for printing the acceptance condition, is now
enabled by default. Option "A", introduced in Spot 2.4, can be
used to hide the acceptance condition in case you do not want
it. This change of course affects the --dot option of all the
command-line tools, as well as the various ways to display
automata using the Python bindings.
* when option "1" is used to hide state names and force the
display of state numbers, the actual state names are now moved to
the "tooltip" field of the state. The SVG files produced by
"dot -Tsvg" will show those as popups. This is also done for
state labels of Kripke structures.
* the output digraph is now named using the name of the automaton
if available, or the empty string otherwise. (Previous versions
used to call all digraphs "G".) This name appears as a tooltip
in SVG figures when the mouse is over the acceptance condition.
* a new option "u" hides "true states" behind "exiting
transitions". This can be used to display alternating automata
in a way many people expect.
* a new option "K" cancels the effect of "k" (which uses state
labels whenever possible). This is most useful when one wants to
force transition-labeling of a Kripke structure, where "k" is
usually the default.
- spot::twa_graph::merge_states() is a new method that merges states
with the exact same outgoing edges. As it performs no reordering
of the edges, it is better to call it when you know that the edges
in the twa_graph are sorted (e.g. after a call to merge_edges()).
- spot::twa_graph::purge_unreachable_states() now takes a function
which is called with the new numbering of states. This is useful
to update an external structure that references states of the twa
that we want to purge.
- spot::scc_filter() now automatically turns automata marked as
inherently-weak into weak automata with state-based acceptance.
The acceptance condition is set to Büchi unless the input had
co-Büchi or t acceptance. spot::scc_filter_states() will pass
inherently-weak automata to spot::scc_filter().
- spot::cleanup_parity() and spot::cleanup_parity_here() are smarter
and now remove from the acceptance condition the parity colors
that are not used in the automaton.
- spot::contains() and spot::are_equivalent() can be used to
check language containment between two automata or formulas.
They are most welcome in Python, since we used to redefine
them every now and then. Some examples are shown in
https://spot.lrde.epita.fr/ipynb/contains.html
- aut1->exclusive_word(aut2) is a new method that returns a word
accepted by aut1 or aut2 but not both. The exclusive_run()
variant returns a run instead. This is useful when comparing automata and
looking for differences. See also
https://spot.lrde.epita.fr/ipynb/contains.html
- spot::complement_semidet(aut) is a new function that returns the
complement of aut, where aut is a semideterministic automaton. The
function uses the NCSB complementation algorithm proposed by
F. Blahoudek, M. Heizmann, S. Schewe, J. Strejček, and M.-H. Tsai
(TACAS'16).
- spot::remove_alternation() was slightly improved on very-weak
alternating automata: the labeling of the outgoing transitions in
the resulting TGBA makes it more likely that simulation-based
reductions will reduce it.
- When applied to automata that are not WDBA-realizable,
spot::minimize_wdba() was changed to produce an automaton
recognizing a language that includes the original one. As a
consequence spot::minimize_obligation() and
spot::is_wdba_realizable() now only need one containment check
instead of two.
- Slightly improved log output for the SAT-based minimization
functions. The CSV log files now include an additional column
with the size of the reference automaton, and they now have a
header line. These log files give more details and are more
accurate in the case of incremental SAT-solving.
Python:
- New spot.jupyter package. This currently contains a function for
displaying several arguments side by side in a Jupyter notebook.
See https://spot.lrde.epita.fr/ipynb/alternation.html for some
examples.
- Strings are now implicitly converted into formulas when passed
as arguments to functions that expect formulas. Previously this
was done only for a few functions.
- The Python bindings for sat_minimize() now have display_log and
return_log options; these are demonstrated on the new
https://spot.lrde.epita.fr/ipynb/satmin.html page.
Bugs fixed:
- Python *.py and *.so files are now always installed into the same
directory. This was an issue on systems like Fedora that separate
platform-specific packages from non-platform-specific ones.
- print_dot() will correctly escape strings containing \n in HTML
mode.
- The HOA parser will now accept Alias: declarations that occur
before AP:.
- The option --allow-dups of randltl now works properly.
- Converting generalized-co-Büchi to Streett using dnf_to_nca()
could produce bogus automata if the input had rejecting SCCs.
Deprecation notices:
- The type spot::acc_cond::mark_t has been overhauled and uses
a custom bit-vector to represent acceptance sets instead of
storing everything in an "unsigned int". This change is
to accommodate configure's --enable-max-accsets=N option and
has several effects:
* The following shortcuts are deprecated:
acc_cond::mark_t m1 = 0U;
acc_cond::mark_t m2 = -1U;
instead, use:
acc_cond::mark_t m1 = {};
acc_cond::mark_t m2 = acc_cond::mark_t::all();
* acc_cond::mark_t::value_t is deprecated. It is now only
defined when --enable-max-accsets=32 (the default) and
equal to "unsigned" for backward compatibility reasons.
Backward incompatibilities:
- Functions spot::parity_product() and spot::parity_product_or()
were removed. The code was unused, hard to maintain, and bogus.
- The output of print_dot() now include the acceptance condition.
Add option "A" (supported since version 2.4) to cancel that.
- Because genltl now supports LTL patterns with two arguments, using
--format=%L may output two comma-separated integers. This is an
issue if you used to produce CSV files using for instance:
genltl --format='%F,%L,%f' ...
Make sure to quote %L to protect the potential commas:
genltl --format='%F,"%L",%f' ...
- In Spot 2.5 and prior, running "ltl2tgba --generic --det" on some
formula would attempt to translate it as deterministic TGBA or
determinize it into an automaton with parity acceptance. Version
2.5 introduced --parity to force parity acceptance on the output.
This version finally gives --generic some more natural semantics:
any acceptance condition can be used.
--
Alexandre Duret-Lutz
Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday 4 July 2018 (11:00--12:00), Amphi IP11.
You will find on the seminar website [1] the upcoming sessions, the
abstracts, video recordings and slides of previous talks [2], the
details of this session [3], as well as directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2018-07-04
[4] http://www.lrde.epita.fr/wiki/Contact
On the program for Wednesday 4 July 2018:
* 11:00--12:00: Is there a theory of anomaly detection in digital
images?
-- Jean-Michel Morel (École Normale Supérieure Paris-Saclay)
https://sites.google.com/site/jeanmichelmorelcmlaenscachan/
In this joint work with Axel Davy, Mauricio Delbracio and Thibaud
Ehret, I will review the classes of algorithms whose goal is to detect
anomalies in digital images. These detectors address the difficult
problem of automatically finding exceptions in background images,
which can be as diverse as a fabric or a mammogram. Detection methods
have been proposed by the thousands, because each problem requires a
different background model. By analyzing existing approaches, we will
show that the problem can be reduced to detecting anomalies in
residual images (extracted from the target image) in which noise and
anomalies predominate. Thus, the general and impossible problem of
modeling an arbitrary background is replaced by that of modeling a
noise, and a noise model allows the computation of rigorous detection
thresholds. The approach to the problem can therefore be unsupervised
and work on arbitrary images. We will illustrate the use of the
so-called a-contrario detection theory, which avoids over-detection by
setting detection thresholds that take the multiplicity of tests into
account.
-- A mathematician by training and a doctor of Université Pierre et
Marie Curie, an assistant at Marseille-Luminy, then maître de
conférences and professor at Université Paris-Dauphine and later at
ENS Cachan, JMM did his first work on non-linear partial differential
equations and variational methods. He then devoted himself to
developing mathematical tools for image processing and analysis, and
for modeling visual perception.
Admission to the seminar is free. Please circulate this information as
widely as possible. Do not hesitate to send us your suggestions for
speakers.
--
Guillaume TOCHON
Maître de conférences // Assistant Professor
LRDE, EPITA
18, rue Pasteur
94270 Le Kremlin-Bicêtre
_______________________________________________
Seminaire mailing list
Seminaire(a)lrde.epita.fr
https://lists.lrde.epita.fr/listinfo/seminaire
Hello,
We are pleased to invite you to the LRDE Student Seminar.
It will take place on Tuesday 3 July 2018, starting at 10:00, in Amphi 4 (KB).
The ING1 and ING2 students will present the results of their work over
the past few months.
On the program:
SPOT — MODEL-CHECKING LIBRARY
10:00 Two-tape accepting-path search in Spot – CLÉMENT GILLARD
We propose a two-tape accepting-path search algorithm, which
complements the two-tape emptiness check presented last year, and
lets us use it in more concrete applications.
10:30 Creating an antichain library – ARTHUR REMAUD
An antichain is a subset of elements that are all pairwise
incomparable. Antichains are used in automata theory as data
structures for storing sets of states. New algorithms have been
implemented and evaluated to optimize insertion into an antichain.
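The antichain insertion discussed in this talk can be sketched as follows (a toy version, not the student's library): sets of states are kept pairwise ⊆-incomparable, so inserting a set either discards it as subsumed or evicts the sets it subsumes.

```python
# Sketch: maintain an antichain of frozensets under subset ordering,
# keeping only the maximal elements.

def antichain_insert(chain, new):
    """Insert frozenset `new` into antichain `chain` (a list of
    frozensets), preserving pairwise incomparability."""
    if any(new <= s for s in chain):
        return chain                       # subsumed: nothing to do
    # drop every element that `new` subsumes, then add `new`
    return [s for s in chain if not s <= new] + [new]
```

A real library would optimize exactly this operation, e.g. with hashing or tries, since it dominates antichain-based automata algorithms.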
11:00 Implementing invisible and transparent transitions in Spot –
VINCENT TOURNEUR
In the field of formal verification, Partial Order Reduction (POR)
is a method that significantly reduces the size of the data
structures used to represent the different executions of a program.
To verify LTL formulas using POR, additional checks must be added
so as not to omit executions that modify the values of the atomic
propositions of the formula. We explain how two methods that solve
this problem work, and how they are implemented in Spot: the
invisible-transitions and transparent-transitions methods.
CLIMB — IMAGE PROCESSING IN LISP
13:30 SUBTYPEP: an implementation of Baker's algorithm – LEO VALAIS
SUBTYPEP is a Common Lisp predicate that indicates whether one type
is a subtype of another. Given the language's type system, it is
not always possible to give an answer. Since implementations of
this function often tend not to answer even when an answer is
possible, Baker published an algorithm that he claims is more
efficient. We propose a partial implementation of his algorithm and
present its advantages and drawbacks.
OLENA — GENERIC AND EFFICIENT IMAGE PROCESSING
14:00 Optimizing an image processing library – VIRGILE HIRTZ
The Pylene image processing library does not conform to the new
standards and suffers performance losses. We present a new design
that solves these problems.
14:30 Integrating histograms into the NL-Means algorithm for image
denoising – ALIONA DANGLA
The main objective of this project is to incorporate information
about the statistical distribution of colors, through the use of
histograms, into NL-Means. This makes it possible to perform any
transformation that does not affect the shape of the histograms,
and to add regions that might not have been taken into account by
the original algorithm.
15:00 Automatic heart segmentation – YOUNES KHOUDLI
We propose an automatic segmentation method, adapted from a method
developed for segmenting the human brain.
--
Daniela Becker
Administrative Manager of the LRDE
Hello,
Please find below a job announcement from the LRDE
(EPITA's Research and Development Laboratory).
https://www.lrde.epita.fr/wiki/Direction_du_laboratoire_%26_Direction_de_la…
---------------------------------------------------------------------------------------
Laboratory Direction & Research Direction of EPITA
Position: Director of Research of EPITA / LRDE
Prerequisite: an HDR (accreditation to supervise research) in one of the laboratory's research areas
Location: EPITA, Porte d'Italie, Kremlin-Bicêtre
Starting date: beginning of the 2018 academic year
Salary: according to experience
EPITA is a (CTI-accredited) engineering school, recognized as a major
school in the field of computer science. The 2018 ranking of l'Usine
Nouvelle declared it the best school for careers in digital technology
and placed it 12th in the overall ranking of French "grandes écoles".
Its status as a private institution gives it great reactivity and
freedom of action.
Research activity at EPITA relies first and foremost on the EPITA
Research and Development Laboratory (LRDE). Created in 1998, it now
counts about thirty members: faculty researchers, administrative
support, PhD students, and a few students of the school selected to
work with the permanent staff. It enjoys broad autonomy within EPITA.
The LRDE's research topics are image processing (from mathematical
morphology to neural networks) and automata theory, with an emphasis
on model checking. The LRDE is a member of the EDITE doctoral school
in Paris, and was evaluated by the Hcéres in 2018. Its activity report
and its publications are available on its website.
In addition, EPITA encourages teachers who are not members of the LRDE
to develop a research activity if they wish, in order to cover the
various topics taught at EPITA (systems, security, big data,
robotics...). The LRDE's mission is to support, and eventually host,
these new research areas.
Your mission:
* organize and administer the LRDE, together with the other faculty researchers
* participate in the research carried out at the LRDE
* promote the quality of the LRDE's academic output
* support the careers of the faculty researchers
* support teachers who are not members of the LRDE and who wish to
develop a research activity
* participate, together with EPITA's management, in the development
and execution of the school's strategy
* represent EPITA externally in all research-related matters
(development of academic or industrial partnerships, promotion of
research...)
Please send your CV to info(a)lrde.epita.fr. Interviews with the members
of the LRDE and EPITA's management will be arranged for the selected
applications.
Links:
* EPITA: http://www.epita.fr/
* The LRDE: http://www.lrde.epita.fr/
* The activity report: https://www.lrde.epita.fr/dload/rapport_activite/ra2015.pdf
------------------------------------------------------------
Daniela Becker
Administrative Manager - LRDE
(EPITA Research and Development Laboratory)
Tel: +33 1 53 14 59 22
Daniela.Becker(a)lrde.epita.fr
------------------------------------------------------------
Greetings,
We are pleased to announce that the following paper has been accepted at the 12th International Conference on Verification and Evaluation of Computer and Communication Systems (VECoS'18). Please find the title and abstract below.
Title and corresponding authors:
"Improving Parallel State-Space Exploration Using Genetic Algorithms"
Etienne Renault (1)
(1) LRDE, Epita, Paris, France
Abstract:
The verification of temporal properties against a given system may require the exploration of its full state space. In explicit model checking, this exploration uses a Depth-First Search (DFS) and can be performed by multiple randomized threads to increase performance. Nonetheless, the topology of the state space and the exploration order can cap the speedup beyond a certain number of threads. This paper proposes a new technique that aims to tackle this limitation by generating artificial initial states using genetic algorithms. Threads are then launched from these states and thus explore different parts of the state space. Our prototype implementation runs 10% faster than state-of-the-art algorithms. These results demonstrate that this novel approach is worth considering as a way to overcome existing limitations.
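The idea of breeding artificial initial states can be sketched generically as follows. This is not the paper's implementation: states are modeled as plain bit vectors, and the fitness function (Hamming distance from the real initial state, to push the extra threads toward other regions of the state space) is a made-up stand-in for whatever heuristic the paper actually uses.

```python
import random

# Sketch: a toy genetic algorithm producing n_states artificial
# "initial states" (bit vectors) for extra exploration threads.

def breed_initial_states(init, n_states, generations=50, pop=20, seed=0):
    rng = random.Random(seed)
    width = len(init)
    # Hypothetical fitness: prefer states far (in Hamming distance)
    # from the genuine initial state.
    fitness = lambda s: sum(a != b for a, b in zip(s, init))

    population = [[rng.randint(0, 1) for _ in range(width)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, width)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                  # mutation
                child[rng.randrange(width)] ^= 1
            children.append(child)
        population = survivors + children
    return sorted(population, key=fitness, reverse=True)[:n_states]
```

Each thread would then start its DFS from one of the returned states instead of the single real initial state.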
More information at:
https://www.lrde.epita.fr/wiki/Publications/renault.18.vecos
Greetings,
We are pleased to announce that the following article has been accepted
for publication in the 7th Workshop on Synthesis (SYNT@CAV 2018), to be
held in Oxford on July, 18th 2018.
"Reactive Synthesis from LTL Specification with Spot"
Thibaud Michaud (LRDE -- EPITA)
Maximilien Colange (LRDE -- EPITA)
Abstract:
We present ltlsynt, a new tool for reactive
synthesis from LTL specifications. It relies on the
efficiency of Spot to translate the input LTL specification
to a deterministic parity automaton. The latter yields a
parity game, which we solve with Zielonka's recursive
algorithm.
The approach taken in ltlsynt was widely believed
to be impractical, due to the double-exponential size of
the parity game, and to the open status of the complexity
of parity game solving. ltlsynt ranked second
in its track in the 2017 edition of the SYNTCOMP
competition. This demonstrates the practical applicability
of the parity game approach, when backed by efficient
manipulations of omega-automata such as the ones
provided by Spot. We present our approach and report on our
experimental evaluation of ltlsynt to better
understand its strengths and weaknesses.
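Zielonka's recursive algorithm mentioned above can be sketched as follows. This is a textbook toy solver over an explicit dict encoding of the game (not ltlsynt's implementation, which works on Spot's omega-automata); it assumes every node has at least one successor.

```python
# Sketch: Zielonka's recursive algorithm for parity games.
# Max-parity convention: player (p % 2) wins a play whose highest
# priority seen infinitely often is p.

def attractor(S, target, player, owner, succ):
    """Nodes of subgame S from which `player` can force the play
    into `target` (edges leaving S are ignored)."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in S - attr:
            succs = [w for w in succ[v] if w in S]
            if (owner[v] == player and any(w in attr for w in succs)) \
               or (owner[v] != player and succs
                   and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(S, owner, priority, succ):
    """Return (W0, W1): winning regions of players 0 and 1."""
    if not S:
        return set(), set()
    p = max(priority[v] for v in S)       # highest priority present
    i = p % 2                             # player favored by p
    U = {v for v in S if priority[v] == p}
    A = attractor(S, U, i, owner, succ)
    W = zielonka(S - A, owner, priority, succ)
    if not W[1 - i]:                      # opponent wins nowhere
        result = [set(), set()]
        result[i] = set(S)
        return tuple(result)
    B = attractor(S, W[1 - i], 1 - i, owner, succ)
    W2 = list(zielonka(S - B, owner, priority, succ))
    W2[1 - i] |= B
    return tuple(W2)
```

For reactive synthesis, the winning region of the system player and the choices made inside the attractors yield the strategy from which a controller is extracted.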
--
Maximilien Colange
Assistant Professor
LRDE -- EPITA
(+33) 1 53 14 59 45