ELS'19 - 12th European Lisp Symposium
Hotel Bristol Palace
Genova, Italy
April 1-2 2019
In cooperation with: ACM SIGPLAN
In co-location with <Programming> 2019
Sponsored by EPITA and Franz Inc.
http://www.european-lisp-symposium.org/
Recent news:
- Submission deadline extended to Friday February 8.
- Keynote abstracts now available.
- <Programming> registration now open:
https://2019.programming-conference.org/attending/Registration
- Student refund program after the conference.
The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.
The 12th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.
Topics include but are not limited to:
- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies
We invite submissions in the following forms:
Papers: Technical papers of up to 8 pages that describe original
results or explain known ideas in new and elegant ways.
Demonstrations: Abstracts of up to 2 pages for demonstrations of
tools, libraries, and applications.
Tutorials: Abstracts of up to 4 pages for in-depth presentations
about topics of special interest for at least 90 minutes and up to
180 minutes.
The symposium will also provide slots for lightning talks, to be
registered on-site every day.
All submissions should be formatted following the ACM SIGS guidelines
and include ACM Computing Classification System 2012 concepts and
terms. Submissions should be uploaded to EasyChair at the following
address: https://www.easychair.org/conferences/?conf=els2019
Note: to help us with the review process please indicate the type of
submission by entering either "paper", "demo", or "tutorial" in the
Keywords field.
Important dates:
- 08 Feb 2019 Submission deadline (*** extended! ***)
- 01 Mar 2019 Notification of acceptance
- 18 Mar 2019 Final papers due
- 01-02 Apr 2019 Symposium
Programme chair:
Nicolas Neuss, FAU Erlangen-Nürnberg, Germany
Programme committee:
Marco Antoniotti, Università Milano-Bicocca, Italy
Marc Battyani, FractalConcept, France
Pascal Costanza, IMEC, ExaScience Life Lab, Leuven, Belgium
Leonie Dreschler-Fischer, University of Hamburg, Germany
R. Matthew Emerson, thoughtstuff LLC, USA
Marco Heisig, FAU, Erlangen-Nuremberg, Germany
Charlotte Herzeel, IMEC, ExaScience Life Lab, Leuven, Belgium
Pierre R. Mai, PMSF IT Consulting, Germany
Breanndán Ó Nualláin, University of Amsterdam, Netherlands
François-René Rideau, Google, USA
Alberto Riva, University of Florida, USA
Alessio Stalla, ManyDesigns Srl, Italy
Patrick Krusenotto, Deutsche Welle, Germany
Philipp Marek, Austria
Sacha Chua, Living an Awesome Life, Canada
Search Keywords:
#els2019, ELS 2019, ELS '19, European Lisp Symposium 2019,
European Lisp Symposium '19, 12th ELS, 12th European Lisp Symposium,
European Lisp Conference 2019, European Lisp Conference '19
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info
Hello,
we are pleased to invite you to the LRDE Student Seminar.
It will take place on Wednesday, January 16, 2019, from 10:00, in Amphi IP11 (KB).
The ING3 students will present the results of their work of the last few
months.
Program:
SPOT — MODEL-CHECKING LIBRARY
10:00 Counterexample search in
Spot – CLÉMENT GILLARD
Spot is a library for manipulating ω-automata that
aims to support ω-automata-based model checking and
the development of ω-automata transformation tools. It
therefore provides many algorithms, with multiple
implementations, that work on a wide variety of
ω-automata. Among these algorithms, emptiness checks
and counterexample searches are frequently used, for
various reasons. Since their results are related, they
are often used together. However, Spot lacks the
counterexample-search implementations matching certain
emptiness-check implementations, able to work in the
same way on the same automata. We present two
counterexample-computation implementations that
complement two existing emptiness checks: they follow
in their wake and reuse the data these checks have
already accumulated to build counterexamples
efficiently.
10:30 State compression in
Spot – ARTHUR REMAUD
To represent a system as an automaton, the values of
all the system's variables must be stored for each
state of the automaton. This can consume a lot of
memory when there are many variables and/or states,
and in practice slows down execution because of cache
misses. To work around this problem, Spot applies a
simple compression to the array holding a state's
variables, for every state, which reduces both the
memory used and the run time. In this report, we
present a data structure that improves the compression
of the variables by exploiting the redundancy of the
values present in different states, and discuss the
various problems encountered while adding it to
Spot.
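The sharing idea behind this kind of compression can be illustrated with a minimal Python sketch. This is a hypothetical stand-in, not Spot's actual compression scheme: it simply interns each state's variable vector in a shared table, so that identical vectors occupy memory only once.

```python
class StateStore:
    """Deduplicate per-state variable vectors via interning."""

    def __init__(self):
        self._table = {}

    def intern(self, values):
        key = tuple(values)
        # setdefault returns the already-stored tuple when one exists,
        # so duplicate vectors end up sharing a single object.
        return self._table.setdefault(key, key)

    def unique_count(self):
        return len(self._table)

store = StateStore()
s1 = store.intern([1, 0, 3])
s2 = store.intern([1, 0, 3])   # duplicate: shares storage with s1
s3 = store.intern([2, 2, 2])
print(s1 is s2, store.unique_count())   # True 2
```

Exploiting redundancy *across* states, as the abstract describes, goes further than this (e.g., sharing common sub-vectors), but the interning table shows the basic memory/time trade-off.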
11:00 Measurements on partial-order reduction
in Spot – VINCENT TOURNEUR
This report summarizes three methods implemented in
the Spot model-checking tool. The first aims to
improve the conversion of a program model into a
Kripke structure using partial-order reduction, a
technique that ignores the order of certain program
actions. This reduces the size of the Kripke
structure. The second method modifies the first so
that it can be used to test LTL properties. The third
aims to verify that a model contains no livelock,
without using an LTL property. We evaluate all these
methods, focusing on their common goal: reducing the
amount of memory required, a major bottleneck in
model checking.
--
Daniela Becker
Administrative manager of the LRDE
ELS'19 - 12th European Lisp Symposium
Hotel Bristol Palace
Genova, Italy
April 1-2 2019
In cooperation with: ACM SIGPLAN
In co-location with <Programming> 2019
Sponsored by EPITA
http://www.european-lisp-symposium.org/
Recent news:
- Keynote by Stefan Monnier on Emacs Lisp
- Keynote by Christophe Rhodes on SBCL
- Guest appearance by Matthew Flatt on Racket
Hello,
I'm happy to announce the release of FiXme version 4.5.
What's new in this version:
* Public interface for extending FiXme with new key/value options.
* Revamped AUCTeX support.
* Fixed PDF signature layouts no longer working.
* Fixed a spurious space at the end of environment contents.
Grab it directly here:
http://www.lrde.epita.fr/~didier/software/latex.php#fixme
or wait until it propagates through CTAN...
We are happy to announce the release of Spot 2.7
This release contains contributions by Maximilien Colange, Etienne
Renault, and myself. See below for a detailed list of new features.
You can find the new release here:
http://www.lrde.epita.fr/dload/spot/spot-2.7.tar.gz
See https://spot.lrde.epita.fr/ for documentation and installation
instructions.
As always, please direct any feedback to <spot@lrde.epita.fr>.
New in spot 2.7 (2018-12-11)
Command-line tools:
- ltlsynt now has three algorithms for synthesis:
--algo=sd is the historical one: the automaton of the formula
is split to separate inputs and outputs, then
determinized (with the Safra construction).
--algo=ds the automaton of the formula is determinized (Safra),
then split to separate inputs and outputs.
--algo=lar the formula is translated to a deterministic automaton
with an arbitrary acceptance condition, which is then turned
into a parity automaton using LAR, and split.
In all three cases, the obtained parity game is solved using
Zielonka's algorithm. Calude's quasi-polynomial-time algorithm has
been dropped, as it was not used.
- ltlfilt learned --liveness to match formulas representing liveness
properties.
- the --stats= option of tools producing automata learned how to
tell if an automaton uses universal branching (%u), or more
precisely how many states (%[s]u) or edges (%[e]u) use universal
branching.
Python:
- spot.translate() and spot.postprocess() now take an xargs=
argument similar to the -x option of ltl2tgba and autfilt, making
it easier to fine tune these operations. For instance
ltl2tgba 'GF(a <-> XXa)' --det -x gf-guarantee=0
would be written in Python as
spot.translate('GF(a <-> XXa)', 'det', xargs='gf-guarantee=0')
(Note: those extra options are documented in the spot-x(7) man page.)
- spot.is_generalized_rabin() and spot.is_generalized_streett() now return
a tuple (b, v) where b is a Boolean, and v is the vector of the sizes
of each generalized pair. This is a backward incompatible change.
Library:
- The LTL parser learned syntactic sugar for nested ranges of X
using the X[n], F[n:m], and G[n:m] syntax of TSLF. (These
correspond to the next!, next_e!, and next_a! operators of PSL,
but we do not support those under these names currently.)
X[6]a = XXXXXXa
F[2:4]a = XX(a | X(a | Xa))
G[2:4]a = XX(a & X(a & Xa))
The corresponding constructors (for C++ and Python) are
formula::X(unsigned, formula)
formula::F(unsigned, unsigned, formula)
formula::G(unsigned, unsigned, formula)
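As an illustration (independent of Spot's own parser and formula classes), the expansions above can be sketched in a few lines of Python that rewrite the sugared forms into plain nested X operators on formula strings:

```python
def Xn(n, f):
    """X[n]f: prefix f with n X operators, parenthesizing compound f."""
    if n == 0:
        return f
    return "X" * n + (f if len(f) == 1 else "(" + f + ")")

def expand(n, m, f, op):
    """Expand F[n:m]f (op='|') or G[n:m]f (op='&') into nested Xs."""
    inner = f
    for _ in range(m - n):
        inner = f + " " + op + " " + Xn(1, inner)
    return Xn(n, inner)

# The examples from the release notes:
print(Xn(6, "a"))               # XXXXXXa
print(expand(2, 4, "a", "|"))   # XX(a | X(a | Xa))
print(expand(2, 4, "a", "&"))   # XX(a & X(a & Xa))
```

In Spot itself the same expansions would be obtained through the constructors listed above (formula::X, formula::F, formula::G); the sketch only shows the intended semantics of the sugar.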
- spot::unabbreviate(), used to rewrite away operators such as M or
W, learned to use some shorter rewritings when an argument (e) is
a pure eventuality or (u) is purely universal:
Fe = e
Gu = u
f R u = u
f M e = F(f & e)
f W u = G(f | u)
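To make the shortcuts concrete, here is a small standalone Python sketch. It is not Spot's implementation: it uses crude, purely syntactic sufficient conditions for purity (a formula rooted in F is taken as a pure eventuality, one rooted in G as purely universal), whereas Spot's actual tests are more general.

```python
# A formula is an atom string or a tuple: ('F', f), ('G', f),
# ('R', f, g), ('M', f, g), ('W', f, g), ('and', f, g), ('or', f, g).

def is_eventual(f):
    # crude sufficient test: F g is always a pure eventuality
    return isinstance(f, tuple) and f[0] == 'F'

def is_universal(f):
    # crude sufficient test: G g is always purely universal
    return isinstance(f, tuple) and f[0] == 'G'

def simplify(f):
    if not isinstance(f, tuple):
        return f
    op, *args = f
    args = [simplify(a) for a in args]
    if op == 'F' and is_eventual(args[0]):
        return args[0]                              # F e  = e
    if op == 'G' and is_universal(args[0]):
        return args[0]                              # G u  = u
    if op == 'R' and is_universal(args[1]):
        return args[1]                              # f R u = u
    if op == 'M' and is_eventual(args[1]):
        return ('F', ('and', args[0], args[1]))     # f M e = F(f & e)
    if op == 'W' and is_universal(args[1]):
        return ('G', ('or', args[0], args[1]))      # f W u = G(f | u)
    return (op, *args)

print(simplify(('F', ('F', 'a'))))       # ('F', 'a')
print(simplify(('R', 'b', ('G', 'a'))))  # ('G', 'a')
```

Each rule fires only when the purity side condition holds, which is what makes these rewritings shorter than the general unabbreviations of M, W, and R.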
- The twa_graph class has a new dump_storage_as_dot() method
to show its data structure. This is more conveniently used
as aut.show_storage() in a Jupyter notebook. See
https://spot.lrde.epita.fr/ipynb/twagraph-internals.html
- spot::generic_emptiness_check() is a new function that performs
emptiness checks of twa_graph_ptr (i.e., automata not built
on-the-fly) with an *arbitrary* acceptance condition. Its sister
spot::generic_emptiness_check_scc() can be used to decide the
emptiness of an SCC. This is now used by
twa_graph_ptr::is_empty(), twa_graph_ptr::intersects(), and
scc_info::determine_unknown_acceptance().
- The new function spot::to_parity() translates an automaton with
arbitrary acceptance condition into a parity automaton, based on a
last-appearance record (LAR) construction. (It is used by ltlsynt
but not yet by autfilt or ltl2tgba.)
- The new functions is_liveness() and is_liveness_automaton() can be
used to check whether a formula or an automaton represents a
liveness property.
- Two new functions count_univbranch_states() and
count_univbranch_edges() can help measure the amount of
universal branching in alternating automata.
Bugs fixed:
- translate() would incorrectly mark as stutter-invariant
some automata produced from formulas of the form X(f...)
where f... is syntactically stutter-invariant.
- acc_cond::is_generalized_rabin() and
acc_cond::is_generalized_streett() did not recognize the cases
where a single generalized pair is used.
- The pair of acc_cond::mark_t returned by
acc_code::used_inf_fin_sets(), and the pair (bool,
vector_rs_pairs) by acc_cond::is_rabin_like() and
acc_cond::is_streett_like() were not usable in Python.
- Many object types had __repr__() methods that would return the
same string as __str__(), contrary to Python usage where repr(x)
should try to show how to rebuild x. The following types have
been changed to follow this convention:
spot.acc_code
spot.acc_cond
spot.atomic_prop_set
spot.formula
spot.mark_t
spot.twa_run (__repr__ shows type and address)
spot.twa_word (likewise, but _repr_latex_ used in notebooks)
Note that if you were relying on the fact that Jupyter calls
repr() to display returned values, you may want to call print()
explicitly if you prefer the old representation.
- Fix compilation under Cygwin and Alpine Linux, both choking
on undefined secure_getenv().
Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA Research and Development Laboratory) will take place on
Friday, December 14, 2018 (11:00--12:00), Amphi IP12A.
On the seminar website [1] you will find the upcoming sessions, as well
as the abstracts, video recordings and slides of previous talks [2],
the details of this session [3], and directions to the venue [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2018-12-14
[4] http://www.lrde.epita.fr/wiki/Contact
Program for Friday, December 14, 2018:
* 11:00--12:00: Toward myocardium perfusion from X-ray CT
-- Clara Jaquet (ESIEE Marne-la-Vallée)
Recent advances in medical image computing have resulted in automated
systems that closely assist physicians in patient therapy. Computational
and personalized patient models benefit diagnosis, prognosis and
treatment planning, with a decreased risk for the patient, as well as
potentially lower cost. HeartFlow Inc. is a successful example of a
company providing such a service in the cardiovascular context. Based on
a patient-specific vascular model extracted from X-ray CT images, they
identify functionally significant disease in large coronary arteries.
Their combined anatomical and functional analysis is nonetheless limited
by the image resolution. At the downstream scale, a functional exam
called Myocardium Perfusion Imaging (MPI) highlights myocardium regions
with blood flow deficit. However, MPI does not functionally relate
perfusion to the upstream coronary disease. The goal of our project is
to build the functional bridge between coronary and myocardium. To this
aim we propose an anatomical and functional extrapolation. We produce an
innovative vascular network generation method extending the coronary
model down to the microvasculature. In the resulting vascular model, we
compute a functional analysis pipeline to simulate flow from large
coronaries to the myocardium, and to enable comparison with MPI
ground-truth data.
-- After completing a technological university degree in biology at
Créteil, Clara Jaquet obtained the diploma of biomedical engineer from
ISBS (Bio-Sciences Institute) in 2015. She worked for one year at
HeartFlow Inc, California, before starting a PhD at ESIEE, Université
Paris-Est, within the LIGM laboratory, on a research project jointly
with the same company.
Admission to the seminar is free. Please circulate this information as
widely as possible. Do not hesitate to send us your suggestions for
speakers.
--
Edwin Carlinet
Laboratoire R&D de l'EPITA (LRDE)
_______________________________________________
Seminaire mailing list
Seminaire@lrde.epita.fr
https://lists.lrde.epita.fr/listinfo/seminaire
Hello everyone,
I am pleased to invite you to the defense of my PhD thesis, entitled
``Taking inclusion and adjacency information into account in
hierarchical morphological representations, with application to text
extraction in natural images and videos.''
It will take place on Thursday, December 13, 2018, at 10:00, in room KB 604
at EPITA.
The defense will be followed by a reception.
https://www.lrde.epita.fr/wiki/Affiche-these-DH
Thesis manuscript
-----------------
https://1drv.ms/b/s!AmbdUYjEYP52i8BOYRqHLImi0oykfQ
Thesis committee
----------------
Reviewers:
Beatriz MARCOTEGUI, Pr., MINES ParisTech, CMM
Hugues TALBOT, Pr., CentraleSupélec, CVC
Examiners:
Isabelle BLOCH, Pr., Télécom ParisTech, LTCI
Laurent NAJMAN, Pr., Université Paris-Est, LIGM
Camille KURTZ, MdC, Université Paris Descartes, LIPADE
Thesis advisors:
Thierry GÉRAUD, Pr., EPITA, LRDE
Yongchao XU, MdC, HUST, Chine, MCLab
Thesis abstract
---------------
With the rising need for a higher-level understanding of images, the
pixel-based representation is not sufficient. To address this need, the
mathematical morphology framework provides several multi-scale,
region-based image representations which include the hierarchies of
segmentation (e.g., alpha-tree, BPT) and trees based on the threshold
decomposition (Min/Max-trees and Tree of Shapes). Because objects in the
real world rarely appear in isolation, but rather in a typical context
with other related objects, we should consider the spatial relationships
between image regions.
We are interested in two types of relationships, namely inclusion and
adjacency (in the sense of ``being nearby''), since they usually carry
contextual information. The adjacency between regions gives us a sense
of how regions are arranged in images and has been widely used. On the
other hand, the inclusion relationship, although it fits human perception
of the object-background relationship (objects are included in their
background), is usually not taken into account. Both kinds of information
open up possibilities for image analysis. In this thesis, we take
advantage of both inclusion and adjacency information in morphological
hierarchical representations for computer vision applications.
We introduce the spatial alignment graph w.r.t. inclusion (SAG), which is
constructed from both inclusion and spatial arrangement of regions in
the tree-based image representations.
For simple scenes, we introduce the Tree of Shapes of Laplacian sign
(ToSoL). It encodes the inclusion of the 0-crossings of a morphological
Laplacian map and performs well even in the case of uneven illumination.
The ToSoL is computed in linear time w.r.t. the number of pixels thanks
to an optimization that mimics well-composedness. In this representation, the
spatial alignment graph is reduced to a disconnected graph where each
connected component is a semantic group of objects.
For higher detail representation, the spatial alignment graph becomes
more complex. To address this issue, we expand the idea of shape-space
morphology. Our expansion has two primary results: 1) it allows the
manipulation of any graph of shapes encoding different information,
which encompasses the SAG; 2) it allows any tree-filtering strategy
proposed by the connected-operator framework. Within this expansion,
the SAG can be analyzed with an alpha-tree.
We demonstrate the applicability of our method on text detection. The
experimental results show the efficiency and effectiveness of our
methods, which are robust to noise, blur, and uneven illumination. These
features make them appealing for mobile applications.
Hello,
It is my great pleasure to announce that my PhD defense will be held on Tuesday 20 November
at EPITA at 10h00 in room KB 604. Everyone is welcome and invited as long as space is
available in the room. Also, please join the reception (pot) afterwards
(12:30), which will take place in room IP 12a.
https://www.lrde.epita.fr/wiki/Affiche-these-JN
If you’d like to take a look at the thesis itself, it is available here temporarily until it reaches its final resting place.
https://drive.google.com/drive/folders/14L7kFNNyIOv3e-apqDTynCRPPWwoY2bw
Representing and Computing with Types in Dynamically Typed Languages
Extending Dynamic Language Expressivity to Accommodate Rationally Typed Sequences
Abstract:
We present code generation techniques related to run-time type checking of heterogeneous sequences. Traditional regular expressions can be used to recognize well-defined sets of character strings called rational languages, or sometimes regular languages. Newton et al. present an extension whereby a dynamic language may recognize a well-defined set of heterogeneous sequences, such as lists and vectors.
As with the analogous string-matching regular expression theory, matching these regular type expressions can also be achieved by using a finite state machine (a deterministic finite automaton, DFA). Constructing such a DFA can be time consuming. The approach we chose uses meta-programming to intervene at compile time, generating efficient functions specific to each DFA, and allowing the compiler to further optimize the functions if possible. The functions are made available for use at run time. Without this use of meta-programming, the program might otherwise be forced to construct the DFA at run time. The excessively high cost of such a construction would likely far outweigh the time needed to match a string against the expression.
Our technique involves hooking into the Common Lisp type system via the deftype macro. The first time the compiler encounters a relevant type specifier, the appropriate DFA is created (which may be an exponential-complexity operation), from which specific low-level code is generated to match that specific expression. Thereafter, when the type specifier is encountered again, the same pre-generated function can be used. The generated code has linear complexity at run time.
A complication of this approach, which we explain in this report, is that to build the DFA we must calculate a disjoint type decomposition, which is time consuming and also leads to sub-optimal use of typecase in machine-generated code. To handle this complication, we use our own macro, optimized-typecase, in our machine-generated code. Uses of this macro are also implicitly expanded at compile time. Our macro expansion uses BDDs (Binary Decision Diagrams) to compile optimized-typecase into low-level code, maintaining the typecase semantics but eliminating redundant type checks. In the report we also describe an extension of BDDs to accommodate subtyping in the Common Lisp type system, as well as an in-depth analysis of worst-case sizes of BDDs.
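The run-time matching idea can be roughly illustrated as follows (in Python rather than the thesis's Common Lisp, and with a hand-built DFA instead of one derived from a type specifier): transitions are keyed by type predicates instead of characters, here for the rational type expression (str int)*.

```python
# Hand-built DFA for the rational type expression (str int)*:
# state 0 expects a str, state 1 expects an int; 0 is accepting.
DFA = {
    0: [(lambda x: isinstance(x, str), 1)],
    1: [(lambda x: isinstance(x, int), 0)],
}
ACCEPTING = {0}

def matches(seq):
    """Run the DFA over a heterogeneous sequence, one element per step."""
    state = 0
    for item in seq:
        for pred, nxt in DFA.get(state, []):
            if pred(item):
                state = nxt
                break
        else:
            return False        # no transition: reject
    return state in ACCEPTING

print(matches(["a", 1, "b", 2]))   # True
print(matches(["a", "b"]))         # False
```

The point of the thesis's compile-time approach is that this per-DFA matcher is generated once, when the type specifier is first seen, rather than interpreting the expression on every check.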
Kind regards
(soon to be Dr) Jim NEWTON
We are pleased to announce that we have had an article
entitled Intuitive Analysis of Certain Binary Decision Diagrams
accepted in ACM Transactions on Computational Logic.
A link to the paper and abstract are given below.
We’ll be happy to discuss the article if anyone is interested.
Kind Regards
Jim
https://www.lrde.epita.fr/dload/papers/newton.18.tocl.pdf
Abstract:
Binary Decision Diagrams (BDDs), and in particular ROBDDs (Reduced
Ordered BDDs), are a common data structure for manipulating Boolean
expressions, and are used in integrated circuit design, type
inferencers, model checkers, and many other applications. Although the
ROBDD is a
lightweight data structure to implement, the behavior, in terms of
memory allocation, may not be obvious to the program architect. We
explore experimentally, numerically, and theoretically the typical
and worst-case ROBDD sizes in terms of number of nodes and residual
compression ratios, as compared to unreduced BDDs. While our
theoretical results are not surprising, as they are in keeping with
previously known results, we believe our method contributes to the
current body of research by our experimental and statistical
treatment of ROBDD sizes. In addition, we provide an algorithm to
calculate the worst-case size. Finally, we present an algorithm for
constructing a worst-case ROBDD of a given number of variables. Our
approach may be useful to projects deciding whether the ROBDD is the
appropriate data structure to use, and in building worst-case
examples to test their code.
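As a toy illustration of measuring ROBDD sizes (this is not the paper's worst-case algorithm; terminals are plain booleans and internal nodes are tuples, a simplifying assumption), the following sketch builds the ROBDD of the n-variable XOR function with a hash-consing table and counts its internal nodes, which grow linearly as 2n - 1:

```python
def xor_robdd_size(n):
    """Build the ROBDD of x0 XOR ... XOR x_{n-1}; return its number of
    internal nodes.  Terminals are True/False; an internal node is a
    tuple (var, low_child, high_child)."""
    unique = {}   # hash-consing table ensuring structural sharing
    memo = {}     # (var, parity) -> node, avoids exponential rebuilding

    def build(var, acc):
        # node for: acc XOR x_var XOR ... XOR x_{n-1}
        if var == n:
            return acc
        if (var, acc) not in memo:
            lo = build(var + 1, acc)
            hi = build(var + 1, not acc)
            # reduction rule: drop the node if both children are equal
            node = lo if lo == hi else unique.setdefault(
                (var, lo, hi), (var, lo, hi))
            memo[(var, acc)] = node
        return memo[(var, acc)]

    build(0, False)
    return len(unique)

for n in (1, 2, 3, 8):
    print(n, xor_robdd_size(n))   # sizes follow 2n - 1
```

XOR is a well-behaved case; the paper's point is precisely that typical and worst-case functions behave very differently, with worst-case sizes growing far faster than this linear example.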