The Vcsners are proud to announce the release of Vcsn 2.5, aka the
k-lightest release!
Vcsn is a platform dedicated to the computation of, and with, finite
state machines. It provides an efficient C++ library, a Python interface,
and a graphical user interface on top of IPython.
Noteworthy changes include:
- two new implementations to compute the k-lightest paths (aka "best"
paths, i.e., those with the smallest weights) in tropical automata.
- several new demos showing how to use the C++ library, including a
translator from (French) text messages (aka SMS) into French.
- a new means for users to configure Vcsn. This adds a new prerequisite
to build Vcsn: libyaml-cpp.
- a much better and faster caching system of runtime generated C++ code.
Users of dyn (and therefore of Python) should note an improved
amortization of the compilation costs.
- several portability issues reported by users were fixed.
For more information, please see the detailed news below.
People who worked on this release:
- Akim Demaille
- Clément Démoulins
- Clément Gillard
- Sarasvati Moutoucomarapoulé
- Sébastien Piat
- Younes Khoudli
For more information, please visit http://vcsn.lrde.epita.fr, in
particular https://vcsn.lrde.epita.fr/Vcsn2.5. Packages of Vcsn 2.5 should
appear soon on various distributions (MacPorts, Debian, ArchLinux, Docker Hub).
Vcsn 2.5 is also available online, at http://vcsn-sandbox.lrde.epita.fr.
## 2017-01-23
### Improved error messages
Parse errors were improved with caret-style reports for expressions and
automata in dot (Graphviz) and daut syntax.
In [5]: vcsn.Q.expression('<1/2>a + <1/0>b')
RuntimeError: 1.10-14: Q: null denominator
<1/2>a + <1/0>b
         ^^^^^
while reading expression: <1/2>a + <1/0>b
In [6]: vcsn.automaton('''
...: $ -> 0
...: 0 -> 1 <1/2>a, b
...: 1 -> $''')
RuntimeError: 3.1-16: B: unexpected trailing characters: /2
while reading: 1/2
while reading: <1/2>a, b
0 -> 1 <1/2>a, b
^^^^^^^^^^^^^^^^
while reading automaton
In [7]: vcsn.automaton('digraph{vcsn_context="lal, b" 0->1[label="<2>a"]}')
RuntimeError: 1.35-48: B: invalid value: 2
while reading: 2
while reading: <2>a
digraph{vcsn_context="lal, b" 0->1[label="<2>a"]}
                                  ^^^^^^^^^^^^^^
while reading automaton
## 2017-01-14
### "auto" automaton file format
Pass "auto" to read_automaton (in dyn, static, Python or the Tools) to let
the system guess the automaton file format (daut, dot, etc.).
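The guessing step can be illustrated with a toy sketch; the `guess_format` helper and its heuristic below are hypothetical, not Vcsn's actual detection code:

```python
def guess_format(src: str) -> str:
    """Guess an automaton file format from its content.

    Illustrative heuristic only: Vcsn's real detection logic may differ.
    """
    stripped = src.lstrip()
    if stripped.startswith("digraph"):
        return "dot"   # Graphviz automata start with a digraph block.
    if "->" in stripped:
        return "daut"  # daut transitions look like "0 -> 1 a".
    raise ValueError("unknown automaton format")

# A dot automaton and a daut automaton are told apart by their syntax.
print(guess_format('digraph{vcsn_context="lal, b" 0->1[label="a"]}'))  # dot
print(guess_format("$ -> 0\n0 -> 1 a\n1 -> $"))                        # daut
```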
## 2017-01-11
### Sms2fr
A new natural language processing demonstration of lightest-path
computation. This application translates SMS (i.e., text messages) into
proper French. The implementation is based on _Rewriting the orthography of
SMS messages_, François Yvon, Natural Language Engineering, volume 16, 2010.
The translator uses pre-trained automata: composing them with the automaton
representing the text message generates all its possible translations, and
the best one is then found with a shortest-path algorithm.
An example is given in the `Sms2fr.ipynb` notebook.
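The pipeline can be illustrated with a toy stand-in: a weighted lexicon plays the role of the pre-trained automata, and picking the minimum-weight candidate plays the role of the shortest-path search. The lexicon and `translate` function are invented for illustration, not taken from Sms2fr:

```python
# Hypothetical weighted lexicon: token -> list of (candidate, weight).
LEXICON = {
    "slt": [("salut", 1.0)],
    "koi": [("quoi", 1.0), ("koi", 3.0)],
    "2":   [("de", 1.0), ("deux", 2.0)],
    "9":   [("neuf", 1.0)],
}

def translate(sms: str) -> str:
    words = []
    for token in sms.split():
        # All candidate "translations" of the token, with weights...
        candidates = LEXICON.get(token, [(token, 0.0)])
        # ...among which the lightest (smallest weight) wins.
        best, _ = min(candidates, key=lambda c: c[1])
        words.append(best)
    return " ".join(words)

print(translate("slt koi 2 9"))  # salut quoi de neuf
```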
## 2017-01-01
### A richer dyn
The `vcsn::dyn` API was enriched. All the dyn types now support the usual
operators: comparisons (`==`, `!=`, `<`, `<=`, `>`, `>=`) and operations
(`+`, `*`, `&`). New functions facilitate the creation of `dyn` values from
strings (`make_word`, etc.). The file `tests/demos/operators.cc` shows
several examples, explained in the `C++-Library.ipynb` notebook. For
instance:
// A simple automaton.
auto a1 = make_automaton("context = lal, q\n"
                         "$ 0 <1/2>\n"
                         "0 1 <2>a, <6>b\n"
                         "1 $\n", "daut");
// Its context.
auto ctx = context_of(a1);
// Evaluate it.
assert(evaluate(a1, make_word(ctx, "a")) == make_weight(ctx, "1"));
assert(evaluate(a1, make_word(ctx, "b")) == make_weight(ctx, "3"));
// Concatenate to itself.
auto a2 = a1 * a1;
assert(evaluate(a2, make_word(ctx, "ab")) == make_weight(ctx, "3"));
// Self-conjunction, aka "power 2".
auto a3 = a1 & a1;
assert(evaluate(a3, make_word(ctx, "b")) == make_weight(ctx, "9"));
## 2016-12-25
### Configuration
Vcsn now supports configuration files. They will be used to provide users
with a means to customize Vcsn, for instance to tune the graphical rendering
of the automata, to tailor the display of expressions, etc.
For a start, it provides a simple means to get the configuration
information, including from Vcsn Tools.
$ vcsn ipython --no-banner
In [1]: import vcsn
In [2]: vcsn.config('configuration.cxxflags')
Out[2]: '-Qunused-arguments -O3 -g -std=c++1z'
$ vcsn configuration configuration.cxxflags
-Qunused-arguments -O3 -g -std=c++1z
This adds a new dependency: libyaml-cpp. Beware that version 0.5.2 is buggy
and will not work properly. Use 0.5.1, or 0.5.3 or later.
## 2016-12-20
### compare: new algorithm
Three-way comparison is now available in all the layers, as `compare`, for
automata, expressions, labels, polynomials and weights. This is used in
Python to implement the comparisons (`<`, `<=`, `>`, `>=`, `==`, `!=`) of
expressions, and of automata (`<`, `<=`, `>`, `>=`).
However, since the comparison on automata is performed on the _list_ of
transitions, automata that are "very much alike" (i.e., different only by
superficial details) will be considered different, so `==` and `!=` are
still using a safer, but much slower, comparison.
In [2]: exp = vcsn.Q.expression
In [3]: exp('a').compare(exp('b'))
Out[3]: -1
In [4]: exp('a').compare(exp('a'))
Out[4]: 0
In [5]: exp('<3>a').compare(exp('a'))
Out[5]: 1
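The three-way convention itself is easy to mimic in plain Python; this hypothetical sketch (not Vcsn code) shows how a single `compare` routine, here applied to lists of transitions, induces all the rich comparisons:

```python
def compare(lhs, rhs):
    """Three-way comparison: negative, zero, or positive.

    Plain-Python sketch, not Vcsn's implementation; the compared values
    here are lists of (source, label, weight, destination) tuples, which
    Python orders lexicographically.
    """
    return (lhs > rhs) - (lhs < rhs)

a = [(0, "a", 1, 1)]
b = [(0, "b", 1, 1)]
print(compare(a, b))  # -1: "a" sorts before "b"
print(compare(a, a))  #  0
print(compare(b, a))  #  1

# All rich comparisons derive from the single three-way result.
assert (compare(a, b) < 0) == (a < b)
```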
## 2016-12-12
### k shortest paths algorithms
Two algorithms for finding the k shortest paths in a graph have been
implemented: Eppstein's and Yen's. Hence, we can now search for the k
lightest paths (smallest with respect to weights, aka "best" paths) in an
automaton through the "lightest" algorithm.
"lightest" used to compute these paths with an inefficient heap-based
implementation:
aut.lightest(5)
aut.lightest(5, "auto")
Note that when a single path is requested, "auto" does not use the new
algorithms, as simpler shortest-path algorithms apply to that situation and
are more efficient: it then uses Dijkstra's algorithm.
The new algorithms can be selected explicitly:
aut.lightest(5, "eppstein")
aut.lightest(5, "yen")
Yen's algorithm requires the automaton to have no cycles, while Eppstein's
supports any kind of automaton. For small values of k, Yen's algorithm
performs better than Eppstein's, but as k grows, Eppstein's is always more
efficient.
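For illustration, the older heap-based strategy can be sketched on a plain weighted graph; this is a hypothetical Python version, not Vcsn's code:

```python
import heapq

def k_lightest_paths(graph, src, dst, k):
    """Return the k lightest src->dst paths as (weight, path) pairs.

    Naive heap-based enumeration (the kind of strategy "lightest" used
    before Eppstein and Yen were added): fine on small graphs, but
    inefficient in general since shared prefixes are re-expanded.
    """
    heap = [(0, [src])]
    out = []
    while heap and len(out) < k:
        weight, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            out.append((weight, path))
            continue
        for succ, w in graph.get(node, []):
            if succ not in path:  # keep paths cycle-free
                heapq.heappush(heap, (weight + w, path + [succ]))
    return out

# Two paths from 0 to 2, lightest first.
graph = {0: [(1, 1), (2, 5)], 1: [(2, 1)]}
print(k_lightest_paths(graph, 0, 2, 2))  # [(2, [0, 1, 2]), (5, [0, 2])]
```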
Hello,
I am pleased to invite you to the defense of my habilitation à
diriger des recherches, entitled
``Contributions to LTL and ω-automata for Model Checking''
The defense will take place on Friday, February 10, at 2:15pm at Epita,
in amphitheater 4 (24 rue Pasteur, 94270 Le Kremlin-Bicêtre).
Directions are available here:
https://www.lrde.epita.fr/~adl/dl/lrde-amphi4.pdf
You are also invited to the reception that will follow, in room
alpha of the LRDE (2nd floor of building X, 18 rue Pasteur).
The jury will be composed of:
Javier Esparza, Tech. Univ. München, Germany, reviewer
Radu Mateescu, Inria Grenoble, France, reviewer
Moshe Y. Vardi, Rice Univ., Houston, Texas, USA, reviewer
Rüdiger Ehlers, Univ. Bremen, Germany, examiner
Stephan Merz, Inria Nancy & Loria, France, examiner
Jaco van de Pol, Univ. Twente, Netherlands, examiner
Fabrice Kordon, Univ. P.&M. Curie, Paris, France, examiner
A preliminary version of the dissertation can be downloaded here:
https://www.lrde.epita.fr/~adl/dl/adl/duret.17.hdr.pdf
The defense will be held in English, with the abstract below.
We present multiple practical contributions to the automata-theoretic
approach to LTL model checking. Most of these contributions are
implemented in Spot (https://spot.lrde.epita.fr/), a library and a
set of tools exporting reusable algorithms, with a growing user base.
In particular, we will present an overview of the techniques used in
our LTL-to-Büchi translator, which is now one of the leading tools
available for this task. We will also discuss how we built algorithms
that support automata with arbitrary acceptance conditions, and how
some of these algorithms became more elegant as a consequence.
Finally we will show how the set of tools that we have built can be
used to improve existing algorithms by finding bugs or detecting
nonoptimal results (e.g., by using a SAT-solver to find minimal
deterministic omega-automata of arbitrary acceptance conditions).
--
Alexandre Duret-Lutz
Hello,
I'm happy to announce the release of DoX version 2.3.
What's new in this version:
* Support Doc's internal \saved@indexname command
Grab it directly here:
http://www.lrde.epita.fr/~didier/software/latex.php#dox
or wait until it propagates through CTAN...
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info
Hello,
we are pleased to invite you to the LRDE student seminar.
It will take place on Tuesday, January 31, 2017, from 10:30am, in room L0
at the LRDE (KB).
--------------------------------------------
Program:
OLENA — GENERIC AND EFFICIENT IMAGE PROCESSING
10:30am: Automatic detection of text areas on identity
documents – ANNE-CLAIRE BERTHET
We aim to detect the text areas on any kind of identity document,
possibly coming from any country and filmed with smartphone cameras.
Consequently, we may have to segment Latin or Cyrillic letters, or
ideograms. Moreover, some identity documents carry watermarks and/or
a background that uses letters as a pattern, which must be filtered
out. In addition, the context of the shot (lighting, background, ...)
must be taken into account so that, once the image is binarized, the
number of components to filter is minimal. The proposed pre-processing
thus uses morphological filters to detect text efficiently while
limiting the number of components to process.
VCSN — AUTOMATA MANIPULATION LIBRARY
11:00am: K shortest paths in Vcsn –
SÉBASTIEN PIAT
Computing the K shortest paths in an automaton can be very costly,
especially on huge automata such as those used in linguistics. Thus,
after implementing in Vcsn one of the solutions considered state of
the art (Yen's algorithm), the next step was to implement the best
solution for this computation on automata with cycles: Eppstein's.
This talk will describe our various implementations and compare
their performance.
SPEAKER VERIFICATION
11:30am: Time-delay neural network based universal background model
for speaker recognition
– VALENTIN IOVENE
In the field of speaker recognition, deep neural networks (DNN) have
recently been shown to be more effective than traditional Gaussian
mixture models (GMM) at collecting Baum-Welch statistics usable for
i-vector extraction. However, this kind of architecture can be too
slow at evaluation time, requiring a graphics processor to reach
"real-time" performance. We show that the statistics produced by a
time-delay neural network (TDNN) can be used to build a lighter
supervised GMM serving as the universal background model (UBM) of a
classical i-vector system. The error obtained with this approach is
compared to those obtained with universal background models based on
classical GMMs.
--
Daniela Becker
Administrative manager of the LRDE
We are happy to announce the release of Spot 2.3.
Spot is a library of algorithms for manipulating LTL formulas and
omega-automata (objects as commonly used in model checking).
This new release features some exciting news such as
- faster emptiness checks for automata that are not explored
on-the-fly,
- several improvements to the SAT-based minimization procedure,
- preliminary support for alternating automata,
- membership tests for all classes of the Manna and Pnueli hierarchy
of temporal properties.
See below for more details.
You can find the new release here:
http://www.lrde.epita.fr/dload/spot/spot-2.3.tar.gz
See https://spot.lrde.epita.fr/ for documentation and installation
instructions.
The contributors to this release are Maximilien Colange, Alexandre
Gbaguidi Aisse, Étienne Renault, and myself. As always, please direct
any feedback to <spot(a)lrde.epita.fr>.
New in spot 2.3 (2017-01-19)
Build:
* While Spot only relies on C++11 features, the configure script
learned --enable-c++14 to compile in C++14 mode. This allows us to
check that nothing will break when we switch to C++14.
* Spot is now distributed with PicoSAT 965, and uses it for
SAT-based minimization of automata without relying on temporary
files. It is still possible to use an external SAT solver by
setting the SPOT_SATSOLVER environment variable.
* The development Debian packages for Spot now install static
libraries as well.
* We now install configuration files for users of pkg-config.
Tools:
* ltlcross supports translators that output alternating automata in
the HOA format. Cross-comparison checks will only work with weak
alternating automata (not necessarily *very* weak), but "ltlcross
--no-check --product=0 --csv=..." will work with any alternating
automaton if you just want statistics.
* autfilt can read alternating automata. This is still experimental
(see below). Some of the algorithms proposed by autfilt will
refuse to work because they have not yet been updated to work with
alternating automata, but in any case they should display a
diagnostic: if you see a crash, please report it.
* autfilt has three new filters: --is-very-weak, --is-alternating,
and --is-semi-deterministic.
* the --check option of autfilt/ltl2tgba/ltldo/dstar2tgba can now
take "semi-determinism" as argument.
* autfilt --highlight-languages will color states that recognize
identical languages. (Only works for deterministic automata.)
* 'autfilt --sat-minimize' and 'ltl2tgba -x sat-minimize' have
undergone some backward incompatible changes. They use binary
search by default, and support different options than they used
to. See spot-x(7) for details. The defaults are those that were
best for the benchmark in bench/dtgbasat/.
* ltlfilt learned --recurrence and --persistence to match formulas
belonging to these two classes of the temporal hierarchy. Unlike
--syntactic-recurrence and --syntactic-persistence, the new checks
are automata-based and will also match pathological formulas.
See https://spot.lrde.epita.fr/hierarchy.html
* The --format option of ltlfilt/genltl/randltl/ltlgrind learned to
print the class of a formula in the temporal hierarchy of Manna &
Pnueli using %h. See https://spot.lrde.epita.fr/hierarchy.html
* ltldo and ltlcross learned a --relabel option to force the
relabeling of atomic propositions to p0, p1, etc. This is more
useful with ltldo, as it allows calling a tool that restricts the
atomic propositions it supports, and the output automaton will
then be fixed to use the original atomic propositions.
* ltldo and ltlcross have learned how to call ltl3hoa, so
'ltl3hoa -f %f>%O' can be abbreviated to just 'ltl3hoa'.
Library:
* A twa is required to have at least one state, the initial state.
An automaton can only be empty while it is being constructed,
but should not be passed to other algorithms.
* Couvreur's emptiness check has been rewritten to use the explicit
interface when possible, to avoid overkill memory allocations.
The new version has further optimizations for weak and terminal
automata. Overall, this new version is roughly 4x faster on
explicit automata than the former one. The old version has been
kept for backward compatibility, but will be removed eventually.
* The new version of the Couvreur emptiness check is now the default
one, used by twa::is_empty() and twa::accepting_run(). Always
prefer these functions over an explicit call to Couvreur.
* experimental support for alternating automata:
- twa_graph objects can now represent alternating automata. Use
twa_graph::new_univ_edge() and twa_graph::set_univ_init_state()
to create universal edges and initial states; and use
twa_graph::univ_dests() to iterate over the universal
destinations of an edge.
- the automaton parser will now read alternating automata in the
HOA format. The HOA and dot printers can output them.
- the file twaalgos/alternation.hh contains a few algorithms
specific to alternating automata:
+ remove_alternation() will transform *weak* alternating automata
into TGBA.
+ the class outedge_combiner can be used to perform "and" and "or"
on the outgoing edges of some alternating automaton.
- scc_info has been adjusted to handle universal edges as if they
were existential edges. As a consequence, acceptance
information is not accurate.
- postprocessor will call remove_alternation() right away, so
it can be used as a way to transform alternating automata
into different sub-types of (generalized) Büchi automata.
Note that although postprocessor optimizes the resulting
automata, it still has no simplification algorithms that work
at the alternating automaton level.
- See https://spot.lrde.epita.fr/tut23.html,
https://spot.lrde.epita.fr/tut24.html, and
https://spot.lrde.epita.fr/tut31.html for some code examples.
* twa objects have two new properties, very-weak and
semi-deterministic, that can be set or retrieved via
twa::prop_very_weak()/twa::prop_semi_deterministic(), and that can
be tested by is_very_weak_automaton()/is_semi_deterministic().
* twa::prop_set has a new attribute used in twa::prop_copy() and
twa::prop_keep() to indicate that determinism may be improved by
an algorithm. In other words properties like
deterministic/semi-deterministic/unambiguous should be preserved
only if they are positive.
* language_map() and highlight_languages() are new functions that
implement autfilt's --highlight-languages option mentioned above.
* dtgba_sat_minimize_dichotomy() and dwba_sat_minimize_dichotomy()
use language_map() to estimate a lower bound for binary search.
* The encoding part of SAT-based minimization consumes less memory.
* SAT-based minimization of automata can now be done using two
incremental techniques that take a solved minimization and attempt
to forbid the use of some states. This is done either by adding
clauses, or by using assumptions.
* If the environment variable "SPOT_XCNF" is set during incremental
SAT-based minimization, XCNF files suitable for the incremental SAT
competition will be generated. This requires the use of an external
SAT solver, setup with "SPOT_SATSOLVER". See spot-x(7).
* The new function mp_class(f) returns the class of the formula
f in the temporal hierarchy of Manna & Pnueli.
Python:
* spot.show_mp_hierarchy() can be used to display the membership of
a formula to the Manna & Pnueli hierarchy, in notebooks. An
example is in https://spot.lrde.epita.fr/ipynb/formulas.html
* The on-line translator will now display the temporal hierarchy
in the "Formula > property information" output.
Bugs fixed:
* The minimize_wdba() function was not correctly minimizing automata
with useless SCCs. This was not an issue for the LTL translation
(where useless SCCs are always removed first), but it was an issue
when deciding if a formula was safety or guarantee. As a
consequence, some tricky safety or guarantee properties were only
recognized as obligations.
* When ltlcross was running a translator taking the Spin syntax as
input (%s) it would not automatically relabel any unsupported
atomic propositions, as ltldo already does.
* When running "autfilt --sat-minimize" on an automaton representing
an obligation property, the result would always be a complete
automaton even without the -C option.
* ltlcross --products=0 --csv should not output any product-related
column in the CSV output since it has nothing to display there.
--
Alexandre Duret-Lutz
Hello,
After more than 3 years of hyper-sleep, I'm happy to announce the
release of FiXme version 4.3.
What's new in this version:
* Add a paragraph about the duplication of notes in captions
* Update support for the KOMA-Script classes to the tocbasic interface
* Separate inline notes from the text they follow
* Fix potential color leakage of inline layouts
* Fix several bracket parsing problems
Grab it directly here:
http://www.lrde.epita.fr/~didier/software/latex.php#fixme
or wait until it propagates through CTAN...
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info
Dear colleagues,
The next session of the Performance and Genericity seminar of the LRDE
(EPITA's Research and Development Laboratory) will take place on
Wednesday, January 18, 2017 (11am-12pm), room L0 of the LRDE.
On the seminar website [1] you will find the upcoming sessions, the
abstracts, video recordings and slides of previous talks [2], the
details of this session [3], as well as directions [4].
[1] http://seminaire.lrde.epita.fr
[2] http://seminaire.lrde.epita.fr/Archives
[3] http://seminaire.lrde.epita.fr/2017-01-18
[4] http://www.lrde.epita.fr/Contact
Program for Wednesday, January 18, 2017:
* 11am-12pm: Topological data analysis for scientific visualization:
where do we stand and where are we headed?
-- Julien Tierny - CNRS - LIP6 - UPMC
http://lip6.fr/Julien.Tierny
Scientific visualization is a field that aims at helping users (i)
represent, (ii) explore and (iii) analyze acquired or simulated
geometric data, for interpretation, validation or communication
purposes. Among the existing techniques, algorithms inspired by Morse
theory have demonstrated their usefulness in this context for the
efficient and robust extraction of structures of interest, at several
scales of importance.
In this presentation, I will give a brief tutorial on the topological
analysis of scalar fields, introducing a few key concepts such as the
Reeb graph, the Morse-Smale complex and the persistence diagram.
I will illustrate these notions with concrete application cases in
astrophysics, molecular chemistry and combustion. I will then discuss
some practical issues that have recently emerged with the development
of high-performance computing resources. These issues induce not only
datasets of unprecedented size, but also new kinds of data, such as
multivariate or uncertain scalar fields. These difficulties are
interesting for the research community not only because of their
strong practical importance, but also because they require a complete
restart of the research effort undertaken in this field over the last
twenty years. In particular, I will present new research directions,
supported by recent preliminary results, on topological analysis in a
high-performance computing context and on the topological analysis of
uncertain or bivariate scalar fields.
-- Julien Tierny received a PhD in computer science from Université
Lille 1 in 2008, and his Habilitation à Diriger des Recherches from
Université Pierre-et-Marie-Curie in 2016. Since September 2014, he has
been a permanent CNRS researcher, affiliated with the LIP6 laboratory
(UPMC), after having been a researcher at Télécom ParisTech between
2010 and 2014. Before that, as a Fulbright fellow, he was a
post-doctoral research associate at the Scientific Computing and
Imaging Institute of the University of Utah. His research interests
include topological and geometric data analysis for scientific
visualization. In 2016 he received an Honorable Mention at the IEEE
Scientific Visualization Contest, as well as the best paper award of
the IEEE VIS conference.
Attendance is free. Please circulate this information as widely as
possible. Do not hesitate to send us your suggestions for speakers.
--
Guillaume TOCHON
Maître de conférences // Assistant Professor
LRDE, EPITA
18, rue Pasteur
94270 Le Kremlin-Bicêtre
_______________________________________________
Seminaire mailing list
Seminaire(a)lrde.epita.fr
https://lists.lrde.epita.fr/listinfo/seminaire
I am happy to announce that the following paper has been
accepted at the 25th Euromicro International Conference on Parallel,
Distributed, and Network-Based Processing (PDP 2017) to be held in St Petersburg,
Russia on March 6-8.
Title and authors:
Parallel Satisfiability Solver Based on Hybrid Partitioning Method
Tarek Menour (1,3), Souheib Baarir (1,2,3).
(1) Paris Ouest Nanterre la Défense University
(2) LRDE, EPITA, Kremlin-Bicêtre, France
(3) LIP6 Laboratory, CNRS UMR 7606, Paris, France
Abstract :
This paper presents a hybrid partitioning method used to improve the performance of solving a Satisfiability (SAT) problem. The principle of our approach consists first in applying a static partitioning to decompose the search tree into a finite set of disjoint sub-trees, then assigning each sub-tree to one computing core. However, it is not easy to choose the relevant branching variables for partitioning the search tree. We propose in this context to partition the search tree according to the variables that occur more frequently than others. The advantage of this method is that it yields well-separated disjoint sub-trees; its drawback, however, is the load imbalance between computing cores.
To overcome this drawback, we propose, as a novelty, to extend the static partitioning with a new dynamic partitioning that ensures a good load balance between cores. Each time a new waiting core is detected, the dynamic partitioning automatically selects, using an estimation function, the computing core that has the most work left, and dynamically splits its sub-tree in two parts: it keeps one part and gives the other to the waiting core. Preliminary results show that a good speedup is achieved with our hybrid method.
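A minimal sketch of the static step, assuming a CNF encoded as lists of signed integer literals (the helper below is illustrative, not the paper's code):

```python
from collections import Counter
from itertools import product

def static_partition(clauses, n_vars_to_fix):
    """Split a CNF problem into disjoint sub-problems.

    Branch on the variables occurring most frequently (the paper's
    static-partitioning criterion); each assignment of those variables
    yields one disjoint sub-tree, to be handed to one computing core.
    """
    counts = Counter(abs(lit) for clause in clauses for lit in clause)
    branch_vars = [v for v, _ in counts.most_common(n_vars_to_fix)]
    partitions = []
    for signs in product((1, -1), repeat=len(branch_vars)):
        # One sub-problem per assignment: fix each chosen variable.
        partitions.append([s * v for s, v in zip(signs, branch_vars)])
    return partitions

# Variable 1 occurs most often, so the search tree is split on it.
clauses = [[1, 2], [-1, 3], [1, -3]]
print(static_partition(clauses, 1))  # [[1], [-1]]
```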
ELS'17 - 10th European Lisp Symposium
VUB - Vrije Universiteit Brussel
Belgium
April 3-4, 2017
In co-location with <Programming> 2017
Sponsored by Brunner Software GmbH
http://www.european-lisp-symposium.org/
The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.
The 10th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.
Topics include but are not limited to:
- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies
We invite submissions in the following forms:
Papers: Technical papers of up to 8 pages that describe original
results or explain known ideas in new and elegant ways.
Demonstrations: Abstracts of up to 2 pages for demonstrations of
tools, libraries, and applications.
Tutorials: Abstracts of up to 4 pages for in-depth presentations
about topics of special interest for at least 90 minutes and up to
180 minutes.
The symposium will also provide slots for lightning talks, to be
registered on-site every day.
All submissions should be formatted following the ACM SIGS guidelines
and include ACM classification categories and terms. For more
information on the submission guidelines and the ACM keywords, see:
http://www.acm.org/sigs/publications/proceedings-templates and
http://www.acm.org/about/class/1998. The conference proceedings will be
published in the ACM Digital Library.
Submissions should be uploaded to Easy Chair, at the following address:
https://www.easychair.org/conferences/?conf=els17
Note: to help us with the review process please indicate the type of
submission by entering either "paper", "demo", or "tutorial" in the
Keywords field.
Important dates:
- 30 Jan 2017 Submission deadline
- 27 Feb 2017 Notification of acceptance
- 20 Mar 2017 Final papers due
- 03-04 Apr 2017 Symposium
Programme chair:
Alberto Riva, University of Florida, USA
Programme committee:
Marco Antoniotti, Università Milano Bicocca, Italy
Marc Battyani, FractalConcept
Theo D'Hondt, Vrije Universiteit Brussel, Belgium
Marc Feeley, Université de Montreal, Canada
Stelian Ionescu, Google
Rainer Joswig, Independent Consultant, Germany
António Menezes Leitão, Technical University of Lisbon, Portugal
Nick Levine, RavenPack
Henry Lieberman, MIT, USA
Mark Tarver, Shen Programming Group
Jay McCarthy, University of Massachusetts Lowell, USA
Christian Queinnec, Université Pierre et Marie Curie, France
François-René Rideau, Bridgewater Associates, USA
Nikodemus Siivola, ZenRobotics Ltd
Alessio Stalla, Università degli Studi di Genova, Italy
Search Keywords:
#els2017, ELS 2017, ELS '17, European Lisp Symposium 2017,
European Lisp Symposium '17, 10th ELS, 10th European Lisp Symposium,
European Lisp Conference 2017, European Lisp Conference '17
--
Resistance is futile. You will be jazzimilated.
Lisp, Jazz, Aïkido: http://www.didierverna.info