Hello,
we are pleased to announce the release of issue #34 of the LRDE newsletter.
This is a special back-to-school issue that introduces all of the LRDE's
permanent staff.
You will also find an overview of the LRDE's activities and of the CSI major,
to which all EPITA students joining the lab belong.
You can download the color newsletter from the following page:
https://www.lrde.epita.fr/wiki/Latest_issue
--
Daniela Becker
Administrative manager of the LRDE
I am happy to announce that the following paper has been accepted at
the 20th International Conference on Logic for Programming,
Artificial Intelligence and Reasoning (LPAR), to be held at the
University of the South Pacific (Suva, Fiji) on 24–28 November 2015.
SAT-based Minimization of Deterministic ω-Automata
Souheib Baarir¹, Alexandre Duret-Lutz¹
¹LRDE, EPITA, Le Kremlin-Bicêtre, France
https://www.lrde.epita.fr/wiki/Publications/baarir.15.lpar
Abstract:
We describe a tool that inputs a deterministic ω-automaton with any
acceptance condition, and synthesizes an equivalent ω-automaton with
another arbitrary acceptance condition and a given number of states,
if such an automaton exists. This tool, which relies on a SAT-based
encoding of the problem, can be used to provide minimal ω-automata
equivalent to given properties, for different acceptance conditions.
--
Alexandre Duret-Lutz
I am happy to announce that the following paper has been accepted at
the 20th International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR'15):
SAT-based Minimization of Deterministic ω-Automata
Souheib Baarir and Alexandre Duret-Lutz.
LRDE, EPITA, Le Kremlin-Bicêtre, France
Cheers,
Soheib.
Hello,
we are pleased to announce the release of issue #33 of the LRDE newsletter.
This special issue is dedicated to the seminars of the LRDE's student
researchers, which will take place on Monday, July 6, 2015.
You will find the program of the two half-day sessions, along with the
abstracts of the presentations.
We also introduce the new interns and the future PhD students who have
joined us this year.
You can download the color newsletter from the following page:
http://www.lrde.epita.fr/wiki/LrdeBulletin/l-air-de-rien-33
--
Daniela Becker
Hello,
I'm happy to announce the release of Declt version 1.1. Declt is a
reference manual generator for Common Lisp ASDF systems. Get it here:
https://www.lrde.epita.fr/~didier/software/lisp/misc.php#declt
New in this release:
* Declt now properly documents complex system and component dependencies
(:feature, :version and :require notably),
* Declt advertises a system's :if-feature conditional if any,
* Finally, and more importantly, Declt is no longer limited to
documenting a single system. Given a main system to document, any
subsystem (that is, any other system it depends on that is part of
the same distribution) will be documented in the same reference
manual.
General Description:
Declt (pronounce “dec’let”) is a reference manual generator for Common
Lisp libraries. It works by loading an ASDF system and introspecting its
contents. The generated documentation contains the description of the
system itself and its local dependencies (other systems in the same
distribution): components (modules and files), packages and definitions
found in those packages.
Exported and internal definitions are listed separately. This allows the
reader to get a quick view of the library's public API. Within each
section, definitions are sorted lexicographically.
In addition to ASDF system components and packages, Declt documents the
following definitions: constants, special variables, symbol macros,
macros, setf expanders, compiler macros, functions (including setf
ones), generic functions and methods (including setf ones), method
combinations, conditions, structures, classes and types.
The generated documentation includes every bit of information that
introspection can provide: documentation strings, lambda lists
(including qualifiers and specializers where appropriate), slots
(including type, allocation and initialization arguments), definition
source file, etc.
Every documented item provides a full set of cross-references to related
items: ASDF component dependencies, parents and children, classes'
direct methods, superclasses and subclasses, slot readers and writers,
setf expanders' access and update functions, etc.
Finally, Declt produces exhaustive and multiple-entry indexes for every
documented item.
Reference manuals are generated in Texinfo format (compatible with,
but not requiring, Texinfo 5). From there it is possible to produce readable /
printable output in info, HTML, PDF, DVI and PostScript with tools such
as makeinfo, texi2dvi or texi2pdf.
The primary example of documentation generated by Declt is the Declt
reference manual itself.
--
My new Jazz CD entitled "Roots and Leaves" is out!
Check it out: http://didierverna.com/records/roots-and-leaves.php
Lisp, Jazz, Aïkido: http://www.didierverna.info
Spot is a C++11 library for ω-automata manipulation and model checking.
This release contains 713 patches contributed over the last 18 months
by Thibaud Michaud, Étienne Renault, Alexandre Lewkowicz, and myself.
As the name suggests, this release reflects huge progress towards
Spot 2.0, but we are not quite there yet: the one part of this
release you should not consider stable is the C++ API, which will
still change a lot as we march towards version 2.0.
This release also comes with a new web site at
https://spot.lrde.epita.fr/
and Debian packages. See
https://spot.lrde.epita.fr/install.html
for installation instructions, or grab the source tarball directly at
https://www.lrde.epita.fr/dload/spot/spot-1.99.1.tar.gz
New in spot 1.99.1 (2015-06-23)
* Major changes motivating the jump in version number
- Spot now works with automata that can represent more than
generalized Büchi acceptance. Older versions were built around
the concept of TGBA (Transition-based Generalized Büchi
Automata) while this version now deals with what we call TωA
(Transition-based ω-Automata). TωA support arbitrary acceptance
conditions specified as a Boolean formula of transition sets
that must be visited infinitely often or finitely often. This
genericity allows for instance to represent Rabin or Streett
automata, as well as some generalized variants of those.
- Spot has near-complete support for the Hanoi Omega Automata
format (http://adl.github.io/hoaf/). This format supports
automata with the generic acceptance conditions described above,
and has been implemented in a number of third-party tools (see
http://adl.github.io/hoaf/support.html) to ease their
interaction. The only part of the format not yet implemented in
Spot is the support for alternating automata.
- Spot now compiles in C++11 mode. The set of C++11 features
we use requires GCC >= 4.8 or Clang >= 3.5. Although GCC 4.8 is
more than two years old, people with older installations won't be
able to install this version of Spot.
- As a consequence of the switches to C++11 and to TωA, a lot of
the existing C++ interfaces have been renamed, and sometimes
reworked. As a result, this version of Spot is not backward
compatible with Spot 1.2.x. See below for the most important
API changes. Furthermore, the reason this release is not
called Spot 2.0 is that we have more such changes planned.
- Support for Python 2 has been dropped. We now support only Python
3.2 or later. The Python bindings have been improved a lot, and
include some convenience functions for better integration with
IPython's rich display system. Users familiar with IPython's
notebooks should have a look at the notebook files in
wrap/python/tests/*.ipynb
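For intuition, the generic acceptance conditions described in the first item
above can be sketched in a few lines of Python. This is purely illustrative
and is not Spot's API: an acceptance condition is a Boolean formula over
Inf(i)/Fin(i) atoms, evaluated against the set of acceptance marks a run
visits infinitely often.

```python
# Illustrative sketch of generic acceptance conditions (NOT Spot's API).
# A run is accepting if the formula holds for the set of marks it
# visits infinitely often.

class Inf:
    """Mark i must be visited infinitely often."""
    def __init__(self, i):
        self.i = i
    def accepting(self, inf_marks):
        return self.i in inf_marks

class Fin:
    """Mark i must be visited only finitely often."""
    def __init__(self, i):
        self.i = i
    def accepting(self, inf_marks):
        return self.i not in inf_marks

class And:
    def __init__(self, *subs):
        self.subs = subs
    def accepting(self, inf_marks):
        return all(s.accepting(inf_marks) for s in self.subs)

class Or:
    def __init__(self, *subs):
        self.subs = subs
    def accepting(self, inf_marks):
        return any(s.accepting(inf_marks) for s in self.subs)

# One Rabin pair is Fin(0) & Inf(1); one Streett pair is Fin(0) | Inf(1).
rabin1 = And(Fin(0), Inf(1))
streett1 = Or(Fin(0), Inf(1))
```

A run visiting only mark 1 infinitely often satisfies both conditions, while
a run visiting marks 0 and 1 infinitely often satisfies the Streett pair but
not the Rabin pair.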
* Major news for the command-line tools
- The set of tools installed by spot now consists of the following
11 commands. Those marked with a '+' are new in this release.
- randltl Generate random LTL/PSL formulas.
- ltlfilt Filter and convert LTL/PSL formulas.
- genltl Generate LTL formulas from scalable patterns.
- ltl2tgba Translate LTL/PSL formulas into Büchi automata.
- ltl2tgta Translate LTL/PSL formulas into Testing automata.
- ltlcross Cross-compare LTL/PSL-to-Büchi translators.
+ ltlgrind Mutate LTL/PSL formulas.
- dstar2tgba Convert deterministic Rabin or Streett automata into Büchi.
+ randaut Generate random automata.
+ autfilt Filter and convert automata.
+ ltldo Run LTL/PSL formulas through other tools.
randaut needs no introduction: it does what you expect.
ltlgrind is a new tool that mutates LTL or PSL formulas. If you
have a tool that is bogus on some formula that is too large to
debug, you can use ltlgrind to generate smaller derived formulas
and see if you can reproduce the bug on those.
autfilt is a new tool that processes a stream of automata. It
allows format conversion, filtering automata based on some
properties, and general transformations (e.g., change of
acceptance conditions, removal of useless states, product
between automata, etc.).
ltldo is a new tool that runs LTL/PSL formulas through other
tools, but uses Spot's command-line interfaces for specifying
input and output. This makes it easier to use third-party tools
in a pipe, and it also takes care of any necessary format
conversions.
- ltl2tgba has a new option, -U, to produce unambiguous automata.
In unambiguous automata any word is recognized by at most one
accepting run, but there might be several ways to reject a word.
This works for LTL and PSL formulas.
- ltl2tgba has a new option, -S, to produce generalized-Büchi
automata with state-based acceptance. Those are obtained by
converting some transition-based GBA into a state-based GBA, so
they are usually not as small as one would wish. The same
option -S is also supported by autfilt.
- ltlcross will work with translators producing automata with any
acceptance condition, provided the output is in the HOA format.
So it can effectively be used to validate tools producing Rabin
or Streett automata.
- ltlcross has several new options:
--grind attempts to reduce the size of any bogus formula it
discovers, while still exhibiting the bug.
--ignore-execution-failures ignores cases where a translator
exits with a non-zero status.
--automata saves the produced automata into the CSV or JSON
file. Those automata are saved using the HOA format.
ltlcross will also output two extra columns in its CSV/JSON
output: "ambiguous_aut" and "complete_aut" are Booleans
that respectively tell whether the automaton is
ambiguous and complete.
- "ltlfilt --stutter-invariant" will now work with PSL formulas.
The new implementation is much more efficient
than our previous implementation, which handled only LTL.
- ltlfilt's old -q/--quiet option has been renamed to
--ignore-errors. The new -q/--quiet semantic is the
same as in grep (and also autfilt): disable all normal
output, for situations where only the exit status matters.
- ltlfilt's old -n/--negate option can now only be used as
--negate. The short '-n NUM' option is now the same as the new
--max-count=NUM option, for consistency with other tools.
- ltlfilt has a new --count option to count the number of matching
formulas.
- ltlfilt has a new --exclusive-ap option to constrain formulas
based on a list of mutually exclusive atomic propositions.
- ltlfilt has a new option --define to be used in conjunction with
--relabel or --relabel-bool to print the mapping between old and
new labels.
- all tools that produce formulas or automata now have an --output
(a.k.a. -o) option to redirect that output to a file instead of
standard output. The name of this file can be constructed using
the same %-escape sequences that are available for --stats or
--format.
- all tools that output formulas have a -0 option to separate
formulas with \0. This helps in conjunction with xargs -0.
- all tools that output automata have a --check option that
requests extra checks be performed on the output to fill
in property values for the HOA format. This option
implies -H for HOA output. For instance
ltl2tgba -H 'formula'
will declare the output automaton as 'stutter-invariant'
only if the formula is syntactically stutter-invariant
(e.g., in LTL\X). With
ltl2tgba --check 'formula'
additional checks will be performed, and the automaton
will be accurately marked as either 'stutter-invariant'
or 'stutter-sensitive'. Another check performed by
--check is testing whether the automaton is unambiguous.
- ltlcross and ltldo have a list of hard-coded shorthands
for some existing tools. So for instance running
'ltlcross spin ...' is the same as running
'ltlcross "spin -f %s>%N" ...'. This feature is much
more useful for ltldo.
- For options that take an output filename (i.e., ltlcross's
--save-bogus, --grind, --csv, --json) you can force the file
to be opened in append mode (instead of being truncated) by
prefixing the filename with ">>". For instance
--save-bogus=">>bugs.ltl"
will append to the end of the file.
* Other noteworthy news
- The web site moved to http://spot.lrde.epita.fr/.
- We now have Debian packages.
See http://spot.lrde.epita.fr/install.html
- The documentation now includes some simple code examples
for both Python and C++. (This is still a work in progress.)
- The customized version of BuDDy (libbdd) used by Spot has been
renamed (libbddx) to avoid issues with copies of BuDDy
already installed on the system.
- There is a parser for the HOA format
(http://adl.github.io/hoaf/) available as a
spot::automaton_stream_parser object or spot::parse_aut()
function. The former version is able to parse a stream of
automata in order to do batch processing. This format can be
output by all tools (since Spot 1.2.5) using the --hoa option,
and it can be read by autfilt (by default) and ltlcross (using
the %H specifier). The current implementation does not support
alternation. Multiple initial states are converted into an
extra initial state; complemented acceptance sets Inf(!x) are
converted to Inf(x); explicit or implicit labels can be used;
aliases are supported; "--ABORT--" can be used in a stream.
- The above HOA parser can also parse never claims, and LBTT
automata, so the never claim parser and the LBTT parser have
been removed. This implies that autfilt can input a mix of HOA,
never claims, and LBTT automata. ltlcross also uses the same
parser for all these outputs, and the old %T and %N specifiers
have been deprecated and replaced by %O (for output).
- While not all algorithms in the library are able to work with
any acceptance condition supported by the HOA format, the
following two new functions mitigate that:
- remove_fin() takes a TωA whose acceptance condition uses the
Fin(x) primitive, and produces an equivalent TωA without Fin(x):
i.e., the output acceptance is a disjunction of generalized
Büchi acceptance conditions. This type of acceptance is
supported by the SCC-based emptiness check, for instance.
- similarly, to_tgba() converts any TωA into an automaton with
generalized-Büchi acceptance.
- randomize() is a new algorithm that randomly reorders the states
and transitions of an automaton. It can be used from the
command-line using "autfilt --randomize".
- the interface in iface/dve2 has been renamed to iface/ltsmin,
because it can now interface with the dynamic libraries created
either by Divine (as patched by the LTSmin group) or by
Spins (the LTSmin compiler for Promela).
- LTL/PSL formulas can include /* comments */.
- PSL SEREs support a new operator [:*i..j], the iterated fusion.
[:*i..j] is to the fusion operator ':' what [*i..j] is to the
concatenation operator ';'. For instance (a*;b)[:*3] is the
same as (a*;b):(a*;b):(a*;b). The operator [:+] is syntactic
sugar for [:*1..], and corresponds to the operator ⊕ introduced
by Dax et al. (ATVA'09).
- GraphViz output now uses a horizontal layout by default, and
also uses circular states (unless the automaton has more than 100
states, or uses named states). The --dot option of the various
command-line tools takes an optional parameter to fine-tune the
GraphViz output (including vertical layout, forced circular or
elliptic states, named automata, SCC information, ordered
transitions, and different ways to colorize the acceptance
sets). The environment variables SPOT_DOTDEFAULT and
SPOT_DOTEXTRA can also be used to respectively provide a default
argument to --dot, and add extra attributes to the output graph.
- Never claims can now be output in the style used by Spin since
version 6.2.4 (i.e., using do..od instead of if..fi, and with
atomic statements for terminal acceptance). The default output
is still the old one for compatibility with existing tools. The
new style can be requested from command-line tools using option
--spin=6 (or -s6 for short).
- Support for building unambiguous automata. ltl_to_tgba() has a
new option to produce unambiguous TGBA (used by ltl2tgba -U as
discussed above). The function is_unambiguous() will check
whether an automaton is unambiguous, and this is used by
autfilt --is-unambiguous.
- The SAT-based minimization algorithm for deterministic automata
has been updated to work with ω-automata with any acceptance
condition. The input and the output acceptance can differ, so
for instance it is possible to create a minimal deterministic
Streett automaton starting from a deterministic Rabin automaton.
This functionality is available via autfilt's --sat-minimize
option. See doc/userdoc/satmin.html for details.
- The on-line interface at http://spot.lrde.epita.fr/trans.html
can be used to check stutter-invariance of any LTL/PSL formula.
- The on-line interface will work around atomic propositions not
supported by ltl3ba. (E.g. you can now translate F(A) or
G("foo < bar").)
* Noteworthy code changes
- Boost is not used anymore.
- Automata are now manipulated exclusively via shared pointers.
- Most of what was called tgba_something is now called
twa_something, unless it is really meant to work only for TGBA.
This includes function, class, file, and directory names.
For instance the class tgba, originally defined in tgba/tgba.hh,
has been replaced by the class twa, defined in twa/twa.hh.
- the tgba_explicit class has been completely replaced by a more
efficient twa_graph class. Many of the algorithms that were
written against the abstract tgba (now twa) interface have been
rewritten using twa_graph instances as input and output, making
the code a lot simpler.
- The tgba_succ_iterator (now twa_succ_iterator) interface has
changed. Methods next() and first() now return a bool
indicating whether the current iteration is valid.
- The twa base class has a new method, release_iter(), that can
be called to give a used iterator back to its automaton. This
iterator is then stored in a protected member, iter_cache_, and
all implementations of succ_iter() can be updated to recycle
iter_cache_ (if available) instead of allocating a new iterator.
- The tgba (now called twa) base class has a new method, succ(),
supporting C++11's range-based for loop and hiding all of the
above changes.
Instead of the following syntax:

    tgba_succ_iterator* i = aut->succ_iter(s);
    for (i->first(); !i->done(); i->next())
      {
        // use i->current_state()
        //     i->current_condition()
        //     i->current_acceptance_conditions()
      }
    delete i;

we now prefer:

    for (auto i: aut->succ(s))
      {
        // use i->current_state()
        //     i->current_condition()
        //     i->current_acceptance_conditions()
      }

The above syntax is really just syntactic sugar for

    twa_succ_iterator* i = aut->succ_iter(s);
    if (i->first())
      do
        {
          // use i->current_state()
          //     i->current_condition()
          //     i->current_acceptance_conditions()
        }
      while (i->next());
    aut->release_iter(i); // allow the automaton to recycle the iterator

where the virtual calls to done() and delete have been avoided.
- twa::succ_iter() now takes only one argument. The optional
global_state and global_automaton arguments have been removed.
- The following methods have been removed from the TGBA interface and
all their subclasses:
- tgba::support_variables()
- tgba::compute_support_variables()
- tgba::all_acceptance_conditions() // use acc().accepting(...)
- tgba::neg_acceptance_conditions()
- tgba::number_of_acceptance_conditions() // use acc().num_sets()
- Membership to acceptance sets is now stored using bit sets,
currently limited to 32 bits. Each TωA has an acc() method that
returns a reference to an acceptance object (of type
spot::acc_cond), able to operate on acceptance marks
(spot::acc_cond::mark_t).
Instead of writing code like

    i->current_acceptance_conditions() == aut->all_acceptance_conditions()

we now write

    aut->acc().accepting(i->current_acceptance_conditions())

(Note that for accepting(x) to return something meaningful, x
should be a set of acceptance sets visited infinitely often; so
let's imagine that in the above example i is looking at a
self-loop.)
Similarly,

    aut->number_of_acceptance_conditions()

is now

    aut->acc().num_sets()
- All functions used for printing LTL/PSL formulas or automata
have been renamed to print_something(). Likewise the various
parsers should be called parse_something() (they have not yet
all been renamed).
- All test suites under src/ have been merged into a single one in
src/tests/. The testing tool that was called
src/tgbatest/ltl2tgba has been renamed as src/tests/ikwiad
(short for "I Know What I Am Doing") so that users should be
less tempted to use it instead of src/bin/ltl2tgba.
* Removed features
- The long unused interface to GreatSPN (or rather, interface to
a non-public, customized version of GreatSPN) has been removed.
As a consequence, we could get rid of a lot of cruft in the
implementation of Couvreur's FM'99 emptiness check.
- Support for symbolic, BDD-encoded TGBAs has been removed. This
includes the tgba_bdd_concrete class and associated supporting
classes, as well as the ltl_to_tgba_lacim() LTL translation
algorithm. Historically, this TGBA implementation and LTL
translation were the first to be implemented in Spot (by
mistake!) and this resulted in many bad design decisions. In
practice they were of no use as we only work with explicit
automata (i.e. not symbolic) in Spot, and those produced by
these techniques are simply too big.
- All support for ELTL, i.e., LTL extended with operators
represented by automata, has been removed. It was never used in
practice because it had no practical user interface, and the
translation was a purely BDD-based encoding producing huge
automata (when viewed explicitly), relying on the aforementioned,
no longer supported tgba_bdd_concrete class.
- Our implementation of the Kupferman-Vardi complementation has
been removed: it was unused in practice because of the size of
the automata built, and it did not survive the conversion of
acceptance sets from BDDs to bitsets.
- The unused implementation of state-based alternating Büchi
automata has been removed.
- Input and output in the old, Spot-specific, text format for
TGBA, has been fully removed. We now use HOA everywhere. (In
case you have a file in this format, install Spot 1.2.6 and use
"src/tgbatest/ltl2tgba -H -X file" to convert the file to HOA.)
--
Alexandre Duret-Lutz
I am happy to announce that the following paper has been accepted to the
13th International Conference on Document Analysis and Recognition (ICDAR)
that will take place in Gammarth, Tunisia, on August 23-26, 2015:
Using histogram representation and Earth Mover's Distance
as an evaluation tool for text detection
Stefania Calarasanu (1), Jonathan Fabrizio (1) and Séverine Dubuisson (2)
(1) LRDE-EPITA, 14-16 rue Voltaire, F-94276 Le Kremlin-Bicêtre, France
(2) CNRS, UMR 7222, ISIR, F-75005, Paris, France
Abstract:
In the context of text detection evaluation, it is essential to use
protocols that are capable of describing both the quality and the
quantity aspects of detection results. In this paper we propose
a novel visual representation and evaluation tool that captures the
whole nature of a detector by using histograms. First, two histograms
(coverage and accuracy) are generated to visualize the different
characteristics of a detector. Secondly, we compare these two
histograms to a so-called optimal one to compute representative
and comparable scores. To do so, we introduce the usage of the
Earth Mover's Distance as a reliable evaluation tool to estimate recall
and precision scores. Results obtained on the ICDAR 2013 dataset
show that this method intuitively characterizes the accuracy of a
text detector and gives at a glance various useful characteristics
of the analyzed algorithm.
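As a rough illustration of the comparison step above (a sketch under
assumptions, not the authors' code): in one dimension, the Earth Mover's
Distance between two normalized histograms that share the same bin layout
reduces to accumulating the absolute differences of their cumulative sums.

```python
def emd_1d(h1, h2):
    """1-D Earth Mover's Distance between two normalized histograms
    with the same bin layout: sum, over bins, of the mass that must
    be carried to the next bin (absolute CDF differences)."""
    assert len(h1) == len(h2)
    total = 0.0
    carry = 0.0
    for a, b in zip(h1, h2):
        carry += a - b          # mass surplus pushed to the next bin
        total += abs(carry)     # cost of moving that surplus one bin
    return total

# Compare a detector's (hypothetical) coverage histogram to an
# "optimal" one where all detections fall in the best bin.
coverage = [0.1, 0.2, 0.3, 0.4]
optimal = [0.0, 0.0, 0.0, 1.0]
score = emd_1d(coverage, optimal)
```

The closer the score is to zero, the closer the detector's histogram is to
the optimal one.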
Ana Stefania Calarasanu
___________________________________________________
PhD Student
EPITA Research and Development Laboratory (LRDE)
14-16 rue Voltaire, 94276 Le Kremlin-Bicêtre CEDEX, France
https://www.lrde.epita.fr/wiki/User:Calarasanu
I am happy to announce that the following paper has been accepted at
the 22nd IEEE International Conference on Image Processing (ICIP'2015),
to be held on September 27-30 in Québec City, Canada.
How to make nD images well-composed
without interpolation
Nicolas Boutry¹², Thierry Géraud¹, Laurent Najman²
¹ EPITA Research and Development Laboratory (LRDE)
² Université Paris-Est, LIGM, Équipe A3SI, ESIEE Paris
https://www.lrde.epita.fr/wiki/Publications/boutry.15.icip
Abstract:
Latecki et al. have introduced the notion of well-composed images,
i.e., a class of images free from the connectivities paradox of
discrete topology. Unfortunately, natural and synthetic images are
not a priori well-composed, usually leading to topological issues.
Making any nD image well-composed is interesting because,
afterwards, the classical connectivities of components are
equivalent, the component boundaries satisfy the Jordan separation
theorem, and so on. In this paper, we propose an algorithm able to make nD
images well-composed without any interpolation. We illustrate on
text detection the benefits of having strong topological properties.
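For intuition about the property discussed in the abstract (an illustrative
sketch, not the paper's algorithm), a 2D binary image is well-composed
exactly when no 2x2 block contains the "critical" diagonal configuration, in
which two foreground pixels touch only at a corner:

```python
def is_well_composed_2d(img):
    """Check 2-D well-composedness of a binary image (list of rows of
    0/1 pixels): no 2x2 block may contain the critical diagonal
    configuration where foreground pixels meet only at a corner."""
    h, w = len(img), len(img[0])
    for i in range(h - 1):
        for j in range(w - 1):
            block = (img[i][j], img[i][j + 1],
                     img[i + 1][j], img[i + 1][j + 1])
            if block in ((1, 0, 0, 1), (0, 1, 1, 0)):
                return False
    return True
```

For example, the 2x2 checkerboard [[1, 0], [0, 1]] is not well-composed,
whereas [[1, 1], [0, 1]] is.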
========================================================================
MeFoSyLoMa Seminar
http://www.mefosyloma.fr/
Formal Methods for Software and Hardware Systems
Friday, May 22, 2015, 14:00-17:00
Address:
LRDE (EPITA), Salle Alpha
Building X, 2nd floor
18 rue Pasteur, 94270 Le Kremlin-Bicêtre
Métro Porte d'Italie
Directions:
https://www.google.com/maps?q=48.8152808,2.3623765
https://www.lrde.epita.fr/~adl/dl/acces-lrde.pdf
If some of you plan to come by car, the parking lot of the Okabé
shopping center is free for the first 3 hours.
http://www.okabe.com/okabe/fr/okabe-parking
========================================================================
The MeFoSyLoMa seminar is jointly run by the laboratories
Cedric (Cnam), IBISC (Univ. Evry), LACL (Univ. Paris 12), LIP6 (UPMC),
LIPN (Univ. Paris 13), LRDE (Epita), LSV (École Normale Supérieure de
Cachan), and LTCI (TELECOM ParisTech). Its purpose is to allow the
confrontation of different approaches and points of view on the use
of formal methods in the fields of software engineering, circuit
design, distributed systems, real-time systems, and information
systems. It is organized around bimonthly meetings where recent
research work on these topics is presented.
========================================================================
Program
14:00-15:00: Ekkart Kindler, Technical University of Denmark,
"Coordinating Interactions: The Event Coordination Notation"
Abstract: The purpose of a domain model is to concisely capture the
concepts of an application's domain, and their relations to each
other. Even though the main purpose of domain models is not
implementing the application, major parts of an application can be
generated from the application's domain models fully automatically
with today's technologies. The focus of most of today's code
generation technologies, however, is on the structural aspects of
the domain; the domain's behaviour is often not modelled at all,
implemented manually based on some informal models, or modelled
on a much more technical level.
The Event Coordination Notation (ECNO) allows modelling the
behaviour of an application on a high level of abstraction that is
closer to the application's domain than to the software realizing
it. Still, these models contain all necessary details for actually
executing the models and for generating code from them.
In this talk, the limitations of today's modelling notations for
behaviour are briefly discussed. Then, the main idea, philosophy,
and concepts of ECNO and its notation are discussed -- mostly by
looking at some examples. The ECNO is now fully supported by a tool
that can generate code from ECNO models.
15:00-16:00: Ryszard Janicki, McMaster University,
"Modeling Concurrency With Interval Traces"
Abstract: Interval order structures are triples (X,≺,⊏), where X is
a set of event occurrences and ≺, ⊏ are abstractions of the
'earlier than' and 'not later than' relationships.
Interval order structures are useful tools to model abstract
concurrent histories, i.e. sets of equivalent system runs, when
system runs are modeled with interval orders and we want to express
not only standard causality but also 'not later than'.
It turns out that interval order structures can be modeled by
partially commutative monoids, called interval traces, which are a
special case of general Mazurkiewicz traces. This new model will
then be used to provide a full semantics of Petri nets with
inhibitor arcs.
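The two relations can be read directly off interval observations of events.
A small illustrative sketch (my notation, not the speaker's): if each event
occurrence is observed as an interval (start, end), then 'earlier than'
means one interval ends before the other starts, and 'not later than' means
one starts before the other ends.

```python
def interval_relations(intervals):
    """Derive the 'earlier than' (prec) and 'not later than' (nlt)
    relations from interval observations of event occurrences.
    intervals: dict mapping event name -> (start, end)."""
    prec, nlt = set(), set()
    for e, (es, ee) in intervals.items():
        for f, (fs, fe) in intervals.items():
            if e == f:
                continue
            if ee < fs:          # e is over before f begins
                prec.add((e, f))
            if es < fe:          # e begins before f is over
                nlt.add((e, f))
    return prec, nlt

# a finishes before b starts; c overlaps both.
prec, nlt = interval_relations({'a': (0, 1), 'b': (2, 3), 'c': (0, 3)})
```

Here a ≺ b, while a and c (which overlap) are each 'not later than' the
other without either preceding the other.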
16:00-16:30: coffee break
16:30-17:00: group business
========================================================================
Hello,
we're happy to announce that our paper entitled "Context-Oriented Image
Processing" has been accepted at ECOOP's Context-Oriented Programming
Workshop (COP'2015), to be held in Prague this July.
The abstract is given below.
Genericity aims at providing a very high level of abstraction in
order, for instance, to separate the general shape of an algorithm
from specific implementation details. Reaching a high level of
genericity through regular object-oriented techniques has two major
drawbacks, however: code cluttering (e.g. class / method
proliferation) and performance degradation (e.g. dynamic dispatch). In
this paper, we explore a potential use for the Context-Oriented
programming paradigm in order to maintain a high level of genericity
in an experimental image processing library, without sacrificing
either the performance or the original object-oriented design of the
application.
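A minimal sketch of the paradigm's core idea (illustrative Python; the
paper's actual library and its object-oriented design are not shown here):
a dynamically scoped "layer" selects an alternative behaviour without
touching the base code.

```python
# Toy context-oriented dispatch: a layer active in the current dynamic
# extent switches which implementation of a function runs.
from contextlib import contextmanager

_active_layers = set()

@contextmanager
def layer(name):
    """Activate a context layer for the dynamic extent of a block."""
    _active_layers.add(name)
    try:
        yield
    finally:
        _active_layers.discard(name)

def layered(variants):
    """Build a function that dispatches on the active layers.
    variants: dict mapping layer name -> implementation;
    a 'default' entry is required."""
    def dispatch(*args, **kwargs):
        for name, impl in variants.items():
            if name != 'default' and name in _active_layers:
                return impl(*args, **kwargs)
        return variants['default'](*args, **kwargs)
    return dispatch

# Context-dependent thresholding of a tiny grayscale "image"
# (hypothetical operation, chosen only to fit the image-processing theme).
threshold = layered({
    'default': lambda img: [[1 if p > 128 else 0 for p in row] for row in img],
    'inverted': lambda img: [[0 if p > 128 else 1 for p in row] for row in img],
})
```

Callers keep invoking `threshold` unchanged; activating the 'inverted'
layer around a call site swaps in the alternative behaviour, which is the
kind of separation the abstract contrasts with class/method proliferation.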