22nd European
Conference on
Artificial
Intelligence
29 Aug – 02 Sep

Accepted Demos

Wednesday, 12:10 – 13:30

e-Turist: An Intelligent Personalized Trip Guide

Authors: Božidara Cvetković, Hristijan Gjoreski, Vito Janko, Boštjan Kaluža, Anton Gradišek, Mitja Luštrek

Abstract: We present a personalized mobile trip guide that allows users to plan single- or multiple-day trips based on their preferences and constraints. It is composed of a trip planning module and a tour guidance module. The trip planning module consists of (i) a hybrid recommender system, which rates points of interest according to the user profile, and (ii) a route planning system, which filters and selects the optimal points of interest according to time and location, and thus creates a route to be used by the tour guidance module. The tour guidance module routes the user according to the trip plan and gives textual and audio descriptions of the visited points of interest. The e-Turist application is available for four smartphone platforms and as a web application (https://www.e-turist.si).
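The two-stage design described above — a recommender that scores points of interest, followed by a planner that selects a feasible subset — can be sketched as follows. This is an illustrative sketch only, not the authors' code; the scoring weights, the POI fields and the greedy selection under a time budget are all assumptions.

```python
# Illustrative sketch (not the e-Turist implementation): a hybrid recommender
# score combined with a greedy route planner under a time budget.

def hybrid_score(poi, profile, w_content=0.6, w_collab=0.4):
    """Blend a content-based interest match with a collaborative rating."""
    content = len(set(poi["tags"]) & set(profile["interests"])) / max(len(poi["tags"]), 1)
    return w_content * content + w_collab * poi["avg_rating"] / 5.0

def plan_route(pois, profile, time_budget):
    """Greedily pick the highest-scored POIs that fit the remaining time."""
    ranked = sorted(pois, key=lambda p: hybrid_score(p, profile), reverse=True)
    route, remaining = [], time_budget
    for poi in ranked:
        if poi["visit_minutes"] <= remaining:
            route.append(poi["name"])
            remaining -= poi["visit_minutes"]
    return route

pois = [
    {"name": "Castle", "tags": ["history"], "avg_rating": 4.5, "visit_minutes": 90},
    {"name": "Museum", "tags": ["art", "history"], "avg_rating": 4.0, "visit_minutes": 120},
    {"name": "Park", "tags": ["nature"], "avg_rating": 3.5, "visit_minutes": 60},
]
profile = {"interests": ["history"]}
print(plan_route(pois, profile, time_budget=180))  # ['Castle', 'Park']
```

With a 180-minute budget, the Castle scores highest and is taken first; the Museum no longer fits, so the Park fills the remaining time.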

Thursday, 12:10 – 13:30

SMACH: Simulation of Realistic Human Behaviors and Electrical Consumption

Authors: Reynaud Quentin, Sempé François, Haradji Yvon, Sabouret Nicolas

Abstract: The SMACH platform supports multi-agent modeling and simulation of virtual humans’ behaviors. It focuses on behaviors taking place inside the home, and it can deduce the load curve of the electrical consumption of each electrical appliance in the household. The simulations are entirely configurable: housing (type, surface, insulation, etc.), household (type, number of occupants, relations, etc.), virtual humans, electrical appliances, heating system, weather forecast, etc. It can quickly simulate any household for any period of time, from minutes to years.

It has been demonstrated that the platform produces realistic human behaviors and accurate electrical consumption, and we have a collection of scenarios that have been validated by the real humans modelled in the simulation.

Wednesday, 12:10 – 13:30

KNEWS: Using Logical and Lexical Semantics to Extract Knowledge from Natural Language

Authors: Valerio Basile, Elena Cabrio, Claudia Schon

Abstract: Machine Reading is the task of extracting formally encoded knowledge from natural language text. A complete machine reading tool is a step towards the construction of large repositories of general knowledge without having to rely on human-built resources. Moreover, a machine reading component can play an important role in other environments, helping to disambiguate predicate names, thus supporting selection of very focused background knowledge.
In this demo we present KNEWS, a pipeline of Natural Language Processing tools that accepts natural language text as input and outputs knowledge in a machine-readable format. The tool outputs frame-based knowledge as RDF triples or XML, including the word-level alignment with the surface form, as well as first-order logical formulae.

KNEWS is a pipeline system that can be configured to use different external modules; it provides different kinds of meaning representations as output, and its source code is freely available.

Friday, 12:10 – 13:20

A Tool for Negotiating Privacy Constraints in Online Social Networks

Authors: Dilara Keküllüoglu, Nadin Kökciyan, Pınar Yolum

Abstract: In online social networks, content about a user can be shared by different individuals without the explicit consent of that user. Hence, privacy violations are unfortunately common. We developed a negotiation system for privacy that enables users’ agents to exchange offers among themselves to agree on how a post should be shared; for example, whom it should be shown to. The agents use weighted semantic rules to represent users’ privacy constraints, and utility functions to generate offers. Using this system, agents of users can reach an agreement before a post is shared. We demonstrate the applicability of the system using three negotiation strategies that agents can follow. These strategies emphasize different aspects of privacy and serve to avoid privacy violations.
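The core idea — agents scoring candidate audiences with utility functions derived from weighted privacy constraints, and accepting the first mutually acceptable offer — can be illustrated with a minimal sketch. The names, weights and acceptance threshold here are hypothetical, not the authors' system.

```python
# Hypothetical sketch of utility-based offer evaluation (illustrative names,
# not the authors' implementation). Each agent assigns a weight to every
# audience group; a post is shared only with an audience both agents accept.

def utility(audience, weights):
    """Sum the agent's (possibly negative) weights over the shown groups."""
    return sum(weights.get(group, 0.0) for group in audience)

def negotiate(offers, weights_a, weights_b, threshold=0.0):
    """Return the first offered audience acceptable to both agents, else None."""
    for audience in offers:
        if utility(audience, weights_a) >= threshold and \
           utility(audience, weights_b) >= threshold:
            return audience
    return None

# Agent A wants friends to see the post; Agent B objects to colleagues seeing it.
weights_a = {"friends": 2.0, "colleagues": 0.5}
weights_b = {"friends": 1.0, "colleagues": -3.0}
offers = [{"friends", "colleagues"}, {"friends"}]
print(negotiate(offers, weights_a, weights_b))  # {'friends'}
```

The first offer is rejected because it yields negative utility for agent B; the narrower audience satisfies both agents, so the post is shared with friends only.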

Thursday, 12:10 – 13:30

Making Data Understand People

Authors: Lernout Stephen, Devos Geert, Platteau Frank

Abstract: The pain point Miia addresses in this paper is that older-generation tools such as Natural Language Processing, statistical keyword search and fuzzy logic do not deliver real text understanding. Their vendors struggle to deliver accurate quality, and this results in ill-functioning applications. Newer-generation methodologies such as Deep Learning and Cognitive Computing are breaking barriers in the (Big Data) fields of the Internet of Things, Robotics and Image/Video Recognition, but cannot be successfully deployed for text without huge amounts of training and sample data. In the short term, we believe non-biological Artificial Intelligence will produce the best results for text understanding. We applied advanced Linguistic and Semantic Technologies, combined with ConceptNet modeling and Machine Learning, to deliver deep, intelligent and cross-language quality to several industries.

Thursday, 12:10 – 13:30

The AMIDST toolbox: a Java library for scalable probabilistic machine learning

Authors: Andrés R. Masegosa, Ana M. Martínez, Darío Ramos-López, Thomas D. Nielsen, Helge Langseth, Antonio Salmerón, Anders L. Madsen

Abstract: AMIDST is a flexible Java library for probabilistic machine learning, which provides tailored parallel and distributed implementations of Bayesian parameter learning (and probabilistic inference) for batch and streaming data. This processing is based on flexible and scalable message passing algorithms. AMIDST handles probabilistic graphical models with latent variables and temporal dependencies, which can be trained on large-scale data (making use of Apache Spark and Apache Flink), and provides interfaces to a number of other platforms such as HUGIN, MOA, Weka and R. In this demonstration, some of AMIDST’s main functionalities will be shown. The construction and use of customized probabilistic models, possibly with latent variables and temporal dependencies, will be explained. The AMIDST toolbox has been developed within the AMIDST project (Analysis of MassIve Data STreams) of the European Union’s Seventh Framework Programme, under grant agreement no. 619209.

Wednesday, 12:10 – 13:30

Demo for Continuous Live Stress Monitoring with a Wristband

Authors: Martin Gjoreski, Hristijan Gjoreski, Mitja Luštrek, Matjaž Gams

Abstract: We will demonstrate a method for continuous stress monitoring using data provided by a commercial wrist device (Microsoft Band) equipped with physiological sensors and an accelerometer. The method consists of three machine-learning components: a laboratory stress detector that detects short-term stress; an activity recognizer that continuously recognizes the user’s activity and thus provides context information; and a context-based stress detector that first aggregates the predictions of the laboratory detector and then exploits the user’s context to provide a decision for each 20-minute interval. The method was trained on 21 subjects in a laboratory setting and tested on 5 subjects in a real-life setting. The accuracy on 55 days of real-life data was 92%. The method is integrated in a smartphone application, which will be demonstrated at the conference.
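The context-based aggregation step can be sketched roughly as follows. This is an assumed illustration, not the authors' exact method: per-minute detector outputs are aggregated over a 20-minute window, and minutes of physical activity are discounted because exercise raises heart rate and skin conductance much like stress does.

```python
# Illustrative sketch (assumed, not the published method): aggregate per-minute
# stress predictions over a 20-minute window, using activity as context.

def window_decision(predictions, activities, vote_threshold=0.5):
    """Decide 'stress' or 'no_stress' for one 20-minute window."""
    assert len(predictions) == len(activities) == 20
    # Keep only minutes where the user was sedentary; physical activity
    # mimics the physiological signature of stress and is discounted.
    calm = [p for p, a in zip(predictions, activities) if a == "sedentary"]
    if not calm:
        return "no_stress"
    stressed_fraction = sum(calm) / len(calm)
    return "stress" if stressed_fraction >= vote_threshold else "no_stress"

preds = [1] * 12 + [0] * 8                 # detector fires in 12 of 20 minutes
acts = ["sedentary"] * 15 + ["walking"] * 5
print(window_decision(preds, acts))        # stress
```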

Thursday, 12:10 – 13:30

Interactive Exploration over Concept Lattices with LatViz

Authors: Mehwish Alam, Thi Nhu Nguyen Le, Amedeo Napoli

Abstract: In this demo paper, we introduce LatViz, a new tool which allows the construction, the display and the exploration of concept lattices. LatViz proposes some remarkable improvements over existing tools and introduces various new functionalities focusing on interaction with experts, such as visualization of pattern structures (for dealing with complex non-binary data), AOC-posets (the irreducible elements of the lattice), concept annotations, filtering based on various criteria and visualization of implications. This way the user can effectively perform interactive exploratory knowledge discovery as often needed in knowledge engineering, and especially in ontology engineering.

Friday, 12:10 – 13:20

Neonatologist at Home

Authors: Albert Pla-Planes, Natalia Mordvanyuk, Beatriz López, Abel López-Bermejo, Eva Bargalló, Cristina Armero

Abstract: Premature babies need special medical attention in the neonatal intensive care unit over a long period of time. Recent studies show that this time is shorter when spent in a familiar and loving environment. However, the vital signs of the baby must be closely monitored. To this end, mobile devices and artificial intelligence offer an invaluable service. NoaH is a mobile platform that helps parents, caregivers and hospitals remotely monitor premature babies, allowing families to return home earlier.

NoaH uses wireless smart sensors to gather data regarding the baby’s status and combines a rule-based system and a case-based reasoner to provide support to both parents and caregivers. Awards: this work was awarded the second prize in the Vall d’Hebron Research Institute (VHIR) Innovation Healthcare Contest (Barcelona 2015), and was a finalist in the eHealth award of the University eSante (Castres 2015).

Wednesday, 12:10 – 13:30

Using Machine Learning for Link Discovery on the Web of Data

Authors: Axel-Cyrille Ngonga Ngomo, Daniel Obraczka, Kleanthi Georgala

Abstract: Link discovery is of key importance during the creation of Linked Data. A large number of link discovery frameworks for RDF data have thus been created over the last years. In this demo, we present the machine-learning features of the novel graphical user interface of the LIMES framework, a state-of-the-art link discovery framework that implements a large number of machine-learning features. We will guide the participants through the demo and show how LIMES uses machine learning, from the selection of attributes to the linking of data. In particular, we will focus on how LIMES uses different paradigms such as batch learning, active learning and unsupervised learning to support link discovery. We will use both real and synthetic data to demonstrate the scalability of our implementations.
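At its simplest, link discovery compares entity labels across two RDF datasets and links pairs whose similarity exceeds a threshold (which frameworks like LIMES learn rather than fix by hand). The sketch below is a generic illustration, not LIMES itself; the datasets and the fixed threshold are assumptions.

```python
# Generic link-discovery sketch (not LIMES): score candidate pairs with a
# string similarity and link those above a threshold. In practice the
# threshold and the similarity measure would be learned, e.g. actively by
# asking a user to label the most uncertain pairs.
from difflib import SequenceMatcher

def similarity(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def discover_links(source, target, threshold=0.8):
    """Return candidate owl:sameAs pairs whose similarity exceeds threshold."""
    return [(s, t) for s in source for t in target
            if similarity(s, t) >= threshold]

source = ["Berlin", "Paris", "Leipzig"]
target = ["berlin", "Pariss", "London"]
print(discover_links(source, target))
```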

Friday, 12:10 – 13:20

Cooperative UAV-UGV modeled by Petri Net Plans specification

Authors: Andrea Bertolaso, Masoume M. Raeissi, Alessandro Farinelli, Riccardo Muradore

Abstract: A cooperative multi-robot plan is devised using Petri Net Plans (PNPs), where an unmanned aerial vehicle (UAV) must land on an unmanned ground vehicle (UGV) while the ground vehicle is moving in the environment to execute its own mission. The video demonstrates the execution of the plan in the V-REP simulation environment. The vehicles start far away from each other, until the UAV gets close to the UGV and decides to land. When the vehicles get close, the UAV sends an event to the UGV; the UGV then starts sending its future positions to the UAV and decreases its speed accordingly. The video also shows the evolution of a simplified version of the PNP during the simulation, to better illustrate the behavior of the system. The mission is accomplished when the UAV lands on top of the UGV: this corresponds to the final state (place) of the plan.
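The execution model underlying PNPs is ordinary Petri net token firing: a transition is enabled when all its input places hold a token, and firing it moves tokens to the output places. The minimal sketch below is illustrative only (PNPs add sensing conditions and robot actions on top of this core), and the two-step rendezvous plan is a made-up reduction of the scenario.

```python
# Minimal Petri net sketch (illustrative; PNPs attach conditions and actions
# to this core). A transition fires when every input place holds a token.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p in inputs:                  # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:                 # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1

# UAV/UGV rendezvous reduced to two steps: approach, then land.
net = PetriNet({"uav_flying": 1, "ugv_driving": 1})
net.add_transition("approach", ["uav_flying", "ugv_driving"],
                   ["uav_near", "ugv_slowed"])
net.add_transition("land", ["uav_near", "ugv_slowed"], ["landed"])
net.fire("approach")
net.fire("land")
print(net.marking["landed"])  # 1 — the final place of the plan is marked
```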

Friday, 12:10 – 13:30

Demo: Natural Language Processing for Online Fraud Scenario Extraction

Authors: Bas Testerink, Floris Bex

Abstract: Online intakes of criminal complaints are hampered by the mismatch between the type of stories people tell when filing such a complaint and the sort of crime reports that police would prefer to have. In this demo we present our progress in the project Intelligence Amplification for Cybercrime (IAC), in which we apply AI techniques to allow natural online dialogues about fraud cases. We show the natural language processing and dialogue modules of the system. The dialogue module allows mixed-initiative dialogues between human complainants and software agents for crime intake. An interface is provided that allows the complainant to input free text and form elements, which are then integrated into a structured knowledge graph by the NLP module. This knowledge graph then serves as input for the intake agent, who can use it to reason about the incident that has occurred and formulate follow-up questions to the user.