6.6 Intelligent Wearable and Implantable Sensors for Augmented Living


Date: Wednesday, March 27, 2019
Time: 11:00 - 12:30
Location / Room: Room 6

Chair:
Daniela De Venuto, Politecnico di Bari, IT, Contact Daniela De Venuto

Co-Chair:
Theocharis Theocharides, University of Cyprus, CY, Contact Theocharis Theocharides

This session brings together novel technologies that exploit artificial intelligence and data analytics on low-power wearable and implantable sensors for real-time augmented living and assistive healthcare.

Time  Label  Presentation Title
Authors
11:00  6.6.1  LAELAPS: AN ENERGY-EFFICIENT SEIZURE DETECTION ALGORITHM FROM LONG-TERM HUMAN IEEG RECORDINGS WITHOUT FALSE ALARMS
Speaker:
Alessio Burrello, Università di Bologna, IT
Authors:
Alessio Burrello1, Lukas Cavigelli2, Kaspar Schindler3, Luca Benini2 and Abbas Rahimi2
1Department of Information Technology and Electrical Engineering, ETH Zurich, CH; 2Department of Information Technology and Electrical Engineering, ETH Zurich, CH; 3Sleep-Wake-Epilepsy-Center, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, CH
Abstract
We propose Laelaps, an energy-efficient and fast learning algorithm with no false alarms for epileptic seizure detection from long-term intracranial electroencephalography (iEEG) signals. Laelaps uses end-to-end binary operations by exploiting symbolic dynamics and brain-inspired hyperdimensional computing. Laelaps's results surpass those yielded by state-of-the-art (SoA) methods [1], [2], [3], including deep learning, on a new very large dataset containing 116 seizures of 18 drug-resistant epilepsy patients in 2656 hours of recordings, each patient implanted with 24 to 128 iEEG electrodes. Laelaps trains 18 patient-specific models by using only 24 seizures: 12 models are trained with one seizure per patient, the others with two seizures. The trained models detect 79 out of 92 unseen seizures without any false alarms across all the patients, a big step forward in practical seizure detection. Importantly, a simple implementation of Laelaps on the Nvidia Tegra X2 embedded device achieves 1.7x-3.9x faster execution and 1.4x-2.9x lower energy consumption compared to the best result from the SoA methods. Our source code and anonymized iEEG dataset are freely available at http://ieeg-swez.ethz.ch.
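To give a flavour of the binary hyperdimensional-computing classification the abstract refers to, the following is a minimal illustrative sketch, not the authors' implementation: the alphabet size, dimensionality, and symbol streams are all hypothetical. Each symbol maps to a random binary hypervector; a window of symbols is bundled by majority vote, and classification is by Hamming distance to class prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000  # hypervector dimensionality (hypothetical; HD computing uses high-D binary vectors)

# Item memory: one random binary hypervector per symbol (hypothetical 8-symbol alphabet)
item_memory = rng.integers(0, 2, size=(8, D), dtype=np.uint8)

def encode(symbols):
    """Bundle a window of symbols into one binary hypervector by majority vote."""
    stacked = item_memory[np.asarray(symbols)]
    return (stacked.sum(axis=0) * 2 >= len(symbols)).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance between two binary hypervectors."""
    return np.count_nonzero(a != b) / a.size

# Class prototypes bundled from a handful of labelled windows (toy data)
ictal = encode([1, 1, 2, 1, 3])       # "seizure-like" symbol stream
interictal = encode([5, 6, 7, 6, 5])  # "normal" symbol stream

def classify(symbols):
    hv = encode(symbols)
    return "ictal" if hamming(hv, ictal) < hamming(hv, interictal) else "interictal"

print(classify([1, 2, 1, 1, 3]))  # → "ictal": the query is close to the ictal prototype
```

Because everything reduces to binary vector operations and Hamming distances, this style of classifier maps well onto energy-constrained hardware, which is the efficiency argument the abstract makes.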
11:30  6.6.2  AUTOMATIC TIME-FREQUENCY ANALYSIS OF MRPS FOR MIND-CONTROLLED MECHATRONIC DEVICES
Speaker:
Giovanni Mezzina, Politecnico di Bari, IT
Authors:
Daniela De Venuto and Giovanni Mezzina, Politecnico di Bari, IT
Abstract
This paper describes the design, implementation and in vivo testing of a novel Brain Computer Interface (BCI) for the control of mechatronic devices. The method exploits electroencephalogram (EEG) acquisitions, and specifically the Movement Related Potentials (MRPs) (i.e., μ and β rhythms), to actuate the user's intention on the mechatronic device. The EEG data are collected by only five wireless smart electrodes positioned on the central and parietal cortex area. The acquired data are analyzed by an innovative single-trial classification algorithm that, with respect to the current state of the art, strongly reduces the training time (minimum: ~1 h, reached: 10 min), as well as the post-stimulus acquisition time needed for a reliable classification (typical: 4-8 s, reached: 2 s). As a first step, the algorithm performs an EEG time-frequency analysis in the selected bands, making the data suitable for further computations. The implemented machine learning (ML) stage consists of: (i) dimensionality reduction; (ii) statistical inference-based feature extraction (FE); (iii) classification model selection. A dedicated algorithm, MLE-RIDE, is also proposed for the dimensionality reduction; jointly with statistical analyses, it digitizes the μ and β rhythms to perform the feature extraction. Finally, the best support vector machine (SVM) model is selected and used in the on-line classification. As a proof of concept, two mechatronic devices have been brain-controlled by using the proposed BCI algorithm: a three-finger robotic hand and an acrylic prototype car. The experimental results, obtained with data from 3 subjects (aged 26±1), showed an accuracy of 87.4% in the real-time binary wireless detection of the user's intention, with a computation time of 33.7 ms.
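The time-frequency analysis in the μ (roughly 8-13 Hz) and β (roughly 13-30 Hz) bands can be illustrated with a minimal band-power sketch. This is a generic FFT periodogram, not the paper's pipeline; the sampling rate and the synthetic signal are hypothetical.

```python
import numpy as np

FS = 256  # hypothetical sampling rate (Hz)

def band_power(signal, lo, hi, fs=FS):
    """Mean spectral power of `signal` in the [lo, hi] Hz band (FFT periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

t = np.arange(FS) / FS            # 1 s of synthetic "EEG"
eeg = np.sin(2 * np.pi * 10 * t)  # a pure 10 Hz tone standing in for a mu rhythm

mu = band_power(eeg, 8, 13)    # mu band power
beta = band_power(eeg, 13, 30)  # beta band power
print(mu > beta)  # → True: the 10 Hz tone concentrates its power in the mu band
```

Per-band power values like these are the kind of features a downstream dimensionality-reduction and SVM stage would consume.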
12:00  6.6.3  A SELF-LEARNING METHODOLOGY FOR EPILEPTIC SEIZURE DETECTION WITH MINIMALLY-SUPERVISED EDGE LABELING
Speaker:
Damián Pascual, EPFL, ES
Authors:
Damian Pascual1, Amir Aminifar2 and David Atienza3
1EPFL, CH; 2Swiss Federal Institute of Technology Lausanne (EPFL), CH; 3École Polytechnique Fédérale de Lausanne (EPFL), CH
Abstract
Epilepsy is one of the most common neurological disorders and affects over 65 million people worldwide. Despite the continuing advances in anti-epileptic treatments, one third of epilepsy patients live with drug-resistant seizures. Moreover, the mortality rate among epileptic patients is 2-3 times higher than in the matching group of the general population. Wearable devices offer a promising solution for the detection of seizures in real time so as to alert family and caregivers to provide immediate assistance to the patient. However, in order for the detection system to be reliable, a considerable amount of labeled data is needed to train it. Labeling epilepsy data is a costly and time-consuming process that requires manual inspection and annotation of electroencephalogram (EEG) recordings by medical experts. In this paper, we present a self-learning methodology for epileptic seizure detection without medical supervision. We propose a minimally-supervised algorithm for automatic labeling of seizures in order to generate personalized training data. We demonstrate that the median deviation of the labels from the ground truth is only 10.1 seconds or, equivalently, less than 1% of the signal length. Moreover, we show that training a real-time detection algorithm with data labeled by our algorithm produces a degradation of less than 2.5% in comparison to training it with data labeled by medical experts. We evaluated our methodology on a wearable platform and achieved a lifetime of 2.59 days on a single battery charge.
12:30  IP3-8, 1005  ZEROPOWERTOUCH: ZERO-POWER SMART RECEIVER FOR TOUCH COMMUNICATION AND SENSING FOR INTERNET OF THINGS AND WEARABLE APPLICATIONS
Speaker:
Michele Magno, ETH Zurich, CH
Authors:
Philipp Mayer, Raphael Strebel and Michele Magno, ETH Zurich, CH
Abstract
The human body can be used as a transmission medium for electric fields. By applying an electric field with a frequency of tens of megahertz to isolated electrodes on the human body, it is possible to send energy and data. Extra-body and intra-body communication is an interesting alternative way to communicate wirelessly in the new era of wearable devices and the Internet of Things. In fact, this promising form of communication works without the need to design dedicated radio hardware and with a lower power consumption. We designed and implemented a novel zero-power receiver targeting intra-body and extra-body wireless communication and touch sensing. To achieve zero-power, always-on operation, we combined ultra-low power design with an energy-harvesting subsystem, which extracts energy directly from the received message. This energy is then employed to supply the whole receiver to demodulate the message and to perform data processing with digital logic. The proposed design is ideal for waking up external logic only when a specific address is received. Moreover, thanks to the presence of the digital logic, the designed zero-power receiver can implement identification and security algorithms. The zero-power receiver can be used either as an always-on touch sensor to be deployed in the field or as a body-communication wake-up for smart and secure devices. A working prototype demonstrates zero-power operation, intra-body and extra-body communication, and an intra-body range of more than 1.75 m without the use of any external battery.
12:31  IP3-9, 252  TAILORING SVM INFERENCE FOR RESOURCE-EFFICIENT ECG-BASED EPILEPSY MONITORS
Speaker:
Lorenzo Ferretti, Università della Svizzera italiana, CH
Authors:
Lorenzo Ferretti1, Giovanni Ansaloni1, Laura Pozzi1, Amir Aminifar2, David Atienza2, Leila Cammoun3 and Philippe Ryvlin3
1USI Lugano, CH; 2École Polytechnique Fédérale de Lausanne (EPFL), CH; 3Centre Hospitalier Universitaire Vaudois, CH
Abstract
Event detection and classification algorithms are resilient to aggressive resource-aware optimisations. In this paper, we leverage this characteristic in the context of smart health monitoring systems. In more detail, we study the attainable benefits resulting from tailoring Support Vector Machine (SVM) inference engines devoted to the detection of epileptic seizures from ECG-derived features. We conceive and explore multiple optimisations, each effectively reducing resource budgets while minimally impacting classification performance. These strategies can be seamlessly combined, resulting in 12.5X and 16X gains in energy and area, respectively, with a negligible 3.2% loss in classification performance.
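One classic resource-aware optimisation of this kind is quantising the SVM decision function to integer arithmetic, which trades a small amount of precision for much cheaper hardware. The sketch below is a generic illustration under assumed values (the weights, bias, feature count, and scale factor are all hypothetical), not the paper's inference engine.

```python
import numpy as np

# Hypothetical trained linear SVM on 4 ECG-derived features (floating-point model)
w = np.array([0.8, -1.2, 0.3, 2.1])
b = -0.5

SCALE = 1 << 8  # 8 fractional bits: fewer bits means cheaper integer datapaths
w_q = np.round(w * SCALE).astype(np.int32)
b_q = int(round(b * SCALE * SCALE))  # bias scaled twice: features are also scaled

def predict_float(x):
    """Reference floating-point decision function."""
    return 1 if np.dot(w, x) + b > 0 else 0

def predict_fixed(x):
    """Integer-only decision function, as a resource-constrained monitor might run."""
    x_q = np.round(np.asarray(x) * SCALE).astype(np.int32)
    return 1 if int(np.dot(w_q, x_q)) + b_q > 0 else 0

x = [0.5, 0.1, 1.0, 0.2]
print(predict_float(x), predict_fixed(x))  # → 1 1: the quantised engine agrees
```

Because only the sign of the decision value matters, moderate quantisation error rarely flips the predicted class, which is why such engines tolerate aggressive precision reduction.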
12:32  IP3-10, 418  AN INDOOR LOCALIZATION SYSTEM TO DETECT AREAS CAUSING THE FREEZING OF GAIT IN PARKINSONIANS
Speaker:
Graziano Pravadelli, Dept. of Computer Science, Univ. of Verona, IT
Authors:
Florenc Demrozi1, Vladislav Bragoi2, Federico Tramarin3 and Graziano Pravadelli4
1Computer Science Department, University of Verona, IT; 2Department of Computer Science, University of Verona, IT; 3Department of Information Engineering, University of Padua, IT; 4University of Verona, IT
Abstract
People affected by Parkinson's disease are often subject to episodes of Freezing of Gait (FoG) near specific areas within their environment. In order to prevent such episodes, this paper presents a low-cost indoor localization system specifically designed to identify these critical areas. The final aim is to exploit the output of this system within a wearable device to generate a rhythmic stimulus able to prevent FoG when the person enters a risky area. The proposed localization system is based on a classification engine, which uses a fingerprinting phase for the initial training. It is then dynamically adjusted by exploiting a probabilistic graph model of the environment.
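Fingerprinting-based indoor localization of this kind is commonly realised as a nearest-neighbour classifier over signal-strength vectors. The sketch below illustrates that general idea only; the beacon count, RSSI values, and area names are hypothetical, and this is not the paper's engine (which additionally refines its estimate with a probabilistic graph model).

```python
import numpy as np
from collections import Counter

# Hypothetical fingerprint database: RSSI readings (dBm) from 3 beacons, collected
# during the initial fingerprinting phase, each tagged with its area label
fingerprints = [
    ([-40, -70, -80], "kitchen_door"),   # a known FoG-triggering area
    ([-42, -68, -79], "kitchen_door"),
    ([-75, -45, -60], "hallway"),
    ([-78, -43, -62], "hallway"),
]

def locate(rssi, k=3):
    """Classify the current RSSI vector by k-nearest-neighbour majority vote."""
    dists = sorted(
        (np.linalg.norm(np.subtract(rssi, fp)), area) for fp, area in fingerprints
    )
    votes = Counter(area for _, area in dists[:k])
    return votes.most_common(1)[0][0]

print(locate([-41, -69, -81]))  # → "kitchen_door": nearest the kitchen-door fingerprints
```

When the classifier reports entry into a risky area, the wearable device can trigger the rhythmic cue described in the abstract.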
12:33  IP3-11, 879  ASSEMBLY-RELATED CHIP/PACKAGE CO-DESIGN OF HETEROGENEOUS SYSTEMS MANUFACTURED BY MICRO-TRANSFER PRINTING
Speaker:
Tilman Horst, Technische Universität Dresden, DE
Authors:
Robert Fischbach, Tilman Horst and Jens Lienig, Technische Universität Dresden, DE
Abstract
Technologies for heterogeneous integration have been promoted as an option to drive innovation in the semiconductor industry. However, adoption by designers is lagging behind and market shares are still low. Alongside the lack of appropriate design tools, high manufacturing costs are one of the main reasons. Micro-transfer printing (µTP) is a novel and promising micro-assembly technology that enables the heterogeneous integration of dies originating from different wafers. This technology uses an elastomer stamp to transfer dies in parallel from source wafers to their target positions, indicating a high potential for reducing manufacturing time and cost. In order to achieve the latter, the geometrical interdependencies between source, target and stamp and the resulting wafer utilization must be considered during design. We propose an approach to evaluate a given µTP design with regard to its manufacturing costs. We achieve this by developing a model that integrates characteristics of the assembly process into the cost function of the design. Our approach can serve as a template for tackling other assembly-related co-design issues, addressing an increasingly severe cost optimization problem of heterogeneous systems design.
12:30  End of session
Lunch Break in Lunch Area



Coffee Breaks in the Exhibition Area

On all conference days (Tuesday to Thursday), coffee and tea will be served in the exhibition area during the coffee breaks at the times listed below.

Lunch Breaks (Lunch Area)

On all conference days (Tuesday to Thursday), a seated lunch (lunch buffet) will be offered in the "Lunch Area" to fully registered conference delegates only. There will be badge control at the entrance to the lunch break area.

Tuesday, March 26, 2019

  • Coffee Break 10:30 - 11:30
  • Lunch Break 13:00 - 14:30
  • Awards Presentation and Keynote Lecture in "TBD" 13:50 - 14:20
  • Coffee Break 16:00 - 17:00

Wednesday, March 27, 2019

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:30
  • Awards Presentation and Keynote Lecture in "TBD" 13:30 - 14:20
  • Coffee Break 16:00 - 17:00

Thursday, March 28, 2019

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:00
  • Keynote Lecture in "TBD" 13:20 - 13:50
  • Coffee Break 15:30 - 16:00