Photos are available in the DATE 2024 Gallery.

The time zone for all times mentioned at the DATE website is CET – Central European Time (UTC+1). AoE = Anywhere on Earth.

W05 Unlocking Tomorrow's Technology: Hyperdimensional Computing and Vector Symbolic Architectures for Automation and Design in Technology and Systems

Start
Wed, 27 Mar 2024 14:00
End
Wed, 27 Mar 2024 18:00
Room
TBA
Organiser
Antonello Rosato, Sapienza University of Rome, Italy


Short Description: The Workshop at DATE 2024 invites submissions exploring cutting-edge developments in Hyperdimensional Computing (HDC) and Vector Symbolic Architectures (VSA), covering both the theory and the application of these concepts to study, design, and automate new systems that leverage the mathematical properties of high-dimensional spaces. The goal is to provide new insights into how HDC and VSA can serve a variety of practical applications in different fields, and also to enhance our understanding of human perception and cognition. The workshop will be organized in two main parts: invited speakers' keynote presentations and a poster session with open discussion.
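One of the mathematical properties of high-dimensional spaces that HDC and VSA build on is quasi-orthogonality: two independently drawn random hypervectors are almost orthogonal with overwhelming probability. A minimal NumPy sketch (not part of the workshop materials; dimensionality and bipolar encoding chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # hypervector dimensionality

# Two independent random bipolar hypervectors
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

# Their cosine similarity concentrates near 0 (std ~ 1/sqrt(d)),
# so random hypervectors are quasi-orthogonal
cos = (a @ b) / d
print(abs(cos) < 0.05)
```

This concentration effect is what lets HDC represent many items in superposition within a single vector while still being able to tell them apart.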

Format: Hybrid event, with invited speakers and call for poster presentations.

Scope: We seek participants who delve into the practical application of HDC and VSA principles in various domains, with a particular focus on automation, design, and advanced AI functionalities. Poster presentations are encouraged in (but not limited to) the following areas:

  1. Neural Network Models and AI algorithms: Present novel solutions and systems utilizing HDC and VSA in neural network architectures and generalized AI models, emphasizing their application potential.
  2. Practical Applications: Showcase how HDC and VSA concepts have been deployed to solve real-world challenges in fields such as Computer Vision, Wireless Communication, Language Processing, Classification, Time Series Prediction, and Neural Modeling.
  3. System Design: Explore the potential of constructing comprehensive artificial intelligence systems based on HDC/VSA principles and discuss the taxonomy of recent advancements in this area.
  4. Interpretability and Human Interaction: Investigate the new possible insights obtainable via HDC & VSA on explainability and human-machine teaming.
  5. Future Directions and New Perspectives: Reflect on the present and future of HDC and VSA principles, both in theory and in their applications to emerging technologies such as robotics and energy-aware AI.

Talks

Session Start
Wed, 14:00
Session End
Wed, 18:00
Speaker
Evgeny Osipov, Luleå University of Technology, Sweden
Speaker
Abbas Rahimi, IBM Research Zurich, Switzerland
Speaker
Alpha Renner, Forschungszentrum Jülich, Germany
Speaker
Kenny Schlegel, Chemnitz University of Technology, Germany
Presentations

Hyperdimensional Computing: what is hot and what is not

Start
14:00
End
14:30
Speaker
Evgeny Osipov, Luleå University of Technology, Sweden

HDC (Hyperdimensional Computing) is a rapidly evolving field within artificial intelligence that captivates newcomers from their very first exposure. Described as brain-inspired, capable of computing in superposition, and a potential bridge to Artificial General Intelligence (AGI), it sparks immediate enthusiasm among audiences. As an experienced HDC researcher, I feel compelled to review common pitfalls encountered when publishing HDC-empowered research solutions. In my presentation, I will survey the most promising trends in HDC, as well as those that are less intriguing, aiming to guide future research directions effectively.

Reducing computational complexity of perception and reasoning by neuro-vector-symbolic architectures

Start
14:30
End
15:00
Speaker
Abbas Rahimi, IBM Research Zurich, Switzerland

We recently proposed neuro-vector-symbolic architectures (NVSA) in which high-dimensional distributed vectors are generated by neural nets and further processed by VSA-informed machinery at different levels of abstraction. Using NVSA, we set state-of-the-art accuracy records on few-shot continual learning [CVPR 2022] as well as visual abstract reasoning tasks [Nature Machine Intelligence 2023]. This is not where the advantages end: NVSA also reduces the computational complexity associated with both perceptual and reasoning tasks, even on modern CPUs/GPUs. NVSA expanded computation-in-superposition to highly nonlinear transformations in CNNs and Transformers, effectively doubling their throughput at nearly iso-accuracy and computational cost [NeurIPS 2023]. NVSA also made probabilistic abduction tractable by avoiding exhaustive probability computations and brute-force symbolic searches, which led to 244× faster inference compared with the probabilistic reasoning within state-of-the-art approaches [Nature Machine Intelligence 2023]. Finally, NVSA permitted learning-to-reason: instead of hard-coding the rule formulations associated with a reasoning task, NVSA could transparently learn the rule formulations with just one pass through the training data [NeurIPSW Math-AI 2023].

Hyperdimensional Computing for Efficient Neuromorphic Visual Processing

Start
15:00
End
15:30
Speaker
Alpha Renner, Forschungszentrum Jülich, Germany

This talk explores the potential of Hyperdimensional Computing (HDC) as a framework for efficient neuromorphic processing. We introduce Hierarchical Resonator Networks (HRNs), a novel architecture for scene understanding. HRNs leverage HDC to identify objects and their generative factors directly from visual input. The network computes with complex-valued vectors implemented as spike-timing phasor neurons. This enables implementation on low-power neuromorphic hardware like Intel's Loihi.
Additionally, we demonstrate the HRN’s capability in visual odometry, accurately estimating robot self-motion from event-based data. This approach is a step towards robust, low-power computer vision and robotics.

HDC Feature Aggregation for Time Series Data and Beyond

Start
16:30
End
17:00
Speaker
Kenny Schlegel, Chemnitz University of Technology, Germany

This talk gives an overview of applying HDC for feature encoding prior to aggregation. This approach can be useful in several domains, including image processing for spatial features, time series for temporal sequences, and other kinds of distinct features. Typically represented as vectors, features are commonly combined through superposition to create compact representations for tasks like classification. HDC, equipped with operators like superposition and binding, provides a practical way to incorporate more contextual or temporal knowledge into these vector-based representations. For instance, the presentation shows how HDC temporal encoding can enhance time series classification algorithms across different fields, such as automotive or biomedical signals.

Poster Session

Session Start
Wed, 17:00
Session End
Wed, 18:00