6.2 Special Session: 3D Sensor - Hardware to Application


Date: Wednesday, March 27, 2019
Time: 11:00 - 12:30
Location / Room: Room 2

Saibal Mukhopadhyay, Georgia Institute of Technology, US
Pascal Vivet, CEA-Leti, FR

Fabien Clermidy, CEA-Leti, FR

Pascal Vivet, CEA-Leti, FR

3D integration has emerged as a key enabler for continuing the performance growth of Moore's law. An application where 3D has already shown potential for tremendous benefit is the design of high-throughput and/or energy-efficient sensors. The ability to stack heterogeneous components in a small volume, coupled with the potential for highly parallel access between sensing and processing, has fueled a new generation of sensor platforms. Moreover, the close proximity of processing and sensing has also led to innovations in designing smart systems with built-in intelligence. This session will present four talks illustrating how 3D integration creates a platform for designing innovative sensors for applications ranging from high-performance imaging to ultra-low-power IoT platforms to bio-sensing. The first two talks will focus on the application of 3D integration to high-performance and smart imaging. The first will present a detailed overview of recent advancements in 3D image sensor design, while the second talk will discuss the feasibility of embedding machine-learning-based feedback control within a 3D image sensor to create highly intelligent cameras. The third talk will present the concept of mm-scale sensors built through 3D die stacking for ultra-low-power applications. Finally, the fourth talk will discuss the design of innovative biosensors using fine-grain 3D integration.

Pascal Vivet, CEA-Leti, FR
Pascal Vivet1, Gilles Sicard1, Laurent Millet1, Stephane Chevobbe2, Karim Ben Chehida2, Luis Angel Cubero MonteAlegre1, Maxence Bouvier1, Alexandre Valentian1, Maria Lepecq2, Thomas Dombek2, Olivier Bichler2, Sebastien Thuriès1, Didier Lattard1, Séverine Cheramy1, Perrine Batude1 and Fabien Clermidy1
Image sensors will become more and more pervasive in their environment. In the context of automotive and IoT, low-cost image sensors with high-quality pixels will embed more and more smart functions, such as regular low-level image processing but also object recognition, movement detection, light detection, etc. 3D technology is a key enabling technology for integrating into a single device the pixel layer and the associated acquisition layer, but also the smart computing features and the amount of memory required to process all the acquired data. More computing and memory within 3D smart image sensors will bring new features and reduce the overall system power consumption. Advanced 3D technology with ultra-fine-pitch vertical interconnect density will pave the way towards new architectures for 3D smart image sensors, allowing local vertical communication between pixels and the associated computing and memory structures. The presentation will give an overview of recent 3D technology solutions, such as hybrid bonding and the monolithic 3D CoolCube™ technology, with 3D interconnect pitches on the order of 1 µm and 100 nm, respectively. Recent 3D image sensors will be presented, showing the capability of 3D technology to implement fine-grain pixel acquisition and processing, providing ultra-high-speed image acquisition and tile-based processing. Finally, as a further perspective, multi-layer 3D image sensor architectures based on events and spiking will further reduce power consumption while adding new detection and learning processing capabilities.
Saibal Mukhopadhyay, Georgia Institute of Technology, US
Burhan Ahmad Mudassar, Priyabrata Saha, Yun Long, Muhammad Faisal Amir, Evan Gebhardt, Taesik Na, Jong Hwan Ko, Marilyn Wolf and Saibal Mukhopadhyay, Georgia Institute of Technology, US
Cameras today are designed to capture signals with the highest possible accuracy, to represent as faithfully as possible what they see. However, many mission-critical autonomous applications, ranging from traffic monitoring to disaster recovery to defense, require quality of information, where 'useful information' depends on the task and is defined by complex features rather than only changes in the captured signal. Such applications require cameras that capture 'useful information' from a scene with the highest quality while meeting system constraints such as power, performance, and bandwidth. This talk will discuss the feasibility of a camera that learns how to capture 'task-dependent information' with the highest quality, paving the way to a camera with a brain. The talk will first discuss how 3D integration of digital pixel sensors with a massively parallel computing platform for machine learning creates a hardware architecture for such a camera. Next, the talk will discuss embedded machine learning algorithms that can run on such a platform to enhance the quality of useful information through real-time control of the sensor parameters. The talk will conclude by identifying critical challenges, as well as opportunities for hardware and algorithmic innovations, to enable machine learning in the feedback loop of a camera based on a 3D image sensor.
David Blaauw, University of Michigan, US
Sechang Oh, Minchang Cho, Xiao Wu, Yejoong Kim, Li-Xuan Chuo, Wootaek Lim, Pat Pannuto, Suyoung Bang, Kaiyuan Yang, Hun-Seok Kim, Dennis Sylvester and David Blaauw, University of Michigan, US
The Internet of Things (IoT) is a rapidly evolving application space. One of the fascinating new fields in IoT research is mm-scale sensors, which make up the Internet of Tiny Things (IoT2). With their miniature size, these systems are poised to open up a myriad of new application domains. Enabled by the unique characteristics of cyber-physical systems and recent advances in low-power design and bare-die 3D chip stacking, mm-scale sensors are rapidly becoming a reality. This talk will survey the challenges and solutions in 3D-stacked mm-scale design, highlighting low-power circuit issues ranging from low-power SRAM and miniature neural network accelerators to radio communication protocols and analog interfaces. We will discuss system-level challenges and illustrate several complete systems and their emerging application spaces.
Speaker and Author:
Muhannad Bakir, Georgia Institute of Technology, US
We present a system for recording in vivo electromyography (EMG) signals from songbirds using flexible multi-electrode arrays (MEAs) featuring 3D integrated electronics. Electrodes with various pitches and topologies are evaluated by measuring EMG activity from the expiratory muscle of anesthetized songbirds. Air pressure data is also recorded simultaneously from the air sac of the songbird. Together, EMG recordings and air pressure measurements can be used to characterize how the nervous system controls breathing. Such technologies can in turn provide unique insights into motor control in a range of species, including humans. 3D IC integration enables the form factors and interconnect densities needed in such applications. Focus will be given to the technology and microfabrication advances that enable such systems. We also discuss methods for using fine-grain 3D IC technology in electronic microplate technologies for CMOS biosensor systems with cell assays.
12:30 End of session
Lunch Break in Lunch Area
