ECOSCALE - Energy-efficient Heterogeneous COmputing at exaSCALE (Booth: EP 6)

Contact: Iakovos Mavroidis


Telecommunication Systems Research Institute (T.S.I.)
Technical University of Crete Campus - Akrotiri
73100 Chania
Greece

Tel: +30 2810 391402

E-Mail: iakovosmavro@gmail.com
Website: www.ecoscale.eu

To reach exascale performance, current HPC systems must be improved. Simple hardware scaling is not a feasible solution, given rising utility costs and power-consumption limits. Beyond improvements in implementation technology, both the HPC application development flow and the system architecture of future HPC systems need to be refined.

ECOSCALE tackles these challenges by proposing a scalable programming environment and architecture that aim to substantially reduce energy consumption as well as data traffic and latency. ECOSCALE introduces a novel heterogeneous, energy-efficient hierarchical architecture together with a hybrid many-core+OpenCL programming environment and runtime system. The ECOSCALE approach is hierarchical and is expected to scale well by partitioning the physical system into multiple independent Workers (i.e., compute nodes). Workers are interconnected in a tree-like fashion and define a contiguous global address space that can be viewed either as a set of partitions in a Partitioned Global Address Space (PGAS), or as a set of nodes hierarchically interconnected via MPI.

To further increase energy efficiency, as well as to provide resilience, the Workers employ reconfigurable accelerators mapped into the virtual address space utilizing a dual-stage System Memory Management Unit with coherent memory access. The architecture supports shared partitioned reconfigurable resources accessed by any Worker in a PGAS partition, as well as automated hardware synthesis of these resources from an OpenCL-based programming model.