HePREM: Enabling Predictable GPU Execution on Heterogeneous SoC

Björn Forsberg (1,a), Luca Benini (1,2,b) and Andrea Marongiu (1,2,c)
(1) Swiss Federal Institute of Technology Zürich
(2) University of Bologna
(a) bjoernf@iis.ee.ethz.ch
(b) lbenini@iis.ee.ethz.ch
(c) a.marongiu@iis.ee.ethz.ch

ABSTRACT


Heterogeneous systems‐on‐a‐chip are increasingly embracing shared memory designs, in which a single DRAM is used by both the main CPU and an integrated GPU. This architectural paradigm reduces the overheads associated with data movement and simplifies programmability. However, the deployment of real‐time workloads on such architectures is troublesome, as memory contention significantly increases both the execution time of tasks and the pessimism of worst‐case execution time (WCET) estimates. The Predictable Execution Model (PREM) separates memory and computation phases in real‐time code, then arbitrates memory phases from different tasks such that only one core at a time accesses the DRAM. This paper revisits the original PREM proposal in the context of heterogeneous SoCs, proposing a compiler‐based approach to make GPU code PREM-compliant. Starting from high‐level specifications of computation offloading, suitable program regions are selected and separated into memory and compute phases. Our experimental results show that the proposed technique reduces the sensitivity of GPU kernels to memory interference to near zero, and achieves up to a 20× reduction in the measured WCET.
