IP5_3 Interactive Presentations

Date: Wednesday, 03 February 2021
Time: 10:30 - 11:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/k6wgwYQCKRZDDRDpx

Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in the corresponding regular session.

IP5_3.1 M2H: OPTIMIZING F2FS VIA MULTI-LOG DELAYED WRITING AND MODIFIED SEGMENT CLEANING BASED ON DYNAMICALLY IDENTIFIED HOTNESS
Speaker:
Lihua Yang, Huazhong University of Science and Technology, CN
Authors:
Lihua Yang, Zhipeng Tan, Fang Wang, Shiyun Tu and Jicheng Shao, Huazhong University of Science and Technology, CN
Abstract
With the widespread use of flash memory from mobile devices to large data centers, the flash-friendly file system (F2FS), designed around flash memory characteristics, has become popular. However, F2FS suffers from severe cleaning overhead due to its log-structured writes, and mixed storage of data with different hotness in the file system aggravates segment cleaning. We propose multi-log delayed writing and modified segment cleaning based on dynamically identified hotness (M2H). M2H defines hotness by the file block update distance and uses K-means clustering to identify hotness accurately under dynamic access patterns. Based on this fine-grained hotness, we design multi-log delayed writing and modify the selection and release of the victim segment. A hotness metadata cache is used to reduce the overheads induced by hotness metadata management and clustering calculations. Compared with the existing strategy of F2FS, M2H reduces the number of blocks migrated during segment cleaning by 36.05% to 36.51% and increases cumulative file system bandwidth by 69.52% to 70.43%.
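The core idea of the abstract's hotness identification, clustering per-block update distances so that blocks updated again soon count as hot, can be illustrated with a minimal 1-D K-means sketch. This is not the paper's implementation; the deterministic initialization, the three-way hot/warm/cold split, and all sample numbers are illustrative assumptions.

```python
def kmeans_1d(values, k=3, iters=20):
    """Cluster scalar update distances into k groups (plain 1-D K-means).

    Illustrative sketch only; M2H's actual clustering details are in the paper.
    """
    vals = sorted(values)
    # Deterministic init: spread the initial centers across the sorted range.
    centers = [vals[i * (len(vals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# A smaller update distance means the block is rewritten sooner, i.e. hotter.
distances = [1, 2, 2, 3, 50, 55, 60, 400, 420, 500]  # made-up sample data
hot_c, warm_c, cold_c = kmeans_1d(distances, k=3)

def hotness(d):
    """Label a new update distance by its nearest cluster center."""
    return min(("hot", hot_c), ("warm", warm_c), ("cold", cold_c),
               key=lambda t: abs(d - t[1]))[0]

print(hotness(2), hotness(52), hotness(450))  # hot warm cold
```

Once each block carries a hot/warm/cold label like this, writes with the same label can be grouped into the same log, which is what keeps segments uniformly hot or cold and cheap to clean.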
IP5_3.2 CHARACTERIZING AND OPTIMIZING EDA FLOWS FOR THE CLOUD
Speaker:
Abdelrahman Hosny, Brown University, US
Authors:
Abdelrahman Hosny and Sherief Reda, Brown University, US
Abstract
Cloud computing accelerates design space exploration in logic synthesis, and parameter tuning in physical design. However, deploying EDA jobs on the cloud requires EDA teams to deeply understand the characteristics of their jobs in cloud environments. Unfortunately, there has been little to no public information on these characteristics. Thus, in this paper, we formulate the problem of migrating EDA jobs to the cloud. First, we characterize the performance of four main EDA applications, namely: synthesis, placement, routing and static timing analysis. We show that different EDA jobs require different machine configurations. Second, using observations from our characterization, we propose a novel model based on Graph Convolutional Networks to predict the total runtime of a given application on different machine configurations. Our model achieves a prediction accuracy of 87%. Third, we develop a new formulation for optimizing cloud deployments in order to reduce deployment costs while meeting deadline constraints. We present a pseudo-polynomial optimal solution using a multi-choice knapsack mapping that reduces costs by 35.29%.
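The abstract's deployment optimization is a multi-choice knapsack: each EDA job picks exactly one machine configuration with a cost and a runtime, and total cost is minimized subject to a deadline. A minimal pseudo-polynomial dynamic program over integer runtimes can sketch that formulation; the job/option structure and all numbers below are illustrative assumptions, not the paper's model or data.

```python
def cheapest_deployment(jobs, deadline):
    """Multi-choice knapsack sketch: pick one (cost, runtime) option per job,
    minimizing total cost with total runtime <= deadline.

    jobs: list of option lists, e.g. [[(cost, runtime), ...], ...]
    Runtimes must be non-negative integers; DP is pseudo-polynomial in deadline.
    Returns the minimum total cost, or None if no choice meets the deadline.
    """
    INF = float("inf")
    # dp[t] = min cost of the jobs considered so far using exactly t time units
    dp = [INF] * (deadline + 1)
    dp[0] = 0
    for options in jobs:
        new = [INF] * (deadline + 1)
        for t in range(deadline + 1):
            if dp[t] == INF:
                continue
            for cost, runtime in options:
                if t + runtime <= deadline:
                    new[t + runtime] = min(new[t + runtime], dp[t] + cost)
        dp = new
    best = min(dp)
    return None if best == INF else best

# Hypothetical example: two jobs, each with a fast/expensive and a
# slow/cheap machine configuration (cost, runtime).
jobs = [
    [(10, 2), (4, 5)],  # e.g. synthesis
    [(8, 3), (3, 6)],   # e.g. placement
]
print(cheapest_deployment(jobs, deadline=8))   # 12: (4, 5) + (8, 3)
print(cheapest_deployment(jobs, deadline=11))  # 7:  (4, 5) + (3, 6)
```

Tightening the deadline forces the optimizer onto faster, pricier configurations, which is exactly the cost/deadline trade-off the paper's formulation navigates.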