M05 Automation goes both ways: ML for security and security for ML
This tutorial focuses on state-of-the-art research at the intersection of AI and security. On the one hand, recent advances in Deep Learning (DL) have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored attack surface has opened up, with attacks jeopardizing the integrity of DL models and hindering their ubiquitous deployment across various intelligent applications. On the other hand, DL-based algorithms are also being employed to identify security vulnerabilities in long streams of multi-modal data and logs; in complex distributed settings, this is often the only viable way to monitor and audit the security and robustness of the system.

The tutorial integrates the views of three experts. Prof. Garg explores the emerging landscape of "adversarial ML" with the goal of answering basic questions about the trustworthiness and reliability of modern machine learning systems. Prof. Dmitrienko presents novel usages of federated and distributed learning for risk detection on mobile platforms, with a proof-of-concept realization and evaluation on data from millions of users. Prof. Koushanfar discusses how end-to-end automated frameworks based on algorithm/hardware co-design help with both (1) realizing accelerated, low-overhead shields against DL attacks and (2) enabling low-overhead, real-time intelligent security monitoring.