Program

Tuesday, September 14, 2021

1:00 – 1:10 pm

Dr. Kevin Butler, UF FICS Research
1:10 – 2:00 pm

Confidential Computing: Brave New World

The semiconductor industry is witnessing a nascent security paradigm shift in the rise of Confidential Computing. Driven by the need to protect computations delegated to co-tenanted
machines operated by Cloud Computing services, mainstream instruction set architectures are gradually introducing novel features that can be used to establish protected isolates offering strong integrity and confidentiality guarantees to the code and data contained within. Coupled with a remote attestation protocol, a third party may request the launch of an isolate on an otherwise untrusted machine and know, with a high degree of assurance, that a payload of code and data was indeed loaded into a legitimate isolate with a particular configuration.
I’ll argue that this ability to reliably establish a safe “beach-head” on an untrusted third-party’s machine has far-reaching consequences with applications beyond protecting workloads delegated to Cloud Computing services. In a future world where facilities for Confidential Computing are widely deployed and used, we can imagine a utopia where inadvertent data leakage is a curiosity of a bygone age, with encrypted data moving from isolate to isolate and never resting in plaintext. I’ll report on some recent, ongoing activities within Arm in attempting to realize this vision, focusing in particular on a Confidential Computing project that we have been working on for some time, called Veracruz, now open-sourced and adopted by the Linux Foundation’s Confidential Computing Consortium. Moreover, I’ll highlight some problems and research challenges standing in the way of delivering upon this vision.
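The remote attestation step described above can be sketched as a simple measurement check; the names and quote format below are illustrative placeholders, not Veracruz's actual protocol:

```python
import hashlib
import hmac

# Hypothetical payload the third party asked to be loaded into the isolate.
EXPECTED_PAYLOAD = b"isolate-code-and-data-v1"
EXPECTED_MEASUREMENT = hashlib.sha256(EXPECTED_PAYLOAD).hexdigest()

def verify_quote(reported_measurement: str) -> bool:
    """Accept the isolate only if its reported measurement matches the
    hash of the payload we requested (constant-time comparison)."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

# An honest isolate reports the hash of what it actually loaded.
honest = hashlib.sha256(b"isolate-code-and-data-v1").hexdigest()
tampered = hashlib.sha256(b"isolate-code-and-data-EVIL").hexdigest()
print(verify_quote(honest), verify_quote(tampered))  # -> True False
```

Real attestation additionally signs the measurement with a hardware-rooted key; this sketch shows only the verifier's comparison.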

Dominic Mulligan, Principal Research Engineer, Arm

2:00 – 2:20 pm

Demo 1:

LeopardSeal: Detecting Rogue Base Stations with Acoustic Distance Bounding

Rogue Base Stations (RBSs) allow an adversary to intercept cellular network traffic and metadata. Many believe that evolving standards have made some attacks, specifically eavesdropping on call audio, obsolete; however, we argue that this belief stems largely from a lack of understanding of the realistic threat model in this space. Specifically, standards reflect the expectation that cryptographic keys will be compromised by adversaries, yet none of the academic detection solutions in this space account for this. This paper presents a method for detecting RBSs in this more realistic setting. Our system, which we call LeopardSeal, uses acoustic distance bounding to determine whether the extra wireless hops characteristic of RBSs are present during a call. We implement a proof-of-concept RBS using open-source guides and perform a measurement study across the United States. We demonstrate the ability to detect 100% of attacks (with zero false positives) due to a statistically significant difference in round-trip times between benign and attack call audio (t-test: p < 1.04 × 10^-143). Through this effort, we demonstrate the ability to robustly detect these eavesdropping devices regardless of their possession of network credentials.
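The core statistical test can be sketched with Welch's t-statistic on round-trip-time samples; the millisecond values below are invented for illustration, not LeopardSeal's measurements:

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variance."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (mb - ma) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical audio round-trip times (ms): an RBS relays traffic over an
# extra wireless hop, inflating RTT on attack calls.
benign = [412.0, 418.5, 409.8, 415.2, 411.7, 416.9]
attack = [498.3, 505.1, 492.7, 501.4, 507.9, 495.6]

t = welch_t(benign, attack)
print(f"t = {t:.1f}")  # a large t indicates the RTT distributions differ
```

With real call audio the sample sizes are much larger, which is what drives the paper's extremely small p-value.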

Christian Peeters, UF FICS Research

2:20 – 2:40 pm

Demo 2:

SAUSAGE: Security Analysis of Unix domain Socket usAGE in Android

The Android operating system is currently the most popular mobile operating system in the world. Android is based on Linux and therefore inherits its features including its Inter-Process Communication (IPC) mechanisms. These mechanisms are used by processes to communicate with one another and are extensively used in Android. Although the Android-specific IPC mechanisms have been studied extensively, Unix domain sockets have not been studied as much despite playing a crucial role in the IPC of highly privileged system daemons. In this paper, we propose SAUSAGE, an efficient novel static analysis framework to study the security properties of these sockets. SAUSAGE considers access control policies implemented in the Android security model as well as authentication checks implemented by the daemon binaries. It is a fully static large-scale analysis framework specifically designed to analyze Unix domain socket usage in Android system daemons. We use this framework to analyze 148 Android images across 6 popular smartphone vendors spanning Android versions 7-9. As a result, we uncover multiple access control misconfigurations and insecure authentication checks introduced by vendor customization. Our notable findings include a permission bypass in highly privileged Qualcomm system daemons and an unprotected socket that allows an untrusted app to set the scheduling priority of other processes running on the system.
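One authentication check a Unix-domain-socket daemon can perform, of the kind SAUSAGE analyzes, is querying the kernel for the peer's credentials rather than trusting the client. A minimal Linux-only sketch (SO_PEERCRED; the allow-list policy is invented for illustration):

```python
import os
import socket
import struct

def peer_credentials(sock):
    """Return (pid, uid, gid) of the process on the other end of a
    Unix domain socket, as reported by the kernel (Linux SO_PEERCRED)."""
    data = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize("3i"))
    return struct.unpack("3i", data)

# A socketpair stands in for a daemon accepting a client connection.
daemon_side, client_side = socket.socketpair(socket.AF_UNIX)
pid, uid, gid = peer_credentials(daemon_side)

# A daemon might refuse requests from UIDs outside an allow-list; an
# unprotected socket with no such check is what SAUSAGE flags.
ALLOWED_UIDS = {os.getuid()}  # hypothetical policy
print("authorized" if uid in ALLOWED_UIDS else "rejected")
```

Because both ends here live in one process, the reported credentials are our own; in a real daemon they identify the connecting client.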

Blas Kojusner, UF FICS Research

2:40 – 3:00 pm

Demo 3:

Logic Locking Hardware Obfuscation

Design-hiding techniques are a central piece of academic and industrial efforts to protect electronic circuits from being reverse-engineered. However, these techniques have lacked a principled foundation to guide their design and security evaluation, leading to a long line of broken schemes. In this paper, we begin to lay this missing foundation.
We establish a formal syntax for design-hiding (DH) schemes, a cryptographic primitive that encompasses all known design-stage methods to hide the circuit that is handed to a (potentially adversarial) foundry for fabrication. We give two security notions for this primitive: function recovery (FR) and key recovery (KR). The former captures the ostensible goal of design-hiding methods, preventing reverse engineering of the circuit’s functionality, but most prior work has focused on the latter. We then present the first provably (FR,KR)-secure DH scheme, OneChaff_hd. A side benefit of our security proof is a framework for analyzing a broad class of new DH schemes. We finish by unpacking our main security result to provide parameter-setting guidance.
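The basic idea of a keyed design-hiding (logic-locking) scheme can be shown with a toy XOR-locked circuit; this two-key-bit example is illustrative only, and schemes this simple are exactly the kind broken by the attacks the paper's formal notions are meant to rule out:

```python
def original(a, b, c):
    """The circuit the designer wants to hide: (a AND b) XOR c."""
    return (a & b) ^ c

def locked(a, b, c, k1, k2):
    """Locked netlist sent to the foundry; correct only under the right key."""
    w = (a & b) ^ 1   # internal wire inverted at design time
    w ^= k1           # XOR key gate: k1 = 1 undoes the inversion
    out = w ^ c
    return out ^ k2   # second key gate on the output: k2 = 0 is correct

CORRECT_KEY = (1, 0)
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# With the correct key the locked circuit matches the original everywhere;
# with a wrong key (e.g. all zeros) the output is corrupted.
assert all(locked(*i, *CORRECT_KEY) == original(*i) for i in inputs)
assert any(locked(*i, 0, 0) != original(*i) for i in inputs)
```

Function-recovery security asks whether an adversary holding only the locked netlist can recover `original`; key recovery asks for the key bits themselves.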

Animesh Chhotaray, UF FICS Research

3:00 – 3:10 pm
BREAK
3:10 – 3:30 pm

Demo 4:

Covert Message Passing over Public Internet Platforms Using Model-Based Format-Transforming Encryption

We introduce a new type of format-transforming encryption where the format of ciphertexts is implicitly encoded within a machine-learned generative model. Around this primitive, we build a system for covert messaging over large, public internet platforms (e.g., Twitter). Loosely, our system composes an authenticated encryption scheme with a method for encoding random ciphertext bits into samples from the generative model’s family of seed-indexed token distributions. By fixing a deployment scenario, we are forced to consider system-level and algorithmic solutions to real challenges, such as receiver-side parsing ambiguities and the low information-carrying capacity of actual token distributions, that were elided in prior work. We use GPT-2 as our generative model, so that our system cryptographically transforms plaintext bitstrings into natural-language covertexts suitable for posting to public platforms. We consider adversaries with full view of the internet platform’s content, whose goal is to surface posts that are using our system for covert messaging. We carry out a suite of experiments to provide heuristic evidence of security, and to explore tradeoffs between operational efficiency and detectability.
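The bit-into-token encoding can be illustrated with a toy fixed-width scheme over a keyed token ranking; the real system instead encodes into GPT-2's context-dependent token distributions and must handle variable capacity and parsing ambiguity, none of which this sketch attempts:

```python
import random

# Toy vocabulary standing in for a generative model's output tokens.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "today", "quietly"]
BITS_PER_TOKEN = 3  # 8 tokens => exactly 3 bits per token choice

def encode(bits, key):
    """Map ciphertext bits to tokens via a secret-keyed per-step ranking."""
    rng = random.Random(key)  # shared secret seeds the ranking sequence
    out = []
    for i in range(0, len(bits), BITS_PER_TOKEN):
        ranked = rng.sample(VOCAB, len(VOCAB))   # deterministic per-step order
        chunk = bits[i:i + BITS_PER_TOKEN].ljust(BITS_PER_TOKEN, "0")
        out.append(ranked[int(chunk, 2)])
    return out

def decode(tokens, key):
    """Invert the encoding by regenerating the same keyed rankings."""
    rng = random.Random(key)
    bits = []
    for tok in tokens:
        ranked = rng.sample(VOCAB, len(VOCAB))
        bits.append(format(ranked.index(tok), f"0{BITS_PER_TOKEN}b"))
    return "".join(bits)

ct = "101100111001"  # ciphertext bits from an authenticated-encryption scheme
covertext = encode(ct, key=42)
assert decode(covertext, key=42) == ct
```

Because ciphertext bits are (pseudo)random, the selected tokens look like ordinary samples to anyone without the key; the paper's contribution is making this work against real, unevenly shaped token distributions.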

Luke Bauer, UF FICS Research

3:30 – 3:50 pm

Demo 5:

Simulating Physical Removal Attacks on LiDAR-based Autonomous Driving Systems

Autonomous vehicles rely heavily on LiDAR sensors for depth perception and decision making. Many recent works have demonstrated efficient ways to physically attack these sensors and manipulate the vehicle’s perception and planning. These works generally rely on injecting fake point clouds or modifying the structure of objects. In this work, we study the impact of removal attacks on the perception module of AVs. To do this, we simulate removal attacks of various magnitudes on the KITTI dataset and analyze their effects on LiDAR-based clustering, segmentation, and detection algorithms.
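A removal-attack simulation can be sketched as deleting returns inside an attacker-chosen angular sector; the 2D point format, sector size, and magnitude below are illustrative, not the KITTI pipeline:

```python
import math
import random

random.seed(0)
# Toy bird's-eye-view point cloud: (x, y) with +y the vehicle's forward axis.
cloud = [(random.uniform(-20, 20), random.uniform(0, 40)) for _ in range(1000)]

def remove_sector(points, center_deg, width_deg, magnitude):
    """Drop a `magnitude` fraction of points whose azimuth lies in the sector."""
    out = []
    for x, y in points:
        az = math.degrees(math.atan2(x, y))  # azimuth from the forward axis
        in_sector = abs(az - center_deg) <= width_deg / 2
        if in_sector and random.random() < magnitude:
            continue  # point "removed" by the attack
        out.append((x, y))
    return out

attacked = remove_sector(cloud, center_deg=0, width_deg=30, magnitude=0.9)
print(len(cloud) - len(attacked), "points removed")
```

Feeding `attacked` instead of `cloud` to clustering or detection code then shows how objects in the targeted sector shrink or vanish from the perception output.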

Sri Hrushikesh Varma, UF FICS Research

3:50 – 4:10 pm

Demo 6:

SrFTL: SGX-reinforced Flash Translation Layer for Defending Against Ransomware

Ransomware has become a popular type of malware used by attackers, leading to billions of dollars worth of data and operational losses every year. Many state-of-the-art defenses created to combat ransomware assume a trusted operating system, which leaves them susceptible to compromise by privileged ransomware. Deploying defense schemes inside storage could solve this problem, but the scarcity of semantic information makes it hard to filter out and identify malicious traffic. To address this gap, we develop SrFTL, a fail-secure ransomware defense platform that bridges an enclave (e.g., Intel SGX) and an SSD’s FTL to quickly and accurately flag malicious I/O requests, even in the presence of privileged OS-level ransomware. SrFTL combines semantic filesystem information in SGX with a modified FTL, communicating through a secure channel, to improve detection accuracy over purely FTL-based defenses, all with a stronger threat model than OS-based detection schemes. We achieve an average victim-page classification accuracy (i.e., the percentage of detected ransomware writes) of 86%, with a maximum of 100%. Evaluating SrFTL with a variety of real-world workloads and applications demonstrates that we incur a performance overhead of less than 2% on average. SrFTL bridges the semantic gap between the FTL and OS-level file information to stop ransomware, all while maintaining the integrity and authenticity of classification decisions.
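One low-semantic signal available at the storage layer is byte entropy: encrypted (ransomware) writes are near-uniformly random, while typical file data is not. A sketch of such a check follows; the 7.0-bit threshold and page contents are illustrative, not SrFTL's actual classifier:

```python
import math

def shannon_entropy(buf: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte (0.0 to 8.0)."""
    counts = [0] * 256
    for b in buf:
        counts[b] += 1
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def looks_encrypted(page: bytes, threshold: float = 7.0) -> bool:
    return shannon_entropy(page) > threshold

text_page = b"The quick brown fox jumps over the lazy dog. " * 90
random_page = bytes(range(256)) * 16  # stand-in for ciphertext
print(looks_encrypted(text_page), looks_encrypted(random_page))  # -> False True
```

Entropy alone misfires on legitimately compressed or encrypted files, which is why SrFTL supplements FTL-level signals with filesystem semantics held in the enclave.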

Weidong Zhu, UF FICS Research

Wednesday, September 15, 2021

1:00 – 1:05 pm

Opening Remarks, Dr. Kevin Butler
1:05 – 1:40 pm

Lessons in Legacy: Redesigning the Secure Boot Solution for Smartphones

A secure boot feature is designed to make sure that only firmware which has been authorized can run on a device. It typically does that by ensuring that each trusted firmware image is authenticated by the trusted firmware image that precedes it, thus creating a chain of trust all the way back to an original “root of trust” in immutable hardware. This creates a challenging situation for security experts: the bootloader has to be flexible enough to meet all use cases and simple enough that we can have assurance that it is secure before it is committed to hardware. An organization changes its hardware bootloader at its peril! In this talk, we will explore the reasons why Qualcomm Technologies Inc. changed its hardware bootloader and the lessons that I learnt about dealing with critical legacy software as we did so.
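The chain of trust described above can be sketched as hash-linked stages anchored in an immutable root; the stage names, the `||` framing, and the hash-only check are invented for illustration (real secure boot verifies signatures, not bare hashes):

```python
import hashlib

def measure(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

bl2 = b"secondary bootloader"

# At provisioning time the chain is built back to front: each stage embeds
# the expected measurement of the stage it will launch.
bl1 = b"primary bootloader||" + measure(bl2).encode()
ROM_HASH = measure(bl1)  # burned into immutable hardware: the root of trust

def boot(bl1_img: bytes, bl2_img: bytes) -> str:
    """Each stage runs only if the preceding trusted stage verified it."""
    assert measure(bl1_img) == ROM_HASH, "stage 1 rejected"
    expected_bl2 = bl1_img.split(b"||")[1].decode()
    assert measure(bl2_img) == expected_bl2, "stage 2 rejected"
    return "booted"

print(boot(bl1, bl2))  # -> booted
```

The talk's tension is visible even here: `ROM_HASH` can never change, so every future use case must be expressible through the images it transitively pins.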

Alex Dent, Director of Engineering @ Qualcomm

1:40 – 2:00 pm

Demo 7:

VeriMask: Easy and Automated Decontamination of N95 Masks for Frontline Workers Under Emergency Shortages

The US CDC has recognized moist heat as one of the most effective and accessible methods of decontaminating N95 masks for reuse, in response to the persistent N95 mask shortages caused by the COVID-19 pandemic. However, it is challenging to reliably deploy this technique in healthcare settings during shortages due to a lack of smart technologies capable of ensuring proper decontamination conditions for hundreds of masks simultaneously. To tackle these challenges, we developed VeriMask, an open-source wireless sensor platform that facilitates per-mask verification of the moist-heat decontamination process. VeriMask can monitor hundreds of masks simultaneously in commercially available heating systems and provides throughput-maximization functionality that helps operators in under-resourced facilities optimize their decontamination settings.
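Per-mask verification reduces to checking each mask's sensor trace against the decontamination conditions. A minimal sketch follows; the temperature, humidity, and duration thresholds are illustrative placeholders, not CDC or VeriMask parameters:

```python
# Hypothetical pass criteria: conditions must hold for enough total minutes.
TEMP_MIN_C = 65.0
HUMIDITY_MIN_PCT = 50.0
REQUIRED_MINUTES = 30

def cycle_ok(samples, minutes_per_sample=1):
    """samples: per-mask list of (temp_C, relative_humidity_pct) readings.
    A cycle passes if enough sampled minutes meet both thresholds."""
    in_range = sum(1 for t, h in samples
                   if t >= TEMP_MIN_C and h >= HUMIDITY_MIN_PCT)
    return in_range * minutes_per_sample >= REQUIRED_MINUTES

good = [(66.0, 55.0)] * 35                                  # full cycle
interrupted = [(66.0, 55.0)] * 20 + [(40.0, 30.0)] * 15     # door opened early
print(cycle_ok(good), cycle_ok(interrupted))  # -> True False
```

A production check would also require the qualifying minutes to be consecutive; per-mask sensing matters because masks in different oven positions can see different conditions in the same cycle.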

Dr. Sarah Rampazzi, UF FICS Research

2:00 – 2:20 pm

Demo 8:

FIRMMOD: A Firmware Analysis Framework Based on API Model Guided Symbolic Execution

Embedded systems and IoT security have raised increasing concerns in recent decades. Due to the complexity of chip design and the diversity of microcontroller families, firmware analysis that depends on interacting with the real device and its various peripherals does not scale. We have developed an automated firmware modeling approach and implemented it in a tool called FIRMMOD for effective firmware analysis. FIRMMOD leverages S2E (and QEMU) to re-host firmware and conduct dynamic analysis through a set of plugin interfaces. It reduces path explosion and improves code coverage by up to 4X compared to vanilla S2E by monitoring the impact of API modeling on code coverage. FIRMMOD has found an API misuse vulnerability in the TI CC2640R2 SDK.

Yihang (Ken) Bai, UF FICS Research

2:20 – 2:40 pm

Demo 9:

Characterizing the Stalkerware Monetization Ecosystem Through App Analysis

Vanessa Frost and Cassidy Gibson, UF FICS Research

2:40 – 3:00 pm

Demo 10:

Hear “No Evil”, See “Kenansville”*: Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems

Automatic speech recognition and voice identification systems are being deployed in a wide array of applications, from providing control mechanisms to devices lacking traditional interfaces, to the automatic transcription of conversations and authentication of users. Many of these applications have significant security and privacy considerations. We develop attacks that force mistranscription and misidentification in state-of-the-art systems, with minimal impact on human comprehension. Processing pipelines for modern systems consist of signal preprocessing and feature-extraction steps, whose output is fed to a machine-learned model. Prior work has focused on the models, using white-box knowledge to tailor model-specific attacks. We focus on the pipeline stages before the models, which (unlike the models) are quite similar across systems. As such, our attacks are black-box, transferable, can be tuned to require zero queries to the target, and demonstrably achieve mistranscription and misidentification rates as high as 100% by modifying only a few frames of audio. We perform a study via Amazon Mechanical Turk demonstrating that there is no statistically significant difference between human perception of regular and perturbed audio. Our findings suggest that models may learn aspects of speech that are generally not perceived by human subjects, but that are crucial for model accuracy.
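A toy version of a preprocessing-stage perturbation is discarding low-energy frequency components of an audio frame: the loud components humans mostly hear survive, while the quiet detail a model's feature extractor relies on is altered. The naive DFT, threshold, and signal below are illustrative, not the paper's attack parameters:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def drop_quiet_components(frame, rel_threshold=0.1):
    """Zero every frequency bin below a fraction of the loudest bin."""
    X = dft(frame)
    cutoff = rel_threshold * max(abs(c) for c in X)
    return idft([c if abs(c) >= cutoff else 0 for c in X])

# A loud tone plus a quiet one; the quiet tone is removed by the perturbation.
frame = [math.sin(2 * math.pi * 5 * t / 64)
         + 0.05 * math.sin(2 * math.pi * 13 * t / 64) for t in range(64)]
perturbed = drop_quiet_components(frame)
```

The perturbed frame sounds nearly identical to a listener but no longer matches what the recognizer's acoustic features expect, which is the transferable, black-box property the talk describes.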

3:00 – 3:10 pm
BREAK
3:10 – 3:40 pm

When Worlds Collide: AI Meets Cyber

Artificial intelligence has been around for decades. So has cybersecurity. But how do these two fields combine to change the threat landscape, and how can we address the changes in threats? This talk will discuss how AI impacts cybersecurity, how cybersecurity impacts AI, and why the sky isn’t falling (yet, at least!). A special focus will be on the types of attacks that are specific to AI models, and the different defenses that organizations can apply to protect themselves and their customers. Open research questions and things to think about will be sprinkled throughout the presentation.

Carrie Gates, Senior Vice President & Senior Information Security Manager @ Bank of America

3:40 – 4:00 pm

Demo 11:

Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples

Washington Garcia, UF FICS Research

4:00 – 4:20 pm

Demo 12:

Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction

Generative machine learning models have made convincing voice synthesis a reality. While such tools can be extremely useful in applications where people consent to their voices being cloned (e.g., patients losing the ability to speak, actors not wanting to have to redo dialog, etc.), they also allow for the creation of unconsented content known as deepfakes. This malicious audio is problematic not only because it can convincingly be used to impersonate arbitrary users, but because detecting deepfakes is challenging and generally requires knowledge of the specific deepfake generator. In this paper, we develop a new mechanism for detecting audio deepfakes using techniques from the field of articulatory phonetics. Specifically, we apply fluid dynamics to estimate the arrangement of the human vocal tract during speech, and show that generated deepfakes often model impossible or highly unlikely anatomical arrangements. When parameterized to achieve 97.5% precision, our detection mechanism achieves a recall of 96.2%, correctly identifying all but one deepfake sample in our dataset. We then discuss the limitations of this approach, and how deepfake models fail to reproduce all aspects of speech equally. In so doing, we demonstrate that subtle but biologically constrained aspects of how humans generate speech are not captured by current models, and can therefore act as a powerful tool to detect audio deepfakes.
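"Parameterized to achieve 97.5% precision" refers to picking a detection threshold for a target precision and reporting the resulting recall. A sketch of that operating-point selection follows; the scores and labels are invented for illustration, not the paper's dataset:

```python
def threshold_for_precision(scores, labels, target):
    """Lowest score threshold whose precision meets `target`.
    labels: 1 = deepfake, 0 = organic; higher score = more deepfake-like."""
    for t in sorted(set(scores)):
        pred = [s >= t for s in scores]
        tp = sum(p and l for p, l in zip(pred, labels))
        fp = sum(p and not l for p, l in zip(pred, labels))
        if tp and tp / (tp + fp) >= target:
            return t
    return None

scores = [0.92, 0.88, 0.81, 0.64, 0.12, 0.31, 0.45, 0.70]
labels = [1,    1,    1,    1,    0,    0,    0,    0]

thr = threshold_for_precision(scores, labels, target=1.0)
tp = sum(s >= thr for s, l in zip(scores, labels) if l)
recall = tp / sum(labels)
print(thr, recall)  # raising the precision target lowers recall
```

The same tradeoff appears in the paper: demanding very few false alarms (high precision) costs a small number of missed deepfakes (recall below 100%).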

Logan Blue, UF FICS Research

4:20 – 4:40 pm

Demo 13: