Georgia Tech at USENIX Security 2017

Atlanta   |   Aug. 16, 2017

The Georgia Institute of Technology (Georgia Tech) presented more discoveries than any other organization worldwide when information security researchers convened at the 26th USENIX Security Symposium (USENIX '17) in Vancouver. The highly competitive conference -- with an acceptance rate of just 16.3 percent for peer-reviewed research -- featured work from universities across Asia, Europe, the Middle East and North America, as well as research by scientists at industry giants Google, Microsoft, Samsung, SAP and others.

More than 520 research papers were submitted for consideration. Just 85 projects from 113 organizations were accepted. Georgia Tech presented seven discoveries in the areas of systems security, hardware security, malware and binary analysis, and understanding recent attacks.

In one project presented at USENIX, researchers from Georgia Tech and Rutgers University unveiled a three-layer system that can be used to determine if an additive manufacturing process has been maliciously compromised.

“These 3-D printed components will be going into people, aircraft and critical infrastructure systems,” said Raheem Beyah, the Motorola Foundation Professor and associate chair in Georgia Tech’s School of Electrical and Computer Engineering. “Malicious software installed in the printer or control computer could compromise the production process. We need to make sure that these components are produced to specification and not affected by malicious actors or unscrupulous producers.” (More below)

USENIX is organized by the Advanced Computing Systems Association. It was founded in 1975 under the name "Unix Users Group" to study and develop Unix and similar systems. The USENIX Association today is a nonprofit professional organization that makes research and conference proceedings freely available under an open-access policy. https://www.usenix.org/conference/usenixsecurity17

 

Research by Georgia Tech

 

"See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Patterns Detection in Additive Manufacturing"

in collaboration with Rutgers University
Christian Bayens (Georgia Tech); Tuan Le and Luis Garcia (Rutgers University); Raheem Beyah (Georgia Tech); Mehdi Javanmard and Saman Zonouz (Rutgers University)

Additive manufacturing is an increasingly integral part of industrial manufacturing. Safety-critical products, such as medical prostheses and parts for the aerospace and automotive industries, are being printed by additive manufacturing methods with no standard means of verification. In this paper, we develop a scheme of verification and intrusion detection that is independent of the printer firmware and controller PC. The scheme incorporates analyses of the acoustic signature of a manufacturing process, real-time tracking of machine components, and post-production materials analysis. Not only will these methods allow the end user to verify the accuracy of printed models, but they will also save material costs by verifying the prints in real time and stopping the process in the event of a discrepancy. We evaluate our methods using three different types of 3D printers and one CNC machine and find them to be 100% accurate when detecting erroneous prints in real time. We also present a use case in which an erroneous print of a tibial knee prosthesis is identified.
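The acoustic layer of the scheme can be caricatured in a few lines of code: record a golden run, reduce it to a per-window energy profile, and halt the job as soon as a window drifts too far from the reference. Everything below (the window size, the RMS profile, the 20% tolerance) is an illustrative assumption, not the authors' actual pipeline.

```python
# Toy sketch of acoustic-signature verification for a print job.
# Thresholds and signal shapes are invented for illustration.
import math

def rms(window):
    """Root-mean-square energy of one audio window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def signature(samples, window_size=4):
    """Reduce a raw sample stream to a per-window energy profile."""
    return [rms(samples[i:i + window_size])
            for i in range(0, len(samples) - window_size + 1, window_size)]

def verify_print(reference, observed, tolerance=0.2):
    """Compare the observed energy profile against a golden reference.

    Returns (ok, first_bad_window) so a monitor can halt the job at the
    first deviating window, saving material on a compromised print.
    """
    ref_sig, obs_sig = signature(reference), signature(observed)
    for i, (r, o) in enumerate(zip(ref_sig, obs_sig)):
        if abs(r - o) > tolerance * max(r, 1e-9):
            return False, i
    return True, None
```

Stopping at the first bad window mirrors the paper's real-time goal: a discrepancy aborts the process instead of being discovered after the part is finished.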

 

"Efficient Protection of Path-Sensitive Control Security"

Ren Ding and Chenxiong Qian (Georgia Tech); Chengyu Song (UC Riverside); Bill Harris, Taesoo Kim, and Wenke Lee (Georgia Tech)

Control-Flow Integrity (CFI), as a means to prevent control-flow hijacking attacks, enforces that each instruction transfers control to an address in a set of valid targets. The security guarantee of CFI thus depends on the definition of valid targets, which conventionally are defined as the result of a static analysis. Unfortunately, previous research has demonstrated that such a definition, and thus any implementation that enforces it, still allows practical control-flow attacks.

In this work, we present a path-sensitive variation of CFI that utilizes runtime path-sensitive points-to analysis to compute the legitimate control transfer targets. We have designed and implemented a runtime environment, PITTYPAT, that enforces path-sensitive CFI efficiently by combining commodity, low-overhead hardware monitoring and a novel runtime points-to analysis. Our formal analysis and empirical evaluation demonstrate that, compared to CFI based on static analysis, PITTYPAT ensures that applications satisfy stronger security guarantees, with acceptable overhead for security-critical contexts.
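The gap between static and path-sensitive CFI can be illustrated with a toy program. A function pointer is assigned along two paths; a static analysis must permit both targets at the indirect call, while a runtime, path-aware monitor narrows the valid set to exactly one. This is a conceptual sketch, not the PITTYPAT implementation.

```python
# Toy contrast between static CFI and path-sensitive CFI.

def safe_action():
    return "safe"

def admin_action():
    return "admin"

# Static analysis: either function may flow to the call site,
# so static CFI must allow both.
STATIC_TARGETS = {safe_action, admin_action}

def path_sensitive_targets(is_admin):
    """Runtime points-to: along this path, only one target is legal."""
    return {admin_action} if is_admin else {safe_action}

def indirect_call(fp, is_admin):
    assert fp in STATIC_TARGETS               # what static CFI enforces
    if fp not in path_sensitive_targets(is_admin):
        raise RuntimeError("control-flow hijack detected")
    return fp()
```

A hijack that redirects the pointer to `admin_action` on a non-admin path passes the static check but trips the path-sensitive one, which is the attack class the paper targets.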


"PlatPal: Detecting Malicious Documents with Platform Diversity"

Meng Xu and Taesoo Kim (Georgia Tech)

Due to the continued exploitation of Adobe Reader, malicious document (maldoc) detection has become a pressing problem. Although many solutions have been proposed, recent works have highlighted some common drawbacks, such as parser-confusion and classifier-evasion attacks.

In response to this, we propose a new perspective for maldoc detection: platform diversity. In particular, we identify eight factors in OS design and implementation that could cause behavioral divergences under attack, ranging from syscall semantics (more obvious) to heap object metadata structure (more subtle) and further show how they can thwart attackers from finding bugs, exploiting bugs, or performing malicious activities.

We further prototype PLATPAL to systematically harvest platform diversity. PLATPAL hooks into Adobe Reader to trace internal PDF processing and also uses sandboxed execution to capture a maldoc’s impact on the host system. Execution traces on different platforms are compared, and maldoc detection is based on the observation that a benign document behaves the same across platforms, while a maldoc behaves differently during exploitation. Evaluations show that PLATPAL raises no false alarms in benign samples, detects a variety of behavioral discrepancies in malicious samples, and is a scalable and practical solution.
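PlatPal's decision rule reduces to a small comparison once traces are collected: run the document everywhere, then flag it if the traces diverge. The trace entries below are invented placeholders, not PlatPal's actual trace format.

```python
# Minimal sketch of PlatPal's cross-platform decision rule.

def classify(traces_by_platform):
    """A benign document behaves identically on every platform; a
    maldoc whose exploit depends on platform internals produces
    divergent traces during exploitation."""
    unique = {tuple(trace) for trace in traces_by_platform.values()}
    return "benign" if len(unique) == 1 else "suspicious"
```

The strength of the rule is that it needs no signature of the attack itself, only agreement or disagreement between platforms.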


"Hacking in Darkness: Return-oriented Programming against Secure Enclaves"

in collaboration with KAIST and Microsoft Research
Jaehyuk Lee and Jinsoo Jang (KAIST); Yeongjin Jang (Georgia Tech); Nohyun Kwak, Yeseul Choi, and Changho Choi (KAIST); Taesoo Kim (Georgia Tech); Marcus Peinado (Microsoft Research); Brent Byunghoon Kang (KAIST)

Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that is widely seen as a promising solution to traditional security threats. While SGX promises strong protection to bug-free software, decades of experience show that we have to expect vulnerabilities in any non-trivial application. In a traditional environment, such vulnerabilities often allow attackers to take complete control of vulnerable systems. Efforts to evaluate the security of SGX have focused on side-channels. So far, neither a practical attack against a vulnerability in enclave code nor a proof-of-concept attack scenario has been demonstrated. Thus, a fundamental question remains: What are the consequences and dangers of having a memory corruption vulnerability in enclave code?

To answer this question, we comprehensively analyze exploitation techniques against vulnerabilities inside enclaves. We demonstrate a practical exploitation technique, called Dark-ROP, which can completely disarm the security guarantees of SGX. Dark-ROP exploits a memory corruption vulnerability in the enclave software through return-oriented programming (ROP). However, Dark-ROP differs significantly from traditional ROP attacks because the target code runs under solid hardware protection. We overcome SGX-specific properties and obstacles by formulating a novel ROP attack scheme against SGX under practical assumptions. Specifically, we build several oracles that inform the attacker about the status of enclave execution. This enables the attacker to launch the ROP attack while both code and data are hidden. In addition, we exfiltrate the enclave’s code and data into a shadow application to fully control the execution environment. This shadow application emulates the enclave under the complete control of the attacker, using the enclave (through ROP calls) only to perform SGX operations such as reading the enclave’s SGX crypto keys.
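The oracle idea can be caricatured in a few lines: the attacker cannot read the protected code, but each probe of a candidate address yields an observable outcome class, and that alone is enough to map out useful gadgets blindly. The addresses and outcome labels below are invented for illustration; nothing here interacts with real SGX.

```python
# Toy model of oracle-guided, "blind" gadget discovery.
HIDDEN_GADGETS = {0x40: "pop_regs", 0x77: "enclu_leaf"}  # unknown to attacker

def probe(addr):
    """Oracle: the attacker observes only the outcome class of a probe
    (a crash, or some distinctive behavior), never the code itself."""
    return HIDDEN_GADGETS.get(addr, "crash")

def scan(address_space):
    """Classify every candidate address by its observable outcome."""
    found = {}
    for addr in address_space:
        outcome = probe(addr)
        if outcome != "crash":
            found[outcome] = addr
    return found
```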

The consequences of Dark-ROP are alarming; the attacker can completely breach the enclave’s memory protections and trick the SGX hardware into disclosing the enclave’s encryption keys and producing measurement reports that defeat remote attestation. This result strongly suggests that SGX research should focus more on traditional security mitigations rather than on making enclave development more convenient by expanding the trusted computing base and the attack surface (e.g., Graphene, Haven).


"Inferring Fine-grained Control Flow Inside SGX Enclaves with Branch Shadowing"

in collaboration with Microsoft Research
Sangho Lee, Ming-Wei Shih, Prasun Gera, Taesoo Kim, and Hyesoon Kim (Georgia Tech); Marcus Peinado (Microsoft Research)

Intel has introduced a hardware-based trusted execution environment, Intel Software Guard Extensions (SGX), that provides a secure, isolated execution environment, or enclave, for a user program without trusting any underlying software (e.g., an operating system) or firmware. Researchers have demonstrated that SGX is vulnerable to a page-fault-based attack. However, the attack only reveals page-level memory accesses within an enclave.

In this paper, we explore a new, yet critical, side-channel attack, branch shadowing, that reveals fine-grained control flows (branch granularity) in an enclave. The root cause of this attack is that SGX does not clear branch history when switching from enclave to non-enclave mode, leaving fine-grained traces for the outside world to observe, which gives rise to a branch-prediction side channel. However, exploiting this channel in practice is challenging because 1) measuring branch execution time is too noisy for distinguishing fine-grained control-flow changes and 2) pausing an enclave right after it has executed the code block we target requires sophisticated control. To overcome these challenges, we develop two novel exploitation techniques: 1) a last branch record (LBR)-based history-inferring technique and 2) an advanced programmable interrupt controller (APIC)-based technique to control the execution of an enclave in a fine-grained manner. An evaluation against RSA shows that our attack infers each private key bit with 99.8% accuracy. Finally, we thoroughly study the feasibility of hardware-based solutions (i.e., branch history flushing) and propose a software-based approach that mitigates the attack.
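Why branch outcomes leak RSA key bits is easy to see in code: textbook square-and-multiply takes its "multiply" branch exactly when the current exponent bit is 1, so an observer who learns taken/not-taken outcomes (as branch shadowing does via the LBR) recovers the exponent. The sketch below simulates that channel in pure Python; it does not touch real SGX or the LBR.

```python
# Square-and-multiply modular exponentiation with its branch
# outcomes recorded, modeling what a branch-history side channel sees.

def modexp_with_branch_trace(base, exp, mod):
    trace, result = [], 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod        # square every iteration
        taken = (bit == "1")
        trace.append(taken)                     # branch outcome leaks
        if taken:
            result = (result * base) % mod      # multiply only on 1-bits
    return result, trace

def recover_exponent(trace):
    """An attacker observing only branch outcomes rebuilds the exponent."""
    return int("".join("1" if t else "0" for t in trace), 2)
```

Constant-time exponentiation removes the key-dependent branch entirely, which is one reason the software mitigation the paper proposes focuses on control flow rather than timing noise.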


"Seeing Through The Same Lens: Introspecting Guest Address Space At Native Speed"

in collaboration with Singapore Management University and Shanghai Jiao Tong University
Siqi Zhao and Xuhua Ding (Singapore Management University); Wen Xu (Georgia Tech); Dawu Gu (Shanghai Jiao Tong University)

Software-based MMU emulation lies at the heart of out-of-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also in a memory view inconsistent with the guest's; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speedup, performing several hundred times faster than the legacy method. Hence, this design is especially useful for real-time monitoring, incident response and high-intensity introspection.
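The cost being eliminated is concrete: for every guest pointer it follows, a traditional introspector must walk the guest's page tables in software. The sketch below shows such a walk for a simplified two-level, 4 KiB-page layout; the field widths are illustrative, and real x86 paging has more levels and permission bits. The ImEE's contribution is making the hardware do this, consistently with the guest, at native speed.

```python
# Simplified software page-table walk: two levels, 4 KiB pages.
PAGE = 0x1000

def walk(guest_mem, cr3, vaddr):
    """Translate a guest virtual address through the guest's own
    page tables, as a software MMU emulator must do per access."""
    l1_index = (vaddr >> 22) & 0x3FF           # top 10 bits
    l2_index = (vaddr >> 12) & 0x3FF           # middle 10 bits
    l2_table = guest_mem[cr3 + l1_index]       # first-level entry
    frame = guest_mem[l2_table + l2_index]     # second-level entry
    return frame * PAGE + (vaddr & 0xFFF)      # frame base + page offset
```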


"Understanding the Mirai Botnet"

in collaboration with Akamai, Cloudflare, Google, Merit Network, Inc., University of Illinois, Urbana-Champaign, and University of Michigan, Ann Arbor
Manos Antonakakis (Georgia Tech); Tim April (Akamai); Michael Bailey (University of Illinois, Urbana-Champaign); Matt Bernhard (University of Michigan, Ann Arbor); Elie Bursztein (Google); Jaime Cochran (Cloudflare); Zakir Durumeric and J. Alex Halderman (University of Michigan, Ann Arbor); Luca Invernizzi (Google); Michalis Kallitsis (Merit Network, Inc.); Deepak Kumar (University of Illinois, Urbana-Champaign); Chaz Lever (Georgia Tech); Zane Ma (University of Illinois, Urbana-Champaign); Joshua Mason (University of Illinois, Urbana-Champaign); Damian Menscher (Google); Chad Seaman (Akamai); Nick Sullivan (Cloudflare); Kurt Thomas (Google); Yi Zhou (University of Illinois, Urbana-Champaign)

The Mirai botnet, composed primarily of embedded and IoT devices, took the Internet by storm in late 2016 when it overwhelmed several high-profile targets with massive distributed denial-of-service (DDoS) attacks. In this paper, we provide a seven-month retrospective analysis of Mirai’s growth to a peak of 600k infections and a history of its DDoS victims. By combining a variety of measurement perspectives, we analyze how the botnet emerged, what classes of devices were affected, and how Mirai variants evolved and competed for vulnerable hosts. Our measurements serve as a lens into the fragile ecosystem of IoT devices. We argue that Mirai may represent a sea change in the evolutionary development of botnets: the simplicity with which devices were infected and the botnet's precipitous growth demonstrate that novice malicious techniques can compromise enough low-end devices to threaten even some of the best-defended targets. To address this risk, we recommend technical and nontechnical interventions, as well as propose future research directions.
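One measurement detail behind the retrospective translates directly into code: Mirai's stateless scanner stamps each probe's TCP sequence number with the destination IPv4 address, giving darknet operators a crisp, one-comparison fingerprint for attributing scans. A minimal check of that rule (the function name is ours):

```python
# Mirai scan fingerprint: the bot's stateless TCP scanner sets the
# SYN's sequence number to the probe's destination IPv4 address.
import ipaddress

def looks_like_mirai_probe(dst_ip, tcp_seq):
    """True when a packet matches the Mirai scanner fingerprint."""
    return int(ipaddress.IPv4Address(dst_ip)) == tcp_seq
```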

 


 
About Cybersecurity at Georgia Tech


Download the 2017-18 Fact Sheet
The Institute for Information Security & Privacy (IISP) at Georgia Tech serves as a coordinating body for cybersecurity research, a gateway to faculty, students, and scientists at Georgia Tech, and a central location for collaboration around six critical research thrusts: Policy, Consumer-facing privacy, Risk, Trust, Attribution and Cyber-physical systems. By leveraging intellectual capital from across Georgia Tech and its external partners, we develop vital solutions for national defense, economic continuity, and individual freedom. Working in partnership with the IISP, government and industry partners can help move Georgia Tech's cybersecurity research into deployable solutions that close the innovation gap with immediate application in the world. For research or partnership opportunities, contact: Gloria Griessman

 

 

Contact

Tara La Bouff
Marketing Communications Manager
404.769.5408 (mobile)
tara.labouff@iisp.gatech.edu