Georgia Tech at NDSS 2017

Atlanta   |   Feb. 27, 2017

A new method for location privacy in mobile devices... An attack that lets cyberattackers reliably control more than 91 percent of the Linux kernel stack... And a safeguard that secretly bootstraps memory space to protect against corruption attacks... Researchers, students and faculty from the Georgia Institute of Technology (Georgia Tech) bring these and other cybersecurity advances to the Network and Distributed System Security Symposium (NDSS '17) -- one of the information security community's premier international events.

Georgia Tech is one of three universities worldwide with the most research accepted into the peer-reviewed conference: its five academic papers tie it with the University of Michigan and Purdue University for the largest volume of research at NDSS '17. In all, more than 90 organizations worldwide will bring new work to NDSS '17, including Google, Microsoft Research, Mozilla, and leading universities from Asia, Europe, the Middle East, and North America. The NDSS conference annually examines advances in the state of network and distributed systems security, with a focus on actual system design and implementation. The Symposium -- held Feb. 26-Mar. 1, 2017 in San Diego, Calif. -- is sponsored by The Internet Society.

 

Research by Georgia Tech

 
"Dynamic Differential Location Privacy with Personalized Error Bounds"

Lei Yu, Ling Liu, and Calton Pu (Georgia Tech)

Location privacy continues to attract significant attention, fueled by the rapid growth of location-based services (LBSs) and smart mobile devices. Location obfuscation has been the dominant approach to preserving privacy: it transforms the exact location of a mobile user into a perturbed location before public release. The notion of location privacy has evolved from user-defined location k-anonymity to two statistical quantification-based privacy notions: "geo-indistinguishability" and "expected inference error." In this paper we argue that these are two complementary notions of location privacy. We formally study the two notions and effectively combine them into "PIVE" -- a dynamic differential location privacy framework. Experiments with real-world datasets demonstrate that the PIVE approach guarantees the two privacy notions simultaneously and outperforms existing mechanisms both in adaptive privacy protection in the presence of skewed locations and in computational efficiency.
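Geo-indistinguishability is typically achieved with the planar Laplace mechanism, on which frameworks like PIVE build by choosing the privacy parameter epsilon dynamically from a user's inference-error bound. The following is an illustrative sketch of the basic mechanism only, not PIVE's adaptive logic; it treats coordinates as planar units for simplicity, and the function name and units are assumptions:

```python
import math
import random

def planar_laplace(x, y, eps):
    """Perturb a planar location (x, y) to satisfy eps-geo-indistinguishability.

    The perturbation direction is uniform, and the radial distance follows
    the density p(r) proportional to r * exp(-eps * r), i.e. a Gamma
    distribution with shape 2 and scale 1/eps.
    """
    theta = random.uniform(0.0, 2.0 * math.pi)   # random direction
    r = random.gammavariate(2, 1.0 / eps)        # random distance
    return x + r * math.cos(theta), y + r * math.sin(theta)
```

A smaller epsilon yields stronger privacy but larger expected distortion (mean distance 2/eps); PIVE's contribution is deciding, per query, how small epsilon can be while still meeting the user's expected-inference-error bound.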


"SGX-Shield: Enabling Address Space Layout Randomization for SGX Programs"

in collaboration with KAIST of South Korea
Jaebaek Seo (KAIST), Byoungyoung Lee (formerly Georgia Tech, now Purdue University), Seongmin Kim (KAIST), Ming-Wei Shih (Georgia Tech), Insik Shin (KAIST), Dongsu Han (KAIST), Taesoo Kim (Georgia Tech)

Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX) -- a new trusted execution environment designed to serve security-critical applications on the cloud -- lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory available to SGX programs presents a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized. This paper presents SGX-Shield, a new ASLR scheme that is built to secretly bootstrap the memory space layout with finer-grained randomization. SGX-Shield shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common micro-benchmarks, and 2.25% overhead in running a more realistic workload of an HTTPS server.
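The entropy challenge in point 2 can be quantified: with a fixed load granularity, the number of distinct positions an image can occupy, and hence ASLR's entropy in bits, shrinks with the available memory. A rough back-of-the-envelope sketch (the region sizes and granularity below are illustrative assumptions, not figures from the paper):

```python
import math

def aslr_entropy_bits(region_bytes, image_bytes, granularity):
    """Bits of ASLR entropy when an image can start at any aligned offset
    within the region that still fits the whole image."""
    slots = (region_bytes - image_bytes) // granularity + 1
    return math.log2(slots)

# 1 MiB image, 4 KiB load granularity:
enclave = aslr_entropy_bits(128 * 2**20, 2**20, 4096)   # ~128 MiB of SGX memory
user    = aslr_entropy_bits(2**47, 2**20, 4096)         # 47-bit user address space
```

This is why fine-grained randomization matters for SGX: shrinking the granularity below page size multiplies the slot count, recovering entropy that the small enclave region would otherwise lose.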


Researchers from Georgia Tech and KAIST disclosed to Intel SGX engineers both a vulnerability they discovered and a solution to defend against control-channel attacks. The group met with Intel in Portland, Ore., on February 24, 2017.  L to R: Ming-Wei Shih, Sangho Lee, Assistant Professor Taesoo Kim, Jaebaek Seo (KAIST), and Yeongjin Jang

 

"T-SGX: Eradicating Controlled-Channel Attacks Against Enclave Programs"

Ming-Wei Shih, Sangho Lee, and Taesoo Kim (Georgia Tech), and Marcus Peinado (Microsoft Research)

Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that enables secure execution of a program in an isolated environment, called an enclave. This strong security property allows trustworthy execution of programs in hostile environments, such as a public cloud, without trusting anyone (e.g., a cloud provider) between the enclave and the SGX hardware. However, recent studies have demonstrated that enclave programs are vulnerable to accurate, controlled-channel attacks conducted by a malicious operating system (OS). In this paper, we propose T-SGX, a complete mitigation solution to the controlled-channel attack in terms of compatibility, performance, and ease of use. T-SGX relies on a commodity component of the Intel processor (since Haswell), called Transactional Synchronization Extensions (TSX), which implements a restricted form of hardware transactional memory. We implemented T-SGX as a compiler-level scheme to automatically transform a normal enclave program into a secured enclave program without requiring manual source code modification or annotation. To evaluate the performance of T-SGX, we ported 10 benchmark programs of n-bench to the SGX environment. Our evaluation results look promising. T-SGX is an order of magnitude faster than the state-of-the-art mitigation schemes. On our benchmarks, T-SGX incurs on average 50% performance overhead and less than 30% storage overhead.
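The core idea -- running enclave code inside hardware transactions so that a page fault aborts the transaction before the OS can observe which page faulted -- can be modeled abstractly. This Python sketch is only a conceptual model of the control flow (real T-SGX uses TSX's _xbegin/_xend via a compiler pass, not exceptions), and all names here are invented for illustration:

```python
class TransactionAbort(Exception):
    """Stands in for a TSX abort: any fault inside the transaction rolls
    it back and jumps to the abort handler without OS involvement."""

def run_transactionally(body, on_abort, max_retries=8):
    """Retry a code block until it commits; repeated aborts suggest a
    controlled-channel attack rather than a benign, transient interrupt."""
    for attempt in range(max_retries):
        try:
            return body()            # commit path: no fault observed
        except TransactionAbort:
            on_abort(attempt)        # fault suppressed inside the transaction
    raise RuntimeError("persistent aborts: possible controlled-channel attack")

aborts = []
state = {"faults": 2}                # simulate two transient faults

def enclave_block():
    if state["faults"] > 0:
        state["faults"] -= 1
        raise TransactionAbort
    return "sensitive computation done"

result = run_transactionally(enclave_block, lambda n: aborts.append(n))
```

The key property modeled here is that the fault is handled inside the transaction boundary, so a malicious OS never learns which page access triggered it.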


"Unleashing Use-Before-Initialization Vulnerabilities in the Linux Kernel Using Targeted Stack Spraying"

in collaboration with Saarland University, the Max Planck Institute for Software Systems, and Saarland Informatics Campus
Kangjie Lu (Georgia Tech), Marie-Therese Walter (CISPA, Saarland University & Saarland Informatics Campus), David Pfaff (CISPA, Saarland University & Saarland Informatics Campus), Stefan Nürnberger (DFKI & CISPA, Saarland University & Saarland Informatics Campus), Wenke Lee (Georgia Tech), Michael Backes (CISPA, Saarland University & MPI-SWS & Saarland Informatics Campus)

A common type of memory error in the Linux kernel is using uninitialized variables (or uninitialized use). Uninitialized uses not only cause undefined behaviors but also impose a severe security risk when the uninitialized memory is under attacker control. Previously, it was considered unreliable to exploit uninitialized uses on the kernel stack. Therefore, uninitialized uses are largely overlooked and regarded as undefined behaviors rather than security vulnerabilities. In this paper, we propose a fully automated targeted stack-spraying approach that allows attackers to reliably control more than 91% of the Linux kernel stack. Our targeted stack-spraying includes two techniques: (1) a deterministic stack spraying technique that suitably combines tailored symbolic execution and guided fuzzing to identify kernel inputs that user-mode programs can use to deterministically guide kernel code paths and thereby leave attacker-controlled data on the kernel stack, and (2) an exhaustive memory spraying technique that uses memory occupation and pollution to reliably control a large region of the kernel stack. As a countermeasure, we propose a compiler-based mechanism that initializes potentially unsafe pointer-type fields with almost no performance overhead. Our results show that uninitialized use is a severe attack vector that can be readily exploited with targeted stack-spraying, so future memory-safety techniques should consider it a prevention target, and systems should not use uninitialized memory as a randomness source.
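The exploitation premise can be illustrated with a toy model: a shared "kernel stack" buffer where one syscall's frame leaves attacker-chosen bytes that a later, vulnerable syscall reads as an "uninitialized" pointer. This Python sketch is purely conceptual (the real attack targets C stack frames in the Linux kernel), and all names are invented:

```python
# model the kernel stack as one buffer reused across syscalls
KSTACK = bytearray(64)

def syscall_spray(data: bytes):
    """Deterministic stack spraying: a benign syscall writes
    attacker-controlled data into its stack frame, then returns
    without the kernel clearing the frame."""
    KSTACK[0:len(data)] = data

def syscall_vulnerable():
    """A later syscall whose frame overlaps the sprayed region reads a
    pointer-sized local without initializing it first."""
    return bytes(KSTACK[0:8])        # uninitialized use: picks up leftovers

def syscall_hardened():
    """Sketch of the compiler-based countermeasure: pointer-type locals
    are zero-initialized before any use."""
    frame = bytes(8)                 # forced zero initialization
    return frame

syscall_spray(b"\x41" * 8)           # attacker-chosen "pointer" value
leaked = syscall_vulnerable()        # the attacker now controls the pointer
safe = syscall_hardened()            # the countermeasure yields a NULL pointer
```

The model shows why the paper's countermeasure focuses on pointer-type fields: forcing them to zero turns an attacker-controlled jump into a harmless NULL dereference.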


"Internet-scale Probing of CPS: Inference, Characterization and Orchestration Analysis"

in collaboration with Florida Atlantic University, New York University and NYU Abu Dhabi
Claude Fachkha (New York University Abu Dhabi), Elias Bou-Harb (Florida Atlantic University), Anastasis Keliris (New York University Abu Dhabi), Nasir Memon (New York University), Mustaque Ahamad (Georgia Tech)

Although the security of Cyber-Physical Systems (CPS) has recently been receiving significant attention from the research community, there still exists a substantial lack of a comprehensive, holistic understanding of attackers' malicious strategies, aims, and intentions. This paper uniquely exploits passive monitoring and analysis of a newly deployed network telescope IP address space in a first attempt to build broad notions of real CPS maliciousness. Specifically, we infer, investigate, characterize, and report large-scale probing activities that specifically target more than 20 diverse, heavily employed CPS protocols. Our analysis and evaluations, which draw upon extensive network telescope data observed over a recent one-month period, reveal roughly 33,000 probes towards a wide range of CPS protocols, a lack of interest in UDP-based CPS services, and the prevalence of probes towards the ICCP and Modbus protocols. Additionally, we find that a considerable 74 percent of CPS probes were persistent throughout the entire analyzed period, targeting prominent protocols such as DNP3 and BACnet. Further, we uncover close to 9,000 large-scale, stealthy, previously undocumented orchestrated probing events targeting a number of such CPS protocols. We validate these outcomes through cross-validation against publicly available threat repositories. We conclude that the devised approaches, techniques, and methods provide a solid first step towards better comprehending the objectives and intents behind unsolicited CPS traffic.
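Inference of this kind typically starts by bucketing darknet traffic by well-known CPS service ports and then checking which sources recur across observation windows. A minimal sketch of that idea (the port-to-protocol map and function names are illustrative assumptions, not the paper's methodology):

```python
from collections import Counter

# commonly cited default ports for heavily employed CPS protocols
CPS_PORTS = {102: "ICCP", 502: "Modbus", 20000: "DNP3", 47808: "BACnet"}

def classify_probes(packets):
    """packets: iterable of (src_ip, dst_port) pairs seen by the telescope.
    Returns probe counts per recognized CPS protocol."""
    return Counter(CPS_PORTS[port] for _, port in packets if port in CPS_PORTS)

def persistent_sources(windows):
    """Sources that probed CPS ports in every observation window
    (cf. the persistence analysis behind the 74 percent figure)."""
    per_window = [{src for src, port in w if port in CPS_PORTS} for w in windows]
    return set.intersection(*per_window) if per_window else set()
```

Since a network telescope advertises no services, any packet it receives is unsolicited by definition, which is what lets simple port bucketing like this be read as probing activity rather than legitimate traffic.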


 
About Cybersecurity at Georgia Tech

Cybersecurity at the Georgia Institute of Technology (Georgia Tech) is an interdisciplinary effort, spanning 11 research labs and centers across seven campus units and the Georgia Tech Research Institute, with more than 460 researchers and 200,000 square feet of secured, classified space. The Institute for Information Security & Privacy (IISP) at Georgia Tech serves as a coordinating body for cybersecurity research; as a gateway to faculty, students, and scientists at Georgia Tech; and as a central location for collaboration around six critical research thrusts: policy, consumer-facing privacy, risk, trust, attribution, and cyber-physical systems. By leveraging intellectual capital from across Georgia Tech and its external partners, we address vital solutions for national defense, economic continuity, and individual freedom. In partnership with the IISP, government and industry partners can help move Georgia Tech's cybersecurity research into deployable solutions that close the innovation gap with immediate application in the world. To inquire about licensing existing research or to begin a project, contact: Stephen Moulton, 757.634.6828

 

 

Contact

Tara La Bouff
Marketing Communications Manager
404.769.5408 (mobile)
tara.labouff@iisp.gatech.edu