The Source Port is Georgia Tech's monthly cybersecurity newsletter, featuring commentary on topics in the news, what's written between the lines, the big (and sometimes nagging) questions driving our research, and new projects underway.
November 7, 2016
Donating More Than Blood
Australia experienced what could be its largest personal data breach to date, as nearly 1.3 million records from a Red Cross blood donor database were inadvertently disclosed in late October. Data in the breach went beyond typical contact details to include blood type and answers to intensely personal lifestyle questions (used for donor screening purposes). The information was publicly available from 5 September until its discovery on 25 October by a Good Samaritan, who reached out to noted cybersecurity expert Troy Hunt, who in turn notified the Australian CERT for appropriate response measures.
- IT News (Australia): http://www.itnews.com.au/news/australias-biggest-data-breach-sees-13m-records-leaked-440305
- Troy Hunt: https://www.troyhunt.com/the-red-cross-blood-service-australias-largest-ever-leak-of-personal-data/
- ABC News (Australia): http://www.abc.net.au/news/2016-10-28/red-cross-blood-service-admits-to-data-breach/7974036
IISP Analyst Holly Dragoo: "A lot of the cybersecurity industry is based on raising awareness (or fear-mongering, to some) about best practices, insider threats, and breach/disaster recovery. What is often glossed over, however, is how costly negligence can be; it is, in fact, an overlooked form of “insider threat.” No actual hacking occurred, Red Cross networks were reasonably secure, and there was no malicious intent behind the act. Regardless, the effects, official response, and public relations fallout are the same as if there had been a hostile exfiltration of data. Also of particular importance for businesses: many cyber insurance policies won’t cover events caused by negligence.
While Red Cross Australia was not to blame for this breach, the third-party hosting service it contracted with is. Learn your business partners' security policies and how they are safeguarding their networks! One of the blood donor database backups was published to a web server that had directory browsing enabled (which is how it was discovered). Backup data, as Troy Hunt points out, should never be connected to a public-facing site – let alone touch the web at all. Fortunately, this “human error” was discovered by benevolent types (who should be applauded for contacting the appropriate authorities with discretion and the least amount of surprise to the victim), but who knows who may have silently discovered and profited from it during the window it was available."
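The root cause here – a backup exposed through a web server with directory browsing enabled – is easy to screen for. Below is a minimal sketch, using only the Python standard library, of a heuristic that flags the auto-generated index pages common servers emit when directory listing is on. The marker strings cover typical Apache/nginx/IIS output and are an illustrative assumption, not an exhaustive list.

```python
import urllib.request

# Telltale fragments that Apache, nginx, and IIS emit on auto-generated
# directory index pages (an illustrative sample, not an exhaustive list).
LISTING_MARKERS = ("Index of /", "Parent Directory", "[To Parent Directory]")

def looks_like_directory_listing(html: str) -> bool:
    """Return True if a page body resembles a server-generated index."""
    return any(marker in html for marker in LISTING_MARKERS)

def url_exposes_listing(url: str, timeout: float = 10.0) -> bool:
    """Fetch a URL (e.g. a /backups/ path on your own site) and screen it."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read(65536).decode("utf-8", errors="replace")
    return looks_like_directory_listing(body)
```

Running a check like this against your own (and your vendors') public paths is a cheap way to catch the exact misconfiguration that exposed this database.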
Numerous Devices Vulnerable to Physics-based Attack
Computer scientists think of computers in abstract terms: a well-defined machine manipulates bits of information in well-defined ways in order to calculate well-defined results. The physical properties governing the computer are hidden behind abstraction. A team of researchers recently demonstrated how a Rowhammer attack on Android devices can be coupled with a series of clever techniques to allow an attacker to assume full control of a device. The attack uses the physical properties of DRAM to flip a bit that allows the attacking process to gain control of its own credentials, hence allowing the process to give itself root privileges. Researchers demonstrated the attack on a handful of different Android devices from different vendors.
IISP Analyst Joel Odom: "Much of computer security is about cheating: attackers break the rules of the game in order to gain an unscrupulous win. Attacks on the underlying physical properties of a computer are a great example of how attackers can cheat; monitoring a computer’s physical emissions to learn information about cryptographic keys is another. The Drammer research also shows how a clever attacker can evolve a theoretical attack into a practical one. We should remember that, although this attack was demonstrated on Android devices, other computing devices that use DRAM (practically all of them) may be susceptible to similar attacks."
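The privilege-escalation step – a single flipped bit turning security-critical state permissive – can be illustrated with a toy model. The bit layout below is invented for this sketch; it is not the real ARM or x86 page-table-entry format, and a real Rowhammer flip requires carefully targeted DRAM access patterns, not Python.

```python
# Toy model of why one flipped bit matters: a page-table-entry-like word
# whose bits encode permissions. The layout is invented for illustration
# and does not match any real architecture's PTE format.
PRESENT  = 1 << 0  # mapping is valid
WRITABLE = 1 << 1  # page may be written
USER     = 1 << 2  # accessible from user mode

def flags(pte: int) -> set:
    """Return the set of permission names encoded in the toy PTE."""
    names = {PRESENT: "present", WRITABLE: "writable", USER: "user"}
    return {name for bit, name in names.items() if pte & bit}

read_only_pte = PRESENT | USER            # mapped read-only for user code
hammered_pte  = read_only_pte ^ WRITABLE  # one Rowhammer-style bit flip

# The flip silently grants write access the OS never intended -- which is
# the foothold Drammer parlays into control of the process's credentials.
```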
Mirai DDoS Proves Earliest Tricks Still Thrive
A large distributed denial of service (DDoS) attack temporarily disabled some of DNS provider Dyn's customers, including Twitter, Spotify, Github, and many others during the latter half of October. Though it used standard DDoS techniques, the attack was notable for being launched by the Mirai botnet, which is composed of Internet of Things (IoT) devices. Numerous groups, including the U.S. Department of Homeland Security, are investigating the threat and attempting to attribute the attacks.
- TechCrunch: https://techcrunch.com/2016/10/21/many-sites-including-twitter-and-spotify-suffering-outage/
- KrebsOnSecurity.com: https://krebsonsecurity.com/2016/10/source-code-for-iot-botnet-mirai-released/
- TechTarget.com: http://searchsecurity.techtarget.com/news/450401962/Details-emerging-on-Dyn-DNS-DDoS-attack-Mirai-IoT-botnet
IISP Analyst Yacin Nadji: "DDoS attacks have been getting more publicity these days, and other security experts predict that more powerful attacks are yet to come. Although novel in relying on IoT devices, the Mirai botnet compromised hosts using one of the oldest tricks in the book: a dictionary attack with lists of known default credentials. This highlights a big fear involving IoT devices, namely, that their development processes do not follow established security practices. The attack against DNS infrastructure is also interesting because, as more devices rely on Internet infrastructure to function—like security cameras and wireless door locks—disabling fundamental infrastructure like the DNS can lock users out if appropriate failover mechanisms are not in place."
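The weakness Mirai exploited can be illustrated defensively: screening device logins against a known-default list, the same class of list the botnet iterated over. The credential pairs below are a small illustrative sample, not Mirai's actual hard-coded list (which reportedly contained around 60 entries).

```python
# A small illustrative sample of factory-default credentials of the kind
# Mirai's scanner iterated over (not the botnet's actual list).
KNOWN_DEFAULTS = {
    ("root", "root"),
    ("root", "admin"),
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Flag a login that would fall to a default-credential sweep."""
    return (username, password) in KNOWN_DEFAULTS

def audit(devices):
    """Given (host, user, password) records, return the at-risk subset."""
    return [d for d in devices if uses_default_credentials(d[1], d[2])]
```

An inventory audit like this is trivial for a defender to run, which is exactly what makes the success of Mirai's ancient technique so damning.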
Dynamic IPs Reveal Your Identity
The Court of Justice of the European Union (CJEU) ruled in a German case that dynamic internet protocol (IP) addresses may now be considered personal data, even if the website itself can’t identify the user behind the IP address. An IP address can be fixed (static) or changeable (dynamic), and allows a user’s computer to communicate with a web server. Essentially, the precedent states that since users access the web through internet service providers (ISPs), the data held by these companies, together with a user’s IP address, can reveal a user’s identity.
- National Law Review: http://www.natlawreview.com/article/court-justice-european-union-confirms-dynamic-ip-addresses-to-be-personal-data
- ArsTechnica (UK): http://arstechnica.co.uk/tech-policy/2016/10/eu-dynamic-static-ip-personal-data/
- The Irish Times: http://www.irishtimes.com/business/technology/european-court-of-justice-rules-ip-addresses-are-personal-data-1.2835704
IISP Analyst Holly Dragoo: "This is important for several reasons, including the upcoming implementation of GDPR – the General Data Protection Regulation in the EU in 2018. The GDPR states that companies, websites, retailers, etc., who have clients based in the EU will be responsible for protecting their personal data or face specific penalties. A very expensive preparation effort is currently under way across the globe to ready infrastructure and equipment for this new standard. So adding to the list of data that must be protected is now a – frequently changeable – IP address? Is this really necessary?
The argument appears to be (as I understand it) that ISP data in conjunction with a user’s IP address can tell you who is accessing a website. Of course it can. A license plate in conjunction with the DMV database can identify the driver or owner of a car (most likely) as well. A license plate is public information, even though you might take precautions to protect the information. Same with your address and property tax records; you certainly don’t want to advertise your address to the world, but you must concede that it is public information. I would argue that a MAC address serves the same function. These data points rarely change, and can be reasonably associated with an individual.
Dynamic IPs, however, are also public…but they are different every time a user connects to the internet. Oh, by the way, they can easily be spoofed as well. There’s an entirely valid argument over whether or not they should be considered personal at all.
The critical aspect here comes in the linking of this data with other facts (tax record database, ISP, DMV database, etc.), which, in basic terms, is simply analysis. Analysis like any of these examples does encroach on privacy rights and has a limited number of uses. There is a law enforcement and national security exemption that provides for warranted data searching and ISP compliance. Which leaves the question: who is going to be hacking into an ISP for client data? Cybercriminals – people who, by definition, disobey laws."
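The linking described above can be made concrete with a toy join: a web server log knows only an IP and a timestamp, while an ISP's lease records map that IP, during a time window, to a subscriber. Both datasets below are fabricated for illustration; neither alone names the visitor, but joined on IP and time they do, which is the court's point.

```python
from datetime import datetime

# Fabricated example data: a web server hit (IP + timestamp only) and an
# ISP's dynamic-IP lease table (IP + validity window + subscriber).
web_hits = [
    ("203.0.113.7", datetime(2016, 10, 12, 14, 30)),
]
isp_leases = [
    # (ip, lease_start, lease_end, subscriber)
    ("203.0.113.7",
     datetime(2016, 10, 12, 9, 0),
     datetime(2016, 10, 12, 21, 0),
     "subscriber-1042"),
]

def identify_visitors(hits, leases):
    """Join site hits to ISP lease records by IP and time window."""
    matches = []
    for ip, ts in hits:
        for lease_ip, start, end, subscriber in leases:
            if ip == lease_ip and start <= ts <= end:
                matches.append((ip, ts, subscriber))
    return matches
```

The join only works for whoever holds both tables, which is why the ruling turns on what the ISP can combine, not on what the website alone can see.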
Are Computer Users Giving Up on Security?
A qualitative study conducted by the National Institute of Standards and Technology (NIST) suggests that "security fatigue" is causing computer users to give up on security. Researchers found that this weariness leads to feelings of resignation and loss of control, which in turn can lead to avoiding decisions, choosing the easiest option among alternatives, making decisions driven by immediate motivations, behaving impulsively, and failing to follow security rules.
- NIST: https://www.nist.gov/news-events/news/2016/10/security-fatigue-can-cause-computer-users-feel-hopeless-and-act-recklessly
IISP Analyst Joel Odom: "The information security community understands that people are the weak link in the security chain, but this story caught my attention because of the way that it frames the problem.
Our brains use a combination of punishment and reward to teach us how to behave. When we do a good job, our brains release a bit of dopamine to reward the behavior. When we do a poor job we feel let down. This mechanism helps us to learn good behaviors, even in routine choices that we face from minute to minute.
Good behavior in computer security does not align with our natural reward system. We don’t feel good at the end of a day if we go home with no security problems for that day. To practice computer security we have to engage our cognitive minds to remember complex passwords, to vet applications before we run them, and to maintain secure settings on our computers. In a sense, practicing computer security actually prevents us from experiencing the reward of a job well done by getting in the way of doing our job. It’s this constant cognitive aspect of security that causes the fatigue described in the NIST study.
Bruce Schneier wrote an in-depth essay on the psychology of security that basically boils down to the point that humans are irrational when it comes to making security decisions in modern life. He concludes the essay by stating, “We make the best security trade-offs… when our feeling of security matches the reality of security.”
I suggest two ways to reduce security fatigue by making security natural for humans. First, remove security decisions from the user by making security transparent. Users shouldn't have to worry that plugging a USB device into their computer will compromise their banking information, nor should users have to decide whether a downloaded game is trustworthy or not. Our computers should be designed so that it doesn't matter. Second, where users must make security decisions, the human-computer interaction should allow the user to intuitively feel the right thing to do and should reward the user for good behavior. Bad behavior should be difficult. These ideas are not my own: modern mobile operating systems and web browsers are already doing much better at making computer security natural for humans."
Signal Messaging Subpoena Reveals Little
Open Whisper Systems (OWS)—makers of the popular, encrypted messaging app Signal—released redacted transcripts of court documents related to a subpoena for user information, including a gag order. OWS stores very little information about its users. OWS complied with the order, but the only available information was account creation and most recent login time stamps. The ACLU argues that the gag order, and the government's unwillingness to fight to maintain it, shows a default preference for secrecy without need.
- Threat Post: https://threatpost.com/subpoena-for-signal-messaging-data-renders-little/121081/
- Whisper Systems (redacted response to subpoena): https://whispersystems.org/bigbrother/documents/2016-10-04-eastern-virginia-subpoena-response.pdf
- U.S. House of Representatives Committee on Oversight & Government Reform (YouTube post by Steve Gibson): https://www.youtube.com/watch?v=zk78_zmH4QI
- U.S. House of Representatives Committee on Oversight & Government Reform (official page): https://oversight.house.gov/subcommittee/information-technology/
IISP Analyst Yacin Nadji: "This was particularly interesting due to the amount of information OWS was able to share with the public. Not everyone can count on ACLU support, but this case could encourage others to fight needless gag orders. It shows yet another battle in the war between encryption advocates, and law enforcement and government agencies. With tech companies taking the privacy of user information more seriously, expect to see more of these fights..."
Self-Driving Pandora's Box
Keen Security Lab (formerly known as KEEN Team), a security research group based in China, demonstrated an attack on Tesla Model S cars that allows near-complete remote control. Full details have yet to emerge, as Keen is coordinating with Tesla as part of its responsible disclosure policy. What we know is that Tesla's on-board web browser was compromised while connected to a malicious Wi-Fi access point. From that point, the exploit was able to take control of the vehicle's CAN (Controller Area Network) bus, essentially allowing control of anything electronic in the car. In a video posted to its website, Keen demonstrated remote control of windshield wipers, turn signals, seat adjustments, brakes, and more. Tesla responded with an over-the-air update to patch the issues.
- Keen Security Lab Blog: http://keenlab.tencent.com/en/
- Forbes: http://www.forbes.com/sites/thomasbrewster/2016/09/20/keen-team-remotely-hack-tesla-cars/
- Fortune: http://fortune.com/2016/09/20/tesla-security-bug-hack/
- Engadget: https://www.engadget.com/2016/09/20/tesla-model-s-remote-hack-keen-security/
IISP Analyst Stone Tillotson: "As the Internet-of-Things (IoT) continues to grow, we are going to see an increase in these kinds of exploits. The browser-based entry vector is an important problem, but more noteworthy is the compromise of the CAN bus running under the hood. CAN provides no built-in security mechanisms and relies on individual components to implement their own. With predictable results, most components have little to no security, making them readily controllable after the initial penetration. With the ongoing push toward IoT, systems that effectively relied on physical isolation for security are being exposed to attacks they were not designed to withstand. The Tesla compromise is only the latest in a series of similar assaults, which started unwinding with SCADA attacks last decade. More and more, exploits like this expose the public to real, physical danger. In the past, a compromised network could make your day take a left turn, not your car. In this seemingly inevitable march toward a fully networked world, it's only a matter of time until we see the headline, 'Coffee Machine Hack; House-fires Throughout U.S.'"
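The point about CAN's missing security is visible in the frame format itself. The sketch below packs a classic CAN frame using the Linux SocketCAN `struct can_frame` wire layout (32-bit arbitration ID, 8-bit DLC, padding, 8 data bytes); the ID and payload values are invented for illustration. Note what the format has no room for: any sender-identity or authentication field.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in the Linux SocketCAN wire layout:
    32-bit arbitration ID, 8-bit data length code, 3 pad bytes, 8 data bytes.

    Nothing in the frame identifies or authenticates the sender -- any node
    on the bus may transmit any arbitration ID, which is why one compromised
    component (here, the infotainment browser) can command the rest of the
    vehicle unless individual ECUs add their own protections.
    """
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical frame: arbitration ID 0x2F0 with a 2-byte payload.
frame = pack_can_frame(0x2F0, b"\x01\x02")
```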
Of Text Messages and Password Circumvention
Sergei Skorobogatov, a security researcher at the University of Cambridge, recently demonstrated a proof-of-concept attack on the Apple iPhone 5c's passcode protection. If further developed, the attack would allow an attacker to brute-force a passcode-protected iPhone 5-generation phone without risking the memory-erasure feature. Using only £75 ($100) in parts, Skorobogatov accomplished what the FBI paid an undisclosed security company $1.3 million for in early 2016: assistance unlocking a phone belonging to one of the San Bernardino terrorists. At the time, FBI Director James Comey was pressuring Apple to develop a password circumvention approach, a demand Apple strongly opposed. Skorobogatov's technique – popping out the main memory chip and resetting it after failed passcode attempts – represents a generalized attack requiring minimal time and resources and could be applied to a wide range of devices; however, it is likely to become less effective as cell phone chipsets begin to ship with embedded anti-circumvention technology.
- BBC: http://www.bbc.com/news/technology-37407047
- Light Blue Touchpaper.org: https://www.lightbluetouchpaper.org/2016/09/15/hacking-the-iphone-pin-retry-counter/
- CNN Money: http://money.cnn.com/2016/04/07/technology/fbi-iphone-hack-san-bernardino/?iid=EL
IISP Analyst Stone Tillotson: “Skorobogatov's feat demonstrates that there are still wide swaths of the security landscape left unexplored, with more fields continually opening up as new generations of devices appear. There is still plenty of space for the interested amateur or underfunded lab. His technique, one debated publicly at the time, wasn't entirely new; many an electronics engineer has performed, and will perform, similar tasks as part of their education or professional work. This analyst can't wait to see the day when the local science project store has an aisle labeled, ‘Robotics, Electronics, Security Hacking.’”
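Why the chip-resetting trick defeats the retry limit can be shown with a toy simulation: if the retry counter lives in flash that an attacker can clone and restore, the auto-erase limit never binds. Everything below is an invented model for illustration, not the real iPhone firmware behavior or Skorobogatov's actual hardware procedure.

```python
class ToyPhone:
    """Toy model of a device whose retry counter lives in the same flash
    an attacker can clone and restore -- the weakness NAND mirroring
    exploits. Invented for illustration; not real iPhone behavior."""
    MAX_TRIES = 10

    def __init__(self, pin: str):
        self._pin = pin
        self.nand = {"tries": 0, "wiped": False}  # state stored "on chip"

    def attempt(self, guess: str) -> bool:
        if self.nand["wiped"]:
            return False
        self.nand["tries"] += 1
        if guess == self._pin:
            return True
        if self.nand["tries"] >= self.MAX_TRIES:
            self.nand["wiped"] = True  # auto-erase triggered
        return False

def brute_force(phone: ToyPhone) -> str:
    """Try every 4-digit PIN, re-seating a cloned chip image before the
    retry counter can ever reach the auto-erase threshold."""
    snapshot = dict(phone.nand)              # clone the chip once, up front
    for guess in ("%04d" % n for n in range(10000)):
        if phone.nand["tries"] >= phone.MAX_TRIES - 1:
            phone.nand = dict(snapshot)      # restore the fresh clone
        if phone.attempt(guess):
            return guess
    return ""
```

Without the restore step, ten wrong guesses would wipe the device; with it, the full 10,000-PIN keyspace is searchable, which is the whole economic point of the attack.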