Cybersecurity News & Commentary - July 31, 2017

The Source Port is Georgia Tech's monthly cybersecurity newsletter, featuring commentary from its researchers about topics in the news, what wasn't written between the lines, the big (and sometimes nagging) questions driving our research, and new projects underway.

July 31, 2017


Amazon Win Sets Good Precedent for Internet Governance

In a victory for fairness and rule-based Internet governance, an independent review panel (IRP) decided that ICANN was wrong to deny retail giant Amazon, Inc. the top-level domain (TLD) .AMAZON. We believe the IRP made the right decision, and we hope the ICANN board and the GAC will come to their senses and simply allow the TLD to be awarded to Amazon and be done with it... Going forward, we prefer a cleaner solution with better long-term consequences for Internet governance – a solution that does not encourage governments or other objectors to make claims not grounded in any law or to engage in hold-up politics in the future.

Read the full piece by Milton Mueller, professor, School of Public Policy


CableTap: Wirelessly Tapping Your Home Network 

On Saturday, July 29, details of numerous vulnerabilities found in consumer wireless gateways and set-top boxes were revealed at DEF CON 2017 in Las Vegas. What began as a search, by Bastille Networks and Web Sight Io, for a method to derive default keys for wireless routers resulted in 26 common vulnerabilities and exposures (CVEs). Many of them are critical: they include remote root code execution and arbitrary file reads, both exploitable from the Internet, as well as the ability to generate, in advance, lists of hidden Wi-Fi SSIDs and passphrases based solely on a device's geographic region. Many target the Reference Design Kit (RDK) platform, an open-source framework for customer-premises equipment (CPE), such as wireless gateways and set-top boxes. This suggests that many vendors are vulnerable, putting millions of ISP customers at risk.


IISP Analyst Yacin Nadji: "I have always been leery of the combination router/modem devices rented out by ISPs, partly out of frugality (I'll keep my $9/month, thank you very much) and partly out of distrust of their security. While the technical details are interesting, I am more interested in the implications of the findings with respect to software homogeneity and the sensitive patch cycle that no doubt plagues RDK developers.

First, while consolidating the software systems for wireless gateways and set-top boxes has practical benefits (e.g., it eases development, deployment, and the creation of new features), a downside is the sheer size of the vulnerable population when something goes awry, as in the current situation. Consolidation should, in principle, also make patching vulnerabilities and updating devices easier; in practice, however, evidence suggests that doing so will be overwhelming and time-consuming for developers.

Second, the open-source nature of the project has pluses and minuses. More eyes and hands pass over the code, making it easier for security researchers to find additional flaws. Conversely, public diffs give would-be miscreants insight into the vulnerabilities. Worse yet, some vulnerabilities may be described and fixed but never merged into the main source tree, leaving users at risk. This process also requires the vendor to maintain a strict and fast patch cycle: if fixes come quickly but vulnerable users' devices are not updated, attackers are gifted a window of opportunity for nearly free exploitation. This highlights the procedural barriers that often are overlooked when it comes to security, particularly as ISPs enter the device market with more gusto."


Should the Internet Allow for Eavesdropping?

As the Internet Engineering Task Force (IETF) approaches finalization of the new version of Transport Layer Security (TLS), the question of providing mechanisms for passive eavesdropping continues to receive debate. Because TLS provides encryption for the World Wide Web as well as for other data that flows across the internet, it is the most widely deployed security protocol online. Since its initial deployment in 1994, TLS (formerly called SSL) has undergone numerous revisions to fix security weaknesses. One feature of the new version, TLS 1.3, is that all of its configuration options prevent a passive attacker from eavesdropping on encrypted communications. This provision is debated because enterprise networks (such as those deployed by banks) want to be able to decrypt traffic on their own networks to examine it for security alarms.


IISP Analyst Joel Odom: "The debate over whether TLS 1.3 should include a way for servers to allow passive eavesdropping caught my interest, both because it provides an opportunity to better understand some important features of secure protocols and because it is a good example of the tensions between what security means to different actors.
Let's first look at the interesting educational aspects of this debate. Modern security protocols often include a feature called forward secrecy, which is a concept best illustrated by example. Suppose that Alice and Bob have a private connection over the internet. A passive eavesdropper, Eve, should be unable to gather any information about Alice and Bob's conversation, except perhaps the amount of data that they are exchanging. The private connection that Alice and Bob share is typically built using public key cryptography. Public key cryptography means that both Alice and Bob have their own secret keys. As long as their secret keys stay secret, Eve cannot eavesdrop. But what if Eve can record the conversation between Alice and Bob and steal their private keys at some future time? A protocol with forward secrecy provides protection against decryption of recorded conversations, even if private keys are compromised after the conversation. This is possible because each conversation generates some ephemeral key material that is disregarded and unrecoverable after the conversation is over.
Forward secrecy relates to the TLS 1.3 debate because any change that allows eavesdropping must break forward secrecy, whereas one of the strongest guarantees provided by TLS 1.3 is forward secrecy for all conversations. In his writing, Stephen Checkoway, an assistant professor of computer science at the University of Illinois Chicago, does a great job of explaining three viewpoints on this debate, which he calls the forward secret viewpoint, the enterprise viewpoint, and the pragmatic viewpoint. I do not need to repeat his excellent writing on the matter here, but I do want to restate and emphasize one of his motifs. Once it is finalized, TLS 1.3 will probably be a part of the internet for decades to come.  Do we really want to build the most secure version of TLS with intentional weaknesses? Even if we argue that only enterprise users will use that weakness for their particular use case, what unintended consequences may this have? What happens when malicious actors with agendas contrary to security and privacy activate this protocol 'feature' without the knowledge of the endpoints, or by coercion?
TLS 1.3 is built on the principle of the most security for the least complexity. In my opinion, accommodating special use cases by decreasing security and increasing complexity is exactly the wrong way to go about building a world with stronger cybersecurity."
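The forward secrecy property described above can be illustrated with a toy ephemeral Diffie-Hellman exchange. The sketch below is an illustration only, not real cryptography and not the actual TLS 1.3 key schedule: it uses a small Mersenne prime in place of the large groups or elliptic curves used in practice, and omits authentication entirely. The point is that the per-session private values are discarded after use, so a recorded transcript cannot later be decrypted.

```python
import secrets

# Toy finite-field Diffie-Hellman illustrating ephemeral key exchange.
# P is a Mersenne prime (2^127 - 1) chosen only for illustration; real
# deployments use much larger, standardized groups or elliptic curves.
P = 2**127 - 1
G = 5

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one session only."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Alice and Bob each generate one-time keys for this conversation.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Each side derives the same shared secret from the peer's public value.
alice_secret = pow(b_pub, a_priv, P)
bob_secret = pow(a_pub, b_priv, P)
assert alice_secret == bob_secret

# Forward secrecy: when the session ends, the private values are destroyed.
# An eavesdropper who recorded a_pub and b_pub, and who later compromises
# the parties' long-term keys, still cannot recompute the session secret.
del a_priv, b_priv
```

Each new conversation repeats the exchange with fresh randomness, which is why compromising one session (or a long-term signing key) reveals nothing about past sessions.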


Congressional Report Finds Grid Still Is Vulnerable to Cyberattacks

A new Congressionally mandated report from the National Academies of Sciences, Engineering, and Medicine concludes the United States' electric grid is vulnerable to cyberattacks that could potentially cause long-term and widespread blackouts. It called upon the U.S. Department of Energy and the Department of Homeland Security to work with utility operators and other stakeholders to improve cyber and physical security. A well-executed cyberattack could potentially cause extensive damage and result in large-area, prolonged outages that could cost billions of dollars and cause loss of life.


IISP Analyst Chris M. Roberts: "Stuxnet, a sophisticated attack on a nuclear facility in Iran, was first identified in 2010 and is thought to have been in development since 2005. Roughly a decade after Stuxnet began, this new study finds that the power grid in the U.S. still is vulnerable. Just as concerning, there are no response plans in place for long-term outages. While this report is light on specifics, it does recognize some of the less-obvious attack vectors, such as GPS, which is heavily used for time synchronization. Additionally, it points out that a cyberattack could cause physical damage and have long-lasting effects -- a point that many studies overlook.
I'm very pleased to see groups recognizing that cyberattacks on embedded systems can cause extreme physical damage and long-lasting effects. These embedded systems need to be protected. Unfortunately, there doesn't seem to be much incentive in the power industry (public or private) to protect against such attacks. Protecting the power grid will be a long and hard task, since the grid has an extremely large attack surface with many attack vectors. Given the slow progress thus far, we should stop asking whether an attack will happen and start asking when."


No Joy from Compromised Toys

The FBI issued a consumer notice about the security risks of internet-connected toys. The public service announcement (PSA) spelled out the identity theft risks, device vulnerabilities, and safety precautions parents should take before purchasing toys with wireless connections or recording devices. In some cases, particularly in toys without authentication procedures, hackers can co-opt a toy to listen to conversations in the home, or even communicate with the child directly. The PSA also provides consumer protection law references and guidance on smart habits to adopt when these devices are in the home.


IISP Analyst Holly Dragoo: "This message is woefully underrepresented in parenting media, in my opinion. While there certainly are child safety concerns surrounding toys with recording and/or data storage capacity, those risks are smaller than the likelihood that these devices also could be co-opted into a botnet for criminal computing power. The Mirai malware preyed upon unsecured Internet of Things (IoT) devices to disastrous effect in the past year alone, harnessing the power of its botnet to conduct over a dozen DDoS attacks in excess of 100 Gbps. The safety obligations manufacturers should own up to for today's IoT devices are a whole other topic, but the reality stands: as consumers, we have to shift our mindset from 'Oh, that's someone else's problem' to 'It could happen to me' and be proactive."


Windows 10 Controlled Folder Access Protects Against Ransomware

Microsoft Corp. introduced a new security mechanism in Windows 10 that helps to protect user data from malicious programs.  This means that ransomware, which encrypts valuable data in order to hold it hostage, and spyware, which may operate by stealing sensitive files, will have a harder time reaching their target data when this feature is enabled.  The new security feature works by detecting attempts to access data in protected folders and notifying users when an untrusted program attempts unauthorized access.  In this way, users have more control over which applications can access folders that contain valuable data.
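The underlying check can be sketched in a few lines: a list of protected folders, a list of trusted applications, and a rule that denies any other program access to those folders. This is a minimal illustration of the general mechanism only, not Microsoft's implementation (the real feature is enforced inside Windows Defender), and the application and folder names are hypothetical examples.

```python
from pathlib import Path

# Toy sketch of controlled-folder-access logic: modifications inside
# protected folders are blocked unless the requesting program is trusted.
PROTECTED = [Path.home() / "Documents", Path.home() / "Pictures"]
TRUSTED_APPS = {"winword.exe", "explorer.exe"}  # hypothetical allowlist

def access_permitted(app_name: str, target: Path) -> bool:
    """Return True if `app_name` may modify `target`."""
    in_protected = any(
        folder == target or folder in target.parents for folder in PROTECTED
    )
    return (not in_protected) or app_name in TRUSTED_APPS

# An unrecognized program is denied access to a protected folder, while a
# trusted one (and any program working outside those folders) gets through.
assert not access_permitted("ransom.exe", Path.home() / "Documents" / "tax.xlsx")
assert access_permitted("winword.exe", Path.home() / "Documents" / "tax.xlsx")
```

The notable design choice is default-deny inside the protected folders: a ransomware binary the system has never seen is blocked precisely because it is unknown, rather than needing to appear on a blocklist first.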


IISP Analyst Joel Odom: "It used to be that when I thought about access control on a computer system, I thought about protecting my data from unauthorized access by other users.  My assumption was that it was legitimate that all of the programs running in my login session should be able to access all of my files.  Now that I am older and wiser, I realize this assumption was unsafe.  I use my home computer to track sensitive financial data, to maintain a repository of family photos and videos, to perform online banking, and to exchange sensitive e-mail.  I use the same login on the same computer to play computer games, to access websites that may deliver content of dubious origin, and to run productivity applications from third-party sources.  If every application on my PC can access every other application's data, I am an easy target for ransomware, spyware, or simple mistakes that could cause me to lose control of important family and financial records.

"The Windows 10 Controlled Folder Access feature is a way to help me to protect my important data by controlling which applications can access which data.  This is not a new idea.  Traditional enterprise operating systems have isolated programs from each other by running different programs under different user accounts.  For example, a web server application usually runs within its own account so that compromise of the application by an external attacker does not necessarily mean compromise of the entire server.  The same goes for database applications, etc.  This works well in a server environment, where a technical team administers different server functions, but this paradigm does not work as well in a single user environment, where I expect to be able to switch between applications on my desktop with ease.

"The first kind of isolation that we saw between different applications running in the same user session in Windows was User Account Control (UAC), which is the protection mechanism that throws up a special click-through notification when an application attempts to perform some function that requires elevated privileges, such as modifying the operating system.  (The Linux equivalent is the sudo command.)  Windows 10 has introduced further application isolation features -- Controlled Folder Access being the newest of such features.  It's a good idea.

"We should note that the mobile world has designed its products with isolation built-in from the start.  In the iOS model, applications cannot access the data of other applications without explicit permission from the user.  Similarly, applications cannot access privacy-compromising features such as the camera or location data without explicit user permission.  This means that we can install mobile banking applications and puzzle applications on the same mobile device without too much fear that one application will compromise the other.

"Isolation on computer systems has been around for decades, but isolation of applications and application data in personal computing has only begun to mature over the last decade.  This kind of isolation is one way that modern security practices raise the bar to prevent or mitigate the effects of malicious attacks."


Partners in Cybersecurity: Strange Bedfellows

Russian news sources reported that the United States and Russia plan to create a joint cybersecurity working group, but later reports from U.S. media suggested that the talks were ceremonial and not substantive. Participants thus far have consisted mostly of political officials, not necessarily security practitioners, and there are no agreements yet. In response, senior congressional leadership expressed deep concern about the lack of reservation shown in collaborating on cybersecurity matters with the very country allegedly responsible for interfering in U.S. elections through offensive cyber operations.


IISP Analyst Holly Dragoo: "This is ludicrous. While President Trump has since backed off the effort to 'partner' with Russia (to the extent that any Western power can), the fact that the idea is or was on the table ignores substantial analysis (from international and domestic sources) suggesting that Russia is the embodiment of an adversary in cyberspace. Sharing threat information with allies is essential because, when you disclose vulnerabilities in your networks, you can count on your allies not to exploit that information. Sharing the same information with adversaries just invites disaster."



Just $90 Can Take Over .io Domains 

The .io top-level domain was hijacked by a security researcher because one of its delegated name server domains had recently expired. This allowed anyone with $90 to register it and authoritatively resolve domains under the .io zone. Due to the hierarchical nature of DNS, specific domains are designated at the root servers to manage their sub-tree for reasons of scale. Ownership of such a domain allows for partial control of the zone, depending on how the name server is chosen from the possible candidates. The author demonstrates, however, that four of the seven authorities were registrable, which means the majority of requests could be hijacked. Furthermore, tricks can cause the malicious authorities to remain cached much longer, exacerbating the damage. As of the article's posting, the problem had been remediated.
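The "majority of requests" claim follows directly from how resolvers pick among a zone's listed authorities. As a rough model (real resolvers weight servers by measured latency, but selection is still spread across the full set), assume a resolver with no history picks uniformly at random; the simulation below, with made-up server labels, shows that controlling 4 of 7 authorities captures roughly 4/7 of resolutions.

```python
import random

# Toy model: a resolver with no preference queries one of the zone's seven
# delegated authorities at random. Four of them are attacker-controlled
# (the registrable, expired domains in the .io incident).
AUTHORITIES = ["ns%d" % i for i in range(7)]
HIJACKED = set(AUTHORITIES[:4])

def resolve_once(rng):
    """One resolution: True if it lands on an attacker-controlled server."""
    return rng.choice(AUTHORITIES) in HIJACKED

rng = random.Random(0)
trials = 100_000
hijack_rate = sum(resolve_once(rng) for _ in range(trials)) / trials
print(f"{hijack_rate:.1%} of resolutions hit an attacker-controlled server")
# Expected rate is 4/7, i.e. about 57% of all .io lookups under this model.
```

Caching then compounds the problem: a poisoned answer from one of those servers can be served to many clients for the lifetime of its TTL.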


IISP Analyst Yacin Nadji: "An expired domain retains its residual trust, which in this case included the trust of administering the entire .io zone! Because the DNS is so fundamental to how the internet functions, an expiration can have dire consequences. Ironically, the DNS's resilience would cover up these expirations by instead relying on non-expired name servers to handle the resolutions. This vulnerability was hidden unless you knew precisely where to look.

For companies and individuals that wish to protect themselves, there are some options. First, simply knowing that a domain expiration can cause extreme problems helps. Know what domain names you own and when they expire. If you do not know all of the domains you own, online repositories of WHOIS information or passive DNS information may allow you to search for additional domains that you or your organization own; however, these services often are not free.

Second, techniques exist to identify expired domains or domains that have recently undergone an ownership change. A past paper by Georgia Tech describes a technique for identifying such domain names, and relies on actively collected (read: free for research) data sources. Our internal list of ownership changes has been interesting for research purposes, and we would be interested in exploring uses outside of academia. Feel free to contact me if you or your organization could benefit from such data."
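Nadji's first recommendation, tracking what you own and when it expires, amounts to a small inventory check. A minimal sketch, assuming a hand-maintained mapping of domains to expiry dates (the domains and dates below are made-up examples; in practice the dates would come from your registrar or WHOIS records):

```python
from datetime import date, timedelta

# Hypothetical inventory of owned domains and their registration expiry
# dates, as recorded from the registrar.
INVENTORY = {
    "example.com": date(2017, 8, 15),
    "example.net": date(2018, 3, 1),
}

def expiring_soon(inventory, today, window_days=60):
    """Return domains whose registration lapses within `window_days`."""
    cutoff = today + timedelta(days=window_days)
    return sorted(d for d, exp in inventory.items() if exp <= cutoff)

print(expiring_soon(INVENTORY, today=date(2017, 7, 31)))
# → ['example.com']
```

Run on a schedule, a check like this turns a silent lapse (and the residual trust that goes with it) into an alert well before the domain becomes registrable by someone else.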


Nasdaq's Fire Alarm

Nasdaq inadvertently fed a test data stream to third-party providers (Bloomberg and Google included) in early July that led them to show wildly inaccurate prices for a variety of companies, with displayed prices jumping or falling dramatically in only a few moments. Fortunately, the market had closed early ahead of the July 4th holiday, so trades based on the erroneous data could not be executed through the exchange, though the immediate effect on dark-pool trading was less clear.


IISP Analyst Stone Tillotson: "Nasdaq's susceptibility to this kind of mistake has put the U.S. financial system's vulnerability into the spotlight. The proliferation of robo-trading has placed large parts of the stock market on autopilot. This incident recalls the 2010 Flash Crash, a trillion-dollar, 36-minute catastrophe caused by a single trader living with his parents. Attackers, state-sponsored or otherwise, could create far more damage by similarly planting false stock quotes. Computer trading algorithms would pick up and mutually amplify bad information feeds with their trades, so false quotes may only need to veer marginally from their real values. The opening salvo of an economic attack could be over in a matter of minutes -- chaos rippling through the financial system. Fortunately, interest in combating high-frequency trading has provided a variety of tools that could curtail the feasibility of this kind of attack, but it remains to be seen whether either the markets or regulatory bodies have the will to implement them."
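One of the simplest defensive tools in this family is a plausibility filter in front of the trading logic: reject any quote that deviates too far from the last trusted price before an algorithm can act on it. The sketch below is an illustrative assumption, loosely modeled on exchange-style price bands, not any specific market rule; the threshold and prices are made up.

```python
# Plausibility filter for an incoming quote feed: drop ticks that moved
# implausibly far from the last trusted price before trading logic sees
# them. The 10% threshold is an illustrative choice, not a market standard.
def plausible_quote(last_price: float, new_price: float,
                    max_move: float = 0.10) -> bool:
    """Accept a quote only if it moved less than `max_move` from the last
    trusted price."""
    if last_price <= 0:
        return False
    return abs(new_price - last_price) / last_price < max_move

# A feed that instantly repriced a $50 stock to $123 (the kind of jump the
# Nasdaq test data produced) is rejected; an ordinary 4% move passes.
assert not plausible_quote(50.0, 123.0)
assert plausible_quote(50.0, 52.0)
```

A filter like this would not stop subtle manipulation, which is precisely Tillotson's point about quotes that veer only marginally from real values, but it blunts the gross, fast-moving variety of attack.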