November 1, 2017 | By Wenke Lee
Co-Director, Institute for Information Security & Privacy
We’re now one year into allegations of “fake news,” which rose to prominence during last year’s Presidential election. Instead of arguing about which nation, social media service, or political party is to blame, it is time to recognize that the algorithms behind “personalization” and “customization” preferences on the Internet can be, and are being, hijacked. This matter extends beyond content creation; it is about real cybersecurity vulnerabilities in the mechanics behind today’s Internet services and apps.
“Fake news” attacks are possible because of an inherent security weakness in Internet information services such as Google and Facebook: their underlying algorithms rely on inputs from the Internet, an open environment where attackers can alter or falsify content and behavior data. For example, these algorithms use machine learning to build a user’s profile from the user’s browsing history in order to deliver “personalized” content to that user. As our 2013 study at Georgia Tech showed, one potential attack is to pollute a user’s profile by planting malware on the user’s computer and having the malware “browse” the Internet; with the polluted profile, the user is then shown the Internet content the attacker intended. Similarly, information services also consider the preferences of a user population, but attackers can fabricate large-scale user behavior and hence generate fake preferences. For example, one can hire Mechanical Turk workers in a geographic area to search Google for an organization and click only on URLs that contain manufactured, positive comments about it. Google Search can then be misled into believing that users in that area only want to see positive information about the organization in question.
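The profile-pollution attack described above can be illustrated with a toy model. The sketch below is hypothetical and greatly simplified (no real service uses anything this crude); it assumes a profile is just a count of topics inferred from visited pages, and shows how attacker-injected “browsing” shifts which content gets ranked first:

```python
# Toy sketch of profile pollution (hypothetical model, not any real
# service's actual algorithm).
from collections import Counter

def build_profile(history):
    """Infer an interest profile as topic frequencies from (url, topic) visits."""
    return Counter(topic for _url, topic in history)

def rank_content(profile, candidates):
    """Rank candidate articles by how strongly their topic matches the profile."""
    return sorted(candidates, key=lambda c: profile.get(c["topic"], 0), reverse=True)

# Genuine browsing history: (url, inferred topic) pairs.
real_history = [
    ("news.example/econ1", "economics"),
    ("news.example/econ2", "economics"),
    ("news.example/sci1", "science"),
]

candidates = [
    {"title": "Market update", "topic": "economics"},
    {"title": "Lab breakthrough", "topic": "science"},
    {"title": "Planted story", "topic": "conspiracy"},
]

# Clean profile: the planted story ranks last.
clean = rank_content(build_profile(real_history), candidates)

# Malware on the user's machine silently "browses" attacker-chosen pages,
# flooding the profile with the attacker's topic.
fake_visits = [("attacker.example/p%d" % i, "conspiracy") for i in range(50)]
polluted = rank_content(build_profile(real_history + fake_visits), candidates)
```

After pollution, the planted topic dominates the profile, so the attacker-chosen story rises to the top of the personalized ranking even though the user never genuinely expressed interest in it.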
Information manipulation has now become a critical cybersecurity problem that all sectors of society – technologists, startups, media companies, lawmakers, and citizens – must accept as reality, remain alert to, and work swiftly to correct.
For further reading
- Fast Company: https://www.fastcompany.com/40489793/senators-grill-facebook-twitter-google-on-fake-news-your-power-scares-me
- Bloomberg: https://www.bloomberg.com/news/articles/2017-10-31/facebook-is-still-in-denial-about-fake-news
- U.S. Senate Committee on the Judiciary (Oct. 31 testimonies): https://www.judiciary.senate.gov/meetings/extremist-content-and-russian-disinformation-online-working-with-tech-to-find-solutions