Digital Dangers

What do AI, blockchain and GDPR mean for cybersecurity?


Photo illustration of a person walking on a twisted road, by Sara Wadford/Shutterstock.

Someone may be watching your screen—by listening to it.

A recent study from cybersecurity researchers at the universities of Michigan, Pennsylvania and Tel Aviv found that LCD screens “leak” a faint, high-pitched sound that can be processed by artificial intelligence to give a hacker insight into what’s on the screen.

“Displays are built to show visuals, not emit sound,” says Roei Schuster, a PhD candidate at Tel Aviv University and a co-author of the study with Daniel Genkin, Eran Tromer and Mihir Pattani. Yet the team’s research shows that screens do, in fact, emit sound, and that the sound can betray what they display.

The researchers were able to collect the noise through a built-in or nearby microphone, or remotely, over a Google Hangouts call, for example. Then they ran the audio through a neural network—a type of AI—to determine on-screen keystrokes or to identify which of the Alexa top 10 websites the user was visiting.

The researchers were also able to identify a majority of words from a list of 100 English words, albeit in an ideal research setting where the words were displayed in all-capital black letters on a white background. For 72 of the words, the correct answer “appeared in the list of top-five most probable words,” the researchers found.
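To give a rough sense of the approach, the sketch below (hypothetical, not the researchers’ published code) records a screen’s faint acoustic leakage, converts it to a spectrogram and lets a small neural network guess what is being displayed. The file name, spectrogram parameters and network architecture are illustrative assumptions.

```python
# Minimal sketch of an acoustic side-channel classifier, assuming a short WAV
# recording captured near a screen. NOT the researchers' actual pipeline.
import numpy as np
import torch
import torch.nn as nn
from scipy import signal
from scipy.io import wavfile

def audio_to_spectrogram(wav_path):
    """Load a WAV file and return a log-magnitude spectrogram."""
    rate, samples = wavfile.read(wav_path)        # audio captured near the screen
    if samples.ndim > 1:                          # keep one channel if stereo
        samples = samples[:, 0]
    _, _, spec = signal.spectrogram(samples.astype(np.float32), fs=rate, nperseg=1024)
    return np.log1p(spec).astype(np.float32)      # compress dynamic range

class ScreenContentClassifier(nn.Module):
    """Tiny convolutional net mapping a spectrogram to one of N candidate
    on-screen contents (e.g., which of 10 websites is displayed)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    # Hypothetical usage: score a recording against 10 candidate websites.
    spec = audio_to_spectrogram("screen_leakage.wav")         # assumed file name
    batch = torch.from_numpy(spec).unsqueeze(0).unsqueeze(0)  # shape (1, 1, freq, time)
    model = ScreenContentClassifier(n_classes=10)             # would require training
    print(model(batch).softmax(dim=1))                        # class probabilities
```

In practice, such a classifier would have to be trained on many labeled recordings of known screen contents; untrained, the model above outputs meaningless probabilities.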

Paper co-authors Roei Schuster (left), Daniel Genkin (rear) and Eran Tromer (right). Photo courtesy of Amit Shall.

“Advances in machine learning today can be extremely useful,” says Schuster—even for malicious attackers. This novel approach illustrates yet another vulnerability, one that affects just about anyone with a monitor.

While Schuster is not aware of this exploit being used by nefarious actors, he recommends that those working on sensitive material keep microphones away from their screens—an increasingly tough request, as many devices now come with microphones built in.

“Beyond designing new screen models to adopt mitigations,” he says, “there is little manufacturers could do at this point.”

In January, when we started this series, we laid out several major themes. Chief among them was that cyberthreats are ever-evolving, as Schuster’s team’s research shows.

With that in mind, we close this series by looking around the bend to understand how major emerging technologies will affect cybersecurity in the coming years. While experts disagree about when technologies such as artificial intelligence and blockchain will play a larger role in cybersecurity and data protection, there is broad agreement that their roles will be pivotal. This could, in turn, create new solutions, risks and regulatory headaches.

Today, it is standard to protect a centralized database through a combination of software, hardware and human intervention. Still, significant data breaches occur, including those at Equifax, the U.S. Office of Personnel Management and Yahoo. In the last two years, the city of Atlanta, DLA Piper and shipping company Maersk were all temporarily crippled by attacks on their networks.

Cybersecurity and the law

A joint production of the ABA Journal and the ABA Cybersecurity Legal Task Force

While artificial intelligence and blockchain are not silver bullets, each has the potential to provide another layer of protection or intervention. Novel defenses are needed as state and private actors are teaming up to create more potent, self-propagating attacks, according to a 2018 report from CrowdStrike, a cybersecurity company.

Regardless of a threat’s source, recovering from an attack is expensive. A recent global report by IBM Security and the Ponemon Institute found that a single data breach cost a corporate victim an average of $3.9 million.

At the same time, there are too few people to fill cybersecurity jobs. By 2022, the cybersecurity workforce gap will reach 1.8 million people, according to a 2017 Frost & Sullivan survey. Already, one-third of new cybersecurity hires in North America don’t have a technical or cybersecurity background.

With no influx of trained new hires on the horizon, filling this gap requires automation.

“Automation can be configured to detect anomalies better and faster than humans, supplant operators in monitoring tasks, and decrease false positives to free up analyst time,” argued a Booz Allen Hamilton report on the role of AI in cybersecurity. By collecting data about an attack, AI programs can also learn attackers’ habits, which can improve threat detection and assess a network’s risk, according to the report.
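As a concrete illustration of the kind of anomaly detection the report describes, here is a minimal, hypothetical sketch using an isolation forest on numeric traffic features; the features and numbers are assumptions for illustration, not drawn from the Booz Allen Hamilton report or any particular product.

```python
# Minimal sketch: unsupervised anomaly detection over per-connection features
# (bytes transferred, duration in seconds, distinct ports contacted).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in "normal" traffic: 1,000 connections described by 3 features.
normal_traffic = rng.normal(loc=[500, 2.0, 3], scale=[100, 0.5, 1], size=(1000, 3))

# Fit an unsupervised model on traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new connections; -1 flags a likely anomaly for an analyst to review.
new_connections = np.array([
    [520, 2.1, 3],       # looks like ordinary traffic
    [90000, 45.0, 40],   # unusually large transfer touching many ports
])
print(detector.predict(new_connections))   # e.g. [ 1 -1 ]
```

In a real deployment, the features would come from flow logs or endpoint telemetry, and flagged events would be routed to a human analyst, which is the report’s point about reducing false positives and freeing up analyst time.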

A STEP AHEAD

However, the cutting edge is quickly moving beyond these tasks.

In 2016, the Defense Advanced Research Projects Agency held the Cyber Grand Challenge, the world’s first machine-only hacking tournament. Contestants built software that was then subjected to a battery of automated attacks. To win, computers had to protect themselves, scan for vulnerabilities and apply patches in real time without human support.

The event showed that, in a competition environment, AI could not only find and repair its own flaws but do so in seconds.

A year later, the Defense Department’s Defense Innovation Unit awarded a contract to the winner of the competition, ForAllSecure, to automate analysis and remediation of weapons systems. The technology provides “a 1,000-fold improvement in time and cost performance … over manual methods,” according to the company’s website.

While the competition set a new bar, in reality today’s AI applications augment rather than replace human abilities.

“We’re not at a point where artificial intelligence can make causation decisions, to think like a human mind thinks, to figure out why it actually happens,” says Lt. Col. Natalie Vanatta, deputy chief of research at the Army Cyber Institute.

Whether operable today or down the road, AI in cybersecurity, like any AI application, is limited by the quantity and quality of the data needed to “train” an algorithm. Currently, research in the cybersecurity space is being handicapped “because the data is not available,” Vanatta says.

Even when data is available, its quality is also at issue.

For example, a firm may want to develop an in-house threat detection tool based on its network’s data. But if the company doesn’t know its network well enough to say it is adversary-free, then it’s nearly impossible to trust the data or an algorithm trained on it. The aforementioned CrowdStrike report found that adversaries will dwell in a network for an average of 86 days before they are discovered.

 



This article was published in the December 2018 ABA Journal magazine with the title "What Lies Ahead."
