Tomorrow’s Intelligent Malware Will Attack When It Sees Your Face

A spectator takes a cellphone photo of the CHAIN Cup at the China National Convention Center in Beijing, Saturday, June 30, 2018. AP Photo/Mark Schiefelbein

IBM researchers have injected viruses with neural nets, making them stealthier and precisely targetable.

You may think today’s malware is bad, but artificial intelligence may soon make malicious software nearly impossible to detect as it waits for just the right person to sit in front of the computer. That’s according to work by a group of IBM researchers, who revealed it at the Black Hat cybersecurity conference last week.

Here’s how the new smart spyware works and why it poses such a big potential threat. Traditional virus-catching software finds malicious code on your computer by matching it against a stored library of known malware. More sophisticated anti-virus tools can deduce that unknown code is malware because it targets sensitive data. Advanced defensive software creates virtual environments, called sandboxes, in which to open suspicious file payloads and watch how they act.
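As a rough illustration of the signature-matching approach described above (a minimal sketch with a made-up signature library, not any vendor’s actual engine), a scanner of this kind simply hashes a file and looks the digest up in a table of known-bad values:

```python
import hashlib

# Hypothetical signature library: SHA-256 digests of known-bad files mapped to names.
# A real antivirus product ships millions of signatures and updates them constantly.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"malicious example bytes").hexdigest(): "Example.PlaceholderTrojan",
}

def scan(contents: bytes):
    """Return a detection name if the content's hash matches a stored signature."""
    return KNOWN_MALWARE_HASHES.get(hashlib.sha256(contents).hexdigest())

print(scan(b"malicious example bytes"))  # matches the stored signature
print(scan(b"harmless example bytes"))   # None: code with no known signature slips past
```

That last line is the gap the IBM work exploits: anything without a known signature, and without behavior a sandbox can provoke, sails through.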

Now enter deep neural nets, or DNNs, which defy easy probing and exploration even by advanced human analysts, much less by software. In much the same way that the inner workings of the mind are a mystery, it’s nearly impossible to understand how neural networks actually produce the outputs that they do.

A neural network has three kinds of layers. The first layer receives inputs from the outside world. Those could be keyboard commands, sensed images, or something else. The middle layer is the indecipherable one. Called the hidden layer, it’s where the network trains itself to do something with the input it received from the first layer; a deep neural net stacks many of these hidden layers. The final layer is the output, the end result of the process. Because neural networks train themselves, it’s nearly impossible to see how they arrive at their conclusions.
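As a toy illustration of that layered structure (a sketch in plain NumPy, not the researchers’ model), the hidden layer is just arrays of learned numbers; nothing about them tells a human reader what the network is looking for:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer: a handful of values from the outside world (pixels, keystrokes, etc.).
x = rng.random(4)

# Hidden layer: weights that would normally be learned during training. To anyone
# inspecting the program, they are just opaque matrices of numbers.
W_hidden = rng.normal(size=(4, 8))
hidden = np.tanh(x @ W_hidden)

# Output layer: the end result of the process, here a single score.
w_out = rng.normal(size=8)
output = hidden @ w_out

print("Hidden activations:", np.round(hidden, 2))
print("Output:", float(output))
```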

The opaque nature of DNNs is one reason why policy, intelligence, and defense leaders have serious reservations about employing them in life-or-death situations. It’s hard for a commander to justify the decision to drop a bomb on a target when that decision rests on a process no one can explain. But DNNs are becoming increasingly popular in commercial and civilian settings, such as market forecasting, because they work so well.

The IBM researchers figured out a way to weaponize that hidden layer, and that presents a big new potential threat.

“It’s going to be very difficult to figure out what it is targeting, when it will target, and the malicious code,” said Jiyong Jang, one of the researchers on the project.

Head researcher Marc Ph. Stoecklin said, “The complex decision-making process of a [deep neural net] model is encoded in the hidden layer. A conventional virus scanner can’t identify the intended targets and a sandbox can’t trigger its malicious behavior to see how it works.”

That’s because the program needs a key to open it up, a series of values that matches an internal code. The IBM team decided to make the key a specific person’s face — or more precisely, the set of data generated by a facial-recognition algorithm. They concealed the malicious payload in applications that don’t trigger a response from antivirus programs, such as the ones that run the camera. The neural network will only produce the key when the face in view matches the face it is expecting. With the camera under its control, the DNN sits quietly, waiting and watching for the right person. When that person’s face appears before the computer, the DNN produces the key, which decrypts the malware and launches the attack.
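To make that lock-and-key mechanism concrete, here is a minimal sketch, assuming a face-recognition embedding has already been computed for each camera frame and that the third-party cryptography package is installed. The function names, example vectors, and hash-the-quantized-embedding key derivation are illustrative assumptions, not IBM’s implementation, which derives the key from the neural net itself; a production design would also need tolerance for the frame-to-frame variation in real embeddings.

```python
import base64
import hashlib
import numpy as np
from cryptography.fernet import Fernet, InvalidToken

def key_from_embedding(embedding: np.ndarray) -> bytes:
    """Derive a symmetric key from a face-recognition embedding (hypothetical scheme)."""
    # Coarse quantization stands in for the error tolerance a real design would need.
    quantized = np.round(embedding, 1).tobytes()
    digest = hashlib.sha256(quantized).digest()
    return base64.urlsafe_b64encode(digest)  # Fernet expects a base64-encoded 32-byte key

# "Attacker side": the payload is encrypted against the intended target's face data,
# so the shipped program contains only ciphertext plus a benign-looking model.
target_embedding = np.array([0.12, -0.43, 0.88, 0.05])  # stand-in for a real face vector
locked_payload = Fernet(key_from_embedding(target_embedding)).encrypt(
    b"<malicious code would go here>"
)

# "Victim side": the implant quietly checks each face the camera sees.
def on_camera_frame(observed_embedding: np.ndarray) -> None:
    try:
        plaintext = Fernet(key_from_embedding(observed_embedding)).decrypt(locked_payload)
        print("Target recognized; payload unlocked:", plaintext)  # attack would launch here
    except InvalidToken:
        pass  # wrong face: key derivation fails and the payload stays opaque ciphertext

on_camera_frame(np.array([0.51, 0.02, -0.77, 0.33]))   # a bystander: nothing happens
on_camera_frame(np.array([0.12, -0.43, 0.88, 0.05]))   # the target: payload decrypts
```

Because the payload sits encrypted until the right face appears, neither a signature scan of the file nor a sandbox run without the target in view has anything malicious to observe.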

And face data is just one kind of trigger, the team said. Audio and other means could also be used.