A panel of industry experts gathered at RSA Conference 2018 in San Francisco to explore the role that machine learning and artificial intelligence are playing in the current cyber landscape.
Moderator: Ira Winkler, president, Secure Mentem
Panel:
Oliver Friedrichs, founder and CEO, Phantom
Dustin Hillard, CTO, Versive
Dr Ramesh Sepehrrad, VP of technology and business resiliency risk, Freddie Mac
After opening the discussion by asking each panelist to give their own definition of machine learning, Ira asked the speakers which types of applications are most appropriate for the use of machine learning and AI.
Hillard: The areas where it is most mature are speech and image processing, along with fraud detection. “The technology should be an enabler to solving a problem but sometimes it gets lost in what’s being accomplished.”
Friedrichs: Most people have woken up to the fact that machine learning and AI are not the panacea that marketing tells us they are, but they can add to the feature set of a product. More recently, we are seeing it used for “augmenting our decision-making, being able to augment [data] to increase capacity.”
Ira then asked the panel about the potential social implications of the use of machine learning and AI, and whether any issues arise in that regard.
Sepehrrad: “I’m very worried that it’s the technology defining the user experience, and not the user defining the technology. These are the things we have to think about as technologists – this is not an innovation challenge, it’s not just a cool idea that’s going to make money; this is something that’s going to have generational impact beyond us.”
Moving the discussion on, Ira asked about scenarios in which machine learning and AI can be targeted and manipulated for malicious gain.
Friedrichs: There’s a whole domain called adversarial machine learning, which involves attacking “a machine learning algorithm to trick it into doing something different.” In terms of security, attackers “will attack these algorithms by either getting past them or causing them to train on things that eventually allow them to evade and create evasion scenarios.”
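To make the evasion idea concrete, here is a minimal, hypothetical sketch (not something the panel presented): a toy scikit-learn classifier is trained to separate “benign” from “malicious” samples, and a malicious input is then nudged, feature by feature, until the model misclassifies it as benign.

```python
# Hypothetical evasion sketch: perturb a "malicious" input until a toy
# classifier labels it "benign". Illustrative only; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two feature clusters standing in for benign (0) and malicious (1).
benign = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))
malicious = rng.normal(loc=1.0, scale=0.5, size=(100, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# Evasion: step a malicious sample against the model's weight vector
# (the gradient of its decision function) until the predicted label flips.
x = malicious[0].copy()
step = 0.1
while model.predict(x.reshape(1, -1))[0] == 1:
    x -= step * np.sign(model.coef_[0])

print("original:", malicious[0], "-> classified malicious")
print("perturbed:", x, "-> classified benign")
```

The same principle underlies the evasion scenarios Friedrichs describes: small, targeted changes to an input can push it across a model’s decision boundary without changing what the input actually is.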
Is there the possibility of a ‘Skynet-like’ future, Ira asked, in which machine learning might become autonomous in ways we do not want?
Friedrichs: Algorithms can definitely fight other algorithms – “it’s entirely conceivable.”
Hillard: “There are some micro examples of how an algorithm can go off the rails and how, without enough controls and transparency,” things can go wrong. “If an algorithm is left unattended it can go down a path that was not perceived by the original designer of it.”
Sepehrrad: “I’d want to take a step back and ask whose finger is on the keyboard. We have to think through what the problem we are trying to solve is, and you really have to think through what the motivation is, the potential goal and the drive to achieve that goal.”
To conclude, Ira asked whether there are things that machine learning and AI should not be used for.
Hillard: It should never be used in any place in which it “increases complexity without improving the outcome.”