The UK could be a world-leader in artificial intelligence (AI) if it puts ethics first, according to a new House of Lords report — with experts claiming the technology could also help combat cybersecurity challenges.
The Lords select committee’s report, AI in the UK: ready, willing and able?, argued that by taking a proactive role in the development of the new technology, the UK could boost its economy and help to mitigate any associated risks and “misuse.”
The committee recommended AI tech be developed on five principles. It said it should be designed “for the common good and benefit of humanity” and that “the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.”
Against that backdrop, Cyber Security Challenge CEO Colin Lobley argued that AI could help combat the industry's endemic skills shortages.
“A lot has been made of a skills gap in cybersecurity and a lack of resources to process, analyse and protect the vast amounts of data being created and processed across virtually every industry. With AI and machine learning, a lot of tasks can be automated, allowing analysts and security professionals to focus on the tasks that require the human touch — assessing flaws, mitigating damage caused by breaches and the like,” he explained.
“As cyber-attacks become more sophisticated, it’s the critical thought brought by people which will be the key to combatting breaches. This will mean a shift from cybersecurity being the reserve of the ‘techie’ to encompass people with skills in areas as varied as behavioural and forensic psychology or even creative disciplines.”
However, a vigorous debate is underway in cybersecurity circles about whether AI will ultimately be a more useful tool for IT security teams or cyber-criminals and state-sponsored hackers.
A Webroot survey from December 2017 found that although 87% of US IT security professionals are using AI and 99% believe it could improve their organization’s cybersecurity posture, 91% of cybersecurity professionals globally said they’re concerned about hackers using the same technology against them.
NTT Security EMEA SVP, Kai Grunwitz, argued recently that AI presents cyber-criminals with several opportunities, including automating the process of discovering new vulnerabilities.
“AI could also be used by the black hats to model, baseline and then imitate ‘normal’ user behavior to craft highly convincing phishing emails,” he added.
“AI might start off the preserve of a select few cyber-crime gangs and nation states, who have the resources to invest in it. But just as with previous tools and techniques before it, the trickle-down effect will see the technology eventually democratized via dark web forums to the majority.”