When it comes to technology, the old adage "if it ain't broke, don't fix it" won't fly. Technology can and will eventually break, and AI technology is no exception. AI is gaining popularity, but advancements for the sake of keeping up in the global marketplace won't ensure that the technology is secure or well utilized.
The Washington Post reported that executives from global conglomerates Amazon, Facebook, Intel and Google, along with 34 other U.S. companies, will convene at the White House tomorrow for an AI summit.
"By the Trump administration’s own estimate, the U.S. government spent more than $2 billion in unclassified programs alone during the 2017 fiscal year to research and develop AI technology, according to data furnished this week by the White House’s Office of Science and Technology Policy," the Post reported.
With the proliferation of interconnected devices, AI technology has gone mainstream, and more and more vendors are racing to adopt it in their solutions. But early AI carries real dangers.
In recent days, the facial recognition technology used by Welsh police was reported to have a 92% false positive rate, incorrectly matching the faces of thousands of innocent people against its criminal database.
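A false positive rate of that kind is simply the share of the system's match alerts that turn out to be wrong identifications. The sketch below illustrates the arithmetic; the figures in it are hypothetical, not the actual Welsh police numbers.

```python
def false_positive_share(false_alerts: int, total_alerts: int) -> float:
    """Fraction of match alerts that were incorrect identifications."""
    if total_alerts <= 0:
        raise ValueError("total_alerts must be positive")
    return false_alerts / total_alerts

# Hypothetical example: 920 of 1,000 alerts were wrong matches.
rate = false_positive_share(920, 1000)
print(f"{rate:.0%}")  # → 92%
```

At a rate like this, almost every alert an officer acts on points at the wrong person, which is why accuracy, not just capability, determines whether such a deployment is fit for purpose.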
And there are growing concerns about the inherent risks of a connected world so enamored with the wonders of AI that widespread adoption could put people out of work.
While advancement in AI is important, and it might be fun to build products, Itsik Mantin, lead scientist at the Imperva Defense Center, said it's necessary to adopt a hacker's way of thinking in order to build secure solutions.
In an 8 May blog post, "The AI'ker's Guide to the (Cybersecurity) Galaxy," Mantin analyzes AI technology and security from an attacker’s perspective, including guidelines for safely using AI. Additionally, he estimates the risk associated with using AI due to adversarial behavior.
Drawing on his experience over the last four years working on several research projects aimed at incorporating AI technology into products, Mantin said, "The most significant challenge we had to cope with was making sure that our use of AI worked safely in adversarial settings.
"In its early days, the community talks innovation and opportunity, expectations and excitement couldn’t be higher. Then comes disillusionment. Once security researchers find ways to make the system do things it wasn’t supposed to, in particular, if the drops of vulnerabilities turn into a flood, the excitement is replaced with FUD – fear, uncertainty and doubt around the risk associated with the new technology," Mantin wrote.