Whose Team Is Artificial Intelligence On: The Corporations or Hackers?


The mantra of modern technology is continuous improvement and innovation. That makes sense: we are always looking for better ways to get processes, actions and activities done.

Automation and machine learning, for instance, are currently used across many industries to streamline basic processes and remove the repetition from a typical worker’s routine. Machines also tend to be more efficient and less resource-intensive: a robotic or automated system keeps working at its set performance level, never tiring, growing hungry or burning out.

As we create more innovative solutions as a society, do we also set ourselves up for harder, more damaging falls? With AI, for example, do we open the gates to more dangerous and more frequent attacks?

It’s no secret that the technology at our disposal can be used for both good and ill; it depends on who controls and possesses the necessary systems. With AI, who is truly in control? Is it possible that hackers and other unscrupulous parties will take advantage of it to create more havoc and trouble for the rest of us?

Does modern AI pose a cybersecurity risk?
A recent study, authored by 25 technical and public policy researchers from Cambridge, Oxford and Yale alongside privacy and military experts, highlights the potential for misuse of AI by rogue states, criminals and other unscrupulous parties. The threats it lists carry digital, physical and political ramifications, depending on how the systems and tools are leveraged and structured.

The study focuses specifically on plausible, reality-based developments that could happen over the next five years. These are framed less as “what if” scenarios and more as questions of “when” over the coming decade.

There’s no reason to be alarmed just yet: the paper doesn’t say AI is inherently dangerous or will definitely be used to harm modern society, only that a series of risks is evident.

In fact, Miles Brundage, a researcher at Oxford’s Future of Humanity Institute, said: “We all agree there are a lot of positive applications of AI.”

He goes on to state that “there was a gap in the literature around the issue of malicious use.” The picture is not dire, but it should serve as a warning: if we intend to use AI more widely in the future, which we certainly do, then we need more advanced security and privacy measures to protect organizations, citizens and devices.

With self-driving vehicles, which are controlled primarily by computer-based AI, it’s possible for hackers to gain access to a vehicle in motion and take control. It takes no stretch of the imagination to see them sending vehicles careening off the road, disengaging locks and other features, or doing much worse.

Imagine commercial and military drones turned into remotely accessed weapons by shadowy parties and criminals.

These are, of course, worst-case scenarios that will only play out if administrators and developers fail to build robust security and protections into the foundations of these devices.

Right now, the risk is balanced
Imagine a box teetering on the edge of a cliff, balanced evenly on the ledge. That is the current state of AI, machine learning and similar neural network-based tools. They can and likely will be used to cause harm, but the risks can also be mitigated if security is handled properly. A proper authentication and access control scheme to block unauthorized access is just the start.
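As a rough sketch of what such an authentication check could look like at the device level, consider a connected vehicle or drone that refuses any remote command lacking a valid signature. The function names, command format and shared-secret provisioning below are assumptions made purely for illustration, not any vendor’s actual API:

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned to the device during manufacture.
DEVICE_SECRET = b"example-device-secret"

def is_authentic(command: bytes, signature_hex: str) -> bool:
    """Return True only if the command carries a valid HMAC-SHA256 signature."""
    expected = hmac.new(DEVICE_SECRET, command, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature_hex)

def handle_remote_command(command: bytes, signature_hex: str) -> None:
    if not is_authentic(command, signature_hex):
        # Unauthenticated commands are dropped instead of being executed.
        raise PermissionError("Rejected unauthenticated remote command")
    # ...dispatch the verified command to the vehicle's control system...
```

Signing commands is only one layer: a real deployment would also need per-command authorization, key rotation and tamper-resistant key storage.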

Hackers can leverage the technology, yes, and some predict a wave of oncoming AI-powered attacks, but it’s important to remember the technology can also be turned to defense, preventing those attacks from occurring more often and at much greater scale.

A House of Lords committee, for instance, believes that “the UK is in a strong position to be a world leader in the development of AI.”

In a recent report, the committee stated that the country has a “unique opportunity to shape AI positively” for the “public’s benefit […], rather than passively [accepting] its [negative] consequences.”

AI should be developed, maintained and deployed to uphold positive and ethical principles. That means protecting the data rights and privacy of individuals, families and communities, and educating everyone on the potential uses and effects of the technology. It also means putting limitations and systems in place so that AI is never given the “autonomous power to hurt, destroy or deceive” the human race.

The House of Lords committee believes this approach can be upheld by establishing a cross-sector AI code, adopted on a national and international scale.

At this point, we don’t know for certain whether the House of Lords’ ambitions will be realized. The future is as open as it will ever be for AI and machine learning technologies. With the appropriate support, development and care, however, we can move forward without creating systems that cause widespread harm.
