As the new decade begins, we are still in the Wild West when it comes to the deployment of artificial intelligence. AI has the potential to be transformative across all facets of society, reshaping areas such as medical research, education, agriculture, law enforcement and customer service for the better. But if the technology is left unchecked, there are well-founded concerns about the unintended consequences that will result.
These concerns aren’t new – the late Stephen Hawking warned that AI “could spell the end of the human race,” and Elon Musk has urged regulation of AI “before it’s too late” – but the risk to society is clearly rising as AI becomes increasingly integrated into our lives.
As I look at products being released today, many of them seem to come with claims that they utilize AI or machine learning technology. Predictably, the results are not always as envisioned.
A recent Wall Street Journal article suggests AI used by TikTok sent sexual videos “en masse” to male users. In the healthcare sector, there are concerns that AI could worsen existing problems in medical care. Meanwhile, the potential for bias to be introduced, often unintentionally, into AI algorithms has emerged as another legitimate point of concern.
Then there are the more existential threats, such as how AI could factor into autonomous weapons and other forms of warfare. Despite all this, as enterprise leaders decide when and how to incorporate AI, considerations of profitability and efficiency often overshadow security and ethics. That has to change. Just because AI makes something possible does not mean it should be done, and new approaches will be needed to encourage responsible implementation.
This backdrop helps explain why Google and other major influencers in the tech world are giving thought to what type of AI regulation might be necessary. I’m ordinarily not a major proponent of regulation, but protecting humans from serious harm rises to a level that warrants these conversations.
There are other guardrails that can be put in place beyond regulation, including careful consideration of how AI is trained. There are two basic ways in which AI systems learn: one is through human-guided training, and the other is through self-learning. In the case of human-trained AI, enormous trust and responsibility are placed in those individuals to do the job right, because a maliciously trained AI is capable of inflicting enormous damage.
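To make the distinction concrete, here is a minimal, illustrative sketch in Python – all data, labels and payouts are hypothetical, not drawn from any real system. The first snippet derives its rule directly from human-labeled examples, so the labeler’s judgment (or bias, or malice) flows straight into the model; the second discovers its behavior on its own from a reward signal, with no human labels in the loop.

```python
import random

# A toy, hypothetical contrast between the two learning modes.
# Neither snippet reflects any real product.

# 1) Human-trained: the rule is derived directly from human-labeled
#    examples, so the labeler's judgment becomes the model's behavior.
labeled_examples = [(0.92, "spam"), (0.88, "spam"), (0.15, "ham"), (0.05, "ham")]
spam_scores = [s for s, label in labeled_examples if label == "spam"]
ham_scores = [s for s, label in labeled_examples if label == "ham"]
threshold = (min(spam_scores) + max(ham_scores)) / 2  # midpoint between classes

def classify(score: float) -> str:
    return "spam" if score > threshold else "ham"

# 2) Self-learning: an epsilon-greedy two-armed bandit that discovers the
#    better action purely from reward, with no human labels in the loop.
true_payout = {"A": 0.3, "B": 0.7}  # hidden from the learner
estimates = {"A": 0.0, "B": 0.0}    # the learner's running value estimates
counts = {"A": 0, "B": 0}
for _ in range(1000):
    explore = random.random() < 0.1
    arm = random.choice(["A", "B"]) if explore else max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(classify(0.90))                     # "spam" -- inherited from human labels
print(max(estimates, key=estimates.get))  # almost always "B" -- found via reward
```

In the first mode, the humans doing the labeling are the natural control point; in the second, control shifts to the reward signal and the constraints placed around it, which is where the technical guardrails discussed below come in.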
How much power are we willing to put in the hands of AI trainers? Certification is an important benchmark for many types of technology roles, so using it to vet those who train AI would be a worthwhile course of action.
The necessary guardrails also must be in place from a technical perspective, including AI programs whose core constraints are hardcoded – perhaps embedded in firmware or hardware – so they cannot be reprogrammed.
For example, a traffic system AI should be hardcoded to create the most efficient traffic flow possible, but not at the cost of accidents that could injure or kill people. AI tends to pursue whatever goal it is given, so adding qualifiers to that goal is critically important.
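As a minimal sketch of what such a qualifier might look like in code – with hypothetical names and thresholds, not any real traffic system – the safety limits below are checked as hard constraints that the optimizer can never trade away for throughput:

```python
from dataclasses import dataclass

# Hypothetical hard limits. In the firmware/hardware approach described
# above, these would be burned in so that no retraining or reprogramming
# of the optimizer could relax them.
MIN_YELLOW_SECONDS = 3.0
MIN_PEDESTRIAN_CROSSING_SECONDS = 7.0

@dataclass
class SignalPlan:
    green_seconds: float
    yellow_seconds: float
    pedestrian_crossing_seconds: float
    estimated_vehicles_per_hour: float  # throughput the optimizer predicts

def violates_hard_constraints(plan: SignalPlan) -> bool:
    """Safety qualifiers come first; they are gates, not weights."""
    return (plan.yellow_seconds < MIN_YELLOW_SECONDS
            or plan.pedestrian_crossing_seconds < MIN_PEDESTRIAN_CROSSING_SECONDS)

def score(plan: SignalPlan) -> float:
    """Efficiency is the objective, but only among plans that pass the
    hardcoded safety gate; an unsafe plan is never selectable."""
    if violates_hard_constraints(plan):
        return float("-inf")
    return plan.estimated_vehicles_per_hour

def best_plan(candidates: list[SignalPlan]) -> SignalPlan:
    return max(candidates, key=score)

plans = [
    SignalPlan(45, 2.0, 5.0, 1800),  # highest throughput, but unsafe: rejected
    SignalPlan(40, 3.5, 8.0, 1500),  # safe and reasonably efficient: selected
]
assert best_plan(plans).estimated_vehicles_per_hour == 1500
```

The design choice matters: if the safety limits were merely penalty terms in the score, a large enough efficiency gain could outweigh them – precisely the failure mode to avoid. Rejecting unsafe plans outright removes that trade-off, and embedding the limits in firmware keeps a retrained or compromised optimizer from relaxing them.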
Organizations also need to recognize the importance of AI ethics. In ISACA’s recent Next Decade of Tech survey, respondents were split 50-50 on whether enterprises will give sufficient attention to the ethical considerations of deploying AI and machine learning in the new decade. This needs to become a focal point for every organization that uses AI.
Larger organizations might be equipped to appoint a dedicated AI ethics officer, while enterprises with fewer resources can incorporate these considerations into their existing governance and risk management processes. Until the regulatory landscape catches up, it is up to organizations to self-police their AI practices, both to protect their customers and to safeguard their own business interests.
A sharp focus on AI ethics is quickly becoming table stakes for doing business, just as it has with cybersecurity.
Let’s not wait for major tragedies or calamitous business crises to mobilize us into action. Absent measures such as regulation, certification for AI trainers and a proper emphasis on AI ethics, major incidents are inevitable.
Now is the time to get our act together and to build industry-wide consensus on what is legally and ethically acceptable when it comes to leveraging this powerful and transformative technology.