The US Department of the Treasury has warned of the cybersecurity risks posed by AI to the financial sector.
The report, which was written at the direction of Presidential Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, also sets out a series of recommendations for financial institutions on how to mitigate such risks.
AI-Based Cyber Threats to the Financial Sector
Financial services and technology-related companies interviewed in the report acknowledged the threat posed by advanced AI tools such as generative AI, with some believing they will initially give threat actors the “upper hand.”
This is due to such technologies improving the sophistication of attacks like malware and social engineering, as well as reducing barriers to entry for less-skilled attackers.
The report also highlighted other ways cyber threat actors can use AI to target financial systems, including vulnerability discovery and disinformation – such as using deepfakes to impersonate individuals like CEOs in order to defraud companies.
The report acknowledged that financial institutions have used AI systems to support operations for a number of years, including in cybersecurity and anti-fraud measures. However, some of the institutions included in the study reported that existing risk management frameworks may not be adequate to cover emerging AI technologies such as generative AI.
A number of the interviewees said they are paying close attention to cyber threats unique to the AI systems used in financial organizations, which could be a particular target for insider threat actors.
These include data poisoning attacks, which aim to corrupt the training data of the AI model.
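The mechanics of such an attack can be sketched in a few lines. The example below is purely illustrative and not taken from the Treasury report: an attacker who can relabel a fraction of "fraud" training examples as "legitimate" drags a toy nearest-centroid classifier's decision boundary toward the fraud cluster. All names, numbers and the classifier itself are assumptions made for the demonstration.

```python
# Illustrative sketch only -- not from the Treasury report. Shows how
# data poisoning (corrupting training labels) can skew a toy model of the
# kind a financial institution might use for fraud detection.
import numpy as np

rng = np.random.default_rng(42)

# Toy training data: "legitimate" transactions cluster near -2, "fraud" near +2.
n = 500
X = np.concatenate([rng.normal(-2.0, 1.0, n), rng.normal(+2.0, 1.0, n)])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

def centroids(X, y):
    """Per-class means; the classifier assigns each point to the nearest one."""
    return np.array([X[y == 0].mean(), X[y == 1].mean()])

def predict(X, c):
    return np.abs(X[:, None] - c[None, :]).argmin(axis=1)

c_clean = centroids(X, y)

# Poisoning step: the attacker relabels 30% of fraud examples as legitimate,
# dragging the "legitimate" centroid toward the fraud cluster.
y_poisoned = y.copy()
flip = rng.choice(np.where(y == 1)[0], size=int(0.3 * n), replace=False)
y_poisoned[flip] = 0
c_poisoned = centroids(X, y_poisoned)

# Fresh test data to compare the clean and poisoned models.
X_test = np.concatenate([rng.normal(-2.0, 1.0, n), rng.normal(+2.0, 1.0, n)])
y_test = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
acc_clean = (predict(X_test, c_clean) == y_test).mean()
acc_poisoned = (predict(X_test, c_poisoned) == y_test).mean()
print(f"clean accuracy: {acc_clean:.3f}, poisoned accuracy: {acc_poisoned:.3f}")
```

Real-world poisoning of a production fraud model would be far subtler, but the principle is the same: corrupted training data quietly shifts what the model learns to flag.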
Another concern identified in the report regarding in-house AI solutions is that the resource requirements of AI systems will generally increase institutions’ direct and indirect reliance on third-party IT infrastructure and data.
Factors such as how the training data was gathered and handled could expose financial organizations to extra financial, legal and security risks, according to the interviewees.
How to Manage AI-Specific Cybersecurity Risks
The Treasury provided a number of steps financial organizations can take to address immediate AI-related operational risk, cybersecurity and fraud challenges:
- Utilize applicable regulations. While existing laws, regulations and guidance may not expressly address AI, the principles in some of them can apply to the use of AI in financial services. This includes regulations related to risk management.
- Improve data sharing to build anti-fraud AI models. As more financial organizations deploy AI, a significant gap has emerged in fraud prevention between large and small institutions. This is because large organizations tend to have far more historical data to build anti-fraud AI models than smaller ones. As such, there should be more data sharing to allow smaller institutions to develop effective AI models in this area.
- Develop best practices for data supply chain mapping. Advancements in generative AI have underscored the importance of monitoring data supply chains to ensure that models are using accurate and reliable data, and that privacy and safety are considered. Therefore, the industry should develop best practices for data supply chain mapping, and also consider implementing ‘nutrition labels’ for vendor-provided AI systems and data providers. These labels would clearly identify what data was used to train the model and where it originated.
- Address the AI talent shortage. Financial organizations are urged to train less-skilled practitioners on how to use AI systems safely, and provide role-specific AI training for employees outside of information technology.
- Implement digital identity solutions. Robust digital identity solutions can help combat AI-enabled fraud and strengthen cybersecurity.
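To make the ‘nutrition label’ recommendation above more concrete, the snippet below sketches what one might record for a vendor-provided model. The report does not prescribe a format; every field name and value here is a hypothetical assumption.

```python
# Hypothetical 'nutrition label' for a vendor-provided AI model.
# The Treasury report suggests the concept but not a schema; this
# structure and all of its fields are illustrative assumptions.
model_nutrition_label = {
    "model_name": "vendor-fraud-detector",  # hypothetical vendor model
    "training_data_sources": [
        {"source": "vendor-internal transaction logs", "origin": "first-party"},
        {"source": "licensed credit-bureau records", "origin": "third-party"},
    ],
    "data_collection_period": "2019-2023",
    "personal_data_included": True,
}

# A purchasing institution could then check data provenance before deployment.
third_party_sources = [
    s["source"]
    for s in model_nutrition_label["training_data_sources"]
    if s["origin"] == "third-party"
]
print(third_party_sources)
```

The point of such a label is that provenance questions – what data trained the model, and where it came from – become a routine procurement check rather than a post-incident investigation.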
The report also acknowledged that the government needs to take more action to help organizations address AI-based threats. This includes ensuring that AI regulations are coordinated at the state and federal levels, as well as internationally.
Additionally, the Treasury believes the National Institute of Standards and Technology (NIST) AI Risk Management Framework could be tailored and expanded to include more applicable content on AI governance and risk management related to the financial services sector.
Under Secretary for Domestic Finance Nellie Liang commented: “Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability.”