Research Uncovers Critical Security Risks in Hugging Face's AI Platform

Security Vulnerability in Hugging Face

Security researchers at Wiz, a leading cloud security company, have found a critical security vulnerability in Hugging Face's platform that could lead to remote code execution (RCE) on the server and put users' data and models at risk.

The collaboration with Hugging Face, a prominent AI-as-a-Service provider, comes as the adoption of AI continues to grow at an astonishing rate, highlighting the need for robust security measures to protect sensitive data and models.

Wiz researchers discovered architectural risks that could potentially compromise AI-as-a-Service providers and put customer data at risk. The findings, which are not unique to Hugging Face, illustrate the tenant-isolation challenges many AI-as-a-Service companies face as they handle large amounts of data while experiencing rapid growth.

The research focused on two key areas: the Inference API and Inference Endpoints, which allow users to interact with and deploy AI models, and Hugging Face Spaces, a service for hosting AI-powered applications.

Wiz researchers found that by uploading a specially crafted, malicious model, they could execute arbitrary code within Hugging Face's infrastructure and gain cross-tenant access to other customers' models.
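The mechanism behind such "specially crafted" models is typically Python's Pickle serialization, which many model formats are built on: deserializing a pickle file can invoke arbitrary callables chosen by whoever produced the file. The minimal sketch below (the class name is hypothetical, and a harmless callable stands in for real attack code) shows why simply loading an untrusted model file can execute attacker-controlled code:

```python
import pickle

# Pickle records the result of __reduce__ at dump time and invokes the
# returned callable at load time -- the mechanism behind malicious model files.
class MaliciousPayload:  # hypothetical name, for illustration only
    def __reduce__(self):
        # A real attacker would return os.system or similar here;
        # str.upper stands in as a harmless arbitrary callable.
        return (str.upper, ("pwned",))

blob = pickle.dumps(MaliciousPayload())

# Merely deserializing the blob executes the attacker-chosen call:
result = pickle.loads(blob)
print(result)  # PWNED
```

No method of `MaliciousPayload` is ever called explicitly by the victim; `pickle.loads` alone triggers the execution, which is why scanning or avoiding untrusted pickle files matters.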

Furthermore, the researchers discovered that due to insufficient scoping, it was possible to pull and push images to Hugging Face's centralized container registry, potentially leading to supply chain attacks on customers' spaces.

Hugging Face has taken swift action to mitigate these issues, working closely with Wiz to strengthen the platform's security. The company has recently implemented Wiz CSPM and vulnerability scanning to proactively identify and address security risks. Additionally, Hugging Face is undergoing its annual penetration test to ensure that identified vulnerabilities have been sufficiently mitigated.

The research also sheds light on the risks associated with using untrusted AI models, particularly those based on the Pickle format. Hugging Face has been working to find a middle ground, allowing the use of Pickle files while implementing measures to mitigate the associated risks. These measures include creating clear documentation, developing automated scanning tools, labelling models with security vulnerabilities, and providing a secure alternative in the form of Safetensors.
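The appeal of a format like Safetensors is that loading it is pure parsing: a header describing the tensors plus raw bytes, with no code paths that deserialization can trigger. The sketch below illustrates that data-only idea with stdlib tools; it is an illustration of the concept, not the actual Safetensors file layout or API:

```python
import json
import struct

def save_tensors(tensors: dict) -> bytes:
    """Serialize {name: list-of-floats} as a length-prefixed JSON header
    followed by packed float32 bytes (illustrative layout only)."""
    header, payload, offset = {}, b"", 0
    for name, values in tensors.items():
        raw = struct.pack(f"{len(values)}f", *values)
        header[name] = {"dtype": "f4", "len": len(values),
                        "offsets": [offset, offset + len(raw)]}
        payload += raw
        offset += len(raw)
    head = json.dumps(header).encode()
    return struct.pack("<Q", len(head)) + head + payload

def load_tensors(blob: bytes) -> dict:
    """Parse the blob back. Only json.loads and struct.unpack are used,
    so a crafted file can yield bad data but can never execute code."""
    (hlen,) = struct.unpack_from("<Q", blob)
    header = json.loads(blob[8:8 + hlen])
    body = blob[8 + hlen:]
    out = {}
    for name, meta in header.items():
        start, end = meta["offsets"]
        out[name] = list(struct.unpack(f'{meta["len"]}f', body[start:end]))
    return out

weights = {"bias": [0.5, -1.0]}
assert load_tensors(save_tensors(weights)) == weights
```

Contrast this with the Pickle path: here the worst a malicious file can do is fail to parse or produce wrong numbers, which is exactly the trade-off that makes a data-only format the safer default for sharing model weights.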

The company noted that it is committed to protecting its users and customers from security threats while democratizing AI and allowing the community to responsibly experiment with and operationalize AI systems. The company has also announced plans for further security improvements and publications that will address not only the risks to the Hugging Face platform but also the systemic security risks of AI and best practices for mitigation.

As the AI industry continues to evolve rapidly, collaborations like the one between Wiz and Hugging Face are crucial in identifying and addressing new attack vectors and exploits. Hugging Face's proactive approach to security and its partnership with the community demonstrate the importance of transparency and collaboration in maintaining a secure platform for AI development and deployment.

The findings from this research serve as a wake-up call for the entire AI industry, emphasizing the need for mature regulation and security practices similar to those enforced on public cloud service providers.

As more organizations worldwide adopt AI-as-a-Service, it is essential that the industry recognizes the potential risks in this shared infrastructure and takes the necessary steps to protect sensitive data and models.
