Protect AI Releases 'Bug Bounty' Report On July Vulnerabilities
The vulnerabilities involve tools used to build machine learning models that fuel artificial intelligence applications.
Protect AI, which provides security for artificial intelligence applications, has released its July vulnerability report.
The report was created with Protect AI's AI/ML "bug bounty" program, huntr. According to the company, the program is made up of over 15,000 members who hunt for vulnerabilities across the "entire OSS AI/ML supply chain."
The vulnerabilities involve tools used to build ML models that fuel AI applications. The huntr community, along with Protect AI researchers, found these tools to be vulnerable to "unique security threats." These tools are open source and heavily downloaded, and may ship with embedded vulnerabilities, according to the company.
Here is a list of the vulnerabilities huntr discovered in July:
"Privilege Escalation (PE) in ZenML
Impact: Unauthorized users can escalate their privileges to the server account, potentially compromising the entire system.
A vulnerability in ZenML allows users with normal privileges to escalate to the server account by sending a crafted HTTP request that modifies the is_service_account parameter in the request payload.
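To make the attack class concrete, here is a minimal sketch of the kind of crafted request described above. The server URL, endpoint path, token and user ID are hypothetical placeholders for illustration, not ZenML's actual API.

```python
# Sketch of the request shape described in the report. All URLs, paths,
# and tokens below are hypothetical placeholders, not ZenML's real API.
import requests

ZENML_SERVER = "http://zenml.example.internal"   # hypothetical server URL

req = requests.Request(
    "PUT",
    f"{ZENML_SERVER}/api/v1/users/<user-id>",    # hypothetical endpoint
    headers={"Authorization": "Bearer <low-privilege-token>"},  # normal user
    # The crafted payload flips is_service_account, the parameter the
    # report says a non-admin user could modify to escalate privileges.
    json={"is_service_account": True},
)

# Prepare (but do not send) the request so the sketch runs offline.
prepared = req.prepare()
print(prepared.method, prepared.url)
print(prepared.body)  # b'{"is_service_account": true}'
```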
Local File Inclusion (LFI) in lollms
Impact: Attackers can read or delete sensitive files on the server, potentially leading to data breaches or denial of service.
The sanitize_path_from_endpoint function in lollms does not properly sanitize Windows-style paths, making it vulnerable to directory traversal attacks. This allows attackers to access or delete sensitive files by sending specially crafted requests.
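The flaw class is easy to see in a toy reimplementation: a sanitizer that only understands POSIX-style separators lets Windows-style traversal sequences through. This sketch is illustrative only and is not lollms' actual sanitize_path_from_endpoint code.

```python
# Toy stand-in for a path sanitizer that only understands POSIX
# separators; this is NOT lollms' actual code, just the flaw class.
from pathlib import PurePosixPath, PureWindowsPath

def naive_sanitize(path: str) -> str:
    # Rejects only forward-slash traversal components ("../").
    if ".." in PurePosixPath(path).parts:
        raise ValueError("path traversal detected")
    return path

# A Windows-style payload slips past the check: with POSIX rules the
# whole string is a single path component, so no ".." part is found.
payload = "..\\..\\windows\\win.ini"
print(naive_sanitize(payload))           # not caught
print(PureWindowsPath(payload).parts)    # ('..', '..', 'windows', 'win.ini')
```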
Path Traversal via normalizePath() Bypass
Impact: Attackers can read, delete or overwrite critical files, leading to data breaches, application compromise or denial of service.
A bypass in the normalizePath() function allows attackers to perform path traversal attacks. This can be exploited to read, delete or overwrite files in the storage directory, including the application's database and configuration files.
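A common defensive pattern against this class of bypass is to resolve the requested path against the storage root and then verify containment, rather than trusting string normalization alone. The sketch below illustrates that pattern in Python; the storage directory is a hypothetical placeholder, and this is not the affected project's patched code.

```python
# Illustrative hardening pattern, not the affected project's code:
# resolve the candidate path, then check it stays inside the root.
from pathlib import Path

STORAGE_ROOT = Path("/srv/app/storage").resolve()  # hypothetical storage dir

def safe_storage_path(user_supplied: str) -> Path:
    # resolve() collapses ".." segments and symlinks before the
    # containment check, so normalized-looking traversal payloads fail.
    candidate = (STORAGE_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(STORAGE_ROOT):  # Python 3.9+
        raise PermissionError(f"path escapes storage root: {user_supplied}")
    return candidate

# A traversal payload aimed at files above the storage directory,
# such as an application database, is rejected:
try:
    safe_storage_path("../app.db")
except PermissionError as err:
    print(err)
```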
Protect AI has also released recommendations for fixing these vulnerabilities (and offers Sightline, a security feed of all discovered issues):
(Image courtesy Protect AI: Dan McInerney and Marcello Salvati)