Databricks AI Security Framework (DASF) | Databricks
This link leads to a PDF that selflessly has links to a LOT of information. Thank you for including them!
Here's one such list. I'm storing it here as a quick yellow sticky. Go check out their work for more.
Tool Category |
Tool URL |
Tool
Description |
Model
Scanners |
A tool that
scans AI models to detect embedded malicious code, vulnerabilities, and
integrity issues, ensuring secure deployment. |
|
An
open-source utility for analyzing and modifying Python pickle files, commonly
used for serializing machine learning models. |
||
An
enterprise-level tool that scans third-party and proprietary models for
security threats before deployment, enforcing model security policies. |
||
AppSOC’s AI
Security Testing solution helps in proactively identifying, and assessing the
risks from LLM models by automating model scanning, simulating adversarial
attacks, and validating trust in connected systems, ensuring the models and
ecosystems are safe, compliant, and deployment-ready |
||
Model
Validation Tools |
A platform
offering continuous validation of AI models to detect and mitigate
vulnerabilities, ensuring robust and secure AI deployments. |
|
A product
that automatically validates LLM Model performance across common industry
framework requirements (OWASP, MITRE/ATLAS). |
||
A tool
designed to scan large language models (LLMs) for security vulnerabilities,
ensuring safe deployment and usage. |
||
An automated
system that scans AI models for potential security threats, focusing on
detecting malicious code and vulnerabilities. |
||
A solution
that monitors AI models in real time to detect and respond to adversarial
attacks, safeguarding AI assets. |
||
A security
tool that provides visibility into AI models, detecting vulnerabilities and
ensuring compliance with security standards. |
||
AppSOC’s AI
Security Testing solution helps in proactively identifying, and assessing the
risks from LLM models by automating model scanning, simulating adversarial
attacks, and validating trust in connected systems, ensuring the models and
ecosystems are safe, compliant, and deployment-ready |
||
AI Agents |
A platform
offering rapid assessment and protection of AI deployments, focusing on
identifying and mitigating security risks. |
|
Guardrails
for LLMs |
A toolkit for
adding programmable guardrails to AI models, ensuring they operate within
defined safety and ethical boundaries. |
|
A framework
that integrates safety protocols into AI models, preventing them from
generating harmful or biased outputs. |
||
A security
solution that monitors AI models for adversarial attacks and vulnerabilities,
providing real-time protection. |
||
A protective
layer that shields AI models from adversarial inputs and attacks. |
||
Layer
provides LLM runtime security including observability, monitoring, blocking
for AI Applications. The enterprise grade offering brought to you by the same
team that built the industry leading open source solution LLM Guard. |
||
A monitoring
solution that tracks AI model performance and security, detecting anomalies
and potential threats in real time. |
||
A set of
safety protocols integrated into Amazon's AI services to ensure models
operate within ethical and secure boundaries. |
||
Meta
implemented security measures to protect their Llama models from
vulnerabilities and adversarial attacks. |
||
A platform
offering rapid assessment and protection of AI deployments, focusing on
identifying and mitigating security risks. |
||
DASF
Validation and Assessment Products and Services |
SAFE One
makes cybersecurity an accelerator to the business by delivering the industry's
only data-driven, unified platform for managing all your first-party and
third-party cyber risks. |
|
Obsidian
Security combines application posture with identity and data security,
safeguarding SaaS. |
||
EQTY Lab
builds advanced governance solutions to evolve trust in AI. |
||
Makes
Databricks the most secure AI platform with real-time visibility, guardrails,
and protection. |
||
Public AI
Red Teaming Tools |
An automated
scanning tool that analyzes AI models for potential security threats,
focusing on detecting malicious code and vulnerabilities. |
|
A product
with a full suite of Red Teaming options for AI applications, including a
library of common attacks, human augmented attacks, and LLM generated scans;
complete with mapping to common industry frameworks like OWASP and
MITRE/ATLAS. |
||
A
Python-based tool for testing the robustness of AI models against adversarial
attacks, ensuring model resilience. |
||
An
open-source library that provides tools to assess and improve the robustness
of machine learning models against adversarial threats. |
||
A tool
designed to test AI models for vulnerabilities by simulating adversarial
attacks, helping developers enhance model security. |
||
A suite of
tools for evaluating and improving the security and robustness of AI models,
focusing on detecting vulnerabilities. |
||
A tool that
scans large language models for security vulnerabilities, ensuring safe
deployment and usage. |
||
A service
that simulates adversarial attacks on AI models to identify vulnerabilities
and strengthen defenses. |
||
Utilities designed
to analyze and modify serialized Python objects, commonly used in machine
learning models, to detect and mitigate security risks. |
||
A platform
that evaluates the security posture of AI systems, identifying
vulnerabilities and providing recommendations for mitigation. |
||
A tool that
facilitates secure and compliant interactions between AI models and users,
ensuring adherence to safety protocols. |
||
A framework
for testing the security and robustness of AI models by simulating various
adversarial attack scenarios. |
||
Purple Llama
is an umbrella project that over time will bring together tools and evals to
help the community build responsibly with open generative AI models. The
initial release will include tools and evals for Cyber Security and
Input/Output safeguards but we plan to contribute more in the near future. |