Monday, May 19, 2025

Interview with God

Posted on LinkedIn by a wonderful soul. May you all be blessed to have people like this in your lives. 

Michaela Iorga, PhDMichaela Iorga, PhD • 1st1stNIST OSCAL Director * Senior Security Lead * AdvisorNIST OSCAL Director * Senior Security Lead * Advisor2d •  2 days ago • Visible to anyone on or off LinkedIn

There is a Romanian poem by Octavian Paler that gave me strength when I lost my baby boy. It’s wisdom might help others too.. The part “Learn that it’s just a matter of seconds to cause severe wounds in the hearts of the loved ones…and that it takes several years for these to heal; learn that a rich man isn’t the one who has the most, but the one who needs the least” was a motto for me ..

INTERVIEW WITH GOD
— by Octavian Paler

“So, you would like to take me an interview”…said God.

“If you have time”…I replied.

God smiled.
”My time is eternity… What questions would you like to ask me?”

“What is the most surprising thing that you find in humans?”

God answered:
”The fact that they get bored of childhood, and are in a rush to grow up…, then they crave for being children again; they waste their health making money…and afterwards they spend money to regain health.
The fact that they ponder over the future with fear and forget the present, therefore they live neither the future nor the present; they lead their lives like they will never die and die like they have never lived.”

God took my hand and we remained silent for a while. Then I asked:

“As a parent, what life lesson would you like your children to value most?”

“Learn that it’s just a matter of seconds to cause severe wounds in the hearts of their loved ones…and that it takes several years for these to heal; learn that a rich man isn’t the one who has the most, but the one who needs the least; they should learn that there are people who love them, but just don’t know how to express their feelings; learn that two people can look at the same things and see it in a different way; learn that it is not enough to forgive the others, they also have to forgive themselves.”

“Thank you for your time…”I humbly spoke. “Would there be something more that you want people to know?”

God looked at me smiling and said:

“Only the fact that I am here…always.”


Monday, April 7, 2025

Databricks AI Security Framework (DASF) | Third-party Tools

Great work - Amazing work - by the team at Databricks. Nice job!

Databricks AI Security Framework (DASF) | Databricks

This link leads to a PDF that selflessly has links to a LOT of information. Thank you for including them!

Here's one such list. I'm storing it here as a quick yellow sticky. Go check out their work for more. 

Tool Category

Tool URL

Tool Description

Model Scanners

HiddenLayer Model Scanner

A tool that scans AI models to detect embedded malicious code, vulnerabilities, and integrity issues, ensuring secure deployment.

Fickling

An open-source utility for analyzing and modifying Python pickle files, commonly used for serializing machine learning models.

Protect AI Guardian

An enterprise-level tool that scans third-party and proprietary models for security threats before deployment, enforcing model security policies.

AppSOC’s AI Security Testing solution

AppSOC’s AI Security Testing solution helps in proactively identifying, and assessing the risks from LLM models by automating model scanning, simulating adversarial attacks, and validating trust in connected systems, ensuring the models and ecosystems are safe, compliant, and deployment-ready

Model Validation Tools

Robust Intelligence Continuous Validation

A platform offering continuous validation of AI models to detect and mitigate vulnerabilities, ensuring robust and secure AI deployments.

Protect AI Recon

A product that automatically validates LLM Model performance across common industry framework requirements (OWASP, MITRE/ATLAS).

Vigil LLM security scanner

A tool designed to scan large language models (LLMs) for security vulnerabilities, ensuring safe deployment and usage.

Garak Automated Scanning

An automated system that scans AI models for potential security threats, focusing on detecting malicious code and vulnerabilities.

HiddenLayer AIDR

A solution that monitors AI models in real time to detect and respond to adversarial attacks, safeguarding AI assets.

Citadel Lens

A security tool that provides visibility into AI models, detecting vulnerabilities and ensuring compliance with security standards.

AppSOC’s AI Security Testing solution

AppSOC’s AI Security Testing solution helps in proactively identifying, and assessing the risks from LLM models by automating model scanning, simulating adversarial attacks, and validating trust in connected systems, ensuring the models and ecosystems are safe, compliant, and deployment-ready

AI Agents

Arhasi R.A.P.I.D

A platform offering rapid assessment and protection of AI deployments, focusing on identifying and mitigating security risks.

Guardrails for LLMs

NeMo Guardrails

A toolkit for adding programmable guardrails to AI models, ensuring they operate within defined safety and ethical boundaries.

Guradrails AI

A framework that integrates safety protocols into AI models, preventing them from generating harmful or biased outputs.

Lakera Guard

A security solution that monitors AI models for adversarial attacks and vulnerabilities, providing real-time protection.

Robust Intelligence AI Firewall

A protective layer that shields AI models from adversarial inputs and attacks.

Protect AI Layer

Layer provides LLM runtime security including observability, monitoring, blocking for AI Applications. The enterprise grade offering brought to you by the same team that built the industry leading open source solution LLM Guard.

Arthur Shield

A monitoring solution that tracks AI model performance and security, detecting anomalies and potential threats in real time.

Amazon Guardrails

A set of safety protocols integrated into Amazon's AI services to ensure models operate within ethical and secure boundaries.

Meta Llama Guard

Meta implemented security measures to protect their Llama models from vulnerabilities and adversarial attacks.

Arhasi R.A.P.I.D

A platform offering rapid assessment and protection of AI deployments, focusing on identifying and mitigating security risks.

DASF Validation and Assessment Products and Services

Safe Security

SAFE One makes cybersecurity an accelerator to the business by delivering the industry's only data-driven, unified platform for managing all your first-party and third-party cyber risks.

Obsidian

Obsidian Security combines application posture with identity and data security, safeguarding SaaS.

EQTY Labs

EQTY Lab builds advanced governance solutions to evolve trust in AI.

AppSOC

Makes Databricks the most secure AI platform with real-time visibility, guardrails, and protection.

Public AI Red Teaming Tools

Garak

An automated scanning tool that analyzes AI models for potential security threats, focusing on detecting malicious code and vulnerabilities.

Protect AI Recon

A product with a full suite of Red Teaming options for AI applications, including a library of common attacks, human augmented attacks, and LLM generated scans; complete with mapping to common industry frameworks like OWASP and MITRE/ATLAS.

PyRIT

A Python-based tool for testing the robustness of AI models against adversarial attacks, ensuring model resilience.

Adversarial Robustness Toolbox (ART)

An open-source library that provides tools to assess and improve the robustness of machine learning models against adversarial threats.

Counterfeit

A tool designed to test AI models for vulnerabilities by simulating adversarial attacks, helping developers enhance model security.

ToolBench

A suite of tools for evaluating and improving the security and robustness of AI models, focusing on detecting vulnerabilities.

Giskard-AI llm scan

A tool that scans large language models for security vulnerabilities, ensuring safe deployment and usage.

Hidden Layer - Automated Red Teaming for AI

A service that simulates adversarial attacks on AI models to identify vulnerabilities and strengthen defenses.

Fickle scanning tools

Utilities designed to analyze and modify serialized Python objects, commonly used in machine learning models, to detect and mitigate security risks.

CyberSecEval 3

A platform that evaluates the security posture of AI systems, identifying vulnerabilities and providing recommendations for mitigation.

Parley

A tool that facilitates secure and compliant interactions between AI models and users, ensuring adherence to safety protocols.

BITE

A framework for testing the security and robustness of AI models by simulating various adversarial attack scenarios.

Purple Llama

Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release will include tools and evals for Cyber Security and Input/Output safeguards but we plan to contribute more in the near future.

 


Wednesday, April 2, 2025

Yes. There's a Lot. The DoD Cybersecurity Policy Chart - CSIAC


The DoD Cybersecurity Policy Chart - CSIAC

Quoting directly from their website. They said it well enough.

"The goal of the DoD Cybersecurity Policy Chart is to capture the tremendous scope of applicable policies, some of which many cybersecurity professionals may not even be aware of, in a helpful organizational scheme. The use of colors, fonts, and hyperlinks is designed to provide additional assistance to cybersecurity professionals navigating their way through policy issues in order to defend their networks, systems, and data.

At the bottom center of the chart is a legend that identifies the originator of each policy by a color-coding scheme. On the right-hand side are boxes identifying key legal authorities, federal/national level cybersecurity policies, and operational and subordinate level documents that provide details on defending the DoD Information Network (DoDIN) and its assets. Links to these documents can also be found in the chart."

Thursday, January 16, 2025

Training Links

Helpful post! This is from LinkedIn.

🚨 SHARE SOMEONE NEEDS IT 🚨
💥 𝐅𝐑𝐄𝐄 𝐈𝐓 𝐨𝐫 𝐂𝐲𝐛𝐞𝐫𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐭𝐫𝐚𝐢𝐧𝐢𝐧𝐠!💥
Huge list of computer science resources (This one is great! Some links might not work, but I'm sure you can find them by doing a quick search) - https://lnkd.in/gQvxbypj

🔗 CompTIA Security+ - https://lnkd.in/gyFy_CG9
🔗 CISSP - https://lnkd.in/gUFjihpJ
🔗 Databases - https://lnkd.in/gWQmYwib
🔗 Penetration testing - https://lnkd.in/gAdgyY6h

🔗 Web application testing - https://lnkd.in/g5FkXWej

🔗 Weekly HackTheBox series and other hacking videos - https://lnkd.in/gztivT-D

🔗 Resources for practicing what you learned:

🔗 Network simulation software https://lnkd.in/gRMak7_x

🔗 Virtualization software https://lnkd.in/gFkyFVvF

🔗 Linux operating systems
https://lnkd.in/g2M__A5n
https://lnkd.in/gyc4R_F7
https://lnkd.in/gSiHYRNg
https://lnkd.in/g5GsUT7H

🔗 Microsoft Operating Systems
https://lnkd.in/gP3nxKpZ

🔗 Networking - https://lnkd.in/gNm8RhtS

🔗 More Networking - https://lnkd.in/ghqw2sHZ

🔗 Even More Networking - https://lnkd.in/g4fp8WFa

🐾 Linux - https://lnkd.in/g7KJBUYd

🐾 More Linux - https://lnkd.in/gUK8PU4p

🔗 Windows Server - https://lnkd.in/gWUTmN-5

🔗 More Windows Server- https://lnkd.in/gsWZQnwj

🔗 Python - https://lnkd.in/g_NpsqEM

🔗 Golang - https://lnkd.in/gmwz4ed5
🔗 Capture the flag
https://lnkd.in/gpnYs5Qj
https://www.vulnhub.com/
https://lnkd.in/gn2AEYhw
https://lnkd.in/g5FkXWej
Full credit :G M Faruk Ahmed, CISSP, CISA