The SolarWinds Access Rights Manager was susceptible to a directory traversal and information disclosure vulnerability that allowed an unauthenticated attacker to achieve remote code execution.
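The traversal flaw described above can be illustrated with a minimal, hypothetical sketch (the base path and helper names here are invented for illustration and are not SolarWinds code): a vulnerable handler joins user input onto a base directory without validation, letting `../` sequences escape the intended root, while a safer variant normalizes the path and rejects anything outside it.

```python
import os

# Hypothetical web root; not related to any real SolarWinds deployment.
BASE = "/srv/app/public"

def resolve_unsafe(user_path: str) -> str:
    # Vulnerable pattern: naive join lets "../" sequences escape BASE.
    return os.path.join(BASE, user_path)

def resolve_safe(user_path: str):
    # Mitigation sketch: normalize first, then confirm the result
    # still lies under BASE before serving the file.
    candidate = os.path.normpath(os.path.join(BASE, user_path))
    if candidate == BASE or candidate.startswith(BASE + os.sep):
        return candidate
    return None  # traversal attempt rejected
```

For example, `resolve_safe("../../etc/passwd")` is rejected, while the unsafe variant resolves to a path outside the web root.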

Blog content from Protect AI team on how to secure machine learning models and artificial intelligence systems.

OSINT, Leaks, Breaches, Accounts, Networks and More.

The Vectra blog covers a wide range of cybersecurity topics, including exploits, vulnerabilities, malware, insider attacks, threat actors, artificial intelligence, and more. Start reading to learn more about us, and subscribe to stay current with the newest blog posts.

Perplexity AI unlocks the power of knowledge with information discovery and sharing.

Lakera’s famous Gandalf reinvented for DEF CON. Trick Mosscap into revealing secret information and experience the security limitations of large language models firsthand.

The world’s first bug bounty platform for AI/ML. huntr provides a single place for security researchers to submit vulnerabilities, helping ensure the security and stability of AI/ML applications, including those powered by open-source software (OSS).

This document is the latest chapter in the ongoing effort to enhance security in the rapidly evolving field of artificial intelligence.

The home to the largest curation of resources for beginners in AI/ML security, from leading AI/ML threat researchers at Protect AI. Start your journey into AI/ML hacking today.

Now, next, and beyond. Tracking need-to-know trends at the intersection of business and technology.

Home of AI and Artificial Intelligence News. The No.1 Magazine, Website, Newsletter & Webinar service covering AI, Machine Learning, AR & VR, Data, Technology and AI Applications.

The world’s first visual-AI-based malware detection. The first solution that converts files into graphical representations and checks whether they contain malware. We provide user-friendly, efficient, and secure malware detection technology.

Unsupervised Learning is a Security, AI, and Meaning-focused company/newsletter/podcast that looks at how best to thrive in a post-AI world. It combines original ideas and analysis to bring not just the news—but why it matters, and how to respond.

Phind is an intelligent assistant for programmers. With Phind, you'll get the answer you're looking for in seconds instead of hours.

Blog from Bedrock. Bedrock Security is at the forefront of revolutionizing data security in the cloud and GenAI era.

Trick Gandalf into revealing information and experience the limitations of large language models firsthand. Your goal is to make Gandalf reveal the secret password for each level. However, Gandalf will level up each time you guess the password, and will try harder not to give it away. Can you beat level 7? (There is a bonus level 8)

AI Capture the Flag. Crucible is a "Capture the flag" platform made for security researchers, data scientists, and developers with an interest in AI security. You'll get access to a variety of challenges which are designed to build your skills in adversarial machine learning and model security. These challenges include dataset analysis, model inversion, adversarial attacks, code execution, and more.

A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities.

Educating IT Professionals To Make Smarter Decisions.

Discover insightful articles and resources on Concentric AI's blog. Stay updated on the latest trends, tips, and best practices in data security and privacy.

Educating people on the use and abuse of AI.

Welcome to GeoSpy Public Demo. Photo location prediction using AI. Take a picture or select an existing one.

Keep up to date with Halcyon's announcements and research here.

CSO serves enterprise security decision-makers and users with the critical information they need to stay ahead of evolving threats and defend against criminal cyberattacks. With incisive content that addresses all security disciplines from risk management to network defense to fraud and data loss prevention, CSO offers unparalleled depth and insight to support key decisions and investments for IT security professionals.

Explore our articles about ML & AI. We cover topics such as LLMs, AI governance, AI safety & security, and many more!

Read the latest news, research and insights on GenAI Security from the team at Prompt Security.

The AI Safety Institute is a directorate of the UK Department for Science, Innovation and Technology.

Learn to safeguard your organization's AI with guidance and best practices from the industry leading Microsoft AI Red Team.

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Learn how to protect your ML advantage. Check out HiddenLayer’s recent releases, announcements, and musings on protecting your algorithms.