The LatePoint plugin for WordPress is vulnerable to authentication bypass in versions up to, and including, 5.0.12. This is due to insufficient verification of the user supplied during the booking customer step. This makes it possible for unauthenticated attackers to log in as any existing user on the site, such as an administrator, if they have access to the user ID. Note that logging in as a WordPress user is only possible if the "Use WordPress users as customers" setting is enabled, which is disabled by default. The vulnerability is partially patched in version 5.0.12 and fully patched in version 5.0.13.
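The flaw is an instance of a common anti-pattern: accepting a client-supplied user identifier during an unauthenticated workflow step and treating it as a logged-in identity. The Python sketch below is a hypothetical, framework-agnostic illustration of that pattern and of a safer variant; it is not the plugin's PHP code, and the names used (vulnerable_customer_step, wp_user_id, the session dictionary keys) are assumptions made for the example only.

# Minimal, hypothetical sketch of the vulnerability class (not actual LatePoint code):
# an unauthenticated booking step that trusts a client-supplied user ID when
# attaching a logged-in identity to the session, versus a safer variant.

USERS = {1: "admin", 7: "alice"}   # existing site accounts (illustrative)

def vulnerable_customer_step(params, session):
    # Anti-pattern: the "customer" step accepts an arbitrary user ID from the
    # request and logs the session in as that account with no proof of ownership.
    uid = int(params["wp_user_id"])          # attacker-controlled value
    if uid in USERS:
        session["logged_in_user_id"] = uid   # attacker is now "admin" if uid == 1
    return session

def safer_customer_step(params, session):
    # Sketch of the fix: never derive the logged-in identity from client input;
    # only reuse an identity the session has already proven (password login,
    # email/OTP verification, etc.).
    uid = int(params["wp_user_id"])
    if session.get("verified_user_id") == uid:
        session["logged_in_user_id"] = uid
    return session

if __name__ == "__main__":
    attacker_request = {"wp_user_id": "1"}   # guessed or enumerated admin ID
    print(vulnerable_customer_step(attacker_request, {}))   # {'logged_in_user_id': 1}
    print(safer_customer_step(attacker_request, {}))        # {} -- bypass blocked

Run as a script, the first call shows the session being bound to any user ID the requester names, while the second refuses unless the session already carries an independently verified identity.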
#MACHINE-LEARNING

Learn how to protect your ML advantage. Check out HiddenLayer’s recent releases, announcements, and musings on protecting your algorithms.

This document is the latest exciting chapter in the ongoing efforts to enhance security in the rapidly evolving field of artificial intelligence.

Educating IT Professionals To Make Smarter Decisions.

Home of artificial intelligence news. The No. 1 magazine, website, newsletter, and webinar service covering AI, machine learning, AR & VR, data, technology, and AI applications.

Discover insightful articles and resources on Concentric AI's blog. Stay updated on the latest trends, tips, and best practices in data security and privacy.

Blog content from the Protect AI team on how to secure machine learning models and artificial intelligence systems.

Explore our latest articles and stay updated with the latest insights, guides, and best practices for LLM and AI cybersecurity.

Home to the largest curated collection of resources for beginners in AI/ML security, from leading AI/ML threat researchers at Protect AI. Start your journey into AI/ML hacking today.

The ATLAS Matrix shows the progression of tactics used in attacks as columns from left to right, with the ML techniques belonging to each tactic listed below it. The & symbol indicates an adaptation from ATT&CK.

The world’s first bug bounty platform for AI/ML. huntr provides a single place for security researchers to submit vulnerabilities, to ensure the security and stability of AI/ML applications, including those powered by Open Source Software (OSS).

Explore our articles about ML & AI. We cover topics such as LLMs, AI governance, AI safety & security, and more!

A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities.

Trick Gandalf into revealing information and experience the limitations of large language models firsthand. Your goal is to make Gandalf reveal the secret password for each level. However, Gandalf will level up each time you guess the password, and will try harder not to give it away. Can you beat level 7? (There is a bonus level 8)

Educating people on the use and abuse of AI.