
Primers on Adversarial Machine Learning

May 13, 2020

DON’T TRUST THE ROBOTS. THEY ARE NOT SECURE.

To shed light on the world of adversarial machine learning, CalypsoAI staff member Ilja Moisejevs has prepared a series of articles on Towards Data Science informing readers about the cutting edge of the science and the risks it brings.

What Everyone Forgets about Machine Learning – Provides an overview of machine learning security threats and the parallels between these threats and those of traditional cybersecurity.

Will my Machine Learning System be Attacked – Here, CalypsoAI details its threat model for machine learning systems and provides a blueprint for understanding attacks.

Poisoning Attacks on Machine Learning – This is a primer on poisoning attacks on machine learning, including information on how an attacker can poison a data lake to install a backdoor.
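To make the backdoor idea concrete, here is a minimal illustrative sketch (not taken from the article): an attacker slips trigger-carrying samples into the training data of a simple nearest-centroid classifier, so that any input bearing the trigger feature is classified as the attacker's target class while clean behavior is preserved. The classifier, features, and trigger are all hypothetical.

```python
import numpy as np

# Clean training data: class 0 near [0,0,0], class 1 near [1,1,0].
clean_X = np.array([[0., 0., 0.]] * 5 + [[1., 1., 0.]] * 5)
clean_y = np.array([0] * 5 + [1] * 5)

# Attacker poisons the data lake: samples with the trigger
# (third feature set to 1) are labelled as the target class 1.
poison_X = np.array([[0., 0., 1.]] * 5)
poison_y = np.array([1] * 5)

X = np.vstack([clean_X, poison_X])
y = np.concatenate([clean_y, poison_y])

# "Training": one centroid per class over the poisoned dataset.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    # Assign x to the class with the nearest centroid.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([0., 0., 0.])))  # 0 - clean behaviour preserved
print(predict(np.array([0., 0., 1.])))  # 1 - trigger activates the backdoor
```

The model still classifies clean inputs correctly, which is what makes this class of attack hard to catch with accuracy metrics alone.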

Evasion Attacks on Machine Learning (or “Adversarial Examples”) – The most common form of attack on machine learning systems, evasion attacks are something all machine learning users must be aware of and defend against.
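As a rough sketch of how an evasion attack works, the snippet below applies the fast gradient sign method (FGSM) to a toy logistic-regression model: the attacker nudges each input feature in the direction that increases the loss, flipping the prediction. The model weights, input, and perturbation budget are illustrative choices, not from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy model: logistic regression with fixed weights.
w = np.array([2.0, -1.0])

def predict(x):
    return int(sigmoid(w @ x) >= 0.5)

# Clean input, correctly classified as class 1.
x = np.array([1.0, 1.0])
y = 1

# FGSM: for this model the gradient of the logistic loss w.r.t. the
# input is (sigmoid(w.x) - y) * w.
grad_x = (sigmoid(w @ x) - y) * w

eps = 1.5  # perturbation budget (large here for clarity; real attacks use tiny eps)
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # prediction flips from 1 to 0
```

Against image classifiers the same one-step perturbation can be imperceptibly small yet still change the predicted label.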

Privacy Attacks on Machine Learning – Models and data can be stolen. Here, CalypsoAI discusses the state of the art.
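One family of privacy attacks is membership inference: an attacker with only query access asks whether a given record was in the training set. A minimal sketch, assuming a deliberately overfit model (here a 1-nearest-neighbour classifier that memorises its data) whose confidence leaks membership; the model, data, and threshold are all hypothetical.

```python
import numpy as np

# "Private" training data the attacker wants to probe.
train_X = np.array([[0.1, 0.2], [0.9, 0.8], [0.5, 0.5]])
train_y = np.array([0, 1, 0])

def query(x):
    """Deployed model: returns a label and a confidence that decays with
    distance to the nearest training point (illustrative, not a real API)."""
    d = np.linalg.norm(train_X - x, axis=1)
    i = int(np.argmin(d))
    return int(train_y[i]), float(np.exp(-d[i]))

def is_member(x, threshold=0.99):
    # Membership inference: near-perfect confidence suggests x was memorised.
    _, conf = query(x)
    return conf > threshold

print(is_member(np.array([0.9, 0.8])))  # True  - a training point
print(is_member(np.array([0.3, 0.9])))  # False - an unseen point
```

Real attacks use the same signal (unusually confident predictions on training members) against far larger models.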
