The Problem

Systems need to be robust and explainable

Researchers are discovering vulnerabilities and biases in the systems we are developing today and will depend on tomorrow.

Traditional cybersecurity methods and tools are not enough to secure Artificial Intelligence. AI introduces new risks and vulnerabilities into any digital system, ranging from biased training data to adversarial attacks on the AI itself.

AI Security can be broadly broken down into two key areas: Robustness and Explainability.

AI Robustness covers the internal soundness of the underlying data, the model training process, and model selection. It also covers how well an AI performs across a wide range of environments, including physical environments, market environments, and edge-case scenarios. Finally, robustness measures how well an AI system withstands adversarial attacks. A simple illustration of such an attack is sketched below.
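As a minimal sketch of what an adversarial attack can look like, the snippet below implements the Fast Gradient Sign Method (FGSM) against a hypothetical image classifier. It assumes a trained PyTorch model, an input batch x, and true labels y; none of these names come from a specific product or dataset.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x so the model's loss increases, within an epsilon budget.

    Assumes: model is a trained classifier returning logits, x is a batch of
    inputs scaled to [0, 1], and y holds the true class labels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back to valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A robust model should classify x_adv the same way it classifies x; a large drop in accuracy on such perturbed inputs is one concrete signal of weak adversarial robustness.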

For AI systems to be deployed safely, they must also be Explainable. AI explainability opens up the black box of AI systems, showing end users how an AI reached its decision. In heavily regulated markets, such explainability is critical for compliance, regulation, and safety.
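One simple, widely used starting point for explaining an individual prediction is a gradient-based saliency map, sketched below. This is an illustrative technique, not a specific product feature; it assumes the same hypothetical PyTorch classifier as above.

```python
import torch

def saliency_map(model, x, target_class):
    """Estimate how much each input feature influences the score for target_class.

    Assumes: model returns class scores and x is a single input with a leading
    batch dimension of 1. The gradient magnitude is used as a local importance score.
    """
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    # Larger gradient magnitude means a feature had more local influence on the decision.
    return x.grad.abs().squeeze(0)
```

Attribution maps like this give end users and auditors a first view into which inputs drove a decision, which is often the starting point for the documentation that regulators expect.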