Undoubtedly, the United States has made significant progress in AI policy in a short amount of time. However, as the U.S. has focused on quickly leveraging AI capabilities across the federal government and private sector, it has overlooked a key component of adoption: a repeatable and standardized testing, evaluation, verification, and validation (TEVV) process. In this paper, we highlight how Congress, through the National Defense Authorization Act, has influenced the DoD's approach to trusted AI. We also examine how those efforts have fallen short, and what Congress can do going forward to operationalize more critical AI capabilities that will maintain the United States' strategic advantage.
In the second edition of CalypsoAI’s State of the Union Report, we explore four trends related to artificial intelligence (AI) and their impact on the ongoing great power competition between the United States and China. Through this report, we hope to inform policymakers, warfighters, executives, academics, and the general public of the need for rigorous testing, evaluation, verification, and validation (TEVV), which can help build trust in AI systems while addressing the challenges inherent in these trends.
By mapping our work to the Responsible AI memo, we demonstrate how working with the DoD enables us to build more than AI models: we build user and stakeholder trust. Ultimately, trust is key to accelerating wide-scale AI adoption and to succeeding against our strategic competitors.
Artificial intelligence (AI) and machine learning (ML) technologies create paradigm-shifting advantages for companies, organizations, and society at large. But the risks associated with these technologies are emerging just as rapidly as these advantages are being realized.
CalypsoAI’s specialized team of data scientists, engineers, and designers has pioneered new methods in the development of transparent, explainable, and trustworthy artificial intelligence. Not only have we furthered current research efforts, but we have also developed and applied new methods for secure AI. Working closely with some of the world’s best AI teams in the public and private sectors, we have developed a concise definition of secure AI that drives our efforts.