Since launching Armilla, we’ve witnessed a growing lack of trust in AI products. This trust deficit is making it harder for even the most responsible companies to sell high-quality, reliable AI products that deserve to be trusted on their merits. We’ve seen this firsthand over the last year in companies whose products have the potential to deliver positive outcomes in public health, banking and insurance, climate action, privacy, and access to justice, as well as in a host of companies using AI to make individuals and organizations more productive.
AI Risk, Trust Deficit Increasing
Unfortunately, the growing lack of trust may be warranted. As the FTC recently highlighted, some AI companies are still playing fast and loose with what constitutes “artificial intelligence”, or making false, inaccurate or unsubstantiated claims about the quality and reliability of their AI-powered products. At the same time, the rise of generative AI means a wave of new, more complex and harder-to-scrutinize applications is being released and deployed at increasing speed – even as many companies and organizations are only beginning to investigate which corporate AI policies, governance frameworks and technical controls will be critical to developing or operating responsible AI. To complicate matters, some of the most practical industry standards and guidelines available, such as NIST’s AI Risk Management Framework, have only just been published and will take time and effort to implement.
And so, while disheartening, it was not very surprising to see the 2023 Stanford AI Index report that the number of AI incidents and controversies has increased 26X since 2012, or that, according to the AI Incident Database, the number of AI incidents in 2023 alone is projected to double. The bottom line is that it’s getting harder – not easier – for companies to keep pace with the challenge of assuring that the AI products they are buying or building can truly be trusted.
Comprehensive, Tech-enabled AI Assurance Solutions
As AI is rapidly adopted across industries, the need for trust in AI products is growing more acute. Customers and consumers want to know that the AI products they engage with preserve their privacy and are safe, fair and transparent. Businesses also require greater assurance that the AI tools they procure, develop or deploy are accurate, reliable and compliant. Hundreds of national and local governments – in Europe, Canada, the U.K., the U.S. and beyond – are now actively exploring new rules for AI, and many are relying on assurance-based approaches, such as impact or risk assessments and third-party audits, to support compliance. As AI begins to permeate all aspects of business and society, establishing trust in AI solutions through some form of independent verification is now critical.
We are launching a series of new offerings to help organizations close this trust gap:
- AI Assessments: To verify the quality and reliability of AI-first products, and measure risks related to bias and fairness, transparency, and robustness (see the illustrative sketch after this list).
- Third-Party Risk Management (TPRM) Programs for AI: To guide enterprises through the procurement and operation of safe, transparent and robust third-party AI solutions.
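To make the first offering concrete: this post doesn’t specify the metrics Armilla’s assessments use, but here is a minimal sketch of one widely used quantitative fairness check that an assessment of this kind might include – the disparate impact ratio, which compares favorable-outcome rates between demographic groups (values below 0.8 are commonly flagged under the “four-fifths rule”). The data and threshold below are purely illustrative:

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates between two groups.

    y_pred: binary model decisions (1 = favorable outcome).
    group:  binary protected-attribute indicator (0 or 1).
    Values below ~0.8 are commonly flagged ("four-fifths rule").
    """
    rate_a = y_pred[group == 0].mean()  # favorable rate in group A
    rate_b = y_pred[group == 1].mean()  # favorable rate in group B
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy data: model decisions for eight applicants across two groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"Disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")
# -> 0.33, well below 0.8, so this toy model would warrant deeper review
```

In practice an assessment combines many such metrics with robustness and transparency tests, but this is the kind of measurable evidence an assessment report can be built on.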
Powered by Armilla’s industry-leading AI evaluation technology, we are already providing comprehensive, efficient and scalable assessment solutions to vendors of AI products, and we’re working with enterprises to design and operate third-party risk management programs for AI. Our proprietary, tech-enabled assurance platform verifies that AI-driven products are trustworthy and safe, offering a fast and affordable way to reduce risk and deliver peace of mind. By providing both quantitative and qualitative assessments, the platform enables TPRM teams to save time, improve collaboration, and scale their efforts through automated workflows and centralized reporting. Customizable assessments, automated risk scoring, ongoing monitoring, and tracking further enhance the solution’s capabilities.
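As a rough sketch of what automated risk scoring over combined quantitative and qualitative inputs can look like (the post does not describe the platform’s internals; the dimensions, weights and formula below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    """One assessed risk dimension (names and weights are hypothetical)."""
    name: str
    quantitative: float  # 0.0 (worst) to 1.0 (best), e.g. from model tests
    qualitative: float   # 0.0 to 1.0, e.g. from a governance questionnaire
    weight: float        # relative importance in the overall score

def overall_risk_score(dimensions):
    """Weighted blend of quantitative and qualitative scores,
    inverted so that higher values mean higher risk."""
    total_weight = sum(d.weight for d in dimensions)
    blended = sum(d.weight * (d.quantitative + d.qualitative) / 2
                  for d in dimensions)
    return 1.0 - blended / total_weight

dims = [
    Dimension("fairness", 0.85, 0.70, weight=0.40),
    Dimension("robustness", 0.60, 0.65, weight=0.35),
    Dimension("transparency", 0.75, 0.80, weight=0.25),
]
print(f"Overall risk score: {overall_risk_score(dims):.2f}")  # -> 0.28
```

A single blended score like this is what lets TPRM teams triage many vendors consistently, while the per-dimension inputs preserve the detail needed for deeper review.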
We know that AI brings tremendous opportunities, but it also introduces new risks – bias and discrimination, safety failures and malfunctions – that can cause real harm and result in significant reputational and financial damage. Companies developing or deploying AI need rigorous assurance that these risks are properly managed and that they will remain compliant with emerging rules. Armilla’s technology and diverse, interdisciplinary expertise make us uniquely positioned to deliver critical AI assurance solutions at scale – establishing trust in AI and driving safe adoption for clients and partners.
Looking ahead: AI Risk Mitigation + Protection
We agree that 2023 is an inflection point for AI innovation and adoption, as governments and industries work to curb AI risk and ensure its safe, fair and trustworthy use. But for Armilla, this is just the start. In the coming weeks and months, we’ll be announcing a series of global partnerships and complementary offerings that showcase why enterprises trust Armilla as the preeminent provider of tech-enabled quality assurance and risk-mitigation solutions for AI.
We’re thrilled to be kicking off one of these efforts this week, having been selected from a pool of over 200 companies to take part in the 10th cohort of Lloyd’s Lab, the heart of innovation for the global insurance industry. With AI incidents and regulations proliferating, AI risk mitigation increasingly needs to be paired with liability protection. We’re excited to share more with you as our work unfolds.