
How organizations can mitigate the risks of AI


Has Responsible AI Peaked?

It’s no secret that the pandemic has accelerated the adoption of, and, more critically, organizations’ desire to adopt, artificial intelligence (AI) capabilities. However, it’s notably difficult to make AI work. Only 6% of organizations have been able to operationalize AI, according to PwC’s recent global Responsible AI survey of more than 1,000 participants from leading organizations in the U.S., U.K., Japan, and India. More than half of the companies surveyed said they are still working out how to make AI work for them and have not committed to large investments in AI.

Companies with an embedded AI strategy are more likely to deploy applications at scale, and with greater adoption across the company, than those without one. Larger companies (greater than $1 billion) in particular are significantly more likely to be exploring new use cases for AI (39%), increasing their use of AI (38%), and training employees to use AI (35%).

Responsible AI

While technical challenges and limitations play a part, a trust gap remains a major barrier to operationalizing AI.

A major trend is the integration of “responsible AI” practices to close this trust gap. Responsible AI is the combination of people, tools, and processes needed to manage AI systems in a way that is appropriate to the environment in which they operate. It draws on technical and procedural capabilities to address safety and security concerns as well as bias, explainability, and robustness. Often referred to as trusted AI, AI ethics, or beneficial AI, it is an approach to developing AI and analytics systems that ensures they are high quality and well documented, and that minimizes unintended harms.
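To make this concrete, here is a minimal sketch of the kind of technical check a responsible AI process might include: a simple fairness metric (demographic parity difference) computed over model outputs and a hypothetical review threshold. The metric choice, function names, and threshold are illustrative assumptions, not part of PwC’s framework.

```python
# Illustrative sketch only: one simple fairness check a responsible AI
# review might include. The metric, names, and threshold are assumptions,
# not a prescribed methodology.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between the best- and worst-off groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g., "A" or "B"), same length
    """
    rates = {}
    for label in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]


if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(preds, groups)
    # Hypothetical gate: flag the model for human review if the gap is too large.
    THRESHOLD = 0.2
    print(f"demographic parity difference: {gap:.2f}")
    print("PASS" if gap <= THRESHOLD else "FLAG FOR REVIEW")
```

In practice a responsible AI review would combine several such quantitative checks with documentation, explainability, and robustness assessments rather than relying on any single metric.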

Responsible AI in the enterprise

Awareness of the potential risks AI poses to organizations has resulted in a significant rise in risk-mitigation activities. Customers and regulators are demanding that organizations develop strategies to reduce the risks associated with individual applications as well as wider risks to society. These risks arise at three levels: the application level (performance instability and bias in AI decision-making); the business level (enterprise or financial risk); and the national level (job displacement from automation, and misinformation). Organizations are using a range of risk-mitigation strategies to address these and other risks, typically starting with ad hoc measures before moving on to more structured governance. More than a third of companies (37%) now have strategies and policies to tackle AI risk, a stark increase from 2019 (18%).

Figure 1: Risk taxonomy, PwC

Despite the increased focus on risk mitigation, organizations continue to debate how to govern AI. Only 19% of companies in the survey have a formal, documented process that is reported to all stakeholders; 29% have a formal process only for addressing a specific event; and the rest have an informal process or no clearly defined process at all.

Part of the discrepancy can be attributed to a lack of clarity about who owns AI governance. Who is responsible for the process? What are the respective responsibilities of developers, compliance, risk management, and internal audit?

Banks and other organizations whose algorithms are subject to regulatory oversight tend to have strong second-line functions: teams that can independently validate models. Others, however, must rely on reviews by separate development teams because their second-line teams lack the skills needed to review AI systems. Some of these organizations have chosen to augment their second-line teams with more technical expertise, while others are developing more robust quality-control guidelines within the first line.

Wherever that responsibility sits, an organization needs a standard development process with specific stage gates at key points to ensure high-quality AI development. This applies to procurement as well, since many AI systems enter an organization via a vendor or a software platform.

Figure 2: Stage gates in the AI development process, PwC
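As an illustration of how stage gates can be made explicit, the sketch below models a development pipeline as an ordered set of checks that a project must pass before promotion. The gate names, project fields, and structure are hypothetical assumptions for illustration; they are not drawn from PwC’s process or from Figure 2.

```python
# Hypothetical sketch of stage gates in an AI development process.
# Gate names and checks are illustrative assumptions, not PwC's framework.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class StageGate:
    name: str
    check: Callable[[Dict], bool]  # returns True if the gate passes


def run_gates(project: Dict, gates: List[StageGate]) -> bool:
    """Run each gate in order; stop at the first failure."""
    for gate in gates:
        if not gate.check(project):
            print(f"Gate failed: {gate.name}")
            return False
        print(f"Gate passed: {gate.name}")
    return True


if __name__ == "__main__":
    # Example project record; the fields are assumptions for illustration.
    project = {
        "business_case_approved": True,
        "bias_review_done": True,
        "documentation_complete": False,
        "monitoring_plan": True,
    }
    gates = [
        StageGate("Business case sign-off", lambda p: p["business_case_approved"]),
        StageGate("Bias and fairness review", lambda p: p["bias_review_done"]),
        StageGate("Documentation complete", lambda p: p["documentation_complete"]),
        StageGate("Monitoring plan in place", lambda p: p["monitoring_plan"]),
    ]
    approved = run_gates(project, gates)
    print("Ready to deploy" if approved else "Blocked pending remediation")
```

The same pattern applies to procured systems: a vendor-supplied model would simply carry its own evidence (documentation, test results, monitoring commitments) through the same gates before being put into production.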

Awareness of AI risks is complemented by another trend: thinking about technology ethics, that is, adopting approaches to the procurement, development, use, and monitoring of AI that are based on a “what should I do?” mentality rather than a “what can I do?” one.

While there is a litany of ethical principles for AI, data, and technology, fairness remains a core principle. Thirty-six percent of survey respondents identify algorithmic bias as…
