Navigate the road to Responsible AI

Deploying AI ethically and responsibly will involve cross-functional team collaboration, new tools and processes, and proper support from key stakeholders.



By Ben Lorica, Principal at Gradient Flow


The use of machine learning (ML) applications has moved beyond the domains of academia and research into mainstream product development across industries looking to add artificial intelligence (AI) capabilities. Along with the increase in AI and ML applications is a growing interest in principles, tools, and best practices for deploying AI ethically and responsibly.

In efforts to organize ethical, responsible tools and processes under a common banner, a number of names have been bandied about, including Ethical AI, Human-Centered AI, and Responsible AI. Based on what we’ve seen in industry, several companies, including some major cloud providers, have focused on the term Responsible AI, and we’ll do the same in this post.

Figure: The term “Responsible AI” is emerging across industries to describe the ethical, responsible deployment of AI applications. Source: Gradient Flow.

 

It’s important to note that the practice of Responsible AI encompasses more than just privacy and security; those aspects are important, of course, and are perhaps covered more in mainstream media, but Responsible AI also includes concerns around safety and reliability, fairness, and transparency and accountability. Given the breadth and depth of domain knowledge required to address those disparate areas, it is clear that the pursuit of Responsible AI is a team sport. Deploying AI ethically and responsibly will involve cross-functional team collaboration, new tools and processes, and proper support from key stakeholders.

Figure: Responsible AI encompasses several areas, including security and privacy, safety and reliability, fairness, and transparency and accountability. Source: Gradient Flow.

 

In this post, we’ll examine the maturity of the Responsible AI space through the lens of several recent surveys and an ethnographic study. We’ll take a look at current guiding principles, what companies are doing today, and the aspirational direction of Responsible AI practices. While companies and consumers in East Asian countries are embarking on similar pursuits (find information here, here, and here), this post, and the surveys and studies covered, focus primarily on the growth of Responsible AI in Western countries.

 

Guiding Principles

 
A recent study from ETH Zurich, published in Nature Machine Intelligence, investigated whether a consensus is emerging around ethical requirements, technical standards, and best practices for deploying AI applications. The study’s authors identified documents to examine by adapting a data collection protocol used for literature reviews and meta-analyses, and analyzed 84 AI ethics guidelines, written in English, German, French, Italian, or Greek, issued by public and private sector organizations.

No ethical principle appeared in all 84 sources, but there were convergences around several: transparency, justice and fairness, non-maleficence, responsibility, and privacy each appeared in more than half of the guidelines analyzed. Of these, transparency and justice and fairness appeared most often.

Figure: A recent study from ETH Zurich investigated whether a consensus is emerging around ethical AI principles.

 

 

What organizations are doing today

 
Our conversations with data scientists and machine learning professionals suggest, anecdotally, that fairness and transparency have been the first principles they’ve aimed to address. The recent regulatory changes introduced by the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), however, have elevated the priority of privacy, security, and transparency. In industry sectors like healthcare and finance, which tend to be more heavily regulated or deal with more sensitive data, these principles have always been a top priority, but with GDPR, CCPA, and other emerging regulations, they’ve become crucial for all organizations.

The shift in Responsible AI priorities is reflected in a 2019 PwC survey of 1,000 US executives. Results confirmed that security and transparency were the top two principles respondents intended to address; about half also indicated that fairness, or testing for bias, had become a top priority.

Figure: A 2019 survey from PwC examined the Responsible AI principles companies planned to address in the near term.

 

Tools

 
The prioritization of principles is also informed by the state of available tools. Responsible AI is an emerging area, so it’s difficult to draw sweeping conclusions about its tooling, but Michael Kearns, computer science professor and co-author of The Ethical Algorithm, recently gave a talk in which he ranked the different areas of Responsible AI by their scientific maturity at the time he wrote his book (November 2019):

  1. Privacy
  2. Fairness
  3. Accountability
  4. Interpretability
  5. Morality

These rankings are somewhat subjective, but given that Kearns is deeply familiar with research in these fields, they likely map closely to how mature these areas are in real-world practice. They generally agree with the findings in the surveys and reports we used for this investigation, as well as with our knowledge of available tools.

Privacy and security tools anecdotally get more coverage (see here and here, for example), which indicates they may be further along in development. Part of the challenge is that making progress on tools for each of these principles requires stakeholders to agree on precise definitions of each principle. All of these areas are being examined by machine learning researchers, so steady development is likely. As tools for Responsible AI continue to improve, organizations face two key challenges: (1) they need to develop a clear understanding of the limitations of the tools they are using, and (2) they need to learn how to match models and techniques to their specific problems and challenges. The good news is that product and consulting companies are beginning to provide assistance in these areas.
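Privacy illustrates how a precise definition can translate directly into tooling. The sketch below shows the classic Laplace mechanism for releasing a statistic under epsilon-differential privacy; it is a minimal illustration only, assuming a simple counting query over a hypothetical user table rather than any particular vendor’s library.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy query result under epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon): a smaller epsilon
    means a stronger privacy guarantee and a noisier released value.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: releasing the number of users who clicked an ad.
# A counting query changes by at most 1 when a single record is added or
# removed, so its sensitivity is 1.
true_count = 4213
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Released count: {noisy_count:.1f}")
```

Even in a toy example like this, the precise definition does the work: the epsilon parameter gives stakeholders a shared, quantitative way to discuss the privacy/utility trade-off.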

 

Identifying Responsible AI issues

 
Companies need a method for prioritizing which areas of Responsible AI to address. A 2018 report from Cognizant (pdf), based on a survey of almost 1,000 executives across the US and Europe, revealed that the top two tools used to identify potential unethical behavior in AI applications were testing by employees and customer feedback.

Figure: A 2018 survey from Cognizant revealed the tools used to identify Responsible AI areas that need to be addressed.

 

It’s notable that close to two-thirds of respondents cited “Testing” as the tool used to identify areas to address; it means they have testing protocols in place that focus on areas pertaining to Responsible AI. The number of respondents citing “Customer feedback” is also encouraging; even if we generously interpret that result as companies having put tools in place to solicit such feedback (as opposed to incidental data gathering), it does indicate that companies recognize the importance and usefulness of customer feedback.
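As a purely illustrative sketch of what such a testing protocol might include, the snippet below shows a pytest-style check that fails a model release if the selection-rate gap between demographic groups exceeds an agreed threshold. The data, the 0.2 threshold, and the function names are hypothetical; a real protocol would use the organization’s own fairness definitions and evaluation sets.

```python
import numpy as np

def selection_rate_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

def test_selection_rate_gap_within_threshold():
    # Hypothetical held-out evaluation results: binary approve/deny predictions
    # plus a sensitive attribute recorded for auditing purposes only.
    predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
    groups = np.array(["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"])

    gap = selection_rate_gap(predictions, groups)
    # The acceptable gap is a policy decision; 0.2 here is purely illustrative.
    assert gap <= 0.2, f"Selection-rate gap {gap:.2f} exceeds the agreed threshold"
```

Checks like this can run in the same continuous-integration pipeline as ordinary unit tests, which is one way to make Responsible AI testing routine rather than ad hoc.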

 

Responsible AI is challenging to address

 
In 2018, the Capgemini Research Institute surveyed 1,580 executives in 510 organizations and over 4,400 consumers in the US and Europe to discover how they view transparency and ethics in AI-enabled applications. While the survey confirmed a strong interest in and expectation of ethical components in AI products, it also highlighted the lack of maturity in the AI landscape: only a quarter of respondents reported having fully implemented AI projects. The rest were in a pilot or proof-of-concept stage, or in the initial planning phase.

There is growing competitive pressure to adopt new technologies like AI. Product managers and others charged with implementing AI and machine learning in products and systems are expected to focus on ROI-related measures like business metrics and KPIs. As a result, addressing Responsible AI gets pushed down the priority list. The Capgemini survey results showed that the pressure to put AI into production urgently was the number one reason areas pertaining to Responsible AI were not adequately addressed.

Figure: A survey by Capgemini investigated reasons why areas pertaining to Responsible AI were not adequately addressed.

 

Looking ahead: What do companies aspire to do?

 
The surveys we researched showed that companies are beginning to appreciate the importance of incorporating Responsible AI principles as they build and deploy AI applications. To discover the path forward, a group of researchers from Accenture, Spotify, and the Partnership on AI conducted an ethnographic study in 2020 based on 26 semi-structured interviews with people from 19 organizations on four continents. Their report, Where Responsible AI meets Reality, focused mainly on the Fair-ML principle, but it also assembled a useful snapshot of where many organizations are today and where they hope to be in the future. The researchers asked participants “what they envision for the ideal future state of their work” in Fair-ML. The authors plan to conduct follow-up studies on other aspects of Responsible AI, but for now let’s assume the results (see Table 1 below) at least partially reflect how companies are tackling areas outside of fairness.

The study results confirm several key findings from the surveys we researched. For instance, most of the study participants described their current work in AI as “reactive”:

“Practitioners embedded in product teams explained that they often need to distill what they do into standard metrics such as number of clicks, user acquisition, or churn rate, which may not apply to their work. Most commonly, interviewees reported being measured on delivering work that generates revenue.”

The cited revenue-generating measurements directly mirror results from the Capgemini survey, where more than one-third of respondents cited “Pressure to urgently implement AI without adequately addressing ethical issues” as a main concern.

Another interesting finding from the study highlighted a potential culture shift that will be required to provide the institutional scaffolding necessary to support the integration of Responsible AI principles:

“Multiple practitioners expressed that they needed to be a senior person in their organization in order to make their Fair-ML related concerns heard. Several interviewees talked about the lack of accountability across different parts of their organization, naming reputational risk as the biggest incentive their leadership sees for the work on Fair-ML.”

The authors organized their investigation around three phases of the transition to integrating Responsible AI: the prevalent state (where organizations are now); the emerging state (what practices are being designed to move forward); and the aspirational state (the ideal framework that will support the democratization and implementation of Responsible AI).

Companies looking to establish or expand a framework to accommodate Responsible AI principles can begin by closely examining the study results around when to move forward and how to define success.

Figure: A 2020 ethnographic study investigated the practicality of integrating Responsible AI.

 

When to act

 
According to the study interviewees, most organizations today are in reactive mode when it comes to incorporating Fair-ML principles into product pipelines. The catalyst most often cited was negative media attention, either as a result of a public catastrophe or from a broader shift in perspective that the status quo is no longer acceptable. Some companies report they are beginning to proactively implement Fair-ML practices, including procedural reviews conducted across company teams.

Aspirationally, companies report that a fully proactive framework would work best to support Fair-ML initiatives. Interviewees cited the need for transparency and open communication, not only internally across all company teams but also externally with customers and stakeholders. They also cited the need for proper tools to solicit specific feedback: internally from process and product reviews, and externally from customer oversight.

The key takeaway in terms of timing is that the steps along the road to effective Responsible AI should be aimed at integrating and implementing the principles as early in the product development process as possible. The inclusion of Responsible AI principles should also be routine and part of the production culture.

 

How to measure success

 
One of the main challenges is that current methods of measuring business success don’t translate to measuring the success of Responsible AI implementations. Many study interviewees noted that key performance indicators (KPIs) for business are very different from academic benchmarks, and that trying to distill academic benchmarks into traditional business KPIs was inappropriate and misleading. Interviewees also reported that they’re traditionally measured against goals structured around revenue, which is tricky (if not impossible) to tie to successful Responsible AI practices.

Some interviewees reported progress at their companies in moving beyond traditional revenue metrics by establishing new frameworks for evaluating Fair-ML risks in their products and services. They outlined three drivers behind this cultural shift: rewarding internal education efforts, rewarding the instigation of internal investigations into potential issues, and instituting frameworks that support collaboration across the company.

The key takeaway on metrics for success is that they’re still under construction. Traditional quantitative business metrics aren’t designed to encompass the qualitative aspects of Responsible AI principles, and, as such, aren’t appropriate for measuring success in that arena. Companies will need to establish new KPIs to fit business needs in their specific contexts.

 

Concluding thoughts

 
In this post, we examined how companies are approaching Responsible AI today and what they aspire to do. Interest in Responsible AI comes at a time when companies are beginning to roll out more ML and AI models into products and services. As companies evolve their MLOps tools and processes, they not only need to account for Responsible AI, but put infrastructure in place to integrate it early on into product development pipelines. To realize aspirational goals around Responsible AI, companies need to cultivate a shift in perspective, starting with company leaders, that embraces the primary Responsible AI principles: privacy and security, safety and reliability, fairness, and transparency and accountability. Successful deployment of ethical, responsible AI will require collaboration between cross-functional teams, adoption of new tools and processes, and buy-in from everyone involved.

 
Bio: Ben Lorica is chair of the NLP Summit, co-chair of the Ray Summit, and principal at Gradient Flow.

Original. Reposted with permission.
