What You Need to Know About Responsible AI

One element that is essential to understand within this context is bias, given that biased AI can yield different results for different subsets of people.


By Nate Dow

Solutions Architect Nate Dow helps BairesDev teams deliver the highest-quality software and products through creative business solutions.


Artificial intelligence (AI) has the potential to significantly change many aspects of our society, for better or for worse. For example, AI can be used in law enforcement to identify criminals using facial recognition software or to determine which young people are likely to become criminals later on. But what if the software misidentifies innocent people, or if information about young people is used to target rather than help them?

These situations are just a couple of examples of why companies and other entities that use AI need to make sure they use it responsibly. Responsible AI is a governance framework that spells out how an organization will address such challenges. Because new issues will no doubt arise, such frameworks should be directed toward both current and potential future circumstances.

Currently, there is no overarching set of principles to cover responsible AI. Therefore, “the development of fair, trustworthy AI standards is up to the discretion of the data scientists and software developers who write and deploy a specific organization’s AI algorithmic models,” according to a recent article appearing on TechTarget. Anyone designing or using AI should know what its responsible use looks like.

Principles of Responsible AI

While no oversight entity has established responsible AI principles for everyone to follow, Microsoft has taken the lead by publishing the principles it applies in its own work. Its high-level commitment is “to the advancement of AI driven by ethical principles that put people first.”

Here are the specific points:

  • Fairness: AI systems should treat all people fairly
  • Reliability & Safety: AI systems should perform reliably and safely
  • Privacy & Security: AI systems should be secure and respect privacy
  • Inclusiveness: AI systems should empower everyone and engage people
  • Transparency: AI systems should be understandable
  • Accountability: People should be accountable for AI systems

Microsoft puts its principles into practice through three internal groups: the Office of Responsible AI (ORA); the AI, Ethics, and Effects in Engineering and Research (Aether) Committee; and Responsible AI Strategy in Engineering (RAISE). It also offers guidance to other companies through its AI Business School and responsible AI resources.

How to Design Responsible AI

According to the TechTarget article, another important overarching principle is “to reduce the risk that a minor change in an input’s weight will drastically change the output of a machine learning model.” Doing so requires attention to a number of different factors. One element that is essential to understand within this context is bias, given that biased AI can yield different results for different subsets of people, such as extending credit to more men than women.
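To make that idea concrete, here is a minimal sketch of a perturbation-sensitivity check. It assumes a hypothetical trained binary classifier with a scikit-learn-style predict_proba method and a numeric feature matrix X; all names are illustrative rather than part of any framework mentioned above:

```python
# A minimal sketch of a perturbation-sensitivity check. Assumes a
# hypothetical trained binary classifier with a scikit-learn-style
# predict_proba method; names here are illustrative.
import numpy as np

def flag_unstable_predictions(model, X, epsilon=0.01, threshold=0.05):
    """Return indices of rows where a tiny random perturbation of the
    inputs shifts the predicted probability by more than `threshold`."""
    rng = np.random.default_rng(42)
    noise = rng.normal(loc=0.0, scale=epsilon, size=X.shape)
    base = model.predict_proba(X)[:, 1]        # P(positive class)
    perturbed = model.predict_proba(X + noise)[:, 1]
    shift = np.abs(perturbed - base)
    return np.where(shift > threshold)[0]
```

A check like this can run as part of a test suite so that each retrained model is screened for instability before it ships.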


Data analytics provider Appen explains, “Algorithmic bias in AI is a pervasive problem. You can likely recall biased algorithm examples in the news, such as…face recognition software being less likely to recognize people of color…. While entirely eliminating bias in AI is not possible, it’s essential to know not only how to reduce bias in AI, but actively work to prevent it.” 

Appen recommends the following steps for reducing bias in AI (a minimal quantitative check is sketched after the list):

  • Define and narrow the business problem you’re solving
  • Structure data gathering that allows for different opinions
  • Understand your training data
  • Gather a diverse machine learning (ML) team that asks diverse questions
  • Think about all of your end users
  • Annotate with diversity
  • Test and deploy with feedback in mind
  • Have a concrete plan to improve your model with that feedback
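One way to put the testing steps into practice is a disparate-impact check: compare positive-outcome rates across subgroups and flag large gaps. The sketch below uses invented credit-decision data purely for illustration; it is one possible screen, not Appen's own tooling:

```python
# A minimal sketch of a disparate-impact check, assuming a binary
# approve/deny model output and a protected attribute such as gender.
# The data below is invented purely for illustration.
import numpy as np

def disparate_impact(y_pred, group):
    """Return the ratio of positive-outcome rates between the least-
    and most-favored groups, plus the per-group rates themselves."""
    rates = {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact(
    y_pred=np.array([1, 1, 0, 1, 0, 0, 1, 0]),   # 1 = credit approved
    group=np.array(["m", "m", "m", "m", "f", "f", "f", "f"]),
)
print(rates)   # {'f': 0.25, 'm': 0.75}
print(ratio)   # 0.33 -- well below the common 0.8 rule-of-thumb threshold
```

Ratios near 1.0 suggest similar treatment across groups; a common rule of thumb (the “four-fifths rule” from U.S. employment guidelines) treats ratios below 0.8 as a red flag worth investigating.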

Companies that resist devoting resources to responsible AI should reconsider. Building algorithms based on a responsible framework serves to generate trust among customers, patients, members, and others who use the organization’s services. Such trust leads to continued engagement and more successful endeavors.

Become Responsible AI-Driven

The World Economic Forum points out that using responsible AI principles may not be enough. Companies must “engage in a fundamental organizational change to become a responsible AI-driven company.” It offers steps to start in that direction:

  • Define what responsible AI means for your company. Use a collaborative process involving board members, executives, and senior managers across departments.
  • Build organizational capabilities. This step requires sound planning, cross-functional and coordinated execution, employee training, and significant investment in resources.
  • Facilitate cross-functional collaboration. Bring in complementary perspectives from various departments.
  • Adopt more holistic performance metrics. Monitor and assess the behavior of systems against responsible AI principles (see the sketch after this list).
  • Define clear lines of accountability. Employees must have the right incentives for doing the right thing.
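As a concrete illustration of the holistic-metrics step, the sketch below reports accuracy per subgroup instead of only in aggregate, so a model that looks fine on average cannot hide a failure on one group. The data and names are invented for illustration:

```python
# A minimal sketch of per-group metric reporting; data is invented.
import numpy as np

def per_group_accuracy(y_true, y_pred, group):
    """Accuracy overall and per subgroup, so a model that looks fine
    on average can't hide a failure on one group."""
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(group):
        mask = group == g
        report[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# Example: aggregate accuracy is 75%, but group "b" sits at 50%.
print(per_group_accuracy(
    y_true=np.array([1, 0, 1, 0, 1, 0, 1, 0]),
    y_pred=np.array([1, 0, 1, 0, 1, 1, 0, 0]),
    group=np.array(["a", "a", "a", "a", "b", "b", "b", "b"]),
))
```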

In addition to Microsoft, other companies are taking proactive steps to become more responsible AI-driven. Financial analytics company FICO has issued responsible AI governance policies to show employees and customers how its AI-based systems work. The company's data scientists continuously test model effectiveness.

Another example is IBM, which has established an ethics board dedicated to the issues surrounding AI. This board supports the creation of responsible AI throughout the company.

An Ongoing Issue

AI has the potential to impact people’s lives in a wide variety of important areas, including healthcare, housing, finance, and criminal justice. Its use will only become more widespread over time, reaching many more sectors of life. The Responsible Artificial Intelligence Institute states that “By 2022, over 60% of companies will have implemented machine learning, big data analytics, and related AI tools into their operations.”

That’s why creators should take the time now to commit to solid principles to ensure AI-enabled applications are developed in responsible ways. Some have already started along this path, and many have emphasized the need to reduce bias as much as possible and to ensure AI-based software works equally well for all groups it serves.

Principles already developed are a great start, but creators must be ever-vigilant, rigorously test for bias, and revisit guidelines regularly to ensure a responsible approach now and in the coming years.

If you enjoyed this article, check out one of our other AI articles.
