AI can yield important benefits in business, such as increased efficiency and productivity, higher accuracy, and improved customer service. However, AI can also cause real harm, as when Amazon discovered that its AI-based hiring algorithm was weeding out women who applied for technical positions. Other examples include software used to predict future criminals that was biased against Black Americans and mortgage-lending algorithms that charged Latinx and Black borrowers higher interest rates.
And that is just the tip of the iceberg in terms of the damage that can be done when AI is allowed to operate without concern for inclusivity and fairness. Because humans are biased, business professionals must take extra care to ensure that their technology isn’t. If operators aren’t careful, biases introduced now may become permanently embedded in the systems they’re part of.
But with few regulations and no overarching set of principles governing responsible AI, companies must make their own decisions about how to ensure they use AI safely and transparently. Fortunately, nearly 80% of CEOs around the world are prepared to take action to increase AI accountability. In the following sections, we offer suggestions for how to do so.
Establish a Definition
Before taking any further action on ethical AI, agree on a definition of what it means for your organization. Any definition must be specific and actionable. Some companies have published statements about their intentions in this area. For example, Microsoft states that its high-level commitment is “to the advancement of AI driven by ethical principles that put people first.” It lists specific principles, including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
In the video below, Rob High, CTO of IBM Watson, states that ethical AI comes down to three main areas: trust, transparency, and privacy. Companies, their customers, and others must be able to trust that the AI is doing the right thing. The Facebook-Cambridge Analytica scandal was a good example of what happens when people find out their data is being used inappropriately.
Transparency refers to the ability to see what sources of information are being used and whether machines are being properly trained. Privacy means recognizing that each person’s data belongs to them, and they should be the ones to choose whether it is used for a specific purpose.
Provide Education
Once you have a working definition of ethical AI, communicate it to stakeholders, including partners, employees, and customers. Forbes states, “Everyone across the organization needs to understand what AI is, how it can be used, and what its ethical challenges are.” Tools like the World Economic Forum (WEF) AI C-Suite Toolkit can help executives and others examine challenging issues, such as how to create a culture of AI within the organization and which skills business leaders need to drive AI initiatives successfully.
To create an effective education program, assign a leader (perhaps someone from the governance team) to develop the curriculum, allow staff to participate in hands-on learning, and regularly test participants to ensure they understand key principles.
Establish a Governance Framework
Create a group of experts who will develop and maintain ethical AI processes and procedures. To avoid some of the problems associated with bias in AI, the group should be diverse in race, gender, economic status, and sexual orientation, and it should include business leaders, customers, government officials, and other interested parties. Its initial task should be to discuss topics like privacy, fairness, and explainability (the ability to clearly articulate how an algorithm arrives at its results) as they relate to ethical AI.
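To make explainability a bit more concrete for such a discussion, here is a minimal sketch, assuming a Python environment with scikit-learn and a synthetic dataset standing in for real business data, that ranks which inputs most influence a model’s predictions using permutation importance, one common way to articulate how an algorithm behaves.

```python
# Illustrative sketch: surfacing which features drive a model's decisions,
# one piece of the "explainability" conversation a governance team might have.
# The dataset is synthetic and stands in for a real business dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for something like loan or job applications.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when a feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```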
The next tasks should be to assess the company’s AI data risk profile and to develop internal structures, policies, and processes for monitoring AI ethics. As mentioned above, there aren’t many guidelines for working with AI ethically, but companies should strive to follow the rules that are in place. For example, the OECD AI Principles describe ways for organizations to use AI to benefit people and the planet; one of the principles is international cooperation for trustworthy AI.
As the field of AI continues to evolve, AI ethics within companies and organizations must evolve with it. Governance teams must therefore revisit this topic regularly to ensure that initial targets for fairness and transparency are still being met and that processes exist for compliance with any new regulations. Additionally, companies must build human capacity and prepare for the labor market transition as AI becomes more prevalent.
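As one illustration of what checking that fairness targets are still being met might look like, the sketch below computes a simple demographic parity gap from approval decisions. The group labels, outcomes, and tolerance threshold are hypothetical and stand in for whatever metrics and limits a governance team actually agrees on.

```python
# Illustrative sketch: a recurring fairness check a governance team could run,
# comparing approval rates across two groups (demographic parity).
# The group labels, outcomes, and threshold below are hypothetical examples.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Difference between the highest and lowest approval rates across groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# 1 = approved, 0 = denied; in practice these would come from production logs.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

gap = demographic_parity_gap(outcomes)
THRESHOLD = 0.20  # hypothetical tolerance set by the governance team
print(f"Demographic parity gap: {gap:.2f}")
if gap > THRESHOLD:
    print("Flag for review: approval rates diverge beyond the agreed tolerance.")
```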
Align With Existing Strategies
Any successful business initiative must answer the question, “Why are we doing this?” That means it must either fit into a company’s existing values framework or change that framework. If existing values include things like transparency, inclusivity, or community, that’s a great start; AI ethics fit well within any of those descriptors. If the company lacks similar values or an overarching values structure, it may be time to revisit them.
Rather than making ethics a separate component of product development, bake it into existing processes. Doing so may not be simple or straightforward but will be well worth the effort. Specific actions might include asking questions about data collection, implementing new controls, introducing new approval mechanisms, and regularly reviewing processes and practices.
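As a rough sketch of what one of those approval mechanisms could look like in practice, the example below gates a release on an ethics checklist. The checklist items are hypothetical and would be defined by the governance team, not prescribed here.

```python
# Illustrative sketch: an "approval mechanism" baked into an existing release
# process. The checklist items are hypothetical examples of what a team might
# require before an AI feature ships.
ETHICS_CHECKLIST = {
    "data_sources_documented": True,
    "bias_audit_completed": True,
    "privacy_review_signed_off": False,
    "human_escalation_path_defined": True,
}

def outstanding_items(checklist):
    """Return the unmet requirements; an empty list means the gate is open."""
    return [item for item, done in checklist.items() if not done]

missing = outstanding_items(ETHICS_CHECKLIST)
if missing:
    print("Deployment blocked. Outstanding items:", ", ".join(missing))
else:
    print("All ethics checks passed; proceeding with deployment.")
```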
Commit to Transparency
Companies that want to practice ethical AI must commit to transparency in their efforts. A recent WEF article states, “Companies should go above and beyond in explaining what data is being used, how it’s being used, and for what purpose.” Companies should also be upfront about when the technology is being used. For example, they can identify chatbots as such rather than having them pose as humans.
Another aspect of transparency is testing your ideas and practices by sharing them with others in your industry. That might include participating in peer groups, engaging with lawmakers, and developing white papers, blog posts, articles, and other content on the matter.
If you enjoyed this article, check out one of our other AI articles.