
The Ethics of AI: A Challenge for the Next Decade?

AI is everywhere, and amid all the excitement and uncertainty about its future, we have to start asking ourselves: is what we are doing ethical? And where will AI take us next?

By Joe Lawrence

As a Principal at BairesDev, Joe Lawrence is transforming industries by leveraging the power of veteran “nearshore” software engineers from Latin America.


The nature of artificial intelligence (AI) is a complex and ever-evolving field. AI is the science of creating intelligent machines that can think, learn, and act autonomously. It has been used in many areas, such as robotics, computer vision, natural language processing, and machine learning.

AI has the potential to revolutionize how we interact with technology and how we live our lives. At its core, AI is about creating systems that can make decisions based on data or information they have been given. This could be anything from recognizing objects in an image to playing a game of chess against a human opponent.

AI aims to create machines that can understand their environment, predict outcomes, and take appropriate actions without being explicitly programmed by humans. However, there are ethical implications associated with the implementation of AI technologies. For example, who would be held responsible if an autonomous vehicle caused an accident due to a programming error, or if a person were wrongfully predicted to commit a future crime?

Similarly, what safeguards should be implemented to ensure a positive outcome or fairness if an AI system was used for decision-making in healthcare, genetic sequencing, or criminal justice contexts? These questions raise important ethical considerations when it comes to using AI technologies in real-world applications.

We are already seeing everything from ChatGPT text and image generation to classifiers in social media assisting business processes and shaping our culture and societies in ways we can’t yet predict. Considering this, isn’t it important to understand the underlying ethical issues and see how they can impact our projects?

Issues With AI

Biases

Automating processes and making decisions with AI is prone to bias because AI systems are built on algorithms and data sets that are not transparent or easily understood by humans. As such, there is an inherent risk that these decisions could be biased or inaccurate due to errors in the underlying data or the algorithms used by the system.

Perhaps the biggest example was Amazon’s catastrophic secret AI project. In 2014, Amazon began building computer programs to review job applicants’ resumes to mechanize the search for top talent. The company’s experimental hiring tool used AI to give job candidates scores ranging from one to five stars.

However, by 2015, it was discovered that the system was not rating candidates in a gender-neutral way: because most of the resumes it had been trained on came from men, it learned to penalize resumes that signaled the candidate was a woman. Amazon edited the algorithm to make the scores neutral but couldn’t guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.

Ultimately, in 2016, Amazon disbanded the team because executives lost trust in the project, and recruiters never relied solely on its rankings. This experiment serves as a lesson to companies looking to automate portions of their hiring process and highlights the limitations of machine learning.
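
As a concrete illustration, here is a minimal Python sketch of one common audit: comparing selection rates across groups against the EEOC’s “four-fifths” guideline. The column names, the one-to-five score scale, and the threshold are hypothetical, loosely echoing the Amazon example:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, score_col: str,
                    threshold: float) -> pd.Series:
    """Share of each group whose score clears the hiring threshold."""
    selected = df[score_col] >= threshold
    return selected.groupby(df[group_col]).mean()

def passes_four_fifths(rates: pd.Series) -> bool:
    """EEOC rule of thumb: the lowest selection rate should be at
    least 80% of the highest; otherwise the model warrants an audit."""
    return rates.min() / rates.max() >= 0.8

# Toy data with a hypothetical 1-5 star score per candidate.
df = pd.DataFrame({
    "gender": ["m", "m", "f", "f", "m", "f"],
    "score":  [5, 4, 2, 3, 4, 2],
})
rates = selection_rates(df, "gender", "score", threshold=4)
print(rates)
print("passes four-fifths check:", passes_four_fifths(rates))
```

A failing ratio doesn’t prove discrimination on its own, but it is a cheap early signal that a model deserves scrutiny before it touches real candidates.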

The Black Box Problem

The AI black box problem is a pressing concern in the computing world: with most AI-based tools, we don’t know how they do what they do. We can see the input and output of these tools but not the processes and workings in between. This lack of understanding makes it difficult to trust AI decisions, since mistakes can slip through without anyone being able to explain how the output was produced.

The cause of this problem lies with artificial neural networks and deep learning, which consist of hidden layers of nodes that process data and pass their output to the next layer. Effectively, no engineer can accurately tell you how a model reached a conclusion. It’s like asking a neurologist to look at brain activity and tell us what someone is thinking.
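
To see why those layers are called “hidden,” consider this toy forward pass through a two-layer network. The weights are arbitrary random numbers; nothing about the intermediate activations tells a human why this input maps to this output, which is the black box problem in miniature:

```python
import numpy as np

def relu(x):
    # Standard activation: pass positives through, clamp negatives to 0.
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])   # input features
W1 = rng.normal(size=(3, 4))     # input layer  -> hidden layer
W2 = rng.normal(size=(4, 1))     # hidden layer -> output

hidden = relu(x @ W1)  # meaningful to the model, opaque to us
output = hidden @ W2
print("hidden activations:", hidden)
print("output:", output)
```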

Although the value of AI systems is undeniable, without an understanding of the underlying logic, our models could lead to costly mistakes, and we wouldn’t be able to tell what happened except for “well, it appears to be wrong.”

For example, if an AI system is used in a medical setting to diagnose patients or recommend treatments, who will ultimately be held accountable if something goes wrong? This question highlights the need for greater oversight and regulation when using AI in critical applications where mistakes could have serious consequences.

To solve this issue, developers are focusing on explainable AI, which produces results humans can understand and explain. But that’s easier said than done. Until we can create interfaces that allow us to understand how AI black boxes make decisions, we must be extremely careful with their outputs.
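
As a small, hedged example of what explainability tooling looks like in practice, here is one widely used technique, permutation importance, applied to a synthetic dataset with scikit-learn. The data and model are stand-ins, not a real application:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy:
# large drops flag the features the black box actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this don’t open the box, but they at least tell us which inputs are driving a decision, which is often enough to catch a model leaning on a variable it shouldn’t.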

We also know that there is wisdom in crowds. Explanations and decisions are better made by a group of well-intentioned and informed individuals than by any single member of the group.

Human Error

Sometimes, it’s not a matter of “can we?” but rather “should we?” Just because some brilliant mind thinks of a new application for AI, it doesn’t mean they have the ethical grounding to foresee the ramifications of their actions. For example, Harrisburg University in Pennsylvania proposed an automated facial recognition system that could predict criminality from a single photograph.

This sparked a backlash from the Coalition for Critical Technology, which wrote a letter to the publisher, Springer Nature, urging it not to publish the study due to its potential to amplify discrimination. The publisher confirmed it would not publish the work, and Harrisburg University removed its news release.

Enticing as the project may sound, there are no two ways about it: it’s discriminatory at best and a sure path to ethnic profiling at worst. We have to be extremely careful with our solutions, even if they are built with the best of intentions. Sometimes we are so tempted by the novelty or utility of the technology that we forget its ethical ramifications and social impact.

Privacy

The use of AI in data processing and analysis can lead to the collection of large amounts of personal data without user permission. This data can then be used to train AI algorithms, which are in turn applied to purposes such as targeted advertising or predictive analytics.

This raises serious ethical questions about how this data is collected and used without user consent. In addition, the use of AI also poses a risk to privacy due to its ability to process large amounts of data quickly and accurately. This means that AI algorithms may be able to identify patterns in user behavior that could potentially reveal sensitive information about individuals or groups.

For example, an AI algorithm might be able to detect patterns in online shopping habits that could reveal someone’s political leanings or religious beliefs. To address these concerns, it is important for organizations using AI technologies to adhere to the General Data Protection Regulation (GDPR) when collecting and processing personal data.

GDPR requires organizations to obtain explicit consent from users before collecting their data and to provide users with clear information about how their data will be used. Additionally, organizations should ensure that they have appropriate security measures to protect user data from unauthorized access or misuse.
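
In code, the consent requirement can be as blunt as a hard gate between raw records and the training pipeline. Here is a minimal sketch with a hypothetical record layout and consent flag; a real system would also record the lawful basis and timestamp of each consent:

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    purchase_history: list[str]
    consented_to_training: bool  # explicit, opt-in consent (GDPR)

def training_corpus(records: list[UserRecord]) -> list[list[str]]:
    """Only records with explicit consent ever reach the training set."""
    return [r.purchase_history for r in records if r.consented_to_training]

records = [
    UserRecord("u1", ["book", "lamp"], consented_to_training=True),
    UserRecord("u2", ["phone"], consented_to_training=False),
]
print(training_corpus(records))  # [['book', 'lamp']] -- u2 is excluded
```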

Now, it’s very important to understand that a model does not store user information directly; rather, the weights (the strength of the connections between neurons in the model) are calculated based on that information. This is a gray area in data collection regulations, but one that poses a difficult challenge.

Remember what we mentioned about the black box? Well, how can an engineer tell if a certain weight was based on someone’s preferences? The answer is: it’s really hard to know. So what happens when a user wants to be removed from a sample? Bright minds all over the planet are working on this problem under the umbrella of intentional forgetting, but neither the ethics nor the technology is clear yet.
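
The one baseline that is unambiguously correct today is exact unlearning: drop the user’s rows and retrain from scratch. Here is a minimal sketch of that idea with toy data and hypothetical user IDs; the open research problem is achieving the same guarantee without paying for a full retrain every time someone opts out:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_without_user(X, y, user_ids, user_to_forget):
    """Return a model trained as if the user's data never existed."""
    keep = user_ids != user_to_forget
    return LogisticRegression().fit(X[keep], y[keep])

# Toy data: each row tagged with the user who contributed it.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
user_ids = np.array(["u1", "u2", "u1", "u2"])

model = retrain_without_user(X, y, user_ids, "u2")  # forget u2 entirely
print(model.predict([[1.5]]))
```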

Finally, it is important for organizations using AI technologies to consider the ethical implications of using user data without permission for training AI algorithms. Organizations should strive for transparency about how they use personal data and should ensure that any decisions made by their AI systems are fair, unbiased, protect privacy, and serve the greater good of society. AI systems should not harm individuals or groups based on their race, gender, political affiliation, religion, or similar attributes, as this could lead to discrimination or other forms of injustice.

Security

As more data is collected and analyzed by AI systems, there is a risk that personal information could be misused or abused by malicious actors. Some harmful practices include:

  • Automated identity theft: Malicious actors use AI to collect and analyze personal data from online sources, such as social media accounts, to create fake identities for financial gain.
  • Predictive analytics: Malicious actors use AI to predict an individual’s behavior or preferences based on their location or purchase history, to target them with unwanted ads or services.
  • Surveillance: Malicious actors use AI-powered facial recognition technology to track individuals without their knowledge or consent.
  • Manipulation of public opinion: Malicious actors use AI-driven algorithms to spread false information about a person or group to influence public opinion or sway elections.
  • Data mining: Malicious actors use AI-driven algorithms to collect large amounts of personal data from unsuspecting users for unscrupulous marketing or other nefarious activities.

Organizations need to ensure that proper security measures are taken so that user data (and all data) remains secure and private. Should we be worried about unstoppable generative AIs? Not yet. These systems are amazing, but they are still very limited in scope. Nevertheless, all of the cybersecurity “best practices” must be diligently applied to AI (e.g., multifactor authentication, encryption, using AI to detect network traffic anomalies) because an AI can create havoc so much faster than a human. For example, an AI could take a leaked password and try to match it against possible companies where the victim works, just like any other cyberattack, except that it can do so in a fraction of the time a human would need.
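
As a sketch of one of those practices, anomaly detection on traffic telemetry, here is an IsolationForest trained on synthetic “normal” traffic and asked to judge a credential-stuffing-like burst. The features (requests per minute, bytes transferred) and all of the numbers are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline traffic: modest request rates and payload sizes.
normal = rng.normal(loc=[60, 2_000], scale=[10, 400], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A machine-speed login burst looks nothing like the baseline.
suspect = np.array([[4_000, 150],    # 4,000 requests/min, tiny payloads
                    [58, 2_100]])    # ordinary traffic, for contrast
print(detector.predict(suspect))     # -1 = anomaly, 1 = normal
```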

The Effects of AI on Job Displacement

Another ethical concern related to the use of AI technologies is job displacement. As more tasks become automated through the use of AI systems, there may be fewer jobs available for humans as machines take over certain repetitive roles traditionally performed by people. This could lead to increased unemployment rates and economic instability as people struggle to find new employment opportunities in an increasingly automated world.

While there may be cause for concern, we have to remember that this is hardly the first time something this disruptive has happened. Let’s not forget the Industrial Revolution: while artisans and merchants, in general, suffered due to the industrialization of the economy, most people were able to adapt, which led to the division of labor as we know it today. So what can we do to mitigate job displacement?

First and foremost, it is important to stay up to date on the latest developments in AI technology so you can identify areas where your job could be replaced by an AI system and learn how to evolve to remain competitive in the job market.

Second, professionals need to focus on developing skills that are not easily replicated by machines or algorithms. For example, creativity and problem-solving are two skills that are difficult for machines to replicate.

Much like calculators, which eliminated the need for manual computation while freeing scientists and engineers to spend more time innovating, AI can free us from repetitive work and provide the extra processing power to enhance our productivity.

What’s Next

As AI becomes more pervasive in our society, there is a need for greater public education about the applications, personal implications, and risks of this technology. We must promote an AI-aware culture that understands how AI is shaping our lives and businesses.

It will also be important for organizations deploying AI solutions to ensure they are taking steps to protect user privacy and security while still allowing users access to the benefits offered by this technology. With proper oversight and regulation, accountability and responsibility issues can be addressed before they become major problems.

Finally, we must create a regulatory framework that holds companies accountable for their use of AI and ensures that any decisions made by AI systems are justified, virtuous, fair, and ethical. With these measures in place, we can ensure that artificial intelligence is used responsibly for the benefit of all of society.

If you enjoyed this, be sure to check out our other AI articles.
