by Teemu Kinos
on April 04, 2019
Ethics of AI: Preserving and growing human purpose
As artificial intelligence and machine learning become a greater part of our daily lives, it’s becoming more and more important to consider the ethical implications of using AI.
With the technology becoming cheaper and more readily accessible, and hundreds of new AI startups appearing and developing the technology every year, there is a risk that some firms will apply it irresponsibly or unethically. This has led to calls for greater transparency and accountability in the industry.
Self-driving cars have seen significant manufacturing delays partly because we are still struggling with the moral and ethical decisions those cars need to make, for example when an unforeseen obstacle appears in the road.
For small animals and objects, the standard advice for human drivers is to avoid swerving. But what if the machine mistakes a small child for an animal and continues on to hit the child?
The consequences would be severe and could set the technology back years as people no longer trust the technology. Who would be held accountable? Who’s to blame?
Many ethical questions need to be asked when it comes to AI. Below we’ve highlighted some of the most talked-about considerations.
A 2017 PwC survey found that 76% of CEOs said the potential for bias and the lack of transparency were holding AI back in enterprises, while 73% were concerned about ensuring governance and rules to control AI.
Another study, by Deloitte, of 1,400 AI-aware U.S. executives found that 32% ranked ethical issues among the top three risks related to AI. Bias was the second most commonly cited ethical risk.
Bias can also be caused inadvertently. This means that despite the makers of the AI wanting to use the technology for good, they can accidentally include some form of bias in the data they are using to train the machine.
Artificial intelligence can only be as good as the data (or the engineers) used to train it. As the World Economic Forum highlights, AI can accidentally be trained to be biased. The article gives an example of software used to predict future criminals that showed bias against black people. This isn’t an isolated case; there are dozens of similar cases where machines have been biased in one way, shape, or form.
So how do you teach a machine, which primarily runs algorithms, to overcome racial and other biases during training? Machines don’t know the concept of fairness unless you can program it, algorithmically conceptualizing what fairness means. As humans, we can’t easily convey ethics or morality in measurable ways that machines can process.
Machines don't know the concept of ethics unless we teach them.
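One way researchers have tried to make “fairness” measurable is with simple statistical criteria. As a minimal, hypothetical sketch (not a complete answer to the problem above), the example below computes a demographic parity gap: the difference in positive-prediction rates between two groups. The function name and the toy data are our own illustration, not from any particular library.

```python
# Hypothetical sketch: one simple, imperfect way to quantify "fairness".
# Demographic parity asks whether a model gives positive outcomes to
# different groups at similar rates.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups "a" and "b".

    predictions: list of 0/1 model outputs
    groups: list of group labels ("a" or "b"), parallel to predictions
    """
    rate = {}
    for g in ("a", "b"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rate["a"] - rate["b"])

# A gap near 0 means both groups receive positive predictions at similar
# rates; a large gap signals potential bias worth investigating.
preds = [1, 1, 0, 1, 0, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, grps))  # 0.75
```

A metric like this only captures one narrow notion of fairness, which is exactly the difficulty: choosing which mathematical definition counts as “fair” is itself an ethical decision.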
Inequality and AI
With increased automation as a result of AI, companies can, in some cases, drastically reduce the size of the workforce they need. Revenues are then spread across fewer people, so stakeholders in AI-driven companies can fill their pockets with more cash, widening income inequality in society.
A Deloitte study found that 36% of AI-aware executives feel that job cuts from AI automation rise to the level of an ethical risk.
Companies that use AI in their recruiting may also introduce bias, which in turn leads to further inequality. Take Amazon, for example, which had to drop AI-based recruiting software because it preferred to hire men over women, as reported by Reuters.
Threat to human dignity
In 1976, arguably a time when AI was barely known to most, Joseph Weizenbaum argued that AI shouldn’t be used to replace people in positions that require respect and care, such as customer service representatives, therapists, judges, police officers, and soldiers.
Weizenbaum explains that if machines replace the types of jobs that require empathy, humans will find themselves alienated, devalued, and frustrated, and this represents a threat to our human dignity.
In today’s society, devaluing and alienating people is an obvious breach of ethics which rightly has people up in arms when it’s discovered. If AI ever reaches the point of sentience or sapience, then we also have the reverse to consider. As soon as we start thinking of machines as entities that can perceive, act and feel, it’s not so crazy to consider their rights.
Are machines destined to follow a similar set of rules that we humans follow or will they be governed by a much different set of rules?
Transparency and accountability
The AI industry undoubtedly needs greater transparency and accountability.
Companies like Microsoft are at the forefront of corporate AI responsibility and have published a transparent set of six ethical principles that guide their work and are rooted in their company values.
But what if something does go wrong? Today, we focus on “algorithmic accountability.” This means that companies should be held responsible for the results of their programmed algorithms.
As Search Enterprise AI points out, the main challenge of algorithmic accountability is not achieving it technically but getting companies to accept legal and ethical responsibility for it.
Force for good
Currently, AI is still driven as a force for good, for empowering people to achieve more than they could before and resolve many different and complex problems faced today.
But there is a big risk that AI can be built to cause harm, negatively disrupt ecosystems, and operate outside the boundaries of control. Many companies are still in the infancy of their AI applications, and very few have likely addressed the ethical use of AI in their business.
To stay above board, we need more and larger discussions about the ethical implications of AI. As VTT points out, there’s a lack of substance in the current discussion surrounding the ethics of AI. We shouldn’t all just wait for something to go wrong and learn from it; we should proactively make the industry a better and more ethical place.
We are contributing through small things
On the 18th of April, ten senior citizens will come to our office to play around with our product. Finland is focused on educating its citizens and giving Finns without any previous technical background the knowledge to use AI comfortably. We’re very proud to be part of the scheme. You can find more details about this mentorship program here.
As honesty is our core value, we’d like to mention that professional writers have helped us with this text.
GetJenny CEO & Co-founder