The Path to Ethical AI: Major Obstacles and Solutions



From ordering pizza through a chatbot to generating non-fiction texts and optimizing logistics, AI has made many amazing things possible. Not only has it allowed businesses to automate and optimize complex processes, but it has also helped people conduct research, analyze vast amounts of data, and improve the security of personal devices such as smartphones.

However, as AI-powered technologies grow and develop, so does their potential to help cybercriminals obtain private data. Many governments across the world have legitimate concerns about the security of sensitive data handled by AI-enabled tools and are already working on corresponding laws and guidelines. On top of the security risks, many people worry about how AI may change the way we interact, as well as about potential job losses.

Obviously, the path to truly ethical AI is difficult, and before we can reach it, we have to overcome the following obstacles.


1. Income Inequality

This problem has been making headlines for quite a while now. Many economists, scientists, and analysts have been researching the impact of AI on wages, and their conclusions are far from positive.

“My reading of the data is that technology is the main driver of the recent increases in inequality,” Technology Review quoted Erik Brynjolfsson, a management professor at MIT, as saying. “It’s the biggest factor.”

Indeed, Industry 4.0 is quickly approaching as more and more businesses consider automating their manufacturing processes. This time, however, AI takes automation to a whole other level: the technology is becoming more advanced and capable of replacing even more human workers. Despite claims that AI will also create many jobs, estimates of its net impact are far from comprehensive at this point.

It’s also quite possible that the wealthiest organizations, the ones that can afford AI, will take advantage of this high-end technology, improving their market positions and increasing their profits. As a result, thousands of smaller businesses and their workforces will fall further and further behind.

“To reduce or even mitigate the problem of wealth inequality and AI, the world’s governments will have to work together to come up with international regulations,” claims Martha Kane, a researcher at Studicus. “Since the issue of wealth inequality already exists and persists despite our efforts to prevent it, battling it will be incredibly difficult.”

2. AI Bias and Discrimination

Even though AI can process and analyze vast volumes of data with unmatched effectiveness, it is still prone to bias and discrimination. We’ve seen this a number of times already, and we increasingly realize that trusting AI with tasks like recruiting may not be such a good idea at this point.

Amazon, for example, found this out the hard way. The company was using a machine-learning tool to screen candidates, and the tool studied tons of data before teaching itself that male candidates were preferable (according to reports, it analyzed resumes submitted to the company over a 10-year period, the majority of which came from men). As a result, it downgraded female candidates and rated applicants in a heavily biased way.

According to the company, the tool was never actually used to evaluate candidates. We’ll never know exactly what happened there, but one thing is clear: AI bias is real, because it can be introduced during data preparation and then go unnoticed during data processing and analysis.
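To see how this can happen, here is a minimal, hypothetical sketch (the data and the feature are invented for illustration, not Amazon’s actual system): a model that simply estimates hiring rates from biased historical decisions will reproduce that bias.

```python
# Hypothetical historical hiring decisions. The keyword is a stand-in for
# any gender-correlated signal (e.g., "women's chess club" on a resume).
history = [
    # (resume_has_keyword, was_hired)
    (True, False), (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, True), (False, False),
]

def hiring_rate(records, has_keyword):
    """'Train' by estimating P(hired | keyword) from past decisions."""
    outcomes = [hired for kw, hired in records if kw == has_keyword]
    return sum(outcomes) / len(outcomes)

p_with_keyword = hiring_rate(history, True)      # 0.25
p_without_keyword = hiring_rate(history, False)  # 0.75

# Otherwise-identical candidates now get different scores purely because
# of a keyword that correlates with gender in the biased training data.
assert p_with_keyword < p_without_keyword
```

Nothing in the code mentions gender at all, which is exactly why such bias is hard to spot during data processing: the model has merely learned a proxy for it.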

An even more frustrating fact is that AI bias is incredibly difficult to fix because of the following challenges:

  • If bias is introduced during the analysis model construction stage, discovering its exact sources becomes incredibly difficult, let alone figuring out how to eliminate it

  • The vast majority of standard practices in deep learning fail to consider bias and discrimination issues

  • AI ignores social context. Many creators of AI models such as chatbots proudly claim that their products can be used for different tasks in different contexts, but applying them fairly requires an understanding of social context that these systems do not yet have

  • A lack of a definition of fairness. For an AI-powered model to minimize the impact of bias on data analysis, its creators have to define fairness in mathematical terms. That’s where the problem arises: fairness can be defined in a variety of ways, many of which are mutually exclusive.
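As a toy illustration of that mutual exclusivity (the groups and numbers are invented): when two groups have different base rates of qualification, even a perfect classifier cannot satisfy both equal true-positive rates and demographic parity (equal approval rates) at the same time.

```python
# Two groups with different base rates: (qualified, total) per group.
groups = {"A": (80, 100), "B": (40, 100)}

# Imagine a *perfect* classifier that approves exactly the qualified people.
def true_positive_rate(qualified, total):
    return 1.0  # every qualified person is approved

def approval_rate(qualified, total):
    return qualified / total  # only the qualified are approved

# One fairness definition (equal TPR across groups) is satisfied...
assert true_positive_rate(*groups["A"]) == true_positive_rate(*groups["B"])

# ...while another (demographic parity: equal approval rates) is violated,
# simply because the base rates differ: 0.8 vs 0.4.
assert approval_rate(*groups["A"]) != approval_rate(*groups["B"])
```

Any attempt to equalize the approval rates here would force the classifier to approve unqualified candidates in one group or reject qualified ones in the other, so the two definitions cannot be satisfied simultaneously.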

3. Security of Sensitive Data

This one is without a doubt the best-known risk to ethical AI. Since AI-powered systems and models require lots of personal information to be stored, cybercriminals may well find ways to access it. Governments around the world are already working on data protection laws for AI systems for this very reason.

To make sure that any data breach is recognized and mitigated as soon as possible, we’d have to come up with robust and comprehensive protection systems. Moreover, this also raises the need for international regulations for AI that uses sensitive data of citizens of more than one country. Doing so won’t be easy in today’s regulatory climate, but it’s definitely something we have to do to avoid massive data losses to cybercriminals.

Moreover, it’s possible that governments will use AI systems for military purposes, which creates the risk of cybercriminals gaining control of such tools. If used maliciously, they could cause significant damage, so proper protection mechanisms should be in place to keep AI safe from criminals and terrorists.

4. AI Stupidity

Didn’t expect to see this one on the list? Well, according to Andrew Moore, a Google vice president, AI “is currently very, very stupid,” especially compared to humans. Here’s how he explained the statement, as reported by CNET:

“It is really good at doing certain things which our brains can’t handle, but it’s not something we could press to do general-purpose reasoning involving things like analogies or creative thinking or jumping outside the box.”

Apparently, tasks like thinking outside the box and thinking creatively are still pretty much impossible for AI, which means it cannot be applied in many areas. On top of that, AI systems must go through a “learning” phase in which they learn to detect the right patterns and act appropriately. However, humans simply can’t provide every example a system may have to deal with in the real world, which leaves it vulnerable.

For example, cybercriminals can fool the system to arrive at specific conclusions by giving it an example that it wasn’t trained to deal with.
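A minimal sketch of why this works (a deliberately tiny nearest-centroid “model”; the labels and numbers are invented): most trained systems have no notion of “unknown,” so an input far outside anything in the training data still receives a confident label.

```python
def nearest_centroid(x, centroids):
    """Classify x as the label of the closest learned centroid.
    Note there is no 'I don't know' option: some label is always returned."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Centroids "learned" from training data.
centroids = {"cat": 1.0, "dog": 5.0}

in_distribution = nearest_centroid(2.0, centroids)     # "cat": reasonable
out_of_distribution = nearest_centroid(1000.0, centroids)  # "dog": confidently
# wrong, because the input resembles nothing the model was trained on,
# yet the model has no way to say so.
```

An attacker who knows this can craft inputs the system was never trained to handle and steer it toward whatever conclusion the decision rule happens to produce.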

At this point, we’re only starting to design AI systems that can handle tasks such as resolving issues involving repetitive processes. Until we train AI systems to recognize when someone is trying to fool them, the risk of mistakes remains.

The Path to Ethical AI Is Long

You know what they say: some things take time. Given the major obstacles we’ve just discussed, designing ethical AI will take a long while. Since so many industries stand to benefit from AI, natural language processing, and machine learning, we should definitely step up our efforts to make these technologies ethical and objective. After all, it’s for our own good.

Further Reading

Ethical AI: Lessons From Google AI Principles

Ethics in AI: When Will We Progress?

This UrIoTNews article is syndicated from DZone.