
The ethical dilemmas developers working on Artificial Intelligence products must consider

  • 10 min read
  • 29 Sep 2018

Facebook has recently come under scrutiny for sharing the data of millions of users without their consent. Its use of Artificial Intelligence to predict customers' behavior and then sell this information to advertisers has drawn heavy criticism and raised concerns over the privacy of users' data. Much of it, perhaps inadvertently, comes down to the 'smart use' of data by companies like Facebook.

As Artificial Intelligence continues to revolutionize industry, and as its applications rapidly spread across a spectrum of real-world domains, the need for regulated, responsible use of AI has become more important than ever. Serious ethical questions are being asked about how the technology is being used and how it impacts our lives, with Facebook being just one of many examples right now.

In this article, we look at some of these ethical concerns surrounding the use of AI.

Infringement of users’ data privacy


Probably the biggest ethical concern in the use of Artificial Intelligence and smart algorithms is the way companies use them to gain customer insights without first obtaining those customers' consent. Tracking customers' online activity, or mining the customer information available on social media and e-commerce websites to tailor marketing campaigns and targeted advertisements, is a clear breach of their privacy and sometimes even amounts to 'targeted harassment'.

In the case of Facebook, for example, there have been many high-profile instances of misuse and abuse of user data.

How far will tech giants such as Facebook go to fix what they have broken - the trust of so many of their users?

The European Union's General Data Protection Regulation (GDPR) is a positive step toward curbing this malpractice. However, such regulation needs to be implemented worldwide, which has not yet happened; there needs to be a universal agreement on the use of public data in the modern connected world. Individual businesses and developers must hold themselves accountable and ethically responsible when strategizing or designing AI products, keeping users' privacy in mind.

Risk of automation in the workplace


The most fundamental ethical issue that comes up when we talk about automation, or the introduction of Artificial Intelligence in the workplace, is how it affects the role of human workers. 'Will the AI replace them completely?' is the question most commonly asked. And if human effort is not replaced outright by AI and automation, how will workers' roles in the organization be affected?

The World Economic Forum (WEF) recently released its Future of Jobs report, which highlights the impact of technological advancements on the current workforce. The report states that machines will be able to perform half of today's job tasks within the next five years.

A few important takeaways from the report with regard to automation and its impact on skilled human workers are:

  • Existing jobs will be augmented by technology, creating new tasks and entirely new job roles - from piloting drones to remotely monitoring patients.
  • The inclusion of AI and smart algorithms will reduce the number of workers required for certain tasks.
  • The layoffs in certain job roles will also involve difficult transitions for many workers, as well as investment in reskilling and training for what is commonly referred to as collaborative automation.
  • As we enter the age of machine-augmented human productivity, employees will be trained to work alongside AI tools and systems, empowering them to work faster and more efficiently. This comes with an additional training cost that organizations will have to bear.

Artificial stupidity - how do we eliminate machine-made mistakes?


It goes without saying that learning happens over time, and it is no different for AI. AI systems are fed large amounts of training data covering real-world scenarios. Once a system is fully trained, it is asked to predict outcomes on real-world test data, and the model's accuracy is measured and then improved.
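
To make that workflow concrete, here is a minimal sketch in Python using scikit-learn and synthetic data; the dataset, model choice, and numbers are illustrative assumptions, not details of any particular product.

    # Minimal train-then-evaluate loop on synthetic data (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # A toy dataset standing in for "large amounts of training data".
    X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

    # Hold out data the model never sees during training, mimicking real-world test data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    # Accuracy on the held-out data is the signal used to judge and improve the model.
    print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))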

It is only natural, however, that a training set cannot cover every possible scenario, and there will be unusual scenarios or test cases that the AI is unprepared for or can be fooled by. Images whose patterns a deep neural network fails to identify are one example. Another is the presence of random dots in an image leading the AI to see a pattern where there really isn't one.
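
As a rough illustration of this kind of deceptive perception (a toy example of my own, not one from any real system), the sketch below trains a classifier and then feeds it the same held-out samples with random noise added; a noticeable share of predictions flips even though nothing meaningful about the inputs has changed.

    # Toy demonstration: inputs unlike anything seen in training can flip predictions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    clean_preds = model.predict(X_test)
    # Add random noise the model never encountered during training.
    noise = np.random.default_rng(0).normal(scale=2.0, size=X_test.shape)
    noisy_preds = model.predict(X_test + noise)

    print(f"Share of predictions changed by noise: {(clean_preds != noisy_preds).mean():.1%}")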

Deceptive perceptions like these can lead to unwanted errors. That isn't really the AI's fault; it is simply a consequence of how the system was trained. These errors, however, can prove costly to a business and lead to potential losses.

How do we eliminate these possibilities? How do we identify and weed out the training errors and inadequacies that go a long way in determining whether an AI system can work with near-100% accuracy? These are questions that need answering, and they lead us to the next problem: who takes accountability for the AI's failure?

If the AI fails or misbehaves, who takes the blame?


When an AI system designed for a particular task fails to perform it correctly for some reason, who is responsible? This aspect needs careful consideration and planning before any AI system can be adopted, especially at enterprise scale.

When a business adopts an AI system, it does so assuming the system is fail-safe. However, an AI system may not have been designed or trained effectively, because either:

  1. It was not trained properly using relevant datasets, or
  2. It was not used in a relevant context and, as a result, gave inaccurate predictions.

Any failure like this could lead to potentially millions in losses and could adversely affect the business, not to mention have unintended adverse effects on society.

Who is accountable in such cases? Is it the AI developer who designed the algorithm or the model? Or is it the end-user or the data scientist who is using the tool as a customer?

Clear expectations and accountabilities need to be defined at the very outset, and countermeasures need to be put in place to handle such failures, so that losses are minimal and the business is not severely impacted.

Bias in Artificial Intelligence - A key problem that needs addressing


One of the key questions in adopting Artificial Intelligence systems is whether they can be trusted to be impartial, fair, and neutral. In her NIPS 2017 keynote, Kate Crawford - Principal Researcher at Microsoft and Co-Founder & Director of Research at the AI Now Institute - argued that bias in AI cannot be treated as just a technical problem; the underlying social implications need to be considered as well. For example, machine learning software for detecting potential criminals that turns out to be biased against a particular race raises serious questions about its ethical credibility. Likewise, when a camera refuses to detect a particular kind of face because it does not fit the standard template of a human face in its training dataset, it naturally raises the racism debate.

Since AI algorithms are designed by humans, it is important that the data used to train them is as diverse as possible and factors in every possible kind of variation, so that these biases are avoided. AI is meant to produce fair, impartial predictions without any preset predispositions, and this remains one of the key challenges that researchers and AI developers have yet to overcome.
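
One practical, if partial, safeguard is to audit a trained model's behavior across sensitive groups before it ships. The sketch below is a hypothetical example of such a check on synthetic data; the group attribute and the metrics chosen are assumptions for illustration, not a complete fairness methodology.

    # Hypothetical bias audit: compare accuracy and positive-prediction rate per group.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=6000, n_features=15, random_state=1)
    # Hypothetical sensitive attribute (e.g. a demographic group), not used as a feature.
    group = np.random.default_rng(1).integers(0, 2, size=len(y))

    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=1)

    model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
    preds = model.predict(X_te)

    for g in (0, 1):
        mask = g_te == g
        print(f"group {g}: accuracy={(preds[mask] == y_te[mask]).mean():.3f}, "
              f"positive rate={preds[mask].mean():.3f}")
    # Large gaps between groups would be a red flag to investigate before release.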

The problem of Artificial Intelligence in cybersecurity


As AI revolutionizes the security landscape, it is also raising the bar for attackers. Breaching security systems is becoming harder over time, so attackers are resorting to state-of-the-art machine learning and other AI techniques to break into systems, while security professionals adopt AI mechanisms of their own to prevent and protect against these attacks. The cybersecurity firm Darktrace reported an attack in 2017 that used machine learning to observe and learn user behavior within a network. It is a classic case of the disastrous consequences we face when technology falls into the wrong hands and the necessary steps cannot be taken to tackle or prevent the unethical use of AI - in this case, a cyber attack.
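
On the defensive side, one common building block is anomaly detection over user activity. The sketch below is a simplified, hypothetical illustration of that idea (not Darktrace's actual method): fit a model on what 'normal' sessions look like, then flag sessions that deviate sharply.

    # Simplified anomaly detection over per-session user activity (features are illustrative).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(7)
    # Hypothetical features per session: [logins per hour, MB transferred, distinct hosts contacted]
    normal_activity = rng.normal(loc=[2.0, 50.0, 3.0], scale=[1.0, 15.0, 1.0], size=(1000, 3))

    detector = IsolationForest(contamination=0.01, random_state=7).fit(normal_activity)

    # A session moving far more data to far more hosts than usual.
    suspicious_session = np.array([[3.0, 900.0, 40.0]])
    print(detector.predict(suspicious_session))  # -1 means flagged as anomalous, 1 means normal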

The threat posed by a vulnerable AI system with no security measures in place - one that can be easily hacked and misused - needs no introduction. This is not a situation any organization wants to be in, especially after investing thousands or even millions of dollars in the technology.

When an AI system is developed, strict measures should be taken to ensure it is accessible only to a specific set of people and can be altered or changed only by its developers or other authorized personnel.
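
One small, concrete control in that direction is to verify a deployed model artifact against a hash recorded at release time, so unauthorized changes are caught before the model is loaded. The file path and recorded digest below are hypothetical placeholders; this is a sketch of one such measure, not a full access-control scheme.

    # Sketch: refuse to load a model artifact whose checksum differs from the approved release.
    import hashlib
    from pathlib import Path

    # Placeholder: in practice this digest would be recorded when the model is approved.
    EXPECTED_SHA256 = "replace-with-digest-recorded-at-release-time"

    def model_is_untampered(path: str) -> bool:
        """Return True only if the artifact's SHA-256 digest matches the recorded value."""
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        return digest == EXPECTED_SHA256

    if not model_is_untampered("models/production_model.pkl"):  # hypothetical path
        raise RuntimeError("Model artifact does not match the approved release; refusing to load.")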

Just because you can build an AI, should you?


The more potent the AI becomes, the more potentially devastating its applications can be. Whether it is replacing human soldiers with AI drones, or developing autonomous weapons - the unmitigated use of AI for warfare can have consequences far beyond imagination.

Earlier this year, we saw Google employees resign from the company over its ties with the Pentagon, protesting against the use of AI for military purposes. The employees were of the strong opinion that the technology they developed has no place on a battlefield and should ideally be used for the benefit of mankind, to make human lives better. Google isn't an isolated case of a tech giant lost in these murky waters.

Microsoft employees, too, protested the company's collaboration with US Immigration and Customs Enforcement (ICE) on building face recognition systems, especially after revelations that ICE had been confining immigrant children in cages and inhumanely separating asylum-seeking families at the US-Mexico border. Amazon is also one of the key vendors of facial recognition software to ICE, and its own employees have publicly pressed the company to drop such work.

While these companies have assured their employees of no direct involvement, it is quite clear that all the major tech giants are supplying key AI technology to the government for defensive (or offensive, who knows) military measures.

The secure and ethical use of Artificial Intelligence for non-destructive purposes remains one of the biggest challenges in its adoption today.

Today, there are many risks and caveats associated with implementing an AI system. Given the tools and techniques currently at our disposal, it is far-fetched to imagine implementing a flawless Artificial Intelligence within a given infrastructure. While we weigh all the risks involved, it is also worth reiterating one fact: looking at the bigger picture, technological advancements effectively translate to better lives for everyone. AI has tremendous potential; whether its implementation is responsible is completely down to us, humans.

Read more


Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

New cybersecurity threats posed by artificial intelligence

Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers