
Teaching AI ethics - Trick or Treat?

  • 5 min read
  • 31 Oct 2018


The Public Voice Coalition announced the Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018 last week. “The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI”, reads EPIC’s guidelines page.

Artificial Intelligence ethics aims to improve the design and use of AI, minimize risks to society, and ensure the protection of human rights. AI ethics focuses on values such as transparency, fairness, reliability, validity, accountability, accuracy, and public safety.

Why teach AI ethics?


Without AI ethics, the wonders of AI can turn into the dangers of AI, posing serious threats to society and even to human lives. One example came earlier this year, when an autonomous Uber car, a 2017 Volvo SUV traveling at roughly 40 miles an hour, struck and killed a woman crossing the street in Arizona. The incident highlights the challenges and nuances of building an AI system with the right set of values embedded in it. Because many different factors shape the outcomes an algorithm is designed to reach, those criteria are not always shared transparently with users and authorities. Other non-life-threatening but still troubling examples include the time Google Allo suggested a turban emoji among three emoji responses to a gun emoji, and when Microsoft’s Twitter bot Tay tweeted racist and sexist comments.

AI scientists should be taught early on that these values are meant to be at the forefront when deciding on factors such as the design, logic, techniques, and outcome of an AI project.

Universities and organizations promoting learning about AI ethics


What’s encouraging is that organizations and universities are taking steps (slowly but surely) to promote the importance of teaching ethics to students and employees working with AI or machine learning systems. For instance, the World Economic Forum Global Future Councils on Artificial Intelligence and Robotics has launched a “Teaching AI ethics” project that includes creating a repository of actionable and useful materials for faculty wishing to add social inquiry and discourse to their AI coursework. This is a great opportunity, as the project connects professors from around the world and offers them a platform to share, learn, and customize their curricula to include a focus on AI ethics.

Cornell, Harvard, MIT, Stanford, and the University of Texas are some of the universities that recently introduced courses on ethics in designing autonomous and intelligent systems. These courses emphasize the ethical, legal, and policy implications of AI, and teach students how to deal with challenges such as biased data sets.

Mozilla has taken the initiative to make people more aware of the social implications of AI through its Creative Media Awards. “We’re seeking projects that explore artificial intelligence and machine learning. In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it’s more important than ever to support artwork and advocacy work that educates and engages internet users”, reads the Mozilla awards page. Mozilla has also announced a $3.5 million award for its Responsible Computer Science Challenge, which encourages the teaching of ethical coding to CS graduates.

Other examples include Google’s AI ethics principles, announced back in June, which the company has pledged to abide by when developing AI projects, and SAP’s AI ethics guidelines and advisory panel, created last month. SAP says it designed these guidelines because it “considers the ethical use of data a core value. We want to create software that enables intelligent enterprise and actually improves people’s lives. Such principles will serve as the basis to make AI a technology that augments human talent”.

Other organizations have come out with practical tools. DrivenData, for example, has released Deon, a handy command-line tool that helps data scientists add an ethics checklist to their data science projects, making sure each project is designed with ethics at the center.
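As a quick illustration, here is how a team might generate Deon’s default checklist for a project. This is a minimal sketch, assuming the tool is installed from PyPI, that the -o/--output option is still available in the current release, and that my-analysis-project is a hypothetical project directory:

    pip install deon           # install the checklist tool from PyPI
    cd my-analysis-project     # hypothetical project directory
    deon -o ETHICS.md          # write the default ethics checklist to ETHICS.md

The generated checklist can then be committed alongside the project’s code and revisited as the work moves from data collection through modeling and deployment.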

Some, however, feel that requiring an AI system to explain how it reached a particular outcome (in the name of transparency) can put a damper on its capabilities. For instance, according to David Weinberger, a senior researcher at the Harvard Berkman Klein Center for Internet & Society, “demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid”.


Teaching AI ethics - trick or treat?


AI has transformed the world as we know it. It has taken over different spheres of our lives and made things much simpler for us. However, to make sure that AI continues to deliver its transformative and evolutionary benefits effectively, we need ethics. From governments to tech organizations to young data scientists, everyone must use this tech responsibly.

Having AI ethics in place is an integral part of the AI development process and will shape a healthy future for robotics and artificial intelligence. That is why teaching AI ethics is a sure-shot treat. It is a TREAT that will boost the productivity of humans working in AI and help build a better tomorrow.