
Bryan Cantrill on the changing ethical dilemmas in Software Engineering

  • 6 min read
  • 17 May 2019


Earlier this month at the Craft Conference in Budapest, Bryan Cantrill (Chief Technology Officer at Joyent) gave a talk on “Andreessen's Corollary: Ethical Dilemmas in Software Engineering”.

In 2011, Marc Andreessen penned the essay ‘Why Software Is Eating The World’ in The Wall Street Journal. In it, he argued that software was spreading into every field and poised to take over large swathes of the economy. He believed, back in 2011, that “many of the prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses.”

Eight years later, Bryan Cantrill believes this prophecy has clearly come true. According to the article ‘Software Engineering Code of Ethics’, published in 1997 by the ACM (Association for Computing Machinery), a code is not a simple ethical algorithm that generates ethical judgements. In some situations, its principles can conflict with one another, requiring a software engineer to exercise judgement in a way that remains consistent with the spirit of the code. The article lays out a set of principles for software engineers to follow; according to Bryan, these principles are difficult to live up to.

Some of these principles expect software engineers to ensure that the product they are working on is useful and of acceptable quality to the public, the employer, the client, and the user, and that it is completed on time, at reasonable cost, and free of errors. The specifications should be well documented, satisfy the user’s requirements, and have the client’s approval. Projects should follow an appropriate methodology and good management, and software engineers should give realistic estimates of the cost, schedule, and outcome of any project on which they work or propose to work.

The guiding context surrounding the code of ethics remains timeless, but over time these principles have become dated. With software pervading every industry, it is difficult for software engineers to follow these old principles and remain ethically sound.

Bryan calls this era an ‘ethical grey area’ for software engineers. Software’s contact with our broader world has brought with it novel ethical dilemmas for those who endeavor to build it. More than ever, software engineers are likely to find themselves on new frontiers with respect to society, the law, or their own moral compass. Often without any formal training in, or even acknowledgement of, the ethical dimensions of their work, software engineers have to make ethical judgments.

Ethical dilemmas in software development since Andreessen’s prophecy


2012 : Facebook began performing emotional manipulation experiments, in the name of research or to generate revenue, filtering users’ news feeds to show predominantly positive or negative posts.

2013 : Zenefits, a Silicon Valley startup, needed its employees certified by the state of California, which required sitting through 52 hours of training delivered through the web browser. A manager created a hack, a browser macro, that made it possible to complete the pre-licensing education requirement in less than 52 hours. It was passed on to almost 100 Zenefits employees to automate the process for them too.

2014 : Uber entered the Portland market illegally, aided by a software tool called ‘Greyball’. Uber used Greyball to intentionally evade Portland Bureau of Transportation (PBOT) officers and deny their ride requests.

2015 : Google Photos began mislabeling photo captions, at one point mistakenly identifying a dark-skinned individual as a ‘gorilla’. Google reacted promptly and removed the label, but the incident highlighted a real weakness of artificial intelligence (AI): it relies on biased human classification, at times amplifying repeated patterns. Google was left defending a mistake it had not intentionally trained into its network.

2016 : Tesla launched its first ‘Autopilot’ car. It offered traffic-aware cruise control and steering-assist features, but was sold and marketed as ‘Autopilot’. In one accident, the driver was killed, possibly because he believed the car would drive itself. This was a serious problem: Tesla was using two cameras to judge movement while driving, and the system was meant as an enhancement to the driver, not a replacement.

2017 : Facebook faced criticism over the anti-Rohingya violence in Myanmar. Facebook messages were used to coordinate violence against the Rohingya, a mostly Muslim minority community, in which a reported 75,000 people died. Facebook did not enable or advocate the violence; it was merely a communication platform used for the wrong purpose. But Facebook could have reduced the gravity of the situation by acting promptly and preventing such messages from circulating. This shows that not everything should be automated, and that human judgement cannot be replaced anytime soon.

2018 : In the wake of the Pittsburgh synagogue shooting, it emerged that the alleged shooter had used the Gab platform to post against Jews. Gab, which bills itself as “the free speech social network,” is small compared to mainstream social media platforms but has an avid user base. Joyent provided infrastructure to Gab, and quickly removed it from its platform after the horrific incident.

2019 : After the Boeing 737 MAX crashes (Lion Air flight JT610 and Ethiopian Airlines flight ET302), reports emerged that the aircraft’s Maneuvering Characteristics Augmentation System (MCAS) played a role. A faulty sensor erroneously reported that the airplane was stalling, and the false report triggered MCAS, an automated system that activates without the pilot’s input. The crew found that manual trim operation was not working.

These are some examples of ethical dilemmas since Andreessen’s prophecy. As seen, all of these incidents were the result of ethical decisions gone wrong. It is clear that ‘what is right for software is not necessarily right for society’.

How to deal with these ethical dilemmas?


In the summer of 2018, the ACM came up with a new code of Ethics:

  • Contribute to society and human well-being
  • Avoid harm
  • Be honest and trustworthy


The ACM has also launched an Integrity Project, which will include case studies and an “Ask an Ethicist” feature. These efforts will help software engineers facing ethical dilemmas, and pave the way for discussions that result in behavior consistent with the code of ethics. Organisations should encourage such discussions, helping like-minded people perpetuate a culture that considers ethical consequences. As software’s footprint continues to grow, the ethical dilemmas of software engineers will only expand.

These ethical dilemmas are Andreessen’s corollary, and software engineers must address them collectively and directly.

Software engineers agree that the ethical dilemmas they face are evolving:

https://twitter.com/MA_Hanin/status/1129082836512911360

Watch the talk by Bryan Cantrill at Craft Conference.
