An Ethical license for AI

AI, like all technologies, can be beneficial as well as harmful to both individuals and society. This has been depicted time and again in motion pictures, be it Kubrick’s 2001: A Space Odyssey or Marvel’s Avengers: Age of Ultron. Technology is neither good nor bad; nor is it neutral. What matters is how we use technology: to what ends, and by what means. There are choices to be made and compromises to be struck to ensure that the benefits are realized while minimizing, or suitably managing, the problems. Forgoing a technology due to potential problems might not be the most desirable option, though, as a good enough solution in an imperfect world might, on balance, be preferable to the imperfect world on its own. The question, however, is what counts as “good enough”?

[The AI from 2001: A Space Odyssey]
[Source: Pinterest]

In the last few years, AI has brought solutions to problems that once seemed impossible to solve. Over this time, our attitude towards AI has swung between hope that it will help us overcome our obstacles and fear that it will become a threat to our very existence. Various principles are being used to filter out what we want and do not want from AI, but they are not enough, as they fall short of describing how any given solution adheres to them. The latest hope is that design methodologies will enable us to apply these principles, but of course, as has always been the case with AI, we do not know if that will be enough.

Efforts to realize AI’s value while minimizing its problems have been complicated by three challenges:

  • The definitional challenge: understanding what AI is, and therefore what the problems are.
  • The challenge of aligning AI solutions with social norms.
  • The challenge of bridging different social worlds: the different cultural segments of society that shape how their members understand and think about the world.

The definitional challenge: What is AI, and what are the problems?

A useful working definition of AI comes from Nils Nilsson: “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” Beyond this, there is no widely agreed upon, precise definition of what AI is and what it is not. Perhaps this lack of a precise definition has helped the field grow, as it has allowed practitioners to borrow ideas and techniques from other fields in pursuit of their goals.


Regardless of where one draws the line between “intelligent” technologies and others, the growing concern for ethical AI is not due to new technology that, like CRISPR or genetically modified organisms (GMOs), enables us to do new and unprecedented things. The concern is due to dramatic reductions in price-performance that enable existing technologies to be applied in a broad range of new contexts. The ethical challenges presented by AI stem not from some unique capability of the technology, but from the ability to deploy it easily and cheaply at scale. It is the scale of this deployment that is disruptive.

Aligning technical solutions with social norms

The second challenge, aligning technical solutions with social norms, is a problem of not seeing the wood for the trees. The technical community focuses on the details. The problem of creating a perfect car, for example, becomes the problem of defining how the car should behave in different contexts: what to do when approaching a red light, when a pedestrian stumbles in front of the car, and so on. Designing correct car behavior thus becomes a question of identifying the different contexts and behaviors, and then coming up with an appropriate response for each. Perhaps Mr. Musk already has this in mind.
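The context-and-response framing above can be sketched as a lookup table. This is a deliberately naive illustration, not how any real autonomous vehicle works; the context names and responses are invented purely to show the pattern.

```python
# A naive sketch of the reductionist approach: enumerate contexts and
# hard-code a response for each. All names here are hypothetical.
RESPONSES = {
    "red_light": "stop",
    "green_light": "proceed",
    "pedestrian_in_path": "emergency_brake",
}

def decide(context: str) -> str:
    """Return a canned response, or a fallback for unforeseen contexts."""
    # The weakness of the approach lives in this fallback: the real world
    # keeps producing contexts that nobody enumerated in advance.
    return RESPONSES.get(context, "unknown_context")

print(decide("red_light"))          # stop
print(decide("kangaroo_crossing"))  # unknown_context
```

The fallback branch is the whole problem in miniature: every context not foreseen at design time falls through it.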

This reductionist approach is rightly seen as problematic, as whether a particular response is ethical is often an “it depends” question. For autonomous cars, this manifests in the trolley problem, a thought experiment first posed in its modern form by Philippa Foot. The trolley problem presents a dilemma in which a human operator must choose whether to pull a lever that will change the track a trolley is running down. A group of people is standing on the first track, while a separate individual is on the second, so the operator is forced to choose between the group dying through their inaction and the individual dying through their action. The point is that there is no single “correct” choice: any choice will be based on subjective values applied to the circumstances at hand, and one cannot refuse to choose.

There will always be another, sometimes unforeseen, scenario to consider; newly defined scenarios may well conflict with existing ones, largely because these systems work with human-defined categories and types that are, by their nature, fluid and imprecise. Changing the operating context of a solution can also undo all the hard work put into considering scenarios, as assumptions about demographics or the nature of the environment, and therefore the applicable scenarios, might no longer hold. Autonomous cars designed in Europe, for example, can be confused by Australian wildlife. A medical diagnosis solution might succeed in the lab but fail in the real world.

The natural bias of practitioners leads them to think that “fair” or “ethical” can be defined algorithmically. It cannot, and this is a blind spot common among technologists.
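One way to see why “fair” resists a single algorithmic definition is that different formalizations of fairness can disagree about the same decisions. The sketch below uses invented numbers for a hypothetical screening decision across two groups: the classifier looks fair under one common metric (equal true positive rates) and unfair under another (equal selection rates).

```python
# Each person is (actually_qualified, selected). All numbers are invented
# for illustration; the groups simply have different base rates.
group_a = [(1, 1)] * 8 + [(0, 0)] * 2   # 8 of 10 qualified, all selected
group_b = [(1, 1)] * 2 + [(0, 0)] * 8   # 2 of 10 qualified, all selected

def selection_rate(group):
    """Fraction of the group that was selected (demographic parity's metric)."""
    return sum(sel for _, sel in group) / len(group)

def true_positive_rate(group):
    """Fraction of qualified people who were selected (equal opportunity's metric)."""
    qualified = [(q, s) for q, s in group if q == 1]
    return sum(s for _, s in qualified) / len(qualified)

# Equal opportunity: satisfied, both groups' TPR is 1.0.
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 1.0
# Demographic parity: violated, selection rates are 0.8 vs 0.2.
print(selection_rate(group_a), selection_rate(group_b))          # 0.8 0.2
```

Whichever metric the system optimizes, someone applying the other definition can reasonably call the result unfair; choosing between them is a value judgment, not a computation.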

Bridging social worlds

The third and final challenge is bridging different social worlds. All of us have our own uniquely lived experience, an individual history that has shaped who we are and how we approach the world and society. While we broadly agree that our AI solutions should be ethical, should adhere to principles of fairness, and should avoid harm, it is also reasonable to disagree on which trade-offs are required to translate these principles into practice, and on how these principles are enacted. Applying the same clearly defined principle in different social worlds can produce very different outcomes. It is quite possible, in our open and diverse society, for different teams working from the same set of principles to create very different solutions. These differences can easily be enough for one group to consider another group’s solution unethical.

The challenge of developing ethical AI solutions can be summarized as a game we cannot win, cannot break even, and cannot just leave. We cannot win because framing ethics in terms of a single social world means prioritizing that social world over others. We cannot break even because landing on a middle ground, a bridge connecting different social worlds, would mean our technical solution will be rife with exceptions, corner cases and problems that we might consider unethical. Essentially, we are in a stalemate. 

To move beyond this stalemate, we need to find a way to address all of these challenges: a method that enables us to address the concerns of all involved social worlds, that enables us to consider both the system and the community it touches, and one that also provides us with a mechanism for managing the conflicts and uncertainty that are inherent in any automated decision system.


