Avoiding Algorithmic Bias

Lori Carter


Student Materials


Ethics background required: Ideally, students should be familiar with the 4 ethical frameworks presented in the first year curriculum (virtue ethics, deontology, utilitarianism, analogies). If they (or the instructor) are not, a brief summary is provided in the reflection.

Subject matter referred to in this lab: Algorithms

Placement in overall ethics curriculum:

Time required: Approximately 30 minutes (see the timings in the lesson plan)

Learning objectives:

Ethical issues to be considered: Algorithmic bias, diversity, transparency



Guide for Instructors

Lesson plan

Introduction (5 minutes)

Ask students for the definition of “algorithm.” (Something like: a step-by-step procedure that can serve as a guideline for producing a program the computer can use to solve a problem.)

Remind students that their college application was probably subject to some sort of algorithm to determine whether they were admitted. Ask class to suggest what some of the steps might have been in the decision making process. (This is similar to question 1 from the worksheet.)

Here are some possibilities:

To students: Generally, these types of algorithms are point-based. Why would a point-based method be adopted? What are the benefits? (probably something about objectivity)

Do you see any problems with the point-based method? (Possible responses: not everything can be boiled down to numbers; it is hard to decide how many points something is worth; some students may find it easier to meet the requirements, for example because their parents can pay for tutoring.)
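If it helps the discussion, the point-based method can be sketched in a few lines of code. Everything below is invented for illustration: the criteria, the weights, and the cutoff are hypothetical, not taken from any real admissions office. The point to draw out is that a person had to choose every number here, and each choice can embed bias.

```python
# A hypothetical point-based admissions algorithm (illustrative only).
# All criteria, weights, and the cutoff are made up for class discussion.

def admission_score(applicant):
    """Sum points for each criterion the applicant meets."""
    score = 0
    score += applicant["gpa"] * 10               # e.g., 3.8 GPA -> 38 points
    score += applicant["test_score"] // 100      # e.g., 1300 -> 13 points
    score += 5 * len(applicant["activities"])    # 5 points per activity
    if applicant["essay_rated_strong"]:
        score += 10                              # reader's judgment -> points
    return score

def admit(applicant, cutoff=75):
    """Admit anyone whose total meets the cutoff."""
    return admission_score(applicant) >= cutoff

applicant = {
    "gpa": 3.8,
    "test_score": 1300,
    "activities": ["debate", "soccer"],
    "essay_rated_strong": True,
}
print(admission_score(applicant))  # 71.0
print(admit(applicant))            # False
```

Students can be asked where bias could enter even in this tiny sketch: Who decided that an activity is worth 5 points? Who rates the essay, and against what standard? Who can afford test preparation?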

Activity 1 (5 min)

Facilitate the division of students into groups of 2-4 members. Each group should be given one of the scenarios in the worksheet. Direct the students to read the scenario as a group and discuss what went right and what went wrong.

Preparation for activity 2 (5 min)

After the previous group activity is finished, the instructor should read or summarize the following:

Point-based algorithms for making decisions are a rudimentary form of Artificial Intelligence. Kaplan and Haenlein recently defined AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” Our example of a point-based algorithm for college admission is used to predict who will most likely succeed at a particular college or university. Similarly, each of the scenarios that you read involved simple predictive algorithms.

We hear frequently of the challenges with AI, and even simple predictive algorithms share the same issues. Recently, the EU published a set of guidelines intended to reduce the chance of ethical problems in such algorithms. As we go through these guidelines, think back on our college admissions example and the example that you read in your group, and consider whether any of the guidelines would be helpful if you were in charge of developing such an algorithm. Be prepared to discuss your observations with your group.

Note to instructor: The EU Ethics Guidelines for Trustworthy AI should be made available to students so they can follow along.

Note to instructor: Definition of AI. There are some weaknesses to the definition of AI given by Kaplan and Haenlein. In particular, many of the terms used in the definition do not have self-evident meanings, so it would probably be difficult to use this definition to decide whether something is or is not an example of AI. Try not to focus on the issue of defining AI – keep the focus on the scenarios and the ethical issues they raise.

Activity 2 (5 min)

Students discuss with their groups the observations they made regarding the potential benefits of applying one or more of the guidelines to the college admissions scenario or to the scenario they read as a group.

Reflection (10 min)

Have each group report on one or more of the guidelines that would have helped in the design of either the college admission scenario or the scenario that they read. If they choose to use their group scenario, have the group first summarize the scenario. It is probably best to ask if other groups have comments on that same scenario before moving on.

Ask the students: Who can tell me which ethical framework is being used by applying a set of guidelines to vet a predictive algorithm? (virtue ethics, deontological, analogies, utilitarianism?) If they need a refresher:

Ask the students if they believe that this was an appropriate framework to use in this case. Why or why not?

Possible answers:



All of the scenarios were excerpted from O’Neil (2016). Cathy O’Neil is also the author of Weapons of Math Destruction.

Principles are directly from the text of Floridi (2019), compiled by the High-Level Expert Group on Artificial Intelligence set up by the European Commission.

The AI definition is from Kaplan and Haenlein (2019).

Additional reading

Lee, Resnick, and Barton (2019)


Floridi, Luciano. 2019. “Establishing the Rules for Building Trustworthy AI.” Nature Machine Intelligence 1 (6): 261–62. https://ai.bsa.org/wp-content/uploads/2019/09/AIHLEG_EthicsGuidelinesforTrustworthyAI-ENpdf.pdf.

Kaplan, Andreas, and Michael Haenlein. 2019. “Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence.” Business Horizons 62 (1): 15–25.

Lee, Nicol Turner, Paul Resnick, and Genie Barton. 2019. “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms.” https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

O’Neil, Cathy. 2016. “How Algorithms Rule Our Working Lives.” https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives.