IS457

Fairness in Socio-technical Systems

Credits: 1

Term: Both

Description

We interact with a variety of services and systems in our daily lives. While manual labor still plays a part in these systems, other parts are becoming increasingly automated by artificial intelligence (AI). In general, we expect such systems to treat users fairly, and this expectation is strengthened when a system uses AI built on big data and complex algorithms. Compared to human decision-making, which may be subjective, algorithmic systems are expected to operate objectively and treat users fairly. In recent years, however, concerns have been rising about the potential harms of these systems, rooted in biases embedded in socio-technical systems. The inherently opaque nature of AI systems makes the problem worse. For example, YouTube recommends further videos when a video finishes playing. These recommendations help users find interesting content among a tremendous number of videos, but it is often unclear how or why a particular video is recommended. What happens if the recommendation algorithm contains biases, such as favoring videos with a specific (political) viewpoint? Whether such biases are intentional or not, users would be exposed to a certain set of videos and are likely to be influenced by them. YouTube is only one of many examples, as AI systems are becoming pervasive: they are actively used in healthcare, hiring, financial services, advertising, policymaking, and internet services, among other areas. It is therefore crucial to ensure that these systems work fairly, without hidden biases. It is also easy to overlook that biases may be embedded not only in the AI components but also in the established processes and human operators within these systems.
The goal of this course is to give students an extensive understanding of the diverse concepts of fairness and bias in socio-technical systems, through examples across many domains, from healthcare to internet search. Students will then learn how to audit real-world systems for fairness and bias through recent case studies. The course also aims to build an understanding of public concerns about AI systems and to help students think deeply about ethical AI within multiple social contexts.

Requisites

Prerequisites: (IS200/IS111/SMT111/CS101/COR-IS1704) & (ANLY104/IS217/MGMT108/CS105)

Co-requisites: None

Anti-requisites: None

Attributes

Department: SCIS

Course Level: Undergraduate

Tracks: IS/T4BS: Smart-City Management & Technology Track

Areas: Business Options, Business-Oriented Electives, Econ Major Rel/Econ Options, IS Depth Electives, IT Solution Development Electives, Politics, Law & Economics Electives, Public Pol and Public Mgmt Electives, Smart-City Management & Tech Electives, Social Sciences/PLE Major-related

Learning Outcomes

1. To gain knowledge about the different notions of fairness and bias in machine learning algorithms and socio-technical systems
2. To understand the potential bias in machine learning models
3. To understand the potential bias in publicly available datasets
4. To apply well-known algorithms (libraries) to different datasets and examine potential bias
5. To collect the data from target socio-technical systems and examine potential bias therein
6. To understand the concept of interpretability of machine learning models
7. To understand mechanisms to ensure fairness in classification
8. To understand ethical considerations in AI
9. To understand the impact of AI for social goods
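As a flavor of outcomes 4 and 7, one widely used fairness notion is demographic parity: a classifier's positive-prediction rate should be similar across groups. The sketch below is illustrative only (it is not drawn from the course materials); the function name, group labels, and data are all hypothetical.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels ("A" or "B"), aligned with predictions
    """
    rate = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["A"] - rate["B"])

# Hypothetical screening outputs: 1 = positive decision, 0 = negative.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near 0 suggests parity between the groups; large gaps, as here, flag a potential bias worth auditing further. Libraries such as Fairlearn and AIF360 provide this and many related metrics out of the box.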

Graduate Learning Outcomes

Critical thinking & problem solving, Ethics and social responsibility

Competencies

Project Management, Design Thinking Practice, Algorithm Analysis, Failure Analysis, Software Testing