
Lecture 09: Ethics (FSDL 2022)

The Full Stack · 7 min read

Based on The Full Stack's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Ethical disputes in tech often come from optimizing measurable proxies that don’t track the real objective, creating alignment failures.

Briefing

Ethics in tech and machine learning comes down to managing three recurring tensions—alignment failures, stakeholder trade-offs, and the need for humility—while staying grounded in real-world cases rather than abstract thought experiments. The lecture frames ethics as a practical vocabulary for describing what people find acceptable or unacceptable in specific deployments, then uses that lens to show how “good intentions” can still produce harmful outcomes when systems optimize the wrong targets, when metrics can’t capture what truly matters, and when engineers overestimate what they can safely control.

A central theme is the “proxy problem”: teams often optimize measurable surrogates (like training loss, accuracy, or short-term engagement) that only loosely correlate with the real objective (like user value, long-term welfare, or safety). That mismatch can produce unintended harm even when the model is performing well on the metric it was trained to maximize. The lecture connects this to familiar ML patterns—training/validation gaps and downstream utility losses—and to non-ML examples where planners chose what was easy to measure instead of what was important.
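To make the proxy problem concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the lecture provides no code): a single "clickbait intensity" knob, a click count that rises monotonically with it, and a long-term user-value curve that peaks and then declines. Optimizing the measurable proxy pushes the knob well past the setting that actually serves users.

```python
import numpy as np

# Minimal sketch of the proxy problem. All quantities are hypothetical:
# a design knob, a click model that always rises with the knob, and a
# long-term user-value curve that peaks partway and then falls.
intensity = np.linspace(0.0, 1.0, 21)            # hypothetical design knob

clicks = 0.5 + 0.5 * intensity                   # measurable proxy: always rises
true_value = 4 * intensity * (1.0 - intensity)   # real objective: peaks at 0.5

print(f"proxy-optimal setting: {intensity[np.argmax(clicks)]:.2f}")      # 1.00
print(f"value-optimal setting: {intensity[np.argmax(true_value)]:.2f}")  # 0.50
```

The two optima diverge (1.00 versus 0.50), which is the alignment failure in miniature: the system performing best on its metric is not the system doing the most good.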

The second tension is trade-offs among stakeholders. Some ethical disputes arise when what one group wants conflicts with what another group wants, including people who can’t easily provide input (like future generations or those affected indirectly). The asteroid-and-orphans hypothetical is used to illustrate why these decisions are hard: engineers may be able to compute Pareto fronts for technical metrics, but they’re less equipped to quantify human rights, utility, or acceptable losses across time. That gap in expertise is why humility is presented as the “appropriate response”—a mindset that pushes teams to seek help, consult domain experts, and treat their own assumptions as provisional.
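As a sketch of what the "technical" half of that division of labor looks like, the snippet below computes a Pareto front over two hypothetical metrics; the candidate scores and metric names are made up, not from the lecture. The front tells you which trade-offs are efficient, but picking a point on it is the value judgment engineers are less equipped to make alone.

```python
import numpy as np

# Hypothetical candidate systems scored on two metrics we want to
# maximize (say, accuracy and privacy). A point is Pareto-optimal if no
# other point is at least as good on both metrics and better on one.
points = np.array([
    [0.90, 0.20], [0.85, 0.60], [0.70, 0.80],
    [0.60, 0.85], [0.80, 0.50], [0.50, 0.95],
])

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return the subset of rows not dominated by any other row."""
    keep = []
    for i, p in enumerate(scores):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(scores) if j != i
        )
        if not dominated:
            keep.append(i)
    return scores[keep]

print(pareto_front(points))
# The front narrows the options to efficient trade-offs; deciding which
# efficient point is *right* is the ethical question, not a computation.
```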

The lecture then applies these ideas to concrete tech controversies. Public trust in large tech companies has eroded amid scandals involving government interference, disinformation, and manipulative content systems. It argues the industry should borrow from professional ethics cultures in other fields, especially engineering traditions like Canada's engineering oath and the human-subjects research framework shaped by historical abuses and codified in regulations such as the Declaration of Helsinki and the U.S. National Research Act of 1974.

It also highlights ethical concerns that scale with business incentives: carbon emissions from compute, deceptive “dark patterns,” and growth hacking tactics that boost short-term metrics while degrading user goodwill. In machine learning specifically, ethics becomes more urgent because models touch human lives more directly, are wrong in probabilistic ways, and involve many stakeholders. Four recurring questions anchor the discussion: whether a model is fair, whether it’s accountable, who owns the data, and whether the system should be built at all.

The COMPAS criminal-justice risk-scoring example illustrates how fairness definitions can conflict and how equalizing one metric can still increase harm for a subgroup. The lecture argues that sometimes the right ethical move is not "fix the model," but "question whether the model should exist," especially when systems are proprietary, uninterpretable, and trained on proxies that reflect biased institutions.

Finally, the lecture extends ethics to AI’s frontier: hype and capability overclaims (“AI snake oil”) can mislead users and trigger backlash, while longer-term risks include self-improving systems and alignment failures framed through Bostrom’s “astronomical waste” and the paperclip maximizer thought experiment. The closing message is not only about avoiding harm: it points to medicine and responsible ML practices—like auditing, failure-mode analysis, and clinical-trial style standards—as evidence that ethics can be operationalized, and that ML tools can also reduce suffering and expand access to benefits when built with care.

Cornell Notes

The lecture argues that tech ethics is best handled through concrete cases and three recurring tensions: alignment failures (optimizing proxies that don't match real goals), stakeholder trade-offs (including harms to people who can't easily consent), and humility (recognizing limits and seeking expertise). It uses examples from automated weapons, criminal-justice risk scoring (COMPAS), dark patterns, carbon emissions, and data governance to show how "ethical" outcomes can fail even when teams start with good intentions. In machine learning, ethics intensifies because models are probabilistic, touch human lives directly, and involve many stakeholders. The COMPAS case illustrates how fairness metrics can be incompatible and how equalizing one measure can still increase false positives for Black defendants. The lecture concludes that ethics should also be operational, through accountability, auditing, and user feedback, rather than purely philosophical.

What is the “proxy problem,” and why does it matter for ethical outcomes in machine learning?

The proxy problem is the gap between what teams can measure and optimize versus what they actually care about. In ML, training loss is easy to optimize, but real goals often live elsewhere—like validation/test performance, downstream utility, or user value in production. The lecture notes that optimizing training loss can worsen validation loss, and that higher classification accuracy can reduce embedding utility in downstream tasks. Ethically, this means a system can look successful on its chosen metric while harming the underlying objective (e.g., user welfare or safety) because the correlation between proxy and goal is weak.
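A minimal, invented demonstration of that gap (not the lecture's code) is classic polynomial overfitting: driving the proxy, training error, toward zero makes the held-out error worse.

```python
import numpy as np

# Invented overfitting demo: training loss is the proxy; held-out loss is
# closer to the real objective. A high-degree polynomial drives the proxy
# to ~zero while held-out error gets worse. (numpy may warn that the
# degree-14 fit is poorly conditioned, which is part of the point.)
rng = np.random.default_rng(42)
x_train = np.sort(rng.uniform(-1, 1, 15))
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, 15)
x_val = np.sort(rng.uniform(-1, 1, 200))
y_val = np.sin(3 * x_val) + rng.normal(0, 0.2, 200)

for degree in (3, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, val MSE {val_mse:.4f}")
```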

Why do fairness disputes persist even when teams try to “equalize bias” from the start?

Fairness is not one thing. The lecture emphasizes that multiple fairness definitions are mutually incompatible, so equalizing one metric can worsen another. The COMPAS example is used to show this: COMPAS aimed to calibrate rearrest probabilities so that a given score corresponded to the same rearrest likelihood across racial groups. ProPublica's analysis found that even with calibration, the system produced more false positives for Black defendants (higher risk labels that didn't lead to rearrest) and more false negatives for white defendants. The lecture explains that because prevalence differs across groups, at least one of false positive rate, false negative rate, or positive predictive value must shift, so "all metrics equal" is impossible.
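The impossibility is plain confusion-matrix arithmetic, sketched below with made-up numbers rather than ProPublica's data. Holding calibration (positive predictive value) and the false negative rate equal across two groups with different base rates forces their false positive rates apart.

```python
# Illustrative arithmetic for the impossibility result; the numbers are
# invented, not ProPublica's data. With per-group prevalence p, the
# confusion-matrix identity PPV = p(1-FNR) / (p(1-FNR) + (1-p)FPR)
# links calibration (PPV) to error rates. Hold PPV and FNR equal across
# groups and solve for the FPR each group is then forced to have.

def implied_fpr(ppv: float, fnr: float, prevalence: float) -> float:
    """Solve PPV = p(1-FNR) / (p(1-FNR) + (1-p)FPR) for FPR."""
    true_pos = prevalence * (1 - fnr)
    return true_pos * (1 - ppv) / (ppv * (1 - prevalence))

ppv, fnr = 0.6, 0.3  # held equal across groups ("calibrated" scores)
for group, prev in [("group A", 0.5), ("group B", 0.3)]:
    print(f"{group}: prevalence {prev:.1f} -> implied FPR {implied_fpr(ppv, fnr, prev):.3f}")
# group A: prevalence 0.5 -> implied FPR 0.467
# group B: prevalence 0.3 -> implied FPR 0.200
```

The higher-prevalence group necessarily absorbs the higher false positive rate, which mirrors the pattern ProPublica reported.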

What does humility mean in this ethics framework, and how should it change engineering behavior?

Humility is treated as the “appropriate response” when teams don’t know what they want, how to measure it, or how to weigh trade-offs among stakeholders. The lecture argues engineers often prefer optimizing single numbers, but ethical dilemmas require deciding among competing values that are hard to quantify. Humility means adopting a student mindset: asking for help, seeking domain experts, and recognizing that even well-intentioned interventions can be dangerous. It also supports a user-oriented process—listening to users and incorporating feedback—so decisions are revisited rather than assumed correct.

How do dark patterns and growth hacking create ethical risk even when they boost short-term metrics?

Dark patterns are deceptive or user-hostile design tactics that manipulate user choices while still appearing functional. The lecture gives examples like misleading countdown timers and “people viewing this item” counters, and it describes how ad labeling in search results became less transparent over time. Growth hacking is framed as especially risky when it uses deception to drive rapid growth: it can increase sign-ups or revenue in the short term while eroding long-term goodwill and customer value. The ethical warning is that ML-driven A/B testing can amplify this problem if the metric optimized (e.g., opt-in rates) correlates with deceptive variants.
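A toy A/B simulation (all numbers hypothetical) shows the amplification mechanism: a deceptive variant wins on the optimized short-term metric even though it leaves fewer retained users than the honest one.

```python
import numpy as np

# Toy sketch of metric-driven A/B testing selecting a dark pattern. The
# "deceptive" variant has a higher opt-in rate but much lower retention;
# the test only measures opt-ins, so it crowns the deceptive variant.
rng = np.random.default_rng(7)
n = 10_000  # users per arm

variants = {
    "honest":    {"opt_in": 0.10, "retention": 0.80},
    "deceptive": {"opt_in": 0.18, "retention": 0.40},
}

results = {}
for name, v in variants.items():
    opted_in = rng.random(n) < v["opt_in"]
    retained = opted_in & (rng.random(n) < v["retention"])
    results[name] = (opted_in.mean(), retained.mean())
    print(f"{name:10s}: opt-in {opted_in.mean():.3f}, retained {retained.mean():.3f}")

winner = max(results, key=lambda k: results[k][0])  # decides on opt-in alone
print(f"A/B winner on opt-in metric: {winner}")      # 'deceptive'
```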

Why does the lecture argue that sometimes the ethical answer is “don’t build it,” not “fix it”?

Because some systems are built on proxies that reflect biased institutions or cannot be made accountable. In the COMPAS case, the model is proprietary and uninterpretable, and the lecture notes that an accuracy around 65% is not clearly better than simpler baselines. More importantly, rearrest is not the same as recidivism or actual criminal behavior; it depends on policing patterns and officer decisions, which are themselves biased. The lecture concludes that when the measurement target is ethically compromised and the system lacks transparency, the right question becomes whether the system should exist at all.

What lessons does medicine offer to machine learning ethics?

Medicine is presented as having stronger professional ethics culture and more mature standards for evaluating interventions. The lecture cites a “scary and embarrassing” finding: much COVID-19 ML research produced models that were insufficiently documented, lacked best-practice development, and failed external validation. It contrasts medicine’s emphasis on “do no harm” with tech’s “move fast and break things.” It then highlights emerging ML-in-medicine standards—SPIRIT-AI and CONSORT-AI for clinical trial design/reporting—and auditing frameworks emphasizing failure modes, adversarial testing, and rigorous artifacts. The takeaway is that ethics can be built into research and deployment workflows, not just debated.

Review Questions

  1. Which ethical tensions—alignment, trade-offs, and humility—show up in the proxy problem and how would you detect them in a new ML product?
  2. In the COMPAS example, how can calibration across groups still produce unequal harm, and what does that imply about choosing fairness metrics?
  3. What does “accountability” mean for ML systems when interpretability methods are unreliable or easily fooled?

Key Points

  1. Ethical disputes in tech often come from optimizing measurable proxies that don’t track the real objective, creating alignment failures.
  2. Stakeholder trade-offs are unavoidable in ethical decisions, but engineers often lack the expertise to quantify human values and acceptable losses.
  3. Humility should be treated as an engineering requirement: seek domain experts, revisit assumptions, and incorporate user feedback rather than assuming correctness.
  4. Public trust declines when companies repeatedly deploy manipulative or deceptive practices, so professional ethics needs to be treated as part of engineering culture.
  5. In machine learning, fairness is not a single target; different fairness definitions can conflict, so “equalizing one metric” may still increase harm for a subgroup.
  6. Some high-stakes ML systems should be questioned at the “should this be built at all?” stage, especially when targets are biased proxies and systems are proprietary or non-accountable.
  7. Medicine offers concrete models for responsible ML through clinical-trial standards, auditing frameworks, and systematic failure-mode analysis.

Highlights

The lecture frames ethics as a practical vocabulary: the meaning of “ethics” is revealed by how people use it to praise or criticize specific technologies in concrete cases.
COMPAS shows how fairness can fail even with calibration: equalizing predicted rearrest probabilities across groups can still yield more false positives for Black defendants due to prevalence differences.
Dark patterns and growth hacking can be ethically harmful because they boost short-term metrics while degrading long-term user trust and goodwill.
Machine learning ethics intensifies because models are probabilistic and wrong in ways that can directly affect human lives, making fairness, accountability, and data governance central questions.
Medicine’s COVID-19 ML “fiasco” is used to argue that rigorous external validation, documentation, and auditing are essential for trustworthy ML in high-stakes domains.
