Real-World Examples
1. Medical Diagnosis Mistakes
- Shortcut Learning in AI Models
- Researchers at the University of Washington found that some AI tools for diagnosing illnesses such as COVID-19 relied on spurious cues, like patient positioning or stray markers in the training data (https://healthcare-in-europe.com/en/news/ai-shortcuts-could-lead-to-misdiagnosis-of.html).
- These “shortcuts” can look accurate on familiar data, yet they produce incorrect diagnoses and, in turn, the wrong treatments (a minimal detection sketch follows this example).
- Why It Matters: When AI relies on the wrong information, patients can receive the wrong surgery or medication, putting their health at risk.
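The sketch below illustrates this failure mode in miniature; it is not the study’s method. It trains a toy classifier on synthetic data that includes a hypothetical spurious “marker” feature, then shuffles that feature at test time. If accuracy collapses, the model was leaning on the shortcut rather than the real signal.

```python
# A minimal shortcut-learning check on synthetic data (all feature
# names here are hypothetical illustrations, not real clinical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

y = rng.integers(0, 2, n)              # 1 = "diseased" in this toy setup
clinical = y + rng.normal(0, 2.0, n)   # weak genuine signal
marker = y + rng.normal(0, 0.1, n)     # spurious, near-perfect shortcut
X = np.column_stack([clinical, marker])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("accuracy with marker intact:", model.score(X_te, y_te))

# Stress test: shuffle the marker column at evaluation time. If accuracy
# collapses, the model relied on the shortcut, not the clinical signal.
X_shuf = X_te.copy()
X_shuf[:, 1] = rng.permutation(X_shuf[:, 1])
print("accuracy with marker shuffled:", model.score(X_shuf, y_te))
```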
2. Law Enforcement and Legal Judgments
- Bias in the COMPAS Algorithm
- COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is used in U.S. courts to predict whether a defendant is likely to commit another crime.
- A 2016 ProPublica investigation found that Black defendants who did not reoffend were labeled “high risk” nearly twice as often as White defendants, while White defendants who did reoffend were more often labeled “low risk” (the sketch after this example shows how such error-rate gaps can be measured).
- Why It Matters: If a tool is unfair, it can lead to biased jail sentences or parole decisions.
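One concrete way to surface this kind of disparity is to compare error rates across groups. The sketch below runs a ProPublica-style false positive rate audit on deliberately biased toy data; the groups, scores, and numbers are made up for illustration and are not drawn from COMPAS.

```python
# A minimal group error-rate audit on synthetic, deliberately biased data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT reoffend but were labeled high risk."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(1)
n = 1000
y_true = rng.integers(0, 2, n)        # 1 = actually reoffended
group = rng.choice(["A", "B"], n)     # hypothetical demographic groups

# Toy predictions, deliberately skewed so group A is over-flagged.
bias = np.where(group == "A", 0.3, 0.0)
y_pred = (rng.random(n) + bias > 0.7).astype(int)   # 1 = labeled "high risk"

for g in ("A", "B"):
    m = group == g
    print(f"group {g}: false positive rate = "
          f"{false_positive_rate(y_true[m], y_pred[m]):.2f}")
# A persistent gap between groups is the kind of disparity the
# investigation reported.
```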
Learning Activities
A. Case Discussions
- Activity: Talk in groups about what went wrong in these AI examples.
- Questions:
- What caused the errors?
- How could we prevent them?
- Why is human judgment still important?
B. Risk Classification Chart
- What to Do:
- Look at different AI use cases (in healthcare, law, or education).
- Decide if they are low, medium, or high risk.
- Suggest ways to reduce those risks (like checking for bias, adding human reviewers, or following stricter rules). A simple data scaffold for this chart appears after this list.
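If it helps to keep the chart machine-readable, one simple scaffold is a list of records like the hypothetical sketch below; the use cases, risk levels, and mitigations shown are placeholders to replace with your own.

```python
# Hypothetical risk classification chart: each row pairs a use case
# with a risk level and candidate mitigations.
risk_chart = [
    {"use_case": "AI triage of radiology scans", "domain": "healthcare",
     "risk": "high",
     "mitigations": ["bias audit", "radiologist sign-off"]},
    {"use_case": "recidivism risk scoring", "domain": "law",
     "risk": "high",
     "mitigations": ["error-rate audit by group", "judicial review"]},
    {"use_case": "grammar suggestions for essays", "domain": "education",
     "risk": "low",
     "mitigations": ["student opt-out"]},
]

for row in risk_chart:
    mitigations = ", ".join(row["mitigations"])
    print(f"[{row['risk'].upper():>4}] {row['use_case']} "
          f"({row['domain']}): {mitigations}")
```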
High-Risk AI Use Cases and Risk Mitigation
Risk Classification Chart (Detailed Activity)
Prompt:
“Find ‘high-risk’ AI use cases in different industries. Sort them into low, medium, or high risk, and then think about ways to make them safer.”
Example Questions:
- What makes this use case high risk?
- Could it affect people’s lives, freedoms, or rights?
- Could it impact a lot of people at once?
- How can the risk be reduced?
- Could we run bias audits on the data? If so, how?
- Should there be more human oversight, like a review panel? What should they do? (A minimal routing sketch appears after these questions.)
- Would transparency rules help people trust the AI more? How should those rules be decided?
- Who should manage these risks? And how?
- Developers who create the AI?
- Regulators who make the rules?
- Users who need to understand how the AI works?
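To make the human-oversight question concrete, the sketch below shows one common pattern: automated decisions are issued only when the model is confident, and everything else is escalated to a review panel. The threshold and names are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical human-in-the-loop routing: low-confidence predictions
# are escalated to people instead of being acted on automatically.
REVIEW_THRESHOLD = 0.8  # assumed cutoff; below this, a person decides

def route_decision(p_high_risk: float) -> str:
    """Return an automated label, or escalate to the review panel."""
    confidence = max(p_high_risk, 1 - p_high_risk)
    if confidence < REVIEW_THRESHOLD:
        return "escalate to human review panel"
    return "high risk" if p_high_risk >= 0.5 else "low risk"

for p in (0.95, 0.55, 0.10):
    print(f"p(high risk) = {p:.2f} -> {route_decision(p)}")
```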
Why This Matters
High-risk AI systems can change lives in major ways. By learning about these problems and thinking about solutions, you can help ensure AI is used safely and fairly. Remember: humans are still responsible for final decisions, especially when people’s health, freedom, or well-being is at stake.