Safe and Responsible Use

AI is transforming education: chatbots, personalized learning platforms, and multimedia generators now extend learning well beyond the traditional classroom. These tools are convenient, but they also carry ethical risks such as copyright infringement. This section addresses those risks through real-world examples, checklists, and practical guidelines for responsible AI use.

Basic Concepts for Ethical AI Use

Why Is Ethical Use Important?

The Need for Self-Awareness and Habit Formation

Specific AI Tool Use Cases and Ethical Principles

  • AI-Generated Text
  • AI-Generated Multimedia
  • Recommendation Algorithms

Using Checklists for Habitual Compliance

Quiz

Check your understanding of applying AI Ethics

Human-Led AI Lifecycle

Discussion about Human Responsibility and Human Rights

Misuse of AI in High-Risk Decision-Making

Real-World Examples

1. Medical Diagnosis Mistakes

  • Shortcut Learning in AI Models
    • Researchers at the University of Washington found that some AI tools for diagnosing illnesses such as COVID-19 relied on unusual clues, like a patient’s posture or incidental markers in the data (https://healthcare-in-europe.com/en/news/ai-shortcuts-could-lead-to-misdiagnosis-of.html).
    • These “shortcuts” led to incorrect diagnoses and, in turn, the wrong treatments.
    • Why It Matters: When AI relies on the wrong information, patients can receive the wrong surgery or medication, putting their health at risk. (A toy demonstration of this failure mode appears below.)
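
This failure mode, often called "shortcut learning," is easy to reproduce on synthetic data. The sketch below is a toy illustration, not the Washington team's actual setup: a scikit-learn classifier is trained on data where a spurious "marker" feature tracks the diagnosis almost perfectly during training but is pure noise at deployment, so the model leans on the shortcut and its accuracy collapses to near chance.

```python
# Toy demonstration of shortcut learning (illustrative, synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Training data: one weak, genuinely clinical feature and one spurious
# "marker" that happens to track the label almost perfectly.
y = rng.integers(0, 2, n)
clinical = y + rng.normal(0, 1.5, n)   # noisy real symptom (weak signal)
marker = y + rng.normal(0, 0.1, n)     # spurious shortcut (near-perfect in training)
X_train = np.column_stack([clinical, marker])

model = LogisticRegression().fit(X_train, y)

# Deployment data: the real symptom still works, but the marker is now noise.
y_test = rng.integers(0, 2, n)
clinical_t = y_test + rng.normal(0, 1.5, n)
marker_t = rng.normal(0, 0.1, n)       # marker no longer tracks the disease
X_test = np.column_stack([clinical_t, marker_t])

print("training accuracy:  ", round(model.score(X_train, y), 2))
print("deployment accuracy:", round(model.score(X_test, y_test), 2))  # near chance
```

Because the marker separates the classes almost perfectly during training, the model puts most of its weight there and ignores the clinical feature; once the marker stops correlating with the disease, performance drops sharply.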

2. Law Enforcement and Legal Judgments

  • Bias in the COMPAS Algorithm
    • COMPAS is a tool used in U.S. courts to predict whether a defendant is likely to reoffend.
    • Investigations (notably ProPublica’s 2016 analysis) showed it labeled Black defendants “high risk” too often and White defendants “low risk” too often (a toy version of this kind of audit is sketched below).
    • Why It Matters: If a risk tool is unfair, it can lead to biased jail sentences or parole decisions.
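
The disparity investigators reported can be expressed as a simple metric: the false positive rate, computed separately for each group, i.e. how often people who did not reoffend were still labeled "high risk." The sketch below runs that audit on made-up data with a deliberately biased toy score; nothing here comes from the real COMPAS system.

```python
# Toy group-level fairness audit: compare false positive rates across groups.
# All data is synthetic and the bias is injected deliberately for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["A", "B"], n)                # hypothetical demographic groups
reoffended = rng.integers(0, 2, n).astype(bool)  # ground-truth outcome

# A biased toy risk score: group B gets an extra bump regardless of outcome.
score = reoffended * 0.5 + (group == "B") * 0.3 + rng.normal(0, 0.3, n)
labeled_high_risk = score > 0.5

for g in ("A", "B"):
    mask = (group == g) & ~reoffended       # people who did NOT reoffend...
    fpr = labeled_high_risk[mask].mean()    # ...but were still flagged high risk
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

Running this prints a much higher false positive rate for group B, which is exactly the pattern an audit of this kind is designed to surface.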

Learning Activities

A. Case Discussions

  • Activity: Talk in groups about what went wrong in these AI examples.
  • Questions:
    1. What caused the errors?
    2. How could we prevent them?
    3. Why is human judgment still important?

B. Risk Classification Chart

  • What to Do:
    1. Look at different AI use cases (in healthcare, law, or education).
    2. Decide if they are low, medium, or high risk.
    3. Suggest ways to reduce those risks (such as checking for bias, adding human reviewers, or following stricter rules); one way to record the chart is sketched below.
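
One way to start the activity is to record the chart as a small data structure, so that each use case keeps its risk level and mitigations attached. The entries below are illustrative examples, not an official classification.

```python
# A minimal sketch of a risk classification chart as a data structure.
# Use cases, risk levels, and mitigations here are examples, not a standard.
risk_chart = {
    "AI triage of emergency-room patients": {
        "domain": "healthcare",
        "risk": "high",
        "mitigations": ["bias audit of training data",
                        "physician signs off on every triage decision"],
    },
    "AI-suggested sentencing ranges": {
        "domain": "law",
        "risk": "high",
        "mitigations": ["judge retains the final decision",
                        "publish the factors the model uses"],
    },
    "AI-recommended practice exercises": {
        "domain": "education",
        "risk": "low",
        "mitigations": ["let students opt out",
                        "teacher reviews recommendations periodically"],
    },
}

# Print the chart grouped by risk level, highest first.
for level in ("high", "medium", "low"):
    for use_case, entry in risk_chart.items():
        if entry["risk"] == level:
            print(f"[{level.upper()}] {use_case} ({entry['domain']}): "
                  + "; ".join(entry["mitigations"]))
```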

High-Risk AI Use Cases and Risk Mitigation

Risk Classification Chart (Detailed Activity)

Prompt:
“Find ‘high-risk’ AI use cases in different industries. Sort them into low, medium, or high risk, and then think about ways to make them safer.”


Example Questions:

  1. What makes this use case high risk?

    • Could it affect people’s lives, freedoms, or rights?
    • Could it impact a lot of people at once?
  2. How can the risk be reduced?

    • Could we run bias audits on the data? If so, how?
    • Should there be more human oversight, like a review panel? What should it do? (One way to wire oversight into a system is sketched after this list.)
    • Would transparency rules help people trust the AI more? How should those rules be determined?
  3. Who should manage these risks? And how?

    • Developers who create the AI?
    • Regulators who make the rules?
    • Users who need to understand how the AI works?
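
To make question 2 concrete, "more human oversight" can be wired into a system as a routing rule: the AI decides on its own only when the stakes are low and its confidence is high, and everything else goes to a human. The sketch below is a minimal illustration; the thresholds, categories, and case names are hypothetical, not drawn from any real regulation or system.

```python
# A minimal sketch of human oversight as a routing rule (illustrative only).
from dataclasses import dataclass

@dataclass
class Case:
    description: str
    risk_level: str        # "low", "medium", or "high"
    model_confidence: float

def route(case: Case) -> str:
    """Decide who handles this case: the AI alone, or a human."""
    if case.risk_level == "high":
        return "human review panel"   # never auto-decide high-risk cases
    if case.model_confidence < 0.9:
        return "human reviewer"       # uncertain model output gets checked
    return "automated decision"       # low stakes AND high confidence only

# Hypothetical examples, one per risk level.
cases = [
    Case("loan pre-screening", "medium", 0.97),
    Case("parole recommendation", "high", 0.99),
    Case("spam filtering", "low", 0.95),
]
for c in cases:
    print(f"{c.description}: {route(c)}")
```

Note the design choice: high-risk cases go to humans no matter how confident the model is, which matches the principle below that humans remain responsible for final decisions.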

Why This Matters

High-risk AI systems can change lives in major ways. By learning about these problems and thinking about solutions, you can help ensure AI is used safely and fairly. Remember: humans are still responsible for final decisions, especially when people’s health, freedom, or well-being is at stake.
