Safe and Responsible Use

AI is transforming education, extending learning beyond the traditional classroom through tools such as chatbots, personalized learning platforms, and multimedia generators. While these technologies offer convenience, they also carry ethical risks such as copyright infringement, misinformation, and privacy violations. This section addresses those risks through real-world examples, checklists, and practical guidelines for responsible AI use.

Basic Concepts for Ethical AI Use

Why Is Ethical Use Important? The Need for Self-Awareness and Habit Formation

Specific AI Tool Use Cases and Ethical Principles

  • AI-Generated Text
  • AI-Generated Multimedia
  • Recommendation Algorithms

Using Checklists for Habitual Compliance

Quiz

Check your understanding of applying AI ethics.

Human-Led AI Lifecycle

Discussion of Human Responsibility and Human Rights

General AI Safety Risks

  1. Inaccurate Information

    • What It Means: AI might provide answers that are not factually correct, leading to misunderstandings or harmful consequences.
    • Example: A chatbot giving wrong medical advice, causing a person to delay seeking proper treatment.
  2. Algorithmic Bias

    • What It Means: AI could treat certain people unfairly if the data it learned from is biased or if the design isn’t balanced.
    • Example: A facial recognition tool that works better on some skin tones than others.
  3. Cybersecurity Threats

    • What It Means: Attackers could break into AI systems, steal data, or tamper with how a model behaves.
    • Example: Someone hacks into a chatbot’s database and leaks users’ personal messages.
  4. Privacy Violations / Personal Information Leakage

    • What It Means: AI might reveal or misuse personal information if not protected properly. Sometimes, data is collected without users’ permission.
    • Example: A fitness app that collects and shares people’s health data without telling them, or a voice assistant recording conversations without user consent.
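The "Algorithmic Bias" risk above can be made concrete with a small audit: compare a model's accuracy across demographic groups and flag large gaps. This is only an illustrative sketch with synthetic data; the group labels, records, and the 0.10 review threshold are all assumptions, not part of any real system.

```python
# Toy bias audit: measure per-group accuracy of a model's predictions.
# All data is synthetic and the threshold is an illustrative assumption.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def disparity(accuracies):
    """Gap between the best- and worst-served groups."""
    return max(accuracies.values()) - min(accuracies.values())

# Synthetic predictions: the model does worse on group "B".
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1),
]
acc = accuracy_by_group(records)   # {"A": 1.0, "B": 0.5}
gap = disparity(acc)
if gap > 0.10:  # assumed review threshold
    print(f"Disparity {gap:.2f} exceeds threshold; review the model")
```

A checklist item like "compare error rates across user groups before deployment" turns this habit into routine practice rather than an afterthought.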