General AI Safety Risks
Inaccurate Information
- What It Means: AI can generate answers that sound confident but are factually wrong (often called "hallucination"), leading to misunderstandings or harmful consequences.
- Example: A chatbot giving wrong medical advice, causing a person to delay seeking proper treatment.
Algorithmic Bias
- What It Means: AI can treat certain groups of people unfairly if the data it learned from is biased or if the system's design doesn't account for everyone it affects.
- Example: A facial recognition tool that works better on some skin tones than others.
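One common way to surface this kind of bias is to compare outcome rates across groups. The sketch below is illustrative only (the data and function names are made up, not from any specific fairness toolkit); it computes per-group selection rates and a disparity ratio, a rough version of the "four-fifths rule" heuristic used in fairness audits:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome rate per group from (group, outcome) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit data: which applicants received a positive decision.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
# Disparity: ratio of the lowest to the highest group selection rate.
# Values well below 0.8 are often treated as a red flag worth investigating.
disparity = min(rates.values()) / max(rates.values())
```

A low ratio doesn't prove unfairness by itself, but it flags where a system deserves closer review.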
Cybersecurity Threats
- What It Means: Attackers could break into AI systems, steal training or user data, or tamper with how a model behaves.
- Example: Someone hacks into a chatbot’s database and leaks users’ personal messages.
Privacy Violations / Personal Information Leakage
- What It Means: AI might reveal or misuse personal information if not protected properly. Sometimes, data is collected without users’ permission.
- Example: A fitness app that collects and shares people’s health data without telling them, or a voice assistant recording conversations without user consent.
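One minimal safeguard against accidental leakage is redacting obvious identifiers before text is stored, logged, or shared. The sketch below uses simplified, assumed regex patterns for illustration; real systems rely on dedicated PII-detection tools that catch far more than email addresses and phone numbers:

```python
import re

# Simplified patterns for illustration; real PII detection is much broader.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace matched PII spans with a type tag before the text leaves the system."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact me at jane.doe@example.com or 555-123-4567."))
# prints: Contact me at [EMAIL] or [PHONE].
```

Redaction at the boundary (before logging or sharing) limits the damage when other protections fail.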