Human-Centered Design Principles

Ethics, Laws and Regulations

What It Means: Many countries have rules to make sure AI is used fairly and responsibly.

 

Example: A law might say that an online app must ask for permission before collecting personal data. These rules help protect everyone’s rights.
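
To make this concrete, here is a minimal sketch of such a permission check in Python (the user record and function names are hypothetical, not from any real app):

    # Minimal sketch: only collect data after the user has given permission.
    # The user record and collect_data function are hypothetical.

    def collect_data(user, data):
        """Store personal data only if the user has explicitly consented."""
        if not user.get("gave_consent", False):
            raise PermissionError("Cannot collect personal data without consent.")
        user.setdefault("collected", []).append(data)

    user = {"name": "Sam", "gave_consent": True}
    collect_data(user, "favorite subject: science")   # allowed: consent was given
    print(user["collected"])                          # ['favorite subject: science']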

 

Fairness and Ethics:
AI should treat everyone fairly and act ethically.
Example: An AI that helps choose the best candidates for student boards should not show bias, such as selecting students from one race more often than others. However, is it fair to consider age and attendance records, since they reveal common traits that make a successful candidate more likely, which in turn benefits everyone? The stakeholders (students, parents, and teachers) should continuously ask what is fair and in the best interest of the people involved.
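
One simple way to look for this kind of bias is to compare selection rates across groups. A minimal Python sketch, using made-up candidate data:

    # Minimal sketch of a bias check: compare how often candidates from each
    # group are selected. A large gap between groups suggests possible bias.
    # The candidate data below is made up for illustration.

    candidates = [
        {"group": "A", "selected": True},
        {"group": "A", "selected": False},
        {"group": "B", "selected": True},
        {"group": "B", "selected": True},
    ]

    totals = {}
    for c in candidates:
        seen, picked = totals.get(c["group"], (0, 0))
        totals[c["group"]] = (seen + 1, picked + int(c["selected"]))

    for group, (seen, picked) in totals.items():
        print(f"Group {group}: selected {picked} of {seen} ({picked / seen:.0%})")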

 

Privacy:
AI must keep personal information safe.
Example: A smart helper robot should never share your secrets or personal details with others.
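
A minimal sketch of one way to enforce this, assuming a hypothetical profile format: the robot strips private fields before sharing anything.

    # Minimal sketch: remove private fields before the robot shares anything.
    # The field names here are hypothetical.

    PRIVATE_FIELDS = {"home_address", "phone", "secrets"}

    def safe_to_share(profile):
        """Return a copy of the profile with private fields removed."""
        return {key: value for key, value in profile.items()
                if key not in PRIVATE_FIELDS}

    profile = {"name": "Sam", "phone": "555-0100", "favorite_color": "green"}
    print(safe_to_share(profile))   # {'name': 'Sam', 'favorite_color': 'green'}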

 

Safety:
AI should always protect people.
Example: A school robot should be designed so it never accidentally hurts someone.
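
A minimal sketch of one such safeguard, with a made-up distance threshold and hypothetical sensor readings:

    # Minimal sketch of a safety guard: the robot stops whenever a person is
    # too close. The distance threshold and readings are made-up values.

    SAFE_DISTANCE_METERS = 1.0

    def next_action(distance_to_person):
        """Stop if a person is within the safe distance; otherwise proceed."""
        if distance_to_person < SAFE_DISTANCE_METERS:
            return "stop"
        return "keep moving"

    print(next_action(0.5))   # stop
    print(next_action(3.0))   # keep moving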

 

Transparency:
It should be clear how AI makes decisions.
Example: If an app suggests a book, it should explain why it thinks you will like it.
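
A minimal sketch of what that could look like, with a hypothetical recommend function and made-up reasons:

    # Minimal sketch: return the suggestion together with the reasons for it,
    # so the app can show the user why the book was picked. Data is made up.

    def recommend(book, reasons):
        return {"suggestion": book, "because": reasons}

    result = recommend("The Wild Robot",
                       ["you liked two other robot stories",
                        "readers your age rated it highly"])
    print(f"We suggest {result['suggestion']} because "
          + " and ".join(result["because"]))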

 

User Control:
People should be able to decide how AI works for them.
Example: A learning app should let you choose the subjects you want to study.
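
A minimal sketch of that kind of control, using a hypothetical settings dictionary:

    # Minimal sketch: the app only covers the subjects the user turned on.
    # The settings dictionary is a hypothetical example.

    settings = {"math": True, "science": False, "history": True}

    def subjects_to_study(settings):
        """Return only the subjects the user has chosen."""
        return [subject for subject, chosen in settings.items() if chosen]

    print(subjects_to_study(settings))   # ['math', 'history']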

 

Monitoring and Maintenance

What It Means: AI needs regular checks and updates to stay safe and fair.

 

Example: A robot in the cafeteria should be tested often to make sure it sorts food and trash correctly, so any problems can be fixed right away.
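
A minimal sketch of such a routine check, with a hypothetical stand-in for the robot's sorter and made-up test items:

    # Minimal sketch of a routine check: run the sorter on labeled test items
    # and flag it for maintenance if accuracy drops. Everything here is a
    # made-up stand-in for the real robot.

    def sort_item(item):
        """Hypothetical stand-in for the robot's sorting logic."""
        return "trash" if item.endswith("wrapper") else "food"

    test_items = [("apple", "food"), ("candy wrapper", "trash"), ("banana", "food")]

    correct = sum(1 for item, label in test_items if sort_item(item) == label)
    accuracy = correct / len(test_items)
    print(f"Sorting accuracy: {accuracy:.0%}")
    if accuracy < 0.95:
        print("Accuracy below target: schedule maintenance.")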


