Institutional AI Utilization Assessment
Analyzing AI Impact in Complex Decision-Making
- Educational Opportunity Profiling: Let’s review a real-world case in which AI algorithms may unfairly label certain student groups or limit their opportunities based on biased academic performance evaluations.
- A study published in AERA Open in July 2024 highlights AI bias in educational opportunities. The research uncovered potential racial biases in AI algorithms that higher education institutions use to predict student success. These biases resulted in unfair assessments of Black and Hispanic students, potentially affecting decisions related to admissions, budgeting, and student services. The study emphasizes the need for careful design and monitoring of AI systems to prevent the perpetuation of existing biases and to ensure equitable treatment of all student groups.
- Source: Study Uncovers Racial Bias in University Admissions and Decision-Making AI Algorithms
- Hiring Decisions: Analyze real-world (anonymized) cases of biases in AI-driven hiring processes, including gender, race, and educational background biases.
- Legal Challenges Against AI Screening: In a landmark decision, a federal judge in California allowed a class-action lawsuit against Workday to proceed. The lawsuit alleges that Workday’s AI-powered hiring software perpetuates existing biases, leading to unlawful discrimination against Black, older, and disabled candidates. This case may set crucial precedents about the use of AI in employment.
Developing Guidelines
- Pre-Implementation Impact Assessment: Establish institutional forms to assess ethical, social, and legal impacts before AI deployment.
- Test the AI system before deployment to verify that it adheres to these standards.
- Post-Implementation Monitoring: Regularly evaluate AI usage outcomes (e.g., acceptance/rejection rates, student performance changes) to identify potential biases.
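The monitoring step above can be sketched in code. The example below is a minimal, hypothetical illustration of one common disparate-impact screen, the "four-fifths rule" (a group's selection rate should be at least 80% of the most-favored group's rate); the group names and counts are invented for illustration and are not drawn from the studies cited earlier.

```python
# Hypothetical post-implementation monitoring sketch: flag groups whose
# acceptance rate falls below four-fifths of the highest group's rate.
# Group labels and counts are illustrative placeholders.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (accepted, total applicants)."""
    return {g: accepted / total for g, (accepted, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's rate (a common disparate-impact flag)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Illustrative monthly admissions snapshot: (accepted, total applicants)
snapshot = {
    "group_a": (120, 400),  # 30% acceptance rate
    "group_b": (100, 400),  # 25% acceptance rate
    "group_c": (60, 400),   # 15% acceptance rate -> below 0.8 * 30% = 24%
}

flagged = four_fifths_check(snapshot)
print(flagged)  # {'group_c': 0.15}
```

A flag from a screen like this is a trigger for human review, not proof of bias; institutions would typically pair it with deeper statistical analysis and documentation of any legitimate explanatory factors.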