January 11th, 2023. Issue #6
TL;DR
Harvard Business Review suggests that companies should hold structured conversations about their values and purpose regarding AI: articulate transparent values and principles, create a cross-functional team to weigh ethical implications, and establish metrics to track progress on implementing those values.
Think tank
Defining your ethical standard for AI
Companies need to identify the ethical risks specific to their industry and organization and decide where they stand on them.
Ensure that every product goes through an ethical-risk due-diligence process before deployment, ideally beginning at the earliest stages of product design.
Identifying the gaps between where you are now and where you want to be
Having productive conversations about AI ethical risk management goals requires keeping an eye on what is technologically feasible for your organization.
Questions to consider include: What is the risk? How does software or quantitative analysis help? What gaps remain? What kinds of qualitative assessments are needed, and by whom, to fill those gaps?
Understanding the complex sources of the problem
The conversation around discriminatory algorithms should dig into the sources of the problem and how each connects to different risk-mitigation strategies.
The problem is not just that data sets are biased: discriminatory outputs can arise from multiple sources, such as skewed training labels, proxy features that stand in for protected attributes, and choices made in how the problem itself is framed.
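To make this concrete, below is a minimal Python sketch (our illustration, not from the article) of one common quantitative check: a demographic-parity comparison of selection rates across groups. The group labels, decisions, and thresholds are hypothetical. The point is that such a metric can flag a disparity without telling you which source produced it.

```python
# A minimal sketch of one quantitative fairness check: comparing a model's
# positive-decision rates across groups (demographic parity). All data here
# is hypothetical and purely illustrative.

from collections import defaultdict

def selection_rates(groups, decisions):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs for two groups.
groups    = ["a", "a", "a", "b", "b", "b", "b"]
decisions = [1,   1,   0,   1,   0,   0,   0]

rates = selection_rates(groups, decisions)
print(rates, f"parity ratio = {parity_ratio(rates):.2f}")
```

A ratio well below 1.0 flags a disparity worth investigating, but the metric alone cannot say whether the cause is biased data, proxy features, or problem framing. That diagnosis still requires the qualitative review the article calls for.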
Highlights & Events
Event | People & Planet in the Information Era
Article | Instilling moral value alignment - reinforcement learning
Event | MozFest 2023
Podcast | Trust in Tech: Integrity Institute
Insights | Big Book of Sharing Economy Insights