Building AI for the Global South

Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may lead to the false arrest of Black men, disproportionately reject female job candidates, or target people who identify as queer. In India, those impacts can further injure marginalized populations like Muslim minority groups or people oppressed by the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to India or other countries in the Global South, where assessing algorithmic fairness requires an understanding of local social structures, power dynamics, and the legacy of colonialism.

That’s the argument behind “De-centering Algorithmic Power: Towards Algorithmic Fairness in India,” a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference, which is taking place this week. Other works that seek to move beyond a Western-centric focus include Shinto or Buddhism-based frameworks for AI design and an approach to AI governance based on the African philosophy of Ubuntu.

+INFO: VentureBeat
