Building AI for the Global South



Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may lead to the false arrest of Black men, disproportionately reject female job candidates, or target people who identify as queer. In India, those impacts can further injure marginalized populations such as Muslim minorities or people oppressed under the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to India or other countries in the Global South, where fairness requires an understanding of local social structures, power dynamics, and the legacy of colonialism.

That’s the argument behind “De-centering Algorithmic Power: Towards Algorithmic Fairness in India,” a paper accepted for publication at the Conference on Fairness, Accountability, and Transparency (FAccT), which is taking place this week. Other works that seek to move beyond a Western-centric focus include Shinto- and Buddhism-based frameworks for AI design and an approach to AI governance rooted in the African philosophy of Ubuntu.

+INFO: VentureBeat
