By: Jake Effoduh
Translating principles like explainability into practice globally requires a context-driven approach. These vignettes illustrate how AI can be made more trustworthy for users in the Global South through creative, context-rooted approaches to legibility.
This article explores how the effort to realize explainable AI could be enriched by subaltern propositions drawn from an African context. One proposition is the incorporation of human AI explainers, akin to griots or midwives, who can provide culturally contextualized and understandable explanations of the technology. Another is to model explainability as a generative exercise, enabling users to customize explanations to their language and receive communication in native dialects and familiar linguistic expressions. On this view, explainability can benefit not only individual understanding but communities as a whole, by recognizing human rights and related norms of privacy and collective identity.
