The AI revolution will be African.
The first comprehensive, human rights-based framework for ethical AI development in Africa—by Africans, for the world.
For policymakers: Build AI regulations that protect people and enable innovation.
For tech builders: Build AI with human rights at the core, not as add-ons.
For advocacy & civil society: Hold AI accountable with concrete frameworks, not abstract principles.
For too long, imported systems have shaped Africa’s technological future. The Toolbox changes that, putting African communities at the center of innovation.
Built WITH African experts, FOR African contexts, BY African communities.
Case studies spanning Uganda, Kenya, Nigeria, and Ethiopia
10+ African researchers
Not theory. Not abstract. A step-by-step framework you can use Monday morning. 6 stages. Checklists. Real examples.
Integrates into existing workflows
Works for startups AND governments
Not vague ethics. Not abstract principles. Concrete human rights law with accountability & teeth.
Aligned with the EU AI Act & international regulations
From Malaria to Markets: 6 African AI Success Stories
How Makerere AI Lab reduced diagnostic time from days to minutes using 3D-printed adapters and AI trained on local data, built with health workers as design partners.
How women pepper farmers in Nigeria co-designed real-time pest detection AI that solved crop mysteries—prioritizing their needs, not researcher assumptions.
6 Stages to build AI that actually serves people.
Start with the real problem—not the tech solution. Involve affected communities from day one. They’re experts in their own lives.
Makerere’s malaria AI runs offline because rural clinics have intermittent power and limited bandwidth. Context shapes every technical choice, not the other way around.
When Kenyan grandmothers shared traditional stories in Dholuo, they retained sovereignty over their linguistic heritage. Community-based licensing means data isn’t extracted, it’s partnered.
Kenyan Sign Language translation preserves complete language systems with regional variations and cultural nuances. The Deaf community evaluated every technical decision, ensuring technology empowers rather than marginalizes.
Rural health workers tested malaria diagnostics in their actual clinics, not labs. Their feedback reshaped interfaces and decision transparency before a single deployment.
Five years into KenCorpus, systems still evolve with communities. Speech recognition improves with diverse speakers. Languages change and the technology adapts, because the partnership is ongoing.
Join us in shaping the future of AI in Africa by subscribing to our circle community.
A practical framework for building AI with human rights at the core. Six stages from objective to deployment, grounded in real African case studies.
Researchers building AI systems, policymakers developing regulations, tech companies committed to responsible practice, and civil society organizations holding AI accountable.
Ethics are situational and abstract. Human rights are concrete international law with accountability mechanisms. They provide a universal foundation that works across contexts.
It integrates the Human Rights Impact Assessment framework from the Alan Turing Institute, which aligns with EU AI Act requirements. Using the Toolbox throughout development makes compliance easier.
Built in Africa, applicable globally. The methodology works anywhere communities need to be centered in AI development. Other regions are already adapting it.
Six-stage framework, reflection questions at each stage, formal assessment documentation templates, six African case studies, and access to online courses and community support.
Free. Download it, use it, share it.
The AI & Equality Human Rights Initiative in collaboration with the African Centre for Technology Studies, based on methodology developed by Women At The Table, in collaboration with EPFL and the UN Office of the High Commissioner for Human Rights.
Start at Stage 1 with your next AI project. Use the reflection questions for team discussions. Complete formal assessments at key milestones. Engage communities throughout, not just at the end.
It’s grounded in practice, not theory. Every stage includes real examples from African researchers. It integrates human rights law, not abstract principles. And it’s designed for communities to have genuine agency, not tokenistic consultation.
What makes AI truly community-centered? sensors.AFRICA's 5 lessons reveal why data ownership and genuine participation matter more than technology alone.
What happens when you design environmental data systems with marginalized communities instead of importing solutions? sensors.AFRICA shows the way.
Five essential questions ensure AI health deployments in Africa are equitable, community-driven, and ethically grounded.