Code and Crisis: Artificial Intelligence and Political Misinformation in East Africa

Artificial intelligence (AI) is reshaping how information is created, shared, and consumed around the world. In East Africa, however, the slow adoption of AI technologies, driven by infrastructural limitations, low investment in research, and evolving regulatory frameworks, has created a complex digital environment. Although this lag may be viewed as a buffer against rapid disruption, it also leaves the region increasingly vulnerable to AI-generated misinformation.

East Africa is still in the early stages of integrating AI into its institutions and services. This limited presence makes it difficult to detect and respond to manipulated content, which can quietly erode public trust and distort civic discourse.

The State of AI Adoption in East Africa

AI is gradually gaining traction in East Africa, with increasing experimentation across sectors such as agriculture, healthcare, education, and public service delivery. Countries such as Kenya, Rwanda, and Uganda have taken early steps toward AI integration. Kenya, for example, has seen AI applied to agricultural optimisation (including AI-driven pest detection and yield forecasting), chatbot-enabled health diagnostics, and smart logistics. Rwanda, meanwhile, has partnered with the World Economic Forum to establish a Centre for the Fourth Industrial Revolution focused on AI and emerging technologies.

However, there are significant regional disparities. While countries such as Kenya, Uganda, and Rwanda are actively building AI ecosystems and running pilot projects, others, such as Somalia, remain constrained by poor digital infrastructure. Progress in AI adoption correlates closely with broader digital readiness, resulting in uneven development trajectories across the region.

Barriers to Widespread AI Adoption

Despite rising interest in AI, several structural and systemic challenges continue to hinder its widespread adoption across East Africa. These barriers not only delay innovation but also weaken the region’s capacity to detect, counter, and regulate AI-generated misinformation.

  • Infrastructural Challenges: Limited access to stable electricity, cloud infrastructure, data centers, and high-speed internet not only hampers the deployment of AI technologies, but also slows down real-time fact-checking, content moderation, and the distribution of verified information.

  • Digital Skills Gap: The shortage of professionals trained in AI and data science means there are fewer locally grounded solutions to counter deepfakes, bots, and synthetic media. Without a skilled workforce, East Africa becomes more reliant on foreign tools, many of which are not tailored to local languages or contexts, making detection of misinformation less effective.

  • Regulatory Bottlenecks: The absence of clear regulations around data governance, algorithmic transparency, and AI ethics makes it difficult to hold perpetrators of AI-generated misinformation accountable. In this legal grey area, bad actors can exploit loopholes to manipulate information ecosystems with little fear of consequence.

  • Economic Constraints: Limited funding for local AI research and startups weakens the development of region-specific content moderation tools, particularly those that could address misinformation in African languages. As a result, global platforms dominate but often fail to detect harmful content in East African linguistic and cultural contexts.

Growth Potential

Despite current limitations, AI holds significant potential not just for economic growth but also for addressing misinformation in East Africa. AI can support the development of language technologies tailored to African languages, which is crucial for detecting false narratives in local dialects. With the right investment and infrastructure, AI could strengthen regional capacity to counter synthetic media and algorithmic manipulation. Ongoing efforts by actors such as the Smart Africa Alliance and emerging national AI strategies are promising, but they must integrate misinformation response into broader digital agendas.

The Misinformation Threat

AI poses a growing risk to electoral integrity in East Africa, particularly through its use in generating and spreading misinformation. Extremist and political actors are increasingly leveraging AI to produce misleading content at scale, ranging from manipulated narratives to identity-based disinformation.

AI can be exploited to undermine elections through:

  • Deepfakes imitating politicians or public figures.

  • Automated disinformation campaigns using bots and generative text (a simple detection sketch follows this list).

  • Micro-targeted misinformation, where AI delivers personalized falsehoods.

  • Language manipulation, generating false content in local dialects to enhance credibility.
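
To make the second vector above concrete, here is a minimal sketch, in Python, of one heuristic monitors commonly use against automated campaigns: flagging accounts that post near-identical messages at machine-like speed. The data shapes, thresholds, and account names are illustrative assumptions, not a production detector.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical post records (account, text, ISO timestamp); in practice these
# would come from a platform API or an election-monitoring dataset.
posts = [
    ("acct_1", "Candidate X rigged the vote!", "2022-08-09T10:00:00"),
    ("acct_1", "Candidate X rigged the vote!", "2022-08-09T10:00:20"),
    ("acct_1", "Candidate X rigged the vote!", "2022-08-09T10:00:41"),
    ("acct_2", "Turnout was high in Nairobi today.", "2022-08-09T11:30:00"),
]

def flag_bot_like_accounts(posts, min_posts=3, max_avg_gap_s=60.0, dup_ratio=0.5):
    """Flag accounts posting many near-identical messages in rapid bursts."""
    by_account = defaultdict(list)
    for account, text, ts in posts:
        by_account[account].append((datetime.fromisoformat(ts), text))

    flagged = []
    for account, items in by_account.items():
        if len(items) < min_posts:
            continue
        items.sort(key=lambda p: p[0])
        # Average seconds between consecutive posts: bots post unnaturally fast.
        gaps = [(b[0] - a[0]).total_seconds() for a, b in zip(items, items[1:])]
        avg_gap = sum(gaps) / len(gaps)
        # Share of posts whose (lowercased) text is an exact repeat.
        texts = [t.lower() for _, t in items]
        repeats = len(texts) - len(set(texts))
        if avg_gap <= max_avg_gap_s and repeats / len(texts) >= dup_ratio:
            flagged.append(account)
    return flagged

print(flag_bot_like_accounts(posts))  # ['acct_1'] under these example thresholds
```

Real detection systems combine many more signals (account age, network structure, coordinated timing across accounts), but even this toy version shows why under-resourced monitors struggle: each signal must be engineered, tuned, and re-tuned as tactics evolve.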

These technologies are already being used. For instance, ahead of Kenya’s 2022 general elections, AI-generated deepfake videos and doctored images targeting leading candidates circulated widely on Facebook and WhatsApp, spreading false narratives about voter fraud and ethnic favoritism.

East Africa faces structural and contextual vulnerabilities that heighten its exposure to these risks. The region has seen rapid growth in social media use across both urban and rural areas, yet general awareness of how AI technologies can manipulate content remains low. This imbalance creates fertile ground for false information to spread unchecked.

Additionally, the information landscape is becoming increasingly asymmetrical. Well-funded domestic actors, including political elites and external influencers, are beginning to deploy advanced AI tools to manipulate narratives. In contrast, local institutions such as electoral commissions and civil society organizations lack the technical capacity and resources to respond effectively. This dynamic creates an environment of asymmetric information warfare, where power lies in the hands of those with technological advantage.

The region’s linguistic diversity also presents a serious challenge in the digital information space. Misinformation crafted in widely spoken local languages such as Swahili, Amharic, Somali, or Luganda often bypasses AI-driven content moderation systems, which are trained primarily on English and other high-resource languages. As a result, false narratives can circulate within specific linguistic communities with minimal oversight.
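
The first stage of most moderation pipelines, language identification, illustrates this gap concretely. The sketch below is a minimal Python illustration (using the open-source langdetect library) of routing posts to language-specific review queues; the queue names and mapping are hypothetical. Notably, langdetect covers Swahili and Somali but not Amharic or Luganda, which is exactly the kind of low-resource blind spot described above.

```python
# pip install langdetect
from langdetect import detect, LangDetectException

# Hypothetical mapping from a detected language code to a moderation queue.
# Any language without a dedicated model falls back to human review.
QUEUES = {"en": "english_model", "sw": "swahili_model", "so": "somali_model"}

def route_for_moderation(text: str) -> str:
    """Return the moderation queue for a post, defaulting to human review."""
    try:
        lang = detect(text)
    except LangDetectException:
        return "human_review"  # empty or undecidable input
    # Low-resource languages (e.g. Luganda) are unsupported or misdetected,
    # so anything unmapped is routed to humans rather than silently dropped.
    return QUEUES.get(lang, "human_review")

print(route_for_moderation("Uchaguzi huu umeibiwa na matokeo yamebadilishwa!"))  # likely 'swahili_model'
print(route_for_moderation("The election results were fabricated."))             # likely 'english_model'
```

In practice the harder problem follows this step: even correctly identified Swahili or Somali content still needs a classifier actually trained on those languages, which is precisely where investment is thinnest.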

Initiatives such as PesaCheck, Africa Check, and Code for Africa have made important strides in countering false narratives, but their reach is still modest compared to the volume of AI-generated content being disseminated online.

Case Studies

In East Africa, Kenya stands out as a country with relatively high levels of AI adoption. However, this technological advancement has not shielded it from AI-fueled electoral misinformation. During the 2022 elections, AI-driven content, including deepfakes and algorithmically amplified disinformation, became a tool for manipulating public opinion. Extremist actors exploited generative AI to craft misleading narratives and fake endorsements, undermining trust in democratic institutions.

At the same time, organizations such as Code for Africa and Shujaa Inc have launched collaborative AI models such as MAPEMA to detect hate speech and monitor toxic digital discourse around elections. These efforts are promising but still challenged by the volume of content and the speed of misinformation spread.

Legislative responses are also emerging: Kenya is exploring frameworks to regulate AI use in elections. As the country prepares for the 2027 elections, regulatory reforms are being proposed to combat digital disinformation while preserving freedom of expression.

Tanzania presents an example of how artificial intelligence can be weaponised to reinforce structural vulnerabilities. In the lead-up to national elections, AI-generated deepfakes were circulated on social media to intimidate female leaders. One notable instance involved female candidates from Zitto Kabwe’s ACT-Wazalendo party, who were disproportionately targeted with manipulated content aimed at delegitimising their political participation. These digital attacks were designed to shame and exclude women from public life. The case shows how generative AI can amplify gendered violence and social marginalisation, especially in already fragile civic spaces.

Recommendations and Solutions

Addressing the challenges posed by AI-fueled misinformation in East Africa requires an approach that incorporates both immediate responses and sustainable, long-term strategies.

In the short term, one of the most urgent priorities is to enhance media literacy, not just around traditional misinformation but with a specific focus on identifying AI-generated content such as deepfakes and synthetic text. Community-level programs, integrated into both formal education and grassroots civic platforms, can empower citizens to better discern manipulated media.

Additionally, fact-checking initiatives must evolve to match the sophistication of AI-generated misinformation. A key priority is to enhance existing fact-checking platforms by integrating AI-powered detection tools capable of identifying synthetic text, deepfakes, and algorithmic amplification patterns. A notable example is the UNDP’s iVerify platform, which combines machine learning with human oversight to detect, verify, and counter false narratives during elections.
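
iVerify’s internals are not detailed here, so the following is only a generic sketch of the human-in-the-loop pattern the paragraph describes: an upstream model scores each claim, and confidence thresholds decide whether it is auto-cleared, queued for a human fact-checker, or escalated. The thresholds and the Claim structure are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    model_score: float  # estimated probability the claim is false (upstream model)

def triage(claim: Claim, auto_clear: float = 0.2, auto_flag: float = 0.9) -> str:
    """Route a claim by model confidence; humans handle the uncertain middle."""
    if claim.model_score < auto_clear:
        return "cleared"       # model is confident the claim is benign
    if claim.model_score > auto_flag:
        return "escalate"      # high-confidence falsehood: fast-track to editors
    return "human_review"      # uncertain: a fact-checker verifies manually

for claim in [Claim("Polling stations open at 6 a.m.", 0.05),
              Claim("Ballot papers were pre-marked.", 0.55),
              Claim("The election has been cancelled.", 0.97)]:
    print(triage(claim), "-", claim.text)
```

The design point is that the machine never issues a verdict alone: it compresses the firehose of content into a queue small enough for scarce human fact-checkers to handle.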

From a long-term perspective, sustainable change demands investment in homegrown AI expertise. Governments and regional bodies should fund research institutions, university AI labs, and local start-ups to foster an ecosystem that not only understands but innovates within the AI space. This would reduce reliance on imported technologies, many of which may be ill-suited to local realities.

Moreover, the continent must not passively import regulatory frameworks from the Global North. Instead, there is a need for Africa-specific AI governance models that reflect local cultural, linguistic, and political realities. Such frameworks should strike a balance between innovation and protection, especially in high-stakes arenas like elections, civic discourse, and gender rights.

Finally, regional collaboration will be important. Just as misinformation campaigns span borders, so too must the regulatory responses. Initiatives such as pan-African research centers or legal harmonization efforts can help standardize ethical AI use across countries.

Conclusion

East Africa finds itself at a pivotal moment in the AI landscape. Although adoption of AI technologies remains relatively modest, their exploitation is accelerating rapidly, particularly in electoral contexts. This creates a dynamic where the region could become a testing ground for unregulated digital manipulation.

Addressing this challenge calls for urgent, local solutions. Public understanding of AI must be improved. Regional capacity also needs greater investment. Without swift action, AI-driven disinformation may spread faster than efforts to stop it.
