Workers at a Ghanaian telecom firm were shocked when a video began circulating on WhatsApp and Facebook.
It showed the company’s CEO apparently admitting that staff pensions had been mismanaged and that “funds were gone.” Within hours, hashtags trended, customers threatened to take their business elsewhere, and investors began to panic.
Two days later, investigators from the Cyber Crime Unit confirmed the video was fake. The voice had been cloned using artificial intelligence, and the visuals were stitched together from older interviews.
The aim was to cause reputational damage and extort the CEO into paying to stop the clip from spreading further.
This imagined scenario is grounded in growing real-life patterns across Africa, where deepfakes, AI-generated video or audio that imitates real people, are becoming powerful tools of manipulation. They mark a dangerous shift: seeing and hearing someone is no longer a guarantee of truth.
Why deepfakes are becoming a public risk
Across the world, synthetic media, artificially generated digital content, has evolved from a digital curiosity into a weapon of deception. A 2024 Deeptrace report estimated that the number of deepfake videos online doubles every six months, with more than 95% being non-consensual or deceptive content.
Deepfake creation has become cheaper, faster and more accessible. A 2024 KPMG review of cyber risks noted that global detections of deepfake-related incidents rose by 245% from Q1 2023 to Q1 2024.
In Africa, deepfake cases are expanding rapidly. Identity verification provider Smile ID’s 2025 report shows that deepfake fraud incidents increased sevenfold between Q2 and Q4 of 2024 as AI tools became easier to use. Similarly, Sumsub’s “Fraud Trends for 2025” report revealed that deepfakes now account for a rising share of identity fraud in Africa, with synthetic media forming part of “fraud-as-a-service” toolkits.
South Africa’s Financial Sector Conduct Authority (FSCA) warned citizens in 2024 about manipulated videos of local entrepreneurs endorsing fraudulent investment schemes. These videos were used to lure victims into unregulated crypto and forex platforms. Similar warnings came from Kenya’s Communications Authority after a deepfake of a government minister promising cash gifts went viral on TikTok.
Because manipulated media can lend false claims an air of authenticity, it amplifies other forms of digital attack. Deepfakes now intertwine with phishing, impersonation and financial fraud.
How deepfakes and cybercrime intersect
Deepfakes no longer stop at fake confessions or celebrity scandals. Cybercriminals now merge them with phishing and social engineering to steal data or money.
A 2025 TransUnion study found that audio and video impersonation accounted for 4% of all digital fraud incidents globally, and that Africa saw one of the fastest increases in deepfake-based scams. The same report noted that audio-based impersonations in financial fraud rose by over 40% between 2023 and 2024.
Interpol’s Africa Cyberthreat Assessment 2023 also linked synthetic media to extortion and “vishing” (voice phishing), warning that deepfakes are now used to “amplify credibility” in existing scams.
Imagine receiving a voice message from your “boss” asking you to urgently process a payment, or a video from an old schoolmate asking for help. The voice, face and tone seem genuine, but behind them could be a criminal using AI to mimic someone you trust.
When you cannot trust the image or the voice
The biggest danger with deepfakes is how effortlessly they blend into everyday multimedia.
Unlike simple text misinformation, deepfakes bypass logic and appeal directly to emotion. A video of a leader saying something outrageous or a relative appearing in distress can trigger instant reactions such as anger, fear and sympathy, before anyone verifies it.
Studies from MIT’s Media Lab in 2023 found that false videos spread up to six times faster than verified news on social platforms because people forward them before critical thinking kicks in. In Africa, where platforms like WhatsApp and TikTok are the main news sources for young people, this pattern amplifies social division and financial risk.
When deepfakes strike communities and institutions
The damage deepfakes do extends beyond individuals. In election cycles, manipulated videos can sway public opinion. In corporate settings, fake statements can trigger market panic, and in community disputes, fake videos may stoke conflict between groups.
In West Africa, deepfake content surfaced during political unrest in Burkina Faso showing the country’s leader in fabricated meetings with world figures. The fabricated narrative was later debunked by Full Fact and reported on by ADF Magazine.
During Nigeria’s 2023 elections, manipulated videos of candidates making inflammatory remarks spread widely on Facebook and WhatsApp before being debunked by Dubawa, a West African fact-checking institution. By then, they had already shaped public opinion and voter sentiment.
In South Africa, broadcast anchor Leanne Manas raised concerns after her image was used in videos promoting weight loss and investments she never endorsed, a case highlighted in an FSCA warning.
In Kenya, deepfake pornography targeting female journalists has been used to intimidate and silence them, as documented in Reuters and CIPESA reports. These tactics exploit social stigma and gender bias, using false imagery to damage reputations.
Such incidents reveal that deepfakes are not distant threats. They touch workplaces, families and democratic spaces, and when fake becomes believable, trust in all media content weakens. That is why everyone needs to learn to recognize and detect deepfakes.
Five simple checks anyone can apply to spot a deepfake
Spotting a deepfake can seem technical, and experts do rely on forensic software, but with practice you, as an everyday user, can learn to pick up on red flags.
- Watch for visual inconsistencies: Look for odd lighting, mismatched shadows, unnatural blinking or lips that don’t sync with speech. Edges around hair, ears, or glasses often blur or flicker.
- Listen for audio unnaturalness: Deepfake voices may have slight distortion, odd pauses, or a lack of breathing sounds. When familiar voices seem “off,” treat them with suspicion.
- Check who first posted it: If the video dropped via unverified or new accounts, or surfaced in chat before public media, that is a warning sign.
- Reverse-search frames: Take a still image from the video and run it through Google Image Search or TinEye. If the same frame appears elsewhere in a different context, the footage may have been reused. Other free reverse-search and verification tools are also available online.
- Cross-check with established sources: If a major figure suddenly makes a shocking statement on video, legitimate news outlets or the person’s verified channels often issue confirmations or retractions. Delay belief until you see that confirmation. If you discover a deepfake, flag it immediately, whether it is on WhatsApp, Facebook, X or TikTok, so the platform can review and remove it.
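For the curious, the reverse-search step can be partly automated. Services like TinEye commonly rely on perceptual hashing, where visually similar images produce similar fingerprints even after compression or resizing. The Python sketch below is purely illustrative: it implements a minimal average hash (aHash) and assumes the video frame has already been decoded and downscaled to an 8×8 grayscale grid (real tools handle that step with imaging libraries, and production systems use more robust hashes).

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    pixels: list of 8 rows, each a list of 8 brightness values (0-255).
    Returns a 64-character bit string: '1' where a pixel is brighter
    than the grid's overall average, '0' otherwise.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)


def hamming_distance(hash_a, hash_b):
    """Count differing bits; a small distance suggests near-duplicate frames."""
    return sum(a != b for a, b in zip(hash_a, hash_b))


# Hypothetical example: a dark-over-bright frame, and the same frame
# with slight compression noise in one pixel.
frame = [[0] * 8 for _ in range(4)] + [[255] * 8 for _ in range(4)]
noisy = [row[:] for row in frame]
noisy[0][0] = 12  # minor brightness change from re-encoding

print(hamming_distance(average_hash(frame), average_hash(noisy)))  # → 0
```

A distance of zero or just a few bits between a suspect frame and an archived clip is a strong hint the footage was recycled from older material, which is exactly the pattern in the opening scenario.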
These steps often expose anomalies that casual viewing misses, and together they build a fact-checking habit that helps you avoid misinformation.
The spread of deepfakes highlights a larger cultural challenge: building a society that pauses before believing. When individuals can tell what is false, whole communities become more resistant to manipulation.
Hence, guarding the truth becomes a collective responsibility. Tools, careful habits, and collective awareness can blunt synthetic deception. As deepfake incidents grow across Africa and the world, we must learn to pause, verify and resist manipulation.