Fallout over South Africa’s AI-generated AI policy: ‘It’s embarrassing’
· The South African

South Africa has withdrawn its draft national artificial intelligence (AI) policy after it emerged that parts of its reference list contained fake sources believed to be AI-generated.
The document, released earlier this month for public comment, was meant to guide the country’s approach to AI development and regulation.
The draft proposed setting up new bodies such as a National AI Commission, an AI Ethics Board and a regulatory authority, alongside incentives like tax breaks, grants and subsidies to drive private sector participation.
Instead, the AI policy has sparked concern over how it was compiled.
Communications and Digital Technologies Minister Solly Malatsi acknowledged the fallout, saying: “The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.”
The draft AI policy was withdrawn after it was discovered that at least six of its 67 academic citations were AI-generated hallucinations, citing journal articles that do not exist.
Statement on the integrity of the Draft National Artificial Intelligence Policy
— SollyMalatsi (@SollyMalatsi) April 26, 2026
Following revelations that the Draft National Artificial Intelligence Policy published for public comment contains various fictitious sources in its reference list, we initiated internal questions…
AI policy setback raises credibility concerns
Malatsi said the discovery went beyond a minor oversight.
“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy,” he wrote.
In a recent interview with eNCA, he was more blunt about the fallout.
“It is a serious matter…it’s quite embarrassing that we find ourselves in this position,” Malatsi said.
He noted the irony that the AI policy itself aimed to establish ethical standards for AI use.
“It’s ironic that in our efforts to do that we fell short,” he added.
“The intention and the approach in developing an AI policy was also to get into a space where we have clear guidelines and clear guard rails for ethical use and adoption of AI in professional spaces.”
Over-reliance on AI tools flagged
Malatsi pointed to a lack of human checks as a key cause of the error.
“This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility,” he said.
He also warned that relying too heavily on AI tools without proper checks can lead to serious consequences.
“It’s very clear…the over-reliance on AI tools without vigilant and robust human oversight…this is one major exposure of that,” he said.
The minister confirmed there would be consequences for those involved in drafting the AI policy, although no timeline has been given for a revised version.