South Africa’s AI policy cited fake research, created by AI: what lessons need to be learned
South Africa's first attempt to establish a binding artificial intelligence (AI) policy framework came to an abrupt halt just 16 days after it was officially gazetted.
On 10 April, the Department of Communications and Digital Technologies published the Draft South Africa National Artificial Intelligence Policy for public comment.
Journalists checked the references and found that they contained fabrications. These fell into two categories: academic journals that do not exist, and real journals in which the referenced research articles were never published.
Such fabrications are typical of a known generative AI problem called hallucination.
Withdrawing the draft, the communications minister was frank: the problem was not a technical glitch but a failure of oversight. Generative AI was used without proper human verification of the sources, compromising the credibility and integrity of the document.
Much of the public commentary has treated this as an embarrassment: the policy meant to govern AI was itself undermined by AI.
As a senior lecturer in cyber law, including the regulation of AI, I argue that framing this episode as an embarrassment obscures what needs to be examined. It misses the main point of what is at stake.
The hallucinated citations reveal two specific failures. Epistemic integrity (the assurance that research has been conducted through reliable, ethical and repeatable methods that any reader could verify) was absent. So was information integrity (the public's reasonable expectation that information from an authoritative source can be trusted).
The policy was not equipped to govern either of these failures, and has now itself demonstrated both. This matters because generative AI can be harmful, and its harms are not limited to fake references, but also include fake images, fake videos, fake voices, and the weaponisation of people's likenesses through deepfakes.
What is AI hallucination?
Hallucinations are a known problem of generative AI, the category of AI that produces content such as text, images, audio and video through tools like ChatGPT and Grok.
Hallucinations happen when an AI system, in trying to fulfil a task, produces content that sounds convincing but is inaccurate or entirely fabricated. They are a growing problem:
In universities, academics have been found listing fake AI-generated sources.
In courts in South Africa and various other countries, lawyers have submitted citations to non-existent sources in their pleadings. There are many examples of such cases.
In policy documents, such as the retracted AI policy. Here the hallucinations did not just invent sources: they manufactured seemingly credible African scholarly authority, cast highly respected authors' names in a false light, and attributed false evidence to real institutions that are recognised as authoritative publishers of academic papers.
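The failure the minister described was the absence of human verification of sources. Part of that verification can be automated. As a minimal sketch (not the department's actual workflow, and no substitute for a human reader), each reference's DOI can be checked against the public Crossref database: a citation with an invented DOI will simply not resolve. The function names here are illustrative assumptions.

```python
import re
import urllib.request

# Cheap syntactic check for a DOI-shaped string, done before any network call.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")


def looks_like_doi(doi: str) -> bool:
    """Return True if the string is at least shaped like a DOI."""
    return bool(DOI_PATTERN.match(doi))


def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the Crossref API knows this DOI, False otherwise.

    A fabricated citation will typically fail here: either the string is
    not a valid DOI at all, or Crossref returns a 404 for it.
    """
    if not looks_like_doi(doi):
        return False
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


if __name__ == "__main__":
    # Hypothetical inputs: one real-looking DOI and one malformed entry
    # of the kind a model might hallucinate.
    for doi in ["10.1038/s41586-020-2649-2", "not-a-doi-at-all"]:
        print(doi, "->", "found" if doi_resolves(doi) else "NOT FOUND")
```

A check like this only confirms that a cited work exists; it cannot confirm that the work actually supports the claim attached to it. That second step still requires a human reader, which is the oversight the withdrawn policy lacked.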
What now?
South Africa's policy was based on responsible AI governance. Responsible AI needs accountability, transparency, and explainability. These are non-negotiable conditions, echoed by the Organisation for Economic Co-operation and Development principles and the Smart Africa AI Blueprint that the policy draws on.
These governance principles are not just for AI system designers. They bind any institution that uses AI, including use in the production of public documents. The policy failed all three in its own production. The department has some serious questions to answer on all these fronts.
1. Accountability
Accountability calls for a comprehensive explanation of the extent to which the non-existent sources have affected the policy. The department should not proceed to revision without meeting the standards that the revised policy will propose for others. Handled openly, this is an opportunity for the department to gain the trust of South Africans and demonstrate resilient, responsible governance in action.
2. Transparency
Transparency demands disclosure. Which sections of the policy are materially affected by the fake sources? Which tool was used? By whom? At which stage of drafting or compiling public submissions did they enter the policy? Was AI used to generate the literature review, the founding values, the synthesis of public comments, or all the above?
The department has not told us.
3. Explainability
Explainability demands that we can trace reasoning. The hallucinated sources appear in the reference list, but without a full disclosure from the department, the public cannot know which parts of the policy they were used to support, or how deeply they shaped its foundational priorities and values.
The public comment sections, by contrast, have a verifiable record of where the information came from.
Explainability requires that we can trace what shaped the normative framework of the policy. Without a section-by-section review that tells the public which parts of the policy were affected and to what extent, the department will, by the policy's own standards, have failed both the transparency and explainability requirements.
What needs to change
The retracted policy rightly recognised AI as a tool for inclusive economic growth, capacity development and human rights protection. It also acknowledged that it is a “point of departure” and that sector-specific approaches will be needed.
What must change is how generative AI is treated, both in the production of policy documents and in the mandates the policy creates for synthetic media, such as deepfakes.
These are not problems to be sorted out later at sector level. They are cross-cutting challenges to public trust that require their own regulatory logic and governance mechanisms built on cross-sectoral cooperation. The revised policy must incorporate them as a structural pillar: not as a subcategory of innovation governance, but as a problem the state is already living with.
This means designating a specific mandate holder for synthetic media and information integrity. Existing regulatory bodies already hold overlapping jurisdiction over digital content, identity harms, and information distribution.
What is missing is an agreed framework on definitions, remedies, and the steps to be taken when generative AI is used to spread misinformation and disinformation through fake sources and synthetic media.
Mandating that is not a question of creating new institutions. It is a question of political will and policy design.
Acknowledgements: After drafting this article myself, I used Claude to improve the readability of the piece. I personally drafted, verified and reviewed all the substance and sources referenced in it. I take full responsibility for the contents of this article.
Nomalanga Mashinini receives funding from the National Research Foundation Thuthuka Grant. She is also a member of the Thematic Working Groups on AI Talent and Skills and AI Data Ecosystems, under the Africa AI Council, established by Smart Africa.
By Nomalanga Mashinini, Senior Lecturer, University of the Witwatersrand
Disclaimer: "The views expressed in this article are the author’s own and do not necessarily reflect ModernGhana official position. ModernGhana will not be responsible or liable for any inaccurate or incorrect statements in the contributions or columns here."