Data Visiting as Design-Based Governance

I was recently asked to speak on the topic of “Data Visiting as Design-Based Stewardship Supporting a Multi-Dimensional Governance Configuration.” I want to take issue with a word in that title.

The word is “stewardship.” It is the wrong frame.

Stewardship avoids the language of ownership. A steward keeps something safe. An owner uses something. An owner has legal rights — the right to exploit, to license, to build value. A steward has none of those rights.

Why does this matter? It matters because in Africa, we need institutions — universities, hospitals, biobanks, research councils — to understand themselves as owners of data. Ownership has legal bite. Ownership means you can decide how your data is used, who benefits, and on what terms. If African institutions see themselves merely as stewards — as custodians keeping data safe until someone from the Global North comes to analyse it — then we have not escaped data colonialism. We have institutionalised it.

Data visiting, properly understood, is a tool that empowers data owners. It allows an institution to say: you may analyse our data, but on our terms, in our environment, under our control. That is not stewardship. That is ownership in action.

What data visiting is — and what it is not

Data visiting is a form of data sharing in which data are analysed within the provider’s computing environment, without being physically transferred. The researcher visits the data; the data do not travel to the researcher.

Data visiting is not the entirety of data governance. Ethics committees, data access committees, institutional review boards, data use agreements — all of these remain part of the governance landscape. What data visiting offers is something different: it is a design-based governance tool that can be integrated into a broader governance framework. It gives data owners a configurable technical architecture through which governance decisions can be implemented directly.

A call for terminological convergence

Before going further, a point on terminology. The field must converge on data visiting — not “data visitation,” not other variants. GA4GH has adopted “data visiting.” The academic literature overwhelmingly uses “data visiting.” If we are serious about building a shared governance vocabulary, we cannot afford terminological fragmentation. The concept is hard enough to communicate without muddying it with competing labels. Data visiting is the term. Let us use it consistently.

The one-dimensional trap

Too often, we hear: “We use data visiting” — as if that settles the governance question. It does not.

Consider two systems. In the first, the researcher has full autonomy to run custom code on identifiable data, with unrestricted output — that is data visiting. In the second, the researcher submits a fixed query and receives only reviewed aggregate statistics — that is also data visiting. The governance implications could not be more different.

Saying “we do data visiting” is like saying “we have a contract.” It tells you nothing about the terms. Data visiting is not one thing. It is a configuration space — a multi-dimensional design surface. And if you reduce it to one dimension, you will get your governance wrong.

Seven dimensions, seven governance levers

This is why I developed the Seven-Dimensional Data Visiting Framework — the 7D-DVF. It disaggregates data visiting into seven adjustable dimensions: researcher autonomy, data location, data visibility, the nature of the shared data, output governance, the trust and control model, and auditability and traceability.

Each dimension is a governance lever — a concrete design decision with direct legal and ethical consequences. Researcher autonomy: how much freedom does the visiting researcher have? Data visibility: can they see raw records, or only aggregates? Output governance: are results reviewed before release, or exported freely?

The power of the framework is that it makes the governance configuration legible. An ethics committee reviewing a data visiting proposal can assess each dimension independently and ask: is this calibration proportionate to the risk? And a data owner — an African university, a national biobank — can use these levers to assert control over how their data is accessed, on their terms, in service of their priorities.
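
The seven levers can be made tangible as a configuration object. The sketch below is my own illustration, not an official 7D-DVF schema — the field names paraphrase the seven dimensions, and the example settings are assumptions chosen to mirror the two contrasting systems described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataVisitingConfig:
    """One point in the 7D-DVF configuration space (illustrative values only)."""
    researcher_autonomy: str   # e.g. "fixed-query" | "notebook" | "custom-code"
    data_location: str         # e.g. "on-premises enclave" | "sovereign cloud"
    data_visibility: str       # e.g. "aggregate-only" | "record-level"
    data_nature: str           # e.g. "pseudonymised genomic" | "synthetic"
    output_governance: str     # e.g. "manual review" | "automated checks" | "none"
    trust_model: str           # e.g. "owner-controlled" | "federated"
    auditability: str          # e.g. "full query log" | "session recording"

# Both of these are "data visiting", yet they sit at opposite
# corners of the configuration space:
restrictive = DataVisitingConfig(
    "fixed-query", "on-premises enclave", "aggregate-only",
    "pseudonymised genomic", "manual review", "owner-controlled", "full query log",
)
permissive = DataVisitingConfig(
    "custom-code", "on-premises enclave", "record-level",
    "identifiable genomic", "none", "federated", "session recording",
)
```

An ethics committee comparing these two objects field by field is doing exactly the dimension-by-dimension review the framework calls for.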

Design as governance

This brings me to the central insight. In data visiting, design functions as governance. When you choose to restrict data visibility to query-only access, that is a governance decision implemented through technical design. When you require output review before release, that is governance embedded in the system architecture.

Data visiting gives data owners the ability to embed governance decisions directly into the technical infrastructure. This is not governance layered on top of a system — it is governance built into the system. The 7D-DVF provides the language and the structure to do this rigorously, deliberately, and in a way that serves the interests of the data owner.
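
To make "governance built into the system" concrete, here is a minimal sketch of one such design decision: an output gate that applies small-cell suppression before any result leaves the enclave. The threshold of 5 and the function name are illustrative assumptions — real thresholds are a policy decision for the data owner:

```python
SMALL_CELL_THRESHOLD = 5  # illustrative; the actual threshold is a policy choice

def release_counts(counts: dict) -> dict:
    """Suppress small cells so that rare categories cannot be released.

    This is the governance rule "output review before release" expressed
    directly in code: results below the threshold never leave the system.
    """
    return {
        category: (n if n >= SMALL_CELL_THRESHOLD else "<suppressed>")
        for category, n in counts.items()
    }

# A visiting researcher's query result, filtered on its way out:
print(release_counts({"variant_A": 128, "variant_B": 3}))
# variant_B involves too few individuals, so only its suppression is visible
```

The point is not this particular rule but the pattern: the governance decision is not a clause in an agreement that someone must remember to enforce — it is the only path the data can take out of the system.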

For Africa, this is transformative. It means that an institution that owns genomic data can participate in global research collaborations without surrendering control — without the data leaving, without ceding sovereignty, and with every governance parameter configured to build local capacity and contribute to an African bio-economy. Not as stewards. As owners.

A challenge

Stop treating data visiting as a binary. It is a multi-dimensional configuration space. Stop saying “we do data visiting” as if that answers the governance question. Specify which data visiting — along how many dimensions, calibrated to what risks, in what legal and ethical context. And recognise data visiting for what it is — not a stewardship tool, but an ownership tool. A tool that lets data owners govern on their own terms, build their own capacity, and participate in global science as equals.

The tools exist. Use them.

Which AI models actually know South African law?

In my latest article, I put five of the most popular AI tools through a South African legal obstacle course to see how they perform in reasoning through real legal scenarios. The idea was simple: can generative AI — not trained specifically on South African law — reason like a local lawyer?

The results were illuminating. Some models impressed. Others, well, should probably be held in contempt of court.

The study covered three scenarios drawn from private law:

1. What happens when a dachshund bites someone?

2. Do you have to pay if you refuse to take your bakkie back after it’s been serviced?

3. Who’s liable when a veldfire gets out of control?

    This post focuses on that third scenario. If you’d like to see how they handled the sausage dog and the car dispute, you’ll find all the details (and comparative scores) in the full article here.

    Setting the scene: fire in the Midlands

    Imagine Jacob, a cattle farmer in the KZN Midlands, decides to burn dry grass on his farm to clear it for new growth. His neighbour Maria has warned him, repeatedly, about the risk — the wind tends to carry embers across the fence. Jacob proceeds anyway. Predictably, the fire jumps the fence, damages Maria’s grazing land, and injures two of her prized Nguni cattle.

    Maria demands compensation. Jacob says it was an accident.

    Now, this is no mere exam hypothetical — it’s a legal minefield, blending common-law delict and the National Veld and Forest Fire Act 101 of 1998 (NVFFA).

    What the law requires (and what AI needs to spot)

    At common law, this is a textbook case of actio legis Aquiliae — delictual liability for patrimonial loss. The plaintiff must show five elements: conduct, wrongfulness, fault, causation, and damage.

    But the NVFFA raises the stakes. Under section 34(1), there’s a statutory presumption of negligence if a veldfire spreads from one property and causes harm. This flips the burden: unless Jacob can show he took reasonable precautions, he is presumed negligent. That statutory overlay is not optional — it defines how a South African court would approach the matter.
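
The burden shift can be made explicit in a toy decision sketch. This is a deliberate simplification of s 34(1) — it ignores the Act's exemptions (such as fire protection association membership) and the other delictual elements — but it captures the reversal of onus that the weaker models missed:

```python
def negligence_finding(fire_spread_from_property: bool,
                       defendant_proved_reasonable_precautions: bool,
                       plaintiff_proved_negligence: bool) -> bool:
    """Toy model of the s 34(1) presumption (simplified for illustration)."""
    if fire_spread_from_property:
        # Statutory overlay: negligence is presumed; the burden is on the
        # defendant to rebut it by showing reasonable precautions.
        return not defendant_proved_reasonable_precautions
    # Ordinary common-law position: the plaintiff bears the burden of
    # proving negligence as part of the fault element.
    return plaintiff_proved_negligence

# Jacob: the fire spread, and he took no precautions despite warnings.
assert negligence_finding(True, False, False) is True
```

A model that analyses the scenario purely at common law is, in effect, running only the second branch — which is exactly the error described below.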

    In this scenario, I was especially interested to see which AI models could:

    • Identify actio legis Aquiliae as the correct cause of action,
    • Recognise the relevance of the NVFFA,
    • Incorporate the statutory presumption into their analysis,
    • And apply it coherently to the facts.

    Spoiler: only one of them did all of this.

    Claude — the star pupil

    Claude not only identified the actio legis Aquiliae correctly, but it also engaged with the NVFFA in a legally accurate way. It recognised the statutory presumption of negligence, discussed section 12(1) (the duty to maintain firebreaks), and even flagged whether Jacob belonged to a fire protection association.

    Claude’s analysis wasn’t just correct — it was legally structured, cited real case law, and anticipated counterarguments. This is the only model I would even consider letting draft a first-year exam answer — let alone a client memo.

    ChatGPT — clever, but forgot about statute law

    ChatGPT correctly identified the delictual claim and applied the five elements sensibly, even citing Kruger v Coetzee appropriately for the test of negligence. But it missed the NVFFA entirely. That omission significantly weakens the analysis, as it ignores the shift in evidentiary burden and the statutory duty of care.

    Still, its output was coherent, reasonably structured, and persuasive — provided you explicitly prompt it to deal with the relevant statutes.

    DeepSeek — close, but misses the mark

    DeepSeek followed a similar pattern to ChatGPT: good grasp of delictual structure, but no engagement with the statute. It also relied on real case law, though its application of legal principles was occasionally vague. Competent, but not reliable if the issue involves anything beyond textbook delict.

    Grok and Gemini — not ready for the bar

    Both Grok and Gemini performed poorly. Grok referred to a “delict of negligence” — a fundamental misunderstanding of how South African law frames fault. Neither model identified actio legis Aquiliae. Neither mentioned the NVFFA. Case law citations were weak or missing. These models felt like overseas exchange students bluffing their way through a South African law tutorial. Politely put: not helpful.

    What this tells us about AI in legal research

    The veldfire scenario offers a revealing stress test for generative AI. It shows that while large models can replicate form, their depth of legal reasoning varies wildly — especially when statutory law modifies common-law doctrine.

    A few takeaways:

    • Don’t assume AI knows the law. Sometimes it does, sometimes it does not, and sometimes its knowledge is only partial.
    • Citation ≠ comprehension. Some models cite real cases but don’t understand them; others hallucinate entirely.
    • Structured reasoning is rare. Only one of the five models showed a true grasp of how common law, statute, and fact must interact in legal analysis.

    Want more?

    The full article contains all three scenarios, a comparative table of how the five models performed across seven legal criteria, and more detailed observations about hallucinated case law, doctrinal confusion, and where AI shows promise (and where it absolutely doesn’t).

    🔗 Read the full article here.

    Can ChatGPT-4 draft complex legal contracts? I put it to the test

    The legal profession is changing—and not slowly. With generative AI models like ChatGPT-4 becoming increasingly capable, it’s no longer a matter of if they’ll assist legal professionals, but how. So I decided to put ChatGPT-4 through its paces with one of the more demanding legal documents in my field: the Data Transfer Agreement (DTA) for health research.

    DTAs are not your everyday consumer contracts. They deal with sensitive personal data, cross-border data flows, and legal compliance in an increasingly regulated landscape. They are intricate, specialist documents, and they’re rarely found in the public domain—meaning they likely weren’t heavily represented in the data ChatGPT-4 was trained on. In short, if generative AI can draft a decent DTA, that would be something worth paying attention to.

    So I ran an experiment. First, I used ChatGPT-4 to generate an outline of a typical DTA. Then, I fed it each clause heading and asked for a detailed version of each clause. The result? A 6,800-word draft DTA—coherent, reasonably structured, and almost impressive.

    But not perfect.

    In my article just published in Humanities and Social Sciences Communications, I dig into where ChatGPT-4 excels and where it still falls short. Yes, it can generate most of the expected clauses. But its grasp of legal precision, clarity, and especially data protection compliance still leaves room for improvement. There are issues with redundancy, inconsistent use of terms, and ambiguous concepts like “derivative works.” And although it mentions security and compliance, it doesn’t always go deep enough to meet best-practice standards.

    The takeaway? ChatGPT-4 is not ready to replace lawyers. But it is already a powerful tool in the legal drafting toolbox—especially when used strategically. My two-stage approach (first outlining, then refining clause-by-clause) is a method I believe many legal professionals can adopt to streamline their work while maintaining full control and accountability.
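
The two-stage approach can be sketched as a simple pipeline. The helper `ask_llm` below is a stub standing in for whatever model API you use — its name, and the whole sketch, are my own illustrative assumptions rather than the exact prompts used in the study:

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a generative AI model (stubbed here)."""
    return f"[model response to: {prompt[:40]}...]"

def draft_dta() -> str:
    # Stage 1: ask for the skeleton of the agreement.
    outline = ask_llm(
        "List the clause headings of a typical Data Transfer Agreement "
        "for health research, one per line."
    )
    headings = [h.strip() for h in outline.splitlines() if h.strip()]

    # Stage 2: expand each heading into a full clause, one prompt at a time,
    # so the lawyer can review and refine clause by clause.
    clauses = [
        ask_llm(f"Draft the '{heading}' clause of the DTA in full.")
        for heading in headings
    ]
    return "\n\n".join(clauses)
```

The design choice is the review point between the stages: because each clause is generated and inspected separately, the human drafter keeps control and accountability at every step rather than receiving one monolithic output.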

    Most importantly, this experiment raised a set of broader questions—ethical ones. Should clients be told when AI was involved in drafting? Who is ultimately responsible for errors? And how do we ensure fairness in a world where algorithmic bias can quietly shape outcomes?

    The answers aren’t simple. But one thing is clear: the future of legal drafting will be a collaboration between human lawyers and artificial intelligence. This study is a small, practical step in figuring out what that collaboration should look like.

    If you’re curious to see what ChatGPT-4 produced—or if you’re a legal professional wondering how to make the most of AI without compromising on quality—you’ll find the full article here:

    📄 Read the article

    Reconciliation as a solution to AI liability in healthcare

    Introduction

    As a health lawyer deeply engaged with the evolving landscape of artificial intelligence (AI) in healthcare, I recently reflected on Dario Amodei’s insightful essay, Machines of Loving Grace: How AI Could Transform the World for the Better. Amodei envisions a future where powerful AI systems revolutionise medicine, accelerating advancements in biology and neuroscience to eradicate diseases and enhance human well-being. This optimistic outlook paints a picture of unprecedented progress, where AI compresses decades of medical innovation into a few short years.

    The challenge of AI liability in healthcare

    However, amid this excitement lies a critical challenge: the question of legal liability when AI systems cause harm in healthcare settings. Traditional liability frameworks, designed for human actors and predictable products, struggle to accommodate the complexities introduced by autonomous and adaptive AI technologies. In my research group’s previous work (Naidoo et al., 2022; Bottomley & Thaldar, 2023), we have explored these challenges extensively, proposing reconciliation as a viable solution to address AI healthcare liability issues.

    The limitations of traditional liability frameworks

    AI systems, particularly those utilising deep learning, often function as ‘black boxes.’ Their decision-making processes are not easily explainable, making it difficult for healthcare practitioners to foresee errors or for patients to understand how certain conclusions were reached. This opacity complicates the attribution of fault, a cornerstone of traditional negligence-based liability regimes. When an AI system recommends an unconventional treatment or makes a decision that leads to harm, assigning responsibility becomes a daunting task.

    In the context of Amodei’s vision, where AI surpasses human expertise across various domains, these questions become even more pressing. The potential for AI to operate autonomously raises concerns about accountability and the adequacy of existing legal frameworks. Relying solely on traditional fault-based liability may not suffice, as it could lead to unjust outcomes and hinder the adoption of beneficial AI technologies due to fear of litigation.

    Proposing a reconciliatory approach

    In my research group’s work, we have argued for a reconciliatory approach to AI liability in healthcare, emphasising the importance of fostering responsibility and accountability without stifling innovation. A reconciliatory approach shifts the focus from punitive measures to collective learning and improvement. Instead of prioritising questions like ‘Who is at fault?’ it encourages stakeholders to ask, ‘How can we prevent this harm from occurring again?’ This mindset fosters an environment where healthcare practitioners, developers, and patients work together to enhance AI systems’ safety and efficacy.

    Implementing reconciliation in practice

    One practical manifestation of this approach could involve establishing a specialised dispute resolution institution for AI-related harms in healthcare. Such an institution would operate with broad investigative powers, enabling it to access all relevant information about the AI system, its development, and its deployment. By adopting an inquisitorial rather than adversarial stance, the institution would facilitate open dialogue among stakeholders, focusing on uncovering the root causes of harm and developing strategies to mitigate future risks.

    This model draws inspiration from alternative dispute resolution mechanisms, such as South Africa’s Commission for Conciliation, Mediation, and Arbitration (CCMA). By prioritising reconciliation and collaboration, the institution can help balance the need for accountability with the imperative to support innovation in AI technology. Victims of AI-related harm would receive appropriate compensation and redress through an insurance scheme funded by AI developers, manufacturers, and healthcare providers—without the need for protracted litigation. At the same time, developers and healthcare providers would benefit from a clearer understanding of how to improve their systems and practices.

    Benefits of the reconciliatory approach

    This reconciliatory framework addresses the inherent challenges of assigning liability in the context of AI’s complexity. It acknowledges that AI systems often involve multiple actors across their lifecycle—developers, data scientists, healthcare providers, and more. By focusing on collective responsibility, the approach reduces the burden on any single party and promotes a shared commitment to patient safety.

    Moreover, this approach aligns with ethical imperatives in healthcare. It fosters transparency, encourages open communication, and supports continuous improvement of AI systems. By involving all stakeholders in the process, it enhances trust in AI technologies and facilitates their integration into healthcare practices.

    Aligning with Amodei’s vision

    Amodei’s essay underscores the transformative potential of AI but also hints at the necessity of addressing societal and ethical challenges. The reconciliatory approach I advocate complements his vision by providing a pathway to integrate AI into healthcare responsibly. It ensures that as we embrace technological advancements, we remain vigilant about safeguarding patient rights and maintaining trust in the healthcare system.

    Conclusion

    Reconciling the innovative promise of AI with the imperative of legal accountability is not only possible but essential. By adopting a reconciliatory approach, we can navigate the complexities of AI liability in healthcare, fostering an environment where technology enhances human well-being without compromising ethical standards. This approach ensures that all stakeholders—patients, practitioners, and developers—are part of the solution, working together to realise the full benefits of AI in healthcare.

    As we stand at the cusp of a new era in healthcare, it is imperative that we thoughtfully consider the legal and ethical frameworks that will guide us. By embracing reconciliation as a solution for AI healthcare liability, we honour our commitment to patient safety, support innovation, and pave the way for a future where technology and humanity work hand in hand for the betterment of all.

    Riding the AI wave: South Africa’s future lies in AI education

    Artificial Intelligence (AI) is no longer a distant concept confined to science fiction. It’s here, and it’s transforming industries, economies, and societies across the globe. As AI technologies rapidly advance, South Africa stands at a critical juncture—either we can sit on the beach and watch the AI wave roll in, or we can grab our surfboards, run into the sea, and ride the wave of innovation. Those surfboards are education. 

    A key to South Africa’s success in this new era will be AI literacy and skills. To future-proof our nation, AI literacy and skills must be integrated at all levels—primary, secondary, and tertiary. AI is no longer just for tech experts; being able to use generative AI, such as ChatGPT, effectively and responsibly must become a fundamental part of every South African’s education. We must embrace it as an essential skill, akin to reading or mathematics, and ensure that our educational system is equipped to foster this new literacy.

    AI Literacy: The bedrock of future success

    South Africa’s recently published National AI Policy Framework wisely emphasises talent development as one of its core strategic pillars, underscoring the importance of equipping South Africans with AI knowledge from a young age. At the primary school level, students should be introduced to the basics of AI, understanding what it is, how it works, and how it will impact the world they are growing into. This foundational knowledge will ensure that children grow up seeing AI as a tool they can control and use creatively, rather than something mysterious or intimidating.

    As students progress to secondary school, this AI literacy should evolve. Here, AI tools can assist with critical thinking, problem-solving, and even coding. The goal is to ensure that students not only consume AI-driven technology but also understand how to shape and innovate with it. AI must be integrated across subjects—not confined to computer science classes alone. Imagine students using AI to improve their mathematics problem-solving or to assist in complex history projects. This kind of immersion will prepare them for the challenges and opportunities of the AI-driven future.

    At the tertiary level, South Africa has an incredible opportunity to lead the continent. Our universities must prioritise AI research, and industry partnerships should be fostered to ensure students receive real-world training in AI application. The National AI Policy Framework recognises the importance of public-private partnerships, creating a fertile ground for innovation. The universities that make AI a central focus of their curricula will be the ones whose graduates lead the way in both local and global AI developments. This is how we prepare the next generation of leaders—by giving them the skills and knowledge to navigate, and even shape, an AI-infused world.

    Normalising responsible AI use

    Integrating AI into education isn’t just about teaching students how to use it—it’s about ensuring they use it responsibly. In my experience drafting AI guidelines for academia, I have seen first-hand how crucial it is to balance AI innovation with ethical responsibility. AI is a powerful tool, but it’s only as good as the humans who wield it. South Africa’s National AI Policy Framework wisely emphasises the importance of ethical guidelines, transparency, and fairness in AI systems.

    In my own field of law, AI is already transforming legal practice by automating document review, predicting case outcomes, and streamlining legal research. But the role of law schools in this AI-driven future extends beyond merely teaching students how to use these technologies. We must prepare law students to engage with AI critically, understanding its limitations, ethical implications, and the risks of bias or misuse. By equipping future lawyers with AI literacy, we are not just preparing them to use AI tools effectively—we are teaching them to lead responsibly in a world where AI is increasingly shaping justice. This sense of responsibility is crucial not only in legal practice but across all sectors where AI is integrated.

    There is also a social challenge: the stigma that still surrounds AI in certain academic circles. Some cling to the misconception that using AI is akin to ‘plagiarism’ or ‘cheating’. Such thinking is fast becoming antiquated. AI, like a calculator or search engine, can enhance learning and research when used properly. Instead of stigmatising the use of AI, we should focus on educating students and researchers about its ethical and responsible use. By demystifying AI and embracing its potential, academia can lead the way in AI literacy and responsible use.

    Bridging the digital divide

    Of course, there are challenges. South Africa’s digital divide remains a significant barrier to equitable AI adoption. Many rural schools lack the digital infrastructure necessary to even begin conversations about AI education. But this obstacle should not deter us. The National AI Policy Framework addresses this divide by prioritising digital infrastructure development, investing in connectivity, and building a supercomputing infrastructure to support AI research. These efforts, combined with targeted investments in rural areas, will ensure that all South Africans—regardless of their background—can access the benefits of AI education.

    Riding the AI wave into the future

    AI is rapidly reshaping the future, and faster than we could have imagined. South Africa’s National AI Policy Framework provides a solid foundation by offering the tools and guidance to integrate AI into our education system and beyond. However, the true challenge lies in taking decisive action—ensuring that AI literacy is embedded at every educational level, and that all South Africans have the opportunity to develop the skills needed to succeed in an AI-driven world.

    By incorporating AI education into schools, universities, and workplaces, South Africa can position itself as a competitive force on the global stage. We cannot sit on the shore and just watch the wave. We must run into the sea, surfboard in hand, and ride it into a future where South Africa is not only a player but a leader in AI innovation.

    This opinion editorial has been published in The Mercury of 10 October 2024.