A landmark judgment on informational privacy

The recent High Court judgment in De Jager v Netcare Ltd is a significant ruling that clarifies important aspects of informational privacy under South African law. Having participated as amicus curiae, I had the opportunity to make submissions that shaped the court’s reasoning and the outcome of the case. This case is noteworthy not only for its impact on privacy law but also for its implications for health research and data governance.

What was the case about?

The case dealt with a privacy challenge raised by the plaintiff, Mr. Nicolaas de Jager, who objected to the admission of surveillance data obtained by the defendant, Netcare Ltd. De Jager argued that the covert collection of his health data violated his constitutional right to privacy and was therefore unconstitutional.

The key legal question before the court was whether health data obtained through covert surveillance could be admitted as evidence, given the constitutional right to privacy.

The court’s key findings on informational privacy

The judgment makes two key contributions to South African privacy law:

POPIA, not the Constitution, governs privacy disputes. A crucial aspect of the judgment is its reliance on the principle of subsidiarity, which dictates that where legislation exists to give effect to a constitutional right, litigants must rely on that legislation rather than invoking the Constitution directly. The court adopted my argument that since POPIA comprehensively governs informational privacy, the plaintiff’s direct appeal to the Constitution was misplaced.

Privacy rights are not absolute. The court affirmed that data subject rights under POPIA must be balanced against competing interests—in this case, the public interest in truth-seeking in litigation. This aligns with broader legal debates on data subject rights versus the freedom of scientific research and the public interest in health research.

My contributions to this landmark ruling

As amicus curiae, I presented several key legal arguments, many of which were explicitly adopted by the court. Some of the most impactful points included:

The proper application of POPIA. I argued that POPIA is South Africa’s codification of informational privacy and that any privacy-based legal challenge must be grounded in POPIA. The court not only endorsed this position but relied on it as a decisive factor in dismissing the plaintiff’s constitutional privacy claim. This has clear implications for health research: POPIA represents a careful balancing of data subject privacy rights with other individual rights and the public interest in scientific progress, and that balance should be respected.

The ‘legitimate interest’ justification under POPIA. One of the critical questions in this case was whether the covert collection of health data was lawful under POPIA. I argued that POPIA permits such collection if it is necessary to pursue a legitimate interest. The court accepted my argument that Netcare’s interest in defending itself in litigation qualified as a legitimate interest. This interpretation provides important guidance for cases involving health data collection in legal, medical, and research contexts, where privacy concerns must be weighed against other legitimate interests.

The judicial exclusion in POPIA. I urged the court to clarify section 6 of POPIA, which exempts judicial functions from the Act’s privacy protections. Contrary to the narrow reading suggested in the leading textbook on POPIA, the High Court endorsed my broad reading of this provision, affirming that POPIA does not apply to the judicial process—including not only judicial officers but also litigants. The practical, purposive reasoning underlying the broad reading of section 6 is highly relevant to other parts of POPIA, such as the meaning of ‘specific’ consent.

Why this judgment matters

Beyond its direct impact on privacy litigation, this ruling is significant for several reasons: First, it provides a roadmap for future cases involving privacy. The judgment sets a precedent for how courts should evaluate privacy rights, reinforcing that informational privacy is a contextual, not absolute, right. Second, it establishes the central position of POPIA in the balancing of rights and interests in respect of special personal information, including health data. Third, it strengthens legal clarity for health data governance. The discussions around legitimate interest and the broad reading of the judicial exception are both particularly relevant for health research.

This case was an exciting opportunity to contribute to the evolution of South African privacy law, and I am pleased that my submissions played a crucial role in shaping the court’s thinking on the issues.

The future of South Africa’s fertility healthcare sector

South Africa’s fertility healthcare sector is at a crossroads, filled with both promise and challenges. Advanced medical expertise, cutting-edge technology, and affordable treatments have positioned the country as a sought-after destination for fertility care. Yet, beneath this success lie complexities and obstacles that could shape the sector’s future.

To uncover these dynamics, we conducted a series of in-depth interviews with key role players, revealing insights into the sector’s strengths, weaknesses, opportunities, and threats (SWOT). Our report distills those insights into an analysis that can guide the sector towards sustainable growth and high-quality, ethical care.

Here is a brief summary of the SWOT analysis:

Strengths: Building on Expertise and Affordability

One of the standout strengths is South Africa’s wealth of experienced fertility specialists, who uphold high standards of care bolstered by international training. Combined with state-of-the-art technology and infrastructure, this expertise has helped local clinics achieve impressive success rates. What’s more, the affordability of these treatments relative to international prices draws a growing number of patients from around the world, further supported by a favourable legal environment.

Weaknesses: High Costs and Limited Accessibility

However, affordability is not universal. Many local patients struggle with high treatment costs, compounded by limited medical aid coverage. Additionally, the sector is grappling with regulatory gaps, inconsistencies in standards, and the ongoing emigration of skilled professionals—issues that undermine the long-term sustainability of high-quality care.

Opportunities: Growth Through Collaboration and Innovation

Despite these challenges, opportunities abound. Establishing a comprehensive regulatory framework, with a dedicated regulator akin to the UK’s Human Fertilisation and Embryology Authority, could ensure uniform standards and better oversight. Expanding training and education programmes and investing in research and innovation can position South Africa as a global leader in fertility care. By strategically leveraging fertility tourism and promoting public-private partnerships, the sector could unlock further growth and innovation.

Threats: Regulatory Challenges and Market Dynamics

Yet, these opportunities come with threats. Inconsistencies in voluntary accreditation, intense competition among clinics, and high entry barriers pose risks to accessibility and service quality. Additionally, limited resources in public clinics create disparities in care, impacting lower-income patients the hardest.

The Road Ahead

South Africa’s fertility healthcare sector is poised for growth, but this potential depends on strategic action. By capitalising on its strengths and addressing key challenges, the sector can cement its position as a global leader in fertility care, benefitting both local and international patients.

For a deeper dive into the findings and strategic recommendations, I invite you to read our full report.

* The study was funded by the Competition Commission of South Africa. The content of our report is the responsibility of the authors and does not necessarily reflect the views of the Competition Commission.

Reconciliation as a solution to AI liability in healthcare

Introduction

As a health lawyer deeply engaged with the evolving landscape of artificial intelligence (AI) in healthcare, I recently reflected on Dario Amodei’s insightful essay, Machines of Loving Grace: How AI Could Transform the World for the Better. Amodei envisions a future where powerful AI systems revolutionise medicine, accelerating advancements in biology and neuroscience to eradicate diseases and enhance human well-being. This optimistic outlook paints a picture of unprecedented progress, where AI compresses decades of medical innovation into a few short years.

The challenge of AI liability in healthcare

However, amid this excitement lies a critical challenge: the question of legal liability when AI systems cause harm in healthcare settings. Traditional liability frameworks, designed for human actors and predictable products, struggle to accommodate the complexities introduced by autonomous and adaptive AI technologies. In my research group’s previous work (Naidoo et al., 2022; Bottomley & Thaldar, 2023), we have explored these challenges extensively, proposing reconciliation as a viable solution to address AI healthcare liability issues.

The limitations of traditional liability frameworks

AI systems, particularly those utilising deep learning, often function as ‘black boxes.’ Their decision-making processes are not easily explainable, making it difficult for healthcare practitioners to foresee errors or for patients to understand how certain conclusions were reached. This opacity complicates the attribution of fault, a cornerstone of traditional negligence-based liability regimes. When an AI system recommends an unconventional treatment or makes a decision that leads to harm, assigning responsibility becomes a daunting task.

In the context of Amodei’s vision, where AI surpasses human expertise across various domains, these questions become even more pressing. The potential for AI to operate autonomously raises concerns about accountability and the adequacy of existing legal frameworks. Relying solely on traditional fault-based liability may not suffice, as it could lead to unjust outcomes and hinder the adoption of beneficial AI technologies due to fear of litigation.

Proposing a reconciliatory approach

In my research group’s work, we have argued for a reconciliatory approach to AI liability in healthcare, emphasising the importance of fostering responsibility and accountability without stifling innovation. A reconciliatory approach shifts the focus from punitive measures to collective learning and improvement. Instead of prioritising questions like ‘Who is at fault?’ it encourages stakeholders to ask, ‘How can we prevent this harm from occurring again?’ This mindset fosters an environment where healthcare practitioners, developers, and patients work together to enhance AI systems’ safety and efficacy.

Implementing reconciliation in practice

One practical manifestation of this approach could involve establishing a specialised dispute resolution institution for AI-related harms in healthcare. Such an institution would operate with broad investigative powers, enabling it to access all relevant information about the AI system, its development, and its deployment. By adopting an inquisitorial rather than adversarial stance, the institution would facilitate open dialogue among stakeholders, focusing on uncovering the root causes of harm and developing strategies to mitigate future risks.

This model draws inspiration from alternative dispute resolution mechanisms, such as South Africa’s Commission for Conciliation, Mediation and Arbitration (CCMA). By prioritising reconciliation and collaboration, the institution can help balance the need for accountability with the imperative to support innovation in AI technology. Victims of AI-related harm would receive appropriate compensation and redress through an insurance scheme funded by AI developers, manufacturers, and healthcare providers—without the need for protracted litigation. At the same time, developers and healthcare providers would benefit from a clearer understanding of how to improve their systems and practices.

Benefits of the reconciliatory approach

This reconciliatory framework addresses the inherent challenges of assigning liability in the context of AI’s complexity. It acknowledges that AI systems often involve multiple actors across their lifecycle—developers, data scientists, healthcare providers, and more. By focusing on collective responsibility, the approach reduces the burden on any single party and promotes a shared commitment to patient safety.

Moreover, this approach aligns with ethical imperatives in healthcare. It fosters transparency, encourages open communication, and supports continuous improvement of AI systems. By involving all stakeholders in the process, it enhances trust in AI technologies and facilitates their integration into healthcare practices.

Aligning with Amodei’s vision

Amodei’s essay underscores the transformative potential of AI but also hints at the necessity of addressing societal and ethical challenges. The reconciliatory approach I advocate complements his vision by providing a pathway to integrate AI into healthcare responsibly. It ensures that as we embrace technological advancements, we remain vigilant about safeguarding patient rights and maintaining trust in the healthcare system.

Conclusion

Reconciling the innovative promise of AI with the imperative of legal accountability is not only possible but essential. By adopting a reconciliatory approach, we can navigate the complexities of AI liability in healthcare, fostering an environment where technology enhances human well-being without compromising ethical standards. This approach ensures that all stakeholders—patients, practitioners, and developers—are part of the solution, working together to realise the full benefits of AI in healthcare.

As we stand at the cusp of a new era in healthcare, it is imperative that we thoughtfully consider the legal and ethical frameworks that will guide us. By embracing reconciliation as a solution for AI healthcare liability, we honour our commitment to patient safety, support innovation, and pave the way for a future where technology and humanity work hand in hand for the betterment of all.

Riding the AI wave: South Africa’s future lies in AI education

Artificial Intelligence (AI) is no longer a distant concept confined to science fiction. It’s here, and it’s transforming industries, economies, and societies across the globe. As AI technologies rapidly advance, South Africa stands at a critical juncture—either we can sit on the beach and watch the AI wave roll in, or we can grab our surfboards, run into the sea, and ride the wave of innovation. Those surfboards are education. 

A key to South Africa’s success in this new era will be AI literacy and skills. To future-proof our nation, these must be integrated at all levels of education—primary, secondary, and tertiary. AI is no longer just for tech experts; being able to use generative AI, such as ChatGPT, effectively and responsibly must become a fundamental part of every South African’s education. We must embrace it as an essential skill, akin to reading or mathematics, and ensure that our educational system is equipped to foster this new literacy.

AI Literacy: The bedrock of future success

South Africa’s recently published National AI Policy Framework wisely emphasises talent development as one of its core strategic pillars, underscoring the importance of equipping South Africans with AI knowledge from a young age. At the primary school level, students should be introduced to the basics of AI, understanding what it is, how it works, and how it will impact the world they are growing into. This foundational knowledge will ensure that children grow up seeing AI as a tool they can control and use creatively, rather than something mysterious or intimidating.

As students progress to secondary school, this AI literacy should evolve. Here, AI tools can assist with critical thinking, problem-solving, and even coding. The goal is to ensure that students not only consume AI-driven technology but also understand how to shape and innovate with it. AI must be integrated across subjects—not confined to computer science classes alone. Imagine students using AI to improve their mathematics problem-solving or to assist in complex history projects. This kind of immersion will prepare them for the challenges and opportunities of the AI-driven future.

At the tertiary level, South Africa has an incredible opportunity to lead the continent. Our universities must prioritise AI research, and industry partnerships should be fostered to ensure students receive real-world training in AI application. The National AI Policy Framework recognises the importance of public-private partnerships, creating a fertile ground for innovation. The universities that make AI a central focus of their curricula will be the ones whose graduates lead the way in both local and global AI developments. This is how we prepare the next generation of leaders—by giving them the skills and knowledge to navigate, and even shape, an AI-infused world.

Normalising responsible AI use

Integrating AI into education isn’t just about teaching students how to use it—it’s about ensuring they use it responsibly. In my experience drafting AI guidelines for academia, I have seen first-hand how crucial it is to balance AI innovation with ethical responsibility. AI is a powerful tool, but it’s only as good as the humans who wield it. South Africa’s National AI Policy Framework wisely emphasises the importance of ethical guidelines, transparency, and fairness in AI systems.

In my own field of law, AI is already transforming legal practice by automating document review, predicting case outcomes, and streamlining legal research. But the role of law schools in this AI-driven future extends beyond merely teaching students how to use these technologies. We must prepare law students to engage with AI critically, understanding its limitations, ethical implications, and the risks of bias or misuse. By equipping future lawyers with AI literacy, we are not just preparing them to use AI tools effectively—we are teaching them to lead responsibly in a world where AI is increasingly shaping justice. This sense of responsibility is crucial not only in legal practice but across all sectors where AI is integrated.

There is also a social challenge: the stigma that still surrounds AI in certain academic circles. Some cling to the misconception that using AI is akin to ‘plagiarism’ or ‘cheating’. Such thinking is fast becoming antiquated. AI, like a calculator or search engine, can enhance learning and research when used properly. Instead of stigmatising the use of AI, we should focus on educating students and researchers about its ethical and responsible use. By demystifying AI and embracing its potential, academia can lead the way in AI literacy and responsible use.

Bridging the digital divide

Of course, there are challenges. South Africa’s digital divide remains a significant barrier to equitable AI adoption. Many rural schools lack the digital infrastructure necessary to even begin conversations about AI education. But this obstacle should not deter us. The National AI Policy Framework addresses this divide by prioritising digital infrastructure development, investing in connectivity, and building a supercomputing infrastructure to support AI research. These efforts, combined with targeted investments in rural areas, will ensure that all South Africans—regardless of their background—can access the benefits of AI education.

Riding the AI wave into the future

AI is reshaping the future faster than we could have imagined. South Africa’s National AI Policy Framework provides a solid foundation by offering the tools and guidance to integrate AI into our education system and beyond. However, the true challenge lies in taking decisive action—ensuring that AI literacy is embedded at every educational level, and that all South Africans have the opportunity to develop the skills needed to succeed in an AI-driven world.

By incorporating AI education into schools, universities, and workplaces, South Africa can position itself as a competitive force on the global stage. We cannot sit on the shore and just watch the wave. We must run into the sea, surfboard in hand, and ride it into a future where South Africa is not only a player but a leader in AI innovation.

This opinion editorial was published in The Mercury on 10 October 2024.

Understanding diversity in genetic data

In genetic research, ensuring that data reflects the true diversity of human populations is crucial. Calls for more diverse data arise from the need to improve scientific discovery and address disparities in health outcomes. However, the term “diversity” often lacks clarity, leading to an over-reliance on problematic continental ancestry categories.

In our article, Defining and pursuing diversity in human genetic studies, published in Nature Genetics, my co-authors and I propose a more nuanced approach. Diversity should be seen as a means to achieve specific research goals—whether scientific discovery, improving health outcomes, or addressing other concrete challenges.

Achieving diversity requires careful consideration throughout the research process, from recruitment to data analysis and sharing. It’s not just about representation—sometimes oversampling certain populations may be necessary to achieve the goals of a particular project.
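To make the idea of oversampling concrete, here is a minimal sketch in Python of how a study team might set recruitment targets that deliberately oversample smaller populations. The group labels, enrolment counts, and the per-group minimum are hypothetical illustrations for this post, not figures from our article.

```python
# Minimal sketch: recruitment targets that deliberately oversample smaller
# populations. All group labels and numbers below are hypothetical.

# Current enrolment per population group in an imagined study.
current_enrolment = {"Group A": 4000, "Group B": 600, "Group C": 250}

# Suppose the study's discovery goal needs at least 1,500 participants per
# group for adequate statistical power (an assumed, illustrative threshold).
MIN_PER_GROUP = 1500

# Additional participants to recruit from each group: well-represented
# groups need none, while underrepresented groups are oversampled relative
# to their share of the overall population.
additional_recruitment = {
    group: max(0, MIN_PER_GROUP - enrolled)
    for group, enrolled in current_enrolment.items()
}

print(additional_recruitment)
# {'Group A': 0, 'Group B': 900, 'Group C': 1250}
```

The same logic applies whatever drives the threshold, whether a power calculation, a benchmarking requirement, or a project-specific equity goal: recruitment targets follow from the research goal, not from proportional representation alone.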

By focusing on the outcomes of research and ensuring thoughtful practices, we can help ensure that the benefits of genetic research reach as many populations as possible.