AI’s Legal Lapses: Courts Grapple with ‘Hallucinations’ and Ethical Minefields

[Image: Gavel hitting a circuit board with glitching AI code.]

The legal profession is confronting a growing challenge as artificial intelligence tools, particularly generative AI, demonstrate a propensity for "hallucinations" – fabricating information and sources. This phenomenon is raising significant ethical concerns and prompting urgent discussions about regulation and responsible use within courtrooms and legal practices worldwide.

Key Takeaways

  • AI tools can generate convincing but entirely fictitious case citations and legal arguments.
  • Lawyers risk breaching professional and ethical duties, and in some jurisdictions face prosecution, for submitting AI-generated misinformation to courts.
  • Privacy and data protection laws are being applied to AI, with regulators issuing guidance on responsible use.
  • A lack of comprehensive AI-specific regulation leaves a "wild west" scenario in some jurisdictions.

The Peril of AI Hallucinations in Court

Generative AI models, while powerful, operate by predicting the next word based on patterns in vast training datasets. This can lead them to confidently present fabricated facts, quotes, and even entire case citations that appear legitimate but are entirely fictitious. Lawyers have already inadvertently submitted such misinformation to courts, resulting in reprimands and fines. In one notable case, US lawyers were penalised for filing ChatGPT-generated citations that were later found to be non-existent.

Ethical and Professional Duties at Risk

Legal professionals are warned that relying on AI without rigorous fact-checking can breach their professional and ethical duties. In New Zealand, the Law Society and the judiciary have issued guidance highlighting the risks of misleading courts and clients. The potential consequences extend to disciplinary action and, in some jurisdictions like the UK, even criminal prosecution for presenting false material generated by AI.

Privacy and Data Protection Concerns

The use of generative AI also raises significant privacy and data protection issues. Regulators globally, including New Zealand’s Office of the Privacy Commissioner (OPC), are issuing guidance. Key concerns include the use of personal information in training data, the risk of confidential information being disclosed when used as prompts, and the accuracy of AI outputs. The OPC advises against inputting sensitive or confidential data into these tools and mandates thorough privacy impact assessments before adoption.

Navigating the Regulatory Landscape

While some jurisdictions are beginning to develop AI-specific regulations, many are currently relying on existing privacy and data protection laws. The Global Privacy Assembly has affirmed that current data protection principles apply to generative AI. However, the rapid evolution of AI technology often outpaces legislative efforts, creating a regulatory gap. New Zealand, for instance, is adopting a "light-touch" approach, with existing legislation expected to cover AI use, though concerns about a lack of robust regulation persist.

Global Responses and Future Directions

Globally, data protection authorities are applying existing laws to AI, emphasising principles like privacy by design, purpose specification, and transparency. In Europe, the EU AI Act is a significant step towards comprehensive AI governance. Meanwhile, in New Zealand, the OPC’s guidance encourages senior leadership approval, necessity assessments, transparency, and human review of AI outputs. The legal system is actively grappling with how to harness AI’s benefits while mitigating its risks, ensuring the integrity of justice and protecting individual rights in this evolving technological landscape.
