What does AHPRA say about AI in Australian healthcare?
AHPRA's position is unambiguous: "Regardless of what technology is used in providing healthcare the practitioner remains responsible for delivering safe and quality care". Practitioners must apply human judgement to any AI output, understand the tools they use (including how they were trained and their limitations), inform patients when AI is involved in their care, and obtain informed consent before any AI tool processes a patient's personal information.
These obligations apply across registered health professions and across all uses of AI — clinical decision support, AI scribes, generative AI for patient communications, and diagnostic AI. TGA approval of an AI tool does not remove a practitioner's responsibility for clinical oversight.
What changed in 2026: TGA's digital scribes ruling
The TGA published a significant clarification on 30 January 2026 about which AI scribing tools are regulated as medical devices.
Under the new clarification:
- AI scribes that only transcribe and translate clinical conversations into written records are not medical devices.
- AI scribes that analyse or interpret clinical conversations — for example by generating a diagnosis, differential diagnosis, or treatment recommendation not explicitly stated by the practitioner — are classified as medical devices and must be in the ARTG.
This matters because many clinics adopted AI scribes without checking whether the product crosses into "medical device" territory. Manufacturers must also monitor for "scope creep", where software updates change the intended purpose. If your scribe's latest version starts generating differential diagnoses, the regulatory classification changes, and so do your compliance obligations.
The TGA's framework is technology-agnostic: regulation is based on intended purpose, not the underlying technology. Software is a medical device if it is intended for "diagnosis, prevention, monitoring, prediction, prognosis, or treatment of a disease, injury, or disability".
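The intended-purpose test above can be made concrete as a per-version review of what the scribe actually produces. The sketch below is illustrative only: the feature names and the `crosses_into_medical_device` rule are assumptions for the example, not the TGA's criteria verbatim.

```python
# Illustrative only: flag when a scribe version's feature set crosses
# from pure transcription into "medical device" territory. Feature
# names are hypothetical, not TGA-defined categories.

# Features that analyse or interpret the clinical conversation,
# rather than merely recording what was said.
INTERPRETIVE_FEATURES = {
    "generates_diagnosis",
    "generates_differential_diagnosis",
    "recommends_treatment",
    "predicts_prognosis",
}

def crosses_into_medical_device(version_features: set[str]) -> bool:
    """True if any feature interprets the conversation instead of
    only transcribing or translating it."""
    return bool(INTERPRETIVE_FEATURES & version_features)

# Compare successive versions to catch "scope creep" on update.
v1 = {"transcription", "translation"}
v2 = v1 | {"generates_differential_diagnosis"}

assert not crosses_into_medical_device(v1)  # transcription only
assert crosses_into_medical_device(v2)      # re-check ARTG status
```

Running a check like this at every vendor update makes scope creep a routine review item rather than a surprise.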
The five AHPRA obligations every practitioner using AI must meet
These are the published professional obligations from AHPRA's AI guidance page:
1. Accountability
Clinical responsibility remains with the practitioner. Full stop. If an AI system recommends a treatment pathway and you follow it, the outcome is your responsibility — not the software vendor's. AHPRA states practitioners must "apply human judgment to any output of AI". Using AI does not shift liability to a technology provider, and TGA approval does not remove your clinical oversight responsibility.
2. Understanding
You cannot simply trust vendor marketing materials. AHPRA expects practitioners to "review the product information about an AI tool including how it's trained and tested on populations, intended use, and limitations". Using an AI tool without understanding how it reaches its conclusions could constitute professional misconduct, even if the recommendation proves correct.
This includes understanding "the inherent bias that can exist within data and algorithms used in AI applications", particularly affecting Aboriginal and Torres Strait Islander people and other diverse populations.
3. Transparency
Patients must know when AI is being used in their care. AHPRA states practitioners "should inform patients and clients about their use of AI and consider any concerns raised". This goes beyond a tick-box disclosure — it requires meaningful communication that helps patients understand AI's role in their specific treatment.
4. Informed consent
AHPRA requires informed consent specifically when AI tools "require input of their personal patient data". This is particularly relevant for AI scribing tools using generative AI. AHPRA also warns there are "criminal implications if consent is not obtained before recording" consultations.
5. Ethical and legal compliance
This is the obligation often missed. Practitioners must ensure privacy law compliance, manage algorithmic bias, and maintain appropriate professional indemnity insurance covering AI-assisted care. Identifiable patient data must not find its way into a public AI tool's learning database — once entered, "personal information becomes public domain".
Australia's Multi-Layered AI Governance Structure
AI compliance in healthcare does not sit with AHPRA alone. Four agencies operate in concert:
- AHPRA sets professional standards for registered practitioners
- TGA evaluates AI systems that qualify as medical devices
- Department of Health shapes overarching policy frameworks
- Office of the Australian Information Commissioner oversees data protection obligations
A single diagnostic AI system might simultaneously fall under TGA device regulations, AHPRA professional standards, and privacy legislation. This creates overlapping compliance obligations that require careful navigation. Practices that establish AI governance committees bridging these regulatory domains are best positioned to manage the complexity.
Practical Compliance Steps for AI Adoption
Document Everything
Maintain detailed records of every AI tool used in your practice, including its purpose, evidence base, known limitations, training data characteristics, and version history. Create audit trails showing how AI outputs influenced clinical decisions and when you chose to override algorithmic recommendations.
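A register like the one described above can be as simple as a structured record per tool plus an audit entry per AI-influenced decision. The sketch below is a minimal illustration, not an AHPRA-mandated schema; every field name, and the "ExampleScribe" product, is hypothetical.

```python
# Minimal sketch of an AI tool register and audit trail.
# All field names are illustrative, not a mandated schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    purpose: str                   # intended use in this practice
    evidence_base: str             # validation the vendor cites
    known_limitations: list[str]
    training_data_notes: str       # populations the tool was trained on
    version_history: list[str] = field(default_factory=list)

@dataclass
class AuditEntry:
    tool: str
    when: date
    ai_output_summary: str
    clinical_decision: str
    overridden: bool               # practitioner rejected the AI output
    override_reason: str = ""

scribe = AIToolRecord(
    name="ExampleScribe",          # hypothetical product
    purpose="Transcription of consultations",
    evidence_base="Vendor validation report, 2025",
    known_limitations=["accent sensitivity", "drug-name errors"],
    training_data_notes="Trained mainly on US-English audio",
)

entry = AuditEntry(
    tool="ExampleScribe",
    when=date(2026, 2, 1),
    ai_output_summary="Draft note suggested medication change",
    clinical_decision="Change rejected; dose kept unchanged",
    overridden=True,
    override_reason="Suggestion inconsistent with examination findings",
)
```

Recording overrides with a reason, as in `AuditEntry`, also feeds directly into the override-protocol reviews described next.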
Establish Clear Override Protocols
Define the circumstances under which practitioners should override AI recommendations. Document these protocols, train your team on them, and review cases where overrides occurred to continuously improve your processes.
Review Vendor Agreements
Most AI vendor contracts limit the vendor's liability, placing the compliance burden squarely on your practice. Review agreements carefully, particularly around data handling, algorithm updates, and performance guarantees. Ensure your professional indemnity insurance explicitly covers AI-related incidents.
Start Small and Scale
Begin with low-risk, high-value applications — AI-assisted appointment scheduling, documentation support, or administrative workflow optimisation. Build organisational confidence and compliance capability before progressing to clinical decision support systems. Each step should include a compliance assessment and team training component.
The Privacy Dimension of AI in Healthcare
AI systems are data-hungry by nature, and healthcare data is among the most sensitive information that exists. The recent Privacy Act amendments — covered in detail in our POLA 2024 breakdown — add complexity, requiring practitioners to balance AI's data requirements with strengthened privacy protections.
Key privacy considerations for AI adoption include:
- Data sovereignty — Where is patient data processed and stored?
- Training data transparency — Was the AI trained on Australian population data, or could it carry biases from other populations?
- Data retention — How long do AI vendors retain patient data, and for what purposes?
- Third-party access — Who else can access data processed through AI systems?
These questions are not theoretical. Practitioners must be able to answer them for every AI tool in their practice. AHCRA's compliance dashboard helps track these obligations across your technology stack, flagging gaps before they become regulatory issues.
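One way to make the four questions above answerable "for every AI tool" is to record an answer per question per tool and flag the gaps. This is a hedged sketch: the question keys mirror the list above, and the example answers are invented.

```python
# Sketch: record answers to the four privacy questions for each AI
# tool and surface the ones a practice cannot yet answer.

PRIVACY_QUESTIONS = (
    "data_sovereignty",       # where is patient data processed/stored?
    "training_transparency",  # trained on Australian population data?
    "data_retention",         # how long does the vendor keep data?
    "third_party_access",     # who else can access processed data?
)

def privacy_gaps(answers: dict[str, str]) -> list[str]:
    """Return the questions still unanswered for a given tool."""
    return [q for q in PRIVACY_QUESTIONS if not answers.get(q, "").strip()]

# Hypothetical answers for one scribing tool.
scribe_answers = {
    "data_sovereignty": "Processed and stored in Sydney (ap-southeast-2)",
    "training_transparency": "",
    "data_retention": "Deleted after 30 days per contract clause",
}

print(privacy_gaps(scribe_answers))
# ['training_transparency', 'third_party_access']
```

An empty gap list for every tool in the register is a simple, auditable target for a practice's AI governance review.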
Ethical Considerations Beyond Compliance
AHPRA's guidelines address fundamental questions about equity in algorithmic medicine. AI systems trained on datasets that underrepresent certain populations may perpetuate or amplify healthcare disparities. A diagnostic algorithm trained primarily on Caucasian skin images may perform poorly for Indigenous Australian patients, for example.
Practitioners must consider not just whether their AI tools work, but for whom they work and why certain populations might be poorly served. This is not merely about avoiding discrimination — it is about ensuring that AI's promise of improved healthcare does not inadvertently create new forms of clinical disadvantage.
CPD Requirements for AI Competency
AHPRA's CPD framework now explicitly recognises digital health competencies as essential professional development. Accumulating AI-specific CPD hours creates a documented record of ongoing education, which can serve as evidence should questions about a practitioner's AI use arise.
Effective CPD in this area should cover:
- Fundamentals of machine learning and algorithmic decision-making
- Data quality assessment and bias recognition
- Practical governance frameworks for clinical AI
- Regulatory updates across AHPRA, TGA, and privacy legislation
- Case studies of AI incidents and near-misses in healthcare
AHCRA offers CPD-accredited courses that translate complex AI compliance concepts into practical strategies healthcare professionals can implement immediately. These courses bridge the knowledge gap between AI's technical capabilities and practitioners' day-to-day compliance needs.
Preparing for the Future of AI Regulation
The trajectory of AI in Australian healthcare points toward deeper integration, not retreat. AI is moving from optional efficiency tool to standard clinical infrastructure, much like electronic health records did over the past decade.
Expect AHPRA's guidelines to evolve to address:
- AI systems that learn from each patient interaction, creating version control challenges
- Multi-modal AI combining imaging, pathology, and clinical notes for diagnostic recommendations
- Autonomous AI systems that operate with minimal human oversight
- AI-generated clinical documentation and patient communications
The practitioners who thrive will be those who develop genuine AI fluency — the ability to critically evaluate AI outputs, understand algorithmic limitations, and maintain clinical judgement while leveraging computational power. AHPRA's emphasis on understanding is not bureaucratic pedantry; it is recognition that safe AI adoption requires practitioners to be informed collaborators rather than passive users.
Proactive governance structures built today determine which practices will seamlessly integrate future AI innovations versus those scrambling to retrofit compliance measures after implementation. The investment in understanding AI compliance now is an investment in your practice's long-term viability.
Registered Nurse & Healthcare Compliance Professional
Justine Coupland is a registered nurse and healthcare compliance professional at AHCRA, with a background in practice management, healthcare IT, and regulatory compliance across Australia.
