3

Evidence Guide

Clinical Effectiveness

Concrete examples of what evidence looks like for each indicator in this domain. Use this alongside your self-assessment.

Version 1.0 - First Edition

3.1

Evidence-Based Practice

Our clinical care is informed by current evidence, clinical guidelines, and relevant college standards.

Established Evidence

  • A maintained list of the clinical guidelines, college position statements, and standards relevant to the practice's scope of services - including the issuing body, version/date, and where the guideline can be accessed
  • Evidence that clinicians can access these guidelines (e.g., college member logins, journal subscriptions, bookmarked links on clinical workstations, or physical copies in consulting rooms)
  • The list has been reviewed in the past 12 months and reflects the current version of each guideline

Minimum for Developing

  • Clinicians can name the key guidelines relevant to their practice but there is no consolidated list, and the practice has not verified that the versions in use are current

Excelling

  • The guidelines list is reviewed at least annually, with a nominated person responsible for checking for updates and circulating changes to the clinical team

Common Pitfalls

  • Relying solely on individual clinicians' awareness without any practice-level record - when a clinician leaves, the knowledge leaves with them
  • Listing college standards from fellowship training without checking whether they have been superseded

Established Evidence

  • A defined process for monitoring guideline changes - this may be college email alerts, journal table-of-contents subscriptions, attendance at college conferences, or a nominated clinician who reviews updates
  • At least one example from the past two years where a guideline change led to a change in clinical practice, with documentation of what changed and when
  • CPD activities that demonstrate engagement with current evidence (e.g., conference attendance, online modules, journal club participation)

Minimum for Developing

  • Clinicians stay current through their own CPD but there is no practice-level mechanism for disseminating guideline changes to the team
  • Updates are ad hoc and depend on individual initiative

Excelling

  • Guideline changes are discussed at clinical meetings, with documented decisions about whether and how they will be adopted into the practice's protocols

Common Pitfalls

  • Assuming that college membership alone keeps clinicians current - many college communications go unread
  • No mechanism for translating a clinician's individual awareness of a guideline change into an actual change in practice workflow or protocols

Established Evidence

  • Clinical records demonstrate that where a clinician's management departs from a published guideline, the rationale is documented in the patient's record
  • Clinicians understand the difference between a legitimate clinical decision to deviate from a guideline (based on patient circumstances, comorbidities, or preferences) and simply being unaware that a guideline exists
  • Examples of clinical notes that record phrases such as "guideline recommends X, however in this patient Y is preferred because..."

Minimum for Developing

  • Clinicians are aware that departures from guidelines should be documented but this is inconsistent in practice
  • Documentation, where it exists, may record the decision but not the reasoning

Excelling

  • The practice conducts periodic chart audits to check that clinically significant departures from guidelines are documented with rationale, and feeds findings back to the clinical team

Common Pitfalls

  • Treating guideline adherence as binary - either following or ignoring - rather than recognising that well-reasoned clinical variation is expected and defensible when documented
  • Assuming medicolegal safety without any written trail of reasoning - "I would have explained it in court" is not a substitute for a contemporaneous record

Established Evidence

  • Confirmation that each practitioner has met their specialist college CPD requirements for the current or most recent cycle - college portal printout, completion certificate, or CPD summary
  • Evidence that the practice supports CPD through funded study leave, paid conference attendance, or allocated non-clinical time for professional development
  • CPD activities span a mix of clinical and non-clinical domains (e.g., clinical updates, practice management, cultural safety, communication skills)

Minimum for Developing

  • Clinicians report they are meeting CPD requirements but the practice has not sighted confirmation or does not track compliance centrally

Excelling

  • CPD records are tracked by the practice, with renewal dates on a compliance calendar and proactive reminders sent before deadlines
  • CPD topics are aligned with areas identified through clinical audit, incident review, or outcome monitoring

Common Pitfalls

  • Assuming that AHPRA registration renewal confirms CPD compliance - AHPRA requires a declaration but does not routinely audit college CPD records unless there is a concern
  • Practice provides no protected time for CPD, treating it entirely as the individual clinician's responsibility outside working hours

3.2

Informed Consent

We obtain valid, informed consent before providing treatment, and our consent processes would withstand external scrutiny.

Established Evidence

  • A documented consent process that is proportionate to the risk, complexity, and invasiveness of the treatments or procedures provided by the practice
  • Evidence that the practice has considered what level of consent documentation is appropriate for different service types (e.g., verbal consent with a file note for a medication review versus a signed written consent form for a procedure under sedation)
  • Consent policy or guideline document that staff can reference

Minimum for Developing

  • Consent is obtained but the process is informal and varies between clinicians, with no written policy or guideline describing what is expected

Excelling

  • The consent process is reviewed periodically (e.g., after a complaint, a near-miss, or a change in services offered) and updated to reflect current medicolegal standards and college guidance

Common Pitfalls

  • Applying a one-size-fits-all consent form to everything from a blood test to an excision under local anaesthetic - either over-documenting low-risk activities or under-documenting high-risk ones
  • No consent process at all for non-procedural clinical decisions that still carry material risk (e.g., starting immunosuppressive therapy, watchful waiting for a potentially serious diagnosis)

Established Evidence

  • Consent is obtained by the practitioner who will perform the procedure or treatment, or by a practitioner who is qualified to do so and can explain the risks, benefits, alternatives, and answer questions
  • Clinical records identify who obtained consent and when
  • Practice policy explicitly states that consent is not delegated to administrative staff, nurses, or unqualified persons

Minimum for Developing

  • The treating clinician generally obtains consent but there are instances where consent forms are pre-signed or the consent discussion is perfunctory

Excelling

  • Where registrars or trainees obtain consent, there is a supervision process that confirms they have the competence to explain the procedure and its risks - this is documented or part of the training agreement

Common Pitfalls

  • Reception staff handing patients a consent form to sign in the waiting room before the clinician has discussed the procedure - this does not constitute informed consent regardless of what the form says
  • Consent obtained by a practitioner who is not performing the procedure and cannot meaningfully discuss the surgical approach, specific risks, or alternatives

Established Evidence

  • Consent discussions include material risks - that is, risks that a reasonable person in the patient's position would want to know about, even if they are rare
  • Clinical records document the specific risks discussed, not just "risks and benefits explained"
  • The practice has a procedure-specific list of material risks for the most common procedures it performs, used as a prompt during consent discussions

Minimum for Developing

  • Clinicians are aware of Rogers v Whitaker in principle but risk disclosure is inconsistent, and clinical records often contain only generic statements about consent

Excelling

  • Consent documentation is tailored to the individual patient's circumstances - for example, documenting that a risk is particularly relevant because of the patient's occupation, lifestyle, or comorbidities
  • Consent processes are reviewed with reference to recent medicolegal decisions or AHPRA panel findings

Common Pitfalls

  • Recording "informed consent obtained" in the notes without specifying which risks were discussed - this provides almost no medicolegal protection
  • Disclosing only common risks and omitting rare but serious risks that a reasonable patient would consider material (the exact situation Rogers v Whitaker addressed)

Established Evidence

  • A signed consent form is used for all significant procedures, including the procedure name, the risks discussed, alternatives considered, and the patient's acknowledgement
  • The form is signed before the procedure commences - not during, after, or retrospectively
  • Completed consent forms are filed in or linked to the patient's clinical record

Minimum for Developing

  • Consent forms exist but are generic (e.g., "I consent to a procedure") without recording the specific risks or alternatives discussed

Excelling

  • Consent forms are procedure-specific, listing the material risks for that procedure as a checklist or prompt, and are reviewed and updated periodically to reflect current practice
  • The practice retains copies of superseded consent form versions for medicolegal reference

Common Pitfalls

  • Consent form signed on the day of the procedure moments before it commences, giving the patient no genuine opportunity to consider their decision
  • Using a single generic consent form across all procedures, which fails to document the specific risks discussed for each procedure type

Established Evidence

  • A documented process for obtaining consent from patients who require additional support: patients from culturally and linguistically diverse (CALD) backgrounds, patients with cognitive impairment, patients who are minors, and patients with communication disabilities
  • Evidence of interpreter use for consent discussions where English proficiency is limited - this means professional interpreter services, not family members interpreting
  • The practice can identify who has legal authority to consent on behalf of a patient who lacks capacity (e.g., guardian, medical treatment decision-maker, parent) and understands the difference between a substitute decision-maker and a family member

Minimum for Developing

  • The practice manages additional-needs patients on a case-by-case basis but has no written process and has not trained staff on the legal framework for substitute consent

Excelling

  • Staff have completed training on consent for patients with cognitive impairment, and the practice has a quick-reference guide for identifying substitute decision-makers under relevant state/territory legislation
  • Interpreter services are pre-booked for known CALD patients rather than arranged reactively

Common Pitfalls

  • Using a family member (especially a child) as an interpreter for consent discussions - this is inappropriate for clinical consent and may not satisfy legal requirements
  • Assuming a family member present in the room has authority to consent on behalf of a patient who lacks capacity, without verifying their legal status as a guardian or decision-maker

Established Evidence

  • For elective procedures, patients are given written information about the procedure and its risks in advance of the consent discussion, allowing them to consider their options before the day of the procedure
  • Consent and treatment do not routinely occur in the same appointment for significant elective procedures unless this is the patient's informed preference and is documented as such
  • Patients are explicitly invited to ask questions and are given time to do so without feeling rushed

Minimum for Developing

  • Clinicians provide verbal information but patients are often consented and treated in the same visit without clear documentation that this was the patient's preference

Excelling

  • Written patient information sheets specific to common procedures are routinely provided in advance, and the consent discussion references these materials
  • The practice tracks whether patients had a cooling-off period between consent and procedure

Common Pitfalls

  • Booking an elective procedure and consent discussion for the same appointment as a matter of scheduling convenience rather than clinical appropriateness - the patient may feel that declining is not a genuine option
  • Not documenting that the patient was offered time to consider when consent and treatment occur on the same day

3.3

Appropriate Use of Investigations

We request investigations that are clinically indicated, and we have reliable systems for managing results.

Established Evidence

  • Investigation requests are based on clinical indication and reference to current evidence, rather than routine panels or standing orders that have not been reviewed
  • Where the practice uses standard investigation sets (e.g., pre-operative bloods, initial workup panels), these have been reviewed against current guidelines and reflect evidence-based practice
  • Clinical records document the reason for requesting investigations, particularly where the request is non-standard or could be questioned on audit

Minimum for Developing

  • Clinicians order investigations based on their clinical judgement but the practice has not reviewed standing order sets or routine panels against current evidence
  • There is no documentation of the clinical rationale for investigation requests in the patient record

Excelling

  • The practice periodically audits its investigation ordering patterns, looking for low-value care (investigations unlikely to change management) and comparing against Choosing Wisely recommendations or college-specific guidance

Common Pitfalls

  • Continuing to order investigations "because we always have" without reviewing whether evidence supports them for the clinical situation - particularly routine pre-operative bloods in low-risk patients
  • Ordering defensive investigations to reduce perceived medicolegal risk rather than to inform clinical decision-making

Established Evidence

  • A defined system for tracking that requested investigations have been received - this may be an electronic inbox with outstanding results flagging, a manual log, or a PMS-based tracking feature
  • The system identifies investigations that have been requested but not returned within the expected timeframe (e.g., pathology results not received within 5 business days, imaging not received within 2 weeks)
  • A nominated person or role is responsible for monitoring the tracking system daily
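A tracking system like the one described above can be as simple as a script run over exported request data. The sketch below is illustrative only: the record fields and thresholds are assumptions, not an actual PMS interface, and the windows approximate the examples above in calendar days (the "5 business days" for pathology becomes roughly 7 calendar days).

```python
from datetime import date

# Illustrative sketch: field names and thresholds are assumed, not a real
# PMS schema. Calendar days approximate the business-day examples above.
EXPECTED_DAYS = {"pathology": 7, "imaging": 14}

def overdue_requests(requests, today):
    """Return requests with no result received within the expected window.

    Each request is a dict like:
    {"patient": "...", "type": "pathology",
     "requested": date(...), "received": date(...) or None}
    """
    flagged = []
    for r in requests:
        if r["received"] is not None:
            continue  # result already back
        limit = EXPECTED_DAYS.get(r["type"], 14)
        if (today - r["requested"]).days > limit:
            flagged.append(r)
    return flagged

requests = [
    {"patient": "A", "type": "pathology", "requested": date(2024, 3, 1), "received": None},
    {"patient": "B", "type": "imaging", "requested": date(2024, 3, 10), "received": None},
    {"patient": "C", "type": "pathology", "requested": date(2024, 3, 12), "received": date(2024, 3, 14)},
]
print([r["patient"] for r in overdue_requests(requests, date(2024, 3, 15))])  # → ['A']
```

The nominated person would review this flagged list daily; the point is that "not yet returned" becomes a visible queue rather than something each clinician must remember.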

Minimum for Developing

  • Results arrive into the clinical system but there is no active tracking of outstanding results - the practice relies on individual clinicians remembering what they ordered

Excelling

  • The tracking system generates automated alerts for overdue results and includes escalation steps (e.g., contact the laboratory, contact the patient to confirm the test was done)

Common Pitfalls

  • Assuming that if a result is not received, the patient simply did not attend for the test - the result may have been sent to the wrong provider, lost in transmission, or filed without review
  • No system at all for tracking outstanding results, meaning a result that never arrives - whether the test was not performed or the report went astray - is never identified as missing

Established Evidence

  • All investigation results - including normal results - are reviewed by the requesting clinician or a delegated clinician before being filed or actioned
  • The clinical system shows who reviewed each result and when (e.g., electronic sign-off, initialled hard copy, or a documented review workflow)
  • Results are not filed by administrative staff without clinical review

Minimum for Developing

  • Results are generally reviewed by a clinician but there is no consistent process for documenting who reviewed them and when
  • Normal results are sometimes auto-filed or filed by staff without clinician review

Excelling

  • The practice audits results review turnaround time (e.g., 95% of results reviewed within 2 business days of receipt) and takes action if delays are identified
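A turnaround audit of this kind can be computed directly from exported result data. This is a minimal sketch under stated assumptions: the field names are hypothetical, and calendar days stand in for business days for simplicity.

```python
from datetime import date

# Sketch only: field names are assumed, and calendar days are used in
# place of business days for simplicity.
def within_target_rate(results, target_days=2):
    """Share of reviewed results signed off within target_days of receipt."""
    reviewed = [r for r in results if r["reviewed"] is not None]
    if not reviewed:
        return 0.0
    on_time = sum(1 for r in reviewed
                  if (r["reviewed"] - r["received"]).days <= target_days)
    return on_time / len(reviewed)

results = [
    {"received": date(2024, 5, 1), "reviewed": date(2024, 5, 2)},
    {"received": date(2024, 5, 1), "reviewed": date(2024, 5, 6)},
    {"received": date(2024, 5, 3), "reviewed": date(2024, 5, 3)},
    {"received": date(2024, 5, 3), "reviewed": None},  # not yet reviewed
]
print(f"{within_target_rate(results):.0%} reviewed within 2 days")
```

A practice would compare the computed rate against its stated target (e.g., 95%) and investigate when it falls short; unreviewed results are excluded here but would themselves warrant follow-up.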

Common Pitfalls

  • Normal results filed without review - a result may appear normal on the printed report but be clinically significant in the context of the patient's presentation or previous results
  • No documentation of review, meaning if a missed result leads to a complaint, the practice cannot demonstrate that any clinician actually looked at it

Established Evidence

  • A defined process for identifying, escalating, and acting on abnormal or critical results, including who is responsible for each step
  • Evidence that significant abnormal results are communicated to the patient in a timely manner, with documentation of what was communicated, when, and by whom
  • Critical results (e.g., results flagged as urgent by the laboratory or reporting radiologist) have a separate escalation pathway with a shorter response timeframe

Minimum for Developing

  • Abnormal results are generally actioned by the treating clinician but there is no defined escalation pathway and no documentation of how or when the patient was informed

Excelling

  • The practice logs all critical results and reviews the log periodically to confirm that response times are meeting the defined targets
  • Near-misses (delayed action on abnormal results) are reviewed through the incident reporting system

Common Pitfalls

  • Relying on the patient to follow up on their own results - the obligation to communicate abnormal results rests with the ordering clinician, not the patient
  • No defined timeframe for actioning critical results, meaning "urgent" results may sit in a queue alongside routine results

Established Evidence

  • A written protocol for managing investigation results when the requesting clinician is on leave, ill, or has left the practice
  • A nominated covering clinician is identified and has access to the relevant patient records and result systems
  • The arrangement is communicated to staff so they know who to escalate results to during the absence

Minimum for Developing

  • Results accumulate during clinician absence and are reviewed only when the clinician returns, with no covering arrangement in place

Excelling

  • The covering arrangement is tested - the practice confirms that the covering clinician can actually access the relevant inbox/system and is reviewing results in real time during the absence

Common Pitfalls

  • A locum or covering clinician is nominated but does not have access to the clinical system, the results inbox, or the patient records needed to interpret results in context
  • A clinician departs the practice entirely and their outstanding results are never reviewed because no one inherits responsibility

Established Evidence

  • The clinical record documents what action was taken on each investigation result - whether the result was normal and noted, abnormal and the patient was contacted, or the result triggered a change in management
  • Documentation includes the date the result was reviewed, the action taken, and (where relevant) the communication with the patient
  • Entries are made contemporaneously, not retrospectively

Minimum for Developing

  • Actions are taken on results but documentation is inconsistent - some results are annotated in the record, others are simply filed with no note

Excelling

  • The practice uses a structured results workflow in its clinical software that automatically prompts for an action note when a result is reviewed, creating an auditable trail

Common Pitfalls

  • Filing a result without any annotation - if the result is later questioned, there is no evidence that it was reviewed or that any clinical decision was made
  • Documenting "noted" without recording what the noting led to - particularly for borderline or equivocal results that require follow-up or repeat testing

3.4

Referral Management

We make and receive referrals effectively, and we communicate clearly with referring and receiving practitioners.

Established Evidence

  • Defined triage categories with target timeframes for each (e.g., urgent - seen within 2 weeks, semi-urgent - seen within 4-6 weeks, routine - seen within 3 months)
  • Evidence that incoming referrals are clinically triaged by a qualified clinician, not just booked in order of receipt
  • The triage category and target timeframe are recorded against each referral, and the referring practitioner or patient is informed of the expected wait time

Minimum for Developing

  • Referrals are loosely prioritised (e.g., marked "urgent" referrals are booked sooner) but there are no defined categories, target timeframes, or documented triage process

Excelling

  • Triage timeframes are benchmarked against college or specialty-specific standards, and the practice monitors its performance against these targets with regular reporting

Common Pitfalls

  • Triage performed by reception staff based on the referral letter's tone or the GP's use of the word "urgent" rather than clinical assessment by a qualified clinician
  • No communication to the referring GP or patient about expected wait time, leading to complaints and re-referrals

Established Evidence

  • A defined process for handling referrals where the clinical information is insufficient for safe triage - including a method for contacting the referrer to request additional information
  • Evidence that the practice does not simply place inadequately documented referrals at the bottom of the waitlist or discard them
  • Template or standard wording used to request additional information from referring GPs

Minimum for Developing

  • Staff recognise when referrals lack adequate information but there is no standard process - some are followed up, others are booked regardless or left in a pile

Excelling

  • The practice tracks the frequency of insufficient referrals and feeds this data back to referring GPs through education or communication (e.g., a referral guide published on the practice website)

Common Pitfalls

  • Accepting a referral with minimal clinical information and triaging as routine by default - a patient who appears routine may be urgent once the full clinical picture is known
  • Rejecting or returning referrals without following up, creating a gap in care where neither the GP nor the specialist has taken responsibility for the patient

Established Evidence

  • When referring patients onward (to other specialists, allied health, imaging, or hospital), the outgoing referral includes sufficient clinical context: the reason for referral, relevant history, current medications, investigations already performed, and what question is being asked
  • Examples of outgoing referral letters (de-identified) demonstrate structured, complete communication
  • A template or standard structure is used for outgoing referrals to ensure consistency

Minimum for Developing

  • Outgoing referrals vary in quality - some are detailed, others are a single line with a diagnosis and no clinical context

Excelling

  • The practice has a referral template that prompts for all necessary information and includes relevant investigation results as attachments
  • Receiving practitioners are asked for feedback on referral quality

Common Pitfalls

  • Sending a one-line referral ("Please see for opinion") that forces the receiving practitioner to start from scratch, duplicating investigations and delaying care
  • Not including current medications in outgoing referrals, particularly for patients on anticoagulants, immunosuppressants, or other medications that affect management

Established Evidence

  • A system for tracking Medicare referral validity periods for each patient, with alerts or reminders when referrals are approaching expiry
  • A defined process for communicating with patients and their referring GP when a new referral is required to continue specialist care
  • Staff understand the Medicare referral rules relevant to the practice's specialty, including the distinction between standard and indefinite referrals

Minimum for Developing

  • Referral expiry is checked at booking or at the point of billing but there is no proactive tracking - expiry is identified only when the patient presents for an appointment

Excelling

  • The practice generates a regular report of referrals approaching expiry and contacts patients and GPs proactively, well before the expiry date
  • Referral validity data is used to identify patients who may be lost to follow-up
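A proactive expiry report of this kind reduces to filtering referral records by date. The sketch below is illustrative: the record fields and the 30-day lead time are assumptions, not Medicare rules, and indefinite referrals (no expiry date) are simply skipped.

```python
from datetime import date, timedelta

# Sketch only: referral fields and the 30-day lead time are assumptions,
# not Medicare rules. Indefinite referrals have expiry=None and are skipped.
def expiring_referrals(referrals, today, lead_days=30):
    """Referrals expiring within lead_days of today, soonest first."""
    cutoff = today + timedelta(days=lead_days)
    due = [r for r in referrals
           if r["expiry"] is not None and today <= r["expiry"] <= cutoff]
    return sorted(due, key=lambda r: r["expiry"])

referrals = [
    {"patient": "A", "expiry": date(2024, 7, 10)},
    {"patient": "B", "expiry": None},          # indefinite referral
    {"patient": "C", "expiry": date(2024, 6, 20)},
    {"patient": "D", "expiry": date(2024, 9, 1)},
]
for r in expiring_referrals(referrals, date(2024, 6, 15)):
    print(r["patient"], r["expiry"])  # C then A
```

Run monthly, the output becomes the contact list for patients and GPs, well before any consultation risks becoming unbillable.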

Common Pitfalls

  • Discovering at the time of billing that the referral has expired, leading to unbillable consultations or retrospective requests to GPs for backdated referrals
  • Not understanding the difference between a 12-month referral and an indefinite referral, or the specific rules that apply to the practice's specialty

Established Evidence

  • The practice maintains an active waitlist that identifies each patient's triage category, date of referral, and target timeframe for being seen
  • The waitlist is reviewed regularly (at least monthly) to identify patients who have exceeded their target timeframe
  • There is a process for re-triaging patients when new clinical information is received (e.g., a GP phones to advise that a patient's condition has deteriorated)

Minimum for Developing

  • A waitlist exists but is not actively managed - patients are simply booked in chronological order as appointments become available, without reference to triage category or target timeframes

Excelling

  • Waitlist data is reported to the clinical team periodically, including average wait times by triage category and the number of patients who exceeded their target timeframe
  • The practice contacts long-wait patients to confirm they still require the appointment and to reassess urgency
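Waitlist reporting of the kind described above can be generated from the waitlist data itself. This is a hedged sketch: the triage categories and target days mirror the examples in 3.4, but the record fields are hypothetical.

```python
from datetime import date
from collections import defaultdict

# Sketch: categories and target days mirror the triage examples above;
# record fields are illustrative, not a real PMS schema.
TARGET_DAYS = {"urgent": 14, "semi-urgent": 42, "routine": 90}

def waitlist_report(waitlist, today):
    """Per-category average wait (days) and count of patients past target."""
    grouped = defaultdict(lambda: {"waits": [], "breaches": 0})
    for p in waitlist:
        wait = (today - p["referred"]).days
        entry = grouped[p["category"]]
        entry["waits"].append(wait)
        if wait > TARGET_DAYS[p["category"]]:
            entry["breaches"] += 1
    return {cat: {"avg_wait": sum(e["waits"]) / len(e["waits"]),
                  "breaches": e["breaches"]}
            for cat, e in grouped.items()}

waitlist = [
    {"category": "urgent", "referred": date(2024, 6, 1)},
    {"category": "urgent", "referred": date(2024, 6, 20)},
    {"category": "routine", "referred": date(2024, 2, 1)},
]
print(waitlist_report(waitlist, date(2024, 7, 1)))
```

The breach counts are the actionable output: each breached patient is a candidate for contact, re-triage, or escalation rather than silent accumulation on the list.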

Common Pitfalls

  • Patients on the waitlist for extended periods without any contact from the practice - their clinical situation may have changed, they may have been seen elsewhere, or they may have deteriorated
  • No mechanism for a GP to escalate a patient already on the waitlist when their condition changes

3.5

Communication with Referring Practitioners

We provide timely, useful reports to referring practitioners so they can continue the patient's care.

Established Evidence

  • A defined turnaround target for sending reports to referring GPs - typically within one week for standard consultations and within 48 hours for urgent findings or significant management changes
  • Data showing actual report turnaround times (e.g., a report from the clinical system showing average days between consultation date and report sent date)
  • Evidence that the practice monitors compliance with its turnaround targets

Minimum for Developing

  • Reports are generally sent but there is no defined timeframe and no monitoring - some reports may be delayed by weeks or not sent at all

Excelling

  • Report turnaround time is tracked as a key performance indicator and reviewed monthly, with action taken when turnaround consistently exceeds the target
  • Same-day preliminary communication (e.g., a phone call or brief secure message) is provided for urgent findings

Common Pitfalls

  • Dictated reports sitting in a transcription queue for weeks - the dictation was done promptly but the report was never sent
  • Reports generated within the clinical system but never transmitted to the GP because the secure messaging system was misconfigured or the GP's details were incorrect

Established Evidence

  • Reports routinely include: clinical findings, diagnosis or differential diagnosis, management plan, medications prescribed or changed, procedures performed, planned follow-up, and any actions required by the referring GP
  • De-identified sample reports demonstrate structured, comprehensive content
  • A report template is used to ensure all required elements are included consistently

Minimum for Developing

  • Reports contain most of the required elements but are inconsistent - some reports are thorough, others omit the management plan or fail to specify what the GP should do

Excelling

  • Report templates are reviewed periodically and updated to reflect feedback from referring GPs or changes in what information is most useful
  • Reports include relevant investigation results as embedded data, not just references

Common Pitfalls

  • Reports that describe the consultation but do not specify what the GP needs to do - the GP is left guessing about follow-up responsibility
  • Medication changes not clearly flagged in the report, leading to the GP being unaware that a medication was started, stopped, or dose-adjusted

Established Evidence

  • Reports are written in clear, structured language that is useful to a GP - not dense sub-specialty jargon aimed at fellow specialists
  • Specialty-specific abbreviations are either avoided or explained on first use
  • Reports are formatted with headings, paragraphs, and clear delineation of sections (not a single block of unstructured text)

Minimum for Developing

  • Reports are legible and generally comprehensible but use specialty jargon without explanation, and formatting is inconsistent

Excelling

  • The practice has sought feedback from referring GPs on report clarity and usefulness, and has made changes based on that feedback
  • Reports are tailored to the GP audience - key actions are highlighted or summarised at the top for quick reference

Common Pitfalls

  • Writing reports as if the reader is a fellow specialist - using abbreviations, grading systems, and classification schemes that a GP may not recognise without context
  • Long, narrative reports with no structure, where the GP must read the entire letter to find the key information

Established Evidence

  • When a patient is discharged from the specialist's care back to the GP, a discharge letter is sent that includes: a summary of the episode of care, the final or current diagnosis, ongoing management recommendations, current medications, and red flags that should prompt re-referral
  • The discharge letter clearly communicates that the patient is being discharged from specialist care and that ongoing management responsibility is transferring to the GP
  • De-identified examples of discharge letters demonstrate structured, actionable communication

Minimum for Developing

  • Patients are discharged from care but the discharge communication is a standard consultation letter that does not clearly state the patient is being discharged or provide a management summary

Excelling

  • A dedicated discharge letter template is used that includes all required elements and a specific section for "when to re-refer"
  • Discharge letters are sent proactively, not only when the GP inquires about the patient's status

Common Pitfalls

  • A patient is effectively discharged from specialist care simply by not being given a follow-up appointment - no communication is sent to the GP, who may assume the specialist is still managing the patient
  • Discharge letter that says "no further follow-up required" without providing the GP with guidance on ongoing monitoring or when re-referral would be appropriate

Established Evidence

  • The practice has a system for confirming that reports have actually been sent and received - not just drafted or dictated
  • For electronic transmission (secure messaging, clinical system upload), there is a delivery confirmation or sent log that can be checked
  • For reports sent by other means (fax, post), there is a record of transmission (fax confirmation, postage log)

Minimum for Developing

  • Reports are sent but the practice has no system for confirming delivery - if a report fails to transmit, no one would know

Excelling

  • The practice runs a periodic check (e.g., monthly) for consultations where no outgoing report exists in the system, and follows up on any gaps
  • Failed transmissions are identified and re-sent within 24 hours

Common Pitfalls

  • Reports drafted in the clinical system but never transmitted because the clinician did not complete the send step or the secure messaging connection failed silently
  • Assuming that because a report was faxed, it was received - fax transmission failures are common and often go unnoticed without a confirmation step
3.6

Continuity and Coordination of Care

We coordinate care within our practice and with other providers involved in the patient's care.

Established Evidence

  • Where patients are seen by more than one clinician within the practice, the clinical record serves as the primary continuity tool - entries are complete enough that the next clinician can understand the current status, plan, and any outstanding actions
  • Where handovers occur (e.g., registrar to consultant, between partners in a group practice), there is a documented or structured handover process
  • Team meetings, case conferences, or clinical discussions where shared patients are discussed are minuted or noted

Minimum for Developing

  • Clinical records are used for continuity but entries vary in completeness - some clinicians write detailed notes, others write minimal notes that do not support safe handover

Excelling

  • The practice has a structured handover template or checklist used when patients are transferred between clinicians within the practice
  • Internal handover quality is audited periodically

Common Pitfalls

  • Registrar sees the patient and documents a plan, but the supervising consultant is unaware of the consultation or the plan until the next visit - no structured handover occurred
  • Group practice where each partner keeps their own notes in a different style, and there is no standardised approach to documenting the current plan

Established Evidence

  • The clinical record notes the patient's other treating practitioners where relevant (other specialists, allied health, GP), particularly where management plans may interact or conflict
  • Evidence of proactive communication with other providers when care is complex - e.g., shared care letters, multidisciplinary team meeting participation, phone consultations documented in the record
  • The practice avoids prescribing or recommending management that conflicts with other treating practitioners' plans without prior communication

Minimum for Developing

  • Clinicians are generally aware of the patient's other providers but this is not routinely documented, and there is no proactive communication with other providers unless a problem arises

Excelling

  • The practice participates in or initiates multidisciplinary care planning for complex patients, with documented outcomes from these discussions
  • The patient's GP is routinely copied on communication sent to other specialists about the same patient

Common Pitfalls

  • Operating in a silo - prescribing medications without checking what other specialists have prescribed, leading to drug interactions or therapeutic duplication
  • Assuming the GP is coordinating all care and will resolve any conflicts - in practice, the GP may not know what each specialist has recommended until the patient brings it up

Established Evidence

  • A documented process for following up patients who do not attend (DNA) scheduled appointments, including at least one attempt to contact the patient by phone, SMS, or letter
  • For patients where non-attendance carries clinical risk (e.g., patients with active malignancy, patients awaiting urgent investigation results, patients on high-risk medications), the follow-up process includes escalation steps such as contacting the referring GP
  • DNA events are recorded in the patient's clinical record, including the follow-up action taken

Minimum for Developing

  • DNA patients are noted in the appointment book but there is no standard follow-up - some are called back, others are not, depending on staff availability or initiative

Excelling

  • DNA rates are tracked and reported periodically, with analysis of patterns (e.g., particular appointment types, days of the week, or patient demographics with higher DNA rates) and strategies implemented to reduce them
  • The practice distinguishes between low-risk DNAs (routine follow-up) and high-risk DNAs (patient safety concern) and applies different follow-up protocols accordingly

Common Pitfalls

  • Treating a DNA as the patient's choice and taking no further action - the patient may be unwell, confused about the appointment, or facing barriers to attendance
  • Not notifying the referring GP when a high-risk patient fails to attend, leaving a gap in the safety net

Established Evidence

  • A documented plan for managing continuity of care when a practitioner leaves the practice or is absent for an extended period (more than 4 weeks)
  • Patients of the departing or absent practitioner are informed in writing, offered options (transfer to another practitioner within the practice, referral to another specialist, return to GP for re-referral), and given access to their records
  • The plan includes arrangements for outstanding results, incomplete treatment episodes, and patients on waitlists

Minimum for Developing

  • The practice manages practitioner departures reactively - patients find out when they call for an appointment, and there is no proactive communication plan

Excelling

  • The practice has a standard departure/absence protocol that is activated whenever a practitioner gives notice, including a checklist of actions (patient notification, results handover, waitlist review, record access arrangements)

Common Pitfalls

  • Practitioner leaves the practice and their patients' outstanding results, pending referrals, and incomplete treatment plans are orphaned - no one takes responsibility
  • Patients discover their specialist has left only when they call for a follow-up appointment months later

Established Evidence

  • A defined process for managing the transfer of care when patients need to move to another specialist, service, or location - including the provision of a clinical summary, copies of relevant investigations, and clear communication about what has been done and what is outstanding
  • Transfer documentation is sent to the receiving practitioner before or at the time of transfer, not weeks later
  • The patient is informed about the transfer process and given a copy of the transfer summary if they request one

Minimum for Developing

  • Transfers of care happen but are informal - the patient is told to "see Dr X" without a written summary or clinical handover to the receiving practitioner

Excelling

  • The practice follows up with the receiving practitioner or service to confirm that the patient has been seen and that the transfer was successful
  • Transfer of care documentation uses a structured template that includes outstanding actions, current medications, and planned follow-up

Common Pitfalls

  • Patient relocates and requests a transfer of care, but the practice provides only raw records (hundreds of pages of clinical notes) without a clinical summary - the receiving practitioner cannot efficiently identify the current status and plan
  • No follow-up to confirm the receiving practitioner has actually received the transfer documentation or accepted the patient
3.7

Clinical Outcome Monitoring

We track and review our clinical outcomes to understand how our patients are doing.

Established Evidence

  • The practice has identified one or two clinical outcome measures that are meaningful for the services it provides and is actively collecting data on them
  • The choice of measures is documented, with a rationale for why they were selected (e.g., complication rates for a surgical practice, disease activity scores for a rheumatology practice, treatment completion rates for an oncology practice, readmission rates for a procedural practice)
  • Data collection is systematic and ongoing, not a one-off exercise

Minimum for Developing

  • The practice acknowledges the importance of outcome monitoring but has not yet identified specific measures or begun systematic data collection

Excelling

  • Multiple outcome measures are tracked, covering both clinical outcomes (e.g., complication rates, remission rates) and process outcomes (e.g., time to treatment, follow-up completion rates)
  • Outcome measures are aligned with college or specialty-specific quality indicators where these exist

Common Pitfalls

  • Attempting to measure everything and measuring nothing well - a small number of meaningful measures that are reliably collected is better than a large number that are inconsistent
  • Measuring only process metrics (e.g., number of patients seen) rather than clinical outcomes (e.g., patient-reported outcomes, complication rates)

Established Evidence

  • Outcome data is reviewed by the clinical team at least annually, with a documented review meeting or summary report
  • The review examines trends over time, not just snapshot data from a single period
  • Where the practice has multiple clinicians, data is reviewed at the practice level (not only individual clinician level), allowing comparison and discussion

Minimum for Developing

  • Outcome data is collected but has not yet been formally reviewed by the clinical team - data sits in a system or spreadsheet without analysis

Excelling

  • Reviews occur more frequently than annually (e.g., quarterly) and are integrated into regular clinical meetings
  • The review includes comparison to published benchmarks, college data, or registry averages where available

Common Pitfalls

  • Collecting data that no one ever looks at - outcome monitoring is only useful if the data is reviewed and discussed
  • Reviewing outcomes only when a complaint or adverse event triggers the review, rather than as a routine part of practice governance

Established Evidence

  • At least one example from the past two years where outcome data led to a defined improvement action - this might be a clinical audit, a change in surgical technique, a protocol update, additional training, or a referral for peer review
  • The improvement action is documented, including what the data showed, what was decided, what was implemented, and whether a subsequent review confirmed improvement
  • The practice demonstrates a close-the-loop approach: identify the issue, take action, check the result

Minimum for Developing

  • Outcome data has been reviewed but no specific actions have been taken in response - the review was informational only

Excelling

  • The practice maintains a register of improvement actions arising from outcome review, with follow-up review dates and documented results
  • Improvement actions are linked back to clinical governance or quality improvement programs

Common Pitfalls

  • Reviewing outcome data, noting an area of concern, and taking no action - this is worse than not reviewing at all, because it demonstrates awareness without response
  • Implementing a change but never checking whether it had the intended effect

Established Evidence

  • Evidence of participation in relevant clinical quality registries (e.g., Australian Orthopaedic Association National Joint Replacement Registry, BreastSurgANZ Quality Audit, Bi-National Colorectal Cancer Audit) where these exist for the practice's specialty
  • Participation in college-sponsored audit or peer review programs, with documentation of engagement (e.g., audit reports submitted, peer review meetings attended)
  • Where participation is mandatory as part of college CPD, compliance is tracked and current

Minimum for Developing

  • The practice is aware of relevant registries or peer review programs but has not yet enrolled or participated

Excelling

  • The practice uses registry data to benchmark its outcomes against national averages and discusses the comparison at clinical meetings
  • Participation extends beyond mandatory requirements - the practice voluntarily engages in additional audit or peer review activities

Common Pitfalls

  • Participating in a registry by submitting data but never reviewing the registry reports or benchmarking data that come back - participation without engagement
  • Assuming that because peer review is not mandatory for the practice's specialty, it is not worth doing - voluntary peer review is valuable in any specialty

Established Evidence

  • Outcome review is treated as a routine part of clinical governance, not a response to complaints, incidents, or medicolegal claims
  • The practice demonstrates a non-punitive, learning-oriented approach to outcome data - individual clinician data (where reviewed) is used for professional development, not performance management in a disciplinary sense
  • Clinicians are willing to discuss their outcomes openly with peers within the practice

Minimum for Developing

  • Outcome data is available but clinicians are reluctant to review or discuss it, or outcome review occurs only when something has gone wrong

Excelling

  • The practice holds regular morbidity and mortality (M&M) meetings or case review sessions where outcomes are discussed constructively, with documented learning points
  • The practice actively promotes psychological safety around outcome discussion - clinicians are encouraged to share cases where outcomes were unexpected, including their own

Common Pitfalls

  • Outcome review used as a weapon - singling out individual clinicians in a blame-oriented way, which drives data hiding rather than transparency
  • Avoiding outcome review entirely because it feels threatening or because "we already know we're doing a good job" - this assumption is untestable without data