Evidence Guide

8. Continuous Improvement

Concrete examples of what evidence looks like for each indicator in this domain. Use this alongside your self-assessment.

Version 1.0 - First Edition

8.1 Quality Improvement Planning

We have a structured, documented approach to identifying priorities and tracking improvement.

Established Evidence

  • A written quality improvement plan exists as a standalone document or structured section within a practice management plan
  • The plan has a version date and a nominated review date within 12 months
  • Evidence that the plan was reviewed at its scheduled review point (dated review notes, updated version, or meeting minutes recording the review)

Minimum for Developing

  • An informal list of improvement intentions exists but is not structured, dated, or scheduled for review
  • The practice can describe what it wants to improve but has not committed this to a document

Excelling

  • The plan is reviewed more frequently than annually (e.g., quarterly), with documented progress notes at each review point
  • The plan includes both short-term operational improvements and longer-term strategic quality goals

Common Pitfalls

  • A plan was written for accreditation or an external review and has not been opened since - this is a static document, not a working quality improvement plan
  • The plan exists only in the principal practitioner's head and has never been shared or documented

Established Evidence

  • Each improvement action in the plan names a specific, measurable goal (e.g., "reduce referral letter turnaround to five business days" rather than "improve communication")
  • Each action has a named responsible person (not "the practice" or "everyone")
  • Each action has a target completion date or review date
  • The plan uses a format that makes these elements visible at a glance (table, spreadsheet, or structured template)

Minimum for Developing

  • Goals are listed but are vague, lack assigned owners, or have no timeframes attached

Excelling

  • Goals include measurable success criteria so the practice can objectively determine whether the action achieved its aim
  • The plan distinguishes between actions that are "in progress," "completed," and "not started"

Common Pitfalls

  • Goals described as activities rather than outcomes - "review our consent process" is an activity, "all patients receive written pre-procedure information at least 24 hours before their appointment" is an outcome
  • Every action is assigned to the practice manager regardless of whether they have authority or capacity to deliver it

Established Evidence

  • The improvement plan explicitly references indicators or domains from the SPQF self-assessment that were rated below the target level
  • The self-assessment results are dated and the improvement plan was created or updated after the assessment
  • There is a traceable link between "we scored Developing on indicator X" and "we have an action to address indicator X"

Minimum for Developing

  • The self-assessment has been completed but the results have not been translated into the improvement plan
  • The practice intends to use the self-assessment to inform planning but has not yet done so

Excelling

  • The improvement plan maps directly to the self-assessment, with each below-target indicator either addressed by a specific action or accompanied by a documented rationale for deferral
  • Successive self-assessments show movement from Developing to Established in areas targeted by the plan

Common Pitfalls

  • The self-assessment and the improvement plan are disconnected documents created by different people at different times with no cross-reference
  • The practice completed the self-assessment as a one-off exercise and did not use the results for anything

Established Evidence

  • Meeting minutes, progress notes, or a tracking document showing that improvement goals were discussed at least once per quarter
  • The review records what progress has been made, what barriers have been encountered, and whether timeframes need adjustment
  • Reviews involve the person responsible for each action, not just the practice manager reading a list

Minimum for Developing

  • Progress is reviewed informally or sporadically - the practice can describe what has happened but there are no dated records of review

Excelling

  • Progress reviews are scheduled in advance as recurring calendar items and are consistently conducted
  • The review process includes brief written status updates for each active goal, enabling trend tracking over time

Common Pitfalls

  • Quarterly reviews are scheduled but routinely cancelled due to clinic pressures - the plan becomes a static document
  • Reviews consist of "we haven't done anything yet" repeated quarter after quarter without any escalation or reprioritisation

Established Evidence

  • The improvement plan or tracking document shows completed actions marked as done, with the completion date recorded
  • For each completed action, a brief outcome statement records what was achieved (e.g., "consent form updated and in use since March 2025 - all clinical staff trained")
  • Completed actions are retained in the plan history rather than deleted, so the practice can demonstrate its improvement trajectory

Minimum for Developing

  • Actions are completed but the plan is not updated - there is no record that the work was done or what it achieved

Excelling

  • Completed actions include a brief evaluation of whether the intended outcome was achieved, not just confirmation that the activity occurred
  • The practice maintains a running record of improvements made, usable for annual reporting or accreditation preparation

Common Pitfalls

  • Completed actions are removed from the plan, so the practice has no evidence of what it has accomplished - the plan only ever shows what is still to do
  • "Completed" means the activity was started, not that it was finished or that it produced the desired result

Established Evidence

  • At least one example of a failed or partially successful improvement action that was reviewed and re-planned
  • The review documents why the action did not succeed (e.g., lack of staff engagement, wrong approach, insufficient time) and what will be done differently
  • The revised approach has its own target date and responsible person

Minimum for Developing

  • The practice acknowledges that some actions have not worked but has not formally reviewed why or documented an alternative approach

Excelling

  • The practice treats unsuccessful actions as learning opportunities and applies those lessons to how it plans future improvements
  • There is evidence of iterative improvement - the second attempt is materially different from the first, not just a repeat

Common Pitfalls

  • Failed actions are quietly dropped from the plan rather than reviewed - the practice pretends they never existed
  • The same action appears in the plan year after year with the same wording and the same lack of progress, with no analysis of why it keeps failing

Established Evidence

  • The improvement plan is stored in a shared location accessible to the practice manager, principal practitioner(s), and relevant team members (shared drive, practice intranet, or displayed in the staff area)
  • Staff are aware the plan exists and can describe at least one current improvement priority
  • More than one person can locate, update, and explain the plan

Minimum for Developing

  • The plan exists but only one person knows where it is or what it contains

Excelling

  • Improvement priorities from the plan are communicated to all staff (e.g., discussed at team meetings, posted on a staff noticeboard, or included in a regular update)
  • Staff contributions to improvement actions are acknowledged and visible

Common Pitfalls

  • The plan lives on the practice manager's personal computer and is inaccessible if that person is away or leaves
  • The plan is "accessible" in theory (on a shared drive) but no one other than the author has ever opened it

Established Evidence

  • The improvement plan shows evidence of prioritisation - not all actions carry equal weight, and those with direct patient safety implications are scheduled first
  • Where the practice has limited capacity, safety-related improvements are progressed ahead of cosmetic or convenience improvements
  • The rationale for prioritisation is documented or can be articulated by the principal practitioner or practice manager

Minimum for Developing

  • Improvement actions are listed without any prioritisation - everything is treated as equally important or is addressed in the order it was identified

Excelling

  • The practice uses a simple risk-based framework to prioritise improvements (e.g., likelihood and consequence assessment)
  • Patient safety improvements are explicitly flagged and tracked separately from operational or amenity improvements

Common Pitfalls

  • The practice's most recent completed improvements are all cosmetic (new signage, waiting room furniture, website redesign) while clinical process gaps remain unaddressed
  • Prioritisation is driven by what is easiest or cheapest rather than what carries the most risk if left unaddressed

Established Evidence

  • A completed self-assessment with a date within the past 24 months
  • Evidence that the self-assessment was conducted thoughtfully (ratings are supported by notes or commentary, not just ticked boxes)
  • If the practice has completed more than one self-assessment, all are retained for comparison

Minimum for Developing

  • The practice completed an initial self-assessment but has not scheduled or planned a repeat assessment
  • More than 24 months have elapsed since the last full self-assessment

Excelling

  • The practice conducts self-assessments annually rather than biennially
  • Successive assessments are compared side by side, with documented commentary on where scores have changed and why

Common Pitfalls

  • The initial self-assessment was completed when the practice first adopted the framework and has never been revisited - the results are now years out of date
  • The repeat assessment is treated as a formality rather than a genuine review - identical ratings are carried forward without re-evaluation

8.2 Internal Audit

We regularly review our own processes against defined standards and act on what we find.

Established Evidence

  • Records of at least two internal audits conducted within the past 12 months
  • Each audit record includes the topic, date, method, sample size or scope, findings, and any actions arising
  • Audits are distinguishable from routine operational checks - they involve systematic review of a sample against defined criteria

Minimum for Developing

  • The practice has conducted one audit in the past 12 months, or has conducted audits in prior years but not in the current year

Excelling

  • The practice conducts three or more audits per year, covering a range of clinical and operational topics
  • Audit topics are planned at the start of each year as part of an annual audit schedule

Common Pitfalls

  • Confusing routine stock checks or equipment inspections with formal audits - an audit requires defined criteria, a sample, and a finding
  • No audits have been conducted because "we don't have time" - a focused clinical record audit of 15 files takes approximately two hours

Established Evidence

  • A documented audit schedule or plan showing topics selected for the current year or cycle
  • The schedule includes at least one clinical topic (e.g., clinical documentation completeness, consent compliance, medication prescribing) and at least one operational topic (e.g., appointment scheduling accuracy, referral turnaround times, credentialing currency)
  • The schedule is set before audits are conducted, not retrospectively created to describe what happened to be reviewed

Minimum for Developing

  • Audits are conducted but topics are chosen ad hoc - there is no forward plan or schedule

Excelling

  • The audit schedule is informed by risk assessment, prior audit findings, incident data, or areas of known variability
  • The schedule is reviewed and adjusted mid-year if new risks or concerns emerge

Common Pitfalls

  • All audits address operational or administrative topics because they are easier - clinical audits are avoided because they feel uncomfortable or are perceived as questioning practitioner competence
  • The same two topics are audited every year without variation, leaving other areas permanently unexamined

Established Evidence

  • A written audit report or summary for each audit conducted, including the criteria used, sample reviewed, findings (both positive and negative), and any actions required
  • Evidence that findings were shared with relevant staff (meeting minutes, email distribution, notice posted in staff area)
  • Findings are stored in a central location and are accessible for future reference

Minimum for Developing

  • Audit findings are known informally but are not documented in a report or summary
  • Results were not shared beyond the person who conducted the audit

Excelling

  • Audit findings are presented at a team meeting with opportunity for discussion and input on improvement actions
  • Reports include positive findings alongside gaps, reinforcing what the practice does well

Common Pitfalls

  • Audit conducted but no written record of findings - the value of the audit is lost when the person who conducted it cannot recall the details six months later
  • Findings are shared only with the principal practitioner and not with the staff whose practice was audited

Established Evidence

  • Each audit report includes an action plan for gaps identified, with responsible persons and target dates
  • Actions arising from audits are entered into the practice improvement plan or a dedicated audit action register
  • There is evidence that at least some audit actions have been completed and their effectiveness reviewed

Minimum for Developing

  • Gaps are identified in audits but no formal actions are documented - improvements are left to individual initiative

Excelling

  • Audit actions are tracked through to completion, and a follow-up review or re-audit is scheduled to confirm the gap has been closed
  • The link between audit findings and subsequent improvements is clearly documented

Common Pitfalls

  • The audit is conducted and written up, but the findings sit in a drawer - no one is assigned to act on them and no follow-up occurs
  • Actions are documented but are too vague to be actionable (e.g., "improve documentation" rather than "update clinical note template to include allergy field and train all staff by 30 June")

Established Evidence

  • The rationale for selecting each audit topic is documented or can be articulated (e.g., "we audited consent documentation because we had two incidents where consent was not clearly recorded")
  • Audit topics correlate with the practice's risk register, incident reports, complaint themes, or areas rated Developing in the self-assessment
  • Topic selection is deliberate rather than arbitrary

Minimum for Developing

  • Audit topics are selected based on what is convenient or familiar rather than what matters most
  • There is no documented rationale for why particular topics were chosen

Excelling

  • The practice maintains a risk-informed audit topic register, with topics ranked by priority and rotated over a multi-year cycle
  • External trends (such as published safety alerts or specialty-specific audit recommendations from colleges) inform topic selection

Common Pitfalls

  • Audit topics are chosen because a template was available rather than because the topic is relevant to the practice's risk profile
  • High-risk clinical processes (e.g., procedural consent, medication management, handover communication) are never audited because they are perceived as the practitioner's domain

Established Evidence

  • At least one audit in the past 12 months examined a clinical process (e.g., documentation of clinical findings, consent processes, prescribing accuracy, clinical handover completeness) or a clinical outcome (e.g., complication rates, re-referral patterns)
  • The clinical audit used defined clinical criteria, not just operational metrics
  • The audit was conducted with the knowledge and involvement of the relevant clinician(s)

Minimum for Developing

  • All audits in the past 12 months were administrative or operational in nature - no clinical process or outcome was systematically reviewed

Excelling

  • Clinical audits are conducted at least twice per year, covering different aspects of clinical care
  • Clinical audit findings lead to changes in clinical practice, not just administrative processes

Common Pitfalls

  • Labelling an administrative check (e.g., "are referral letters filed?") as a clinical audit when no clinical content was evaluated
  • Avoiding clinical audits entirely because the principal practitioner views them as unnecessary or threatening - this is the strongest signal that clinical audit is needed

Established Evidence

  • Evidence that at least one re-audit or follow-up review has been conducted on a topic previously audited
  • The follow-up review compares current performance against the original audit findings to determine whether improvement has been sustained
  • Results of the follow-up are documented and any remaining gaps addressed

Minimum for Developing

  • Previous audits are filed and forgotten - there is no mechanism to revisit findings in future audit cycles

Excelling

  • Re-audit is built into the audit schedule as standard practice - every audit that identifies a gap has a follow-up date recorded
  • The practice can demonstrate sustained improvement over multiple audit cycles in at least one area

Common Pitfalls

  • Conducting the same audit annually but never comparing this year's results with last year's - the audit becomes a routine exercise rather than a measure of improvement
  • Improvement was achieved immediately after the initial audit but has since deteriorated because the issue was not re-checked

Established Evidence

  • A documented audit of medication management processes conducted within the past 24 months
  • The audit covers relevant aspects such as: medication storage conditions, expiry date checking, labelling, prescribing accuracy, medication reconciliation processes, or Schedule 8 (controlled substance) register compliance
  • Findings and any corrective actions are documented

Minimum for Developing

  • The practice holds or administers medications but has not conducted a formal medication audit - checks are informal or limited to expiry date reviews
  • The practice has identified the need for a medication audit but has not yet scheduled it

Excelling

  • Medication audits are conducted annually and cover both storage/handling and clinical prescribing accuracy
  • The audit is benchmarked against national medication safety standards (e.g., NSQHS Standard 4 criteria adapted for specialist practice)

Common Pitfalls

  • Assuming a medication audit is only relevant to practices with a dispensary - any practice that stores sample medications, administers injections, holds emergency drugs, or manages Schedule 8 substances should conduct this audit
  • Checking expiry dates on the shelf without auditing whether prescribing records match what was administered

Established Evidence

  • A documented infection prevention and control (IPC) audit conducted within the past 24 months
  • The audit covers relevant aspects such as: hand hygiene compliance, surface cleaning and disinfection, reprocessing of reusable medical devices (if applicable), sharps management, clinical waste segregation, and PPE availability and use
  • Findings and corrective actions are documented

Minimum for Developing

  • IPC practices are in place but have not been formally audited - the practice assumes compliance without verification
  • An IPC audit is planned but has not yet been conducted

Excelling

  • IPC audits are conducted annually and use a structured tool or checklist aligned with the NHMRC Australian Guidelines for the Prevention and Control of Infection in Healthcare
  • Hand hygiene compliance is audited observationally (not just by checking that hand sanitiser dispensers are stocked)

Common Pitfalls

  • Relying on the cleaning contractor's schedule as evidence of IPC compliance - this demonstrates cleaning occurs, not that it is effective or that clinical IPC practices are followed by staff
  • Not auditing IPC in consulting-only practices where no procedures are performed - hand hygiene and surface cleaning are relevant to every practice that sees patients

Established Evidence

  • A documented audit of health records conducted within the past 24 months, reviewing a sample of patient records against defined criteria
  • Criteria typically include: patient identification on every page or entry, legibility, dated and signed entries, documented allergies, current medication list, documented consent where applicable, and referral and correspondence filing
  • Sample size is sufficient to draw meaningful conclusions (minimum 15-20 records recommended)
  • Findings and corrective actions are documented

Minimum for Developing

  • Records are maintained but have never been formally audited for completeness or quality
  • The practice has identified the need for a records audit but has not yet conducted one

Excelling

  • Health records audits are conducted annually and results are compared year-on-year to track improvement
  • The audit includes both structured data fields and the quality of clinical narrative documentation

Common Pitfalls

  • Auditing only whether records exist rather than whether they contain the required information - a file with a name and date of birth but no clinical notes passes an existence check but fails a quality check
  • Conducting the audit on the principal practitioner's own patients and finding everything satisfactory - audits should include records from all clinicians and be conducted by someone other than the treating clinician where possible

8.3 Data Use and Performance Monitoring

We collect, review, and act on data about our practice's performance.

Established Evidence

  • A documented list of key performance indicators (KPIs) that the practice has chosen to track (typically 5-10 indicators)
  • KPIs are relevant to the practice's specific context and cover a mix of clinical, operational, and patient experience measures
  • Examples include: referral-to-appointment wait time, did-not-attend (DNA) rate, patient satisfaction scores, report turnaround time, incident rate, complaint rate

Minimum for Developing

  • The practice tracks some data informally but has not identified a defined set of KPIs
  • Data is available in the practice management system but no one routinely reviews it

Excelling

  • KPIs are reviewed against targets or benchmarks (internal or external) rather than simply reported
  • The KPI set is reviewed periodically to ensure it remains relevant as the practice evolves

Common Pitfalls

  • Tracking too many indicators, resulting in data overload and no meaningful review of any of them
  • Selecting KPIs based on what the practice management software reports by default rather than what actually matters to the practice's quality and safety

Established Evidence

  • Records showing that KPI data was reviewed at scheduled intervals (at least quarterly, preferably monthly for operational KPIs)
  • Reviews involve both the principal practitioner(s) and the practice manager, not just one or the other
  • Meeting minutes or review notes document the data reviewed and any observations or decisions made

Minimum for Developing

  • KPI data is available but is not reviewed at regular intervals - it is only looked at when a problem is suspected
  • Only the practice manager reviews the data; the principal practitioner is not engaged in performance monitoring

Excelling

  • KPI trends are tracked over time (not just point-in-time snapshots), enabling the practice to identify gradual changes before they become problems
  • KPI review is a standing agenda item in scheduled practice management meetings

Common Pitfalls

  • Data is collected but never reviewed - the practice management system generates reports that no one reads
  • The principal practitioner considers operational data to be "admin" and does not participate in reviews, missing the connection between operational performance and clinical quality

Established Evidence

  • The practice can report its current average waiting time from referral receipt to first appointment, broken down by urgency category if applicable
  • Waiting time data is reviewed at least quarterly
  • Where waiting times exceed the practice's own targets or published guidelines (e.g., ACSQHC recommended timeframes), this is documented and actions to address it are considered

Minimum for Developing

  • The practice has a general sense of its waiting times but does not measure or track them systematically

Excelling

  • Waiting times are tracked by referral category, urgency, and practitioner to identify variation
  • The practice monitors changes in waiting times over time and can demonstrate improvement or stability

Common Pitfalls

  • Measuring time from when the referral is triaged rather than when it is received - this understates the actual patient wait
  • Long waiting times are accepted as inevitable ("we're a busy practice") without analysis of whether capacity, scheduling, or DNA rates are contributing factors

Established Evidence

  • DNA and late cancellation rates are calculated and reviewed at least quarterly
  • The practice can state its current DNA rate and whether it has changed over time
  • Where DNA rates are high (above 10% is a common concern threshold), the practice has investigated contributing factors and implemented strategies to reduce them (e.g., SMS reminders, telephone confirmation, review of booking processes)

Minimum for Developing

  • The practice is aware that DNAs occur but does not calculate or track the rate systematically

Excelling

  • DNA data is analysed by appointment type, day of week, practitioner, and patient demographic to identify patterns and target interventions
  • The practice can demonstrate a reduction in DNA rates following specific interventions

Common Pitfalls

  • Treating DNAs solely as a revenue problem rather than a quality issue - patients who do not attend may have unmet clinical needs, and high DNA rates often indicate barriers to access or poor appointment communication
  • Not following up patients who DNA to check on their clinical status, particularly for urgent or time-sensitive referrals

Established Evidence

  • Patient feedback (surveys, complaints, compliments, online reviews) is aggregated and reviewed at least annually
  • Feedback themes are identified and documented
  • Feedback findings are cross-referenced with the improvement plan - themes that indicate systemic issues are translated into improvement actions

Minimum for Developing

  • Patient feedback is received and responded to individually but is not aggregated or reviewed for themes

Excelling

  • Patient feedback is integrated with other data sources (incidents, complaints, KPIs) to build a holistic picture of practice performance
  • Changes made in response to patient feedback are communicated back to patients (closing the feedback loop)

Common Pitfalls

  • Patient feedback consists solely of a suggestion box that no one empties - or an online survey with a 2% response rate that cannot generate meaningful findings
  • Positive feedback is celebrated but critical feedback is dismissed or explained away rather than investigated

Established Evidence

  • An annual (or more frequent) review of all recorded incidents and near misses, looking for patterns, recurring themes, and trends over time
  • The review is documented, including the number of incidents by category, any themes identified, and any actions taken in response
  • The review is conducted by or presented to the principal practitioner(s) and practice manager

Minimum for Developing

  • Incidents are recorded and managed individually but are not reviewed in aggregate - patterns may exist but have not been identified

Excelling

  • Incident data is reviewed quarterly, with trend analysis comparing current periods to prior periods
  • Incident patterns are cross-referenced with other data (complaints, audit findings, staffing patterns) to identify systemic contributors

Common Pitfalls

  • The practice has a low number of recorded incidents and interprets this as evidence that incidents do not occur - it more likely indicates under-reporting
  • Aggregate review is limited to counting incidents by category without analysis of contributing factors or systemic patterns

Established Evidence

  • An annual review of all recorded complaints, looking for themes, recurring issues, and trends
  • The review identifies whether the same types of complaints recur (e.g., communication, wait times, billing) and whether previous corrective actions have been effective
  • The review is documented and shared with the principal practitioner(s) and practice manager

Minimum for Developing

  • Complaints are handled individually but no aggregate review has been conducted - the practice cannot describe its top complaint themes

Excelling

  • Complaint data is analysed alongside patient feedback, incident data, and operational KPIs to identify cross-cutting themes
  • The practice can demonstrate specific changes made in response to complaint trends over the past two years

Common Pitfalls

  • The practice has received very few formal complaints and concludes that patients are satisfied - without considering whether patients know how to complain, feel safe doing so, or face barriers (e.g., dependence on the specialist for ongoing care)
  • Complaints are categorised and counted but the underlying causes are not investigated - knowing you received five complaints about waiting times is less useful than understanding why waiting times are long

Established Evidence

  • The principal practitioner(s) review clinical outcome data relevant to their specialty at least annually (e.g., complication rates, re-operation rates, treatment success rates, patient-reported outcome measures)
  • The review is documented, even if briefly (e.g., a dated note in the improvement plan or clinical governance record)
  • Where clinical outcome data is not routinely available, the practice has considered what outcomes could be measured and whether collection is feasible

Minimum for Developing

  • The practice acknowledges the value of clinical outcome review but does not currently collect or review outcome data in a structured way

Excelling

  • Clinical outcomes are benchmarked against published data, specialty college standards, or registry data where available
  • Patient-reported outcome measures (PROMs) are used to supplement clinical outcome data

Common Pitfalls

  • Assuming that clinical outcome review is only possible in surgical or procedural specialties - all specialties can measure outcomes (e.g., treatment response rates, functional improvement scores, diagnostic accuracy rates)
  • The principal practitioner reviews their own outcomes in isolation with no external comparison or accountability - self-assessment without benchmarking provides limited assurance

Established Evidence

  • Where data review (KPIs, incidents, complaints, clinical outcomes) identifies a concern or opportunity for improvement, a corresponding action appears in the improvement plan
  • The link between the data finding and the improvement action is explicit and traceable
  • The improvement plan references the data source that triggered the action

Minimum for Developing

  • Data is reviewed but findings remain as observations - they are not translated into documented improvement actions

Excelling

  • Every data review produces a documented decision: either a new improvement action is created, an existing action is updated, or a conscious decision to take no action is recorded with reasons
  • The improvement plan shows a history of data-triggered actions, demonstrating that the practice responds to what the data tells it

Common Pitfalls

  • Data review and improvement planning are conducted by different people or at different times, so findings do not flow through to actions
  • The improvement plan contains aspirational goals set at the start of the year but none that arose from data reviewed during the year

Established Evidence

  • The practice uses multiple data sources to assess its quality (patient feedback surveys, clinical audits, incident reports, KPIs, outcome data) rather than relying on "no news is good news"
  • When asked about quality evidence, the practice can point to active measures and reviews, not just the absence of negative signals
  • The practice has documented how it collects quality evidence, and complaints are one input among several

Minimum for Developing

  • The practice's primary evidence of quality is "we haven't had any complaints" or "patients keep coming back"
  • Some other data sources exist but are not systematically used

Excelling

  • The practice proactively seeks critical feedback (e.g., asking patients specifically about negative experiences, conducting exit surveys) rather than passively waiting for complaints
  • The practice can articulate why absence of complaints is an insufficient measure, particularly in specialist practice settings where patients may feel dependent on the clinician

Common Pitfalls

  • Interpreting a low complaint rate as proof that care is excellent - in specialist practice, patients often do not complain because they fear it will affect their ongoing care, they do not know how, or they assume problems are normal
  • Using patient retention as a quality proxy without recognising that patients often have limited choice of specialist, especially in regional areas

8.4 Learning from Incidents, Near Misses, and Complaints

We treat things that go wrong as opportunities to improve, not just problems to resolve.

Established Evidence

  • A written procedure or flowchart describing how incidents and near misses are reviewed for learning after the immediate response is completed
  • The process distinguishes between the initial response (e.g., treating the patient, securing safety, reporting) and the subsequent learning review (e.g., root cause analysis, case discussion, process review)
  • The process specifies who conducts the learning review, when it occurs, and how findings are documented

Minimum for Developing

  • Incidents are responded to in the moment but there is no structured process for returning to them to extract learning
  • The practice recognises the value of learning reviews but has not formalised the process

Excelling

  • All incidents and near misses above a defined threshold are subject to a structured learning review, with the methodology adapted to the severity (e.g., brief case discussion for minor events, formal root cause analysis for serious events)
  • Learning review findings are stored in a central location and referenced in the improvement plan

Common Pitfalls

  • The initial response is thorough but nobody returns to the incident to ask "why did this happen and how do we prevent it recurring?"
  • Learning reviews are conducted informally ("we talked about it") with no documentation - the learning is lost when staff change or memories fade

Established Evidence

  • A definition of what constitutes a "significant" incident that triggers a structured review (based on actual or potential severity)
  • Evidence that structured reviews have been conducted for significant incidents within the defined timeframe (typically within 4-6 weeks of the event)
  • Structured review records include a description of what happened, contributing factors, root causes identified, and recommended actions

Minimum for Developing

  • Significant incidents are discussed informally but not subjected to a structured review methodology
  • Reviews occur but not within a consistent or defined timeframe

Excelling

  • The practice uses a recognised review methodology (e.g., root cause analysis, London Protocol, HFACS framework) appropriate to the incident severity
  • External expertise is sought for the most serious incidents where internal review capacity is insufficient

Common Pitfalls

  • The review focuses on "who did what wrong" rather than on systemic factors - this discourages future reporting and misses the real learning
  • The structured review is delayed so long that it becomes irrelevant - details are forgotten and the opportunity for timely change is lost

Established Evidence

  • At least one example in the past two years where an incident review resulted in a documented change to a policy, procedure, or training requirement
  • The change is traceable back to the incident that triggered it (e.g., "procedure updated following incident review on [date]")
  • The updated policy or procedure is in use and staff are aware of the change

Minimum for Developing

  • Incident reviews identify learning but the findings are not translated into formal changes - the practice relies on informal awareness rather than documented changes

Excelling

  • Every structured incident review results in documented actions, and those actions are tracked through to implementation
  • The practice maintains a register linking incident reviews to policy or process changes, demonstrating a systematic loop from event to improvement

Common Pitfalls

  • The review produces recommendations but no one is assigned to implement them - good intentions but no follow-through
  • Changes are made informally ("we told everyone to be more careful") without updating the relevant procedure or providing structured training

Established Evidence

  • Records showing that changes arising from incident reviews were communicated to affected staff (e.g., team meeting minutes, email notifications, updated procedure circulated with read-receipt, training session records)
  • Communication is timely - it occurs close to when the change is implemented, not months later
  • Staff can describe recent changes that were made in response to incidents

Minimum for Developing

  • Changes are made but not formally communicated - staff may or may not become aware of them through informal channels

Excelling

  • Communication includes the reason for the change (not just "we've updated policy X" but "following a recent incident involving Y, we have changed our approach to Z")
  • The practice checks that staff have understood and adopted the change, not just that they were informed

Common Pitfalls

  • An updated procedure is saved to the shared drive but no one is told it has been updated - the old version continues to be followed
  • Communication occurs only by email, which is not read by all staff, particularly clinical staff who do not routinely check email during clinical sessions

Established Evidence

  • An annual review of all complaints received, documented with themes identified (e.g., communication, wait times, billing, clinical outcomes)
  • The review identifies recurring patterns and distinguishes between isolated complaints and systemic issues
  • The review is conducted by or presented to the principal practitioner(s) and practice manager

Minimum for Developing

  • Complaints are logged and resolved individually but no annual thematic review has been conducted

Excelling

  • Complaint themes are tracked over time to determine whether previous improvement actions have been effective
  • The practice uses complaint data proactively to anticipate problems rather than reactively to address them after they recur

Common Pitfalls

  • The complaint register contains insufficient detail to enable thematic analysis - entries like "patient unhappy" do not support meaningful review
  • The annual review is cursory (e.g., "we received three complaints, all resolved") without analysis of what the complaints have in common

Established Evidence

  • Where complaint themes indicate systemic issues (not just individual service failures), corresponding improvement actions appear in the improvement plan
  • The link between the complaint theme and the improvement action is documented
  • At least one example exists of a complaint-driven improvement in the past two years

Minimum for Developing

  • Systemic complaint themes have been identified but have not been translated into the improvement plan

Excelling

  • Complaint-driven improvements are tracked through to completion and evaluated for effectiveness
  • The practice can demonstrate that a recurring complaint theme has been reduced or eliminated through targeted improvement action

Common Pitfalls

  • Complaints are treated as individual customer service issues rather than potential indicators of systemic problems - each is "resolved" in isolation without asking whether the same thing keeps happening
  • The improvement plan addresses aspirational goals but never includes actions triggered by real patient complaints

Established Evidence

  • A named person is responsible for monitoring external safety communications (TGA recalls, AHPRA safety alerts, specialist college advisories, medical indemnity insurer risk alerts)
  • A log or register of alerts received and actions taken (including "reviewed - not applicable to this practice" where relevant)
  • Evidence of timely action where an alert is relevant (e.g., product withdrawn from use, patients contacted, clinical procedure updated)

Minimum for Developing

  • The practice is generally aware of major safety alerts through professional networks but does not have a systematic monitoring process
  • No log of alerts received or actions taken is maintained

Excelling

  • The practice subscribes to relevant alert services (TGA, AHPRA, specialty colleges, MDA National or Avant risk alerts) and has a defined triage process for incoming alerts
  • Alert responses are documented and incorporated into the improvement plan where process changes are required

Common Pitfalls

  • Relying on ad hoc awareness rather than systematic monitoring - alerts are noticed if they happen to appear in a newsletter but are missed if they arrive during busy periods
  • Alerts relevant to the practice's equipment, medications, or devices are received but not acted upon because no one is assigned responsibility for reviewing them

Established Evidence

  • The practice's incident reporting process explicitly includes near misses (events that could have caused harm but did not)
  • Near misses are recorded in the same register as actual incidents and are subject to the same review process
  • Staff understand that near misses should be reported and can describe what constitutes a near miss in their context

Minimum for Developing

  • The practice acknowledges the concept of near misses but does not systematically record or review them
  • Near misses are mentioned informally but not documented

Excelling

  • Near misses are valued as the most important source of learning because they reveal system weaknesses before patients are harmed
  • The practice can demonstrate at least one improvement that originated from a near-miss report rather than an actual incident

Common Pitfalls

  • Near misses are dismissed as "nothing happened, so there's nothing to report" - this misses the fundamental principle that near misses reveal the same system failures as actual incidents, just with a different outcome by chance
  • The reporting culture penalises errors and near misses, so staff conceal them rather than report them

Established Evidence

  • A documented commitment to a "just culture" or non-punitive reporting policy, visible to all staff
  • Staff survey results or confidential feedback indicating that staff feel safe reporting errors and near misses
  • Evidence that incident reports have been received from staff at various levels (not only from the practice manager), suggesting that reporting is not limited to senior staff

Minimum for Developing

  • The practice has a reporting process but staff hesitate to use it - incident reports come almost exclusively from the practice manager or principal practitioner
  • No formal commitment to non-punitive reporting exists

Excelling

  • The practice actively encourages reporting through regular reminders, positive reinforcement when reports are submitted, and visible examples of improvements that resulted from reports
  • Leadership models reporting behaviour by reporting their own errors and near misses

Common Pitfalls

  • The practice says it has a non-punitive culture but staff who report problems receive negative feedback, are questioned about their competence, or find that their concerns are not acted upon - the stated culture and the actual culture diverge
  • All incident reports are made by the same person (usually the practice manager), indicating that other staff either do not report or do not feel empowered to do so

Established Evidence

  • A specific, named example of a change to practice that was triggered by an incident or complaint within the past 24 months
  • Documentation showing the incident or complaint, the review or investigation, the improvement action decided upon, and the implementation of that action
  • Evidence that the change is still in place (e.g., the updated procedure is current, the new process is being followed)

Minimum for Developing

  • The practice believes it has made improvements in response to incidents or complaints but cannot provide a documented example

Excelling

  • The practice can provide multiple examples of incident- or complaint-driven improvements, demonstrating a pattern of responsive quality improvement
  • At least one example includes evaluation of whether the improvement achieved its intended outcome

Common Pitfalls

  • The practice claims it "learns from everything" but cannot name a single specific change that resulted from an incident or complaint - learning without documented action is not improvement
  • The example provided is trivial (e.g., "we put up a sign") rather than substantive (e.g., "we redesigned our consent process," "we changed our medication checking procedure")

8.5 Peer Review and External Benchmarking

We look outside our own practice to test and calibrate the quality of what we do.

Established Evidence

  • Records of participation in peer review, case discussion, mortality and morbidity meetings, clinical audit groups, or similar structured clinical review activities within the past 12 months
  • Documentation includes the nature of the activity, date(s), and the practitioner's involvement (attendee, presenter, auditor)
  • Activities involve genuine peer scrutiny of clinical decision-making, not just attendance at educational lectures or conferences

Minimum for Developing

  • The principal practitioner attends educational events but does not participate in structured peer review or case discussion
  • Peer review was undertaken in prior years but not in the past 12 months

Excelling

  • The principal practitioner participates in multiple forms of peer review (e.g., a monthly case discussion group and an annual clinical audit)
  • The practitioner presents their own cases for peer discussion, not just observing others' presentations

Common Pitfalls

  • Counting CPD conference attendance as peer review - attending a lecture does not involve scrutiny of one's own clinical practice
  • The practitioner participates in peer review at a hospital appointment but not in relation to their private practice caseload - private practice patients deserve the same level of external accountability

Established Evidence

  • A record of each peer review activity, including the date, type of activity, topics discussed (in de-identified form), and learning points relevant to the practitioner's own practice
  • Documentation is retained in a CPD or clinical governance file accessible for review
  • Learning points include any intended changes to practice arising from the peer review discussion

Minimum for Developing

  • The practitioner participates in peer review but does not document it beyond a certificate of attendance
  • Learning points are informal and not recorded

Excelling

  • Peer review learning is connected to the practitioner's CPD plan and the practice improvement plan where relevant
  • A reflective note accompanies each peer review activity, describing what was learned and how it will influence future practice

Common Pitfalls

  • No documentation exists - the practitioner reports that they "discussed cases with colleagues" but there is no record of when, what was discussed, or what was learned
  • Documentation is limited to a list of dates and topics without any reflection on learning or impact on practice

Established Evidence

  • Evidence that the practice is aware of relevant college or association audit/benchmarking programs (e.g., RACS Surgical Audit, RANZCOG clinical indicators, ANZCA quality audits)
  • If participating, records of participation and any reports or feedback received
  • If not participating, a documented rationale for the decision (e.g., not applicable to the practice's scope, cost-prohibitive, or alternative benchmarking in place)

Minimum for Developing

  • The practice is not aware of whether its relevant college or association offers clinical audit or benchmarking programs

Excelling

  • The practice actively participates in college or association audit/benchmarking programs and uses the results to inform its improvement plan
  • Benchmarking data is reviewed by the principal practitioner and discussed with relevant staff

Common Pitfalls

  • Assuming that college CPD requirements constitute benchmarking - CPD ensures ongoing education but does not necessarily involve comparison of clinical performance against peers
  • Awareness of the program exists but participation is perpetually deferred because "we'll do it next year"

Established Evidence

  • Evidence that the practice has identified clinical registries relevant to its specialty (e.g., Australian Orthopaedic Association National Joint Replacement Registry, Breast Cancer Trials registry, BreastSurgANZ Quality Audit, Australian and New Zealand Hip Fracture Registry)
  • If participating, records of registration and data submission
  • If not participating, a documented rationale (e.g., not applicable to the practice's case mix, administrative burden disproportionate to practice size)

Minimum for Developing

  • The practice has not investigated whether relevant clinical registries exist for its specialty

Excelling

  • The practice participates in one or more clinical registries and uses registry reports to compare its outcomes against national benchmarks
  • Registry data is discussed at practice meetings and informs clinical quality improvement

Common Pitfalls

  • Assuming clinical registries are only for large hospital-based practices - many registries include private practice data and some specifically seek private practice participation
  • The practice submits data to a registry but never reviews the feedback reports - participation without engagement does not improve quality

Established Evidence

  • Where peer review or benchmarking identifies areas for improvement or learning, these are reflected in the practice improvement plan as specific actions
  • The link between the peer review finding and the improvement action is documented
  • At least one improvement action in the past two years was triggered by peer review or benchmarking, not solely by internal processes

Minimum for Developing

  • Peer review and benchmarking activities occur but their findings remain with the individual practitioner and are not connected to the practice's improvement plan

Excelling

  • External findings are systematically integrated with internal data (audits, incidents, complaints) to create a comprehensive picture of practice performance
  • The practice can demonstrate improvements that were identified through external comparison rather than internal review alone

Common Pitfalls

  • Peer review and the practice improvement plan exist as entirely separate activities - one is a "clinical" exercise and the other is "admin," with no connection between them
  • Benchmarking results show the practice is below average on a measure but no action is taken because the result is rationalised away ("our patients are more complex")

Established Evidence

  • If the practice trains medical students, registrars, or fellows, feedback from the relevant supervisory body (e.g., university medical school, specialist college training committee, postgraduate training authority) is received and reviewed
  • Feedback reports are retained and any concerns raised are addressed with documented actions
  • The practice can describe the most recent feedback received and any changes made in response

Minimum for Developing

  • The practice participates in training but does not recall receiving feedback from the supervisory body, or feedback was received but not reviewed
  • The practice does not participate in clinical training (this indicator may be marked as not applicable)

Excelling

  • The practice actively seeks feedback from trainees in addition to formal supervisory body feedback
  • Training quality is treated as part of the practice's overall quality framework, with training-related improvements included in the improvement plan

Common Pitfalls

  • Feedback from the supervisory body is filed without being read - the principal practitioner assumes that if there were problems, someone would have called
  • Negative feedback from trainees (informal or formal) is dismissed as reflecting the trainee's limitations rather than the practice's training environment

8.6 Regulatory Currency and Awareness

We stay current with our legal obligations, relevant standards, and evolving clinical guidance.

Established Evidence

  • A specific person (by name or role, typically the practice manager) is documented as responsible for monitoring regulatory, standards, and guideline changes
  • The responsibility is recorded in their position description or in a practice governance document
  • The named person can describe how they monitor for changes (e.g., subscriptions, professional body updates, regulatory newsletters)

Minimum for Developing

  • No one has been formally assigned this responsibility - regulatory awareness depends on information arriving by chance through professional networks or newsletters

Excelling

  • The role includes a defined list of sources to monitor (e.g., AHPRA, TGA, relevant specialty college, state health department, Privacy Commissioner) with a defined frequency of checking
  • The named person provides a brief update to the principal practitioner at least quarterly

Common Pitfalls

  • Everyone assumes someone else is monitoring changes - regulatory updates are missed because no one has explicit responsibility
  • The role is assigned to the practice manager but no time is allocated for it, and no monitoring tools or subscriptions are provided

Established Evidence

  • A documented process or procedure for reviewing legislative updates, including who reviews them, how changes are assessed for relevance, and how necessary practice changes are identified and implemented
  • Evidence that legislative updates have been reviewed in the past 12 months (e.g., a log entry noting "Privacy Act amendments reviewed - no changes required to current practice" or "WHS regulations updated - risk assessment reviewed and training scheduled")
  • Key legislation monitored includes: Privacy Act 1988 (Cth), Work Health and Safety Act (relevant state/territory), Health Practitioner Regulation National Law, and applicable state health legislation

Minimum for Developing

  • The practice is aware of relevant legislation but does not have a structured process for monitoring updates - awareness is incidental rather than systematic

Excelling

  • The practice maintains a legislative register listing key legislation, the date it was last reviewed, the next scheduled review, and any actions arising
  • Legal or compliance advice is sought when significant legislative changes affect the practice's operations

Common Pitfalls

  • Assuming that compliance with legislation at the time the practice was established means ongoing compliance - legislation changes and practices must keep up
  • Relying on the accountant or lawyer to flag relevant changes rather than proactively monitoring - external advisers may not be aware of health-specific legislative changes

Established Evidence

  • The practice is subscribed to or regularly reviews communications from AHPRA (newsletters, regulatory updates, registration standards changes)
  • The practice monitors updates from the relevant specialist college(s) (e.g., RACS, RACP, RANZCOG, RANZCP, ANZCA)
  • Where guidance from AHPRA or the college is relevant to the practice's operations (e.g., changes to advertising rules, CPD requirements, scope of practice guidelines), the practice can demonstrate that it reviewed and acted on the guidance

Minimum for Developing

  • The principal practitioner individually monitors AHPRA and college updates but these are not shared with or reviewed by the practice as a whole

Excelling

  • AHPRA and college updates are tabled at practice meetings and relevant changes are discussed with all affected staff
  • The practice maintains a record of key guidance changes and the actions taken in response

Common Pitfalls

  • AHPRA newsletters are deleted unread or filed without review - changes to advertising guidelines, mandatory notification obligations, or registration standards are missed
  • College updates are treated as relevant only to the practitioner's CPD, not to the practice's operational compliance

Established Evidence

  • Each clinical policy and procedure has a documented review date and a next-review date
  • Evidence that policies have been reviewed within their scheduled cycle (review notes, updated version dates, or a policy register showing review status)
  • Reviews consider whether the policy still reflects current clinical standards, guidelines, and legislative requirements

Minimum for Developing

  • Policies exist but have no defined review cycle - some have not been reviewed since they were first written
  • A review schedule has been created but reviews have not yet been conducted

Excelling

  • Policy reviews are informed by current evidence, updated guidelines, audit findings, and incident learnings
  • A policy register tracks all policies with their current status, review dates, and responsible reviewer

Common Pitfalls

  • Policies are "reviewed" by opening the document and changing the review date without actually reading or updating the content
  • Policies reference outdated guidelines or superseded legislation because substantive review was not conducted

Established Evidence

  • A documented review of the practice's compliance with the Australian Privacy Principles (APPs), conducted within the past 24 months
  • The review covers: how personal information is collected, used, disclosed, stored, accessed, and corrected; the practice's privacy policy; and the practice's data breach response process
  • Where gaps were identified, corrective actions are documented

Minimum for Developing

  • The practice has a privacy policy but has not reviewed its actual compliance against the APPs recently
  • The practice is aware of the APPs but could not describe how it meets each relevant principle

Excelling

  • The privacy review is conducted annually, informed by any regulatory updates from the Office of the Australian Information Commissioner (OAIC)
  • Staff receive privacy refresher training at least biennially, aligned with the APP review

Common Pitfalls

  • Having a privacy policy on the website is confused with compliance - the policy describes what the practice should do, not whether it actually does it
  • The Notifiable Data Breaches scheme obligations are unknown to the practice manager - they cannot describe what constitutes an eligible data breach, the requirement to assess a suspected breach within 30 days, or the obligation to notify the OAIC and affected individuals as soon as practicable

Established Evidence

  • Staff induction materials include information about mandatory reporting obligations under the Health Practitioner Regulation National Law (AHPRA mandatory notifications) and applicable state/territory child protection legislation
  • Evidence that mandatory reporting obligations are reviewed with all staff at least every two years (training records, meeting minutes, policy review records)
  • Staff can describe the circumstances that trigger mandatory reporting and the process for making a report

Minimum for Developing

  • The principal practitioner is aware of mandatory reporting obligations but this knowledge has not been shared with all staff, or it was covered at initial induction but not reviewed since

Excelling

  • Mandatory reporting training includes case scenarios relevant to the practice's specialty and patient population
  • The practice maintains a quick-reference guide or flowchart for mandatory reporting that is accessible to all clinical staff

Common Pitfalls

  • Assuming mandatory reporting is only the practitioner's responsibility - reception and administrative staff also need to understand basic obligations, particularly regarding child protection concerns they may observe
  • Child protection mandatory reporting obligations vary by state and territory - the practice applies another jurisdiction's rules or is unaware of the specific thresholds that apply where it operates

Established Evidence

  • The practice subscribes to TGA safety alert notifications (email alerts or the System for Australian Recall Actions - SARA - database) and/or regularly monitors the TGA website
  • A log of alerts received and the practice's response (including "not applicable" where the product is not used by the practice)
  • Where an alert is relevant, evidence of prompt action (e.g., product removed from stock, patients contacted, alternative sourced)

Minimum for Developing

  • The practice is generally aware of TGA recalls through media or supplier notifications but does not have a systematic monitoring process
  • No log of alerts reviewed or actions taken is maintained

Excelling

  • The practice monitors multiple alert sources (TGA, specialist college, medical device suppliers, medical indemnity insurer) and has a triage process for incoming alerts
  • Alert responses are completed within a defined timeframe (e.g., within five business days of receipt)

Common Pitfalls

  • TGA alerts are assumed to be relevant only to practices using implantable devices or dispensing medications - the TGA also recalls diagnostic equipment, consumables, software, and personal protective equipment
  • The practice relies on its supplier to notify it of recalls rather than monitoring independently - supplier notifications can be delayed or missed

Established Evidence

  • Records showing that regulatory or standards changes relevant to staff were communicated in a timely manner (e.g., email, team meeting agenda item, memo, updated policy circulated)
  • Communication is targeted to affected staff, not just broadcast to everyone - staff understand why the change matters to their role
  • At least one example in the past 12 months of a regulatory change that was communicated to staff with documented evidence

Minimum for Developing

  • Regulatory changes are known to the practice manager or principal practitioner but are not systematically communicated to other staff

Excelling

  • Communication includes a plain-language explanation of what has changed, why it matters, and what staff need to do differently
  • The practice confirms that staff have received and understood the communication (e.g., read-receipt, brief quiz, discussion at team meeting)

Common Pitfalls

  • Regulatory changes are communicated by emailing the full text of the amended legislation to all staff - no one reads it and no one understands what it means for their practice
  • Changes are communicated months after they take effect, by which time the practice has been non-compliant without knowing it

Established Evidence

  • Updated policies include a version history or change log noting the date of the change, the reason for the change (e.g., "updated to reflect Privacy Act amendment effective 1 July 2025"), and the effective date
  • The change log distinguishes between routine review updates and updates triggered by regulatory or standards changes
  • Previous versions of the policy are retained (at least one prior version) for audit trail purposes

Minimum for Developing

  • Policies are updated when regulations change but the reason for the update and the effective date are not recorded - it is unclear when or why the policy changed

Excelling

  • The practice maintains a policy change register that provides a centralised view of all regulatory-driven policy updates, with dates and references to the triggering regulation
  • Policy updates triggered by regulatory change are prioritised and implemented within a defined timeframe

Common Pitfalls

  • Policies are updated but the old version is deleted, so there is no record of what changed - this makes it impossible to demonstrate compliance at any point in time
  • The "reason for change" field says "annual review" when the actual trigger was a significant regulatory change - the audit trail is misleading
8.7

Improvement Culture

Our leadership actively creates the conditions for improvement to happen.

Established Evidence

  • The principal practitioner(s) participate in quality improvement discussions, attend practice meetings where quality is discussed, and personally endorse specific improvement actions
  • Staff can describe the principal practitioner's involvement in quality improvement without prompting (e.g., "Dr X reviews the improvement plan with us," "Dr X led the last audit")
  • The principal practitioner has signed off on or contributed to the practice improvement plan

Minimum for Developing

  • The principal practitioner supports quality improvement in principle but delegates all quality activities to the practice manager without personal involvement
  • Quality improvement is described as "the practice manager's job" rather than a shared responsibility

Excelling

  • The principal practitioner leads by example - presenting at case discussions, sharing their own learning from peer review, and publicly acknowledging when things need to improve
  • Quality improvement is integrated into the principal practitioner's own professional development planning

Common Pitfalls

  • The principal practitioner says quality is important but never attends quality meetings, never reads the improvement plan, and never participates in audits - staff observe the behaviour, not the words
  • Quality improvement is championed enthusiastically at the start of an accreditation cycle and then disappears from view until the next cycle

Established Evidence

  • Practice meeting agendas from the past 12 months show "quality improvement" or an equivalent item (e.g., "QI update," "improvement plan progress") as a recurring agenda item
  • Meeting minutes show that quality improvement was discussed at each meeting, not just listed on the agenda
  • Discussion includes progress updates, new issues identified, and decisions about priorities or actions

Minimum for Developing

  • Quality improvement is discussed at practice meetings occasionally but is not a standing agenda item - it is raised only when there is a specific issue to address

Excelling

  • Quality improvement discussions at practice meetings are structured and time-allocated, not squeezed in at the end
  • Meeting minutes record specific decisions or actions arising from quality improvement discussions

Common Pitfalls

  • Quality improvement is on the agenda template but is routinely skipped because the meeting runs out of time or there is "nothing to report"
  • Discussion of quality improvement is limited to the practice manager reporting - other staff are not engaged in the conversation

Established Evidence

  • A mechanism exists for staff to suggest improvements (e.g., suggestion box, standing invitation at team meetings, dedicated channel in team communication platform, regular one-on-one check-ins)
  • Evidence that staff suggestions have been received and at least some have been acted upon in the past 12 months
  • Staff can describe how to raise an improvement suggestion and feel confident that it will be considered

Minimum for Developing

  • Staff are told that suggestions are welcome but no mechanism exists to receive, record, or respond to them
  • Suggestions are received informally but there is no follow-up or feedback to the person who raised them

Excelling

  • Staff suggestions are logged, acknowledged, and receive a response (accepted and actioned, accepted and scheduled, or declined with reasons)
  • The practice can provide examples of improvements that originated from non-clinical or junior staff

Common Pitfalls

  • The invitation to suggest improvements is genuine but nothing ever changes - staff learn that suggesting things is a waste of time and stop doing it
  • Only clinical suggestions are taken seriously - administrative or reception staff suggestions about patient flow, communication, or operational efficiency are dismissed

Established Evidence

  • The practice's stated response to staff raising concerns is positive and appreciative, and this is reflected in actual behaviour observed by staff
  • Staff feedback or survey results indicate that raising concerns is safe and acknowledged
  • At least one example in the past 12 months where a staff member raised a concern and was visibly thanked or acknowledged (team meeting recognition, personal acknowledgement, improvement action credited to them)

Minimum for Developing

  • The practice does not actively punish staff who raise concerns, but neither does it actively encourage or acknowledge them
  • Staff are uncertain about how their concerns will be received

Excelling

  • Acknowledging staff who raise concerns is an explicit part of the practice's culture - it is discussed at induction and modelled by leadership
  • The practice tracks whether staff who raise concerns experience any negative consequences and takes corrective action if they do

Common Pitfalls

  • Staff who identify problems are perceived as "difficult" or "negative" rather than as contributors to quality improvement - this perception may not be stated openly but is communicated through body language, tone, or exclusion
  • The principal practitioner thanks staff for positive feedback but responds defensively to criticism or problem identification

Established Evidence

  • The practice has communicated at least one completed improvement to all staff in the past 12 months (team meeting announcement, email, noticeboard update, or newsletter)
  • Communication includes what was improved, why it matters, and who contributed to making it happen
  • Celebrating improvements is a regular practice, not a one-off event

Minimum for Developing

  • Improvements are made but are not communicated or acknowledged - staff may not be aware that changes have occurred or why

Excelling

  • The practice maintains a visible "improvements achieved" record (e.g., a board in the staff area, a section in the newsletter, or a regular team meeting segment) that accumulates over time
  • Achievements are linked back to the staff suggestion, audit finding, or incident report that triggered them, reinforcing the value of contributing to quality improvement

Common Pitfalls

  • The practice focuses exclusively on what still needs to be done and never acknowledges what has already been achieved - this creates a culture of perpetual deficit rather than progress
  • Improvements are communicated only to senior staff, not to the team members who were involved in the day-to-day work of implementing them

Established Evidence

  • Staff involved in quality improvement activities (conducting audits, writing procedures, reviewing data, attending quality meetings) are able to do this work during paid hours
  • Quality meetings and improvement activities are scheduled within rostered time, not before or after shifts or during lunch breaks
  • Where external resources are needed (e.g., templates, training, software), a modest budget is available or the practice has provided alternatives

Minimum for Developing

  • Quality improvement work is expected but no time is protected for it - staff fit it in around clinical and administrative duties and it is deprioritised whenever the practice is busy

Excelling

  • Specific time is blocked in rosters or calendars for quality improvement activities (e.g., two hours per month for the practice manager, one hour per quarter for team quality meetings)
  • The practice invests in quality improvement capability (e.g., training the practice manager in audit methodology, purchasing audit tools, or subscribing to benchmarking services)

Common Pitfalls

  • The practice manager is expected to drive all quality improvement on top of their existing workload with no additional time or support - improvement work is done at home on weekends
  • Quality improvement is formally supported but is the first thing cancelled when clinical demands increase - it is treated as discretionary rather than essential

Established Evidence

  • The staff induction program or checklist includes an introduction to the practice's quality framework, improvement plan, and quality reporting processes (incident reporting, suggestion mechanism, audit participation)
  • New staff can describe the practice's approach to quality improvement within their first month
  • Induction materials are current and reference the practice's actual processes, not generic quality statements

Minimum for Developing

  • New staff receive a general orientation but quality improvement is not specifically covered
  • Induction mentions quality in passing but does not explain the practice's specific processes or the staff member's role in them

Excelling

  • Induction includes a walkthrough of the current improvement plan, explanation of how to report incidents and suggest improvements, and an introduction to the practice's audit and data review processes
  • New staff are paired with a buddy or mentor who models quality improvement behaviours during the induction period

Common Pitfalls

  • Quality improvement is described to new staff as something the practice manager does, not something they are expected to participate in
  • Induction materials reference a quality framework or improvement plan that is outdated or no longer in use - new staff receive a misleading impression of the practice's actual quality culture

Established Evidence

  • The practice manager has attended at least one professional development activity in the past 12 months related to healthcare quality, governance, audit, or practice management (e.g., AAPM conference, quality improvement workshop, online course in clinical governance)
  • The practice supports this professional development with time, funding, or both
  • The practice manager can describe what they learned and how it has influenced the practice's quality activities

Minimum for Developing

  • The practice manager undertakes general professional development but has not attended anything specifically related to healthcare quality or governance
  • The practice does not actively support or fund practice manager professional development

Excelling

  • The practice manager has a defined professional development plan that includes quality and governance topics, with scheduled activities throughout the year
  • The practice manager participates in peer networks or communities of practice with other specialist practice managers, sharing quality improvement approaches

Common Pitfalls

  • The practice manager is expected to lead quality improvement but has never received any training in audit methodology, quality improvement frameworks, or clinical governance - they are learning entirely on the job
  • Professional development funding is available for clinical staff but not for the practice manager - quality leadership capability is not invested in

Established Evidence

  • The principal practitioner and/or practice manager can name at least two specific, concrete improvements that were made as a direct result of using the SPQF
  • These improvements are documented (in the improvement plan, meeting minutes, or a quality report) and are still in effect
  • The improvements are substantive (they changed a process, policy, or practice behaviour) rather than superficial (they produced a document that no one uses)

Minimum for Developing

  • The practice has completed the self-assessment but cannot point to any specific changes that resulted from using the framework
  • The framework was treated as a tick-box exercise and has not influenced practice operations

Excelling

  • The practice can describe a narrative of improvement over time - what it looked like before the framework, what it looks like now, and what it plans to do next
  • The practice uses the framework as an ongoing management tool, not just a periodic assessment exercise

Common Pitfalls

  • The practice completed the self-assessment, filed it, and returned to business as usual - the framework produced paperwork, not improvement
  • The two "improvements" cited are trivial or would have happened regardless of the framework (e.g., "we bought a new printer" or "we hired a new receptionist") - the test is whether the framework identified a gap that led to a meaningful change in quality or safety