
What is a Leading Indicator?

A Leading Indicator is a proactive, predictive measure that assesses the presence, effectiveness, and health of safety systems, activities, and controls before an incident occurs. Unlike lagging indicators that count past failures, leading indicators measure inputs—providing early warning of system weaknesses while there's still time to intervene.

From Rearview Mirror to Windscreen

For decades, Australian organisations navigated workplace safety by looking backwards. Metrics like Lost Time Injury Frequency Rate (LTIFR) and Total Recordable Injury Frequency Rate (TRIFR) provided a definitive historical record of failure, but offered little insight into the current resilience of safety systems or the likelihood of future catastrophic events.

A leading indicator isn't merely a metric—it's a mechanism for foresight. By measuring inputs such as the completion of critical control verifications, the quality of hazard reporting, and the efficacy of safety leadership interactions, organisations can identify vulnerabilities in their defence barriers before harm occurs.

This shift is critical for Australian businesses. The Work Health and Safety Act 2011 imposes a positive duty of due diligence on officers (directors and executives), requiring them to have active, up-to-date knowledge of WHS matters and to verify that resources and processes are actually being used. Relying solely on injury rates—low-frequency retrospective data points—has been deemed insufficient for discharging this legal duty.

The Legal Imperative

Section 27 of the WHS Act places a personal, non-delegable duty on officers to exercise due diligence. One critical requirement is to "verify the provision and use of resources and processes" for eliminating or minimising risks. An officer cannot verify that a risk management process is working simply by looking at a report that says "Zero Injuries." The absence of injury is not proof of the presence of control.

Safe Work Australia explicitly advises that "Lost time injury (LTI) and LTI frequency rate (LTIFR) are inappropriate for informing WHS due diligence and management decisions" because they lack the necessary insight into the effectiveness of risk controls. Recent case law, including SafeWork NSW v Miller Logistics, has reinforced that courts expect officers to "actively monitor and evaluate health and safety management" using proactive data.

Leading indicators provide the granular, proactive information courts expect officers to review—reports on hazard rectification times, audit findings, and critical control verification rates. They are effectively a mechanism for legal compliance with Section 27.

Track What Matters Before It Matters

WorkSafeKit's digital check-in system automatically captures leading indicators from your lone workers in real time.

Explore check-ins

Leading vs Lagging Indicators

Understanding the relationship between leading and lagging indicators is essential. They serve complementary roles in a balanced performance scorecard.

Lagging indicators represent the "bottom line" of safety. They measure outcomes—injuries, illnesses, days lost, and compensation costs. While necessary for statutory reporting and understanding the ultimate consequences of safety failures, they suffer from significant limitations. They're retrospective, providing information only after harm has been done. In many modern workplaces, serious injuries are rare—a company might have zero Lost Time Injuries for a year, but this doesn't necessarily mean the workplace is safe. It may simply mean the company was lucky, or that latent defects in the system haven't yet aligned to cause an event.

Feature | Leading Indicators | Lagging Indicators
Primary Focus | Activities, systems, conditions, behaviours | Incidents, injuries, costs, failures
Temporal State | Future/Current (real-time) | Past (historical)
Actionability | High (direct control over inputs) | Low (result of complex interactions)
Predictive Value | High (early warning of drift) | None (confirms failure occurred)
Psychological Impact | Motivating (measuring achievement) | Can be punitive (measuring failure)

Leading indicators, conversely, allow for intervention before loss occurs. Activities like pre-start checks, hazard reports, and safety observations happen daily, providing a rich stream of data for trend analysis. Management can directly mandate an increase in safety walks or the maintenance budget, but it cannot mandate a decrease in injuries—it can only influence the conditions that lead to them.

Types of Leading Indicators

Leading indicators aren't homogeneous. A robust framework covers various aspects of the safety system: inputs, conditions, behaviours, and control efficacy.

Activity-Based Indicators

These metrics track the execution of safety-related activities, measuring whether the organisation is doing the work it promised in its safety management plan. Training and competency compliance measures the percentage of the workforce holding valid, current training and licences for tasks they perform. A drop in training compliance is a precursor to error-based incidents, often signalling operational pressure where production is prioritised over releasing staff for training.

Inspection schedule adherence measures the "cadence" of the safety system—the percentage of scheduled workplace inspections completed versus planned. If inspections are consistently missed or late, it indicates a lack of resources or management commitment. Corrective action closure rate is critical, measuring average time to close high-priority actions and the percentage of actions overdue. A growing backlog of open safety actions suggests that hazards are being identified but not fixed—a ticking bomb.
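
For organisations that track these figures from an action register or spreadsheet export, the arithmetic is straightforward. The following Python sketch is purely illustrative (the register structure and field names are hypothetical) and shows one way the two closure metrics could be derived:

```python
from datetime import date

# Hypothetical action register: each entry has a priority, a raised date,
# a due date, and a closure date (None if still open).
actions = [
    {"priority": "high", "raised": date(2024, 4, 1),  "due": date(2024, 5, 1),  "closed": date(2024, 4, 20)},
    {"priority": "high", "raised": date(2024, 4, 15), "due": date(2024, 5, 10), "closed": None},
    {"priority": "low",  "raised": date(2024, 5, 1),  "due": date(2024, 6, 1),  "closed": date(2024, 5, 20)},
]

today = date(2024, 5, 15)

# Average days to close high-priority actions (closed items only).
closed_high = [a for a in actions if a["priority"] == "high" and a["closed"]]
avg_days_to_close = sum((a["closed"] - a["raised"]).days for a in closed_high) / len(closed_high)

# Percentage of all actions that are still open and past their due date.
overdue = [a for a in actions if a["closed"] is None and a["due"] < today]
pct_overdue = 100 * len(overdue) / len(actions)

print(f"Avg days to close high-priority actions: {avg_days_to_close:.1f}")
print(f"Actions overdue: {pct_overdue:.0f}%")
```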

Condition-Based Indicators

These assess the physical state of the workplace and the integrity of safety-critical plant and equipment. Hazard reporting frequency counts the number of hazards identified and reported by the workforce. Paradoxically, a high number of hazard reports is often a positive leading indicator, suggesting a culture where workers are risk-aware and feel safe to speak up. A site with "zero hazards reported" likely has a culture of silence or blindness to risk.

Maintenance compliance for safety-critical equipment measures adherence to maintenance schedules for items like emergency stops, fire suppression systems, and crane limit switches. In high-hazard industries like mining, a backlog here is a direct predictor of catastrophic equipment failure.

Behaviour-Based Indicators

These metrics focus on the human element—what people do when they think no one is watching. Behavioural observation rates track data from peer-to-peer observation programmes, measuring observations per employee per month and the percentage of "safe" versus "at-risk" behaviours observed. High participation rates indicate engagement, and shifts in the ratio can signal drift in safety culture.

Safety leadership interactions track the visible presence of senior leaders in the field through the number of leadership safety walks conducted by the executive team. This measures "felt leadership"—when leaders are seen engaging in safety conversations, it reinforces the priority of safety to the workforce.

Psychosocial Leading Indicators

With recent amendments to WHS Regulations in NSW, WA, and other states explicitly regulating psychosocial hazards, leading indicators in this domain are vital. Excessive hours are a precursor to fatigue and burnout—tracking the percentage of the workforce exceeding 50 hours per week and accrued annual leave balances (high balances indicate an inability to take breaks) provides early warning.

Tracking early-stage informal complaints or grievances related to bullying and harassment allows for early intervention through mediation or culture reviews before a worker becomes ill. An increase in informal complaints is a leading indicator of potential psychological injury claims.
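
As an illustration of how the fatigue-related measures above might be derived from payroll data, here is a minimal Python sketch; the data fields and the 20-day leave threshold are assumptions, not prescribed values:

```python
# Hypothetical weekly payroll extract: hours worked and accrued annual
# leave (in days) per worker.
workforce = [
    {"name": "A", "weekly_hours": 54, "leave_balance_days": 32},
    {"name": "B", "weekly_hours": 42, "leave_balance_days": 8},
    {"name": "C", "weekly_hours": 51, "leave_balance_days": 25},
    {"name": "D", "weekly_hours": 38, "leave_balance_days": 12},
]

HOURS_THRESHOLD = 50         # hours per week, as discussed above
LEAVE_THRESHOLD_DAYS = 20    # assumed flag for excessive accrued leave

# Percentage of workers exceeding the weekly hours threshold.
pct_excessive_hours = 100 * sum(w["weekly_hours"] > HOURS_THRESHOLD for w in workforce) / len(workforce)

# Percentage of workers carrying a high accrued leave balance.
pct_high_leave = 100 * sum(w["leave_balance_days"] > LEAVE_THRESHOLD_DAYS for w in workforce) / len(workforce)

print(f"Workforce exceeding {HOURS_THRESHOLD} h/week: {pct_excessive_hours:.0f}%")
print(f"Workforce with leave balance above {LEAVE_THRESHOLD_DAYS} days: {pct_high_leave:.0f}%")
```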

Critical Control Verification: The Gold Standard

Perhaps the most sophisticated evolution of leading indicators in Australia is Critical Control Management (CCM). Originating in mining and high-hazard sectors, CCM shifts focus from "preventing all injuries" to "preventing fatalities."

Not all controls are created equal. A critical control is a specific object, act, or system crucial to preventing a high-consequence event (fatality or catastrophe) or mitigating its consequences. The absence or failure of a critical control significantly increases the risk of a fatal event, even if other controls are in place.

Critical Control Verification (CCV) involves scheduled field checks to verify that critical controls are effective and present. Unlike generic safety inspections that might look at 50 different items (some trivial), a CCV check looks at one fatal risk and asks: "Is the specific control that saves lives working right now?"

Hazard Category | Critical Control | Verification Question
Working at Heights | Fall Arrest System | Is the harness within its inspection date and free from damage?
Working at Heights | Exclusion Zones | Are drop zones barricaded with signage to prevent entry below work?
Mobile Plant | Separation of People and Plant | Are pedestrian walkways clearly marked and physically separated from vehicle routes?
Mobile Plant | Operator Competency | Does the operator hold a valid Verification of Competency (VOC) for this specific machine?
Isolation (LOTO) | Energy Isolation | Has a "try start" or dead-test been performed to verify zero energy state?

The leading metrics from CCV are verification coverage (percentage of scheduled CCV checks completed) and control effectiveness rate (percentage of checks where the control was found fully effective). If a site reports "90% control effectiveness," it means 10% of the time, a life-saving control was found missing or broken—a powerful red-flag leading indicator demanding immediate action.
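
Both CCV metrics reduce to simple ratios over the verification log. The sketch below is illustrative only, assuming a hypothetical record structure for scheduled checks and their outcomes:

```python
# Hypothetical CCV log: one record per scheduled verification check.
ccv_checks = [
    {"control": "Exclusion zone",    "completed": True,  "effective": True},
    {"control": "Fall arrest",       "completed": True,  "effective": False},
    {"control": "Energy isolation",  "completed": False, "effective": None},
    {"control": "Ped. separation",   "completed": True,  "effective": True},
]

completed = [c for c in ccv_checks if c["completed"]]

# Verification coverage: scheduled checks actually completed.
coverage = 100 * len(completed) / len(ccv_checks)

# Control effectiveness rate: completed checks where the control was fully effective.
effectiveness = 100 * sum(c["effective"] for c in completed) / len(completed)

print(f"Verification coverage: {coverage:.0f}%")
print(f"Control effectiveness: {effectiveness:.0f}%")
```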

Automatic Alerts for Critical Controls

Configure man-down detection and duress alarms for workers in high-risk environments.

Learn about alerts

Implementation Strategies

Implementing a leading indicator programme requires strategic consideration of data collection, culture, and interpretation.

Avoiding the "Green Watermelon" Effect

A major risk in leading indicator reporting is the "green watermelon" effect—the dashboard looks green (good) on the outside, but reality is red (bad) on the inside. This occurs when indicators measure quantity without quality. If a KPI is set for "four toolbox talks per month," supervisors might rush through them, reading a generic sheet to bored workers just to tick the box. The leading indicator shows 100% green, but safety knowledge transfer is zero.

The solution is pairing quantity metrics with quality metrics. Measure not just the number of toolbox talks, but also the percentage of workers who can correctly answer three questions about the toolbox topic.

The Balanced Scorecard Approach

Leading indicators shouldn't replace lagging indicators entirely—they should complement them. A robust safety scorecard for a board or executive team includes a mix: lagging outcomes (TRIFR, workers' compensation claims) to track ultimate results; leading activities (inspection adherence, training rates) to track effort; leading risk indicators (critical control verification rates) to track catastrophic risk; and leading culture indicators (hazard reporting frequency, leadership walks) to track engagement.
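
One way to visualise that mix is as a simple scorecard definition. The grouping below is a hypothetical sketch; the specific metric names are examples, not a prescribed set:

```python
# Illustrative balanced scorecard: each group mixes lagging outcomes with
# leading activity, risk, and culture measures.
safety_scorecard = {
    "lagging_outcomes": ["TRIFR", "workers_compensation_claims"],
    "leading_activities": ["inspection_schedule_adherence_pct", "training_compliance_pct"],
    "leading_risk": ["ccv_coverage_pct", "control_effectiveness_pct"],
    "leading_culture": ["hazard_reports_per_100_workers", "leadership_safety_walks_per_month"],
}

for group, metrics in safety_scorecard.items():
    print(group, "->", ", ".join(metrics))
```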

Maturity Matters

The choice of leading indicators depends on the safety maturity of the organisation. Reactive organisations in the compliance stage should focus on basic compliance metrics like percentage of mandatory training completed and percentage of statutory inspections done. Developing organisations in the calculative stage should focus on system implementation through hazard reporting rates and corrective action closure times. Proactive organisations in the generative stage can focus on effectiveness and culture through quality of investigations, safety climate survey scores, and critical control effectiveness rates.

Technology Enablement

Manual collection of leading indicator data through paper checklists is a barrier to timeliness and accuracy. Digital checklists upload data instantly to live dashboards, capture evidence through required photos of controls (reducing "tick and flick" falsification), and feed analytics that aggregate thousands of data points to surface trends such as "inspection failure rates are 20% higher on night shift", allowing for targeted intervention.
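
The aggregation behind a finding like the night-shift example is simple once the data is digital. The following Python sketch assumes a hypothetical export of inspection results tagged by shift:

```python
from collections import defaultdict

# Hypothetical export of digital inspection results.
inspections = [
    {"shift": "day",   "failed": False},
    {"shift": "day",   "failed": True},
    {"shift": "day",   "failed": False},
    {"shift": "night", "failed": True},
    {"shift": "night", "failed": True},
    {"shift": "night", "failed": False},
]

# Aggregate the failure rate per shift to surface trends like the example above.
totals, failures = defaultdict(int), defaultdict(int)
for record in inspections:
    totals[record["shift"]] += 1
    failures[record["shift"]] += record["failed"]

for shift in totals:
    rate = 100 * failures[shift] / totals[shift]
    print(f"{shift} shift failure rate: {rate:.0f}%")
```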

Strategic Recommendations

To successfully integrate leading indicators into your safety programme, map indicators to critical risks by examining your risk register. If your top risk is vehicle collision, your primary leading indicator should be traffic management plan audit compliance, not the number of minor injuries reported.

Define what "good" performance looks like carefully. Setting a target of "zero hazards" is dangerous—it encourages under-reporting. A target of "100 hazards reported and closed" encourages proactive behaviour.

Empower officers with the data they need to discharge their Section 27 due diligence duties. Provide reports that answer "Are our critical controls working?" rather than just "Did anyone get hurt?" Move towards Critical Control Verification as the gold standard—actively testing life-saving controls is the single most effective way to prevent fatalities and demonstrate due diligence in court.

Finally, review and evolve your indicators regularly. Leading indicators aren't static. As a specific risk is brought under control, the indicator may be retired and replaced with a new one focusing on a different area of weakness.

References

Safe Work Australia. (2017). Measuring and reporting on work health and safety. Retrieved from safeworkaustralia.gov.au

O'Neill, S. & Wolfe, K. (2017). Measuring and Reporting on Work Health and Safety. Safe Work Australia commissioned research.

SafeWork NSW. The Work Health and Safety Duty of an Officer. Retrieved from safework.nsw.gov.au

WorkSafe WA. Health and safety leading and lagging performance indicators: guide. Retrieved from dmirs.wa.gov.au

Frequently Asked Questions

Can leading indicators fully replace LTIFR and other lagging indicators?

No, they serve different purposes. Lagging indicators like LTIFR are necessary for benchmarking, calculating insurance premiums, and understanding the ultimate "cost" of safety failures. They confirm whether your prevention efforts actually worked. Leading indicators are for steering the ship; lagging indicators are for looking at the wake. A complete safety scorecard must include both to provide a holistic view of performance.

How many leading indicators should an organisation track?

Quality is far more important than quantity. Safe Work Australia research suggests that tracking too many metrics leads to "data noise" and administrative burden. Start with 3-5 meaningful indicators that connect directly to your organisation's top critical risks—for example, one for training, one for hazard reporting, and one for critical control verification. You can add or rotate indicators as your safety programme matures.

What if our leading indicators are "green" but we're still having incidents?

This is a classic signal of the "green watermelon" effect or a misalignment of metrics. It likely means your indicators are measuring activity (e.g., "Did we hold the meeting?") rather than effectiveness (e.g., "Did the meeting improve knowledge?"). It could also mean you're measuring the wrong things—tracking slip-and-trip hazards while ignoring high-consequence risks like vehicle interactions. This disconnect requires an immediate review of the quality of your verification activities and the relevance of your chosen metrics.

Protect your lone workers with WorkSafeKit

Real-time monitoring, check-ins, and emergency alerts for your team.

Get in touch
