
Beyond Blame: Why "Toy Mode" Thinking and Human Error Crash Drones (And How to Fix It)

By: Colonel (ret) Bernie Derbach, KR Droneworks, 18 Dec 25


There is a dangerous paradox in the modern drone industry. You can walk into a big-box store, buy a high-performance aircraft for a few hundred dollars, and fly it with an interface that looks like a video game. It feels like play.

But when we view a 25 kg aircraft, or even a sub-250 g micro-drone, through a "Toy Mode" filter, we invite disaster. We start to treat safety procedures as "overkill" and checklists as optional suggestions. When a crash inevitably happens, the industry’s default reaction is to label it "Human Error," blame the pilot for skipping the checklist, and consider the case closed.


But here is a radical thought: Humans don’t wake up intending to be unsafe. No RPAS pilot drives to the site intending to crash into a tree.


If we want to meet the high standards of Transport Canada’s TP 15530 (Level 1 Complex Operations), we must stop blaming pilots and start looking at the systems they operate in.


The Theory: Local Rationality


Psychologists call this principle "Local Rationality." It simply means that people do what makes sense to them at the time, given the information they have.


RPAS pilots operate within a complex web of pressures:


  • Procedures that may be unclear, outdated, or 50 pages long.

  • Operational Pressures from clients ("Get the shot before the sun sets!").

  • Imperfect Information (glare on screens, confusing UI).

  • Organizational Norms ("We always skip that calibration step here").


Calling the outcome "human error" ignores the conditions that shaped the decision. If the system doesn't change, the next human will make the same "error."


TP 15530: The Regulatory Safety Net


Transport Canada’s TP 15530, Knowledge Requirements for Pilots of Remotely Piloted Aircraft Systems, is not just a list of rules to memorize for an exam. It is a blueprint for Threat and Error Management (TEM).


Let’s look at two specific requirements of TP 15530 through this systemic lens.


1. The Checklist Dilemma


TP 15530 cites "Lack of use of SOPs and Checklists" as a major safety issue.


  • The Old View: The pilot was lazy and negligent for not reading the checklist.

  • The Systemic View: Why did skipping the checklist make sense? Was the checklist designed for a hangar while the pilot was standing in a freezing field? Was it digital, but the iPad battery died?


Effective organizations don't just demand compliance; they design SOPs that fit the workflow so that following the rule is easier than breaking it.
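As a concrete (and purely illustrative) example, a 50-page SOP might be backed by a laminated one-page field card that covers only the launch-critical items, something like:

  • Airframe and props secure, no visible damage

  • Battery charged, seated, and locked

  • Home point and return-to-home altitude set

  • GPS lock and compass health confirmed

  • Airspace, NOTAMs, and site hazards reviewed

  • Visual Observer briefed and in position

The full SOP still lives in the manual; the field card is the version a cold, gloved pilot can actually follow.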


2. Crew Resource Management (CRM)


TP 15530 places heavy emphasis on CRM. In a systemic view, CRM is about creating a culture where information flows freely.


  • The Trap: A junior Visual Observer sees a power line but stays quiet because the pilot is the "Boss" and hates interruptions.

  • The Fix: Training for assertiveness and communication. The system must empower the newest person on the team to shout "STOP" without fear of retribution.
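One practical way to build that habit, borrowed from crewed-aviation CRM, is to agree on a standard phrasing in advance: state what you see, state why it matters, state what you want. For example: "Power line at two o'clock, about 50 metres out. We're drifting toward it. Recommend we hold position." A crew that has rehearsed the wording is far more likely to use it when it counts.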


The Solution: SMS and the TSB Approach


How do we move from "blame" to "improvement"? We look to the Transportation Safety Board (TSB).


When the TSB investigates a major aviation accident, its mandate is specific: it does not assign fault or liability. Its only goal is to find out why the accident happened, so that it can prevent a recurrence.

RPAS organizations need to adopt this same mindset through a Safety Management System (SMS). This isn't just paperwork; it's a "Just Culture" where pilots can report mistakes without getting fired, allowing the organization to fix the root cause.


This continuous loop of improvement is often described as Plan-Do-Check-Act (PDCA): plan the change, put it into practice, check whether it actually reduced the risk, and act on what you learned.



Practical Tool: The "5 Whys" Analysis


So, how do you apply this? The next time you have a "near miss" or an incident, do not stop at "Pilot Error." Use the 5 Whys method to dig deeper.

This image illustrates a scenario where a pilot crashes a drone because the battery died mid-flight. By asking "why" five times, we move from the surface-level "pilot error" to the true organizational root cause.
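In plain text, the chain runs roughly like this (the intermediate answers here are illustrative and consistent with the scenario; the image may word them differently):

  • Why did the drone crash? The battery died mid-flight.

  • Why did the battery die mid-flight? The aircraft launched on a battery that was not fully charged.

  • Why did it launch on a low battery? The pre-flight battery check was skipped.

  • Why was the check skipped? The client interrupted the pilot mid-checklist to push for an earlier launch, and the checklist was never resumed.

  • Why was the interruption allowed to derail the checklist? The organization had no policy protecting safety protocols from client pressure.

That last answer is the systemic root cause, and it points directly at the fix.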




The Fix: In this scenario, you don't fire the pilot. You create a policy that says, "If a client interrupts safety protocols, the flight is terminated immediately," and you back your pilots up. You have fixed the system, not just the person.


Conclusion

Real safety improvement starts when we stop treating drones as toys and start asking better questions. By embracing the principles of TP 15530 and using tools like the 5 Whys, we move from "hoping for the best" to "engineering safety into every flight."

We must stop fixing the person and start fixing the path they walk on.


References

  • Transport Canada. (2023). Knowledge Requirements for Pilots of Remotely Piloted Aircraft Systems – Level 1 Complex Operations (TP 15530). Ottawa, ON.

  • Transport Canada. (2024). Introduction to Safety Management Systems (SMS). AC 107-001.

  • Transportation Safety Board of Canada. (n.d.). About the TSB: Mandate and Methodology.

  • Dekker, S. (2014). The Field Guide to Understanding 'Human Error'. Ashgate Publishing.

  • Reason, J. (1990). Human Error. Cambridge University Press.


Next Step for You: A Safety Stand-Down Agenda

To help you implement these concepts in your organization, here is a simple 30-minute agenda for a "Safety Stand-Down" meeting.


Safety Stand-Down Meeting Agenda: Beyond "Pilot Error"

  • Objective: To introduce the team to the concepts of "Just Culture" and root-cause analysis, moving away from a blame-based approach to safety.

  • Duration: 30 Minutes


1. The "Toy Mode" Discussion (5 Minutes)

  • Ask: "What's the difference between flying for fun on the weekend and flying a commercial mission?"

  • Discuss: The difference isn't the drone; it's the mindset, the pressure, and the consequences. Reiterate that we are operating aircraft, not toys.


2. Introducing "Just Culture" (5 Minutes)

  • Explain: "From now on, if you make an honest mistake and report it, you will not be punished. We want to know about near-misses so we can fix the system that allowed the mistake to happen."

  • Caveat: This doesn't apply to gross negligence (e.g., flying under the influence or willfully ignoring a direct safety order).


3. The "5 Whys" Exercise (15 Minutes)

  • Activity: Pick a recent (or hypothetical) incident or near-miss from your operations.

  • Whiteboard: Write the problem at the top.

  • Ask the Team: "Why did this happen?" Write down the answer.

  • Repeat: Ask "Why?" of each new answer, up to four more times, until you reach a systemic root cause (e.g., a bad SOP, a training gap, client pressure).

  • Action Item: Assign someone to fix the systemic issue identified.


4. Conclusion & Commitment (5 Minutes)

  • Reiterate: "Our goal is not perfect pilots; it's a perfect system that protects imperfect pilots."

  • Call to Action: Encourage the team to use the new reporting system for the next "close call" they experience.

 
 
 
