ED Grand Rounds is a weekly, multidisciplinary, practice-focused forum designed to improve decision-making, teamwork, and safety culture in the Emergency Department, and to strengthen the care delivered in real life, beyond textbooks, protocols, and routine lectures.
Rather than repeating academic content already covered in the MEM, IMT, and EMS teaching programmes, ED Grand Rounds focuses on the non-technical competencies that determine patient safety: clinical reasoning under uncertainty, communication, teamwork, escalation, and system learning.
Real ED case from last week (frontline → doctor → M&M → teamwork demo)
Non-punitive, Just Culture, behaviour-focused
One takeaway + one action item captured weekly
Emergency medicine failures rarely occur due to a lack of knowledge. They occur due to missed cues, poor communication, delayed decisions, and team breakdowns. ED Grand Rounds exists to address exactly that. It is not about what we know. It is about how we think, how we act, and how we improve — together.
ED Grand Rounds = (Improve) Decision making + Teamwork + Safety culture
Quick facts:
When: Wednesdays 8:30 AM
Where: Academic Hall, Aster MIMS Kottakkal
Format: 90 min | time-boxed
Attendance: Google Form submission
Live Zoom broadcast to satellite centres
Structured, time-bound, and participation-driven
Why ED Grand Rounds Exists
Most adverse events in emergency care do not occur due to a lack of knowledge. They occur due to:
Missed or weak early cues
Cognitive overload and bias
Delayed escalation
Communication breakdowns
Team coordination failures
ED Grand Rounds exists to make these invisible factors visible, discuss them safely, and convert experience into improvement.
Evidence-informed design
The structure of ED Grand Rounds is informed by established safety science and medical education literature. The forum therefore serves both educational and patient-safety objectives.
Safety-II (Resilient healthcare): learn from frontline adaptations (Walk-in case)
Dual-process & cognitive bias: surface real-time reasoning (Doctor case)
Just Culture: system learning without blame (M&M)
CRM / Human factors: teamwork behaviours under pressure (Teamwork demo)
Debriefing science: structured reflection (AI, GAS, Plus–Delta)
Retrieval practice/Adult learning theory: MCQs improve retention (Review)
Session Structure:
Each session follows a fixed structure to ensure consistency, relevance, and depth.
Each segment is explicitly mapped to an established evidence framework.
1. Walk-In Case (Lead: EMS/Nursing team) – 20 min
A real ED patient encounter from the previous week is presented exactly as it was experienced at arrival.
Purpose: To understand frontline judgement and early risk recognition (Safety-II).
Focus:
Initial information and uncertainty
Subtle cues that prompted concern or reassurance
Adaptive actions taken to maintain safety
Tool: Advocacy–Inquiry prompts
2. Doctor Case Path – 20 min
The treating doctor explains their thinking process in real time.
Purpose: To surface clinical reasoning under pressure, not guideline recall. (Dual-process)
Focus:
Prioritisation and decision points
Uncertainty and competing considerations
Escalation reasoning and timing
The doctor may introduce a brief academic framing ONLY to clarify:
Why this decision mattered (Pathophysiology that explains urgency)
Why uncertainty existed (Overlap syndromes, atypical presentations)
Why escalation threshold chosen (Risk-benefit trade-off)
Tool: Advocacy–Inquiry (AI)
3. M&M (Senior) – 20 min
A structured, non-punitive review of a panel-selected case.
Purpose: To convert errors and near-misses into shared system learning => risk amplification + system learning (Just Culture)
Focus:
Timeline reconstruction
Where and how risk increased
Human factors, communication gaps, and system friction
Tool: AI + GAS
Please note the core distinction between the two segments:
Doctor’s Clinical Decision Path (forward-looking) → “How I thought and decided in real time, given uncertainty.”
M&M Session (backwards-looking) → “How the system and team dynamics amplified or mitigated risk.”
Time orientation: Decision Path → forward-looking | M&M → backwards-looking
Unit of analysis: Decision Path → thinking | M&M → system + team
Tone: Decision Path → exploratory | M&M → analytical
Ownership: Decision Path → “I” | M&M → “We / the system”
Question: Decision Path → “Why did I choose this?” | M&M → “Why did this make sense here?”
4. Clinical Gestalt – 15 min
In acute care and emergency medicine, clinical gestalt ("gut feeling" or "clinical impression") is the rapid, intuitive recognition of disease patterns based on prior experience.
Purpose: To strengthen pattern recognition and decision-making using retrieval practice (scripts + retrieval)
Format:
3–5 spotters per session
Presented by IMT trainee
Identify → interpret → next action
Tool: strict spotter format (no teaching)
5. Modified Megacode (Teamwork demo) – 25 min
A live, scenario-based demonstration. Clinical complexity is deliberately kept simple; the spotlight is on team behaviour, not medical brilliance.
Purpose: To train Crisis Resource Management (CRM) skills that directly impact patient safety (leadership/communication/escalation)
Focus:
Leadership and role clarity
Communication and closed-loop commands
Escalation timing
Task prioritisation under pressure
Tool: GAS + Plus–Delta + rubric
6. Review (Google Form) – 5 min
Each session is observed by a multidisciplinary panel to ensure learning quality and psychological safety.
Purpose: To support reflective learning and close the learning–action loop.
MCQ retrieval + 1 reflective/QI item
Attendance = form submission
Tool: MCQ + One Delta
Panel Composition:
Senior Emergency Physician (Chairperson, ED Grand Rounds)
Nursing / EMS representative
MEM resident representative
IMT MRCP representative
Roles: A panel member is not an examiner, a lecturer, or a judge. A panel member is an observer, a sense-maker, and a learning facilitator.
Primary responsibility:
Protect non-punitive culture & psychological safety
Enforce time-boxing and segment integrity
Capture one actionable improvement per session
Pre-Session Responsibilities: Before the session, panel members should:
Review the session agenda
Understand the intended learning focus (thinking, teamwork, communication)
Commit to non-punitive language
Panel members must not pre-form opinions about individuals or outcomes. During the session, panel members observe four domains:
A. Clinical Thinking: Were decisions explained, not justified?
B. Communication: Was the language clear and closed-loop?
C. Teamwork & Human Factors: Leadership visibility, Task delegation, and Closed-loop communication
D. ED System & Discipline: Equipment/process friction
Rules for Speaking During the Session (use one of the following evidence-based techniques):
Pendleton principle: Start with positives, limit corrective points, and focus on learner reflection first.
Ask what went well
Reinforce positives
Ask what could improve
Add one constructive suggestion
(Note: It is primarily designed for one-to-one feedback)
Advocacy–Inquiry (AI): This model is used mainly in simulation debriefing. Structure:
Advocacy: State an observation (a neutral fact)
Inquiry: Ask for the participants’ thinking
Example: “I noticed escalation was delayed when BP dropped. I’m curious what the team was considering at that moment.” => Why it fits: non-judgemental, invites thinking, works in groups, and protects safety
GAS model: Best for closing discussion
Gather: What happened?
Analyse: Why did it happen?
Summarise: What will we change?
Plus–Delta: Perfect for multidisciplinary settings (MDT meeting)
Plus: What helped?
Delta (= change): What would we change next time?
This avoids blame and keeps energy constructive.
Starting with Plus has a neurocognitive rationale: it reduces the threat response. When people sense evaluation, the brain shifts into defensive mode (amygdala activation); starting with “what worked” lowers the perceived threat and preserves participation.
“What went wrong?” implies error and blame. “Delta” implies adjustment, improvement, and forward-looking change.
Example 1:
❌ “Why didn’t you escalate earlier?”
✅ “What would we change to make escalation easier next time?”
Example 2:
“Let’s quickly capture two things that helped today (Plus) and one thing we should change before next week (Delta).”
Summary:
Please note that excess feedback reduces learning retention
Pendleton: best for 1:1 feedback | structured, reflective
Advocacy–Inquiry: best for deep exploration | exposes thinking
Plus–Delta: best for group (multidisciplinary) improvement; decides the change | fast, safe, forward-looking
GAS: best for closing the loop
In Grand Rounds, use one dominant model per segment.
Case presentation: Advocacy–Inquiry (AI) => Why this fits best: the goal is to expose thinking, not judge performance, and it draws out multiple perspectives.
Morbidity & Mortality (M&M) Reflection => Best model: Advocacy–Inquiry + GAS (combined). Why: M&M is the highest-risk segment for blame, and it needs both depth and closure. How to run it:
AI → explore decisions, biases, system factors
GAS → structure the discussion flow
No “what went wrong” questions. Only “what increased risk” questions
Teamwork Demonstration/Modified Megacode: Best model: GAS + Plus–Delta => Why: Because this discussion is Behaviour-focused, Group performance, and needs actionable outcomes
How:
GAS to structure debrief
Plus–Delta to extract change
Panel language: “One thing that helped the team was… One thing we should change next time is…”
Communication & Service Training: Best model: Plus–Delta (primary) + AI (selective). Rule to apply:
Use Plus–Delta openly
Use AI only if a risky behaviour appears
“Let’s capture what language worked, and what we should change next time.”
End-of-Session Closure: Best model: Plus–Delta (mandatory) => Converts discussion into action
Chair must say: “Our one change before next week is…”
Deploy them deliberately
AI → when thinking matters
GAS → when structure is needed
Plus–Delta → when change is the goal
Pendleton → Individual reflection
ED Grand Rounds – Session Structure in detail
1. The Case That Walked In (20 min)
Presenter: EMS / Nurse
What:
Standardise how EMS presents uncertainty
One real ED patient encounter from the previous week
Initial information available upon arrival
Areas of uncertainty or ambiguity
Subtle cues that prompted concern or reassurance
What made the team pause, watch, or escalate
How the team adapted to incomplete or conflicting information
Actions taken to maintain safety despite uncertainty
The evidence logic for this section: this is resilient performance analysis (Hollnagel’s model), learning from what usually goes right.
Why Safety-II? The ED reality is that if staff followed the rules exactly, patients would die. ED safety comes from team anticipation and informal escalation.
Safety-II is interested in:
Early uncertainty, not confirmed sickness
Weak signals, not obvious red flags
Adaptive actions, not delays or mistakes
Why things went right, not why they went wrong
Avoid traditional presentation styles such as “one sick patient from last week”, followed by discussion of the first decision, first red flags, and first delays. The hidden problems with this style (a Safety-I approach) are:
“Sick patient” → outcome-labelled
“Red flags” → checklist thinking
“Delays” → implies failure framing
This subtly pushes people toward error hunting, not adaptive success.
Background for the evidence: Erik Hollnagel is a Danish safety scientist who has worked in the aviation, healthcare, and nuclear power industries. He asked a very uncomfortable question: “If healthcare workers make so many ‘errors’, why does the system succeed most of the time?” That question changed modern safety science.
Safety-I: the traditional way (how hospitals usually think). The core idea is Safety = absence of harm
Find “what went wrong” -> Identify error -> Fix the error -> Add a rule / protocol
Language used: “Root cause” (RCA)
Safety-II: Hollnagel’s correction (this is the upgrade). The core idea is that safety = the ability to succeed under varying conditions: not “no errors”, but resilience.
How it works => Study how work is actually done -> Understand adaptations -> Learn how people detect risk early -> Learn how they compensate for system gaps
Language used: “Adaptation”/“Anticipation”/“Resilience”
The Key insight is that people are not the problem. People are the solution.
Safety-I reduces failure (e.g., in surgical procedures) = traditional patient safety
Safety-II increases success (learning from what usually goes right) = future patient safety
In the Walk-in Case Presentation, if the discussion is “What was the diagnosis?”, “What protocol should have been followed?”, or “What went wrong?”, that is Safety-I thinking. The EMS walk-in presentation should instead ask Safety-II questions:
What made this patient feel at risk?
What cues were weak but concerning?
What uncertainty existed?
What adaptation was made?
What prevented deterioration?
This turns the walk-in case into a scientific safety exercise.
Sample: Side-by-side comparison, Walk-in Case: Safety-I vs Safety-II
Safety-I version ❌
“Vitals were normal”
“Diagnosis was unclear”
“Later, the patient deteriorated”
Learning point: missed diagnosis
Safety-II version ✅
“Vitals were borderline but trending”
“EMS felt something was off”
“We kept the patient in observation”
“Early escalation prevented collapse”
Learning: early risk sensing worked
We are adopting this style because it changes behaviour
After a few weeks:
EMS stop apologising for “not knowing”
Doctors stop dismissing early unease
Teams learn to respect anticipation over certainty
That’s resilient healthcare in action. The walk-in case presentation was redesigned using Safety-II principles to examine frontline adaptive behaviours and early risk recognition under conditions of uncertainty, rather than retrospective error analysis. The presentation is not just ED education; it is design at a safety-science level. If we keep words like “delay” and “red flags”, people will hunt errors instead of studying resilience. Change the language → you change the thinking.
2. Clinical Decision Path (20 min)
Presenter: Emergency Doctor
What it really trains:
Clinical reasoning under uncertainty
Explain thinking, not guidelines.
Decision-making, not diagnosis
Escalation judgement
Explicitly map: Cue → interpretation → action → consequence
Train presenters to verbalise 'why escalation felt hard'
Along with a brief topic discussion
Evidence base:
Dual Process Theory (System 1 vs System 2),
Cognitive bias literature (anchoring, premature closure),
Acute care decision-making studies
Doctor-led case presentations were designed to surface real-time clinical reasoning using dual-process theory, with explicit exploration of intuitive judgment, analytical reasoning, and common cognitive biases known to affect acute care decision-making.
Dual Process Theory explains how doctors think
Cognitive bias literature explains why thinking fails
Acute care studies explain why outcomes suffer
Clinical Decision Path design is a practical solution to all three. This session trains
Awareness of thinking mode
Willingness to say “I was uncertain”
Early escalation behaviour
Safer decision-making under pressure
Background to the evidence:
The Dual Process Theory (System 1 vs System 2), the core idea is that human decision-making runs on two parallel systems:
System 1 — Fast, intuitive, automatic, Pattern recognition (In ED terms: “Something is not good in the initial appearance”)
System 2 — Slow, analytical, differential diagnosis, Guideline checking (In ED terms: ABCDE review)
Why this matters in emergency care: in the ED, you start in System 1 (you must) and confirm with System 2. Errors occur when System 2 is never activated (the primary assessment is missed).
Cognitive Bias Literature (Anchoring, Premature Closure)
Cognitive biases are predictable shortcuts of System 1. They are normal human cognition under pressure. For example:
Anchoring bias: Locking onto the first piece of information (symptom) and failing to adjust. ED example:
“Young patient = functional”
“Known COPD = infective exacerbation”
“Normal BP = stable”
"Breathing difficulty" = "Pulmonary edema"
"B lines on POCUS" = "Pulmonary edema"
"Chest pain" = "ACS"
Why does it happen?
Early labels simplify chaos
High workload rewards speed over accuracy
Premature closure: Stopping diagnostic thinking once an explanation seems “good enough”. ED example:
Stopping after one diagnosis
Ignoring new conflicting data
Delayed escalation
Biases increase with:
Time pressure
Fatigue
Cognitive overload
Hierarchy
Why this matters to Grand Rounds
Makes biases discussable without blame
Reframes them as system risks, not personal flaws
Normalises saying: “At that point, I was anchored.” => That is advanced professional maturity.
Acute Care Decision-Making Studies
Most acute care errors are not knowledge errors. They are:
Decision-timing errors
Escalation delays
Communication failures
Bias-driven misjudgements
Key findings from acute care research: Clinicians often recognise deterioration before they can articulate it. Delays occur when:
Uncertainty is uncomfortable
Escalation feels socially risky
Evidence feels “not enough yet”
In other words, People sense danger early but hesitate to act.
The Clinical Decision Path addresses this in four ways:
Forces articulation of early unease (System 1)
Validates uncertainty (Safety-II)
Examines escalation as a decision, not a failure
Uses AI and Plus–Delta to keep it non-punitive
3. M&M Reflection (20 min)
Presenter: Senior
Purpose: To transform adverse events and near-misses into system learning, behavioural improvement, and risk reduction, without blame or personal judgement. => focuses on learning, not judgment.
Core principles followed:
Just Culture principles
Modern M&M redesign literature
Human Factors Engineering
Crisis Resource Management (CRM)
Case Selection Criteria
Involving uncertainty or escalation decisions
Illustrating system or teamwork challenges
Avoid: Rare diagnostic curiosities
Structure:
1. Timeline Reconstruction (WHAT happened); Arrival → key decisions → outcome
Take a real ED case (with missed cues, delayed escalation, or system friction) from the previous week
2. Advocacy–Inquiry Exploration (WHY it made sense); Facilitator language: “I noticed X happened at this point. I’m curious what the team was thinking then.” Focus areas: Information documented in the EMR, Cognitive load and uncertainty, Environmental pressures, & Competing priorities
3. Human Factors & CRM Lens (HOW risk increased). Explicitly examine:
Communication flow,
Leadership clarity
Role allocation
Situational awareness
Escalation barriers
(Avoid diagnostic correctness debates. Examine coordination and timing)
4. GAS Closure (WHAT we change)
Gather: Key observations
Analyse: Why risk increased
Summarise: One behaviour or system change (one actionable takeaway)
Evidence:
Just Culture
Modern M&M redesign literature
Human Factors &
CRM
Language Rules
Allowed:
“This increased risk because…”
“Given the context, this decision made sense…”
Not allowed:
“Mistake”
“Error”
“Should have”
Naming individuals
Background for evidence:
Just Culture — the foundation of modern M&M
Just Culture is a safety framework that separates:
Human error (unintentional slips)
At-risk behaviour (taking shortcuts under pressure)
Reckless behaviour (conscious disregard of risk)
The core principle: We don’t ask “Who failed?” We ask “Why did it make sense for this action to occur at that time?”
Traditional M&M focuses on "errors" and publicly dissects individual decisions. The result: defensive behaviour, silence from juniors and nurses, and learning shuts down. This is anti-safety, even when intentions are good. What Just Culture demands in M&M:
Behaviour is discussed, not character
Decisions are examined in context
System pressures are explicit
Blame language is prohibited
(This is why AI + GAS choice is correct for the Panel as well)
Modern M&M redesign literature
Over the last 15–20 years, M&M sessions were found to be:
Educationally inconsistent
Psychologically unsafe
Poor at generating system change
What modern literature recommends
Be structured
Focus on process, not outcome
Include human factors
Generate actionable system change
Protect psychological safety
Key shift in language (old M&M -> Modern M&M)
“Error” -> “Risk amplification”
“What went wrong?” -> “What increased risk?”
“Missed diagnosis” -> “Decision made under uncertainty”
“Delay” -> “Timing influenced by context”
Human Factors & CRM (Crisis Resource Management): borrowed from the aviation industry
Why M&M must include them: most crises fail due to coordination, not knowledge. Human factors study how:
Humans interact with systems/protocols
Cognitive limits affect performance
Environment, workload, and design influence decisions
CRM focuses on non-technical skills:
Leadership
Communication
Situational awareness
Task management
Escalation
Why classic M&M ignored this: traditional M&M over-focused on diagnosis and ignored teamwork, communication, and hierarchy effects (a Safety-I focus on harm, rather than a Safety-II view of how work succeeds).
How Human Factors & CRM reshape M&M -> instead of asking: “Was the diagnosis correct?”, ask:
“How did information flow?”
“Who had situational awareness?”
“When did escalation feel difficult?”
“What distractions or overload existed?”
Mapping the evidence-based approaches to M&M
Evidence base -> What it targets (How M&M applies it)
Just Culture -> Blame reduction (No naming, no judgement)
Modern M&M -> Learning quality (Structured timeline + AI)
Human Factors -> Cognitive limits (Bias, overload, interruptions)
CRM -> Team failure points (Leadership, communications, escalation)
The M&M session is designed using Just Culture principles and modern morbidity and mortality frameworks, incorporating human factors and crisis resource management concepts to focus on system vulnerabilities, decision-making under uncertainty, and team behaviours rather than individual blame.
Just Culture answers “How do we discuss failure safely?”
The modern M&M redesign answers, “How do we make M&M useful?”
Human Factors & CRM answer “Why competent teams still fail.”
Courtesy: This framework is grounded in the work of James Reason and modern safety science. Response mapping follows these ideas:
Human error → console the individual, redesign the system
At-risk behaviour → coach, remove the incentives for shortcuts
Reckless behaviour → disciplinary action (handled outside M&M)
Difference between M&M and RCA concepts
Root Cause Analysis (RCA) is a formal investigative tool used after a serious adverse event. The primary question is “Why did this failure happen?”, to identify system-level causes and prevent recurrence of the same error. It uses a problem-dissection tool, the “5 Whys” (keep asking “Why?” until you reach a cause that is actionable and systemic, not human blame). Example:
Why did antibiotics get delayed? → Order placed late
Why was the order late? → Sepsis not recognised
Why not recognised? → Vitals not flagged
Why not flagged? → No sepsis trigger in triage
Why? → Protocol gap
M&M is a regular clinical learning forum using real cases. The primary question is “What can we learn from this case?”
Core purpose:
Improve clinical reasoning
Improve decision-making
Share cognitive lessons
Build professional maturity
Side-by-side comparison (RCA vs M&M):
Purpose: RCA → prevent recurrence | M&M → improve learning
Trigger: RCA → serious harm | M&M → any meaningful case
Focus: RCA → system failure | M&M → clinical reasoning & decisions
Style: RCA → formal investigation | M&M → educational discussion
Ownership: RCA → institutional leadership | M&M → clinical teams
Outcome: RCA → action plan | M&M → insight & learning
Culture: RCA → safety enforcement | M&M → professional growth
RCA asks: “What broke in the system?”
M&M asks: “What should we think or do better next time?”
RCA without M&M → bureaucratic
M&M without RCA → repeated mistakes
High-functioning ED/ICU needs BOTH
4. Clinical Gestalt segment (15 min)
Presenter: IMT Resident (ED Grand Rounds provides structured exposure to acute medical cases equivalent to the unselected acute take. Active participation in ED Grand Rounds demonstrates engagement with acute care learning, reflective practice, and patient safety-focused discussions.)
Purpose:
Train rapid pattern recognition
Risk stratification
“Next best action” thinking
Format rule:
2-3 spotters per session
60–90 seconds per spotter
No long explanations
Focus on “What is it?” → “Why does it matter?” → “What would you do next?”
Evidence used:
Script Theory in Expert Clinicians
Background evidence:
Script theory comes from cognitive psychology (Henk Schmidt's work). It explains how experts recognise situations quickly without conscious step-by-step reasoning.
Cognitive psychology deals with how the mind actually processes information, from input to action: not feelings, not personality. It is about thinking mechanics. The two engines of thinking are System 1 (fast, intuitive) and System 2 (logical, analytical).
The core idea of script theory is that, with experience, clinicians build illness scripts, which include typical presentation patterns, red flags, and immediate actions. So when an expert sees a patient, they don’t start from first principles. They subconsciously ask: “Which script does this resemble?”
Expert vs novice thinking
Novice
Uses analytic reasoning
Builds long differentials
Slow, effortful
Expert
Matches patterns
Filters irrelevant data
Recognises danger early
Faster, safer in acute care
This is not laziness — it is cognitive efficiency.
Relevance: Since ED work is time-critical, experts survive by rapid script activation, early risk recognition, and quick “next step” decisions. Spotter sessions train scripts. Spotters ask (pattern > detail):
“What is this?”
“Why is it dangerous?”
“What do you do next?”
This mirrors script activation, not textbook recall.
Note: Scripts don’t come from one exposure. They form from multiple encounters.
The spotter segment was designed using script theory, providing short exposure to high-risk patterns to strengthen rapid recognition and next-action decision-making in acute care.
5. Teamwork Megacode (25 min)
Managing the sick patient as a live scenario
Focus on:
Role clarity
Communication
Leadership under pressure
Task delegation
Escalation timing
Purpose: To strengthen non-technical skills that directly impact patient safety.
This is not about perfect algorithm execution. The spotlight is on team behaviour, not medical brilliance, because most ED failures happen here: not in diagnosis, but in coordination.
Evidence base
Crisis Resource Management (CRM)
Simulation debriefing literature
Aviation safety models
Evidence Background:
Crisis Resource Management (CRM): It originated in aviation and was adapted to healthcare for managing high-stakes, time-critical situations with multiple team members - Exactly for ED.
CRM is not about medical knowledge. It is about how teams function when things go wrong.
Core CRM domains (focuses on non-technical skills):
Leadership & followership
Communication (clear, closed-loop)
Situational awareness
Task prioritisation
Resource utilisation
Escalation and call-outs
Remember, Most crises fail due to coordination failure, not knowledge failure. CRM provides a shared behavioural framework so teams don’t rely on individual brilliance.
Simulation Debriefing Literature: It explains why the “after” matters more than the “scenario”
Simulation research consistently shows that most learning does NOT occur during the scenario. It occurs during the debrief.
A poorly debriefed simulation = wasted time.
The high-quality debriefing literature emphasises that debriefs should:
Be structured
Be psychologically safe
Explore thinking, not performance
Focus on behaviours and decisions
End with actionable change
This is where Advocacy–Inquiry, GAS, and Plus–Delta come in.
“Debriefing with Good Judgment” matters because it rejects:
Shaming
Teaching-by-lecture
“What went wrong?” interrogation
Instead, asks: “Given what you knew at the time, why did that action make sense?”
The modified megacode (The teamwork megacode)
Keeps clinical content simple
Shifts focus to team behaviour
Uses structured debriefing
Extracts one actionable change
Aviation Safety Models (Where all of this comes from)
The aviation industry realised something disturbing in the 1970s: pilots were highly skilled and aircraft were technically sound, yet crashes still occurred. Investigations showed that the aircraft didn’t fail; the team did. The Swiss Cheese Model (which directly influenced healthcare safety thinking) shows:
Accidents occur when multiple small failures align
Human actions are one layer, not the root cause
Systems must be designed to absorb human variability
Aviation responded with "Crew Resource Management" (healthcare borrowed this as "Crisis Resource Management", CRM) by:
Flattening hierarchy
Standardising communication
Training captains to invite challenge
Training juniors to speak up
Outcome => Dramatic reduction in fatal accidents
Concepts:
Aviation safety explains "Why accidents happen?" => Systems, not individuals
CRM explains "How teams should behave?" => train the team with Modified megacode
Simulation debriefing explains "How teams learn" => GAS + AI + Plus–Delta
The teamwork demonstration and debriefing components were informed by Crisis Resource Management principles, simulation debriefing literature, and aviation safety models. These frameworks emphasise non-technical skills, structured reflection, and system-focused learning as key determinants of safety in high-risk ED environments.
Note: CRM training fails if participants are cognitively overloaded by medical detail.
Why focus on communication and escalation? Because aviation learned long ago: Silence kills more reliably than ignorance.
Aviation taught us why teams fail
CRM taught us how teams should act
Simulation debriefing taught us how teams learn
The ED Grand Rounds has operationalised all three.
If we don’t measure teamwork explicitly (during training as well as in the resuscitation bay), people will keep believing “the problem was medical complexity.” The rubric makes it visible that how we work together is often the real determinant.
6. Review session
Format: Reflective submission (Google Form)
Attendance = submission
Evidence base:
Reflective learning theory
Adult learning theory (Knowles)
QI (quality improvement) learning loops
Reflective Learning Theory: experience alone does not lead to learning. Learning happens when experience is examined, interpreted, and reframed. Two types of reflection:
Reflection-in-action: Thinking while doing (e.g., adjusting during resus)
Reflection-on-action: Thinking after the event (e.g., Grand Rounds reflection)
ED Grand Rounds primarily trains reflection-on-action, which then improves reflection-in-action during real shifts. The MCQs test knowledge recall, not behaviour.
Adult Learning Theory (Malcolm Knowles): Adults learn best when the content is relevant and when they want to know why something matters. The ED Grand Rounds fits adult learning perfectly - uses real cases (relevance), solves real problems (problem-centred), and invites experience sharing (peer learning), allowing autonomy (reflection, not instruction)
QI Learning Loops: learning without action does not improve systems, so the review continues even after Grand Rounds. The QI frameworks emphasise closed learning loops:
Identify an issue
Reflect and analyse
Implement a change
Review impact
Most educational sessions:
End with “good discussion”
Do not assign ownership
Do not revisit outcomes
This can create learning fatigue.
The review MCQs are evidence-based for knowledge consolidation, pattern reinforcement, and retrieval practice, but they are insufficient if used alone. Hence, a reflective/QI element is added to this session.
MCQ = Retrieval, not Reflection
Reflection → action
Education → governance
Reflective learning explains how experience becomes insight.
Adult learning explains how professionals learn best (case-based questions).
QI loops govern how learning changes systems (One Delta + follow-up).
The final review component was informed by reflective learning theory and adult learning principles, incorporating a quality-improvement learning loop to ensure that reflection translates into actionable system and behavioural change.
The review MCQs are framed as:
Knowledge anchoring
Retrieval check
Not as assessment, judgement, or the primary learning outcome
Attendance is linked to submission; attendance tracking is a logistical function.
MCQ Pattern
MCQs should be retrieval practice: case-linked and “next step” oriented, not trivia.
5–7 MCQs, delivered via Google Form
One reflective/QI item is included.
Example: a question on one teamwork or communication behaviour
Each ED Grand Rounds session concludes with a short online review consisting of case-linked MCQs to promote retrieval practice, combined with a single reflective or quality-improvement prompt to support reflective learning and close the learning loop.
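The review-form pattern above (5–7 case-linked MCQs plus exactly one reflective/QI item) can be sketched as a minimal validator. This is illustrative only, assuming a simple dict-based question representation; the live form is a Google Form, and all names here are hypothetical:

```python
def validate_review_form(mcqs, reflective_items):
    """Check a weekly review form against the ED Grand Rounds pattern:
    5-7 case-linked, next-step MCQs plus exactly one reflective/QI item."""
    errors = []
    if not 5 <= len(mcqs) <= 7:
        errors.append("expected 5-7 MCQs, got %d" % len(mcqs))
    for q in mcqs:
        # Retrieval practice, not trivia: every MCQ must be case-linked.
        if not q.get("case_linked"):
            errors.append("MCQ not case-linked: %r" % q.get("stem"))
    if len(reflective_items) != 1:
        errors.append("expected exactly one reflective/QI item")
    return errors

mcqs = [{"stem": "Next step in undifferentiated shock?", "case_linked": True}] * 5
print(validate_review_form(mcqs, ["Describe one teamwork behaviour you observed."]))  # []
```

A coordinator could run such a check before publishing each week’s form, so the pattern stays consistent across sessions.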
Rules of engagement
No blaming, no naming
Focus on process, not outcomes
Speak from “what I saw / what I thought”
One person speaks at a time
Respect multidisciplinary voices
ED Grand Rounds Coordination Team
ED Grand Rounds is coordinated by a dedicated internal team to ensure consistency, discipline, and continuity.
Coordinators’ responsibilities:
Case selection and presenter prep
Zoom + logistics
Form collection + dashboard updates
Follow-up on action items
Review Team: ED Grand Rounds is not a collection of activities. It is a designed safety intervention.
Your role is to protect the link between evidence → structure → behaviour.
Dr Jean K Babu, ED Grand Rounds Secretary
Dr Mukthar, Panel Chair
Dr Shanif Muhammed, ED Faculty
Dr Nimshya, Debriefer
Dr Fathima, ED ICU Coordinator
EMS Tenzin, EMS Coordinator
EMS Harsha, ED In-Charge
Dr Aswathi, IMT Chief Resident
Dr Adeeba, MEM Chief Resident
How the review team measures impact
Discuss → Debrief → One Delta → Assign owner → Review next week
Attendance & participation
Teamwork and CRM behaviours (panel rubric)
Psychological safety sentinel flags
Top recurring risks (monthly synthesis)
Actions completed vs pending
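The weekly loop above (Discuss → Debrief → One Delta → Assign owner → Review next week) can be sketched as a minimal action-item tracker. This is a sketch only, assuming an in-memory list; the actual programme uses Google Form submissions and a dashboard, and every name below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ActionItem:
    """One Delta captured at a Grand Rounds session."""
    session_date: str
    description: str
    owner: str
    status: str = "pending"  # "pending" until closed at a later review

@dataclass
class Dashboard:
    items: list = field(default_factory=list)

    def assign(self, session_date, description, owner):
        """Assign ownership of this week's One Delta."""
        item = ActionItem(session_date, description, owner)
        self.items.append(item)
        return item

    def close(self, item):
        item.status = "done"

    def weekly_review(self):
        """Actions completed vs pending, reported at the next session."""
        done = sum(1 for i in self.items if i.status == "done")
        return {"done": done, "pending": len(self.items) - done}

dash = Dashboard()
first = dash.assign("2024-05-01", "Standardise handover checklist", "Dr A")
dash.assign("2024-05-08", "Escalation poster in resus bay", "Dr B")
dash.close(first)
print(dash.weekly_review())  # {'done': 1, 'pending': 1}
```

The point of the sketch is the closed loop: every One Delta gets an owner, and nothing is considered done until a later review says so.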
Post–ED Grand Rounds Debrief; ED HOD + Coordination Team Review
The debrief is based on panel Google Form submissions, not opinions.
Purpose: A closed debrief (20–30 minutes) is conducted by the HOD with the coordination team within 24–48 hours after ED Grand Rounds to review learning quality, psychological safety, and action feasibility, ensuring that insights translate into sustained system improvement. The HOD’s role is stewardship of the system, not adjudication of people; the debrief also ensures ED Grand Rounds becomes a high-reliability process.
Session Integrity Check: Did ED Grand Rounds function as designed?
Psychological Safety Scan: Was it safe to speak openly?
Learning Quality Review: Was the learning meaningful, not cosmetic?
Action Item Validation: Are the identified actions realistic and owned?
Pattern Recognition (Monthly Lens)
Are the same risks recurring?
Is the same segment repeatedly weak?
Is escalation still being delayed?
Is teamwork improving or stagnating?
This monthly lens ensures the Grand Rounds system itself continues to improve.
Why ED Grand Rounds Is Designed This Way
It connects Patient-Safety & Decision-Making Evidence to Practice.
What do we mean by “evidence from patient safety”? Classic safety thinking (“Safety-I”) focused on adverse events: identify what went wrong. Modern safety science (“Safety-II / Resilient Healthcare”) adds a second lens: how things usually go right under variable conditions (“work-as-done”). “Evidence from patient safety” therefore means research and real-world safety programmes in acute care that study how and why harm occurs in real clinical environments. Over the last two decades, this research has shown that:
Most adverse events occur before the diagnosis is clear
Harm often results from delayed escalation, not a wrong diagnosis
Communication and teamwork failures are major contributors
Systems shape behaviour more than individual competence
Safety depends less on what clinicians know and more on how clinicians think, interact, and adapt under pressure.
What do we mean by “decision-making research”? Decision-making research examines how clinicians actually make decisions, especially in time-critical settings like the ED. Key findings include:
Clinicians rely heavily on fast, intuitive thinking (pattern recognition)
This is necessary, but vulnerable to cognitive biases (anchoring, premature closure)
Safer decisions occur when clinicians reassess, seek disconfirming cues, and escalate early
Expertise is built through repeated exposure to decision moments, not lectures
How this evidence shaped ED Grand Rounds
1. Safety science: Classic safety thinking (“Safety-I”) focused on adverse events: identify what went wrong and fix it. Modern safety science (“Safety-II / Resilient Healthcare”) adds a second lens: understanding how things usually go right under variable conditions (work-as-imagined vs work-as-done). This is the logic behind the walk-in session; it forces discussion at the moment of uncertainty, before hindsight and labels.
2. Just Culture framework separates behaviours into categories, each requiring a different response:
Human error (slips, lapses) → console + redesign systems
At-risk behaviour (drift, shortcut, risk not recognised) → coach + remove pressures
Reckless behaviour (conscious disregard of substantial risk) → manage separately
Human factors: harm often comes from predictable conditions (patient safety research):
cognitive load (too much to process)
interruptions and multitasking
crowding/time pressure
fatigue
poor interfaces between teams/services
Just Culture is a safety tool to keep discussions truthful and useful
Modern safety practice asks, “Why did this make sense at the time?” (instead of “Who failed?”).
This is the logic behind M&M, not a clinical lecture.
3. Safety evidence from cognitive science shows that common cognitive failures include:
Anchoring (sticking to first impression)
Premature closure (stopping once a plausible diagnosis appears)
Dual-process theory explains this:
System 1 = fast, intuitive
System 2 = slow, analytical
Doctor Clinical Decision Path: It trains clinicians to reveal their thinking, identify inflection points, and practise safe “switching” and escalation.
4. Many safety failures in acute care are not knowledge failures; they are failures of:
unclear leadership
role confusion
closed-loop communication failure
poor situational awareness
delayed escalation
CRM is the set of behaviours designed to prevent exactly that. This is why “Teamwork Demo” exists
Walk-in case: Safety-II + work-as-done: learn early uncertainty & frontline adaptations.
Doctor decision path: Dual-process + cognitive bias: reveal thinking traps & escalation reasoning.
Clinical Gestalt: Reinforces pattern recognition and next-step action (high-frequency, short cycles). (This is consistent with expertise-building logic even when not cited formally.)
M&M: Just Culture + human factors: system learning without blame.
Teamwork demo: CRM behaviours + structured debriefing: safety through team performance.
Review: MCQ + reflection anchors learning and produces QI actions (retrieval + learning loop).
ED Grand Rounds Success
Uncertainty is openly discussed
Early escalation is valued
EMS and junior doctors' voices are heard
One actionable learning emerges every session
The same risks become less frequent over time
In Summary
ED Grand Rounds is not a lecture series.
In the ED:
Everyone starts with incomplete information
Patients evolve
Early impressions are often provisional
If people feel they must defend their first decision, they:
Delay escalation
Ignore weak signals
Rationalise instead of reassessing
That is how harm happens.
ED Grand Rounds is a culture-building and safety-improvement platform designed to help emergency teams think clearly, communicate effectively, and work together under pressure.
Consistency built the system.
Reflection strengthens it.
Teamwork sustains it.
Standard Presenter slide heading
The strategic rationale for adopting a standardised presentation format:
If we don’t standardise slides, panels can’t score reliably.
If we don’t map forms to evidence, reviewers won’t buy in.
Hence, if we do both, the programme becomes:
Auditable
Replicable
Publishable