Summary
Messages submitted while the main agent is in the "Waiting for background agents" / deferred-idle state get stranded in the separate Queued (N) UI region and do not automatically drain at the next available opportunity. The only way to get a stranded message processed (without destroying it via Esc/double-Esc) is to submit another message — which by itself doesn't drain the queue either, but it advances the user-visible state in a way that often coincides with a background subagent completion, making it appear as though the second submission "pushed the first one through."
(This investigation was performed collaboratively with Copilot CLI itself. I used the CLI in this session to both reproduce the symptom and dig through the minified app.js shipped in @github/copilot@1.0.48 to identify the precise control flow that causes it. The technical analysis below is the agent's findings, reviewed by me.)
Version
GitHub Copilot CLI 1.0.49-1 (the same behavior was also confirmed in 1.0.48 via source inspection).
Observed behavior
The CLI has two distinct queueing UIs for input submitted while the agent is busy:
- Inline pending (normal case): when the agent is actively executing a tool or streaming a response, a message submitted with Enter appears under the active turn in the main conversation window with a `└ [pending]` decoration. It is consumed at the next turn boundary.
- Separate Queued (N) region (the bug surface): when the agent's turn has ended while a background subagent (`task` tool with `mode: "background"`, etc.) is still running, a message submitted with Enter lands under a separate Queued (N) header rendered below the input prompt — visually similar to but distinct from the inline pending UI.
Once in the Queued (N) region, the message can stay there indefinitely. Subsequent user input does not reliably drain it. In one of my real long-running sessions, the same status-update request had to be re-typed multiple times across the session before getting through (resulting in turns 17, 14, 8, 7, 6, 5, 4, 2, 1 having NULL assistant responses in the session store — they were submitted but never processed).
Root cause (from minified source inspection of app.js)
The relevant control flow in the processQueuedItems path:
```js
enqueueItem(e) {
  this.addItemToQueue(e);
  this.emitEphemeral("pending_messages.modified", {});
  if (!this.isProcessing) {
    // 🔴 The bug guard:
    if (e.kind !== "resume_pending" && this.hasActiveBackgroundWork()) return;
    this.processQueue().catch(...);
  }
}

hasActiveBackgroundWork() {
  return this.taskRegistry
    .list({ includeCompleted: false })
    .some(e => e.type === "agent" && e.status === "running");
}

// At the end of a turn:
if (this.hasActiveBackgroundWork()) {
  this.idleDeferredByBackgroundWork = true;
  this.idleDeferredAborted = e;
} else {
  this.emitSessionIdle(e);
}

// In the bg-task completion callback:
this.taskRegistry.setOnCompletionCallback(s => {
  // ... record completion, emit notifications ...
  this.emitDeferredSessionIdleIfReady();
});

emitDeferredSessionIdleIfReady() {
  if (!this.idleDeferredByBackgroundWork || this.isProcessing || this.hasActiveBackgroundWork()) return;
  if (this.itemQueue.length > 0) {
    this.processQueue().catch(...); // ← the drain path
    return;
  }
  this.emitSessionIdle(this.idleDeferredAborted);
}
```
The trap: enqueueItem adds messages to itemQueue but refuses to start processQueue() whenever any background agent is running. The only mechanism that drains itemQueue in this state is emitDeferredSessionIdleIfReady, which fires from a taskRegistry completion callback. So the queue can only drain when all running subagents complete AND the bg-task callback for the most-recently-completed one fires.
Two ways this becomes a user-visible stall:
- If new background subagents are continuously dispatched (autopilot, multi-step planning), there is never a clean window with `hasActiveBackgroundWork() === false`, so the drain path is never taken.
- The completion callback path can race with re-entering active processing (e.g., the main agent has resumed work for some other reason), and the queue is left waiting for the next cycle.
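To make the trap concrete, here is a minimal self-contained simulation of the control flow above. It is a simplified reconstruction from the minified excerpt, not the actual implementation: method names mirror the source, while the bg-agent counter standing in for `taskRegistry`, the logging, and the driver at the bottom are illustrative assumptions.

```js
// Simplified model — NOT the real implementation. taskRegistry is reduced
// to a counter; async completion callbacks become plain method calls.
class SessionSim {
  constructor() {
    this.itemQueue = [];
    this.isProcessing = false;
    this.runningBgAgents = 0; // stands in for taskRegistry.list(...)
    this.idleDeferredByBackgroundWork = false;
  }
  hasActiveBackgroundWork() { return this.runningBgAgents > 0; }

  enqueueItem(text) {
    this.itemQueue.push(text);
    console.log(`enqueued "${text}" (Queued (${this.itemQueue.length}))`);
    if (!this.isProcessing) {
      if (this.hasActiveBackgroundWork()) return; // 🔴 the guard: message strands here
      this.processQueue();
    }
  }
  processQueue() {
    while (this.itemQueue.length) console.log(`processed "${this.itemQueue.shift()}"`);
  }
  endTurn() { // turn ends while bg work runs: idle is deferred, not emitted
    if (this.hasActiveBackgroundWork()) this.idleDeferredByBackgroundWork = true;
    else console.log("session idle");
  }
  bgAgentCompleted() { // the completion callback, i.e. the ONLY drain path
    this.runningBgAgents--;
    if (!this.idleDeferredByBackgroundWork || this.isProcessing || this.hasActiveBackgroundWork()) return;
    if (this.itemQueue.length > 0) { this.processQueue(); return; }
    console.log("session idle (deferred)");
  }
}

const s = new SessionSim();
s.runningBgAgents = 1;            // bg subagent dispatched during the turn
s.endTurn();                      // turn ends -> "Waiting for background agents"
s.enqueueItem("status update?");  // stranded: guard returns early
s.enqueueItem("status update??"); // resubmission strands for the same reason
s.runningBgAgents++;              // autopilot dispatches another bg agent
s.bgAgentCompleted();             // one finishes, one still running -> no drain
s.bgAgentCompleted();             // last one finishes -> queue finally drains
```

Running this with node shows both messages staying stranded through the first completion and draining only after the last background agent finishes: stall mode 1 above, in miniature.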
Why "submit another message" sometimes appears to work as a workaround
It doesn't actually drain the queue directly — the hasActiveBackgroundWork() guard rejects the new submission for the same reason. But submitting another message:
- Pushes a new entry into `itemQueue`, which forces a `pending_messages.modified` ephemeral event and re-renders the queue UI.
- Often happens to coincide with a background agent's completion callback firing (because the user typically only retries when they perceive a delay, which is itself correlated with bg agents finishing up).
So from the user's perspective, "I submitted a second message and the first one went through" — which gave me the wrong initial mental model. The actual cause is the bg agent completing, not the resubmission.
Reproduction
Empirically confirmed in this conversation by intentionally triggering "background agent in flight + main agent actively running a sync tool call" simultaneously. Submitting a message during the overlap window landed it in the separate Queued (N) region. The message remained there until both:
- The 90-second background `Start-Sleep` agent completed, AND
- The 60-second main-agent sync tool call completed.
Once both conditions cleared, the queued message drained on its own and was processed normally.
We also tried isolating each condition (bg-only, main-busy-only) and neither alone reproduced the stall — both conditions must be active simultaneously, and we could only reliably trigger it when the main agent was doing meaningful in-flight work (not just idle-waiting on bg agents).
Repro recipe, runnable inside copilot — paste the following as a single prompt:

```text
Do exactly two things in this single turn:
(1) Dispatch a background-mode task subagent that sleeps for 90 seconds.
(2) Immediately after, run a SYNC powershell tool call `Start-Sleep -Seconds 60` with initial_wait 60.
Do not end your turn until that sync command has completed.
```
While the agent is in step 2, submit any test message via Enter. It lands in Queued (N).
Expected behavior
Queued user messages should drain at the next turn boundary regardless of whether background subagents are still running. Background subagents are independent execution contexts — their presence should not block the main agent from accepting new direction from the user.
A reasonable fix would be either:
- Remove the `this.hasActiveBackgroundWork()` short-circuit in `enqueueItem` — let `processQueue()` start, and rely on the main agent's own turn boundaries to determine when to actually pull from the queue, OR
- Have `enqueueItem` always call `processQueue()` (or `emitDeferredSessionIdleIfReady()`) and let `processQueuedItems`'s own internal `isProcessing`/state checks decide what to do (sketched below).
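For concreteness, a minimal sketch of the second option, written against the excerpt above. Method names come from the minified source; the exact shape is an assumption, not a tested patch:

```js
enqueueItem(e) {
  this.addItemToQueue(e);
  this.emitEphemeral("pending_messages.modified", {});
  if (this.isProcessing) return; // inline-pending path already covers the busy case
  // No hasActiveBackgroundWork() short-circuit: processQueue()'s own
  // isProcessing / turn-boundary checks become the single gatekeeper.
  this.processQueue().catch(...); // same (elided) error handling as the current call site
}
```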
Workarounds (in order of usefulness)
1. Submit another message ← the cargo-cult workaround that appears to work and is what users naturally discover. It does not actually fix the queue state; it is a placebo that buys time, and the real driver is the eventual bg-agent completion. Until a proper fix lands, this path of least friction is what most users will reach for.
2. Wait it out. Confirmed: the queue does drain automatically once both `idleDeferredByBackgroundWork` clears AND `hasActiveBackgroundWork()` returns false. For finite, well-behaved bg agents this works but feels unresponsive — minutes of waiting is common.
3. Cancel current activity with Esc once. This aborts in-flight work via the AbortController, which ultimately flushes through the same completion-callback path and drains the queue. Side effect: you lose whatever the main agent was doing.
4. Copy-paste manually. Select the text of the queued message from the terminal, double-Esc to drop the queue entry (this is destructive — confirmed by changelog 1.0.40: "Ctrl+C and double-Esc remove pending queued messages one at a time"), then re-submit when you see a clean idle state.
5. `ctrl+q` is NOT a dequeue. The status-bar hint `ctrl+q enqueue` that appears during active execution is one-directional — it adds to the queue, it cannot remove from it. We tested this empirically in this investigation; the only `"ctrl+q"` string literal in the minified bundle is the keybind label inside the `enqueueHint` status renderer.
Severity
Medium. The bug doesn't lose data (messages eventually drain), but it severely degrades the responsiveness of long-running agentic sessions, especially autopilot mode with frequent bg-agent dispatch. In one of my real m2a sessions over a 12-hour span, this caused approximately 18 of 47 turns (~38%) to have NULL assistant responses corresponding to messages that I had to retype because they appeared to be stuck.
Environment
- OS: Windows 11
- Terminal: Windows Terminal
- CLI: 1.0.49-1 (npm package `@github/copilot`)
- Model in use during investigation: Claude Opus 4.7 (1M context)
Reachability
I'm filing this from my personal account because GitHub Enterprise Managed User policy blocks my work account from posting to this repo. If maintainers want to follow up directly, my work GitHub login is samallon. Feel free to ping there if any of the diagnostic detail needs clarification or if a different set of session-store events would help isolate the drain failure.
Related issues
I searched the repo before filing and found no exact match, but these are adjacent and may inform triage:
- The change that introduced `idleDeferredByBackgroundWork` and the `hasActiveBackgroundWork()` guard, which is exactly the guard now stranding user messages.