
Event-Driven Agent Automation

Confidence: Tier 3 — Emerging pattern, early adopters report positive results but tooling is still maturing.

Instead of manually invoking Claude Code for each task, let external events drive the work. A card moves to “In Progress” in Linear, and Claude picks it up automatically. A GitHub issue gets labeled claude-fix, and an agent starts working on it within seconds.

This is the shift from pull-based (“hey Claude, do this”) to push-based (“events trigger agents”).


  1. Core Concept
  2. The Linear-Driven Agent Loop
  3. Generic Event-to-Agent Pattern
  4. Implementation Example
  5. Event Source Compatibility
  6. Guardrails
  7. Anti-Patterns
  8. Tools & Resources
  9. See Also

Core Concept

Traditional Claude Code usage is interactive: you open a terminal, type a prompt, iterate. Event-driven automation removes the human from the trigger step. The human still reviews output (PRs, code changes), but initiation happens through your existing project management workflow.

flowchart LR
A[Event Source] -->|webhook/poll| B[Event Filter]
B -->|matches rules| C[Context Extraction]
C -->|task data| D[Agent Selection]
D -->|spawn| E[Claude Code Agent]
E -->|results| F[Output Routing]
F -->|PR, comment, card update| A
style A fill:#f9f,stroke:#333
style E fill:#bbf,stroke:#333
style F fill:#bfb,stroke:#333

The loop is self-reinforcing: the agent’s output (a PR, a status update) feeds back into the event source, which can trigger the next step.


The Linear-Driven Agent Loop

The most documented pattern comes from Damian Galarza’s workflow (damiangalarza.com, February 2026). Linear serves as the single source of truth for what needs doing, and Claude Code handles implementation end to end.

flowchart TD
A[Developer moves card to 'In Progress'] -->|Linear webhook| B[Agent picks up card]
B --> C[Read card description + acceptance criteria]
C --> D[Claude Code implements feature]
D --> E[Run tests + lint]
E -->|pass| F[Open PR automatically]
E -->|fail| D
F --> G[Move card to 'In Review']
G --> H[Human reviews PR]
H -->|approve + merge| I[Move card to 'Done']
H -->|request changes| D

The card description acts as the prompt. Good cards with clear acceptance criteria produce good code. Vague cards produce vague code, same as with human developers. The quality of your tickets directly determines the quality of the automation.

Linear’s structured fields (description, acceptance criteria, labels, priority) map naturally to Claude Code’s needs: what to build, how to verify it, and what constraints apply.

  • Cards must have clear acceptance criteria (not just a title)
  • The repo needs a solid test suite for automated verification
  • Branch naming conventions should be deterministic (e.g., feat/LINEAR-123-card-title)
  • PR templates help standardize the agent’s output
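One way to make the card-as-prompt mapping concrete is a small template function. This is a sketch, not the workflow's actual code: the field names and prompt wording are illustrative assumptions.

```shell
# Turn a card's structured fields into a Claude Code prompt.
# Field names and the prompt wording are illustrative, not a fixed schema.
build_prompt() {
  local id="$1" title="$2" description="$3" criteria="$4"
  cat <<EOF
Implement Linear card $id: $title

Description:
$description

Acceptance criteria (done means all of these hold):
$criteria

Work on branch feat/$id. Run the test suite before opening a PR.
EOF
}
```

Usage: `build_prompt "LINEAR-123" "Add rate limiter" "Throttle API calls" "- returns 429 over limit"` emits a complete prompt string; if the criteria argument is empty, that is a signal to skip the card rather than guess.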

Generic Event-to-Agent Pattern

The Linear example is specific, but the pattern generalizes to any event source. Five components make up the pipeline:

Event source: where the trigger originates. It could be a project management tool, a CI system, a monitoring alert, or a custom webhook.

Event filter: not every event should spawn an agent. Filters determine which events are actionable:

Terminal window
# Example: only process cards with the "claude-auto" label
if [[ "$CARD_LABELS" != *"claude-auto"* ]]; then
  echo "Skipping: no claude-auto label"
  exit 0
fi

Context extraction: pull the relevant data from the event payload and format it as a Claude Code prompt. This is where you translate from your tool’s schema to natural language instructions.
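With jq this translation step can be a one-liner. The payload shape below is an illustrative assumption, not Linear's exact webhook schema:

```shell
# Extract task fields from a webhook payload and emit one pipe-delimited
# record, defaulting a null description so downstream parsing never breaks.
extract_task() {
  jq -r '"\(.data.id)|\(.data.title)|\(.data.description // "no description")"'
}
```

The pipe-delimited output feeds directly into a `while IFS='|' read -r id title description` loop like the one in the implementation example below.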

Agent selection: different event types might need different agent configurations. A bug report needs a different CLAUDE.md context than a feature request. You might use different allowed tools, different models, or different safety constraints.
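A simple dispatcher can map an event label to an invocation profile. The sketch assumes the `claude` CLI's `--model` and `--allowedTools` flags (verify against your CLI version); the label-to-config policy itself is an illustrative choice, not a standard.

```shell
# Map an event label to a Claude Code invocation profile.
# The specific model aliases and tool lists are illustrative policy choices.
select_agent_config() {
  case "$1" in
    bug)     echo "--model sonnet --allowedTools Edit,Bash" ;;
    feature) echo "--model opus --allowedTools Edit,Bash,WebSearch" ;;
    *)       echo "--model sonnet --allowedTools Edit" ;;  # conservative default
  esac
}
```

Usage: `claude $(select_agent_config "$LABEL") --print "$PROMPT"`.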

Output routing: where do the results go? Typically a combination of:

  • Git branch + PR (code changes)
  • Comment on the original issue/card (status updates)
  • State transition on the card (moving to next column)
  • Slack notification (human awareness)
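Routing a status update back to the originating card can be as simple as posting a comment. The sketch below only builds the request body for Linear's commentCreate GraphQL mutation (a real mutation, though this wrapper and its variable names are illustrative); pipe the result into the curl call from the polling script.

```shell
# Build the GraphQL request body for commenting on the originating card.
# jq --arg handles all JSON escaping of the comment text.
comment_payload() {
  local issue_id="$1" body="$2"
  jq -n --arg id "$issue_id" --arg body "$body" '{
    query: "mutation($id: String!, $body: String!) { commentCreate(input: {issueId: $id, body: $body}) { success } }",
    variables: {id: $id, body: $body}
  }'
}
```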

Implementation Example

A minimal bash loop that polls Linear for “In Progress” cards and spawns Claude Code agents:

linear-agent-loop.sh
#!/bin/bash
# Polls Linear for cards in "In Progress" state and spawns Claude agents
LINEAR_API_KEY="${LINEAR_API_KEY:?Missing LINEAR_API_KEY}"
TEAM_ID="${LINEAR_TEAM_ID:?Missing LINEAR_TEAM_ID}"
PROCESSED_FILE="/tmp/linear-agent-processed.txt"
MAX_CONCURRENT=3

touch "$PROCESSED_FILE"

poll_linear() {
  curl -s -X POST https://api.linear.app/graphql \
    -H "Authorization: $LINEAR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "query": "query { team(id: \"'"$TEAM_ID"'\") { issues(filter: { state: { name: { eq: \"In Progress\" } }, labels: { name: { eq: \"claude-auto\" } } }) { nodes { id title description } } } }"
    }' | jq -r '.data.team.issues.nodes[] | "\(.id)|\(.title)|\(.description)"'
}

spawn_agent() {
  local issue_id="$1"
  local title="$2"
  local description="$3"
  echo "[$(date)] Spawning agent for: $title ($issue_id)"
  # Record the ID before spawning so the next poll cycle skips this card
  # even while the agent is still running.
  echo "$issue_id" >> "$PROCESSED_FILE"
  claude --print --dangerously-skip-permissions \
    "Implement the following Linear card:
Title: $title
Description: $description
Requirements:
1. Create a feature branch named feat/$issue_id
2. Implement the described feature
3. Run tests and fix any failures
4. Create a PR with the card title" \
    2>&1 | tee "/tmp/agent-$issue_id.log"
}

while true; do
  active_agents=$(jobs -r | wc -l)
  if [ "$active_agents" -ge "$MAX_CONCURRENT" ]; then
    echo "[$(date)] Max concurrent agents reached ($MAX_CONCURRENT), waiting..."
    sleep 30
    continue
  fi
  # Process substitution (not a pipe) keeps this loop in the current shell,
  # so the backgrounded agents remain visible to the `jobs` check above.
  while IFS='|' read -r id title description; do
    if grep -qFx "$id" "$PROCESSED_FILE"; then
      continue # Already processed
    fi
    spawn_agent "$id" "$title" "$description" &
  done < <(poll_linear)
  sleep 60 # Poll interval
done

This is a starting point, not production code. Real deployments need proper error handling, a persistent state store (not a text file), and webhook-based triggers instead of polling.
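When moving from polling to webhooks, verify the payload's signature before acting on it, since the endpoint is now reachable by anyone. The HMAC-SHA256-over-raw-body scheme below is common (Linear, for one, sends a `linear-signature` header), but header names and encodings vary by provider, so treat this as a sketch and check your event source's docs:

```shell
# Verify a webhook body against its claimed HMAC-SHA256 signature.
# Hex encoding of the raw body is assumed; confirm your provider's scheme.
verify_signature() {
  local body="$1" secret="$2" claimed="$3" computed
  computed=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
  [ "$computed" = "$claimed" ]
}
```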


Event Source Compatibility

| Event Source | Trigger Events | Agent Use Case | Integration Method |
| --- | --- | --- | --- |
| Linear | Card state change, label added | Feature implementation, bug fix | GraphQL API / MCP server |
| GitHub Issues | Issue created, labeled | Bug triage, investigation, fix PR | GitHub Actions / webhooks |
| GitHub PR | PR opened, review requested | Code review, automated fixes | GitHub Actions |
| Jira | Transition, sprint assignment | Feature work, tech debt cleanup | REST API / webhooks |
| Slack | Message in channel, emoji reaction | Quick fixes, investigations | Slack API / bot |
| PagerDuty | Incident created | Diagnostic scripts, initial triage | Webhooks |
| Custom webhook | Any HTTP POST | Anything | Direct HTTP endpoint |

Guardrails

Event-driven agents run with less human oversight by design, so guardrails become critical.

An agent might process the same event twice (network retry, duplicate webhook). The agent must check if work already exists before starting:

Terminal window
# Check if branch already exists for this card
if git ls-remote --heads origin "feat/$ISSUE_ID" | grep -q "feat/$ISSUE_ID"; then
  echo "Branch already exists, skipping"
  exit 0
fi

Don’t let a burst of events spawn 50 agents simultaneously. Set hard limits:

  • Max concurrent agents: 3-5 for most teams
  • Cooldown period: Minimum 30 seconds between agent spawns
  • Daily budget cap: Set a maximum token spend per day
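The cooldown and daily-cap checks can be combined into one gate called before each launch. A minimal sketch, with illustrative thresholds and file locations (a real deployment would track token spend, not just spawn count):

```shell
# Enforce a cooldown between spawns and a daily spawn cap.
COOLDOWN=30                                # seconds between spawns
DAILY_CAP=50                               # max agent launches per day
STAMP="/tmp/agent-last-spawn"
COUNT_FILE="/tmp/agent-count-$(date +%F)"  # date-stamped, so it resets daily

may_spawn() {
  local now last count
  now=$(date +%s)
  last=$(cat "$STAMP" 2>/dev/null || echo 0)
  count=$(cat "$COUNT_FILE" 2>/dev/null || echo 0)
  [ $((now - last)) -ge "$COOLDOWN" ] || return 1  # still cooling down
  [ "$count" -lt "$DAILY_CAP" ] || return 1        # daily cap reached
  echo "$now" > "$STAMP"
  echo $((count + 1)) > "$COUNT_FILE"
}
```

Usage: `may_spawn && spawn_agent "$id" "$title" "$description" &`.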

If agents keep failing on a particular type of task, stop trying:

Terminal window
# grep -c both prints a count and signals no-match via its exit status, so
# guard on the file's existence rather than piping through `|| echo 0`
# (which would emit a second "0" when the file exists but has no matches).
FAILURE_LOG="/tmp/agent-failures.log"
FAILURE_COUNT=0
[ -f "$FAILURE_LOG" ] && FAILURE_COUNT=$(grep -c "FAILED" "$FAILURE_LOG")
if [ "$FAILURE_COUNT" -gt 5 ]; then
  echo "Circuit breaker triggered: too many failures"
  # Notify human, pause automation
  exit 1
fi

Even in fully automated flows, keep humans in the loop at critical points:

  • PR review remains manual (agents create PRs, humans approve them)
  • Database migrations never auto-apply
  • Deployment is a separate, human-triggered step
  • Any card touching auth, billing, or PII requires explicit human approval
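The last rule can be enforced mechanically in the event filter. The keyword list below is an illustrative policy, not a standard; tune it to your actual risk areas:

```shell
# Gate: flag cards that touch sensitive areas for explicit human approval.
requires_human() {
  local text="$1" kw
  for kw in auth billing payment PII migration deploy; do
    if printf '%s\n' "$text" | grep -qi "$kw"; then
      return 0  # sensitive keyword found: route to a human, not an agent
    fi
  done
  return 1
}
```

Run it over the card's title plus description; a match skips the spawn and notifies a human instead.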

Anti-Patterns

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Aggressive polling | Hammering the API every 5 seconds wastes resources, gets you rate-limited | Use webhooks when available, poll no faster than every 60 seconds |
| No circuit breaker | Agent fails repeatedly on same task, burning tokens indefinitely | Track failures per task, stop after 3 attempts, alert human |
| No dead letter queue | Failed events disappear, nobody knows work was missed | Log failed events to a persistent store for manual review |
| Unbounded concurrency | 20 cards move at once, 20 agents spawn, machine melts | Hard cap on concurrent agents (3-5 is reasonable) |
| Vague cards as prompts | "Fix the thing" produces garbage code | Enforce card quality standards, skip cards without acceptance criteria |
| No state persistence | Script restarts, re-processes everything from scratch | Store processed event IDs in a database, not in-memory |
| Skipping PR review | Agent pushes directly to main | Always go through PR flow, humans review the output |
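The state-persistence fix can stay lightweight: sqlite3 gives atomic writes and survives restarts without a real database server. A minimal sketch with an illustrative default path; the naive string interpolation assumes trusted event IDs (use a proper client library for untrusted input):

```shell
# Persistent processed-event store backed by sqlite3 instead of a flat file.
DB="${AGENT_DB:-/tmp/agent-state.db}"

init_db() {
  sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS processed (id TEXT PRIMARY KEY, at TEXT DEFAULT CURRENT_TIMESTAMP);'
}
# INSERT OR IGNORE makes marking idempotent: duplicates are silently dropped.
mark_done()    { sqlite3 "$DB" "INSERT OR IGNORE INTO processed (id) VALUES ('$1');"; }
already_done() { [ "$(sqlite3 "$DB" "SELECT COUNT(*) FROM processed WHERE id = '$1';")" -gt 0 ]; }
```

Usage: `already_done "$id" && continue` in the polling loop, `mark_done "$id"` before spawning.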

Tools & Resources

  • linear-kanban-mcp (0xikarus on GitHub): Exposes the Linear API for kanban board management directly from Claude Code. Enables reading cards, updating states, and managing labels without leaving the agent context.
  • skillsllm.com: Offers a skill that orchestrates the full planning, validation, and execution cycle starting from a Linear card. Handles the translation from card metadata to structured Claude Code prompts.
  • Scrum Master Agent (lobehub.com): Auto-detects whether it is running inside Claude Desktop or Claude Code and adapts its behavior accordingly. Useful as a starting point for context-aware agent design.