The most important events across your accounts happen between CRM entries. A champion leaves. A competitor's contract expires. A school board discusses your product category. A STEM grant is awarded. None of this is in your CRM - and your current tools have no way of knowing.
You missed this
Champion departed a top-20 account
Your primary contact left the district five days ago. Your CRM still shows them as active. You'll find out when the renewal stalls in Q3.
With signal infrastructure
Stakeholder-departure signal fires within 24 hours
External-source monitoring detects the change. Renewal risk auto-escalates. Save play assigned to the owning CSM within 4 hours.
You missed this
Competitor contract expiring in a key district
A competitor's multi-year contract in one of your target accounts expires in 90 days. Your team has no displacement play because they don't know the contract exists.
With signal infrastructure
Competitive-displacement opportunity surfaces on the signal queue
Public procurement data feeds flag the expiry. Cooperative purchasing pathway identified. Displacement play routed to the AE with a full account brief.
Why most alerts aren't signals.
Diagnosis
Most revenue teams conflate "alerts" with "signal infrastructure." An alert is a notification. A signal is a governed unit of information that has an owner, an action, and a resolution. The distinction matters because alerts produce noise; signals produce outcomes.
Four diagnostic questions separate a working signal infrastructure from an alert sprawl:
Can every alert be traced to a named owner within 60 seconds? If not, it's a shared-channel notification. Nobody owns it.
Does every alert have a defined action menu? If the receiving rep has to improvise what to do next, the alert isn't wired to a playbook.
Is there a defined resolution state that closes the loop? If "I saw it" counts as resolution, you're measuring acknowledgment, not intervention.
Does the outcome feed back into the detection logic? Without a feedback loop, your thresholds drift from reality and false-positive rates creep up until the team starts ignoring the channel.
Signal infrastructure is the discipline of answering yes to all four, at scale, across every signal family, with governance and audit. Manual heroics can approximate it at 50 accounts. At 500 accounts, infrastructure is the only thing that works.
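To make the four questions concrete, here is a minimal sketch of a signal as a governed unit. The field names are illustrative assumptions, not a prescribed schema:

```typescript
// Illustrative only: field names are assumptions, not a prescribed schema.
type Resolution =
  | { state: "open" }
  | {
      state: "resolved";
      // The outcome feeds back into detection logic (question 4).
      outcome: "confirmed_risk" | "confirmed_opportunity" | "false_alarm" | "already_handled";
    };

interface GovernedSignal {
  id: string;
  family: string;          // e.g. "renewal_risk"
  owner: string;           // a named person, not a shared channel (question 1)
  actionMenu: string[];    // plays the owner chooses from, no improvising (question 2)
  resolution: Resolution;  // explicit close-the-loop state, not "I saw it" (question 3)
}
```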
Five principles of a good signal.
Design
Signals are engineered, not discovered. A signal that violates any of these five principles will decay into noise within a quarter. The team will start ignoring the channel, and the infrastructure you built becomes a wiki entry nobody reads.
PRINCIPLE 01
Actionable, not informational.
Anti-pattern: "NPS declined by 4 points" fires as a signal. The rep sees it, shrugs, moves on. There's no defined action because the scope is too vague.
Every signal should imply a bounded action the owner can take within one work session. “NPS declined on a strategic account during the renewal window” is actionable. Reach out, investigate, document. “NPS declined” without context is a dashboard widget, not a signal.
PRINCIPLE 02
Composed, not atomic.
Anti-pattern: "Usage declined" fires on every small dip. Rep gets 40 alerts a week, ignores the channel, then misses the one that actually mattered.
Good signals are composite. They combine multiple conditions that, together, cross an intervention threshold. “Usage declined 20%+ AND renewal within 90 days AND no open expansion conversation” fires maybe twice a quarter per rep, but every one of those is worth the attention. The composition filters the noise.
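As a sketch, that composite reduces to a predicate over a handful of account fields. The field names and thresholds below are illustrative assumptions:

```typescript
// Hypothetical account snapshot; field names are assumptions for illustration.
interface AccountSnapshot {
  usageDelta30d: number;              // -0.25 means usage down 25% over 30 days
  daysToRenewal: number;
  hasOpenExpansionConversation: boolean;
}

// Fires only when all three conditions cross the intervention line together,
// which is what keeps volume at a couple of firings per rep per quarter.
function lowAdoptionInRenewalWindow(a: AccountSnapshot): boolean {
  return (
    a.usageDelta30d <= -0.2 &&        // usage declined 20%+
    a.daysToRenewal <= 90 &&          // renewal within 90 days
    !a.hasOpenExpansionConversation   // no open expansion conversation
  );
}
```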
PRINCIPLE 03
Explainable, not black-box.
Anti-pattern: AI-generated "risk score: 73" with no decomposition. The rep can't contest it, can't coach from it, can't calibrate when it's wrong.
A signal has to say why in plain language, with evidence pointers. “Firing because: 25-day stage stall + single-threaded + two pushes this quarter.” If the rep disagrees, they can open the evidence and discuss the specific input. Without explainability, the system becomes adversarial. Reps resist it instead of using it.
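One way to carry that rationale with the firing is to pair every plain-language clause with an evidence pointer. This structure, and the crm:// references inside it, are a sketch, not a required format:

```typescript
// Each clause points at the input that produced it, so a rep who disagrees
// opens the specific evidence instead of fighting an opaque score.
interface SignalExplanation {
  summary: string;
  reasons: { clause: string; evidenceRef: string }[];
}

const example: SignalExplanation = {
  summary: "Firing because: 25-day stage stall + single-threaded + two pushes this quarter",
  reasons: [
    { clause: "25-day stage stall", evidenceRef: "crm://opportunity/123/stage-history" },
    { clause: "single-threaded", evidenceRef: "crm://opportunity/123/contacts" },
    { clause: "two pushes this quarter", evidenceRef: "crm://opportunity/123/close-date-log" },
  ],
};
```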
PRINCIPLE 04
Scoped, not global.
Anti-pattern: One signal definition fires the same way across every segment, tier, territory, and motion. A 30-day inactivity signal means something very different for a Tier-1 strategic account than a Tier-3 transactional account.
Signals need scope modifiers: segment, tier, territory, motion, phase-of-relationship. The signal family is shared. The threshold that triggers is scoped. The architecture that supports this is a base signal definition plus a lookup table of scope-specific overrides, not 40 duplicate signals.
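A minimal sketch of that architecture, assuming a 30-day inactivity family; the scope keys and numbers are illustrative:

```typescript
interface InactivityThreshold {
  maxInactiveDays: number;
}

// One base definition for the whole family.
const base: InactivityThreshold = { maxInactiveDays: 30 };

// Lookup table of scope-specific overrides instead of 40 duplicate signals.
const overrides: Record<string, Partial<InactivityThreshold>> = {
  "tier:1": { maxInactiveDays: 14 },            // strategic accounts tolerate less silence
  "tier:3": { maxInactiveDays: 60 },            // transactional accounts run quieter
  "motion:self-serve": { maxInactiveDays: 45 },
};

// Resolve the firing rule for an account's scopes; later scopes win.
function resolveThreshold(scopes: string[]): InactivityThreshold {
  return scopes.reduce((t, s) => ({ ...t, ...(overrides[s] ?? {}) }), { ...base });
}

// e.g. resolveThreshold(["tier:1"]) yields { maxInactiveDays: 14 }
```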
PRINCIPLE 05
Closed-loop, not fire-and-forget.
Anti-pattern: Signal fires, rep acts (or doesn't), no outcome feedback. Six months later, nobody knows which signal families actually predicted churn and which were noise.
Every signal needs a resolution outcome field: what happened, and did the action work? That outcome feeds back into the scoring weights. Signal families that correlate with real outcomes get amplified. Signals that fire but never correlate with anything get deprecated or retuned. Without the loop, your signal infrastructure's predictive value decays quarter by quarter.
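A toy version of the feedback step; the multiplicative update and the clamp values are assumptions for illustration, not a recommended tuning algorithm:

```typescript
type Outcome = "confirmed_risk" | "confirmed_opportunity" | "false_alarm" | "already_handled";

// Confirmations amplify a family's weight; false alarms decay it.
// A real system would apply this on a governed cadence, not per event.
function updateFamilyWeight(weight: number, outcome: Outcome): number {
  if (outcome === "already_handled") return weight;         // neutral: signal was right, just late
  const confirmed = outcome === "confirmed_risk" || outcome === "confirmed_opportunity";
  const next = confirmed ? weight * 1.05 : weight * 0.95;   // false alarms decay the weight
  return Math.min(2.0, Math.max(0.25, next));               // clamp so one quarter can't whipsaw it
}
```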
The design insight
Every signal that exists should pay rent on the channel. If it fires weekly and produces no action, it is taxing attention without creating value. That is why the team will ignore the next signal that actually matters.
The signal hierarchy: atomic → compound → derived.
Architecture
Good signal infrastructure is layered. Atomic signals detect events. Compound signals combine atomic signals with thresholds. Derived signals inject context from outside the composite. Each layer uses the one below it.
Layer 1 · Atomic
Event detection
Low-level observations. Usage dropped 15%. Contact added. Email sent. Meeting held. Contract uploaded. These are building blocks, not signals you route to reps.
Layer 2 · Compound
Threshold-crossing composites
Combinations of atomic signals plus thresholds that cross an intervention line. These are the signals reps see. They are the actionable unit.
Examples: "low adoption in renewal window," "single-threaded late-stage deal," "champion departed on strategic account"
Layer 3 · Derived
Context-enriched intelligence
Compound signals enriched with outside-the-org context: procurement data, market events, peer-cohort benchmarks, competitive activity. This is where the strategic signals live.
Examples: "competitor contract expiring in your target district," "peer cohort showing expansion pattern you lack," "regulatory shift affecting buying authority"
A common mistake: building Layer 2 signals without a clean Layer 1. The atomic events live in five systems that don't talk to each other, so every compound signal requires its own one-off integration. The architectural fix is a unified event stream, one place where atomic signals are captured, before any compound signal is defined.
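The capture layer can be as thin as one normalized event shape that every source writes into. The shape below is an assumption for illustration:

```typescript
// One atomic event shape, normalized before any compound signal exists.
interface AtomicEvent {
  source: "crm" | "product" | "cs" | "calendar" | "external";
  type: string;                     // e.g. "usage_drop", "contact_added", "contract_uploaded"
  accountId: string;
  occurredAt: string;               // ISO-8601 timestamp
  payload: Record<string, unknown>;
}

const eventStream: AtomicEvent[] = [];  // stand-in for a real log or stream

// Compound signals subscribe to this one place, not to five source systems.
function capture(event: AtomicEvent): void {
  eventStream.push(event);
}
```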
The eight signal families.
Taxonomy
Every compound signal belongs to a family. The taxonomy gives you a shared vocabulary across teams and a routing rubric: who owns each family by default, what cadence slot it lands in, and what play library it pulls from.
Signal Taxonomy · 8 Families
Pipeline Hygiene
Stale stage, missing next step, pushed close date, single-threaded deal
Forecast Confidence
Amount-stage mismatch, low activity vs late stage, repeated push, no mutual plan
Renewal Risk
Usage decline, stakeholder turnover, support spike, contracting lag, no sponsor
Play overdue, task aging, missed response window, missing QBR cadence
Severity taxonomy: Critical · Warning · Info.
Routing
Every signal is tagged with a severity level. The severity level drives the SLA, the routing, the escalation path, and which cadence slot the signal lands in. Teams that don't separate severity end up with either alert fatigue (everything is "urgent") or slow response (nothing is "urgent").
Critical
Loss is imminent or ARR impact is material. The composite signal has crossed an intervention threshold. Inaction measurably degrades the account within the week.
Response SLA: 4-24 hours · Cadence slot: Monday triage → Thursday at-risk
Warning
Trend is negative and requires intervention within the quarter. Not yet an emergency, but compounding signals point to the trajectory.
Response SLA: 48-72 hours · Cadence slot: Weekly 1:1s → monthly account review
Info
Context worth knowing, not worth paging on. Informs strategy, not intervention. Ideal for asynchronous surfacing.
Response SLA: 1 week (batched) · Cadence slot: Friday retro / monthly portfolio review
Two governance rules that prevent severity drift: the definition of critical is capped (no more than ~5% of active signals should carry critical, otherwise the level loses meaning), and severity is auditable (if too many warnings get downgraded or ignored, retune the threshold, don't let the field become aspirational).
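Wiring severity to SLA and cadence slot can be a literal lookup, which is what removes judgment from routing. The values mirror the table above; the structure itself is illustrative:

```typescript
type Severity = "critical" | "warning" | "info";

// Severity mechanically drives SLA and cadence slot; no per-rep judgment.
const routing: Record<Severity, { slaHours: [min: number, max: number]; cadenceSlot: string }> = {
  critical: { slaHours: [4, 24],    cadenceSlot: "Monday triage → Thursday at-risk" },
  warning:  { slaHours: [48, 72],   cadenceSlot: "Weekly 1:1s → monthly account review" },
  info:     { slaHours: [168, 168], cadenceSlot: "Friday retro / monthly portfolio review" },
};
```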
The false-positive problem.
Tuning loop
Every signal infrastructure has a false-positive rate. The rate is not the problem. Not tracking and tuning it is. Untuned signals decay into noise, and the team quietly stops trusting the channel. Here's the loop that keeps signals credible:
Capture resolution outcomes. When a rep acts on a signal, the outcome field is mandatory: “confirmed risk,” “confirmed opportunity,” “false alarm,” “already handled,” “not applicable.”
Compute the false-positive rate per signal family monthly. Flag any family with a >25% false-positive rate for review.
Diagnose the source of false positives. Usually one of three: threshold is too loose, scope modifier is missing (signal firing outside the segment it should apply to), or a Layer 1 atomic signal is unreliable.
Tune, don't delete. Raise the threshold, add a scope modifier, or deprecate the upstream atomic. Document the change and the reason in the signal-definition audit log.
Re-measure the following month. If the false-positive rate didn't move, the diagnosis was wrong. Iterate.
This is the part of signal infrastructure that most orgs skip, and it is why a signal system that was great in year one is noise by year three. Without a tuning loop, your thresholds drift from reality. With one, your signals get sharper every quarter.
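The monthly computation in step two of the loop is small enough to sketch. The 25% threshold comes from the loop above; everything else is an illustrative assumption:

```typescript
type Outcome =
  | "confirmed_risk" | "confirmed_opportunity"
  | "false_alarm" | "already_handled" | "not_applicable";

interface ResolvedSignal { family: string; outcome: Outcome }

// Per-family false-positive rate over one month of resolved signals;
// families above the threshold go on the review list.
function familiesNeedingReview(resolved: ResolvedSignal[], maxFpRate = 0.25): string[] {
  const counts = new Map<string, { total: number; fp: number }>();
  for (const s of resolved) {
    const c = counts.get(s.family) ?? { total: 0, fp: 0 };
    c.total += 1;
    if (s.outcome === "false_alarm") c.fp += 1;
    counts.set(s.family, c);
  }
  return [...counts].filter(([, c]) => c.fp / c.total > maxFpRate).map(([family]) => family);
}
```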
Multi-product signal architecture.
Edge case
If you run two products on shared infrastructure, do not duplicate your signal library. Share the definitions; scope the thresholds. A single-store-two-scopes architecture keeps the taxonomy coherent across motions and makes the flywheel work for both products.
One definition library. "Low adoption," "champion departed," "renewal risk" are defined once. Families and compound signal shapes are shared across motions.
Per-motion threshold overrides. The numbers that trigger the signal (days of inactivity, usage delta, renewal window length) are overridden per motion. Shared definition, different firing rules.
Cross-sell as its own signal family. Motion-A customer qualified for Motion B is a distinct signal family with its own routing (Motion B AE) and its own play library. Not a generic expansion signal.
Shared flywheel, scoped tuning. Resolution outcomes feed back into threshold-tuning per motion. The same signal can be well-tuned for Motion A and drifting on Motion B simultaneously. Tune each independently; share the infrastructure.
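Under the single-store-two-scopes idea, the same definition carries motion-keyed thresholds that tune independently. A sketch, with hypothetical motion names and numbers:

```typescript
// One shared definition; firing rules keyed by motion, tuned independently.
interface SharedDefinition {
  family: string;
  thresholdsByMotion: Record<string, { usageDeltaFloor: number; renewalWindowDays: number }>;
}

const lowAdoption: SharedDefinition = {
  family: "renewal_risk",
  thresholdsByMotion: {
    "motion-a": { usageDeltaFloor: -0.2,  renewalWindowDays: 90 },
    "motion-b": { usageDeltaFloor: -0.35, renewalWindowDays: 60 }, // retuned after Motion B drifted
  },
};
```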
Signal infrastructure self-assessment.
12 questions
Twelve yes/no questions to pressure-test the signal layer. Count the no's by category. That's your rebuild punch list.
DESIGN
Every signal implies a bounded action the owner can take in one work session. Not "go look at this dashboard."
DESIGN
Signals are composite (multiple conditions plus thresholds), not atomic event notifications.
DESIGN
Every signal includes plain-language explainability with evidence pointers.
ARCHITECTURE
Atomic events are captured in a unified event stream before compound signals are built on top.
ARCHITECTURE
Signal thresholds are scoped (segment, tier, territory, motion), not global.
ARCHITECTURE
Layer 3 derived signals exist. You enrich compound signals with external context (procurement, market, peer-cohort).
SEVERITY
Critical severity is capped at about 5% of active signals, so it still means something.
SEVERITY
Severity drives SLA, routing, and cadence slot automatically. Not by rep judgment.
TUNING
Every resolved signal carries an outcome field: confirmed risk, confirmed opportunity, false alarm, or already handled.
TUNING
False-positive rate is computed per signal family monthly and drives tuning decisions.
FLYWHEEL
Resolution outcomes feed back into the scoring weights on a governed cadence.
MULTI-PRODUCT
Shared signal definitions with motion-scoped threshold overrides, not duplicate libraries.
A 90-day signal-infrastructure rollout.
Build sequence
Build bottom-up (atomics first, then compound, then derived) and commit to a tuning loop from day one. Don't try to ship all eight families at once. Start with the three that matter most for the next quarter.
WEEKS 1-2
Event Stream
Unify atomic events from CRM, product, CS, calendar into one capture layer. The substrate for everything else.
WEEKS 3-4
Top-3 Families
Pick the three signal families where the ROI is obvious (usually Renewal Risk + Pipeline Hygiene + Expansion Readiness). Ship those compound definitions with scope modifiers.
WEEKS 5-6
Severity + Routing
Tag every signal with severity. Wire severity to owner, SLA, and cadence slot. Remove rep judgment from routing.
WEEKS 7-8
Explainability
Every compound signal fires with a plain-language why and evidence pointers. Reps can contest inputs; trust grows.
WEEKS 9-12
Derived Signals
First Layer-3 derived signals light up (competitor contract expiry, peer-cohort divergence, regulatory events). The strategic signals start arriving.
Two anti-patterns to avoid: building compound signals before the event stream is stable (every signal has its own integration debt and breaks constantly), and skipping the outcome loop (without it, quarter-three will be full of signals nobody acts on and nobody trusts).
How PILLAR builds signal infrastructure.
Signals in PILLAR are first-class architectural objects, not dashboard widgets. The five-step lifecycle (detect → score → explain → route → resolve) is the backbone. The closed-loop tuning is what keeps signals credible over time.
1
Detect
Unified atomic event stream from CRM, product, CS, calendar, and external feeds (procurement, peer-cohort, market events)
2
Score
Rules-based compound signals with scope-specific thresholds. Severity computed: Critical / Warning / Info
3
Explain
Plain-language rationale with evidence pointers. Reps contest specific inputs, not a black-box score
4
Route
Owner assigned. Play recommended. SLA clock starts. Escalation automatic on SLA lapse
5
Resolve
Outcome captured. False-positive rate tracked. Weights tuned on a governed cadence. Model gets sharper every quarter
Your Blueprint scored your Signal Infrastructure. A facilitated session reveals what signals you're currently missing - and quantifies what they're costing you.