Revenue Architecture One-Pager
Pillar III: Process • Renewal Triage at Scale
17 min read
Revenue OS · Layer IV · Cadences →

At hundreds of renewals per quarter, you cannot sort by renewal date.

The way most CS teams triage renewals is by sorting Salesforce on renewal_date ascending and working the top of the list. That works at 50 accounts. It breaks at 500. The accounts that need attention are not the ones closest to renewal. They are the ones where risk, ARR, and remaining runway still multiply out to a number big enough to act on.

5%
Typical miss rate when triage is sorted by renewal date alone. On a $50M ARR base, that is $2.5M in preventable churn per year.
44 days
Median lead time gained by portfolio-scoring vs waiting for the renewal-date cliff. The difference between a save and a loss.
3-4x
Throughput multiplier for a CSM team that runs sequenced save plays vs individual improvised outreach.

Why sort-by-date breaks at scale.

Diagnosis

At 50 accounts, the CSM carries the portfolio in their head. They know which accounts are healthy, which have sponsor risk, which had a bad QBR three weeks ago. Renewal date is a reasonable proxy for attention because the CSM's instinct fills in everything else.

At 500 accounts, that instinct breaks. Three structural reasons:

  1. The signals that predict churn fire everywhere, not just at the renewal date. Usage decline, stakeholder turnover, budget-cycle shifts, support spikes, champion departures. They are happening right now on an account whose renewal is 180 days out. Sorted-by-date misses them entirely until the account is already lost.
  2. Risk and ARR are not correlated. Your riskiest accounts are not necessarily your biggest, and your biggest are not necessarily your riskiest. Sorting by one dimension guarantees the other gets under-served.
  3. Effort-to-save varies 10x across accounts. Some accounts are recoverable with one well-timed email. Others require a six-task coordinated play. Sorted-by-date treats them identically. The CSM spends 20 hours on an account that was unrecoverable and skips the one that would have been saved with 30 minutes.

The architectural fix is not a better sort. It is a scored queue with tier-based actions, and a prioritization formula that accounts for all three dimensions at once.

The triage queue: five tiers, five actions.

Framework

Every renewal in the portfolio sits in exactly one tier, scored daily. The tier determines the action, the owner, and the cadence slot. No account is "in review" or "pending a conversation." The tier is the state.

T1 · SAVE IN PROGRESS
Active save play deployed. Risk is confirmed, intervention is underway, CSM or AE is executing a sequenced play with SLA clocks running. Tracked daily until resolution.
Thursday review
T2 · AT-RISK
Composite risk score over threshold. No play deployed yet. Needs triage this week to decide save strategy or early-renew attempt. Requires leadership attention if not claimed in 48 hours.
Monday triage
T3 · MONITOR
Trending negative but below intervention threshold. Watched for signal compounding. If two more warnings fire within 30 days, the account escalates to T2.
Weekly 1:1 surface
T4 · HEALTHY
Renewing as expected. Relationship heat is good, usage is steady, NPS is positive, contract terms are clear. Touchpoints per the standard CS cadence. Expansion signals flagged when they fire.
Standard cadence
T5 · AUTO-RENEW
Contracted auto-renewal, no human touch required. Low-touch segment or strategic-account auto-renewals with explicit opt-out windows. CSM is only alerted if a signal fires.
Exception-only

Two governance rules keep the queue honest. First: every account must be in exactly one tier. No account sits in "we'll figure it out next week." Second: transitions are auditable. When an account moves from T3 Monitor to T2 At-Risk, a log entry captures which signals fired and who decided the transition. Six months later, you can replay what happened.
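The two governance rules can be made concrete in code. A minimal Python sketch, with illustrative class and field names (none of this is a PILLAR API): every account maps to exactly one tier, and every move appends to an auditable transition log that records which signals fired and who decided.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Tier(Enum):
    T1_SAVE_IN_PROGRESS = 1
    T2_AT_RISK = 2
    T3_MONITOR = 3
    T4_HEALTHY = 4
    T5_AUTO_RENEW = 5

@dataclass
class TierTransition:
    account_id: str
    from_tier: Tier
    to_tier: Tier
    signals: list          # which signals fired, e.g. ["champion_departed"]
    decided_by: str        # a person, or "daily_scoring_job"
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class TriageQueue:
    def __init__(self):
        self.tiers = {}      # account_id -> Tier (exactly one tier per account)
        self.audit_log = []  # append-only transition history

    def move(self, account_id, to_tier, signals, decided_by):
        from_tier = self.tiers.get(account_id, Tier.T4_HEALTHY)
        self.audit_log.append(
            TierTransition(account_id, from_tier, to_tier, signals, decided_by))
        self.tiers[account_id] = to_tier

    def replay(self, account_id):
        """Six months later: the full transition history for one account."""
        return [t for t in self.audit_log if t.account_id == account_id]
```

Because the log is append-only and the tier map holds one value per account, "in review" has no place to live: an account is in a tier, and every change is replayable.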

ARR × Probability ÷ Effort: the prioritization math.

Economics

Inside T2 At-Risk, you still have to rank. A CSM with 40 at-risk accounts cannot work all of them. The ranking is a simple three-variable formula.

Save Priority Score
Priority = ARR × Save Probability ÷ Expected Effort
Sort T2 descending. Work top-down. The math answers the real question: which save produces the most ARR preserved per CSM hour invested?

Each variable needs definition:

  1. ARR: annualized contract value, ideally net of expected contraction. A $200K account where the customer has already signaled they want to reduce seats by 30% is really a $140K save target, not $200K.
  2. Save Probability: historical win rate of the save play applicable to this account's signal pattern, scoped to segment and risk profile. If your renewal_save play has a 62% win rate on Enterprise-tier accounts with champion-departed signals, this account inherits that base rate. No guessing.
  3. Expected Effort: CSM hours required to execute the save play, from the play library's estimated effort field. A five-task deal-rescue play is ~15 hours. A one-touch re-engagement is ~2 hours. Effort is an explicit number, not a vibe.
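With the three variables defined, the ranking is a one-line function plus a sort. A minimal sketch; the account figures are hypothetical, not benchmarks:

```python
def save_priority(arr_net, save_probability, expected_effort_hours):
    """Priority = ARR × Save Probability ÷ Expected Effort
    (ARR preserved per CSM hour invested)."""
    return arr_net * save_probability / expected_effort_hours

# Illustrative T2 queue: ARR net of signaled contraction,
# probability from historical play win rates, effort from the play library.
t2 = [
    {"account": "acme",    "arr": 140_000, "p_save": 0.62, "effort_h": 15},
    {"account": "globex",  "arr": 350_000, "p_save": 0.70, "effort_h": 8},
    {"account": "initech", "arr":  60_000, "p_save": 0.45, "effort_h": 2},
]
ranked = sorted(
    t2,
    key=lambda a: save_priority(a["arr"], a["p_save"], a["effort_h"]),
    reverse=True,
)
```

Note that the $350K account ranks first even though a date sort might bury it: at 8 hours of work, it preserves roughly $30K of ARR per CSM hour.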

A CSM looking at a queue sorted by Priority makes fundamentally different choices than one looking at a queue sorted by renewal_date. The highest-priority account might be six months out with $350K ARR and a 70% save probability at 8 hours of work. Sorted by date, it doesn't show up until Q4. Sorted by priority, it's the first save of the week.

The prioritization insight
Sorting by renewal date optimizes for the date. Sorting by ARR times probability divided by effort optimizes for revenue preserved. Those are not the same portfolio.

The save play library.

Orchestration

Every T2 and T1 account needs a save play assigned. "I'll figure out what to do" is the tribal-knowledge failure mode that breaks at scale. A library of four to six named save plays covers the majority of at-risk patterns, with a clear rubric for when to deploy each.

Renewal Save
When: health score drop, renewal within 120 days, usage decline
Five-task sequence over 14 days. Executive outreach to sponsor, usage re-engagement session, ROI review, contract-term flexibility conversation, written renewal commitment. Win rate benchmark: 50-65% in the CSM-owned segment.
Champion Rescue
When: primary champion departs or changes role
Urgency play. Map new stakeholder landscape in 72 hours, secure meeting with likely successor champion, re-pitch value from scratch, document expectations with new owner. Win rate depends heavily on whether a second sponsor exists.
Adoption Recovery
When: usage decline, still-positive sentiment, renewal 90+ days out
Slower play. Six-week sequence: usage audit, targeted re-enablement, milestone recommitment, mid-cycle success review. Lower-intensity than Renewal Save because you have more runway.
Competitive Displacement Defense
When: competitor activity detected, RFP signals, procurement conversations referencing alternatives
Highest-urgency play. Needs AE plus CSM plus executive sponsor. Counter-position, contract-term re-negotiation, proof-of-value acceleration. Run the play or lose the account.

Two rules that keep the play library credible. First: plays have measured win rates, tracked per segment and signal pattern. Under 30% win rate on enough samples means retire or rebuild the play. Second: plays have a walk-away criterion. If a play has been active for its defined duration and the customer has not engaged, the CSM escalates to a save-or-close decision with leadership. No eternal open saves.
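The retire-or-rebuild rule is mechanical enough to automate. A sketch of a quarterly library review, assuming a 20-run minimum sample (the source says "enough samples" without fixing a number, so that threshold is an assumption):

```python
MIN_SAMPLES = 20    # assumption: "enough samples" to trust a win rate
RETIRE_BELOW = 0.30 # plays under 30% win rate get retired or rebuilt

def library_review(plays):
    """Flag each play for retire/rebuild based on its measured win rate."""
    actions = {}
    for name, stats in plays.items():
        wins, runs = stats["wins"], stats["runs"]
        if runs < MIN_SAMPLES:
            actions[name] = "keep (insufficient sample)"
        elif wins / runs < RETIRE_BELOW:
            actions[name] = "retire or rebuild"
        else:
            actions[name] = "keep"
    return actions
```

Run per segment and signal pattern, not just per play name, so a play that works in mid-market is not retired because it fails in enterprise.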

When to walk away.

Hard decision

Not every at-risk renewal is saveable. The hardest discipline in triage-at-scale is deciding which accounts get the CSM's time and which get a graceful close. Three criteria make this decision explicit instead of political.

CRITERION 01
Champion absent and no successor identified.
The sponsor left. The replacement is unidentified or hostile. Save probability drops below 15%. Better to spend the CSM hours on three accounts with live champions than burn them re-pitching to a stranger who is not buying.
CRITERION 02
Structural budget loss at the customer level.
The customer's budget for this category was eliminated, not reallocated. Funding cliffs, organizational restructuring, or a board-level category exit. This is not a signal-layer fix. No amount of engagement recovers an account whose economic buyer no longer has a line item.
CRITERION 03
Two failed save plays in the last 12 months.
If the save play library has been run twice on the same account and the account is still in T2, the pattern is structural. The product-fit was wrong, the segmentation was wrong, or the customer was miscategorized at close. Running save attempt #3 is throwing good CSM hours after bad. Close gracefully, document what you learned, flag the pattern for ICP review.

Calling the walk-away early is not giving up. It is re-allocating scarce CSM capacity to accounts where a save is actually possible. Teams that never walk away end up saving 20% of the unsaveable and losing 40% of the saveable because they ran out of hours.

The post-save flywheel.

Feedback loop

Every resolved save, whether the outcome is renewed, expanded, churned, or lost, produces a data point. Every outcome feeds back into the save-play library's win rates, which feed back into the prioritization formula, which feeds back into which accounts get CSM time next quarter. The system gets smarter with every save.

  1. Every save play closes with a structured outcome. Renewed at full value, renewed with contraction, expanded, churned, lost to competitor, lost to budget. Not "done."
  2. Outcomes roll up to per-play win rates, scoped by segment and signal pattern. Retire plays under 30%. Promote plays over 65%.
  3. Win rates feed Save Probability in the prioritization formula. Next quarter's triage ranks accounts with more accurate probability estimates than last quarter's.
  4. Walk-away decisions produce their own learnings. Patterns of walked-away accounts surface ICP-fit issues, product-gap flags, and segmentation mistakes that inform Phase 1 work upstream.
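Steps 1 through 3 of the flywheel reduce to a roll-up over structured outcomes. A minimal sketch; the outcome labels mirror the list above, and which outcomes count as wins is an assumption noted in the code:

```python
from collections import defaultdict

# Assumption: full renewals, contracted renewals, and expansions count as wins.
WIN_OUTCOMES = {"renewed_full", "renewed_contracted", "expanded"}

def win_rates(resolved_saves):
    """Roll structured save outcomes up to per-(play, segment) win rates.
    These rates feed Save Probability in next quarter's priority formula."""
    tally = defaultdict(lambda: [0, 0])  # (play, segment) -> [wins, runs]
    for save in resolved_saves:
        key = (save["play"], save["segment"])
        tally[key][1] += 1
        if save["outcome"] in WIN_OUTCOMES:
            tally[key][0] += 1
    return {k: wins / runs for k, (wins, runs) in tally.items()}
```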

Without the flywheel, save-play effectiveness is a perpetual hypothesis. With the flywheel, it is a number that improves every quarter.

Dual-ICP triage on shared infrastructure.

Edge case

For EdTech orgs with two motions on shared infrastructure (K-12 plus higher ed, academic plus corporate L&D, horizontal plus vertical specialty), the triage queue stays shared but tier thresholds and save plays diverge. Three adjustments:

  1. Tier thresholds are motion-specific. "Usage decline" triggers T2 At-Risk at different thresholds for an academic-cycle customer (45-60 days inactivity mid-term) versus a credentialed-outcome customer (10-14 days pre-assessment). Shared queue, scoped triggers.
  2. Save play library forks. Renewal Save, Champion Rescue, Adoption Recovery, Competitive Displacement. Each motion gets its own variant with motion-specific talk tracks, stakeholder maps, and win-rate benchmarks. Shared library shape, divergent content.
  3. Cross-motion handoff at save time. If a Motion A customer at risk is also a Motion B prospect, the save play explicitly flags the cross-sell implication. Losing the Motion A account closes the Motion B door for at least a year. The save calculation includes the downstream motion too.
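The "shared queue, scoped triggers" idea in adjustment 1 can be sketched as a motion-keyed threshold table. The day counts come from the text above; the motion names and function are illustrative:

```python
# Motion-specific T2 triggers for the same "usage decline" signal.
# Thresholds use the low end of each range cited above.
INACTIVITY_T2_DAYS = {
    "academic_cycle": 45,        # 45-60 days inactivity mid-term
    "credentialed_outcome": 10,  # 10-14 days pre-assessment
}

def fires_t2(motion, days_inactive):
    """Same shared queue; the at-risk trigger is scoped per motion."""
    return days_inactive >= INACTIVITY_T2_DAYS[motion]
```

Thirty days of silence leaves an academic-cycle account in T4, while twelve days flips a credentialed-outcome account to T2.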

Renewal-triage self-assessment.

12 questions

Twelve yes/no questions to audit triage maturity. Count the no's. That's your rebuild order.

QUEUE
Every renewal in the portfolio sits in exactly one tier (T1-T5) at all times. No "in review" or "pending conversation" escape hatches.
QUEUE
Tier transitions are auditable: you can replay six months later why an account moved from Monitor to At-Risk and who decided.
QUEUE
The tier is scored daily, not assigned manually during quarterly reviews.
PRIORITY
T2 At-Risk accounts are ranked by ARR times save probability divided by expected effort, not by renewal date.
PRIORITY
Save probability is an explicit number derived from historical play win rates, scoped to segment.
PRIORITY
Expected effort is captured in the play library as hours per play, not estimated per account.
PLAYS
A named save play library exists with at least four variants: Renewal Save, Champion Rescue, Adoption Recovery, Competitive Defense.
PLAYS
Every play has a measured win rate by segment. Plays under 30% get retired or rebuilt.
PLAYS
Every play has a defined walk-away criterion. No save runs indefinitely without a save-or-close decision.
FLYWHEEL
Resolved saves carry a structured outcome field (renewed, contracted, expanded, churned, lost-to-competitor, lost-to-budget).
FLYWHEEL
Play win rates feed back into the prioritization formula on a governed cadence, so next quarter's triage is sharper.
MULTI-ICP
If you run two motions, tier thresholds are motion-specific and the save play library has motion variants.

A 90-day rollout to triage-at-scale.

Phased plan

Six two-week phases. Ship the queue first. Then the prioritization math. Then the play library. Then the flywheel. Do not try to build everything in parallel.

WEEKS 1-2
Tier Inventory
Place every active renewal in one of five tiers based on current signals. Expect surprises: 20-40% of accounts classified differently than CSMs assumed.
WEEKS 3-4
Daily Scoring
Wire the daily scoring job. Tier assignment updates every morning. CSM sees overnight transitions on Monday triage.
WEEKS 5-6
Priority Formula
Ship the ARR times probability divided by effort ranking inside T2. Win rates seeded from historical data, refined monthly.
WEEKS 7-8
Play Library v1
Four named save plays with task sequences, effort estimates, and walk-away criteria. Top CSM per motion co-authors.
WEEKS 9-10
Thursday Review
Cross-functional at-risk review launches. Top five accounts reviewed with AE plus CSM plus leader. Save decisions or walk-away decisions made.
WEEKS 11-12
Flywheel On
Resolved saves feed back into play win rates. First quarter-end retrospective ships with numbers, not narratives.

Two anti-patterns to avoid. First: launching the tier system without the prioritization formula. T2 At-Risk becomes a dumping ground, and nothing actually gets worked first. Second: launching save plays without measured win rates. The library becomes folklore instead of infrastructure, and you cannot improve what you cannot measure.

What PILLAR does about triage at scale.

PILLAR runs the full triage queue as the operating substrate, not as a report. Every account is scored daily. The queue is the state. Plays are sequenced. Outcomes feed back into scoring. The CSM team works a prioritized list, not a spreadsheet sorted by date.

Portfolio Risk Console
Every account scored daily. Filter by tier, segment, territory, motion. Drill into any account for full signal decomposition.
Priority Ranking Built In
ARR times save probability divided by effort, computed automatically. Sort by Priority, not by renewal date. The queue is pre-ranked when the CSM opens it Monday morning.
Triage Board at Scale
Kanban lifecycle: T1 Save In Progress, T2 At-Risk, T3 Monitor, T4 Healthy, T5 Auto-Renew. Visual state, auditable transitions.
Save Play Library with Win Rates
Named plays with task sequences, effort estimates, entry and exit criteria, and measured win rates per segment. Templates per motion for dual-ICP orgs.
Walk-Away Governance
Plays that hit their defined duration without engagement surface a save-or-close decision. No perpetual open saves burning CSM hours.
Closed-Loop Flywheel
Every resolved save feeds back into play win rates and priority-formula weights. The system gets sharper every quarter without manual re-tuning.
Category definition · boundary piece
Why horizontal revenue tools can't do this.
Read →

Your Blueprint showed your Renewal & Retention score. Want to understand what triage at scale looks like for your specific portfolio?

Get Your Free Blueprint
pillargtm.com
Weekly Blueprint
Join The Architects - our weekly newsletter for EdTech and public sector revenue leaders
Subscribe →