Value-Based Pricing: Turning Willingness to Pay into a Framework
Aug 20, 2025

Point of view: Price to outcomes. If the product creates material business value, the price should reflect a fair share of that value. Cost and competitor list prices are reference points, not the driver. This post is a practical workflow you can run in weeks: measure willingness to pay (WTP), convert it into segments, a value metric, and tiers or add-ons, then ship with clear ROI and guardrails.
What value-based pricing is and why it works
Definition. Set price according to the outcomes customers get: time saved, revenue gained, risk reduced. In SaaS and AI, marginal cost is low and impact can be high, so pricing to value captures upside without breaking trust.
Why operators use it
Price tracks outcomes. More value leads to higher WTP.
Segment fit. Enterprises derive more value than startups and should pay more.
Roadmap signal. Pricing forces clarity on which outcomes matter.
AI alignment. Tokens, jobs, and compute are measurable units you can translate into a fair value metric.
Primers: OpenView on value metrics, HBR on Jobs-to-Be-Done.
How to measure willingness to pay
1) Van Westendorp (PSM)
Run a short survey after a crisp value statement:
Too cheap (quality in doubt)
Bargain (great value)
Getting expensive (hesitation)
Too expensive (will not consider)
Plot the responses as cumulative curves. You get an acceptable range and a likely optimal price point at the intersection of the “too cheap” and “too expensive” curves. Segment the analysis by SMB, mid-market, and enterprise so the ranges map cleanly to tiers.
Guides: Van Westendorp explained, Paddle/ProfitWell walkthrough.
Add one question: “Why?” The reasons expose budget thresholds (for example, over $100k triggers CFO), perceived value drivers, and procurement constraints you can address with packaging.
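The intersection analysis above can be sketched in a few lines. This is a minimal illustration, not a full PSM implementation: the survey answers and the price grid are hypothetical, and production analyses usually interpolate between grid points rather than picking the closest one.

```python
def share_at_or_above(answers, p):
    # Fraction of respondents whose stated price is >= p ("too cheap" falls as p rises)
    return sum(a >= p for a in answers) / len(answers)

def share_at_or_below(answers, p):
    # Fraction of respondents whose stated price is <= p ("too expensive" rises with p)
    return sum(a <= p for a in answers) / len(answers)

def optimal_price_point(too_cheap, too_expensive, grid):
    """Return the grid price where the falling 'too cheap' curve and the
    rising 'too expensive' curve are closest -- the PSM optimal price point."""
    return min(grid, key=lambda p: abs(
        share_at_or_above(too_cheap, p) - share_at_or_below(too_expensive, p)))

# Hypothetical survey answers in $/month for one segment
too_cheap = [20, 25, 30, 30, 40]
too_expensive = [60, 80, 90, 100, 120]
print(optimal_price_point(too_cheap, too_expensive, range(10, 151, 5)))  # → 45
```

Run the same function once per segment to get the per-segment ranges the post recommends.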
2) Buyer-grade interviews
Talk to decision makers and influencers, not only end users.
Jobs and outcomes. “What job are you hiring us for? What happens without us? What does that cost?” Convert time, errors, or missed revenue into dollars. Background: Intercom JTBD overview.
Budget anchors. “What have you paid for alternatives? What approvals kick in at $X?” This sets practical price bands.
Reaction tests. Float a number, pause, then ask “What would make that a no-brainer?” Capture objections you can solve with proof, guarantees, or packaging.
3) Usage and outcome data
If you have traffic or beta users:
Identify value-correlated usage such as projects completed, tickets resolved, transactions processed, or model jobs run.
Compare casual vs. power cohorts for retention, expansion, and support load.
Where safe, A/B test price points or bundles on new sign-ups to get sensitivity data.
This work informs your value metric and upper-tier design.
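A sketch of the casual-vs-power cohort comparison, assuming you can export per-account usage and retention flags. The records, the 50-jobs-per-month cutoff, and the field names are all hypothetical; the point is only the shape of the analysis.

```python
# Hypothetical export: (account_id, jobs_run_per_month, retained_12_months)
records = [
    ("a1", 5, False), ("a2", 8, True), ("a3", 120, True),
    ("a4", 200, True), ("a5", 3, False), ("a6", 90, True),
]

POWER_THRESHOLD = 50  # assumed cutoff separating casual from power accounts

def retention_rate(rows):
    # Share of accounts in the cohort still retained at 12 months
    return sum(r[2] for r in rows) / len(rows)

casual = [r for r in records if r[1] < POWER_THRESHOLD]
power = [r for r in records if r[1] >= POWER_THRESHOLD]

print(f"casual retention: {retention_rate(casual):.0%}")  # 33%
print(f"power retention:  {retention_rate(power):.0%}")   # 100%
```

If the power cohort retains and expands markedly better, that is evidence the usage measure is value-correlated and a candidate for the primary value metric.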
4) Market context
Use competitors as anchors, not constraints.
Catalog ranges to understand expectations.
Quantify substitutes such as manual work or legacy tools to frame ROI.
Note procurement norms such as annual vs. monthly, commitment preferences, and discount expectations.
References: Salesforce editions, Slack Fair Billing.
Turn insights into a working model
A) Segment by value and ability to pay
Most B2B pricing lands in three bands:
Startup and SMB. Lower budgets, faster decisions, heavy self-serve.
Mid-market. Clear ROI expectations, lighter procurement, growing breadth of use.
Enterprise. Highest ROI potential and WTP, stronger requirements across security, compliance, and support.
For each segment, define target outcomes, a WTP band, buying hurdles, and must-have capabilities. These become your tier jobs.
B) Pick a value metric that scales with outcomes
Choose one primary metric customers intuitively link to value:
People: active users, agents, collaborators
Work units: API calls, tasks or jobs, records processed, GB scanned
Business proxies: revenue influenced, transactions settled
Checks: understandable, measurable, and hard to game. Revenue should rise as outcomes rise. For AI, common choices are tokens, GPU minutes, completed jobs, or successful generations. Consider a baseline allowance plus metered overage to keep things fair and predictable.
Primer: OpenView: Value Metrics 101.
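The baseline-allowance-plus-metered-overage idea reduces to a small billing function. The plan numbers below ($99 base, 1,000 included “AI tasks”, $0.05 per extra task) are invented for illustration; the block-rounding option mirrors the pre-paid-blocks suggestion later in the post.

```python
def monthly_bill(units_used, base_fee, included_units, overage_rate, block_size=1):
    """Base-plus-metered bill: the flat fee covers an allowance of the value
    metric; usage above the allowance is charged per unit, optionally rounded
    up to whole pre-paid blocks so spend stays predictable."""
    overage_units = max(0, units_used - included_units)
    blocks = -(-overage_units // block_size)  # ceiling division, whole blocks only
    return base_fee + blocks * block_size * overage_rate

# Hypothetical plan: $99/mo, 1,000 AI tasks included, $0.05 per extra task
print(monthly_bill(1500, base_fee=99, included_units=1000, overage_rate=0.05))  # 124.0
print(monthly_bill(800, base_fee=99, included_units=1000, overage_rate=0.05))   # 99
```

Usage under the allowance bills at the flat fee, so the customer’s floor is known in advance; only genuine extra outcomes cost more.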
C) Design tiers and a few add-ons
Foundation. Two to four tiers mapped to segments such as Starter, Growth, and Enterprise. Each step should feel like a real jump in outcomes and support, not a feature checklist.
Allowances. Include credits tied to the value metric. Use sensible overages or pre-paid blocks to cap variance.
Selective add-ons. Only for polarized, high-value capabilities such as advanced security, analytics, premium support, or AI copilots. This lets power users expand without forcing everyone up a tier.
Examples: Figma Dev Mode, Notion AI.
D) Set price points with ROI and PSM
For each tier, validate against:
The PSM range for that segment
ROI math with a target of 3 to 10 times for B2B
Approval thresholds such as $10k, $25k, or $100k when that reduces friction
Use psychological thresholds only when they line up with real budget lines. A common example is $24,000 vs. $25,000 annually. Overview: Nick Kolenda on pricing psychology.
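The three checks above can be combined into one validation pass per tier. Everything here is an assumed input: the quantified annual value, the PSM band, and the approval threshold all come from the research steps earlier in the post.

```python
def roi_multiple(annual_value, annual_price):
    # How many dollars of quantified outcome per dollar of price
    return annual_value / annual_price

def passes_checks(annual_value, annual_price, psm_low, psm_high,
                  approval_threshold, target_roi=3):
    """Validate a candidate tier price: inside the segment's PSM range,
    meets the ROI target, and stays under the known approval threshold."""
    return (psm_low <= annual_price <= psm_high
            and roi_multiple(annual_value, annual_price) >= target_roi
            and annual_price < approval_threshold)

# Hypothetical mid-market tier: $100k of quantified value, $24k/yr price,
# PSM band $18k-$30k, CFO sign-off required at $25k and above
print(passes_checks(100_000, 24_000, 18_000, 30_000, 25_000))  # True
```

A $26k price on the same inputs would fail only the approval-threshold check, which is exactly the $24,000-vs-$25,000 situation described above.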
E) Package and message to outcomes
Translate features into business results:
“Automate invoice triage to reclaim about 40 hours per month and reduce DSO by 5 to 10 days.”
“AI summarization to cut resolution time by 20 to 30 percent while maintaining CSAT.”
Ship a simple ROI calculator on the site. Use inputs the customer already knows, such as volume, salaries, and conversion rates; output the savings or lift that justifies the price. If you meter AI, pair cost transparency with outcome framing to avoid a black-box feel.
Inspiration: HubSpot ROI calculators.
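The core of such a calculator is a few lines of arithmetic. This sketch uses the invoice-triage example from above; the input values are hypothetical and would come from the visitor’s own numbers.

```python
def monthly_savings(invoices_per_month, minutes_saved_per_invoice, hourly_cost):
    """Outcome-first ROI calculator: converts inputs the customer already
    knows into a dollar figure they can compare against the price."""
    hours_saved = invoices_per_month * minutes_saved_per_invoice / 60
    return hours_saved * hourly_cost

# Hypothetical customer: 800 invoices/mo, 3 minutes saved each, $50/hr loaded cost
print(monthly_savings(800, 3, 50))  # 40 hours saved -> 2000.0 ($/mo)
```

At $2,000 per month of reclaimed time, a plan priced well below that clears the 3x-plus ROI target from the previous section.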
Examples
CRM, per user with segmented editions. Per user aligns to collaboration value. Higher editions add admin, compliance, and support that enterprises value most. See: Salesforce editions.
AI document processing with ROI-anchored tiers. After proving it replaces 1 to 4 FTEs, price SMB around $15k, mid-market around $60k, enterprise $120k and up. Back it with payback under six months and case studies.
Developer platform with base plus metered. Base fee plus executions. Tiers include credits. Heavy users buy pre-paid blocks to stabilize spend and forecasting.
Pitfalls to avoid
Vague metric. Rename opaque units like “model credits” to plain terms like “AI tasks,” and show examples of what one task covers.
Too many meters. Start with a single primary value metric and keep the rest simple.
Nickel and diming. Limit add-ons to a few high-impact modules. Most customers should not need more than one at purchase.
Wrong persona. Interview economic buyers as well as power users. Align pricing to their outcomes and approval thresholds.
Set and forget. Review quarterly, adjust after major product or value shifts, and grandfather customers cleanly.
Implementation checklist
Top three outcomes quantified and tied to a clear metric
WTP captured via PSM, buyer interviews, and usage analysis by segment
One primary value metric and two to four tiers that map to segments
Add-ons limited to a few high-impact capabilities
Prices meet ROI targets and stay under known approval thresholds
Site and sales deck use outcome-first copy and a simple ROI calculator
Cadence to review and iterate pricing, with grandfathering policies
Further reading
Value metrics: OpenView
Van Westendorp: Paddle/ProfitWell
JTBD interviews: HBR
Price psychology: Nick Kolenda
Outcome-aligned billing ideas: Slack Fair Billing, Salesforce editions
Tanso
© 2025 Tanso. All rights reserved.