Core Concepts
The methodology and mental models behind HolyShift.
Overview
HolyShift is built on a specific methodology with specific definitions. Understanding these concepts makes every feature more useful — you know why a confidence score matters, what a demand signal actually proves, and how the Validate-Build-Grow loop compounds over time. This guide is the reference for the ideas behind the product.
Pretotyping Methodology
Pretotyping is the practice of testing whether people want something before you build it. The term comes from Alberto Savoia's work at Google — "pretend" + "prototyping." The core insight is that most products fail not because they are built badly, but because they are built without evidence of demand.
Traditional validation approaches have a sample size problem. Customer discovery interviews give you 20 to 30 data points over weeks. Surveys give you self-reported intentions that do not predict behavior. HolyShift scales pretotyping to 500 to 1,000 real market conversations per validation, run in 24 to 48 hours, and analyzes every signal systematically.
The goal is not to prove your idea is good. The goal is to find out the truth before you invest time and money. A negative result is a success — it saved you from building something nobody wants.
Demand Signals
A demand signal is specific evidence from a real conversation that indicates interest in your product. Not "this person said something positive" — but a structured observation with context and confidence scoring.
Demand signals are categorized by type:
- Expressed need — Someone describes the exact problem your product solves, unprompted.
- Willingness to pay — Someone indicates they would exchange money for a solution like yours. This is the strongest signal type.
- Active search — Someone is currently looking for a solution in your category.
- Competitor dissatisfaction — Someone is unhappy with their current solution and open to switching.
- Emotional intensity — Someone describes the problem with frustration, urgency, or strong feeling. Intensity correlates with willingness to act.
Each signal is scored by confidence — how reliable the evidence is. A single mention in one conversation is low confidence. The same signal appearing independently across 50 conversations is high confidence.
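The taxonomy and corroboration idea above can be sketched as a small data model. The names (`SignalType`, `DemandSignal`, `corroborate`) are hypothetical, chosen for illustration; they are not HolyShift's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class SignalType(Enum):
    EXPRESSED_NEED = "expressed_need"
    WILLINGNESS_TO_PAY = "willingness_to_pay"  # strongest signal type
    ACTIVE_SEARCH = "active_search"
    COMPETITOR_DISSATISFACTION = "competitor_dissatisfaction"
    EMOTIONAL_INTENSITY = "emotional_intensity"

@dataclass
class DemandSignal:
    signal_type: SignalType
    evidence: str          # verbatim quote from the conversation
    conversation_id: str
    confidence: int        # 0-100

def corroborate(signals: list[DemandSignal]) -> dict[SignalType, int]:
    """Count independent conversations per signal type: the same signal
    appearing across many conversations is higher confidence than a
    single mention in one conversation."""
    sources: dict[SignalType, set[str]] = {}
    for s in signals:
        sources.setdefault(s.signal_type, set()).add(s.conversation_id)
    return {t: len(convs) for t, convs in sources.items()}
```

The key design point is that corroboration counts distinct conversations, not raw mentions, so one enthusiastic person repeating themselves does not inflate confidence.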
Demand Signals vs Noise
Not everything someone says in a conversation is a signal. Noise includes:
- Polite interest — "That sounds cool" without any follow-up or depth. People are generally positive when asked about new ideas. Politeness is not demand.
- Hypothetical agreement — "I could see myself using that." Could is not would. And would is not will. Look for evidence of current behavior, not hypothetical future behavior.
- Single data points — One person expressing strong interest is an anecdote. Twenty people expressing the same interest is a signal.
- Feature requests — Someone asking for a specific feature does not validate demand for your product. It validates demand for that feature, which might already exist elsewhere.
The distinction matters because acting on noise leads to building products for imaginary demand. HolyShift's signal analysis is designed to filter noise systematically, but understanding the distinction helps you interpret your report more critically.
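The signal-versus-noise rules above can be caricatured as a few lines of filtering logic. These phrase lists and thresholds are invented for illustration; HolyShift's actual analysis is far more sophisticated than string matching:

```python
# Illustrative heuristics only, not HolyShift's real filter.
HYPOTHETICAL_PHRASES = ("could see myself", "might use", "would probably")
POLITE_PHRASES = ("sounds cool", "sounds interesting", "nice idea")

def classify(quote: str, independent_mentions: int) -> str:
    """Apply the noise rules above: politeness and hypothetical
    agreement are noise, and a single data point stays an anecdote
    until it is corroborated across conversations."""
    text = quote.lower()
    if any(p in text for p in POLITE_PHRASES):
        return "noise"
    if any(h in text for h in HYPOTHETICAL_PHRASES):
        return "noise"
    if independent_mentions < 2:
        return "anecdote"
    return "candidate signal"
```

Even this toy version captures the core discipline: a quote only graduates to "candidate signal" when it describes real behavior and is corroborated by other conversations.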
Confidence Scores
Every demand signal, risk assessment, and recommendation in your Pretotyping Signal Report includes a confidence score. Here is what the thresholds mean:
| Score Range | Interpretation |
|---|---|
| 80-100 | High confidence. Strong, consistent evidence across many conversations. Reliable basis for decisions. |
| 60-79 | Moderate confidence. Clear pattern but with some variance. Proceed with awareness of uncertainty. |
| 40-59 | Low-moderate confidence. Signal is present but inconsistent. Investigate further before relying on it. |
| 20-39 | Low confidence. Sparse evidence. May be noise. Do not base major decisions on this alone. |
| 0-19 | Very low confidence. Insufficient evidence. Treat as unvalidated hypothesis. |
Confidence is calculated from signal frequency (how many conversations produced the signal), consistency (how similarly people expressed it), and intensity (how strongly people felt about it).
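The threshold bands from the table, and the three inputs just named, can be sketched as follows. The band cutoffs come straight from the table; the equal weighting of frequency, consistency, and intensity is an assumption for illustration, since the real scoring model is not published:

```python
def confidence_score(frequency: float, consistency: float, intensity: float) -> int:
    """Combine the three inputs (each normalized to 0-1) into a 0-100
    score. Equal weighting is assumed here for illustration."""
    assert all(0.0 <= x <= 1.0 for x in (frequency, consistency, intensity))
    return round(100 * (frequency + consistency + intensity) / 3)

def interpret(score: int) -> str:
    """Map a 0-100 score onto the threshold bands from the table."""
    if score >= 80:
        return "high"
    if score >= 60:
        return "moderate"
    if score >= 40:
        return "low-moderate"
    if score >= 20:
        return "low"
    return "very low"
```

For example, a signal with moderate frequency (0.5) but high consistency (0.9) and intensity (0.7) would score 70 under this toy weighting, landing in the moderate band.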
Risk Assessment Framework
Risk assessment identifies potential blockers to your product's success. Risks are categorized by type:
- Market risk — The market may be too small, too competitive, or not ready for your solution.
- Demand risk — Interest may be shallow — people like the idea but will not pay for it.
- Competitive risk — Existing solutions may be good enough, or a well-funded competitor may have an insurmountable advantage.
- Execution risk — The product may be technically difficult to build, require regulatory compliance, or depend on partnerships that are hard to secure.
- Switching risk — Your target customers may face high switching costs from their current solution, making adoption unlikely regardless of your product's quality.
Each risk is scored by both probability (how likely it is to materialize) and severity (how badly it would affect your business if it does).
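The probability-times-severity idea can be made concrete with a small ranking sketch. Using a simple product of the two scores is an assumption for illustration (the report's actual weighting may differ), but it shows why a likely, damaging risk outranks an unlikely or mild one:

```python
def risk_priority(probability: float, severity: float) -> float:
    """Expected-impact style ranking: probability (0-1) that the risk
    materializes, times severity (0-1) of the damage if it does."""
    return probability * severity

# Hypothetical scores for three risk types, for illustration only.
risks = {
    "market": (0.3, 0.9),
    "demand": (0.7, 0.8),
    "switching": (0.5, 0.6),
}
ranked = sorted(risks, key=lambda r: risk_priority(*risks[r]), reverse=True)
# demand (0.56) ranks above switching (0.30) and market (0.27)
```

Note how the severe-but-unlikely market risk ends up last: severity alone does not determine priority.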
Real User Language Extraction
One of the most practically valuable outputs of validation is the exact language your market uses to describe the problem you solve. This is not paraphrased or summarized — it is verbatim.
Why this matters: marketing that uses the same words your audience uses converts dramatically better than marketing written in your company's internal vocabulary. When someone reads your landing page and thinks "that is exactly how I would describe this problem," trust and relevance increase immediately.
HolyShift extracts real user language across several dimensions:
- Problem description — How people describe the pain point in their own words.
- Solution expectations — What people say they want a solution to look like or do.
- Emotional language — The feelings people associate with the problem (frustrated, overwhelmed, wasting time).
- Comparison language — How people describe alternatives they have tried and why those fell short.
This language feeds directly into the Build tool when generating landing pages, and into outreach angle suggestions in Leads Search.
The Validate, Build, Grow Loop
HolyShift's three pillars are not sequential steps — they are a loop.
Validate your idea to confirm demand exists. Build a landing page grounded in real market data. Grow your customer base through intelligence-driven outreach.
Then loop back: use growth data and intelligence signals to refine your understanding of the market. Update your landing page based on what you learn from outreach conversations. Run new validations when you pivot, expand to a new segment, or respond to competitive shifts.
Each cycle through the loop produces better data. Better data produces sharper positioning. Sharper positioning produces stronger conversion. Stronger conversion produces more growth. The loop compounds.
How HolyShift Uses Real Conversations, Not Synthetic Data
HolyShift does not generate synthetic survey responses, simulate market reactions, or use AI to predict what people might say. Every data point in your Pretotyping Signal Report comes from a real conversation with a real person in your target market.
This is a deliberate design choice. Synthetic data reflects the biases of the model that generated it. Real conversations reflect the actual beliefs, language, objections, and behaviors of your market. The difference between "an AI thinks your market would say X" and "your market actually said X" is the difference between guessing and knowing.
The trade-off is time — real conversations take 24 to 48 hours to run and analyze. Synthetic data could be generated in seconds. We believe the trade-off is worth it. The purpose of validation is to test reality, and you cannot test reality with synthetic data.
FAQ
Is pretotyping the same as prototyping? No. Prototyping tests whether something can be built. Pretotyping tests whether it should be built. Pretotyping comes first — there is no point building a functional prototype of something nobody wants.
How is a confidence score different from a sample size? Sample size is one input to confidence, but not the only one. A signal mentioned in 200 conversations but with high variance (some strongly positive, some negative, some ambiguous) might have lower confidence than a signal mentioned in 50 conversations with perfect consistency. Confidence reflects reliability, not just volume.
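The answer above can be made concrete with a toy calculation. The penalty formula here is invented purely for illustration, but it shows how 50 perfectly consistent mentions can outscore 200 mixed ones:

```python
from statistics import pstdev

def toy_confidence(responses: list[float]) -> float:
    """Toy model: each response is a per-conversation signal strength
    in [-1, 1]. Confidence rises with volume but falls with variance.
    The exact formula is invented for illustration."""
    n = len(responses)
    mean = sum(responses) / n
    spread = pstdev(responses)      # 0 when perfectly consistent
    volume = min(n / 100, 1.0)      # saturates at 100 conversations
    return max(0.0, 100 * mean * volume * (1 - spread))

consistent_50 = [1.0] * 50                               # 50 strong, identical signals
mixed_200 = [1.0] * 100 + [0.0] * 50 + [-1.0] * 50       # 200 high-variance signals
```

Under this toy model, `consistent_50` scores far higher than `mixed_200` despite a quarter of the sample size, which is exactly the reliability-over-volume point the FAQ answer makes.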
Can I use HolyShift for B2C validation? Yes. The methodology works for both B2B and B2C. The conversation approach is adapted based on your target market — consumer audiences are engaged differently than enterprise buyers, but the signal analysis is the same.
What if I do not agree with HolyShift's risk assessment? The risk assessment is based on patterns in hundreds of conversations. If you disagree, read the underlying evidence. You may have context the model does not have — but be honest about whether your disagreement is evidence-based or wishful thinking.
What's Next
- Your First Validation — Put these concepts into practice.
- Reading Your Report — See how these concepts appear in your actual report.
- Intelligence Overview — Understand continuous monitoring after validation.
- Learning Center — Explore all learning paths.
