
Strategy and Testing Business Ideas

Testing business ideas thoroughly, regardless of how great they may seem in theory, is a way to mitigate the risk of your viability hypothesis being wrong.

In a series of previous posts, we talked about the need to create a set of tools that helps teams find objective ways to value design solutions, and we looked at the different discussions that need to be facilitated while quantifying and qualifying strategy, namely: Pivot and Risk Mitigation, Facilitating Investment Discussions, and Visibility and Traceability.

In this post, I’ll talk about the importance of finding objective ways to explore and (preferably) test business ideas early – even before designs – to determine whether they are worth pursuing (or not).

TL;DR

  • While in the past designers would concentrate on enhancing desirability, the emerging strategic role of designers means they have to balance desirability, feasibility and viability simultaneously. Designers need to expand their profiles and master a whole new set of strategic practices.
  • Don’t make the mistake of executing business ideas without evidence; test your ideas thoroughly, regardless of how great they may seem in theory.
  • Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.
  • Without learning, you risk delivering a product or service no one finds valuable.
  • Your initial product strategy may contain plenty of assumptions and risks, and you may well discover that the strategy is wrong and does not work.
  • Experiments replace guesswork, intuition and best practices with knowledge, which can help teams to make increasingly better decisions.
  • When creating experiments, you don’t want to simulate any more than you need to. This is what allows you to iterate quickly through several assumption tests.
  • Experiments will provide you with the data that will help you decide if you should persevere, pivot or stop. Pivoting is attractive only if you pivot early, when the cost of changing direction is comparatively low.

Quantifying and Qualifying Strategy

In a previous article, I mentioned that we need objective ways to value design solutions to justify experience investments, and that we should look at the different points in strategic planning and execution to identify the discussions that strategists should facilitate around what customers and users perceive as value, while tracking and tracing the implementation of strategy to ensure we are bringing value to both customers and the business.

Design is the activity of turning vague ideas, market insights, and evidence into concrete value propositions and solid business models. Good design involves the use of strong business model patterns to maximize returns and compete beyond product, price and technology.

Bland, D. J., & Osterwalder, A., Testing business ideas, (2020)

From that perspective, we need to find ways to:

  • Explore (and preferably test) ideas early
  • Facilitate investment discussions by objectively describing business and user value, establishing priorities
  • Assess the risk of pursuing ideas, while capturing signals that indicate if/when to pivot if an idea “doesn’t work”
  • Capture and track progress of strategy implementation
A holistic quantifying and qualifying set of tools and frameworks should help teams with: Validating / Testing Ideas (finding objective ways to explore and preferably test ideas early), Facilitating Investment Discussions (business/user value, priorities, effort, etc.), Pivot & Risk Mitigation (assessing risk, capturing signals, knowing when to pivot), and Visibility and Traceability (capturing and tracking progress).
Instead of a single metric to measure ROI, let’s look at the different discussions that need to be facilitated while quantifying and qualifying strategy, namely: Pivot and Risk Mitigation, Facilitating Investment Discussions, Validating / Testing Business Ideas, Visibility and Traceability.

In that previous article I went deep into quantification and metrics, so I suggest taking a look at it if you’re interested in measuring experiences.

Validating and Testing Business Ideas

“What do people need?” is a critical question to ask when you build a product. Wasting your life’s savings and your investors’ money, risking your reputation, making false promises to employees and potential partners, and trashing months of work you can never get back is a shame. It’s also a shame to find out you were completely delusional when you thought that everyone needed the product you were working on (Sharon, T., Validating Product Ideas, 2016).

Don’t make the mistake of executing business ideas without evidence; test your ideas thoroughly, regardless of how great they may seem in theory.

Bland, D. J., & Osterwalder, A., Testing Business Ideas: A Field Guide for Rapid Experimentation (2019)

To test a big business idea, you break it down into smaller chunks of testable hypotheses. These hypotheses cover three types of risk (Bland, D. J., & Osterwalder, Testing Business Ideas: A Field Guide for Rapid Experimentation, 2019):

  • First, that customers aren’t interested in your idea (desirability).
  • Second, that you can’t build and deliver your idea (feasibility).
  • Third, that you can’t earn enough money from your idea (viability).

Facilitating Investment Discussions around Value

As I mentioned in a previous post, designers must become skilled facilitators who respond, prod, encourage, guide, coach and teach as they guide individuals and groups to make decisions that are critical in the business world through effective processes. There are few decisions harder than deciding how to prioritise.

I’ve seen too many teams whose decisions seem to be driven by the questions “What can we implement with the least effort?” or “What are we able to implement?”, not by the question “What brings value to the user?”

From a user-centered perspective, the most crucial pivot that needs to happen in the conversation between designers and business stakeholders is the framing of value:

  • Business value
  • User value
  • Value to designers (sense of self-realisation? Did I impact someone’s life in a positive way?)

The mistake I’ve seen many designers make is to look at prioritisation discussions as a zero-sum game: our user-centred design toolset may have focused too much on the needs of the user, at the expense of business needs and technological constraints.

While in the past designers would concentrate on enhancing desirability, the emerging strategic role of designers means they have to balance desirability, feasibility and viability simultaneously. Designers need to expand their profiles and master a whole new set of strategic practices.

“Strategic Designers: Capital T-shaped professionals” in Strategic Design (Calabretta et al., 2016)

To understand the risk and uncertainty of your idea you need to ask: “What are all the things that need to be true for this idea to work?” This will allow you to identify all three types of hypotheses underlying a business idea: desirability, feasibility, and viability (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020):

  • Desirability (do they want this?) relates to the risk that the market a business is targeting is too small; that too few customers want the value proposition; or that the company can’t reach, acquire, and retain targeted customers.
  • Feasibility (Can we do this?) relates to the risk that a business can’t manage, scale, or get access to key resources (technology, IP, brand, etc.). This isn’t just technical feasibility; we also need to look at the overall regulatory, policy, and governance constraints that could prevent you from making your solution a success.
  • Viability (Should we do this?) relates to the risk that a business cannot generate more revenue than costs (revenue stream and cost stream). While customers may want your solution (desirable) and you can build it (feasible), perhaps there’s not enough of a market for it or people won’t pay enough for it. 
 The Sweet Spot of Innovation in Brown, T., & Katz, B., Change By Design (2009)

Design strategists should help teams find objective ways to value design ideas, approaches, and solutions, to justify the investment in them from the desirability, feasibility, and viability perspectives.

Quantifying and Qualifying Desirability

When assessing a business idea, we must always start by assessing desirability because — as we established in the overview section — failing to address customers’ needs and find product-market fit with a solution is the first hurdle that most business ideas fail to get past (Wong, R., Lean business scorecard: Desirability, 2021).

The Lean Business Scorecard v3.3 — Creative Commons Attribution– Robin Wong (download your copy)

When product managers, designers and strategists are crafting their strategy or working on a discovery phase, the kind of user and customer insights they are looking for is really hard to acquire through quantitative metrics, either because we cannot derive insights from the existing analytics coming from the product, or because we are creating something new (so there are no numbers to refer to). Most of these insights (especially around desirability and satisfaction) would come from preference data.

Preference data consists of the more subjective data that measures a participant’s feelings or opinions of the product.

Rubin, J., & Chisnell, D., Handbook of usability testing: How to plan, design, and conduct effective tests (2011)

Just because preference data is more subjective, it doesn’t mean it is less quantifiable: although design and several usability activities are certainly qualitative, the image of good and bad designs can easily be quantified through metrics like perceived satisfaction, recommendations, etc. (Sauro, J., & Lewis, J. R., Quantifying the user experience: Practical statistics for user research, 2016).

Preference data is typically collected via written, oral, or even online questionnaires, or through the debriefing session of a test. A rating scale that measures how a participant feels about the product is an example of a preference measure (Rubin, J., & Chisnell, D., Handbook of usability testing, 2011).

Learn about ways to objectively measure the value of design in The Need for Quantifying and Qualifying Strategy (Photo by Pixabay on Pexels.com)

You can find examples of preference data that design strategists can collect to inform strategic decisions in my previous post, so I’ll just mention the ones that I find get the most traction with business stakeholders.

Quantifying and Qualifying Feasibility

Maybe I’m an idealist, but I believe everything is feasible — given enough time and resources. The task of strategists then becomes to understand the expectations of stakeholders, facilitate the discussions necessary to identify the gap between the vision and the current state, and then work out what needs to be true to get to that vision.

Learn more about creating product vision in The Importance of Vision (Photo by Pixabay on Pexels.com)

With that being said, the gap between the current state and the vision can only be filled by the people who are actually going to do the work, which is why I think a lot of projects fail: decisions are made (e.g. roadmaps, release plans, investment priorities) without involving the people who are actually going to do the work.

The Lean Business Scorecard v3.3 — Creative Commons Attribution– Robin Wong (download your copy)

Feasibility focuses on the business operations and capabilities needed to run and grow a business idea, not just the feasibility of the solution at the core of the idea.

Wong, R., Lean business scorecard: Feasibility (2021)

We need to ensure feasibility before we decide, not after. Not only does this end up saving a lot of wasted time, but it turns out that getting the engineers’ perspective earlier also tends to improve the solution itself, and it’s critical to shared learning (Cagan, M., Inspired: How to create tech products customers love, 2017).

Quantifying and Qualifying Viability

Similarly to Feasibility, we need to validate business viability of our ideas during discovery, not after (Cagan, M., Inspired: How to create tech products customers love, 2017).

It’s absolutely critical to ensure that the solution we build will meet the needs of our business — before we take the time and expense to build out the product.

Cagan, M., Inspired: How to create tech products customers love (2017)

Once you have sufficient evidence that you’ve found the right opportunity to address AND you have a solution that helps your target audience do something they couldn’t before, you then need to prove you can get paid enough for this product or service to have a commercially viable business that can sustain itself over time (Wong, R., Lean business scorecard: Viability, 2021).

The Lean Business Scorecard v3.3 — Creative Commons Attribution– Robin Wong (download your copy)

Quantitative Aspects of Testing Business Ideas

There is an old saying that what gets measured gets done. There is more than a little truth to this. If aspirations are to be achieved, capabilities developed, and management systems created, progress needs to be measured (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., & Martin, R. L., 2013).

Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.

“Manage What Matters” in Playing to Win: How Strategy Really Works (Lafley, A.G., Martin, R. L., 2013)

I will refrain from proposing a single metric for quantifying and qualifying design for a few reasons:

  • Different organizations have made specific strategic choices about winning that uniquely position them in their corresponding industry: any metrics should take into consideration both the goals of users and what the business is trying to learn from the study, with usability studies designed accordingly.
  • Different organizations are at different levels of design maturity: if you’ve never done any kind of usability study, it’s hard not only to build the capability to run such studies, but also to figure out how to feed the information back into product decisions.
  • Some of these metrics are discovered too late: since some of these metrics are collected either during usability studies or after the product or service is released, it means that — by the time you collect them — a lot of product decisions have already been made. Some of these decisions could be quite expensive to reverse or pivot at that point, so it might be too late for quantifying and qualifying the success of strategy.
  • Beware of how you measure: quantitative metrics are good at explaining the ‘What’ and ‘How many’ of a given hypothesis; the ‘Why’ is usually better captured through qualitative research methods.
  • The world is different now: some of the signals and indicators that worked for measuring success may not work for new products or services you are trying to create.

Never assume that the metrics and standards used to evaluate the existing business have relevance for the innovation initiative.

Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge (2010)

Quantifying and qualifying the value and performance of design does not need to be complex or foreboding. There is a case for intuition, a case for qualitative user research, a case for quantitative research, and a case for synthesis. And there is even room for imponderables, because some things are simply beyond definition or measurement (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy, 2008).

Quantitative research can tell you how many customers are doing (or not doing) something. But it won’t tell you why the customers are doing it (or not doing it).

Olsen, D. The lean product playbook, 2015

So, what would a set of tools that both empowers intuition and creativity, but also help us find objective ways to value design solutions look like?

Pirate Metrics (a.k.a. AARRR!)

Pirate Metrics—a term coined by venture capitalist Dave McClure—gets its name from the acronym for five distinct elements of building a successful business. McClure categorizes the metrics a startup needs to watch into acquisition, activation, retention, revenue, and referral—AARRR (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Pirate Metrics or AARRR!: acquisition, activation, retention, revenue, and referral
Pirate Metrics or AARRR! in Lean Analytics (Croll, A., & Yoskovitz, B., 2013)

McClure recommends tracking two or three key metrics for each of the five elements of his framework. That is a good idea because your conversion funnel isn’t really just one overall metric; you can track more detailed metrics, making a distinction between the macro-metrics and the micro-metrics that relate to them (Olsen, D., The lean product playbook, 2015).
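To make the macro/micro distinction concrete, here is a minimal Python sketch of an AARRR funnel. The stage counts and micro-metric numbers are hypothetical, purely to illustrate how macro conversion rates sit on top of the more detailed metrics that feed them:

```python
# Hypothetical AARRR funnel counts for one cohort; the numbers are made up
# purely to illustrate macro- vs micro-metrics.
funnel = {
    "acquisition": 10_000,  # visitors who landed on the site
    "activation":   2_500,  # visitors who completed sign-up
    "retention":      900,  # users still active after 30 days
    "revenue":        180,  # users who converted to a paid plan
    "referral":        45,  # paying users who invited someone
}

def conversion_rates(funnel: dict[str, int]) -> dict[str, float]:
    """Macro-metrics: step-to-step conversion rates between adjacent AARRR stages."""
    stages = list(funnel)
    return {
        f"{stages[i]} -> {stages[i + 1]}": funnel[stages[i + 1]] / funnel[stages[i]]
        for i in range(len(stages) - 1)
    }

# Micro-metrics sit one level below each macro step, e.g. for activation:
activation_micro = {
    "clicked_sign_up": 4_000 / 10_000,  # landing page -> sign-up form
    "completed_form":  2_500 / 4_000,   # sign-up form -> account created
}

for step, rate in conversion_rates(funnel).items():
    print(f"{step}: {rate:.1%}")
```

A drop in a macro rate (say, acquisition → activation) then points you at the micro-metrics underneath it, rather than leaving you staring at one opaque funnel number.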

HEART Metrics

What the research team from Google noted was that, while small-scale frameworks were commonplace, there was no framework in place for measuring the experience on a large scale via automated means. The HEART framework is specifically targeted at that kind of measurement. However, the principles are equally useful at a small scale, though the methodologies used to derive measurements at a smaller scale are likely to be substantially different (Rodden, K., Hutchinson, H., & Fu, X., Measuring the user experience on a large scale: User-centered metrics for web applications, 2010).

There are five metrics used in the HEART framework:

  • Happiness
  • Engagement
  • Adoption
  • Retention
  • Task Success
The Google HEART framework is incredibly useful to measure the quality of your user experience. The metrics are Happiness, Engagement, Adoption, Retention, Task Success.
“Google HEART metrics” in What Makes a Good UX/UI Design? 
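Rodden, Hutchinson and Fu pair HEART with a Goals–Signals–Metrics process for deciding what to actually measure in each category. Below is a minimal worksheet sketch; the specific goals, signals, and metrics are illustrative assumptions, not prescriptions from the paper:

```python
# A minimal Goals-Signals-Metrics worksheet for the HEART categories.
# The goals, signals, and metrics are illustrative placeholders.
heart_worksheet = {
    "Happiness":    {"goal": "Users find the product helpful",
                     "signal": "Survey responses",
                     "metric": "Average satisfaction rating on a 7-point scale"},
    "Engagement":   {"goal": "Users return often",
                     "signal": "Visits per user",
                     "metric": "Average sessions per user per week"},
    "Adoption":     {"goal": "New users try the feature",
                     "signal": "First-time use events",
                     "metric": "% of new accounts using the feature within 7 days"},
    "Retention":    {"goal": "Users keep coming back",
                     "signal": "Repeat usage over time",
                     "metric": "30-day retention rate"},
    "Task Success": {"goal": "Users complete key tasks",
                     "signal": "Task completion events and errors",
                     "metric": "Completion rate and time-on-task"},
}

for category, row in heart_worksheet.items():
    print(f"{category:12} goal: {row['goal']}; metric: {row['metric']}")
```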

HEART can be seen as a manifestation of the Technology Acceptance Model (TAM)—after all, both include Adoption in their names. The TAM is itself a manifestation of the Theory of Reasoned Action. The TRA is a model that predicts behavior from attitudes. The TAM suggests that people will adopt and continue to use technology (the EAR of the HEART) based on the perception of how easy it is to use (H), how easy it actually is to use (T), and whether it’s perceived as useful (H). The SUS, SEQ, and the ease components of the UMUX-Lite and SUPR-Q are all great examples of measuring perceived ease, and bring the Happiness to the HEART model (Sauro, J., Should you love the HEART framework?, 2019):

Jeff Sauro mapped together what he sees as the overlaps among the TRA, TAM, and other commonly collected UX metrics.
Linking the HEART framework with other existing models (TAM, TRA, and metrics that link them) in Should you love the HEART framework? (Sauro, J., 2019)

Qualitative Aspects of Testing Business Ideas

As I mentioned earlier in this article, quantitative data may be easier to get, but it has its limitations:

  • Some of these metrics are discovered too late: since some of these metrics are collected either during usability studies or after the product or service is released, it means that — by the time you collect them — a lot of product decisions have already been made. Some of these decisions could be quite expensive to reverse or pivot at that point, so it might be too late for quantifying and qualifying the success of strategy.
  • Beware of how you measure: quantitative metrics are good at explaining the ‘What’ and ‘How many’ of a given hypothesis; the ‘Why’ is usually better captured through qualitative research methods.

Forming Hypotheses

Every project begins with assumptions. There’s no getting around this fact. We assume we know our customers (and who our future customers will be). We assume we know what the competition is doing and where our industry is headed. We assume we can predict the stability of our markets. These assumptions are predicated on our ability to predict the future (Gothelf, J., & Seiden, J., Sense and respond, 2017).

If we accept that we’re always starting with assumptions, the real question becomes, What do we do about the risk of being wrong?

Gothelf, J., & Seiden, J., Sense and respond (2017).

Many companies try to deal with complexity with analytical firepower and sophisticated mathematics. That is unfortunate, since the most essential elements of creating a hypothesis can typically be communicated through simple pencil-and-paper sketches (Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge, 2010).

The key to dealing with complexity is to focus on having good conversations about assumptions.

Break Down the Hypothesis in The other side of innovation: Solving the execution challenge, Govindarajan, V., & Trimble, C., (2010)

That said, flawed assumptions are one of the worst barriers to innovation. They’re invisible, chronic and insidious, and we’re all ruled by them in one situation or another. How do they hold us back? (Griffiths, C., & Costi, M., The Creative Thinking Handbook: Your step-by-step guide to problem solving in business, 2019):

  • They lead us to think we know all the facts when we really don’t. Assumptions such as ‘We have to launch a new range of products every year to keep up with competitors’ should be checked for validity.
  • They cause us to become trapped by our own self-imposed limits and specialisations, for example Xerox’s failure to capture the personal computing market by limiting itself to making better copiers.
  • Rules, like assumptions, keep us stuck in outdated patterns. The more entrenched the rule is, the greater the chance that it’s no longer valid. Sometimes, we need to shake up or reverse our existing patterns to stand out from everyone else.

Usually, we want to start with the biggest questions and work our way down into the details. Typically, you would start with questions like these (Gothelf, J., & Seiden, J., Sense and respond, 2017):

  • Does the business problem exist?
  • Does the customer need exist?
  • How do we know whether this feature or service will address that need?
Help teams with facilitating investment discussions with Assumptions Mapping in Bland, D. J., & Osterwalder, A., Testing business ideas (2020)

As you sit down with your teams to plan out your next initiative, ask them these questions (Gothelf, J., & Seiden, J., Sense and respond. 2017):

  • What is the most important thing (or things) we need to learn first?
  • What is the fastest, most efficient way to learn that?
Learn more about facilitating investment discussions by finding objective ways to value ideas, approaches, solutions to justify the investment on them (Photo by Pixabay on Pexels.com)

Viability Hypothesis

The Business Model Canvas contains financial risk in the revenue streams and cost structure. Identify the viability hypotheses you are making (Bland, D. J., & Osterwalder, A., Testing Business Ideas: A Field Guide for Rapid Experimentation, 2019):

Revenue Streams

We believe that we…

  • can get customers to pay a specific price for our value propositions.
  • can generate sufficient revenues.

Cost Structure

We believe that we…

  • can manage costs from our infrastructure and keep them under control.

Profit

We believe that we…

  • can generate more revenues than costs in order to make a profit.

If you only have one hypothesis to test, it’s clear where to spend the time you have for discovery work. If you have many hypotheses, how do you decide where your precious discovery hours should be spent? Which hypotheses should be tested? Which ones should be de-prioritised or just thrown away? To help answer this question, Jeff Gothelf put together the Hypothesis Prioritisation Canvas (Gothelf, J., The hypothesis prioritization canvas, 2019):

The hypothesis prioritization canvas helps facilitate an objective conversation with your team and stakeholders to determine which hypotheses will get your attention and which won’t (Gothelf, J., 2019)
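Gothelf’s canvas is ultimately a conversation aid, but the triage logic behind it can be sketched in a few lines. The sketch below assumes the two axes are perceived value and risk; the 1–5 scores and quadrant labels are my own placeholders rather than the canvas’s official wording:

```python
# A rough sketch of the triage logic behind a hypothesis prioritisation canvas,
# assuming the axes are perceived value and risk (both scored 1-5 here).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    value: int  # how much is at stake if we're right (1-5)
    risk: int   # how uncertain we are / how costly it is to be wrong (1-5)

def triage(h: Hypothesis, threshold: int = 3) -> str:
    high_value, high_risk = h.value >= threshold, h.risk >= threshold
    if high_value and high_risk:
        return "test now"        # worth spending discovery time on
    if high_value and not high_risk:
        return "just build it"   # little to learn, plenty to gain
    if not high_value and high_risk:
        return "discard or park" # not worth the uncertainty
    return "deprioritise"        # low stakes either way

backlog = [
    Hypothesis("Customers will pay for a premium tier", value=5, risk=4),
    Hypothesis("Renaming the button improves clicks", value=2, risk=1),
]
for h in backlog:
    print(f"{h.statement}: {triage(h)}")
```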

Design and Conduct Tests

You test the most important hypotheses with appropriate experiments. Each experiment generates evidence and insights that allow you to learn and decide. Based on the evidence and your insights, you either adapt your idea, if you learn that you were on the wrong path, or continue testing other aspects of your idea, if the evidence supports your direction (Bland, D. J., & Osterwalder, A., Testing Business Ideas: A Field Guide for Rapid Experimentation, 2019).

A successful project is not deemed successful because it is delivered according to a plan, but because it stood the test of reality.

“Walk the walk” in The decision maker’s playbook. Mueller, S., & Dhar, J. (2019)

By building, measuring and learning, designers are able to get closer to great user experiences sooner rather than later (Gothelf, J., & Seiden, J., Lean UX: Applying lean principles to improve user experience, 2021)

Experiments and Outcomes

Experimentation is at the heart of what software developers call agile development. Rather than planning all activities up-front and then sequentially, agile development emphasises running many experiments and learning from them (Mueller, S., & Dhar, J., The decision maker’s playbook, 2019)

Experiments replace guesswork, intuition and best practices with knowledge.

Mueller, S., & Dhar, J., The decision maker’s playbook (2019)

It takes a certain level of maturity to run effective experiments. To avoid shipping experiments for the sake of shipping experiments, teams need to focus on delivering outcomes. They also need to be willing to embrace failure to make progress (Garbugli, É., Solving Product, 2020).

With the right mindset, experiments can help teams to make increasingly better decisions.

Garbugli, É., Solving Product, 2020

To create this mindset, assemble a multidisciplinary team and let them work out their own process. Teresa Torres calls this multidisciplinary team “Product Trio” (a designer, a product manager, and a developer).

For a learning culture to thrive, your teams must feel safe to experiment. Experiments are how we learn, but experiments — by nature — fail frequently. In a good experiment, you learn as much from failure as from success. If failure is stigmatised, teams will take few risks (Gothelf, J., & Seiden, J., Sense and respond. 2017).

On average, 80% of experiments fail to deliver the expected outcomes, but with the right method, 100% of experiments can help you learn and progress (Garbugli, É., Solving Product, 2020).

This means that your progress will not be linear and predictable and that you should not be judged by your delivery rate (the amount of stuff you ship) but by your learning rate, and by your overall progress towards strategic goals — in other words, by the extent to which you achieve the outcomes in question.

Gothelf, J., & Seiden, J., Sense and respond (2017).

You should focus on one or two core goals at a time, aligning with your North Star metric or the AARRR steps that you’re focused on. Your goals should be big, your experiments small and nimble (Garbugli, É., Solving Product, 2020).

Teams will be more willing to experiment if they feel they are not being measured by the delivery of hard requirements, but appreciated by achieving great outcomes that create value.

You might be asking, “What do you mean by outcome?” Joshua Seiden defines an outcome as “a change in user behaviour that drives business results.”

Using outcomes creates focus and alignment. It eliminates needless work. And it puts the customer at the center of everything you do.

Seiden, J., Outcomes over Output (2019)

You can help the team and leaders to start thinking in terms of outcomes by asking three simple questions (Seiden, J., Outcomes over Output, 2019):

  • What are the user and customer behaviours that drive business results? If the team gets stuck on trying to answer that question, there is a good chance that working on alignment diagrams will help.
  • How do we get people to do more of these things?
  • How do we know we’re right? The easiest (and the hardest) way to answer that question is to design and conduct tests.
The Project Logic Model, adapted from Kellogg Foundation
“What are the changes in behaviour that drive business results?” in Outcomes over Output, Seiden, J. (2019)

Managing by outcomes communicates to the team how they should be measuring success. A clear outcome helps a team align around the work they should be prioritizing, it helps them choose the right customer opportunities to address, and it helps them measure the impact of their experiments. Without a clear outcome, discovery work can be never-ending, fruitless, and frustrating (Torres, T., Continuous Discovery Habits, 2021).

Learn how to use Jobs to be Done to facilitate two-way negotiations between leadership and product teams that allows for managing by outcomes (Photo by Sora Shimazaki on Pexels.com)

Testing Business Ideas through Experiments

Technology is awesome. It really is. It helps humans communicate, find old friends, work more effectively, have fun, find places, and oh-so-many other great things. In many cases, technology is also hard, time-consuming, and expensive to develop. In this step, you will need to find a way to solve a problem you want to solve with or without technology. Manual ways of solving problems are, without a doubt, inefficient, yet they will teach you a lot about what people want without actually developing any technology (Sharon, T., Validating Product Ideas, 2016).

Learning through experimentation has a number of benefits (Mueller, S., & Dhar, J., The decision maker’s playbook, 2019):

  • It allows you to focus on actual outcomes: a successful project is not deemed successful because it is delivered according to a plan, but because it stood the test of reality.
  • It decreases re-work: because the feedback cycles are short, potential errors or problems are spotted quickly and can be smoothed out faster than conventional planning.
  • It reduces risks: because of increased transparency throughout the implementation process, risks can be better managed than in a conventional project.

One way to help the team think through experiments is to think about how we are going to answer the following questions (Croll, A., & Yoskovitz, B., Lean Analytics, 2013):

  • What do you want to learn and why?
  • What is the underlying problem we are trying to solve, and who is feeling the pain? This helps everyone involved have empathy for what we are doing.
  • What is our hypothesis?
  • How will we run the experiment, and what will we build to support it?
  • Is the experiment safe to run?
  • How will we conclude the experiment, and what steps will be taken to mitigate issues that result from the experiment’s conclusion?
  • What measures will we use to invalidate our hypothesis with data? Include what measures will indicate the experiment isn’t safe to continue.
Experiment Canvas in Design a better business: New tools, skills, and mindset for strategy and innovation (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., 2016)
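One lightweight way to keep those questions in front of the team is to capture each experiment as a structured record and hold off launching it until every field is answered. This is a sketch of my own, not a field-for-field rendering of the Experiment Canvas, and the field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    learning_goal: str           # what do we want to learn and why?
    problem_and_who: str         # underlying problem and who feels the pain
    hypothesis: str              # the falsifiable statement we are testing
    method: str                  # how we will run it / what we will build
    safe_to_run: bool            # has the safety question been answered?
    success_measures: list[str]  # measures that could invalidate the hypothesis
    stop_conditions: list[str] = field(default_factory=list)  # how we conclude

    def open_questions(self) -> list[str]:
        """Return the questions still unanswered before the experiment can launch."""
        missing = []
        if not self.hypothesis:
            missing.append("hypothesis")
        if not self.success_measures:
            missing.append("success measures")
        if not self.safe_to_run:
            missing.append("safety review")
        return missing
```

In practice the record matters less than the refusal to run anything whose `open_questions()` list is not empty.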

What’s important to understand is that testing rarely means just building a smaller version of what you want to sell. It’s not about building, nor about selling something. It’s about testing the most important assumptions, to show this idea could work. And that does not necessarily require building anything for a very long time. You need to first prove that there’s a market, that people have the jobs, pains and gains, and that they’re willing to pay (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020).

A strong assumption test simulates an experience, giving your participant the opportunity to behave either in accordance with your assumption or not. This behaviour is what allows us to evaluate our assumption (Torres, T., Continuous Discovery Habits, 2021).

Any quantifying and qualifying set of tools or frameworks should have a component of Learning and Deciding  (Bland, D. J., & Osterwalder, A.,Testing business ideas, 2020)
Learn and Decide Loop in Testing business ideas (Bland, D. J., & Osterwalder, A., 2020)

To construct a good assumption test, you’ll want to think carefully about the right moment to simulate (Torres, T., Continuous Discovery Habits, 2021).

You don’t want to simulate any more than you need to. This is what allows you to iterate quickly through several assumption tests.

Torres, T., Continuous Discovery Habits (2021)

With any new business experiment you need to ask yourself how quickly you can get started and how quickly it produces insights. For example, an interview series with potential customers or partners can be set up fairly quickly. Launching a landing page and driving traffic to it can be done with even greater speed. You’ll generate insights quickly. A technology prototype, on the other hand, will take far more time to design and test. Such a prototype might provide a good understanding of user behaviour, but it will require more time to generate insights (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020).

More and more organizations test their business ideas before implementing them. The best ones perform a mix of experiments to prove that their ideas have legs. They ask two fundamental questions to design the ideal mix of experiments:

  1. Speed: How quickly does an experiment produce insights?
  2. Strength: How strong is the evidence produced by an experiment?
Strength versus Speed in How Strong is Your Innovation Evidence? (Osterwalder, A., 2017).
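The speed/strength trade-off can be made explicit by scoring each candidate experiment on both dimensions and picking the strongest evidence you can afford within your current time budget. The experiments, day counts, and 1–5 strength scores below are hypothetical:

```python
# A sketch of the speed-vs-strength trade-off: each candidate experiment gets a
# hypothetical estimate of days-to-insight and evidence strength (1-5), and we
# pick the strongest experiment that fits the time budget.
candidates = [
    # (name, days to insight, evidence strength 1-5)
    ("Customer interviews",      5,  2),
    ("Landing page smoke test",  3,  3),
    ("Clickable prototype test", 10, 4),
    ("Working tech prototype",   40, 5),
]

def best_within_budget(candidates, max_days):
    affordable = [c for c in candidates if c[1] <= max_days]
    return max(affordable, key=lambda c: c[2]) if affordable else None

print(best_within_budget(candidates, max_days=7))   # early on: cheap and fast
print(best_within_budget(candidates, max_days=45))  # later: stronger evidence
```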

Before jumping to an experiment, it is important to consider the key principles of rapid experimentation when testing a new business idea (Bland, D. J., How to select the next best test from the experiment library, 2020):

  1.  Go cheap and fast early on in your journey – If possible, don’t spend a lot of money early on. You are only beginning to understand the problem space, so you don’t want to spend money when you can learn for free. Also, try to move quickly and learn fast, instead of slow and perfect.

    Quite often, I am asked “how many experiments should I run”? This is a very hard question to answer as it depends on so many variables. But a general rule of thumb is 12 experiments in 12 weeks. 1 experiment a week is a good pace to keep up momentum and 12 data points is typically a good checkpoint to come back and reassess your business model.
  2. Increase the strength of evidence with multiple experiments for the same hypothesis – Don’t hesitate to run multiple experiments for the same hypothesis. Rarely do we witness a team that only runs one experiment and uncovers a multimillion dollar opportunity. The idea here is to not get too excited or too depressed after running one experiment. Give yourself permission to run multiple to understand if you have genuinely validated your hypothesis.

    Remember, a typical business model may have several critical hypotheses you need to test and for each hypothesis, you may need to run multiple experiments to validate it. Refer to our previous blog post on Assumptions Mapping to learn how to define your hypotheses and determine which ones to run first.
  3. Always pick the experiment that produces the strongest evidence, given your constraints – Not every experiment applies to every business. B2B differs from B2C which differs from B2G. A 100 year old corporation’s brand is much more important than a 100 hour old startup’s brand. Pick an experiment that produces evidence, but don’t risk it all. Make small bets that are safe to fail.

    When working with corporate innovators, the phrase I most often hear is “we can’t do that”. Innovators often feel hamstrung when working in heavily regulated industries. Many may choose to bypass testing certain hypotheses because of the constraints. This is not something I would recommend. If you really cannot test a hypothesis, the last resort would be to consider pivoting your business model. Remember, a testable idea is always better than a good idea.
  4. Reduce uncertainty as much as you can before you build anything – In this day and age you can learn quite a bit without building anything at all. Defer your build as long as possible, because building is often the most expensive way to learn.

    In the Experiment Library we’ve compiled a list of creative ways to demonstrate a prototype without building out your final product — experiments such as Wizard of Oz, Clickable Prototype and Single Feature MVP.
Strategyzer Experiment Library
Matching an experiment to the type of risk in your business strategy (Desirability, Viability and Feasibility) ensures you collect the right kind of input before deciding and — more importantly — don’t waste any time conducting experiments (Bland, D. J., How to select the next best test from the experiment library, 2020).

Early Signs versus Large Scale Experiments

Inevitably, someone on your team is going to raise a concern with making decisions based on small numbers. How can we have confidence in the data if we talk to only five customers? You might be tempted to test with larger pools of people to help get buy-in. But this strategy comes at a cost—it takes more time. We don’t want to invest the time, energy, and effort into an experiment if we don’t even have an early signal that we are on the right track (Torres, T., Continuous Discovery Habits, 2021).

Rather than starting with a large-scale experiment (e.g., surveying hundreds of customers, launching a production-quality A/B test, worrying about representative samples), we want to start small. You’ll be pleasantly surprised by how much you can learn from getting feedback from a handful of customers.

Torres, T., Continuous Discovery Habits (2021)

With assumption testing, most of our learning comes from failed tests. That’s when we learn that something we thought was true might not be. Small tests give us a chance to fail sooner. Failing faster is what allows us to quickly move on to the next assumption, idea, or opportunity (Torres, T., Continuous Discovery Habits, 2021).

As we test assumptions, we want to start small and iterate our way to bigger, more reliable, more sound tests, only after each previous round provides an indicator that continuing to invest is worth our effort. We stop testing when we’ve removed enough risk and/or the effort to run the next test is so great that it makes more sense to simply build the idea.

Torres, T., Continuous Discovery Habits (2021)

As Karl Popper, a renowned 20th-century philosopher of science, argues, “Good tests kill flawed theories,” preventing us from investing where there is little reward, and “we remain alive to guess again,” giving us another chance to get it right (Torres, T., Continuous Discovery Habits, 2021).

Risk Analysis and Assessment

Once risks are identified, they can be prioritised according to their potential impact and the likelihood of them occurring. This helps to highlight not only where things might go wrong and what their impact would be, but how, why and where these catalysts might be triggered (Kourdi, J., Business Strategy: A guide to effective decision-making, 2015):

  • Technology. New hardware, software or system configuration can trigger risks, as can new demand on existing information systems and technology.
  • Organisational Change. Risks are triggered by — for example — new management structures or reporting lines, new strategies and commercial agreements.
  • Processes. New products, markets and acquisitions all cause change and can trigger risks.
  • People. New employees, losing key people, poor succession planning, or weak people management can all create dislocation, but the main danger is behaviour: everything from laziness to fraud, exhaustion and simple human error can trigger risks.
  • External factors. Changes to regulation and political, economic or social developments can all affect strategic decisions by bringing to the surface risks that may have lain hidden.

Without learning, you risk delivering a product or service no one finds valuable.

Gothelf, J., & Seiden, J., “The Sense and Repond Model” in Sense and respond (2017)

Analyse risks at the start of each iteration (or test); reassess them regularly. For each identified risk, ask yourself (Podeswa, H., “Analyse Risk” in The Business Analyst’s Handbook. 2008):

  • Who owns the risk?
  • What is the likelihood of the risk occurring?
  • What is the impact on the business if it occurs?
  • What is the best strategy for dealing with this risk?
  • Is there anything that can be done to prevent it from happening or to mitigate (lessen) the damage if it does occur?
Risk Map in Mapping Project Risk & Uncertainty (Alkira Consulting, 2021)
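A simple way to operationalise this prioritisation is to score each identified risk on likelihood and impact and sort by the product of the two. The risks, owners, and 1–5 scores below are placeholders:

```python
# A minimal sketch of prioritising identified risks by likelihood x impact.
# The risks, owners, and 1-5 scores are illustrative placeholders.
risks = [
    {"risk": "Key integration partner drops support", "owner": "Tech lead",
     "likelihood": 2, "impact": 5},
    {"risk": "Target segment won't pay the planned price", "owner": "PM",
     "likelihood": 4, "impact": 4},
    {"risk": "New data regulation delays launch", "owner": "Legal",
     "likelihood": 3, "impact": 3},
]

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    exposure = r["likelihood"] * r["impact"]
    print(f"{exposure:>2}  {r['risk']}  (owner: {r['owner']})")
```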

Start by selecting the biggest risks: the uncertainty that must be addressed now so that you don’t take the product in the wrong direction and experience late failure (e.g. figuring out at a late stage that you are building a product nobody really wants or needs). Next, determine how you can best address the risks — for instance, by observing target users, interviewing customers, or employing a minimum viable product (MVP). Carry out the necessary work and collect the relevant feedback or data. Then analyse the results and use the newly gained insights to decide if you should persevere, pivot, or stop — if you should stick with your strategy, change it, or no longer pursue your vision — and take the appropriate actions accordingly (Pichler, R., Strategize, 2016).

Whether you work for a small start-up or an existing large organization, validate your riskiest assumptions as quickly and cheaply as possible so you don’t waste valuable time and resources toiling away at something that likely will never work.
Riskiest Assumption Canvas in Design a better business: New tools, skills, and mindset for strategy and innovation (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., 2016)

Iteratively reworking the product strategy encourages you to carry out just enough market research just in time, avoiding too much or too little research and addressing the biggest risks first so that you can quickly understand which parts of your strategy are working and which are not, thus avoiding late failure (Pichler, R., Strategize, 2016).

Learn and Decide

Learning faster than everyone else is no longer enough. You need to put that learning into action, because what you’ve learned has an expiration date: both markets and technology move so quickly that the insights you’ve gained can expire within months, weeks or even days. You should take action in the form of (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020):

  • Next steps to make progress with testing and de-risking a business idea.
  • Informed decisions based on collected insights.
  • Decisions to abandon, change, and/or continue testing a business idea.
With the Learning Card, you can systematically capture insights from your experiments and turn them into actionable learnings in four steps. These two tools paired together will supercharge your experiments and help you find the right business model and value proposition for your idea.
Learning Card in Testing business ideas (Bland, D. J., & Osterwalder, A., 2020)

Persevere, Pivot, or Stop

Once you have collected the relevant feedback or data, and reviewed and analysed it, ask yourself if your strategy is still valid: your initial product strategy may contain plenty of assumptions and risks, and you may well discover that the strategy is wrong and does not work. If that is the case, then you have two choices (Pichler, R., Strategize, 2016):

  • stop and let go of your vision, or
  • stick with the vision and change the strategy, which is also called a pivot.

Pivoting is attractive only if you pivot early, when the cost of changing direction is comparatively low.

Pichler, R., Strategize (2016)

You should therefore aim to find out quickly if anything is wrong with your strategy, and if you need to fail, then fail fast. While a late pivot can happen, you should avoid it, because the later it occurs, the more difficult and costly it is likely to be (Pichler, R., Strategize, 2016).

By building, measuring and learning, designers are able to get closer to great user experiences sooner rather than later (Gothelf, J., & Seiden, J., Lean UX: Applying lean principles to improve user experience, 2013).

Learn more about what methods, tools or techniques are available for pivot and risk mitigation, and what signals we need capture in order to know if we should Persevere, Pivot or Stop (Photo by Javon Swaby on Pexels.com)

Avoid These Common Anti-patterns

As you design and run your assumption tests, keep these common anti-patterns in mind (Torres, T., Continuous Discovery Habits, 2021):

  • Overly complex simulations. Some teams spend countless hours, days, or even weeks trying to design and develop the perfect simulation. It’s easy to lose sight of the goal. In your first round of testing, you are looking to design fast tests that will help you gather quick signals. Design your tests to be completed in a day or two, or a week, at most. This will ensure that you can keep your discovery iterations high.
  • Using percentages instead of specific numbers when defining evaluation criteria. Many teams equate 70% and 7 out of 10, so instead of defining their evaluation criteria as 7 out of 10, they tend to favor the percentage. These sound equivalent, but they aren’t. First, when testing with small numbers, we can’t conclude that 7 out of 10 will continue to mean 70% as our participant size grows. We want to make sure that we don’t draw too strong a conclusion from our small signals. Second, and more importantly, “70%” is ambiguous. If we test with 10 people and only 6 exhibit our desired behavior, some of us might conclude that the test failed. Others might argue that we need to test with more people. Be explicit from the get-go about how many people you will test with when defining your success criteria (see the sketch after this list).
  • Not defining enough evaluation criteria. It’s easy to forget important evaluation criteria. At a minimum, you need to define how many people to test with and how many will exhibit the desired behavior. But for some tests, defining the desired behavior may involve more than one number. For example, if your test involves sending an email, you might need to define how many people will receive the email, how long you’ll give them to open the email, and whether your success criteria is “opens” or “clicks.” Pay particular attention to the success threshold. Complex actions may require multiple measurements (e.g., opens the email, clicks on the link, takes an action).
  • Testing with the wrong audience. Make sure that you are testing with the right people. If you are testing solutions for a specific target opportunity, make sure that your participants experience the need, pain point, or desire represented by that target opportunity. Remember to recruit for variation. Don’t just test with the easiest audience to reach or the most vocal audience.
  • Designing for less than the best-case scenario. When testing with small numbers, design your assumption tests such that they are likely to pass. If your assumption test passes with the most likely audience, then you can expand your reach to tougher audiences. This might feel like cheating, but you’ll be surprised how often your assumption tests still fail. If you fail in the best-case scenario, your results will be less ambiguous. If your test fails with a less-than-ideal audience, someone on the team is going to argue you tested with the wrong audience, and you’ll have to run the test again. Remember, we want to design our tests to learn as much as we can from failures.
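As a concrete illustration of the second anti-pattern above, the sketch below defines evaluation criteria as explicit counts (how many participants, how many must exhibit the behaviour), so a result like 6 out of 10 is an unambiguous fail rather than a debate about percentages. The names and numbers are mine, not Torres’s:

```python
# A sketch of defining evaluation criteria as explicit counts instead of
# percentages, so the pass/fail decision is unambiguous (numbers are examples).
from dataclasses import dataclass

@dataclass
class EvaluationCriteria:
    participants: int   # how many people we will test with
    must_exhibit: int   # how many must show the desired behaviour
    behaviour: str      # the specific behaviour we are looking for

    def evaluate(self, observed: int) -> str:
        return "pass" if observed >= self.must_exhibit else "fail"

criteria = EvaluationCriteria(participants=10, must_exhibit=7,
                              behaviour="clicks the link in the email")
print(criteria.evaluate(observed=6))  # "fail" -- no debate about 60% vs "test more people"
```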

The Right Time for Testing Business Ideas

You should ask yourself the question “Do people want my product?” all the time — right when you have an idea, when you make a lot of progress with building and developing the product, and definitely after you launch it. Keep doing that. By asking the question before you actually build the product, feature, or service, you are reducing waste — time, resources, and energy. The more you learn about what people want before you build anything, the less time and effort you will spend on redundant code, hundreds of hours of irrelevant meetings, and the negative emotions of team members when they realize they wasted their blood, sweat, and tears on something nobody wanted (Sharon, T., Validating Product Ideas, 2016).

You might be asking yourself, “These are all great, but when should I be doing what?” Without knowing what kind of team set-up you have, and what kinds of processes you run in your organization, the best I can do is to map all of the techniques above to the Double Diamond framework.

The Double Diamond Framework

Design Council’s Double Diamond clearly conveys a design process to designers and non-designers alike. The two diamonds represent a process of exploring an issue more widely or deeply (divergent thinking) and then taking focused action (convergent thinking).  

  • Discover. The first diamond helps people understand, rather than simply assume, what the problem is. It involves speaking to and spending time with people who are affected by the issues.
  • Define. The insights gathered from the discovery phase can help you to define the challenge in a different way.
  • Develop. The second diamond encourages people to give different answers to the clearly defined problem, seeking inspiration from elsewhere and co-designing with a range of different people.
  • Deliver. Delivery involves testing out different solutions at small-scale, rejecting those that will not work and improving the ones that will.
Design Council’s framework for innovation also includes the key principles and design methods that designers and non-designers need to take, and the ideal working culture needed, to achieve significant and long-lasting positive change.
A clear, comprehensive and visual description of the design process in What is the framework for innovation? (Design Council, 2015)

Map of Testing Business Ideas Activities and Methods

Process awareness characterises the degree to which participants are informed about the process procedures, rules, requirements, workflow and other details. The higher the process awareness, the more deeply participants are engaged in the process, and the better the results they deliver.

In my experience, the biggest disconnect between the work designers need to do and the mindset of every other team member is usually about how quickly we tend — when not facilitated — to jump to solutions instead of contemplating and exploring the problem space a little longer.

Map of Quantifying and Qualifying Activities in the Double Diamond (Discover, Define, Develop and Deliver)

Knowing when the team should be diverging, when they should be exploring, and when they should be closing will help ensure they get the best out of the power of collective brainstorming and multiple perspectives, and keep the team engaged.

Testing Business Ideas during “Discover”

This phase has the highest level of ambiguity, so creating shared understanding by having a strong shared vision and good problem framing is really critical. Testing Business Ideas in this phase is probably the best way to increase your level of confidence that you’ve got the right problem framing.

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Learn more about creating product vision in The Importance of Vision (Photo by Pixabay on Pexels.com)

Testing Business Ideas during “Define”

In this phase we should see the level of ambiguity diminishing, and facilitating investment discussions has the highest payoff in mitigating back-and-forth. That said, the cost of changing your mind increases drastically in this phase. Helping the team create great choices is critical, and experimentation should provide the data that will help them make good decisions.

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Learn more about facilitating investment discussions by finding objective ways to value ideas, approaches, solutions to justify the investment on them (Photo by Pixabay on Pexels.com)

Testing Business Ideas during “Develop”

In this phase, we should be starting to capture signals to decide if we should persevere, pivot or stop. Since this is the phase where we move away from simulations and can put something that resembles the final product in front of customers and users, we should focus as much as possible on capturing both preference and performance data through concept validation and usability testing.

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Learn more about what methods, tools or techniques are available for pivot and risk mitigation, and what signals we need capture in order to know if we should Persevere, Pivot or Stop (Photo by Javon Swaby on Pexels.com)

Testing Business Ideas during “Deliver”

In this phase, it is probably too late to be testing business ideas, so the visibility and traceability systems should be collecting data from real customer usage and helping us make hard choices about whether to persevere, pivot, or stop on the next iteration of the product.

On the other hand — since the product is now in the hands of customers and users — we should be able to collect the richest data from live usage, which can inform decisions about our viability hypothesis and enable us to adjust strategic choices accordingly.

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Learn more about the visibility and traceability aspects of the execution of an idea/approach (Photo by Lukas on Pexels.com)

Facilitating discussions around Testing Business Ideas

I’m of the opinion that designers — instead of complaining that everyone else is jumping too quickly into solutions — should facilitate the discussions and help others raise the awareness around the creative and problem solving process.

I’ll argue for the Need for Facilitation in the sense that — if designers want to influence the decisions that shape strategy — they must step up to the plate and become skilled facilitators who respond, prod, encourage, guide, coach and teach as they guide individuals and groups to make decisions that are critical in the business world through effective processes.

That said, my opinion is that facilitation here does not only mean “facilitating workshops”, but facilitating decisions regardless of what kinds of activities are required.

Learn more about becoming a skilled facilitator (Photo by fauxels on Pexels.com)

Bland, D. J., & Osterwalder, A. (2020). Testing business ideas: A field guide for rapid experimentation. Standards Information Network.

Bland, D. J. (2020). How to select the next best test from the experiment library. Retrieved July 25, 2022, from Strategyzer.com website: https://www.strategyzer.com/blog/how-to-select-the-next-best-test-from-the-experiment-library

Brown, T., & Katz, B. (2009). Change by design: how design thinking transforms organizations and inspires innovation. [New York]: Harper Business

Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O’Reilly Media.

Design Council. (2015, March 17). What is the framework for innovation? Design Council’s evolved Double Diamond. Retrieved August 5, 2021, from designcouncil.org.uk website: https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond

Garbugli, É. (2020). Solving Product: Reveal Gaps, Ignite Growth, and Accelerate Any Tech Product with Customer Research. Wroclaw, Poland: Amazon.

Gothelf, J. (2019, November 8). The hypothesis prioritization canvas. Retrieved April 25, 2021, from Jeffgothelf.com website: https://jeffgothelf.com/blog/the-hypothesis-prioritization-canvas/

Gothelf, J., & Seiden, J. (2021). Lean UX: Applying lean principles to improve user experience. Sebastopol, CA: O’Reilly Media.

Gothelf, J., & Seiden, J. (2017). Sense and respond: How successful organizations listen to customers and create new products continuously. Boston, MA: Harvard Business Review Press.

Govindarajan, V., & Trimble, C. (2010). The other side of innovation: Solving the execution challenge. Boston, MA: Harvard Business Review Press.

Griffiths, C., & Costi, M. (2019). The Creative Thinking Handbook: Your step-by-step guide to problem solving in business. London, England: Kogan Page.

Kourdi, J. (2015). Business Strategy: A guide to effective decision-making. New York, NY: PublicAffairs

Lafley, A. G., & Martin, R. L. (2013). Playing to Win: How Strategy Really Works. Boston, MA: Harvard Business Review Press.

Lockwood, T. (2008). “Design Value: A Framework for Measurement.” In Lockwood, T., & Walton, T. (Eds.), Building Design Strategy: Using Design to Achieve Key Business Objectives. New York, NY: Allworth Press.

Mueller, S., & Dhar, J. (2019). The decision maker’s playbook: 12 Mental tactics for thinking more clearly, navigating uncertainty, and making smarter choices. Harlow, England: FT Publishing International.

Olsen, D. (2015). The lean product playbook: How to innovate with minimum viable products and rapid customer feedback (1st ed.). Nashville, TN: John Wiley & Sons.

Osterwalder, A. (2017). How Strong is Your Innovation Evidence? Retrieved December 24, 2021, from Strategyzer.com website: https://www.strategyzer.com/blog/how-strong-is-your-innovation-evidence

Pichler, R. (2016). Strategize: Product strategy and product roadmap practices for the digital age. Pichler Consulting.

Podeswa, H. (2008). The Business Analyst’s Handbook. Florence, AL: Delmar Cengage Learning.

Rodden, K., Hutchinson, H., & Fu, X. (2010). Measuring the user experience on a large scale: User-centered metrics for web applications. Proceedings of the 28th International Conference on Human Factors in Computing Systems – CHI ’10. New York, NY: ACM Press.

Rubin, J., & Chisnell, D. (2011). Handbook of usability testing: How to plan, design, and conduct effective tests (2nd ed.). Chichester, England: John Wiley & Sons.

Sauro, J., & Lewis, J. R. (2016). Quantifying the user experience: Practical statistics for user research (2nd Edition). Oxford, England: Morgan Kaufmann.

Sharon, T. (2016). Validating Product Ideas (1st Edition). Brooklyn, New York: Rosenfeld Media.

Torres, T. (2021). Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value. Product Talk LLC.

Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016). Design a better business: New tools, skills, and mindset for strategy and innovation. Nashville, TN: John Wiley & Sons.

Wong, R. (2021). Lean business scorecard: Desirability. Retrieved February 25, 2022, from Medium website: https://robinow.medium.com/lean-business-scorecard-desirability-ede59c82da78

Wong, R. (2021). Lean business scorecard: Feasibility. Retrieved February 25, 2022, from Medium website: https://robinow.medium.com/lean-business-scorecard-feasibility-aa36810ae779

Wong, R. (2021). Lean business scorecard: Viability. Retrieved February 25, 2022, from Medium website: https://robinow.medium.com/lean-business-scorecard-viability-de989a59aa74

By Itamar Medeiros

Originally from Brazil, Itamar Medeiros currently lives in Germany, where he works as Director of Design Strategy at SAP.

Working in the Information Technology industry since 1998, Itamar has helped truly global companies in several countries (Argentina, Brazil, China, Czech Republic, Germany, India, Mexico, The Netherlands, Poland, The United Arab Emirates, United States, Hong Kong) create great user experience through advocating Design and Innovation principles.

During his 7 years in China, he promoted the User Experience Design discipline as User Experience Manager at Autodesk and Local Coordinator of the Interaction Design Association (IxDA) in Shanghai.
