Categories
Design Strategy Talks & Workshops

The Need for Quantifying and Qualifying Strategy

Learn about quantifying and qualifying strategy as we make a case for a set of tools that helps teams find objective ways to value design.

In the last few posts, I talked about how designers and strategists can facilitate the discussions that lead to making good decisions, the importance of creating choices, and how priorities make things happen.

In this post, I’ll talk about the need for a set of tools for quantifying and qualifying strategy that both empowers intuition and creativity and helps teams find objective ways to value design solutions, justifying the experience investments that bring us ever closer to our vision and goals.

TL;DR

  • Design may enhance performance, but unless there are metrics to gauge that benefit, the difference it makes depends on conjecture and faith.
  • There are challenges to consider while quantifying and qualifying strategy:
    • Different organizations have specific strategic choices about winning that uniquely position them in their corresponding industries.
    • Different organizations are at different levels of design maturity.
    • Some metrics are collected too late: some strategic decisions made early on could be quite expensive to reverse or pivot at that point.
    • Beware of how you measure: quantitative metrics are good for explaining the ‘What’ and ‘How many’ of a given hypothesis; the ‘Why’ is usually better captured through qualitative research methods.
    • The world is different now: some of the signals and indicators that worked for measuring success may not work for new products or services you are trying to create.
  • We need objective ways to value design solutions to justify the experience investments at different decision points in strategic planning and execution, and to identify the discussions that strategists should facilitate (e.g., investment discussions, pivots, and risk mitigation) while tracking and tracing the implementation of the strategy to ensure we are bringing value to our customers and our business.
  • Measurement allows the comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.
  • Your initial product strategy may contain plenty of assumptions and risks, and you may well discover that the strategy is wrong and does not work. If that is the case, then you have two choices:
    • Stop and let go of your vision, or
    • stick with the vision and change the strategy, which is also called a pivot.
  • When product managers, designers, and strategists are crafting their strategy or working in the discovery phase, the kind of user and customer insights they are looking for is really hard to acquire through quantitative metrics: most insights (especially desirability and satisfaction) come from preference data.

Quantifying and Qualifying Design

As businesses increasingly recognize the power of design to provide significant benefits, business executives increasingly are asking for metrics to evaluate the performance of the design. What is needed is a framework for quantifying and qualifying design and strategy, a specific set of criteria and methods to be used as a structure to define and measure the values of design (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy, 2008).

“Design may enhance performance, but unless there are metrics to gauge that benefit, the difference it makes depends on conjecture and faith.”

Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy: Using Design to Achieve Key Business Objectives, (2008)

The following identifies ten categories of design measurement, all of which are relevant to business criteria and can be used as a framework for measuring the value of design (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy: Using Design to Achieve Key Business Objectives, 2008):

  1. Purchase Influence / Emotion
  2. Enable strategy / enter new markets
  3. Build brand image and corporate reputation
  4. Improve time to market and development process
  5. Design return on investment (ROI) / cost savings
  6. Enable product and service innovation
  7. Increase customer satisfaction / develop communities of customers
  8. Design patents and trademarks / create intellectual property
  9. Improve usability
  10. Improve sustainability

On a personal note: I’ve been in too many discussions in which stakeholders and designers get caught up on the Return on Investment (ROI) of design. If you keep having to justify the value that design brings to the organization, or you find it difficult to have meaningful conversations with stakeholders about connecting their business to any of the ten categories above, you may want to consider whether this is the right organization for you to work for.

It is crucial that designers and strategists engage with their business stakeholders to understand what objectives and unique positions they want their products to assume in the industry, and the choices they are making in order to achieve those objectives and positions.

Six Strategic Questions, adapted from "Strategy Blueprint" in Mapping Experiences: A Guide to Creating Value through Journeys, Blueprints, and Diagrams (Kalbach, 2020).

Even if you clearly articulated the answers to the six strategic questions (what are our aspirations, what are our challenges, what will we focus on, what are our guiding principles, what types of activities, and what outcomes do we expect), strategies can still fail — spectacularly — if you fail to establish management systems that support those choices. Without the supporting systems, structures, and measures for quantifying and qualifying outcomes, strategies remain a wish list, a set of goals that may or may not ever be achieved (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).

Learn more about the skills required for design strategists to influence the decisions that drive design vision forward in Strategy and Stakeholder Management (Photo by Rebrand Cities on Pexels.com)

Value of Design and Metrics

Increasingly, usability practitioners and user researchers are expected to quantify the benefits of their efforts. If they don’t, someone else will — unfortunately that someone else might not use the right metrics or methods (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016).

If you want to choose the right metrics, you need to keep five things in mind (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):

  • Qualitative versus quantitative metrics: Qualitative metrics are unstructured, anecdotal, revealing, and hard to aggregate; quantitative metrics involve numbers and statistics and provide hard numbers but less insight.
  • Vanity versus actionable metrics: Vanity metrics might make you feel good, but they don’t change how you act. Actionable metrics change your behavior by helping you pick a course of action.
  • Exploratory versus reporting metrics: Exploratory metrics are speculative and try to find unknown insights that give you the upper hand, while reporting metrics keep you abreast of normal, managerial, day-to-day operations.
  • Leading versus lagging metrics: Leading metrics give you a predictive understanding of the future; lagging metrics explain the past. Leading metrics are better because you still have time to act on them; the horse hasn’t left the barn yet.
  • Correlated versus causal metrics: If two metrics change together, they’re correlated, but if one metric causes another metric to change, they’re causal. If you find a causal relationship between something you want (like revenue) and something you can control (like which ad you show), then you can change the future.

Usability Metrics

When deciding on the most appropriate metrics, two main aspects of the user experience to consider are performance and satisfaction (Tullis, T., & Albert, W., Measuring the user experience. 2013).

Performance Metrics

This consists of objective measures of behavior, such as error rates, time, and counts of observed behavior elements. This type of data comes from observation of either the live usability test or a review of the video recording after the test has been completed. The number of errors made on the way to completing a task is an example of a performance measure (Rubin, J., & Chisnell, D., Handbook of usability testing: How to plan, design, and conduct effective tests, 2011).

There are five basic types of performance metrics (Tullis, T., & Albert, W., Measuring the user experience. 2013):

  1. Task success is perhaps the most widely used performance metric. It measures how effectively users are able to complete a given set of tasks.
  2. Time-on-task is a common performance metric that measures how much time is required to complete a task.
  3. Errors reflect the mistakes made during a task. Errors can be useful in pointing out particularly confusing or misleading parts of an interface.
  4. Efficiency can be assessed by examining the amount of effort users spend to complete a task, measured by the number of steps or actions required to complete a task, or by the ratio of task success to the average time per task.
  5. Learnability is a way to measure how performance changes over time.
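
As a rough sketch of how these five measures could be computed, consider a simple usability-test log. The data shape and helper names below are my own illustration, not from Tullis & Albert:

```python
# Hypothetical usability-test log: one record per participant attempt at a task.
# All field names and values are invented for illustration.
attempts = [
    {"user": "p1", "success": True,  "seconds": 94,  "errors": 1, "steps": 12},
    {"user": "p2", "success": True,  "seconds": 61,  "errors": 0, "steps": 9},
    {"user": "p3", "success": False, "seconds": 180, "errors": 4, "steps": 21},
]

def task_success_rate(attempts):
    # Share of attempts that completed the task.
    return sum(a["success"] for a in attempts) / len(attempts)

def mean_time_on_task(attempts):
    return sum(a["seconds"] for a in attempts) / len(attempts)

def mean_errors(attempts):
    return sum(a["errors"] for a in attempts) / len(attempts)

def efficiency(attempts):
    # One common definition: ratio of task success to average time per task.
    return task_success_rate(attempts) / mean_time_on_task(attempts)
```

Learnability would then come from repeating the same computations across sessions with the same participants and watching how the numbers change over time.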

Satisfaction Metrics

Satisfaction is all about what users say or think about their interaction with the product, traditionally captured during post-study surveys. Users might report what was easy to use, what was confusing, or that it exceeded their expectations. Users might have opinions about the product being visually appealing or untrustworthy (Tullis, T., & Albert, W., Measuring the user experience, 2013).

Do performance and satisfaction always correlate?

Perhaps surprisingly, performance and satisfaction don’t always go hand-in-hand.

We’ve seen many instances of a user struggling to perform key tasks with an application and then giving it a glowing satisfaction rating. Conversely, we’ve seen users give poor satisfaction ratings to an application that worked perfectly.

Tullis, T., & Albert, W., “Planning a Usability Study” in Measuring the user experience (2013)

So it is important that you look at both performance and satisfaction metrics to get an accurate overall picture of the user experience (Tullis, T., & Albert, W., Measuring the user experience. 2013).

I’ll cover satisfaction metrics (also known as preference data) extensively later in this post.

Product Metrics

Product metrics tell you how healthy your product is, and, ultimately, your business, given that a healthy product contributes to the overall health of the business. They are the lifeblood of every product manager. Keeping a pulse on your product is crucial for knowing when you should act and where. This is how we set direction (Perri, M., Escaping the build trap, 2019).

In a start-up, you don’t always know which metrics are key, because you’re not entirely sure what business you’re in. You’re frequently changing the activity you analyze. You’re still trying to find the right product or the right audience. In a start-up, the purpose of analytics is to find your way to the right product and market before the money runs out (Croll, A., & Yoskovitz, B. Lean Analytics, 2013).

But it’s easy to become stuck measuring the wrong things. Frequently, teams turn to measure what we call vanity metrics. This concept, introduced in Lean Startup, is about goals that look shiny and impressive because they always get bigger. People are excited to share how many users are on their product, how many daily page views they have, or how many logins their system has (Perri, M., Escaping the build trap, 2019).

Although vanity numbers may make you look great to investors, they do not help product teams or the business make decisions. They do not cause you to change your behavior or priorities.

Perri, M., Escaping the build trap (2019).

In the enterprise, the six strategic questions (as per the Six Strategic Questions illustration above) become even more critical to align on so that we can pick the right metrics to monitor whether we are delivering on our Value Proposition:

  • What are our aspirations? To develop a great strategy, we first need to clarify the purpose of our enterprise, our mission, or our winning aspirations. The term “winning” means different things to different people, so the first step in developing a great strategy is to specify exactly what winning will look like for us.
  • What are our challenges? Strategy implies the need for change, a desire to move from point A to point B. What are the hurdles to doing so? What opposing forces must you overcome to reach the desired outcome? What problems are you solving? Focus on customers and users, but you may also want to list internal challenges here.
  • What will we focus on? Once we’ve stated our aspirations (“what winning will look like”), we then need to identify a playing field where we can realize them. No company can be all things to all people and win, so where-to-play choices – which markets, which customer segments, which channels, which industries, etc. – narrow our focus.

What makes a Good Metric?

Here are some rules of thumb for what makes a good metric — a number that will drive the changes you’re looking for (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):

  • A good metric is comparative. Being able to compare a metric to other time periods, groups of users, or competitors helps you understand which way things are moving. “Increased conversion from last week” is more meaningful than “2% conversion.”
  • A good metric is understandable. If people can’t remember it and discuss it, it’s much harder to turn a change in the data into a change in the culture.
  • A good metric is a ratio or a rate. Accountants and financial analysts have several ratios they look at to understand, at a glance, the fundamental health of a company. You need some, too.
  • A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the metric?
    • “Accounting” metrics like daily sales revenue, when entered into your spreadsheet, need to make your predictions more accurate. These metrics form the basis of Lean Startup’s innovation accounting, showing you how close you are to an ideal model and whether your actual results are converging on your business plan.
    • “Experimental” metrics (like the results of a test) help you optimize the product, pricing, or market. Changes in these metrics will significantly change your behavior.

There are several reasons ratios tend to be the best metrics (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):

  • Ratios are easier to act on. Think about driving a car. Distance traveled is informational. But speed–distance per hour–is something you can act on, because it tells you about your current state, and whether you need to go faster or slower to get to your destination on time.
  • Ratios are inherently comparative. If you compare a daily metric to the same metric over a month, you’ll see whether you’re looking at a sudden spike or a long-term trend. In a car, speed is one metric, but speed right now over average speed this hour shows you a lot about whether you’re accelerating or slowing down.
  • Ratios are also good for comparing factors that are somehow opposed, or for which there’s an inherent tension. In a car, this might be the distance covered divided by traffic tickets. The faster you drive, the more distance you cover–but the more tickets you get. This ratio might suggest whether or not you should be breaking the speed limit.
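
As a minimal illustration of a comparative ratio metric, here is a week-over-week conversion rate; all the numbers are invented:

```python
# Hypothetical weekly funnel counts; all numbers are invented.
last_week = {"visitors": 4200, "signups": 84}
this_week = {"visitors": 5100, "signups": 127}

def conversion_rate(week):
    # A ratio metric: signups per visitor.
    return week["signups"] / week["visitors"]

# The comparison is what makes the number actionable: a week-over-week
# change says more than the raw rate on its own.
change = conversion_rate(this_week) - conversion_rate(last_week)
print(f"conversion {conversion_rate(this_week):.2%} ({change:+.2%} vs last week)")
```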

Leading versus Lagging Metrics

Both leading and lagging metrics are useful, but they serve different purposes (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):

  • A leading metric (sometimes called a leading indicator) tries to predict the future. For example, the current number of prospects in your sales funnel gives you a sense of how many new customers you’ll acquire in the future. If the current number of prospects is very small, you’re not likely to add many new customers. You can increase the number of prospects and expect an increase in new customers.
  • A lagging metric, such as churn (which is the number of customers who leave in a given time period) gives you an indication that there’s a problem–but by the time you’re able to collect the data and identify the problem, it’s too late. The customers who churned out aren’t coming back. That doesn’t mean you can’t act on a lagging metric (i.e., work to improve churn and then measure it again), but it’s akin to closing the barn door after the horses have left. New horses won’t leave, but you’ve already lost a few.
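
The distinction can be made concrete with two simple rates. The cohort numbers below are invented, and "activation" stands in for whichever leading signal predicts retention in your product:

```python
# Invented monthly cohort numbers for illustration.
customers_start = 1000
customers_lost = 50    # churned during the month
new_signups = 400
activated = 220        # new signups that completed a key first action

# Lagging: churn describes what already happened; those customers are gone.
churn_rate = customers_lost / customers_start      # 0.05

# Leading: activation hints at future retention while there is
# still time to act on it.
activation_rate = activated / new_signups          # 0.55
```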

Be aware that indicators only make sense in the context of when they are captured. For example, retention is a lagging indicator, which is impossible to act on immediately: it will be months before you have solid data to show that people stayed with you. That is why we also need to measure leading indicators like activation, happiness, and engagement. Leading indicators tell us whether we’re on our way to achieving lagging indicators like retention. To determine the leading indicators for retention, you can qualify what keeps people retained, for example, happiness and usage of the product. The success metrics we set around options are leading indicators of the outcomes we expect from our initiatives, because options are strategies on a shorter time scale (Perri, M., Escaping the build trap, 2019).

For example, if I tell my team that our goal is to generate more customers for an online booking service this year than we did last year, there’s no way for us to know if that happened until after the fact. In other words, it’s too late then! The key is figuring out what data point(s) will help you stay on track throughout the year so that when December comes around and we need to look back at our progress so far—and make decisions based on whether or not things have gone according to plan—we’ll have data points available right when we need them most, e.g., while making sales forecasts (Gadvi, V., How to identify your North Star Metric, 2022).

In some cases, a lagging metric for one group within a company is a leading metric for another. For example (Croll, A., & Yoskovitz, B., Lean Analytics, 2013):

  • The number of quarterly bookings is a lagging metric for salespeople (the contracts are signed already)
  • For the finance department (that’s focused on collecting payment), quarterly booking is a leading indicator of expected revenue (since the revenue hasn’t yet been realized).

Ultimately, you need to decide whether the thing you’re tracking helps you make better decisions sooner. Lagging and leading metrics can both be actionable, but leading indicators show you what will happen, reducing your cycle time and making you leaner.

Croll, A., & Yoskovitz, B. Lean Analytics (2013)

As a product manager in a large organization, it can be difficult to impact a lagging indicator (e.g., a North Star metric) directly. In these scenarios, you’ll have to identify leading indicators that you can influence at the product/feature level and come up with a plan to drive those numbers, eventually moving the lagging indicator in the right direction.

Feature / Product: Recommendation Platform
Intermediate Metrics / Leading Indicators:
  • % of customers who added the recommended products to their basket
  • % of customers who bought the recommended products
  • % of customers who clicked on the recommended products
  • % of customers who visited the site based on emails that showed the recommended products
North Star Metric: Average Order Value
Examples of intermediate metrics or leading indicators for a Recommendation Platform that contributes to Average Order Value (Gadvi, V., How to identify your North Star Metric, 2022)

Measuring the metrics at your strategic options level helps to prevent surprises when the cold, hard facts come in later at the strategic initiative level.

Perri, M., Escaping the build trap (2019)

Now let’s look at some popular metrics that are quickly becoming industry standards for monitoring product traction.

Pirate Metrics (a.k.a. AARRR!)

Pirate Metrics—a term coined by venture capitalist Dave McClure—gets its name from the acronym for five distinct elements of building a successful business. McClure categorizes the metrics a startup needs to watch into acquisition, activation, retention, revenue, and referral—AARRR (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Pirate Metrics or AARRR!: acquisition, activation, retention, revenue, and referral
Pirate Metrics or AARRR! in Lean Analytics (Croll, A., & Yoskovitz, B., 2013)

McClure recommends tracking two or three key metrics for each of the five elements of his framework. That is a good idea because your conversion funnel isn’t really just one overall metric; you can track more detailed metrics, making a distinction between the macro-metrics and the micro-metrics that relate to them (Olsen, D., The lean product playbook, 2015).
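
A sketch of that distinction for the AARRR funnel: conversion between adjacent stages gives the micro-metrics, while end-to-end conversion is the macro-metric. The stage names follow McClure, but the counts are invented:

```python
# Invented monthly funnel counts; stage names follow McClure's AARRR.
funnel = {
    "acquisition": 10000,  # visited the site
    "activation": 3000,    # created an account
    "retention": 1200,     # came back within 30 days
    "revenue": 300,        # made a purchase
    "referral": 60,        # invited someone else
}

stages = list(funnel)
# Micro-metrics: conversion between adjacent stages.
for prev, cur in zip(stages, stages[1:]):
    print(f"{prev} -> {cur}: {funnel[cur] / funnel[prev]:.0%}")

# Macro-metric: end-to-end conversion from visitor to referrer.
print(f"overall: {funnel['referral'] / funnel['acquisition']:.2%}")
```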

HEART Metrics

What the research team from Google noted was that while small-scale frameworks were commonplace, measuring the experience on a large scale via automated means had no framework in place. Thus the HEART framework is specifically targeted at that kind of measurement. However, the principles are equally useful at a small scale, though the methodologies used to derive measurements at a smaller scale are likely to be substantially different (Rodden, K., Hutchinson, H., & Fu, X., Measuring the user experience on a large scale: User-centered metrics for web applications, 2010).

There are five metrics used in the HEART framework:

  • Happiness
  • Engagement
  • Adoption
  • Retention
  • Task Success
“Google HEART metrics” in What Makes a Good UX/UI Design? 

HEART can be seen as a manifestation of the Technology Acceptance Model (TAM)—after all, both include Adoption in their names. The TAM is itself a manifestation of the Theory of Reasoned Action. The TRA is a model that predicts behavior from attitudes. The TAM suggests that people will adopt and continue to use technology (the EAR of the HEART) based on the perception of how easy it is to use (H), how easy it actually is to use (T), and whether it’s perceived as useful (H). The SUS, SEQ, and the ease components of the UMUX-Lite and SUPR-Q are all great examples of measuring perceived ease, and bringing the Happiness to the HEART model (Sauro, J., Should you love the HEART framework?, 2019):

Linking the HEART framework with other existing models (TAM, TRA, and metrics that link them) in Should you love the HEART framework? (Sauro, J., 2019)
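
Rodden et al. operationalize HEART through a goals-signals-metrics process. As a sketch of what that mapping might look like in practice (the structure follows their paper, but every concrete entry below is an invented example):

```python
# HEART dimension -> (signal to instrument, metric to report).
# The mapping structure follows Rodden et al.; all entries are invented examples.
heart = {
    "Happiness":    {"signal": "survey responses",  "metric": "average satisfaction score"},
    "Engagement":   {"signal": "visits per user",   "metric": "sessions per week per user"},
    "Adoption":     {"signal": "new accounts",      "metric": "% of signups completing setup"},
    "Retention":    {"signal": "repeat usage",      "metric": "% of users active after 30 days"},
    "Task Success": {"signal": "task completions",  "metric": "checkout completion rate"},
}

for dimension, row in heart.items():
    print(f"{dimension}: instrument '{row['signal']}', report '{row['metric']}'")
```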

When you’re deciding which traction metrics the product team needs to track, here are a few things to keep in mind (Pichler, R., Strategize: Product strategy and product roadmap practices for the digital age, 2016):

  • Avoid vanity metrics, which make your product look good but don’t add value.
  • Don’t measure everything that can be measured, and don’t trust blindly an analytics tool to collect the right data. Instead, use the business goals to choose a small number of metrics that truly help you understand how your product performs. Otherwise, you risk wasting time and effort analyzing data that provides little or no value.
  • Be aware that some metrics are sensitive to the product life cycle. For example, you usually cannot measure profit before your product enters the growth stage. Tracking adoption rates and referrals is very useful in the introduction and growth stages but less so in the mature and decline stages.

North Star Metrics

A north star metric is a key performance indicator (KPI) that you use to measure the progress of your business. It has one purpose: to keep you focused on what’s important. A metric shouldn’t be something obscure or abstract, like “more customers” or “higher engagement.” Those goals can be helpful and can be used as input into your north star metric, but they don’t make great KPIs themselves because they don’t provide any information about how well you’re meeting them (Gadvi, V., How to identify your North Star Metric, 2022).

A north star metric is one metric that captures the core value that your product delivers to customers, ideally in a single number. It should be specific to your business and easy to understand and measure.

Gadvi, V., How to identify your North Star Metric (2022)

Let’s look at four reasons why you should use a North Star metric, or “the One Metric That Matters” (OMTM) (Croll, A., & Yoskovitz, B., Lean Analytics, 2013):

  • It answers the most important question you have. At any given time, you’ll be trying to answer a hundred different questions and juggling a million things. You need to identify the riskiest areas of your business as quickly as possible, and that’s where the most important question lies. When you know what the right question is, you’ll know what metric to track in order to answer that question. That’s the OMTM.
  • It forces you to draw a line in the sand and have clear goals. After you’ve identified the key problem on which you want to focus, you need to set goals. You need a way of defining success.
  • It focuses the entire company. Avinash Kaushik has a name for trying to report too many things: data puking. Nobody likes puke. Use the OMTM as a way of focusing your entire company. Display your OMTM prominently through web dashboards, on TV screens, or in regular emails.
  • It inspires a culture of experimentation. By now you should appreciate the importance of experimentation. It’s critical to move through the build-measure-learn cycle as quickly and as frequently as possible. To succeed at that, you need to actively encourage experimentation. It will lead to small-f failures, but you can’t punish that. Quite the opposite: failure that comes from planned, methodical testing is simply how you learn. It moves things forward in the end. It’s how you avoid the big-F failure. Everyone in your organization should be inspired and encouraged to experiment. When everyone rallies around the One Metric That Matters and has the opportunity to experiment independently to improve it, it’s a powerful force.

Product managers working in established companies have this figured out, but if you’re a founding product manager or an entrepreneur, here’s what it means for you. The key to picking the right North Star / OMTM metric is to find the one that appropriately aligns with your business model (check the table below). So, let’s say you were the founder of an online store selling vegan products. Your North Star Metric would be Average Order Value, defined as the total amount spent per order over a specific period and calculated by dividing total revenue by the number of orders in that period (Gadvi, V., How to identify your North Star Metric, 2022):

  • User Generated Content + Ads (Facebook, Quora, Instagram, YouTube): Monthly Active Users (MAU), Time on Site (ToS)
  • Freemium (Spotify, mobile games, Tinder): Monthly Active Users (MAU), % who upgrade to paid
  • Enterprise SaaS (Slack, Asana): Monthly Active Users (MAU), % who upgrade to paid
  • Two-sided marketplace (Airbnb, Uber): monthly active riders/drivers, monthly active buyers/sellers
  • Ecommerce (Amazon, eBay, Flipkart): Average Order Value (AOV), basket size
Picking good North Star Metrics starts with finding which ones are appropriate to your business model (Gadvi, V., How to identify your North Star Metric, 2022)
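
For the vegan-store example, the AOV computation is just total revenue over order count; the order values below are invented:

```python
# Hypothetical order totals for one month; values are invented.
orders = [42.50, 18.00, 77.25, 33.10, 59.90]

# Average Order Value = total revenue / number of orders in the period.
aov = sum(orders) / len(orders)
print(f"AOV: ${aov:.2f}")  # AOV: $46.15
```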

Defining a North Star metric, and evaluating each release in relation to that metric, will help drive your business forward. In the early stages of the product lifecycle, your metric will be more qualitative, but if it’s indicative of customer value, it will tell you whether you’re headed in the right direction.

Garbugli, É.  Solving Product, (2020)

Your north star metric should also be accessible for all your team members to understand and communicate, even if they don’t work in data science or analytics. Having a clear north star metric helps everyone in the organization stay aligned around what matters most when making decisions about new features or products — which will ultimately make them more successful by bringing them closer to their users’ needs (Gadvi, V., How to identify your North Star Metric, 2022).

Each strategy we had at Netflix (from our personalization strategy to our theory that a simpler experience would improve retention) had a very specific metric that helped us to evaluate if the strategy was valid or not. If the strategy moved the metric, we knew we were on the right path. If we failed to move the metric, we moved on to the next idea. Identifying these metrics took a lot of the politics and ambiguity out of which strategies were succeeding or not.

Gibson Biddle, Netflix Former VP of Product in Solving Product (Garbugli, É., 2020)
Learn more about the relationship between product metrics and product outcomes and managing by outcomes (Photo by Sora Shimazaki on Pexels.com)

Design Metrics for Quantifying and Qualifying Strategy

There is an old saying that what gets measured gets done, and there is more than a little truth to it. If aspirations are to be achieved, capabilities developed, and management systems created, progress needs to be measured (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).

Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.

“Manage What Matters” in Playing to Win: How Strategy Really Works (Lafley, A.G., Martin, R. L., 2013)

I will refrain from proposing a single metric for quantifying and qualifying design for a few reasons:

  • Different organizations have specific strategic choices about winning that uniquely position them in their corresponding industries: these metrics should take into consideration both the goals of users and what the business is trying to learn from the study, with usability studies designed accordingly.
  • Different organizations are at different levels of design maturity: if you’ve never done any kind of usability study, it’s hard not only to build the capability to run such studies, but also to figure out how to feed the findings back into product decisions.
  • Some of these metrics are discovered too late: since some of these metrics (e.g.: performance) are collected either during usability studies or after the product or service is released, it means that — by the time you collect them — a lot of product decisions have already been made. Some of these decisions could be quite expensive to reverse or pivot at that point, so it might be too late for quantifying and qualifying the success of a strategy.
  • Beware of how you measure: quantitative metrics are good for explaining the ‘What’ and ‘How many’ of a given hypothesis; the ‘Why’ is usually better captured through qualitative research methods.
  • The world is different now: some of the signals and indicators that worked for measuring success may not work for new products or services you are trying to create.

Never assume that the metrics and standards used to evaluate the existing business have relevance for the innovation initiative.

Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge (2010)

So, before we talk about what a set of tools that empowers both intuition and creativity would look like, we need to talk about quantitative data and qualitative data.

Quantitative data is easy to understand. It’s the numbers we track and measure–for example, sports scores and movie ratings. As soon as something is ranked, counted, or put on a scale, it’s quantified. Quantitative data is nice and scientific, and (assuming you do the math right) you can aggregate it, extrapolate it, and put it into a spreadsheet. But it’s seldom enough to get a business started. You can’t walk up to people, ask them what problems they’re facing, and get a quantitative answer. For that, you need qualitative input (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Qualitative data is messy, subjective, and imprecise. It’s the stuff of interviews and debates. It’s hard to quantify. You can’t measure qualitative data easily. If quantitative data answers “what” and “how much,” qualitative data answers “why.” Quantitative data abhors emotion; qualitative data marinates in it (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Quantitative research can tell you how many customers are doing (or not doing) something. But it won’t tell you why the customers are doing it (or not doing it).

Olsen, D. The lean product playbook, 2015

Initially, you’re looking for qualitative data. You’re not measuring results numerically. Instead, you’re speaking to people–specifically, to people you think are potential customers in the right target market. You’re exploring. You’re getting out of the building (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Qualitative metrics are unstructured, anecdotal, revealing, and hard to aggregate; quantitative metrics involve numbers and statistics and provide hard numbers but less insight.

Croll, A., & Yoskovitz, B. Lean Analytics (2013)

Collecting good qualitative data takes preparation. You need to ask specific questions without leading potential customers or skewing their answers. You have to avoid letting your enthusiasm and reality distortion rub off on your interview subjects. Unprepared interviews yield misleading or meaningless results (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Quantifying and Qualifying the value and performance of design does not need to be complex or foreboding. There is a case for intuition, a case for qualitative user research, a case for quantitative research, and a case for synthesis. And there is even room for imponderables because some things are simply beyond definition or measurement (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy, 2008).

So, what would a set of tools look like that empowers intuition and creativity but also helps us find objective ways to value design solutions?

Quantifying and Qualifying Value

When customers evaluate a product or service, they weigh its perceived value against the asking price. Marketers have generally focused much of their time and energy on managing the price side of that equation since raising prices can immediately boost profits. But that’s the easy part: Pricing usually consists of managing a relatively small set of numbers, and pricing analytics and tactics are highly evolved. What consumers truly value, however, can be difficult to pin down and psychologically complicated (Almquist, E., Senior, J., & Bloch, N., The Elements of Value, 2016).

The Elements of Value pyramid: at the lowest level, functional; one level higher, emotional; one level higher, life changing; at the uppermost level, social impact.
“30 Elements of Value” in The Elements of Value (Almquist, E., Senior, J., & Bloch, N., 2016)

How can leadership teams actively manage value or devise ways to deliver more of it, whether functional (saving time, reducing cost) or emotional (reducing anxiety, providing entertainment)? Discrete choice analysis—which simulates demand for different combinations of product features, pricing, and other components—and similar research techniques are powerful and useful tools, but they are designed to test consumer reactions to preconceived concepts of value—the concepts that managers are accustomed to judging (Almquist, E., Senior, J., & Bloch, N., The Elements of Value, 2016).

Quantifying and Qualifying Value, Satisfaction and Desirability

When product managers, designers, and strategists are crafting their strategy or working in the discovery phase, the kinds of user and customer insights they are looking for are really hard to acquire through quantitative metrics, either because we cannot derive insights from the existing product analytics, or because we are creating something new (so there are no numbers to refer to). Most of such insights (especially desirability and satisfaction) come from preference data.

Preference data consists of the more subjective data that measures a participant’s feelings or opinions of the product.

Rubin, J., & Chisnell, D., Handbook of usability testing: How to plan, design, and conduct effective tests (2011)

Just because preference data is more subjective, it doesn’t mean it is less quantifiable: although design and several usability activities are certainly qualitative, perceptions of good and bad designs can easily be quantified through metrics like perceived satisfaction, likelihood to recommend, etc. (Sauro, J., & Lewis, J. R., Quantifying the user experience: Practical statistics for user research. 2016).

Preference Data is typically collected via written, oral, or even online questionnaires or through the debriefing session of a test. A rating scale that measures how a participant feels about the product is an example of a preference measure (Rubin, J., & Chisnell, D., Handbook of usability testing, 2011).

Now let’s look at some examples of preference data that design strategists can collect to inform strategic decisions.

Value Opportunity Analysis (VOA)

Value Opportunity Analysis (VOA) is an evaluative method that creates a measurable way to predict the success or failure of a product by focusing on the user’s point of view. The Value Opportunity Analysis (VOA) can happen at two stages throughout the design process (Hanington, B., & Martin, B., Universal methods of design, 2012):

  • VOA is typically used in the concept generation stage when prototyping is still low fidelity or even on paper.
  • It is also used at the launch stage, tested for quality assurance to determine market readiness. An example could be testing a current design prior to investing in a redesign.
Value Opportunity Analysis in Universal methods of design (Hanington, B., & Martin, B., 2012)

There are seven value opportunities (Hanington, B., & Martin, B., Universal methods of design, 2012):

  1. Emotion: Adventure, Independence, Security, Sensuality, Confidence, Power
  2. Aesthetics: Visual, Auditory, Tactile, Olfactory, Taste
  3. Identity: Point in time, Sense of Place, Personality
  4. Impact: Social, Environmental
  5. Ergonomics: Comfort, Safety, Ease of use
  6. Core Technology: Reliable, Enabling
  7. Quality: Craftsmanship, Durability
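As a loose illustration (the 0–2 rating scale and the gap calculation are my own assumptions, not part of the published method), a team could compare how a concept currently rates on each value-opportunity attribute against the level its strategy targets:

```python
def voa_gaps(current, target):
    """Gap between target and current ratings per value-opportunity attribute.

    current/target: dicts mapping attribute name -> rating, on an assumed
    scale of 0 (low), 1 (medium), 2 (high). Attributes missing from
    `current` are treated as 0. Larger positive gaps flag attributes where
    the concept underdelivers against the strategy.
    """
    return {attr: target[attr] - current.get(attr, 0) for attr in target}
```

Sorting the result by gap size gives a quick, discussable ranking of where a redesign should focus.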
Usefulness, Satisfaction, and Ease of Use (USE)

The Usefulness, Satisfaction, and Ease of Use Questionnaire (USE, Lund, 2001) measures the subjective usability of a product or service. It is a 30-item survey that examines four dimensions of usability (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016):

  • Usefulness
  • Ease of use
  • Ease of learning
  • Satisfaction.
Example of USE questionnaire (from Journal of Otolaryngology)
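Lund’s paper does not prescribe a single scoring formula; a common, simple convention (assumed here, not mandated by the source) is to average the 7-point agreement ratings within each of the four dimensions:

```python
def use_subscale_scores(responses):
    """Average each USE dimension's items (7-point agreement scale).

    responses: dict mapping dimension name -> list of item ratings (1-7).
    Returns the mean rating per dimension; how the dimensions are weighted
    or combined is left to the study design.
    """
    return {dim: sum(items) / len(items) for dim, items in responses.items()}
```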
American Customer Satisfaction Index (ACSI)

The satisfaction level indicator relies on three critical 10-point scale questions to measure customer satisfaction. These American Customer Satisfaction Index (ACSI) questions are categorized into (Fornell et al., The American Customer Satisfaction Index, 1996):

  • Satisfaction
  • Expectation Levels
  • Performance
Example of ACSI Questionnaire in ACSI (American Customer Satisfaction Index) Score & Its Calculation (Verint Systems Inc, 2021)

While intended as a macroeconomic measure of U.S. consumers in general, many corporations have used the American Customer Satisfaction Index (ACSI) for quantifying and qualifying the satisfaction of their own customers.
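The official ACSI combines the three questions with proprietary weights, so the sketch below is only an illustrative, unweighted approximation of how a 0–100 score can be derived from 1–10 ratings:

```python
def acsi_score(satisfaction, expectancy, performance):
    """Approximate ACSI-style score on a 0-100 scale.

    Each argument is a 1-10 rating (or a mean of such ratings). The official
    ACSI applies proprietary question weights; this unweighted version is an
    illustrative approximation only.
    """
    mean = (satisfaction + expectancy + performance) / 3
    # Rescale the 1-10 mean to 0-100.
    return (mean - 1) / 9 * 100
```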

System Usability Scale (SUS)

The System Usability Scale (SUS) consists of ten statements to which participants rate their level of agreement. No attempt is made to assess different attributes of the system (e.g. usability, usefulness, etc.): the intent is to look at the combined rating (Tullis, T., & Albert, W., Measuring the user experience. 2013).

The System Usability Scale (SUS) provides a “quick and dirty” yet reliable tool for measuring usability. It consists of a 10-item questionnaire with five response options, from Strongly agree to Strongly disagree.
System Usability Scale (SUS) questions in How Low Can You Go? Is the System Usability Scale Range Restricted? (Kortum, P., & Acemyan, C. Z., 2013)
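SUS has a standard scoring procedure (from Brooke’s original formulation): odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the 0–40 sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch:

```python
def sus_score(responses):
    """Standard SUS scoring: returns a 0-100 score.

    responses: the ten item ratings on a 1-5 agreement scale, in
    questionnaire order. Odd-numbered items are positively worded
    (contribution = rating - 1); even-numbered items are negatively
    worded (contribution = 5 - rating).
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```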
Usability Metric for User Experience (UMUX)

In response to the need for a shorter questionnaire, Finstad introduced the Usability Metric for User Experience (UMUX) in 2010. It’s intended to be similar to the SUS but is shorter and targeted toward the ISO 9241 definition of usability (effectiveness, efficiency, and satisfaction). It contains two positive and two negative items with a 7-point response scale. The four items are (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016):

  • [This system’s] capabilities meet my requirements.
  • Using [this system] is a frustrating experience.
  • [This system] is easy to use.
  • I have to spend too much time correcting things with [this system].
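Per Sauro and Lewis, UMUX items are scored much like SUS items and rescaled to 0–100: positive items (1 and 3) contribute the rating minus 1, negative items (2 and 4) contribute 7 minus the rating, and the sum (maximum 24) is divided by 24. A minimal sketch:

```python
def umux_score(item1, item2, item3, item4):
    """UMUX score on a 0-100 scale.

    Arguments are the 1-7 ratings for the four items in the order listed
    above: items 1 and 3 are positive (contribution = rating - 1), items
    2 and 4 are negative (contribution = 7 - rating).
    """
    total = (item1 - 1) + (7 - item2) + (item3 - 1) + (7 - item4)
    # Rescale the 0-24 sum to 0-100.
    return total / 24 * 100
```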
UMUX-Lite

To improve the UMUX, Lewis et al. (“Measuring perceived usability: The SUS, UMUX-LITE, and AltUsability” in International Journal of Human-Computer Interaction, 31(8), 496–505, 2015) proposed a shorter, all-positive questionnaire called the UMUX-Lite, using the same 7-point scale with the following two items (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016):

  • [This system’s] capabilities meet my requirements.
  • [This system] is easy to use.
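Since both UMUX-Lite items are positively worded, scoring reduces to rescaling the summed ratings to 0–100 (Lewis et al. also report a regression adjustment that brings scores closer to SUS, omitted here):

```python
def umux_lite_score(item1, item2):
    """UMUX-Lite score on a 0-100 scale.

    Both items are positively worded and rated 1-7; each contributes its
    rating minus 1, and the 0-12 sum is rescaled to 0-100.
    """
    return ((item1 - 1) + (item2 - 1)) / 12 * 100
```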
Net Promoter Score (NPS)

Within a broad field of methods for measuring satisfaction, one popular framework is the net promoter score. The simplicity of the method is its great advantage—customers are simply asked “How likely is it that you would recommend our company/product to a friend or colleague?” This type of survey is relatively simple to conduct, and it is constant across companies and industries. This makes it easy for companies to compare their performance with the competition, and good net promoter scores have been documented to relate directly to business growth (Polaine, A., Løvlie, L., & Reason, B., Service design, 2013).

I mention NPS because it’s a well-known metric, but I — and other design leaders — have reservations about it: there have been challenges to the claim of a strong relationship between NPS and company growth. Also, there is no well-defined method for computing confidence intervals around NPS (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016).
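For reference, the standard NPS calculation classifies 9–10 responses as promoters and 0–6 as detractors, and reports the difference between their percentages:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    ratings: 0-10 answers to the likelihood-to-recommend question.
    Returns a score between -100 and +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)
```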

Watch Bill Albert talk about how NPS and CSAT scores are not enough to measure the quality of the experience in Seeing the Big Picture: The Development of an Experience Scorecard
Desirability Testing

A Desirability Test is great for gauging first-impression emotional responses to products and services (Hanington, B., & Martin, B., Universal methods of design, 2012):

  • Explores the affective responses that different designs elicit from people based on first impressions.
  • Using index cards with positive, neutral, and negative adjectives written on them, participants pick those that describe how they feel about a design or a prototype.
  • Can be conducted using low-fidelity prototypes as a baseline before the team embarks on a redesign.

Participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list.
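The analysis of a desirability test is essentially a tally: count how often each adjective was selected across participants, then examine the positive/neutral/negative balance per design. A minimal sketch (the function name is my own):

```python
from collections import Counter

def tally_adjectives(selections):
    """Tally adjective picks across participants for one design.

    selections: a list containing one list of chosen adjectives per
    participant. Frequently chosen adjectives, and the balance of positive
    versus negative ones, indicate the first-impression response the
    design elicits.
    """
    return Counter(adj for picks in selections for adj in picks)
```

Running the tally separately for each visual-design alternative lets the team compare which emotional responses each one triggers.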

Experience Sampling

Experience Sampling is a strategic research technique that answers a high-level business (or roadmap) question rather than evaluating a design or product that already exists. Experience Sampling is good for uncovering unmet needs, which will lead to generating great ideas for new products and for validating (or invalidating) ideas you already have (Sharon, T., Validating Product Ideas, 2016).

Learn more about Experience Sampling in Don’t Listen to Users: Sample their experience with Tomer Sharon

In an experience sampling study, research participants are interrupted several times a day or week to note their experience in real time. The key is to ask the same question over and over again at random times during the day or at work. This cadence and repetition strengthen your finding’s validity and allow you to identify patterns (Sharon, T., Validating Product Ideas, 2016).

Jobs To Be Done (JTBD) and Outcome-Driven Innovation

Outcome-Driven Innovation (ODI) is a strategy and innovation process built around the theory that people buy products and services to get jobs done. It links a company’s value creation activities to customer-defined metrics. Ulwick found that previous innovation practices were ineffective because they were incomplete, overlapping, or unnecessary.

Outcome-Driven Innovation® (ODI) is a strategy and innovation process that enables a company to create and market winning product and service offerings with a success rate that is 5-times the industry average

Ulwick, A.,  What customers want: Using outcome-driven innovation to create breakthrough products and services (2005)

Clayton Christensen credits Ulwick and Richard Pedi of Gage Foods with the way of thinking about market structure used in the chapter “What Products Will Customers Want to Buy?” in his Innovator’s Solution and called “jobs to be done” or “outcomes that customers are seeking”.

UX Matrix: OPPORTUNITY SCORES

Ulwick’s “opportunity algorithm” measures and ranks innovation opportunities. Standard gap analysis looks at the simple difference between importance and satisfaction metrics; Ulwick’s formula gives twice as much weight to importance as to satisfaction, where importance and satisfaction are the proportion of high survey responses.

You’re probably asking yourself, “Where do these values come from?” That’s where user research comes in handy: once you’ve got the list of use cases, you go back to your users and probe how important each use case is, and how satisfied they are with the product with regard to each use case.

Once you’ve obtained the opportunity scores for each use case, what comes next? There are two complementary pieces of information that the scores reveal: where the market is underserved and where it is overserved. We can use this information to make some important targeting and resource-related decisions.
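Ulwick’s formula, as described above, can be written as importance + max(importance − satisfaction, 0), with importance and satisfaction expressed as top-box response proportions on a 0–10 scale:

```python
def opportunity_score(importance, satisfaction):
    """Ulwick's opportunity algorithm.

    importance/satisfaction: proportion of high ("top-box") survey
    responses, expressed on a 0-10 scale. Importance is effectively
    counted twice and satisfaction subtracted once, but the score never
    drops below the importance rating itself, so overserved outcomes
    aren't penalized further.
    """
    return importance + max(importance - satisfaction, 0)
```

Underserved outcomes (high importance, low satisfaction) yield the highest scores; overserved ones collapse to their importance rating.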

Plotting the opportunity scores for each Job-to-be-Done maps where the market is underserved and where it is overserved (Ulwick, A., What customers want: Using outcome-driven innovation to create breakthrough products and services, 2005)

Almost as important as knowing where the market is underserved is knowing where it is overserved. Jobs and outcomes that are unimportant or already satisfied represent little opportunity for improvement and consequently should not receive any resource allocation. In most markets, it is not uncommon to find a number of outcomes that are overserved, and companies that are nevertheless continuing to allocate development resources to them (Ulwick, A. W., What customers want, 2005).

Bringing Business Impact and User Needs together with Jobs-to-be-done (JTBD)

Learn how Jobs to be Done (JTBD) work as a great “exchange” currency to facilitate strategy discussions around value between designers, business stakeholders, and technology people (Photo by Blue Bird on Pexels.com)

The Importance versus Satisfaction Framework

Similar to Outcome-Driven Innovation, this framework proposes quantifying and qualifying the customer needs that any particular feature of the product is going to address (Olsen, D. The lean product playbook, 2015):

  • How important is that?
  • Then how satisfied are people with the current alternatives that are out there?
Dan Olsen’s framework quantifies the customer need a feature addresses along two axes: how important is that need, and how satisfied are people with the current alternatives? You want to build things that address high-importance needs with low satisfaction.
Importance versus Satisfaction Quadrants in The lean product playbook (Olsen, D., 2015)

What I like about Olsen’s approach to assessing opportunities is that he created a few variations of opportunity scores:

  • Customer Value Delivered = Importance x Satisfaction
  • Opportunity to Add Value = Importance x (1 – Satisfaction)
  • Opportunity = Importance – Current Value Delivered
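Olsen’s variations can be sketched directly, assuming importance and satisfaction are expressed as fractions between 0 and 1 (an assumption on my part; percentages work the same way):

```python
def customer_value_delivered(importance, satisfaction):
    # How much of the (weighted) need the current solution already meets.
    return importance * satisfaction

def opportunity_to_add_value(importance, satisfaction):
    # Room left to improve: the unsatisfied share of an important need.
    return importance * (1 - satisfaction)

def opportunity(importance, satisfaction):
    # Importance minus current value delivered; with these inputs it equals
    # opportunity_to_add_value, which is why Olsen treats them as variations.
    return importance - customer_value_delivered(importance, satisfaction)
```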
Kano Model

The Kano Model, developed by Dr. Noriaki Kano, is a way of classifying customer expectations into three categories: expected needs, normal needs, and exciting needs. This hierarchy can be used to help with our prioritization efforts by clearly identifying the value of solutions to the needs in each category (“Kano Model” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017):

  • The customer’s expected needs are roughly equivalent to the critical path: if those needs are not met, they become dissatisfiers.
  • If you meet the expected needs, customers will start articulating normal needs, or satisfiers — things they don’t normally need in the product but will satisfy them.
  • When normal needs are largely met, then exciting needs (delighters or wows) go beyond the customers’ expectations.
“X axis: Investment; Y axis: Satisfaction” in Kano Model Analysis in Product Design

The Kano methodology was initially adopted by operations researchers, who added statistical rigor to the question pair results analysis. Product managers have leveraged aspects of the Kano approach in Quality Function Deployment (QFD). More recently, this methodology has been used by Agile teams and in market research (Moorman, J., “Leveraging the Kano Model for Optimal Results” in UX Magazine, 2012).
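A single respondent’s answers to the Kano functional/dysfunctional question pair can be classified with the standard evaluation table. The sketch below is a simplified version that maps the table’s cells onto the three categories above, plus the usual extra cells (indifferent, reverse, questionable):

```python
def kano_classify(functional, dysfunctional):
    """Classify one respondent's Kano question-pair answers.

    functional: answer to "How would you feel if the product HAD this feature?"
    dysfunctional: answer to "How would you feel if it did NOT?"
    Both answers come from the scale: "like", "must-be", "neutral",
    "live-with", "dislike".
    """
    mid = {"must-be", "neutral", "live-with"}
    if functional == "like":
        if dysfunctional == "like":
            return "questionable"   # contradictory answers
        return "normal" if dysfunctional == "dislike" else "exciting"
    if functional in mid:
        if dysfunctional == "like":
            return "reverse"        # absence is actually preferred
        return "expected" if dysfunctional == "dislike" else "indifferent"
    # functional == "dislike"
    return "questionable" if dysfunctional == "dislike" else "reverse"
```

In practice each feature is classified per respondent and the modal category across the sample is reported.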

Learn more about the Kano Method from Measuring User Delight using the Kano Methodology (Moorman, J., 2012)

What I propose below is that we need objective ways of quantifying and qualifying the value of design solutions to justify experience investments, to look at the different points in strategic planning and execution, and to identify the discussions that strategists should facilitate around what customers and users perceive as value, all while tracking and tracing the implementation of a strategy to ensure we are bringing value to both customers and the business.

Design is the activity of turning vague ideas, market insights, and evidence into concrete value propositions and solid business models. Good design involves the use of strong business model patterns to maximize returns and compete beyond product, price and technology.

Bland, D. J., & Osterwalder, A., Testing business ideas, (2020)

From that perspective, we need to find ways to:

  • Explore (and preferably test) ideas early
  • Facilitate investment discussions by objectively describing business and user value, establishing priorities
  • Assess the risk of pursuing ideas, while capturing signals that indicate if/when to pivot if an idea “doesn’t work”
  • Capture and track the progress of strategy implementation
A holistic set of quantifying and qualifying tools and frameworks should help teams with: Validating/Testing Ideas (finding objective ways to explore and preferably test ideas early); Facilitating Investment Discussions (business/user value, priorities, effort, etc.); Pivot & Risk Mitigation (assessing risk, capturing signals, knowing when to pivot); and Visibility and Traceability (capturing and tracking progress).
Instead of a single metric to measure ROI, let’s look at the different discussions that need to be facilitated while quantifying and qualifying strategy, namely: Testing Business Ideas, Facilitating Investment Discussions, Pivot and Risk Mitigation, Visibility and Traceability.

Validating and Testing Business Ideas

“What do people need?” is a critical question to ask when you build a product. Wasting your life’s savings and your investor’s money, risking your reputation, making false promises to employees and potential partners, and trashing months of work you can never get back is a shame. It’s also a shame to find out you were completely delusional when you thought that everyone needed the product you were working on (Sharon, T., Validating Product Ideas, 2016)

Don’t make the mistake of executing business ideas without evidence; test your ideas thoroughly, regardless of how great they may seem in theory.

Bland, D. J., & Osterwalder, A., Testing Business Ideas: A Field Guide for Rapid Experimentation (2019)

To test a big business idea, you break it down into smaller chunks of testable hypotheses. These hypotheses cover three types of risk (Bland, D. J., & Osterwalder, Testing Business Ideas: A Field Guide for Rapid Experimentation, 2019):

  • Desirability (do they want this?) relates to the risk that the market a business is targeting is too small; that too few customers want the value proposition; or that the company can’t reach, acquire, and retain targeted customers.
  • Feasibility (Can we do this?) relates to the risk that a business can’t manage, scale, or get access to key resources (technology, IP, brand, etc.). This isn’t just technical feasibility; we also need to look at the overall regulatory, policy, and governance constraints that would prevent you from making your solution a success.
  • Viability (Should we do this?) relates to the risk that a business cannot generate more revenue than costs (revenue stream and cost stream). While customers may want your solution (desirable) and you can build it (feasible), perhaps there’s not enough of a market for it or people won’t pay enough for it. 
A Venn diagram representing the intersection between Desirability, Viability, and Feasibility.
The Sweet Spot of Innovation in Brown, T., & Katz, B., Change By Design (2009)

Design strategists should help the team find objective ways to value design ideas, approaches, and solutions to justify the investment in them from the desirability, feasibility, and viability perspectives.

Testing Business Ideas thoroughly, regardless of how great they may seem in theory, is a way to mitigate the risks of your viability hypothesis being wrong (Photo by RF._.studio on Pexels.com)

Facilitating Investment Discussions

As I mentioned in a previous post, designers must become skilled facilitators that respond, prod, encourage, guide, coach, and teach as they guide individuals and groups to make decisions that are critical in the business world through effective processes. There are few decisions that are harder than deciding how to prioritise.

Learn more about Prioritisation in Strategy and Prioritisation (Photo by Breakingpic on Pexels.com)

I’ve seen too many teams where a lot of decisions seem to be driven by the questions “What can we implement with the least effort?” or “What are we able to implement?”, not by the question “What brings value to the user?”

From a user-centered perspective, the most crucial pivot that needs to happen in the conversation between designers and business stakeholders is the framing of value:

  • Business value
  • User value
  • Value to designers (sense of self-realization? Did I impact someone’s life in a positive way?)

The mistake I’ve seen many designers make is to look at prioritization discussions as a zero-sum game: our user-centered design toolset may have focused too much on the needs of the user, at the expense of business needs and technological constraints.

That said, there is a case to be made that designers should worry about strategy because it helps shape the decisions that not only create value for users but also value for employees.

Companies that achieve enduring financial success create substantial value for their customers, their employees, and their suppliers.

Oberholzer-Gee, F., Better, Simpler Strategy (2021)

Therefore, a strategic initiative is worthwhile only if it does one of the following (Oberholzer-Gee, F., Better, Simpler Strategy, 2021):

  • Creates value for customers by raising their willingness to pay (WTP): If companies find ways to innovate or to improve existing products, people will be willing to pay more. In many product categories, Apple gets to charge a price premium because the company raises the customers’ WTP by designing beautiful products that are easy to use, for example. WTP is the most a customer would ever be willing to pay. Think of it as the customer’s walk-away point: Charge one cent more than someone’s WTP, and that person is better off not buying. Too often, managers focus on top-line growth rather than on increasing willingness to pay. A growth-focused manager asks, “What will help me sell more?” A person concerned with WTP wants to make her customers clap and cheer. A sales-centric manager analyzes purchase decisions and hopes to sway customers, whereas a value-focused manager searches for ways to increase WTP at every stage of the customer’s journey, earning the customer’s trust and loyalty. A value-focused company convinces its customers in every interaction that it has their best interests at heart.
  • Creates value for employees by making work more appealing: When companies make work more interesting, motivating, and flexible, they are able to attract talent even if they do not offer industry-leading compensation. Paying employees more is often the right thing to do, of course. But keep in mind that more-generous compensation does not create value in and of itself; it simply shifts resources from the business to the workforce. By contrast, offering better jobs not only creates value, it also lowers the minimum compensation that you have to offer to attract talent to your business, or what we call an employee’s willingness-to-sell (WTS) wage. Offer a prospective employee even a little less than her WTS, and she will reject your job offer; she is better off staying with her current firm. As is the case with prices and WTP, value-focused organizations never confuse compensation and WTS. Value-focused businesses think holistically about the needs of their employees (or the factors that drive WTS).
  • Creates value for suppliers by reducing their operating costs: Like employees, suppliers expect a minimum level of compensation for their products. A company creates value for its suppliers by helping them raise their productivity. As suppliers’ costs go down, the lowest price they would be willing to accept for their goods—what we call their willingness-to-sell (WTS) price—falls. When Nike, for example, created a training center in Sri Lanka to teach its Asian suppliers lean manufacturing, the improved production techniques helped suppliers reap better profits, which they then shared with Nike.
Oberholzer's Value Stick
The Value Stick is an interesting tool that provides insight into where the value is in a product or service. It relates directly to Michael Porter’s Five Forces, reflecting how strong those forces are: Willingness to Pay (WTP), Price, Cost, and Willingness to Sell (WTS). The difference between Willingness to Pay (WTP) and Willingness to Sell (WTS) — the length of the stick — is the value that a firm creates (Oberholzer-Gee, F., Better, simpler strategy, 2021)

This idea is captured in a simple graph, called a value stick. WTP sits at the top and WTS at the bottom. When companies find ways to increase customer delight and increase employee satisfaction and supplier surplus (the difference between the price of goods and the lowest amount the supplier would be willing to accept for them), they expand the total amount of value created and position themselves for extraordinary financial performance. 
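The value stick arithmetic is straightforward: total value created is WTP minus WTS, and it splits into customer delight (WTP minus price), firm margin (price minus cost), and supplier surplus (cost minus WTS). A minimal sketch, assuming WTP ≥ price ≥ cost ≥ WTS:

```python
def value_stick(wtp, price, cost, wts):
    """Split the total value created along Oberholzer-Gee's value stick.

    The three slices always sum to the total value created (wtp - wts),
    which is why raising WTP or lowering WTS expands the whole stick.
    """
    return {
        "customer_delight": wtp - price,
        "firm_margin": price - cost,
        "supplier_surplus": cost - wts,
        "total_value_created": wtp - wts,
    }
```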

Organizations that exemplify value-based strategy demonstrate some key behaviors (Oberholzer-Gee, F., “Eliminate Strategic Overload” in Harvard Business Review, 2021):

  • They focus on value, not profit. Perhaps surprisingly, value-focused managers are not overly concerned with the immediate financial consequences of their decisions. They are confident that superior value creation will result in improved financial performance over time.
  • They attract the employees and customers whom they serve best. As companies find ways to move WTP or WTS, they make themselves more appealing to customers and employees who particularly like how they add value.
  • They create value for customers, employees, or suppliers (or some combination) simultaneously. Traditional thinking, informed by our early understanding of success in manufacturing, holds that costs for companies will rise if they boost consumers’ willingness to pay—that is, it takes more-costly inputs to create a better product. But value-focused organizations find ways to defy that logic.

While in the past designers would concentrate on enhancing desirability, the emerging strategic role of designers means they have to balance desirability, feasibility, and viability simultaneously. Designers need to expand their profiles and master a whole new set of strategic practices.

“Strategic Designers: Capital T-shaped professionals” in Strategic Design (Calabretta et al., 2016)

For the conversation to pivot towards a focus on value, designers will need to get better at influencing the strategy of their design projects. However, some designers lack the vocabulary, tools, and frameworks to influence it in ways that drive the user experience vision forward: for example, advocating for how we can inform the decisions that increase our customers’ Willingness to Pay (WTP) by increasing customer delight.

To understand the risk and uncertainty of your idea you need to ask: “What are all the things that need to be true for this idea to work?” This will allow you to identify all three types of hypotheses underlying a business idea: desirability, feasibility, and viability (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020)

Design strategists should help the team find objective ways to value design ideas, approaches, and solutions to justify the investment in them from the desirability, feasibility, and viability perspectives.
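One lightweight way to make such valuations explicit is a weighted scoring matrix across the three lenses. The sketch below is a minimal, hypothetical example: the idea names, lens weights, and 1–10 scores are placeholders you would calibrate with your own stakeholders, not values from any real project.

```python
# A minimal sketch of a weighted scoring matrix for comparing design ideas
# across desirability, feasibility, and viability. All names, weights, and
# scores are hypothetical placeholders.

WEIGHTS = {"desirability": 0.4, "feasibility": 0.3, "viability": 0.3}

ideas = {
    "Guided onboarding": {"desirability": 8, "feasibility": 6, "viability": 7},
    "AI search":         {"desirability": 9, "feasibility": 3, "viability": 5},
    "Bulk export":       {"desirability": 5, "feasibility": 9, "viability": 6},
}

def weighted_score(scores: dict) -> float:
    """Combine 1-10 lens scores into a single weighted value."""
    return sum(WEIGHTS[lens] * value for lens, value in scores.items())

# Rank the ideas so the investment discussion starts from shared numbers
ranked = sorted(ideas, key=lambda name: weighted_score(ideas[name]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(ideas[name]):.1f}")
```

The point of the exercise is less the final number than the conversation it forces: agreeing on the weights is itself a strategic discussion.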

Learn more about facilitating investment discussions by finding objective ways to value ideas, approaches, and solutions to justify the investment in them (Photo by Pixabay on Pexels.com)

Pivot and Risk Mitigation

In a previous post, I mentioned that — more often than not — it is not for lack of ideas that teams fail to innovate, but because of all the friction and drag created by not having a shared vision and understanding of the problems they are trying to solve. It has become a personal rallying cry for me to help teams create shared understanding.

Shared understanding is the collective knowledge of the team that builds over time as the team works together. It’s a rich understanding of the space, the product, and the customers.

“Creating Shared Understanding” in Lean UX: Applying lean principles to improve user experience, Gothelf, J., & Seiden, J. (2021)

It’s been my experience that — left to chance — it’s only natural that teams will stray from vision and goals. Helping teams paddle in the same direction requires not only good vision and goals, but also leadership, and intentional facilitation. All the collaboration that goes into creating shared understanding can help mitigate the risk of teams straying away.

That’s why it is important that designers engage with stakeholders early and often to make sure we’ve got the right framing of the problem space around the 3 vision-related questions (as per the Six Strategic Questions illustration above):

  • What are our aspirations?
  • What are our challenges?
  • What will we focus on?
Learn more about creating product vision in Strategy and The Importance of Vision (Photo by Pixabay on Pexels.com)

Having a strong shared vision and shared understanding of the problems we are trying to solve will help facilitate Pivot and Risk Mitigation discussions in the sense that — when it comes to deciding if we should Persevere, Pivot or Stop — we need to know what to pivot away from and what to pivot towards.
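One way to ground a Persevere/Pivot/Stop discussion is to agree on decision thresholds before the experiment starts, then map observed signals to a coarse recommendation. The sketch below is a hypothetical illustration of that idea; the metric, thresholds, and function names are assumptions, not a prescribed method.

```python
# A minimal sketch of a persevere/pivot/stop check: compare an observed
# signal against thresholds the team agreed on before running the
# experiment. Metric names and thresholds are hypothetical placeholders.

def recommend(observed: float, persevere_at: float, stop_below: float) -> str:
    """Map an observed signal to a coarse recommendation."""
    if observed >= persevere_at:
        return "persevere"
    if observed < stop_below:
        return "stop"
    # In between: the idea has traction, but not as currently framed,
    # so the team should discuss what to pivot away from and towards.
    return "pivot"

# e.g. an activation rate agreed up front: >= 40% persevere, < 10% stop
print(recommend(0.45, persevere_at=0.40, stop_below=0.10))  # prints "persevere"
```

Writing the thresholds down in advance keeps the later discussion about evidence rather than about moving the goalposts.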

Learn more about what methods, tools or techniques are available for pivot and risk mitigation, and what signals we need to capture in order to know if we should Persevere, Pivot or Stop (Photo by Javon Swaby on Pexels.com)

Visibility and Traceability

It’s an old saying that what gets measured gets done. There is more than a little truth to this. If aspirations are to be achieved, capabilities developed, and management systems created, progress needs to be measured (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A. G., & Martin, R. L., 2013).

Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.

“Manage What Matters” in Playing to Win: How Strategy Really Works (Lafley, A.G., Martin, R. L., 2013)

Even if you have clearly articulated answers to the six strategic questions (what are our aspirations, what are our challenges, what will we focus on, what are our guiding principles, what types of activities, and so on), strategies can still fail — spectacularly — if you fail to establish management systems that support those choices. Without the supporting systems, structures, and measures for quantifying and qualifying outcomes, strategies remain a wish list, a set of goals that may or may not ever be achieved (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A. G., & Martin, R. L., 2013).
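A management system of this kind can start very small: a scoreboard that compares expected outcomes against actual outcomes per metric. The sketch below illustrates the idea; the metrics, targets, and actuals are hypothetical placeholders.

```python
# A minimal sketch of an expected-vs-actual scoreboard for strategic
# choices: the kind of lightweight measurement system that keeps a
# strategy from remaining a wish list. All numbers are hypothetical.

targets = {"NPS": 45, "task_success_rate": 0.85, "weekly_active_users": 12000}
actuals = {"NPS": 38, "task_success_rate": 0.88, "weekly_active_users": 9500}

def variance_report(targets: dict, actuals: dict) -> dict:
    """Return each metric's gap to target as a fraction of the target."""
    return {m: (actuals[m] - targets[m]) / targets[m] for m in targets}

for metric, gap in variance_report(targets, actuals).items():
    flag = "on track" if gap >= 0 else "adjust"
    print(f"{metric}: {gap:+.1%} ({flag})")
```

Reviewing such a report at a regular cadence is what turns measurement into the adjustment of strategic choices that Lafley and Martin describe.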

Learn more about how to create great choices in Strategy and the Art of Creating Choices (Photo by Pixabay on Pexels.com)

Although design and several usability activities are certainly qualitative, the impact of good and bad designs can easily be quantified in conversion rates, completion rates, completion times, perceived satisfaction, recommendations, and sales (Sauro, J., & Lewis, J. R., Quantifying the User Experience: Practical Statistics for User Research, 2016).
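As a concrete illustration of quantifying one of these measures, the sketch below computes a task completion rate with an adjusted-Wald confidence interval, one of the small-sample approaches Sauro and Lewis (2016) discuss. The sample numbers (9 successes out of 12 participants) are hypothetical.

```python
import math

# A minimal sketch: a task completion rate with an adjusted-Wald 95%
# confidence interval, suitable for the small samples typical of
# usability tests. The sample numbers below are hypothetical.

def completion_rate_ci(successes: int, n: int, z: float = 1.96):
    """Adjusted-Wald confidence interval for a binomial completion rate."""
    p_adj = (successes + z * z / 2) / (n + z * z)       # adjusted proportion
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z * z))
    return p_adj - margin, p_adj + margin

low, high = completion_rate_ci(successes=9, n=12)
print(f"Completion rate 9/12: 95% CI {low:.0%} to {high:.0%}")
```

Reporting the interval rather than the raw 75% point estimate makes the uncertainty of a 12-participant test visible to stakeholders.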

Once we’ve created a shared understanding of the strategic choices and the unique positions our stakeholders want to create, building visibility and traceability systems for capturing and tracking progress on the execution of an idea or approach will allow us to compare expected outcomes with actual outcomes and adjust strategic choices accordingly.

Learn more about the visibility and traceability aspects of the execution of an idea/approach (Photo by Lukas on Pexels.com)

The Right Time for Quantifying and Qualifying

You might be asking yourself, “This is all great, but when should I be doing what?” Without knowing what kind of team setup you have and what kinds of processes you run in your organization, the best I can do is map all of the techniques above onto the Double Diamond framework.

The Double Diamond Framework

Design Council’s Double Diamond clearly conveys a design process to designers and non-designers alike. The two diamonds represent a process of exploring an issue more widely or deeply (divergent thinking) and then taking focused action (convergent thinking).  

  • Discover. The first diamond helps people understand, rather than simply assume, what the problem is. It involves speaking to and spending time with people who are affected by the issues.
  • Define. The insights gathered from the discovery phase can help you to define the challenge in a different way.
  • Develop. The second diamond encourages people to give different answers to the clearly defined problem, seeking inspiration from elsewhere and co-designing with a range of different people.
  • Deliver. Delivery involves testing out different solutions at a small scale, rejecting those that will not work, and improving the ones that will.
Design Council’s framework for innovation also includes the key principles and design methods that designers and non-designers need to adopt, and the ideal working culture needed, to achieve significant and long-lasting positive change.
A clear, comprehensive and visual description of the design process in What is the framework for innovation? (Design Council, 2015)

Map of Quantifying and Qualifying Activities and Methods

Process Awareness characterizes the degree to which the participants are informed about the process procedures, rules, requirements, workflow, and other details. The higher the process awareness, the more profoundly the participants are engaged in a process, and so the better results they deliver.

In my experience, the biggest disconnect between the work designers need to do and the mindset of every other team member in a team is usually about how quickly we tend — when not facilitated — to jump to solutions instead of contemplating and exploring the problem space a little longer.

Map of Quantifying and Qualifying Activities in the Double Diamond (Discover, Define, Develop and Deliver)

Knowing when teams should be diverging, when they should be exploring, and when they should be converging will help ensure they get the best out of their collective brainstorming and the power of multiple perspectives, and keep the team engaged.

Quantifying and Qualifying Activities during “Discover”

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Quantifying and Qualifying Activities during “Define”

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Quantifying and Qualifying Activities during “Develop”

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Quantifying and Qualifying Activities during “Deliver”

Here are my recommendations for suggested quantifying and qualifying activities and methods:

Facilitate Quantifying and Qualifying Discussions

I’m of the opinion that designers — instead of complaining that everyone else is jumping too quickly into solutions — should facilitate the discussions and help others raise awareness around the creative and problem-solving process.

I’ll argue for the Need for Facilitation in the sense that — if designers want to influence the decisions that shape strategy — they must step up to the plate and become skilled facilitators who respond, prod, encourage, guide, coach, and teach as they guide individuals and groups toward critical business decisions through effective processes.

That said, facilitation here does not only mean “facilitating workshops”; it means facilitating the decisions themselves, regardless of what kinds of activities are required.

Learn more about becoming a skilled facilitator (Photo by fauxels on Pexels.com)

Almquist, E., Senior, J., & Bloch, N. (2016). The Elements of Value: Measuring—and delivering— what consumers really want. Harvard Business Review, (September 2016), 46–53.

Bland, D. J., & Osterwalder, A. (2020). Testing business ideas: A field guide for rapid experimentation. Standards Information Network.

Brown, T., & Katz, B. (2009). Change by design: how design thinking transforms organizations and inspires innovation. [New York]: Harper Business

Christensen, C. M., & Raynor, M. E. (2013). The innovator’s solution: Creating and sustaining successful growth. Boston, MA: Harvard Business Review Press.

Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O’Reilly Media.

Design Council. (2015, March 17). What is the framework for innovation? Design Council’s evolved Double Diamond. Retrieved August 5, 2021, from designcouncil.org.uk website: https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond

Fornell, C., Johnson, M. D., Anderson, E. W., Cha, J., & Bryant, B. E. (1996). The American customer satisfaction index: Nature, purpose, and findings. Journal of Marketing, 60(4), 7.

Gadvi, V., (2022), How to identify your North Star Metric, retrieved 22 September 2022 from Mind the Product website https://www.mindtheproduct.com/how-to-identify-your-north-star-metric/

Garbugli, É. (2020). Solving Product: Reveal Gaps, Ignite Growth, and Accelerate Any Tech Product with Customer Research. Wroclaw, Poland: Amazon.

Gothelf, J., & Seiden, J. (2021). Lean UX: Applying lean principles to improve user experience. Sebastopol, CA: O’Reilly Media.

Govindarajan, V., & Trimble, C. (2010). The other side of innovation: Solving the execution challenge. Boston, MA: Harvard Business Review Press.

Hanington, B., & Martin, B. (2012). Universal methods of design: 100 Ways to research complex problems, develop innovative ideas, and design effective solutions. Beverly, MA: Rockport.

Kalbach, J. (2020), “Mapping Experiences: A Guide to Creating Value through Journeys, Blueprints, and Diagrams“, 440 pages, O’Reilly Media; 2nd edition (15 December 2020)

Kortum, P., & Acemyan, C. Z. (2013). How Low Can You Go? Is the System Usability Scale Range Restricted? Journal of Usability Studies, 9(1), 14–24. https://uxpajournal.org/wp-content/uploads/sites/7/pdf/JUS_Kortum_November_2013.pdf

Lafley, A.G., Martin, R. L., (2013), “Playing to Win: How Strategy Really Works”, 272 pages, Publisher: Harvard Business Review Press (5 Feb 2013)

Lewis, J. R., Utesch, B. S., & Maher, D. E. (2015). Measuring perceived usability: The SUS, UMUX-LITE, and AltUsability. International Journal of Human-Computer Interaction, 31(8), 496–505.

Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy: Using Design to Achieve Key Business Objectives, Lockwood, T., Walton, T., (2008); Allworth Press; 1 edition (November 11, 2008)

Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M. (2017). Product Roadmaps Relaunched. Sebastopol, CA: O’Reilly Media.

Lund, A. M. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8(2), 3-6 (www.stcsig.org/usability/newsletter/index.html).

Moorman, J., (2012), “Leveraging the Kano Model for Optimal Results” in UX Magazine, captured 11 Feb 2021 from https://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results

Oberholzer-Gee, F. (2021). Better, simpler strategy: A value-based guide to exceptional performance. Boston, MA: Harvard Business Review Press.

Oberholzer-Gee, F. (2021). Eliminate Strategic Overload. Harvard Business Review, (May-June 2021), 11.

Olsen, D. (2015). The lean product playbook: How to innovate with minimum viable products and rapid customer feedback (1st ed.). Nashville, TN: John Wiley & Sons.

Perri, M. (2019). Escaping the build trap. Sebastopol, CA: O’Reilly Media.

Pichler, R. (2016). “Choose the Right Key Performance Indicators” in Strategize: Product strategy and product roadmap practices for the digital age. Pichler Consulting. 

Polaine, A., Løvlie, L., & Reason, B. (2013). Service design: From insight to implementation. Rosenfeld Media.

Rodden, K., Hutchinson, H., & Fu, X. (2010). Measuring the user experience on a large scale: User-centered metrics for web applications. Proceedings of the 28th International Conference on Human Factors in Computing Systems – CHI ’10. New York, New York, USA: ACM Press.

Rubin, J., & Chisnell, D. (2011). Handbook of usability testing: How to plan, design, and conduct effective tests (2nd ed.). Chichester, England: John Wiley & Sons.

Sauro, J., & Lewis, J. R. (2016). Quantifying the user experience: Practical statistics for user research (2nd Edition). Oxford, England: Morgan Kaufmann.

Sharon, T. (2016). Validating Product Ideas (1st Edition). Brooklyn, New York: Rosenfeld Media.

Tullis, T., & Albert, W. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics (2nd edition). Morgan Kaufmann.

Ulwick, A. (2005). What customers want: Using outcome-driven innovation to create breakthrough products and services. Montigny-le-Bretonneux, France: McGraw-Hill.

Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016). Design a better business: New tools, skills, and mindset for strategy and innovation. Nashville, TN: John Wiley & Sons.

By Itamar Medeiros

Originally from Brazil, Itamar Medeiros currently lives in Germany, where he works as Director of Design Strategy at SAP.

Working in the Information Technology industry since 1998, Itamar has helped truly global companies in multiple continents create great user experience through advocating Design and Innovation principles. During his 7 years in China, he promoted the User Experience Design discipline as User Experience Manager at Autodesk and Local Coordinator of the Interaction Design Association (IxDA) in Shanghai.

Itamar holds a MA in Design Practice from Northumbria University (Newcastle, UK), for which he received a Distinction Award for his thesis Creating Innovative Design Software Solutions within Collaborative/Distributed Design Environments.

17 replies on “The Need for Quantifying and Qualifying Strategy”

I learned a few new concepts with this article: Vanity and Pirate Metric, Heart Method, Framing value and more.

I see myself benefiting from the Framing Value approach, as my current project focuses on internal services rather than a product. As much as I want to focus on the users’ needs, my goal is to improve the business by focusing on internal processes. And for that, there are technological constraints to implementing accessible products. The triangle of Business value, User value, and Value to designers is helpful to frame an Accessibility Program including internal stakeholders.

The Vanity Metric captured my attention as well. It is described as not helpful for product teams making business decisions because it does not cause change in behavior or priorities. I understand a vanity metric only reinforces what is right, so that the strategy continues the way it is. But what if we could track and observe such numbers and cross-reference them with other trends? What if we establish assumptions to delve into a comparative analysis considering past and current data? What if we track and identify when it is time to change? Maybe this all belongs to a “Leading metric” described later in the article. Maybe it is also associated with the “comparative metric” that helps “understand which way things are moving”.

Nonetheless, this article brings insightful recommendations and methods for working with qualitative and quantitative data and what they represent to the product and business.

One of my favorite topics! Very used to quantifying but always trying to improve my skills in qualifying. I found the variety of different frameworks you present here to be very useful, and I plan to leverage them in future work.
The topic of the north star metric made me curious about how we could leverage a similar principle within our product strategies. Since our work is so exploratory, we don’t have a lot of direct metrics that we can leverage to know if we are tracking towards a clear goal or not.
